Advanced Topics in Machine Learning - Notes part 3

It is remarkable that the brain can recognize objects regardless of their position, scale, rotation, and illumination. The intuition that the brain identifies an object through some persistent, invariant characteristics is the concept underlying the following notes. The idea is that our visual cortex, and to some degree the ANNs used in computer vision, recognize objects by learning a set of invariant representations of the input....

November 6, 2023 · 22 min · Alessandro Serra

Advanced Topics in Machine Learning - Notes part 2

Regularization. We continue our analysis of regularization techniques with batch normalization. Quoting from Wikipedia: “Batch normalization (also known as batch norm) is a method used to make training of artificial neural networks faster and more stable through normalization of the layers’ inputs by re-centering and re-scaling.” Batch normalization is a very effective technique; to justify its importance, we report the following advantages listed in the article. Networks train faster: each training iteration will actually be slower because of the extra calculations during the forward pass and the additional parameters to train during backpropagation....
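The re-centering and re-scaling described in the quote can be sketched in a few lines of NumPy. This is a minimal training-mode forward pass only (no running statistics for inference); the function and parameter names are illustrative, with `gamma` and `beta` playing the role of the extra learnable parameters mentioned above:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch of activations, then re-scale and re-shift.

    x: (batch, features); gamma, beta: learnable vectors of shape (features,).
    """
    mu = x.mean(axis=0)                 # per-feature batch mean (re-centering)
    var = x.var(axis=0)                 # per-feature batch variance (re-scaling)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Activations with arbitrary mean/scale come out normalized per feature.
x = np.random.randn(64, 8) * 3.0 + 5.0
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(6))        # approximately all zeros
print(out.std(axis=0).round(3))         # approximately all ones
```

With `gamma = 1` and `beta = 0` the output simply has zero mean and unit variance per feature; during training these two vectors are learned, which is where the extra backpropagation cost comes from.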

October 18, 2023 · 26 min · Alessandro Serra

Advanced Topics in Machine Learning - Notes part 1

This is the first of a series of posts collecting notes from the course Advanced Topics in Machine Learning. If you find any mistakes, or I have forgotten to cite you, feel free to reach out! Neural Tangent Kernel. We want to make explicit the relationship between kernel methods and neural networks. Shallow learning (using a kernel): the feature map $\phi(x)$ is fixed and the model is $f(x) = \langle w, \phi(x) \rangle$. Deep learning (using a neural network):...
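The shallow setting above, a fixed feature map with only the linear weights learned, can be sketched concretely. This is a minimal illustration assuming a hypothetical degree-2 polynomial feature map for scalar inputs; the names `phi` and the target coefficients are made up for the example:

```python
import numpy as np

def phi(x):
    # Fixed (not learned) feature map: degree-2 polynomial features of a scalar.
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 1.0 + 2.0 * x - 3.0 * x**2          # target expressible in the feature space

# Shallow learning: phi is fixed, we only fit w in f(x) = <w, phi(x)>.
features = phi(x)                        # shape (200, 3)
w, *_ = np.linalg.lstsq(features, y, rcond=None)
print(w.round(3))                        # recovers approximately [1, 2, -3]
```

Here learning is a linear least-squares problem in the fixed feature space, in contrast with deep learning, where the feature map itself is learned along with the weights.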

September 18, 2023 · 27 min · Alessandro Serra