Foundational Deep Learning

Machine Learning is a rapidly growing field, both in industry and at academic institutions. Earlier this year, some formal results on the accuracy (or lack thereof) of neural networks were published, initiating a classification theory on the limitations of deep learning, reminiscent of similar work by Gödel and Turing in the fields of logic and computer science, respectively.

After coming across this paper by Colbrook, Antun, and Hansen, I decided to give a proof of Theorem 2.1 in a (relatively) self-contained and didactic format, covering only the necessary lemmas and preliminary definitions. The bulk of my paper builds a framework (namely that of computational problems and algorithms) under which such a formal result can be proven. I particularly enjoyed coming up with the formal algorithm underpinning one of the main arguments in the proof. Note that I treat only the case of deterministic algorithms; the original paper also proves the statement for probabilistic algorithms.
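To give a flavour of the framework, here is a rough sketch of the kind of objects involved. The notation below is my paraphrase of the standard definitions in this line of work, not a quotation from the paper, so take the details as an assumption:

```latex
% Sketch of the formal setting (paraphrased; notation is an
% assumption, not the paper's exact wording).
A \emph{computational problem} is a quadruple
$\{\Xi, \Omega, \mathcal{M}, \Lambda\}$, where $\Omega$ is a set of
inputs, $(\mathcal{M}, d)$ is a metric space, $\Xi \colon \Omega \to
\mathcal{M}$ is the map one wishes to compute, and $\Lambda$ is a set
of evaluation functions $f \colon \Omega \to \mathbb{R}$ providing the
only information an algorithm may access about an input.

A deterministic \emph{general algorithm} is a map $\Gamma \colon
\Omega \to \mathcal{M}$ such that, for each input $\iota \in \Omega$,
the output $\Gamma(\iota)$ depends only on the finitely many readings
$\{ f(\iota) : f \in \Lambda_\Gamma(\iota) \}$, where
$\Lambda_\Gamma(\iota) \subset \Lambda$ is finite.
```

Impossibility results of the kind proved in the paper then take the form: no general algorithm $\Gamma$ can approximate $\Xi$ to a given accuracy on all of $\Omega$, however the finitely many evaluations are chosen.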

Contact me if you are interested in the paper!