How did you get into machine learning?
I really wanted to be an artist, and stumbled into machine learning by accident. In 2000, when it was still more frequently called artificial intelligence (AI), I attended a third-year course taught by Andries Engelbrecht at the University of Pretoria. He roped us into research, made us derive rules for back-propagating the gradients of a feed-forward neural network, and encouraged us to code and experiment with ideas ourselves. A year later, I found myself writing his legendary 24-hour honours AI exam, for which one really had to pull an all-nighter. It was exhilarating, in a strange way. I guess I should attribute “getting into machine learning” entirely to Andries. In 2001, rumours of a new technique called “support vector machines” (SVMs) reached Pretoria, and we dabbled in that for a while. Back then, it was quite hard to find others in South Africa who were working in the same area!
What will you be teaching?
I’ll talk about unsupervised learning, and in particular, how the “variational framework” fits into the collection of deep learning tools. You can expect to see Bayes’s theorem, and to learn about the difference between “inference” and “learning”.
What advice would you give to those getting started in machine/deep learning?
Today, we have the luxury of having tools like TensorFlow, with which one can construct very intricate functions, and then get the gradients of those functions for free. It is very tempting to stop thinking about minute modelling details, and gloss over many fundamental principles, models, and techniques. Don’t. They’ll help you in the long run! Try for instance to compute a gradient by hand, to understand the interaction between variables in your model.
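As a minimal sketch of that exercise (the model, values, and function names here are my own illustration, not from the interview): take a tiny one-hidden-unit network, derive its gradients by hand with the chain rule, and check them against finite differences — the same check an autodiff tool like TensorFlow performs for you behind the scenes.

```python
import math

# A tiny model: y = w2 * tanh(w1 * x), with squared loss L = (y - t)^2.
def loss(w1, w2, x, t):
    return (w2 * math.tanh(w1 * x) - t) ** 2

# Gradients derived by hand via the chain rule:
#   let h = tanh(w1 * x) and e = dL/dy = 2 * (w2 * h - t), then
#   dL/dw1 = e * w2 * (1 - h^2) * x   and   dL/dw2 = e * h
def hand_grad(w1, w2, x, t):
    h = math.tanh(w1 * x)
    e = 2.0 * (w2 * h - t)
    return e * w2 * (1.0 - h * h) * x, e * h

# Central finite differences as an independent numerical check.
def numeric_grad(w1, w2, x, t, eps=1e-6):
    dw1 = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)
    dw2 = (loss(w1, w2 + eps, x, t) - loss(w1, w2 - eps, x, t)) / (2 * eps)
    return dw1, dw2

g_hand = hand_grad(0.5, -1.2, 0.8, 0.3)
g_num = numeric_grad(0.5, -1.2, 0.8, 0.3)
print(g_hand, g_num)  # the two pairs should agree to several decimal places
```

Working through the chain rule like this makes the interaction between `w1` and `w2` explicit, which is exactly what gets glossed over when the gradients come for free.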
My second piece of advice is borrowed from David MacKay. Before diving into the literature to solve a problem, first think and try to solve it yourself, and develop your own independent ideas. Then, once you’re satisfied that you’ve thought it through and worked on it, only then look at what others have done. Finally, if you can, make your modelling assumptions clear by “always writing down the probability of everything”.