Monday, September 20, afternoon
As machine learning matures as a field, it is becoming a mainstay of complex applications (in NLP, vision, user interfaces, etc.), with an identity as a branch of computer science and artificial intelligence. This tutorial considers some recent trends in machine learning from a Bayesian perspective, and in doing so covers some of the fundamental principles of Bayesian statistics and decision theory. The tutorial will cover basic justifications and connections to related theories from the perspective of intelligent systems. Prior probabilities will be covered in some detail as the key differentiator and tool of Bayesian methods. Basic tools such as MCMC, non-parametric methods, and graphical models will then be presented in a grand tour (what is available, and how, rather than details). A basic format (and emerging standard) for presenting the theory of a machine learning algorithm will be given. Some current areas will then be presented from this perspective: topic models, information retrieval, transfer learning, language learning, and structured object learning.
This tutorial was unfortunately cancelled and replaced with G. Lugosi's.