Bayesian Statistical Inference and Machine Learning
The course develops a general approach to building models of economic and financial processes, with a focus on statistical learning techniques that scale to large data sets. We begin by introducing the key elements of a parametric statistical model: the likelihood, the prior, and the posterior, and show how to use them to make predictions. We also discuss conjugate priors and exponential families, and their applications to big data.

We then treat linear and generalized linear models in some detail, including variable-selection techniques, penalized regression methods such as the lasso and the elastic net, and a fully Bayesian treatment of the linear model. As an application of these techniques, we discuss Ross's Arbitrage Pricing Theory (APT) and its uses in risk management and portfolio optimization. As extensions, we cover multilevel and hierarchical models, and conditional inference trees and forests. We also treat model-selection methodologies, including cross-validation, AIC, and BIC, and show how to apply them to the financial data sets presented as examples in class.

Finally, we move on to dynamic models for time series, including hidden Markov and state-space models as special cases. As we introduce these models, we also introduce the associated solution techniques: the Kalman filter and particle filter, the Viterbi algorithm, Metropolis-Hastings and Gibbs sampling, and the EM algorithm. As an application of the EM algorithm, we discuss mixture density estimation and its use in building classifiers.
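To make the prior-to-posterior step concrete, here is a minimal sketch of conjugate updating for a Beta prior on a Bernoulli success probability; the hyperparameters (Beta(2, 2)) and the data counts are illustrative choices, not examples from the course.

```python
def beta_bernoulli_update(a, b, successes, failures):
    """Beta(a, b) is conjugate to the Bernoulli likelihood, so the
    posterior is again Beta and the update is simple arithmetic."""
    return a + successes, b + failures

def posterior_predictive_success(a, b):
    """P(next observation is a success) under a Beta(a, b) posterior:
    the posterior mean a / (a + b)."""
    return a / (a + b)

# Illustrative data: prior Beta(2, 2), then observe 7 successes, 3 failures.
a_post, b_post = beta_bernoulli_update(2, 2, 7, 3)   # posterior is Beta(9, 5)
pred = posterior_predictive_success(a_post, b_post)  # 9/14
```

Because the posterior stays in the same family as the prior, this update can be applied one observation at a time, which is what makes conjugate families attractive for streaming and large data sets.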
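As a sketch of how penalized regression works mechanically, the following implements the lasso by coordinate descent with soft-thresholding; the implementation, the simulated data, and the regularization level `lam` are illustrative assumptions, not the course's own code.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - X b||^2 + lam ||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# Simulated example: only the first of five features is relevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.5)
```

The soft-threshold step sets small coefficients to exactly zero, which is why the lasso performs variable selection; the elastic net adds a quadratic term to the penalty, which only changes the denominator of the coordinate update.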
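The information criteria can be sketched as follows for a model with Gaussian errors, where `k` counts the estimated parameters; the formulas are the standard ones, but the helper name and the toy inputs are our own.

```python
import numpy as np

def aic_bic_gaussian(y, y_hat, k):
    """AIC and BIC for a Gaussian-error model with k free parameters,
    using the maximized log-likelihood with sigma^2 = RSS / n."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Toy fit: 10 observations, predictions off by a constant 0.1, k = 3.
aic, bic = aic_bic_gaussian(np.arange(10.0), np.arange(10.0) + 0.1, k=3)
```

Both criteria trade goodness of fit against model size, but BIC charges log(n) per parameter instead of 2, so for n ≥ 8 it is the more conservative selector.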
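As a taste of the filtering algorithms, here is a scalar Kalman filter for the local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t; the noise variances `q` and `r`, the initial state, and the sample series are illustrative assumptions.

```python
import numpy as np

def kalman_filter_1d(y, q, r, m0=0.0, p0=1.0):
    """Kalman filter for the local-level model:
    state noise variance q, observation noise variance r."""
    m, p = m0, p0
    means = []
    for obs in y:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        m = m + k * (obs - m)     # update: move toward the observation
        p = (1.0 - k) * p         # posterior variance shrinks
        means.append(m)
    return np.array(means)

# Noisy observations of a level near 1.0.
y = np.array([1.1, 0.9, 1.05, 0.98, 1.02, 1.0])
m = kalman_filter_1d(y, q=0.01, r=0.1)
```

The gain k balances trust in the model against trust in the data: large r shrinks the gain and smooths more heavily, while large q lets the filtered state track the observations closely.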
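Finally, the EM-for-mixtures application can be sketched for a two-component univariate Gaussian mixture; the quartile-based initialization, the fixed iteration count, and the simulated data are illustrative choices.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component univariate Gaussian mixture by EM."""
    # illustrative initialization: spread the means apart using quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities resp[i, k] = P(component k | x_i)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Two well-separated simulated clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])
pi, mu, sigma = em_gmm_1d(x)
```

The fitted responsibilities are exactly what a classifier needs: assigning a new point to the component with the larger responsibility gives the mixture-based classifier the course description mentions.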