AIML Guest Presentation: Optimisation-centric Generalisations of Bayesian Inference
Dr Knoblauch summarises a recent line of research and advocates for an optimisation-centric generalisation of Bayesian inference. The main thrust of this argument relies on identifying the tension between the assumptions motivating the Bayesian posterior and the realities of modern Bayesian Machine Learning. This generalisation is a useful conceptual device, but also has methodological merit: it can address various challenges that arise when the standard Bayesian paradigm is deployed in Machine Learning, including robustness to model misspecification, robustness to poorly chosen priors, and inference in intractable models.
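The optimisation-centric view rests on the well-known variational characterisation of the Bayesian posterior: it is the distribution minimising an expected negative log likelihood regularised by Kullback-Leibler divergence to the prior. A sketch of this characterisation, and of the generalisation obtained by swapping out its components (the notation here is illustrative, not necessarily the speaker's):

```latex
% Standard Bayes as the solution of an optimisation problem over
% all probability measures q on the parameter space:
%   q^{*}(\theta) = p(\theta \mid x_{1:n})
q^{*} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}
  \left\{
    \mathbb{E}_{q(\theta)}\!\left[ \sum_{i=1}^{n} -\log p(x_i \mid \theta) \right]
    \;+\; \mathrm{KL}\!\left( q \,\|\, \pi \right)
  \right\}

% A generalised posterior replaces the three ingredients:
% a loss \ell, a divergence D to the prior, and a feasible set \mathcal{Q}:
q^{*}_{\ell, D, \mathcal{Q}} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}
  \left\{
    \mathbb{E}_{q(\theta)}\!\left[ \sum_{i=1}^{n} \ell(\theta, x_i) \right]
    \;+\; D\!\left( q \,\|\, \pi \right)
  \right\}
```

Choosing a robust loss $\ell$ addresses model misspecification, choosing $D$ other than KL tempers the influence of a poorly chosen prior $\pi$, and restricting $\mathcal{Q}$ to a tractable family recovers variational-style inference in intractable models.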