Predictive machine learning and statistical models have been successfully applied across a wide variety of fields. Traditional approaches provide point estimates of unknown parameters and predictions. In the best case, results are supplied with confidence intervals, but they still give very little insight into the uncertainty of the estimates and predictions. Furthermore, small datasets or highly flexible models can lead to overfitting, and overconfident predictions in sensitive fields such as healthcare may be costly and harmful. The Bayesian approach to model formulation offers a way to resolve these shortcomings and allows for a great deal of flexibility: a broad range of models – from linear regression to neural networks – can be formalised with the help of probabilistic programming languages (PPLs), prior knowledge can be taken into account, and multiple sources of uncertainty can be incorporated and propagated into the uncertainty of the resulting estimates and predictions. This makes probabilistic modelling applicable even to small datasets, where classical models would fail to produce reliable results.

As an introduction to the workshop, we will discuss the basics of Bayesian inference. The focus, however, will be on hands-on experience. We will consider a number of problems and implement them in Turing, a Julia-based probabilistic programming language. An introduction to Julia will be given at the start. Those who prefer R or Python to Julia can also follow along: translations of the Bayesian workflow into R/Stan and Python/PyMC3 will be provided in a GitHub repository.
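To make the idea of propagating prior knowledge into posterior uncertainty concrete, here is a minimal, self-contained sketch (not taken from the workshop materials) of Bayesian updating for a coin-flip probability using the conjugate Beta-Binomial model. The function names and the specific prior are illustrative assumptions; it is written in plain Python since the workshop also provides Python translations.

```python
# Hypothetical illustration of Bayesian updating (Beta-Binomial conjugacy):
# with a Beta(a, b) prior on the probability of heads, observing `heads`
# successes in `n` flips yields a Beta(a + heads, b + n - heads) posterior.

def posterior_params(a, b, heads, n):
    """Return the Beta posterior parameters after observing n coin flips."""
    return a + heads, b + (n - heads)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weakly informative prior Beta(2, 2); observe 7 heads in 10 flips.
a_post, b_post = posterior_params(2, 2, heads=7, n=10)
print(a_post, b_post)                        # → 9 5
print(round(beta_mean(a_post, b_post), 3))   # → 0.643
```

Note how the posterior mean (about 0.64) sits between the prior mean (0.5) and the observed frequency (0.7): with little data, the prior tempers the estimate, which is exactly the behaviour that makes Bayesian models useful on small datasets.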
I designed and delivered a hands-on workshop on Bayesian inference with implementations in Julia/Turing.jl, R/Stan and Python/PyMC3.