Abstract
Bayesian neural networks (BNNs) have recently regained significant attention in the deep learning community due to the development of scalable approximate Bayesian inference techniques. A Bayesian approach offers several advantages: parameter and prediction uncertainties become readily available, facilitating rigorous statistical analysis, and prior knowledge can be incorporated. However, so far there have been no scalable techniques capable of combining both model (structural) and parameter uncertainty. In this paper we introduce the concept of model uncertainty in BNNs and thereby perform inference in the joint space of models and parameters.
Moreover, we suggest an adaptation of a scalable variational inference approach with a reparametrization of the marginal inclusion probabilities to incorporate the model space constraints. Experimental results on a range of benchmark data sets show that we obtain accuracy comparable to that of competing models, with methods that are much sparser than ordinary BNNs. This is particularly the case in model selection settings, but considerable sparsification is also achieved within a Bayesian model averaging setting. As expected, accounting for model uncertainty gives higher, but more reliable, uncertainty measures.
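To illustrate the general idea, the sketch below shows one way a variational layer with spike-and-slab weights and reparametrized marginal inclusion probabilities could look. This is a minimal illustration under assumptions, not the authors' implementation: the class name `SpikeSlabLinear`, the concrete (Gumbel-softmax) relaxation with temperature `tau`, and the prior hyperparameters are all choices made here for exposition.

```python
# Illustrative sketch only: a variational linear layer where each weight
# w_ij carries a binary inclusion indicator gamma_ij, and the variational
# posterior uses marginal inclusion probabilities alpha_ij. The concrete
# relaxation used for reparametrizing gamma is one standard choice; the
# paper's exact scheme may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeSlabLinear(nn.Module):
    """Linear layer with variational spike-and-slab weights.

    Variational posterior: q(w_ij, gamma_ij) = q(gamma_ij) q(w_ij | gamma_ij),
    with q(gamma_ij = 1) = alpha_ij (marginal inclusion probability)
    and q(w_ij | gamma_ij = 1) = N(mu_ij, sigma_ij^2).
    """

    def __init__(self, in_features, out_features, tau=0.5):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        # logit_alpha parametrizes the marginal inclusion probabilities,
        # keeping them unconstrained during optimization.
        self.logit_alpha = nn.Parameter(torch.zeros(out_features, in_features))
        self.tau = tau  # temperature of the concrete relaxation (assumed)

    def forward(self, x):
        sigma = torch.exp(self.log_sigma)
        # Gaussian reparametrization trick for the slab component.
        w = self.mu + sigma * torch.randn_like(sigma)
        # Concrete relaxation of the binary indicators, so gradients
        # flow through the inclusion probabilities.
        u = torch.rand_like(w).clamp(1e-6, 1 - 1e-6)
        logistic_noise = torch.log(u) - torch.log1p(-u)
        gamma = torch.sigmoid((self.logit_alpha + logistic_noise) / self.tau)
        return F.linear(x, gamma * w)

    def kl(self, prior_inclusion=0.1, prior_sigma=1.0):
        """KL term of the ELBO, combining the Bernoulli (structure)
        and Gaussian (weight) parts. Prior values are placeholders."""
        alpha = torch.sigmoid(self.logit_alpha)
        sigma = torch.exp(self.log_sigma)
        kl_bernoulli = (
            alpha * torch.log(alpha / prior_inclusion)
            + (1 - alpha) * torch.log((1 - alpha) / (1 - prior_inclusion))
        )
        kl_gaussian = (
            torch.log(prior_sigma / sigma)
            + (sigma**2 + self.mu**2) / (2 * prior_sigma**2) - 0.5
        )
        # The Gaussian part only contributes for included weights.
        return (kl_bernoulli + alpha * kl_gaussian).sum()
```

In such a setup, the learned alpha_ij support both regimes discussed above: thresholding them (e.g., keeping weights with alpha_ij > 0.5) gives a single sparse model (model selection), while sampling gamma from q(gamma) at prediction time averages over structures (Bayesian model averaging).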