Compared to the SVM, the Bayesian formulation of the RVM avoids the set of free parameters that the SVM has (such as the regularization and kernel parameters), which usually require cross-validation-based post-optimization. However, RVMs use an expectation-maximization (EM)-like learning method and are therefore at risk of converging to a local minimum, unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum of their convex objective.[citation needed]
- Tipping, Michael E. (2001). "Sparse Bayesian Learning and the Relevance Vector Machine". Journal of Machine Learning Research. 1: 211–244. doi:10.1162/15324430152748236.