Review Articles

A discussion of ‘prior-based Bayesian information criterion (PBIC)’

Jun Shao,

Department of Statistics, University of Wisconsin-Madison, Madison, WI, USA

shao@stat.wisc.edu

Sheng Zhang

Department of Statistics, University of Wisconsin-Madison, Madison, WI, USA

Pages 19–21 | Received 10 Jan. 2019, Accepted 12 Jan. 2019, Published online: 06 Mar. 2019

References

  1. Brown, P. J., Vannucci, M., & Fearn, T. (1998). Multivariate Bayesian variable selection and prediction. Journal of the Royal Statistical Society, Series B, 60, 627–641. doi: 10.1111/1467-9868.00144
  2. Dellaportas, P., Forster, J. J., & Ntzoufras, I. (1997). On Bayesian model and variable selection using MCMC (Technical Report). Athens: Department of Statistics, Athens University of Economics and Business.
  3. George, E. I., & McCulloch, R. E. (1993). Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88, 881–889.
  4. Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732. doi: 10.1093/biomet/82.4.711
  5. Griffin, J. E., & Brown, P. J. (2009). Inference with Normal-Gamma prior distributions in regression problems (Technical Report). Institute of Mathematics, Statistics and Actuarial Science, University of Kent.
  6. Hoti, F., & Sillanpää, M. J. (2006). Bayesian mapping of genotype × expression interactions in quantitative and qualitative traits. Heredity, 97, 4–18. doi: 10.1038/sj.hdy.6800817
  7. Kuo, L., & Mallick, B. (1998). Variable selection for regression models. Sankhya, Series B, 60, 65–81.
  8. Kyung, M., Gill, J., Ghosh, M., & Casella, G. (2010). Penalized regression, standard errors, and Bayesian Lassos. Bayesian Analysis, 5, 369–412. doi: 10.1214/10-BA607
  9. O'Hara, R. B., & Sillanpää, M. J. (2009). A review of Bayesian variable selection methods: What, how and which. Bayesian Analysis, 4, 85–118. doi: 10.1214/09-BA403
  10. Park, T., & Casella, G. (2008). The Bayesian Lasso. Journal of the American Statistical Association, 103, 681–686. doi: 10.1198/016214508000000337
  11. Stamey, T., Kabalin, J., McNeal, J., Johnstone, I., Freiha, F., Redwine, E., & Yang, N. (1989). Prostate specific antigen in the diagnosis and treatment of adenocarcinoma of the prostate. II. Radical prostatectomy treated patients. Journal of Urology, 141, 1076–1083. doi: 10.1016/S0022-5347(17)41175-X
  12. Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58, 267–288.
  13. Tipping, M. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211–244.
  14. Yuan, M., & Lin, Y. (2005). Efficient empirical Bayes variable selection and estimation in linear models. Journal of the American Statistical Association, 100, 1215–1225. doi: 10.1198/016214505000000367
  15. Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67, 301–320. doi: 10.1111/j.1467-9868.2005.00503.x