Extended Bayesian Classifiers
For some years, I have been intrigued with the naive Bayesian
classifier, an algorithm for supervised learning that stores
a single probabilistic summary for each class and that assumes
conditional independence of the attributes given the class. Despite
these simplifying assumptions, in many domains naive Bayes gives
results as good as or better than much more sophisticated approaches.
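To make the scheme concrete, here is a minimal sketch of such a classifier for discrete attributes, assuming Laplace-smoothed frequency estimates and the conditional-independence assumption described above; the class name, toy data, and attribute values are hypothetical, not drawn from the papers below.

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Minimal naive Bayesian classifier for discrete attributes."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.n = len(y)
        self.class_counts = {c: y.count(c) for c in self.classes}
        # counts[(c, i, v)]: class-c examples with value v for attribute i
        self.counts = defaultdict(int)
        self.values = defaultdict(set)  # distinct values seen per attribute
        for xi, c in zip(X, y):
            for i, v in enumerate(xi):
                self.counts[(c, i, v)] += 1
                self.values[i].add(v)
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for c in self.classes:
            # log prior P(c), then add log P(v | c) for each attribute,
            # which is valid only under conditional independence
            lp = math.log(self.class_counts[c] / self.n)
            for i, v in enumerate(x):
                num = self.counts[(c, i, v)] + 1  # Laplace smoothing
                den = self.class_counts[c] + len(self.values[i])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy example: attributes are (outlook, wind)
X = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
y = ["play", "play", "play", "stay"]
clf = NaiveBayes().fit(X, y)
print(clf.predict(("sunny", "weak")))  # -> play
```

Note that the per-class summary is just a table of counts, which is what makes both training and the average-case analyses of such classifiers tractable.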
In addition to analyses of naive Bayes' behavior, my colleagues
(Stephanie Sage and George John) and I have extended the basic
algorithm along a number of fronts, while striving to keep the
representational ability (and thus the inductive bias) within
reasonable bounds. This contrasts with much of the recent work
on learning with unrestricted Bayesian networks. Our studies
have led to the publications shown below.
Langley, P., & Sage, S. (1999).
Tractable average-case analysis of naive Bayesian classifiers.
Proceedings of the Sixteenth International Conference on Machine
Learning (pp. 220-228). Bled, Slovenia: Morgan Kaufmann.
John, G. H., & Langley, P. (1995).
Estimating continuous distributions in Bayesian classifiers.
Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence (pp. 338-345). Montreal, Quebec: Morgan
Kaufmann.
Langley, P., & Sage, S. (1994).
Induction of selective Bayesian classifiers.
Proceedings of the Tenth Conference on Uncertainty in Artificial
Intelligence (pp. 399-406). Seattle, WA: Morgan Kaufmann.
Langley, P. (1993).
Induction of recursive Bayesian classifiers.
Proceedings of the 1993 European Conference on Machine Learning
(pp. 153-164). Vienna: Springer-Verlag.
Langley, P., Iba, W., & Thompson, K. (1992).
An analysis of Bayesian classifiers.
Proceedings of the Tenth National Conference on Artificial Intelligence
(pp. 223-228). San Jose, CA: AAAI Press.
For more information, send electronic mail to