1956–, Computer scientist, statistician
Also known as: Michael I. Jordan
Michael Irwin Jordan is an American computer scientist and statistician whose research has touched nearly every aspect of statistical machine learning. Mixture-of-experts models (1991, with Robert Jacobs), variational inference for graphical models, latent Dirichlet allocation (2003, with David Blei and Andrew Ng), and the broader synthesis of Bayesian statistics with neural networks all bear his influence.
As a teacher and adviser at MIT and UC Berkeley, Jordan has trained, as doctoral adviser or postdoctoral mentor, a substantial fraction of the leading machine-learning researchers of the current generation, including David Blei, Andrew Ng, Tommi Jaakkola, Zoubin Ghahramani, Lawrence Saul, Ben Taskar and Yoshua Bengio (the last two as postdocs), among many others. He continues his research at Berkeley and remains an outspoken skeptic of AI hype and of the deep-learning community's relative neglect of statistical decision-theoretic foundations.
Related people: Judea Pearl, Yoshua Bengio
Works cited in this book:
- Latent Dirichlet Allocation (2003) (with David M. Blei, Andrew Y. Ng)
Discussed in:
- Chapter 1: What Is AI?, A Brief History of AI