Dr. Brian Milch
The publication "Artificial General Intelligence through Large-Scale, Multimodal Bayesian Learning" states:
A long-standing goal of artificial intelligence is to build a single system that can answer questions as diverse as, "How can I get from Boston to New Haven without a car?", "How many Nobel Prizes have been won by people from developing countries?", and "In this scene showing people on a city street, about how cold is it?". Answering such questions requires broad knowledge, on topics ranging from public transit to geography to weather-appropriate clothing. These questions also require deep reasoning, not just scanning for keywords or looking at simple features in an image.
So far, we do not have AI systems whose knowledge is both broad and deep enough to answer this range of questions. The most prominent efforts to acquire such knowledge are Cyc and Open Mind, both of which have significant limitations. The knowledge they collect is primarily deterministic: it does not include quantitative measures of how often things tend to occur. Furthermore, adding new knowledge requires effort by humans, which limits the breadth of questions that can be answered.
Meanwhile, other branches of AI have focused on reasoning with probabilistic models that explicitly quantify uncertainty, and on learning probabilistic models automatically from data. This probabilistic approach to AI has been successful in narrow domains, ranging from gene expression analysis to terrain modeling for autonomous driving. It has also seen domain-independent applications, such as sentence parsing and object recognition, but these applications have been relatively shallow: they have not captured enough semantics to answer questions of the kind we posed above.
Brian Milch, Ph.D., the author of this paper, is a post-doc working with Professor Leslie Kaelbling at the MIT Computer Science and Artificial Intelligence Laboratory. He was named one of the "Ten to Watch" in AI by IEEE Intelligent Systems in 2008.
Brian's research area is artificial intelligence (AI). His long-term goal is to understand how anything made of unintelligent parts could possibly be as smart as a human being (or even a lab rat). More specifically, he works on probabilistic inference and machine learning. His dissertation research was on a probabilistic modeling language called Bayesian Logic, or BLOG. A prototype version of the BLOG Inference Engine is available.
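To give a flavor of what BLOG is for, here is a minimal Python sketch (not BLOG syntax) of an open-universe generative process in the spirit of the urn example from "BLOG: Probabilistic Models with Unknown Objects": the number of objects is itself uncertain, so the model must reason jointly about how many objects exist and what their attributes are. All function names and parameter values here are illustrative assumptions, and the rejection-sampling query at the end is only a crude stand-in for the inference a BLOG engine performs.

```python
import math
import random

def sample_poisson(lam):
    """Sample from a Poisson distribution via Knuth's algorithm
    (kept dependency-free on purpose)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_urn_world(mean_balls=6.0, p_blue=0.5, n_draws=4, obs_noise=0.2):
    """Sample one possible world: an unknown number of balls, each with
    a color, plus noisy color observations from draws with replacement."""
    # The number of balls is itself a random variable: the open-universe idea.
    n_balls = sample_poisson(mean_balls)
    colors = ["blue" if random.random() < p_blue else "green"
              for _ in range(n_balls)]
    observations = []
    for _ in range(n_draws):
        if n_balls == 0:
            observations.append(None)  # nothing to draw from
            continue
        true_color = random.choice(colors)
        # With probability obs_noise, the observed color is flipped.
        if random.random() < obs_noise:
            seen = "green" if true_color == "blue" else "blue"
        else:
            seen = true_color
        observations.append(seen)
    return n_balls, observations

if __name__ == "__main__":
    # Crude rejection sampling: estimate the posterior over the number
    # of balls given that all four observed draws looked blue. A real
    # inference engine would answer such queries far more efficiently,
    # e.g. with general-purpose MCMC over relational structures.
    counts, accepted = {}, 0
    while accepted < 5000:
        n_balls, obs = sample_urn_world()
        if all(o == "blue" for o in obs):
            counts[n_balls] = counts.get(n_balls, 0) + 1
            accepted += 1
    for n in sorted(counts):
        print(f"P(#balls = {n} | evidence) ~= {counts[n] / accepted:.3f}")
```

The point of the sketch is the first sampled quantity: because `n_balls` is drawn inside the model rather than fixed in advance, every downstream query must average over worlds containing different numbers of objects, which is exactly the setting BLOG was designed to represent.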
He coauthored "Query-Free News Search", "Multi-Agent Influence Diagrams for Representing and Solving Games", "BLOG: Probabilistic Models with Unknown Objects", "Learning Probabilistic Relational Dynamics for Multiple Tasks", "General-Purpose MCMC Inference over Relational Structures", "Approximate Inference for Infinite Contingent Bayesian Networks", "Identity Uncertainty and Citation Matching", "Searching the Web by Voice", "SPOOK: A System for Probabilistic Object-Oriented Knowledge", "First-Order Probabilistic Languages: Into the Unknown", and "Reasoning about Large Populations with Lifted Probabilistic Inference".
Read the full list of his publications!
Brian earned a B.S. with distinction and with honors in Symbolic Systems, with the thesis "Reasoning about Agents' Beliefs and Decisions with Probabilistic Models", from Stanford University in 2000; he concentrated in Artificial Intelligence with a minor in Mathematics. He earned a Ph.D. in Computer Science, with the dissertation "Probabilistic Models with Unknown Objects", from the University of California, Berkeley in 2006.