AI’s struggle to reach “understanding” and “meaning”

Deep learning is very good at ferreting out correlations among tons of data points, but when it comes to digging deeper into the data and forming abstractions and concepts, deep learning systems barely scratch the surface (and even that might be an overstatement). We have AI systems that can locate objects in images and convert audio to text, but none that can empathize. In fact, our AI systems start to break as soon as they face situations that differ even slightly from the data they were trained on.

Like the term “artificial intelligence,” the notions of “meaning” and “understanding” are hard to define and measure. Therefore, instead of trying to give the terms a formal definition, the participants in the workshop compiled a list of “correlates”: abilities and skills closely tied to our capacity to understand situations. They then examined to what extent current AI systems possess these capacities.

“Understanding is built on a foundation of innate core knowledge,” Mitchell writes. Our basic understanding of physics, gravity, object persistence, and causality enables us to trace the relations between objects and their parts, think about counterfactuals and what-if scenarios, and act in the world with consistency. Recent research indicates that intuitive physics and causal models play a key role in our understanding of visual scenes, and scientists have described them as key components of the “dark matter” of computer vision.

The original article can be found here.

Regarding Causal Inference and AI, Professor Judea Pearl is a pioneer in this work and was recognized with the Turing Award in 2011. In 2020, Professor Pearl was also honored as a World Leader in AI World Society ( by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Professor Pearl also contributes to Causal Inference for AI transparency, one of the important AI World Society topics on AI Ethics from MDI and BGF.