
Artificial intelligence: The dark matter of computer vision

What makes us humans so good at making sense of visual data? That’s a question that has preoccupied artificial intelligence and computer vision scientists for decades. Efforts at reproducing the capabilities of human vision have so far yielded results that are commendable but still leave much to be desired.

Our current artificial intelligence algorithms can detect objects in images with remarkable accuracy, but only after they’ve seen many examples (thousands or maybe millions), and only if the new images are not too different from what they’ve seen before.
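To make that concrete, here is a minimal sketch of how such a detector is typically used in practice: torchvision’s Faster R-CNN pretrained on the COCO dataset, whose weights distill roughly 118,000 labeled images. The image path is a placeholder, and the snippet assumes torchvision 0.13 or later:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Load a detector pretrained on COCO: its accuracy comes from the sheer
# volume of labeled examples baked into the weights.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "street.jpg" is a placeholder path for illustration.
img = convert_image_dtype(read_image("street.jpg"), torch.float)

with torch.no_grad():
    out = model([img])[0]  # boxes, labels, scores for one image

# Keep only confident detections. Note that the model captures
# pixel-level correlations, not any notion of why objects are there.
keep = out["scores"] > 0.8
print(out["labels"][keep], out["boxes"][keep])
```

Feed it an image from a distribution unlike COCO (unusual viewpoints, lighting, or styles) and those confidence scores degrade quickly, which is exactly the brittleness described above.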

There is a range of efforts aimed at solving the shallowness and brittleness of deep learning, the main AI technique used in computer vision today. But sometimes, finding the right solution is predicated on asking the right questions and formulating the problem in the right way. And at present, there’s a lot of confusion surrounding what really needs to be done to fix computer vision algorithms.

In a paper published last month, scientists at the Massachusetts Institute of Technology and the University of California, Los Angeles, argue that the key to making AI systems that can reason about visual data like humans is to address the “dark matter” of computer vision: the things that are not visible in the pixels themselves.

One key piece of that dark matter, the researchers argue, is causality. Causality enables us to reason not only about what’s happening in a scene but also about counterfactuals, “what if” scenarios that have not taken place. “Observers recruit their counterfactual reasoning capacity to interpret visual events. In other words, interpretation is not based only on what is observed, but also on what would have happened but did not,” the AI researchers write.
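To see what a counterfactual query looks like mechanically, here is a toy sketch (an illustration of the standard abduction–action–prediction recipe, not code from the paper) on a trivially simple “scene” model:

```python
# A toy structural causal model (SCM) for a visual event.
# Variables are illustrative assumptions, not from the paper:
#   push: did a hand push the cup? (observed)
#   wind: an unobserved gust (exogenous noise)
#   fall = push or wind  (the structural equation)

def fall(push: bool, wind: bool) -> bool:
    return push or wind

# Observed fact: the cup was pushed and it fell.
push_obs, fall_obs = True, True

# 1. Abduction: infer the exogenous terms consistent with the evidence.
#    Any wind value fits, since the push alone explains the fall;
#    assume the more likely case, wind = False.
wind_inferred = False

# 2. Action: intervene on the model -- "what if the hand had NOT pushed?"
push_cf = False

# 3. Prediction: rerun the mechanism with inferred noise + intervention.
fall_cf = fall(push_cf, wind_inferred)

print(f"Factual: fell = {fall_obs}")
print(f"Counterfactual (no push): would have fallen = {fall_cf}")
```

The answer to the “what if” question never appears in any pixel; it comes from running an internal model of the scene under conditions that did not occur.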

Why is this important? So far, successes in AI systems have largely been tied to providing more and more data to make up for the lack of causal reasoning. This is especially true in reinforcement learning, in which AI agents are unleashed to explore environments through trial and error. Tech giants such as Google use their sheer computational power and vast financial resources to brute-force their AI systems through millions of scenarios in the hope of capturing all possible combinations. This approach has largely been successful in areas such as board and video games.
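The following sketch shows what that trial-and-error loop looks like in its simplest form: tabular Q-learning on a five-state corridor (the environment and hyperparameters are illustrative, not drawn from any system mentioned above):

```python
import random

# Minimal trial-and-error reinforcement learning: tabular Q-learning
# on a toy corridor. The agent starts at state 0; the goal is state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                        # step left / step right
Q = [[0.0, 0.0] for _ in range(N_STATES)] # value table, one row per state
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2_000):  # sheer repetition stands in for understanding
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit, occasionally explore at random.
        a = random.randrange(2) if random.random() < eps else \
            max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update the value estimate from raw experience alone.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # the agent "knows" what to do, but has no model of why it works
```

Everything the agent learns is a statistical summary of its own trials; nothing in the table encodes why moving right leads to reward, which is precisely the gap the causal-reasoning argument points at.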

The original article can be found here.

In the field of causality, Professor Judea Pearl and the science writer Dana Mackenzie published a well-known book, “The Book of Why: The New Science of Cause and Effect.” Despite his well-known skepticism about purely data-driven deep learning, Professor Pearl is remarkably optimistic about what artificial intelligence can achieve, and even about whether we can make machines capable of distinguishing good and evil. In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Professor Pearl also contributes to work on causal inference for AI transparency, one of the important AIWS.net topics on AI ethics.