
The Domestication of Causal Reasoning

1. Introduction

On Wednesday, December 23, I had the honor of participating in “AI Debate 2”, a symposium organized by Montreal AI, which brought together an impressive group of scholars to discuss the future of AI. I spoke on

“The Domestication of Causal Reasoning: Cultural and Methodological Implications,”

and the reading list I proposed as background material was: 

  1. “The Seven Tools of Causal Inference with Reflections on Machine Learning,” July 2018 https://ucla.in/2HI2yyx
  2. “Radical Empiricism and Machine Learning Research,” July 26, 2020 https://ucla.in/32YKcWy
  3. “Data versus Science: Contesting the Soul of Data-Science,” July 7, 2020 https://ucla.in/3iEDRVo

The debate was recorded here: https://montrealartificialintelligence.com/aidebate2/ and my talk can be accessed here: https://youtu.be/gJW3nOQ4SEA

Below is an edited script of my talk.

2. What I would have said had I been given six (6) instead of three (3) minutes

This is the first time I am using the word “domestication” to describe what happened in causality-land in the past three decades. I have used other terms before: “democratization,” “mathematization,” and “algorithmization,” but “domestication” sounds less provocative when I come to talk about the Causal Revolution.

What makes it a “revolution” is seeing dozens of practical and conceptual problems that only a few decades ago were thought to be metaphysical or unsolvable give way to simple mathematical solutions.

“DEEP UNDERSTANDING” is another term used here for the first time. It so happened that, while laboring to squeeze out results from causal inference engines, I came to realize that we are sitting on a gold mine, and that what we are dealing with is none other than:

A computational model of a mental state that deserves the title “Deep Understanding” 

“Deep Understanding” is not the nebulous concept that you probably think it is, but something defined formally: any system capable of covering all three levels of the causal hierarchy: What is, What if, and Only if. More specifically: what if I see (prediction), what if I do (intervention), and what if I had acted differently (retrospection, in light of the observed outcomes).
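
To make the three levels concrete, here is a minimal sketch in Python (my toy illustration; the structural equations and probabilities are invented for the example) of a two-variable structural causal model on which “seeing,” “doing,” and retrospection are three distinct computations:

```python
import random

random.seed(0)

def sample_u():
    # Exogenous background factors: the hidden "state of the world".
    return random.random()

def f_x(u):
    # Structural equation for X.
    return 1 if u > 0.5 else 0

def f_y(x, u):
    # Structural equation for Y: depends on X and on the same background u.
    return x + (1 if u > 0.7 else 0)

# Level 1 -- "what if I see" (prediction): estimate E[Y | X = 1]
# by passively filtering observations.
us = [sample_u() for _ in range(100_000)]
obs = [(f_x(u), f_y(f_x(u), u)) for u in us]
seeing = sum(y for x, y in obs if x == 1) / sum(1 for x, _ in obs if x == 1)

# Level 2 -- "what if I do" (intervention): estimate E[Y | do(X = 1)]
# by overriding the equation for X while leaving u untouched.
doing = sum(f_y(1, sample_u()) for _ in range(100_000)) / 100_000

# Level 3 -- "what if I had acted differently" (retrospection): for a
# concrete unit whose background u we can reconstruct, replay the same u
# under the action that was NOT taken.
u0 = 0.9                       # a unit for which X was in fact 1
factual = f_y(f_x(u0), u0)     # what actually happened: 2
counterfactual = f_y(0, u0)    # what would have happened under X = 0: 1

print(f"E[Y | X=1]     ~ {seeing:.2f}")   # ~1.60
print(f"E[Y | do(X=1)] ~ {doing:.2f}")    # ~1.30
print(f"factual={factual}, counterfactual={counterfactual}")
```

Note that the “seeing” and “doing” estimates differ: observing X = 1 is evidence about the background factors u, whereas setting X = 1 carries no such evidence. This gap is precisely what the lower levels of the hierarchy cannot bridge on their own.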

This may sound like cheating: I take the capabilities of one system (i.e., a causal model) and posit them as a general criterion for defining a general concept such as “Deep Understanding.”

It isn’t cheating. Given that causal reasoning is so deeply woven into our day-to-day language, our thinking, our sense of justice, our humor, and of course our scientific understanding, I think it won’t be too presumptuous of me to propose that we take Causal Modeling as a testing ground for ideas on other modes of reasoning associated with “understanding.”

Specifically, causal models should provide an arena for theories of explanation, fairness, adaptation, imagination, humor, consciousness, free will, attention, and curiosity.

I also dare speculate that learning from the way causal reasoning was domesticated would benefit researchers in other areas of AI, including vision and NLP, and enable them to examine whether similar paths could be pursued to overcome the obstacles that data-centric paradigms have imposed.

I would now like to say a few words on the anti-cultural implications of the Causal Revolution. Here I refer you to my blog post (https://ucla.in/32YKcWy), where I argue that radical empiricism is a stifling culture. It lures researchers into a data-centric paradigm, according to which data is the source of all knowledge, rather than a window through which we learn about the world around us.

What I advocate is a hybrid system that supplements data with domain knowledge, commonsense constraints, culturally transmitted concepts, and, most importantly, our innate causal templates, which enable toddlers to quickly acquire an understanding of their toy-world environment.
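
As a small illustration of what such a hybrid buys us (again a toy sketch with invented numbers, assuming a single confounder Z), the code below contrasts what data alone can deliver, the conditional P(Y | X), with what data plus a causal graph delivers, the interventional P(Y | do(X)) obtained by backdoor adjustment:

```python
import random

random.seed(1)

def draw():
    # Hypothetical data-generating process (unknown to the analyst):
    # a confounder Z drives both treatment X and outcome Y.
    z = int(random.random() < 0.5)
    x = int(random.random() < (0.8 if z else 0.2))
    y = int(random.random() < 0.2 + 0.3 * x + 0.4 * z)
    return x, y, z

data = [draw() for _ in range(200_000)]

# Data alone (level 1): the conditional difference E[Y|X=1] - E[Y|X=0].
def cond_mean(x):
    ys = [yy for xx, yy, _ in data if xx == x]
    return sum(ys) / len(ys)

naive = cond_mean(1) - cond_mean(0)           # ~0.54, inflated by Z

# Data + graph (level 2): the graph Z -> X, Z -> Y (domain knowledge)
# licenses the backdoor adjustment  P(y | do(x)) = sum_z P(y | x, z) P(z).
def adjusted_mean(x):
    total = 0.0
    for z in (0, 1):
        stratum = [(xx, yy) for xx, yy, zz in data if zz == z]
        p_z = len(stratum) / len(data)
        ys = [yy for xx, yy in stratum if xx == x]
        total += p_z * sum(ys) / len(ys)
    return total

causal = adjusted_mean(1) - adjusted_mean(0)  # ~0.30, the true effect of X

print(f"naive (data only):     {naive:.2f}")
print(f"adjusted (data+graph): {causal:.2f}")
```

The data never changes between the two estimates; what changes is the knowledge brought to bear on it. The graph, not the data, is what tells us that stratifying on Z is the right thing to do.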

It is hard to find a needle in a haystack; it is much harder if you have never seen a needle before. The model we use for causal inference gives us a picture of what the needle looks like and of what we can do once we find one.
