December 29, 2020
Hello and welcome to the last “Eye on A.I.” of 2020! I spent last week immersed in the Neural Information Processing Systems (NeurIPS) conference, the annual gathering of top academic A.I. researchers. It’s always a good spot for taking the pulse of the field. Held completely virtually this year thanks to COVID-19, it attracted more than 20,000 participants. Here are a few of the highlights.
Marloes Maathuis, a professor of theoretical and applied statistics at ETH Zurich, looked at how directed acyclic graphs (DAGs) can be used to derive causal relationships from data. Understanding causality is essential for many real-world uses of A.I., particularly in contexts like medicine and finance. Yet one of the biggest problems with neural network-based deep learning is that such systems are very good at discovering correlations, but often useless for figuring out causation. One of Maathuis’s main points was that in order to suss out causation it is important to make causal assumptions and then test them. And that means talking to domain experts who can at least hazard some educated guesses about the underlying dynamics. Too often machine learning engineers don’t bother, falling back on deep learning to work out correlations. That’s dangerous, Maathuis implied.
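The correlation-versus-causation trap Maathuis warned about can be made concrete with a small simulation. The sketch below (an illustrative example of my own, not one from her talk; the variable names and effect sizes are assumed) posits a DAG in which a confounder Z drives both X and Y, while X has no causal effect on Y at all. A naive look at the data shows X and Y strongly correlated; only by encoding the causal assumption "Z confounds X and Y" and adjusting for Z does the true, near-zero effect of X emerge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Assumed DAG: Z -> X and Z -> Y. Crucially, there is NO edge X -> Y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

# A correlation-hunting model sees a strong X-Y relationship...
naive_corr = np.corrcoef(x, y)[0, 1]

# ...but adjusting for the confounder Z (backdoor adjustment via a
# linear regression of Y on both X and Z) recovers X's true effect.
design = np.column_stack([x, z, np.ones(n)])
coef_x, coef_z, intercept = np.linalg.lstsq(design, y, rcond=None)[0]

print(f"naive correlation of X and Y: {naive_corr:.2f}")  # strong, spurious
print(f"effect of X after adjusting for Z: {coef_x:.2f}")  # near zero
```

The point of the exercise is Maathuis’s: without the domain knowledge that Z is a confounder (i.e., without the DAG), nothing in the data alone distinguishes the spurious X–Y correlation from a real causal link.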
It was hard to ignore that this year’s conference took place against the backdrop of the continuing controversy over Google’s treatment of Timnit Gebru, the well-respected A.I. ethics researcher and one of the very few Black women in the company’s research division, who left the company two weeks earlier (she says she was fired; the company continues to insist she resigned). Some attending NeurIPS voiced support for Gebru in their talks. (Many more did so on Twitter. Gebru herself also appeared on a few panels that were part of a conference workshop on creating “Resistance A.I.”) The academics were particularly disturbed that Google had forced Gebru to withdraw a research paper it didn’t like, noting that this raised troubling questions about corporate influence over A.I. research in general, and A.I. ethics research in particular. A paper presented at the “Resistance A.I.” workshop explicitly compared Big Tech’s involvement in A.I. ethics to Big Tobacco’s funding of bogus science around the health effects of smoking. Some researchers said they would stop reviewing conference papers from Google-affiliated researchers, since they could no longer be sure the authors weren’t hopelessly conflicted.
The original article was published here at Fortune.
In the field of AI applications involving causation, Professor Judea Pearl is a distinguished pioneer, having developed a theory of causal and counterfactual inference based on structural models, for which he won the Turing Award in 2011. In 2020, the Michael Dukakis Institute (MDI) and the Boston Global Forum (BGF) honored Professor Pearl as a World Leader in AI World Society (AIWS.net) for Leadership and Innovation. Professor Pearl currently serves as a Mentor of AIWS.net and Head of its Modern Causal Inference section, one of AIWS.net’s key topics on AI ethics, aimed at developing common-good AI applications for a better world society.