# Causal Analysis in Theory and Practice

## December 19, 2017

### NIPS 2017: Q&A Follow-up

Filed under: Conferences,General — Judea Pearl @ 6:42 am
Dear friends in causal research,
Last week I spoke at a workshop on machine learning and causality, which followed the NIPS conference in Long Beach. Below please find my response to several questions I was asked
after my talk. I hope you will find the questions and answers to be of relevance to issues discussed on this blog.
-Judea
———————————————–
To: Participants at the NIPS “What If” workshop
Dear friends,
Some of you asked me for copies of my slides. I am attaching them with this message, and you can get the accompanying paper by clicking here:
http://ftp.cs.ucla.edu/pub/stat_ser/r475.pdf

NIPS 17 – What If? Workshop Slides (PDF)

NIPS 17 – What If? Workshop Slides (PPT [zipped])

I also received several interesting questions at the end of my talk, which I could not fully answer in the short break we had. I will try to answer them below.

Q.1. What do you mean by the “Causal Revolution”?
Ans.1: "Revolution" is a poetic word to summarize Gary King's observation: "More has been learned about causal inference in the last few decades than the sum total of everything that had been learned about it in all prior recorded history" (see the cover of Morgan and Winship's book, 2015). It captures the miracle that only three decades ago we could not write a formula for "Mud does not cause Rain," while today we can formulate and estimate every causal or counterfactual statement.

Q.2: Are the estimates produced by graphical models the same as those produced by the potential outcome approach?
Ans.2: Yes, provided the two approaches start with the same set of assumptions. The assumptions in the graphical approach are advertised in the graph, while those in the potential outcome approach are articulated separately by the investigator, using counterfactual vocabulary.

Q.3: The method of imputing potential outcomes to individual units in a table appears totally different from the methods used in the graphical approach. Why the difference?
Ans.3: Imputation works only when certain assumptions of conditional ignorability hold. The table itself does not show us what the assumptions are, nor what they mean. To see what they mean we need a graph, since no mortal can process such assumptions in his/her head. The apparent difference in procedures reflects the insistence (in the graphical framework) on seeing the assumptions, rather than wishing them away.
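As a small numerical illustration (my own toy model, not from the talk): in the sketch below, a confounder Z drives both treatment X and outcome Y, so ignorability fails and the naive comparison of treated vs. untreated units is biased; adjusting for Z, which the graph Z → X, Z → Y licenses via the backdoor criterion, recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: Z confounds both treatment X and outcome Y.
z = rng.binomial(1, 0.5, n)                          # confounder
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2), n)   # treatment depends on Z
y = 2.0 * x + 3.0 * z + rng.normal(0, 1, n)          # true effect of X is 2.0

# Naive imputation ignoring Z: biased, because ignorability fails.
naive = y[x == 1].mean() - y[x == 0].mean()

# Backdoor adjustment on Z, licensed by the graph Z -> X, Z -> Y:
adjusted = sum(
    (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean())
    * (z == v).mean()
    for v in (0, 1)
)

print(round(naive, 2))     # noticeably above 2.0
print(round(adjusted, 2))  # close to the true effect, 2.0
```

The point of the sketch: the adjustment formula is the same arithmetic a potential-outcome analyst would perform under conditional ignorability given Z; the graph merely makes visible which variable must be conditioned on.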

Q.4: Some say that economists do not use graphs because their problems are different, and they cannot afford to model the entire economy. Do you agree with this explanation?
Ans.4: No way! Mathematically speaking, economic problems are no different from those faced by epidemiologists (or other social scientists), for whom graphical models have become a second language. Moreover, epidemiologists have never complained that graphs force them to model the entirety of the human anatomy. Graph-avoidance among (some) economists is a cultural phenomenon, reminiscent of telescope-avoidance among Church astronomers in 17th-century Italy. Bottom line: epidemiologists can judge the plausibility of their assumptions; graph-avoiding economists cannot. (I have offered them many opportunities to demonstrate it in public, and I don't blame them for remaining silent; it is not a problem that can be managed by an unaided intellect.)

Q.5: Isn’t deep-learning more than just glorified curve-fitting? After all, the objective of curve-fitting is to maximize “fit”, while in deep-learning much effort goes into minimizing “over-fit”.
Ans.5: No matter what acrobatics you go through to minimize overfitting or other flaws in your learning strategy, you are still optimizing some property of the observed data while making no reference to the world outside the data. This puts you right back on rung-1 of the Ladder of Causation, with all the limitations that rung-1 entails.
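A minimal sketch of the rung-1 limitation, under an assumed data-generating process of my own (not from the slides): a regression can fit the observational data essentially perfectly, yet its slope answers "what do I see?" rather than "what happens if I intervene?"

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical process: Z -> X and Z -> Y, plus a direct X -> Y effect.
z = rng.normal(0, 1, n)
x = z + rng.normal(0, 0.1, n)
y = 1.0 * x + 2.0 * z + rng.normal(0, 1, n)   # causal slope of X is 1.0

# A rung-1 learner: fit E[Y | X] as well as possible from the observations.
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 1))   # ~3.0: an excellent fit, but the wrong causal answer

# Under an intervention do(X := x0), the Z -> X edge is severed, so Y would
# respond with slope 1.0, not the fitted slope. No amount of regularization
# or overfitting control changes this: the fit never leaves rung-1.
```

Here regularizing the fit would only make the estimate of E[Y | X] more stable; it would not move it any closer to the interventional slope.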

If you have additional questions on these or other topics, feel free to post them here on our blog, causality.cs.ucla.edu/blog (anonymity will be respected), and I will try my best to answer them.

Enjoy,
Judea
———————————————–

## 2 Comments »

1. Regarding Q4: economists say that because the big "macro econometrics" models failed them, they now focus on "smaller" but supposedly more credible questions. I presume the reason they get scared when they see a big graph is that it reminds them of a big macro econometrics model, and they think the graph begs them to estimate every single edge there. This is misleading.

The graph does not force you to estimate every single quantity. On the contrary, the graph helps you reason about what parts of the model you need to focus on and what parts you can safely ignore to properly answer your query. Not only that, economists are in effect using an approach that is mathematically equivalent to graphs, with potential outcomes, but are wishing away the modeling problem by assuming ignorability, in many cases without understanding what it means. So I am not sure I would go as far as equating economists to Church astronomers, but it's surely an educational gap that will take some time to address.

Comment by Carlos — December 20, 2017 @ 6:31 pm

2. Carlos,
I agree with your observation that economists' excuses for avoiding graphs are getting clumsier by the day. But I do not understand why you hesitate to accept the analogy with Church astronomers. Church astronomers bedeviled the telescope for fear of losing credibility after centuries of preaching that new knowledge can only come from deeper studies of the scripture, not from new tools or new facts unveiled by those tools.

Now, aren't these fears similar to those that haunt graph-avoiding economists? What else can explain a 3-decade avoidance if not fears of losing credibility after promising that:

1. Graphs are not helpful for the "kind of problems" researchers are facing in the social and health sciences.
2. Potential outcomes provide a natural way of expressing causal assumptions.
3. Ignorability conditions can be assumed a priori.
4. Causal inference is a missing-data problem.
5. Potential outcome researchers can tell whether a set of assumptions is testable or not.
6. Graphs are too seductive.

I wish historians of science could join this discussion.

Judea

Comment by Judea Pearl — December 23, 2017 @ 3:20 pm