Causal Analysis in Theory and Practice

October 29, 2014

Fall Greetings from UCLA Causality Blog

Filed under: Announcement,General — eb @ 6:10 am

Friends in causality research,
This Fall greeting from UCLA Causality blog contains:

A. News items concerning causality research,
B. New postings, new problems and new solutions.

A. News items concerning causality research
A1. The American Statistical Association has announced an early submission deadline for the 2015 “Causality in Statistics Education Award” — February 15, 2015.
For details and selection criteria, see

A2. Vol. 2 Issue 2 of the Journal of Causal Inference (JCI) is now out, and can be viewed here:
As always, submissions are welcome on all aspects of causal analysis, especially those deemed methodological.

A3. New Tutorial: Causality for Policy Assessment and Impact Analysis is offered by BayesiaLab; see here.

A4. A Conference on Counterfactual Analysis for Policy Evaluation will take place at USC on November 20, 2014.

A5. A Conference focused on Causal Inference will take place in Kyoto, Japan, November 17-18, 2014:
Kyoto International Conference on Modern Statistics in the 21st Century
General info:

B. New postings, new problems and new solutions.
B1. A confession of a graph-avoiding econometrician.

Guido Imbens explains why some economists do not find causal graphs to be helpful. Miquel Porta describes the impact of causal graphs in epidemiology as a “revolution”. The question naturally arises: “Are economists smarter than epidemiologists?” Or, “What drives epidemiologists to seek the light of new tools while graph-avoiding economists resign themselves to partial blindness?”

See [link] for an attempted answer.

B2. Lord’s Paradox Revisited — (Oh Lord! Kumbaya!)

This is a historical journey which traces Lord’s paradox back to its original formulation (1967), resolves it using modern tools of causal analysis, explains why it presented difficulties in previous attempts at resolution and, finally, addresses the general issue of whether adjustments for pre-existing conditions are justified in group-comparison applications.

B3. “Causes of Effects and Effects of Causes”

An expansion of a previous note with the same title, including an additional demonstration that “causes of effects” are not metaphysical (Dawid, 2000) and a simple visualization of how the probability of necessity (PN) is shaped by experimental and observational findings. It comes together with “A Note on Causes of Effects” (link), a rebuttal to recent attempts at mystification.
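To make concrete how PN is shaped by both kinds of findings, here is a minimal sketch of the standard Tian-Pearl bounds, which combine an observational joint distribution with one experimental quantity. The function name and input encoding are illustrative choices of mine, not from the note itself:

```python
def pn_bounds(p_x_y, p_x_yp, p_xp_y, p_xp_yp, p_y_do_xp):
    """Tian-Pearl bounds on the probability of necessity,
    PN = P(Y_{x'} = y' | X = x, Y = y), for binary X and Y.

    Observational inputs: the joint probabilities P(x,y), P(x,y'),
    P(x',y), P(x',y') (p_x_yp is accepted for completeness of the
    joint; the bounds use the other three entries).
    Experimental input: P(y | do(x')).
    """
    p_y = p_x_y + p_xp_y                  # observational P(y)
    p_yp_do_xp = 1.0 - p_y_do_xp          # P(y' | do(x'))
    lower = max(0.0, (p_y - p_y_do_xp) / p_x_y)
    upper = min(1.0, (p_yp_do_xp - p_xp_yp) / p_x_y)
    return lower, upper

# Hypothetical numbers: observation alone is silent about PN, but the
# experimental finding P(y | do(x')) = 0.2 forces PN >= 2/3.
lo, hi = pn_bounds(p_x_y=0.3, p_x_yp=0.2, p_xp_y=0.1, p_xp_yp=0.4,
                   p_y_do_xp=0.2)
```

Varying `p_y_do_xp` in this sketch is precisely the visualization the note describes: the tighter the experimental finding, the narrower the interval around PN.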

October 27, 2014

Are economists smarter than epidemiologists? (Comments on Imbens’s recent paper)

Filed under: Discussion,Economics,Epidemiology,General — eb @ 4:45 pm

In a recent survey on Instrumental Variables (link), Guido Imbens fleshes out the reasons why some economists “have not felt that graphical models have much to offer them.”

His main point is: “In observational studies in social science, both these assumptions [exogeneity and exclusion] tend to be controversial. In this relatively simple setting [3-variable IV setting] I do not see the causal graphs as adding much to either the understanding of the problem, or to the analyses.” [page 377]

What Imbens leaves unclear is whether graph-avoiding economists limit themselves to “relatively simple settings” because, lacking graphs, they cannot handle more than three variables, or whether they refrain from using graphs to prevent those “controversial assumptions” from becoming transparent, hence amenable to scientific discussion and resolution.

When students and readers ask me how I respond to people of Imbens’s persuasion who see no use in tools they vow to avoid, I direct them to the post “The deconstruction of paradoxes in epidemiology”, in which Miquel Porta describes the “revolution” that causal graphs have spawned in epidemiology. Porta observes: “I think the ‘revolution’ — or should we just call it a ‘renewal’? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings.”

So, what is it about epidemiologists that drives them to seek the light of new tools, while economists (at least those in Imbens’s camp) seek comfort in partial blindness and miss out on the causal revolution? Can economists do in their heads what epidemiologists observe in their graphs? Can they, for instance, identify the testable implications of their own assumptions? Can they decide whether the IV assumptions (i.e., exogeneity and exclusion) are satisfied in their own models of reality? Of course they can’t; such decisions are intractable to the graph-less mind. (I have challenged them repeatedly to these tasks, and been met with pin-drop silence.)
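For concreteness, the three-variable IV assumptions do carry a testable implication: Pearl’s instrumental inequality, which for discrete variables requires that, for every treatment value x, the sum over y of max over z of P(y, x | z) not exceed 1. A minimal sketch of the check follows; the function name and dictionary encoding of P(y, x | z) are my own illustrative choices:

```python
def instrumental_inequality_holds(p, xs, ys, zs, tol=1e-9):
    """Check Pearl's instrumental inequality for a discrete IV model.

    p[(y, x, z)] = P(Y=y, X=x | Z=z).  The IV assumptions (Z exogenous,
    exclusion of Z from the Y equation) imply that for every x:
        sum over y of max over z of P(y, x | z) <= 1.
    A violation refutes the IV model; satisfaction does not prove it.
    """
    for x in xs:
        total = sum(max(p[(y, x, z)] for z in zs) for y in ys)
        if total > 1.0 + tol:
            return False
    return True

# Hypothetical binary example that violates the inequality: Z flips
# which (y, x) cell gets almost all the mass, so for x=0 the sum of
# per-z maxima is 0.9 + 0.9 = 1.8 > 1.
vals = (0, 1)
p_bad = {(y, x, z): 0.0 for y in vals for x in vals for z in vals}
p_bad[(0, 0, 0)] = 0.9; p_bad[(1, 1, 0)] = 0.1
p_bad[(1, 0, 1)] = 0.9; p_bad[(0, 1, 1)] = 0.1
```

Deriving this inequality, or deciding whether it holds in a given model, is exactly the kind of task that is mechanical with a graph in hand and intractable without one.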

Or, are problems in economics different from those in epidemiology? I have examined the structure of typical problems in the two fields, the number of variables involved, the types of data available, and the nature of the research questions. The problems are strikingly similar.

I have only one explanation for the difference: Culture.

The arrow-phobic culture started twenty years ago, when Imbens and Rubin (1995) decided that graphs “can easily lull the researcher into a false sense of confidence in the resulting causal conclusions,” and Paul Rosenbaum (1995) echoed with “No basis is given for believing” […] “that a certain mathematical operation, namely this wiping out of equations and fixing of variables, predicts a certain physical reality.” [See discussions here.]

Lingering symptoms of this phobia are still stifling research in the 2nd decade of our century, yet are tolerated as scientific options. As Andrew Gelman put it last month: “I do think it is possible for a forward-looking statistician to do causal inference in the 21st century without understanding graphical models.” (link)

I believe the most insightful diagnosis of the phenomenon is given by Larry Wasserman:
“It is my impression that the “graph people” have studied the Rubin approach carefully while the reverse is not true.” (link)
