Causal Analysis in Theory and Practice

November 25, 2012

Eric Neufeld on Rubin vs. Pearl models

Filed under: General — eb @ 1:45 pm

Eric Neufeld (University of Saskatchewan/Canada) asks:

I have been interested in giving a lecture to lay people about your work, but would like to compare/contrast it with Rubin’s work. I understand how strongly you disagree! But I would appreciate it if you could point me to a couple of your articles that lay the argument out in an accessible way.

Judea Pearl replies:

I have written a fairly decent section on Rubin’s model in Statistics Surveys:
http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf
and a less technical one in:
http://ftp.cs.ucla.edu/pub/stat_ser/r348-warning.pdf
(unpublished but highly recommended).

Important to stress: I do not disagree with Rubin’s work (it is subsumed by structural modeling); I am merely amused by the tunnel-visioned culture that this work has engendered.

Conrad (Ontario/Canada) on SEM in Epidemiology

Filed under: Counterfactual,Epidemiology,structural equations — moderator @ 4:00 am

Conrad writes:

In the recent issue of IJE (http://aje.oxfordjournals.org/content/176/7/608), Tyler VanderWeele argues that SEM should be used in epidemiology only when 1) the interest is in a wide range of effects, or 2) the purpose of the analysis is to generate hypotheses. However, if the interest is in a single fixed exposure, he thinks traditional regression methods are superior.

According to him, the latter rely on fewer assumptions (e.g., we don’t need to know the functional form of the association between a confounder and the exposure, or the outcome, during estimation) and hence are less prone to bias. How valid is this argument, given that some (if not all) of the causal modeling methods are simply special cases of SEM (e.g., Robins’s G-methods and even the regression methods he is talking about)?

Judea replies:

Dear Conrad,

Thank you for raising these questions about Tyler’s article. I believe several of Tyler’s statements run the risk of being misinterpreted by epidemiologists, for they may create the impression that the use of SEM, including its nonparametric variety, is somehow riskier than the use of other techniques. This is not the case. I believe Tyler’s criticism was aimed specifically at parametric SEMs, such as those used in Arlinghaus et al. (2012), and not at the nonparametric SEMs which he favors and names “causal diagrams”. Indeed, nonparametric SEMs are blessed with unequaled transparency, which assures that each and every assumption is visible and passes the scrutiny of scientific judgment.

While it is true that SEMs have the capacity to make bolder assumptions, some not discernible from experiments (e.g., no confounding between mediator and outcome), this does not mean that investigators, acting properly, would make such assumptions when they stand contrary to scientific judgment, nor does it mean that investigators are under weaker protection from the ramifications of unwarranted assumptions. Today we know precisely which of SEM’s claims are discernible from experiments (i.e., reducible to do(x) expressions) and which are not (see Shpitser and Pearl, 2008): http://ftp.cs.ucla.edu/pub/stat_ser/r334-uai.pdf

I therefore take issue with Tyler’s statement “SEMs themselves tend to make much stronger assumptions than these other techniques” (from his abstract) when it is applied to nonparametric analysis. SEMs do not make assumptions, nor do they “tend to make assumptions”; investigators do. I am inclined to believe that Tyler’s criticism was aimed at a specific application of SEM rather than at SEM as a methodology.

Purging SEM from epidemiology would amount to purging counterfactuals from epidemiology — the latter draws its legitimacy from the former.

I also reject occasional calls to replace SEM and causal diagrams with weaker types of graphical models which presumably make weaker assumptions. No matter how we label the alternative models (e.g., interventional graphs, agnostic graphs, causal Bayesian networks, FFRCISTG models, influence diagrams, etc.), they all must rest on judgmental assumptions, and people think science (read: SEM), not experiments. In other words, when an investigator asks him/herself whether an arrow from X to Y is warranted, the investigator does not ask whether an intervention on X would change the probability of Y (read: whether P(y|do(x)) differs from P(y)) but whether the function f in the mechanism y = f(x, u) depends on x for some u. Claims that the stronger assumptions made by SEMs (compared with interventional graphs) may have unintended consequences are supported only by a few contrived cases in which one can craft a nontrivial f(x, u) despite the equality P(y|do(x)) = P(y). (See an example in Causality, page 24.)
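The distinction drawn above can be made concrete with a short numerical sketch. The XOR mechanism below is an illustrative assumption in the spirit of the contrived cases just mentioned (it is not code from the post, and the specific functional form is my own choice): the function f genuinely depends on x, so a mechanism-minded modeler would draw an arrow from X to Y, yet every intervention on X leaves the distribution of Y unchanged.

```python
# Hypothetical structural mechanism: y = f(x, u) = x XOR u,
# where u is an unobserved fair coin (the disturbance term).
def f(x, u):
    return x ^ u

p_u = {0: 0.5, 1: 0.5}  # P(u): uniform over {0, 1}

def p_y1_do(x):
    """Interventional probability P(y = 1 | do(X = x)):
    fix x and average the mechanism over the disturbance u."""
    return sum(p_u[u] for u in (0, 1) if f(x, u) == 1)

# f depends nontrivially on x: flipping x (holding u fixed) flips y,
# so the mechanism warrants an arrow from X to Y.
assert f(0, 0) != f(1, 0)

# Yet the interventional distribution is flat:
# P(y = 1 | do(x = 0)) = P(y = 1 | do(x = 1)) = 0.5,
# i.e., P(y | do(x)) = P(y), and interventional data alone
# cannot reveal the X -> Y dependence.
assert p_y1_do(0) == p_y1_do(1) == 0.5
```

This is exactly why judging an arrow by “would an intervention change P(y)?” is weaker than judging it by “does f depend on x for some u?”: the two criteria disagree on cases like this one.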

For a formal distinction between SEM and interventional graphs (also known as “causal Bayes networks”), see Causality, pages 23-24 and 33-36. For more philosophical discussions defending counterfactuals and SEM against false alarms, see:
http://ftp.cs.ucla.edu/pub/stat_ser/R269.pdf
http://ftp.cs.ucla.edu/pub/stat_ser/r393.pdf

I hope this helps clarify the issue.

November 1, 2012

A New Prize Announced: Causality in Statistics Education

Filed under: Announcement,General — judea @ 6:30 pm

The American Statistical Association has announced a new prize,
“Causality in Statistics Education”, aimed at encouraging the teaching of
basic causal inference in introductory statistics courses.

The motivation for the prize is discussed in an interview I gave to Ron Wasserstein:
http://magazine.amstat.org/blog/2012/11/01/pearl/

Nomination procedures and selection criteria can be found here:
http://www.amstat.org/education/causalityprize/

I hope readers of this blog will participate, either by innovating new
ways of teaching causation or by identifying candidates deserving of the prize.
Judea
