Causal Analysis in Theory and Practice

May 31, 2020

What Statisticians Want to Know about Causal Inference and The Book of Why

Filed under: Causal Effect,DAGs,Discussion,Economics,Epidemiology,Opinion — Judea Pearl @ 4:09 pm

I was privileged to be interviewed recently by David Hand, Professor of Statistics at Imperial College, London, and a former President of the Royal Statistical Society. I would like to share this interview with readers of this blog since many of the questions raised by David keep coming up in my conversations with statisticians and machine learning researchers, both privately and on Twitter.

For me, David represents mainstream statistics, and the reason I find his perspective so valuable is that he does not have a stake in causality and its various formulations. Like most mainstream statisticians, he is simply curious to understand what the big fuss is all about and how to communicate differences among various approaches without taking sides.

So, I’ll let David start, and I hope you find it useful.

Judea Pearl Interview by David Hand

There are some areas of statistics which seem to attract controversy and disagreement, and causal modelling is certainly one of them. In an attempt to understand what all the fuss is about, I asked Judea Pearl about these differences in perspective. Pearl is a world leader in the scientific understanding of causality. He is a recipient of the ACM Turing Award (computing’s “Nobel Prize”), for “fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning”, the David E. Rumelhart Prize for Contributions to the Theoretical Foundations of Human Cognition, and is a Fellow of the American Statistical Association.

QUESTION 1:

I am aware that causal modelling is a hotly contested topic, and that there are alternatives to your perspective – the work of statisticians Don Rubin and Phil Dawid spring to mind, for example. Words like counterfactual, Popperian falsifiability, potential outcomes, appear. I’d like to understand the key differences between the various perspectives, so can you tell me what are the main grounds on which they disagree?

ANSWER 1:

You might be surprised to hear that, despite what seem to be hotly contested debates, there are very few philosophical differences among the various “approaches.” And I put “approaches” in quotes because the differences are more among historical traditions, or “frameworks,” than among scientific principles. If we compare, for example, Rubin’s potential outcome framework with my framework, named “Structural Causal Models” (SCM), we find that the two are logically equivalent; a theorem in one is a theorem in the other, and an assumption in one can be written as an assumption in the other. This means that, starting with the same set of assumptions, every solution obtained in one can also be obtained in the other.

But logical equivalence does not mean “modeling equivalence” when we consider issues such as transparency, credibility or tractability. The equations for straight lines in polar coordinates are equivalent to those in Cartesian coordinates, yet are hardly manageable when it comes to calculating areas of squares or triangles.

In SCM, assumptions are articulated in the form of equations among measured variables, each asserting how one variable responds to changes in another. Graphical models are simple abstractions of those equations and, remarkably, are sufficient for answering many causal questions when applied to non-experimental data. An arrow X → Y in a graphical model represents the capacity to respond to such changes. All causal relationships are derived mechanically from those qualitative primitives, demanding no further judgment of the modeller.
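The relationship between structural equations and their graphical abstraction can be made concrete with a toy model. The model below, including its variables and coefficients, is entirely hypothetical, chosen only to illustrate how each equation asserts how one variable responds to changes in another; the corresponding diagram Z → X → Y (with an extra arrow Z → Y) retains only the qualitative claims.

```python
import random

def sample(intervene_x=None):
    """Draw one unit from the toy SCM; intervene_x forces X (the do-operator)."""
    u_z, u_x, u_y = (random.gauss(0, 1) for _ in range(3))
    z = u_z                                                      # Z := U_z
    x = 0.8 * z + u_x if intervene_x is None else intervene_x    # X := 0.8 Z + U_x
    y = 0.5 * x + 0.3 * z + u_y                                  # Y := 0.5 X + 0.3 Z + U_y
    return z, x, y

random.seed(0)
n = 100_000
# Average causal effect E[Y | do(X=1)] - E[Y | do(X=0)]; close to 0.5 here,
# because intervening replaces the equation for X while leaving the others intact.
effect = (sum(sample(1.0)[2] for _ in range(n)) -
          sum(sample(0.0)[2] for _ in range(n))) / n
```

Simulating under do(X = x), i.e., overriding one equation and leaving the rest untouched, is exactly the operation the arrow diagram licenses; the coefficients themselves never appear in the graph.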

In Rubin’s framework, assumptions are expressed as conditional independencies among counterfactual variables, also known as “ignorability conditions.” The mental task of ascertaining the plausibility of such assumptions is beyond anyone’s capacity, which makes them extremely hard for researchers to articulate or to verify. For example, the task of deciding which measurements to include in the analysis (or in the propensity score) is intractable in the language of conditional ignorability. Judging whether the assumptions are compatible with the available data is another task that is trivial in graphical models and insurmountable in the potential outcome framework.

Conceptually, the differences can be summarized thus: The graphical approach goes where scientific knowledge resides, while Rubin’s approach goes where statistical routines need to be justified. The difference shines through when simple problems are solved side by side in both approaches, as in my book Causality (2009). The main reason differences between approaches are still debated in the literature is that most statisticians are watching these debates as outsiders, instead of trying out simple examples from beginning to end. Take for example Simpson’s paradox, a puzzle that has intrigued a century of statisticians and philosophers. It is still as vexing to most statisticians today as it was to Pearson in 1899, and the task of deciding which data to consult, the aggregated or the disaggregated, is still avoided by all statistics textbooks.
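The vexing pattern of Simpson's paradox is easy to reproduce. The counts below follow the pattern of the classic kidney-stone study (they are illustrative, not quoted from the published paper): within each stratum the treatment has the higher recovery rate, yet after aggregation the comparison reverses.

```python
# stratum: (treated_recovered, treated_total, control_recovered, control_total)
data = {
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}

# Within every stratum, the treatment wins...
for tr, tt, cr, ct in data.values():
    assert tr / tt > cr / ct

# ...yet in the aggregate it loses.
tr_all, tt_all, cr_all, ct_all = (sum(v[i] for v in data.values()) for i in range(4))
assert tr_all / tt_all < cr_all / ct_all   # 273/350 = 0.78 < 289/350 ≈ 0.826
```

Which table to trust is not decided by the data themselves but by the causal story behind them: if the stratifying variable is a common cause of both treatment choice and recovery, the disaggregated comparison is the right one.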

To summarize, causal modeling, a topic that should be of prime interest to all statisticians, is still perceived to be a “hotly contested topic,” rather than the main frontier of statistical research. The emphasis on “differences between the various perspectives” prevents statisticians from seeing the exciting new capabilities that now avail themselves, and which “enable us to answer questions that we have always wanted but were afraid to ask.” It is hard to tell which comes first: whether fear of those “differences” prevents statisticians from seeing the excitement, or whether cultural inhibitions dampen the excitement and drive statisticians to discuss “differences” instead.

QUESTION 2:

There are different schools of statistics, but I think that most modern pragmatic applied statisticians are rather eclectic, and will choose a method which has the best capability to answer their particular questions. Does the same apply to approaches to causal modelling? That is, do the different perspectives have strengths and weaknesses, and should we be flexible in our choice of approach?

ANSWER 2:

These strengths and weaknesses are seen clearly in the SCM framework, which unifies several approaches and provides a flexible way of leveraging the merits of each. In particular, SCM combines graphical models and potential outcome logic. The graphs are used to encode what we know (i.e., the assumptions we are willing to defend) and the logic is used to encode what we wish to know, that is, the research question of interest. Simple mathematical tools can then combine these two with data and produce consistent estimates.

The availability of these unifying tools now calls on statisticians to become actively involved in causal analysis, rather than attempting to judge approaches from a distance. The choice of approach will become obvious once research questions are asked and the stage is set to articulate subject matter information that is necessary in answering those questions.

QUESTION 3:

To a very great extent the modern big data revolution has been driven by so-called “data-based” models and algorithms, where understanding is not necessarily relevant or even helpful, and where there is often no underlying theory about how the variables are related. Rather, the aim is simply to use data to construct a model or algorithm which will predict an outcome from input variables (deep learning neural networks being an illustration). But this approach is intrinsically fragile, relying on an assumption that the data properly represent the population of interest. Causal modelling seems to me to be at the opposite end of the spectrum: it is intrinsically “theory-based”, because it has to begin with a causal model. In your approach, described in an accessible way in your recent book The Book of Why, such models are nicely summarised by your arrow charts. But don’t theory-based models have the complementary risk that they rely heavily on the accuracy of the model? As you say on page 160 of The Book of Why, “provided the model is correct”.

ANSWER 3:

When the tasks are purely predictive, model-based methods are indeed not immediately necessary and deep neural networks perform surprisingly well. This is level-1 (associational) in the Ladder of Causation described in The Book of Why. In tasks involving interventions, however (level-2 of the Ladder), model-based methods become a necessity. There is no way to predict the effect of policy interventions (or treatments) unless we are in possession of either causal assumptions or controlled randomized experiments employing identical interventions. In such tasks, and absent controlled experiments, reliance on the accuracy of the model is inevitable, and the best we can do is to make the model transparent, so that its accuracy can be (1) tested for compatibility with data and/or (2) judged by experts as well as policy makers and/or (3) subjected to sensitivity analysis.

A major reason why statisticians are reluctant to state and rely on untestable modeling assumptions stems from lack of training in managing such assumptions, however plausible. Even stating such unassailable assumptions as “symptoms do not cause diseases” or “drugs do not change a patient’s sex” requires a vocabulary that is not familiar to the great majority of living statisticians. Things become worse in the potential outcome framework, where such assumptions resist intuitive interpretation, let alone judgment of plausibility.

It is important at this point to go back and qualify my assertion that causal models are not necessary for purely predictive tasks. Many tasks that, at first glance, appear to be predictive turn out to require causal analysis. A simple example is the problem of external validity, or inference across populations. Differences among populations are very similar to differences induced by interventions, hence methods of transporting information from one population to another can leverage all the tools developed for predicting the effects of interventions.

A similar transfer applies to missing data analysis, traditionally considered a statistical problem. Not so. It is inherently a causal problem, since modeling the reason for missingness is crucial for deciding how we can recover from missing data. Indeed, modern methods of missing data analysis, employing causal diagrams, are able to recover statistical and causal relationships that purely statistical methods have failed to recover.

QUESTION 4:

In a related vein, the “backdoor” and “frontdoor” adjustments and criteria described in the book are very elegant ways of extracting causal information from arrow diagrams. They permit causal information to be obtained from observational data. Provided, that is, that the arrow diagram accurately represents the relationships between all the relevant variables. So doesn’t valid application of this elegant calculus depend critically on the accuracy of the base diagram?

ANSWER 4:

Of course. But as we have agreed above, EVERY exercise in causal inference “depends critically on the accuracy” of the theoretical assumptions we make. Our choice is whether to make these assumptions transparent, namely, in a form that allows us to scrutinize their veracity, or bury those assumptions in cryptic notation that prevents scrutiny.

In a similar vein, I must modify your opening statement, which described the “backdoor” and “frontdoor” criteria as “elegant ways of extracting causal information from arrow diagrams.” A more accurate description would be “…extracting causal information from rudimentary scientific knowledge.” The diagrammatic description of these criteria enhances, rather than restricts, their range of applicability. What these criteria in fact do is extract quantitative causal information from conceptual understanding of the world; arrow diagrams simply represent the extent to which one has or does not have such understanding. Avoiding graphs conceals what knowledge one has, as well as what doubts one entertains.
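For readers who want to see the backdoor adjustment in action, here is a minimal sketch. The joint distribution below is hypothetical, with binary X, Y, Z, and Z is assumed to satisfy the backdoor criterion relative to (X, Y); the adjustment formula P(y | do(x)) = Σz P(y | x, z) P(z) then reduces the interventional query to ordinary conditional probabilities.

```python
# Hypothetical joint distribution over binary Z, X, Y, consistent with
# the diagram Z -> X, Z -> Y, X -> Y (all numbers are made up).
p_z = {0: 0.6, 1: 0.4}                       # P(Z = z)
p_x1_given_z = {0: 0.3, 1: 0.7}              # P(X = 1 | Z = z)
p_y1_given_xz = {(0, 0): 0.2, (0, 1): 0.4,
                 (1, 0): 0.5, (1, 1): 0.8}   # P(Y = 1 | X = x, Z = z)

def p_y1_do_x(x):
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in (0, 1))

def p_y1_given_x(x):
    """Ordinary conditioning, for contrast: weights strata by P(z | x)."""
    def p_x_given_z(z):
        return p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]
    p_x = sum(p_x_given_z(z) * p_z[z] for z in (0, 1))
    return sum(p_y1_given_xz[(x, z)] * p_x_given_z(z) * p_z[z]
               for z in (0, 1)) / p_x

# Intervening and conditioning disagree when Z confounds X and Y:
# p_y1_do_x(1) = 0.62, while p_y1_given_x(1) ≈ 0.683.
```

The contrast between the two functions is the point: conditioning weights the strata by P(z | x), whereas intervening weights them by the marginal P(z).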

QUESTION 5:

You say, in The Book of Why (p5-6) that the development of statistics led it to focus “exclusively on how to summarise data, not on how to interpret it.” It’s certainly true that when the Royal Statistical Society was established it focused on “procuring, arranging, and publishing ‘Facts calculated to illustrate the Condition and Prospects of Society’,” and said that “the first and most essential rule of its conduct [will be] to exclude carefully all Opinions from its transactions and publications.” But that was in the 1830s, and things have moved on since then. Indeed, to take one example, clinical trials were developed in the first half of the Twentieth Century and have a history stretching back even further. The discipline might have been slow to get off the ground in tackling causal matters, but surely things have changed and a very great deal of modern statistics is directly concerned with causal matters – think of risk factors in epidemiology or manipulation in experiments, for example. So aren’t you being a little unfair to the modern discipline?

ANSWER 5:

Ronald Fisher’s manifesto, in which he pronounced that “the object of statistical methods is the reduction of data,” was published in 1922, not in the 19th century (Fisher 1922). Data produced in clinical trials have been the only data that statisticians recognize as legitimate carriers of causal information, and our book devotes a whole chapter to this development. With the exception of this singularity, however, the bulk of mainstream statistics has been glaringly uninterested in causal matters. And I base this observation on three faithful indicators: statistics textbooks, curricula at major statistics departments, and published texts of Presidential Addresses in the past two decades. None of these sources can convince us that causality is central to statistics.

Take any book on the history of statistics, and check whether it considers causal analysis to be of primary concern to the leading players in 20th century statistics. For example, Stigler’s The Seven Pillars of Statistical Wisdom (2016) makes only a passing remark on two (hardly known) publications in causal analysis.

I am glad you mentioned epidemiologists’ analysis of risk factors as an example of modern interest in causal questions. Unfortunately, epidemiology is not representative of modern statistics. In fact, epidemiology is the one field where causal diagrams have become a second language, in contrast to mainstream statistics, where causal diagrams are still taboo (e.g., Efron and Hastie, 2016; Gelman and Hill, 2007; Imbens and Rubin, 2015; Witte and Witte, 2017).

When an academic colleague asks me “Aren’t you being a little unfair to our discipline, considering the work of so and so?”, my answer is “Must we speculate on what ‘so and so’ did? Can we discuss the causal question that YOU have addressed in class in the past year?” The conversation immediately turns realistic.

QUESTION 6:

Isn’t the notion of intervening through randomisation still the gold standard for establishing causality?

ANSWER 6:

It is. Although in practice, the hegemony of randomized trials is being contested by alternatives. Randomized trials suffer from incurable problems such as selection bias (recruited subjects are rarely representative of the target population) and lack of transportability (results are not applicable when populations change). The new calculus of causation helps us overcome these problems, thus achieving greater overall credibility; after all, observational studies are conducted in the natural habitat of the target population.

QUESTION 7:

What would you say are the three most important ideas in your approach? And what, in particular, would you like readers of The Book of Why to take away from the book?

ANSWER 7:

The three most important ideas in the book are: (1) Causal analysis is easy, but requires causal assumptions (or experiments), and those assumptions require a new mathematical notation and a new calculus. (2) The Ladder of Causation, consisting of (i) association, (ii) interventions and (iii) counterfactuals, is the Rosetta Stone of causal analysis. To answer a question at level (x) we must have assumptions at level (x) or higher. (3) Counterfactuals emerge organically from basic scientific knowledge and, when represented in graphs, yield transparency, testability and a powerful calculus of cause and effect. I must add a fourth takeaway: (4) To appreciate what modern causal analysis can do for you, solve one toy problem from beginning to end; it will tell you more about statistics and causality than dozens of scholarly articles laboring to overview statistics and causality.

REFERENCES

Efron, B. and Hastie, T., Computer Age Statistical Inference: Algorithms, Evidence, and Data Science, New York, NY: Cambridge University Press, 2016.

Fisher, R., “On the mathematical foundations of theoretical statistics,” Philosophical Transactions of the Royal Society of London, Series A 222, 311, 1922.

Gelman, A. and Hill, J., Data Analysis Using Regression and Multilevel/Hierarchical Models, New York: Cambridge University Press, 2007.

Imbens, G.W. and Rubin, D.B., Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction, Cambridge, MA: Cambridge University Press, 2015.

Witte, R.S. and Witte, J.S., Statistics, 11th edition, Hoboken, NJ: John Wiley & Sons, Inc. 2017.

December 20, 2014

A new book out, Morgan and Winship, 2nd Edition

Filed under: Announcement,Book (J Pearl),General,Opinion — judea @ 2:49 pm

Here is my book recommendation for the month:
Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research) Paperback – November 17, 2014
by Stephen L. Morgan (Author), Christopher Winship (Author)
ISBN-13: 978-1107694163 ISBN-10: 1107694167 Edition: 2nd

My book-cover blurb reads:
“This improved edition of Morgan and Winship’s book elevates traditional social sciences, including economics, education and political science, from a hopeless flirtation with regression to a solid science of causal interpretation, based on two foundational pillars: counterfactuals and causal graphs. A must for anyone seeking an understanding of the modern tools of causal analysis, and a must for anyone expecting science to secure explanations, not merely descriptions.”

But Gary King puts it in a more compelling historical perspective:
“More has been learned about causal inference in the last few decades than the sum total of everything that had been learned about it in all prior recorded history. The first comprehensive survey of the modern causal inference literature was the first edition of Morgan and Winship. Now with the second edition of this successful book comes the most up-to-date treatment.” Gary King, Harvard University

King’s statement is worth repeating here to remind us that we are indeed participating in an unprecedented historical revolution:

“More has been learned about causal inference in the last few decades than the sum total of everything that had been learned about it in all prior recorded history.”

It is the same revolution that Miquel Porta noted to be transforming the discourse in Epidemiology (link).

Social science and Epidemiology have been spearheading this revolution, but I don’t think other disciplines will sit idle for too long.

In a recent survey (here), I attributed the revolution to “a fruitful symbiosis between graphs and counterfactuals that has unified the potential outcome framework of Neyman, Rubin, and Robins with the econometric tradition of Haavelmo, Marschak, and Heckman. In this symbiosis, counterfactuals emerge as natural byproducts of structural equations and serve to formally articulate research questions of interest. Graphical models, on the other hand, are used to encode scientific assumptions in a qualitative (i.e. nonparametric) and transparent language and to identify the logical ramifications of these assumptions, in particular their testable implications.”

Other researchers may wish to explain the revolution in other ways; still, Morgan and Winship’s book is a perfect example of how the symbiosis can work when taken seriously.

A new review of Causality

Filed under: Book (J Pearl),General,Opinion — eb @ 2:46 pm

A new review of Causality (2nd Edition, 2013 printing) has appeared in Acta Sociologica 2014, Vol. 57(4) 369-375.
http://bayes.cs.ucla.edu/BOOK-2K/elwert-review2014.pdf
Reviewed by Felix Elwert, University of Wisconsin-Madison, USA.

Elwert highlights specific sections of Causality that can empower social scientists with new insights or new tools for applying modern methods of causal inference in their research. Coming from a practical social science perspective, this review is a welcome addition to the list of 33 other reviews of Causality, which tend to be more philosophical. See http://bayes.cs.ucla.edu/BOOK-2K/book_review.html

I am particularly gratified by Elwert’s final remarks:
“Pearl’s language empowers social scientists to communicate causal models with each other across sub-disciplines…and enables social scientists to communicate more effectively with statistical methodologists.”

September 2, 2014

In Defense of Unification (Comments on West and Koch’s review of *Causality*)

Filed under: Discussion,General,Opinion — moderator @ 3:05 am

A new review of my book *Causality* (Pearl, 2009) has appeared in the Journal of Structural Equation Modeling (SEM), authored by Stephen West and Tobias Koch (W-K). See http://bayes.cs.ucla.edu/BOOK-2K/west-koch-review2014.pdf

I find the main body of the review quite informative, and I thank the reviewers for taking the time to give SEM readers an accurate summary of each chapter, as well as a lucid description of the key ideas that tie the chapters together. However, when it comes to accepting the logical conclusions of the book, the reviewers seem reluctant, and tend to cling to traditions that lack the language, tools and unifying perspective to benefit from the chapters reviewed.

The reluctance culminates in the following paragraph:
“We value Pearl’s framework and his efforts to show that other frameworks can be translated into his approach. Nevertheless we believe that there is much to be gained by also considering the other major approaches to causal inference.”

W-K seem to value my “efforts” toward unification, but not the unification itself, and we are not told whether they doubt the validity of the unification, or whether they doubt its merits.
Or do they accept the merits and still see “much to be gained” by pre-unification traditions? If so, what is it that can be gained by those traditions and why can’t these gains be achieved within the unified framework presented in *Causality*?

To read more, click here.

July 31, 2012

Follow-up note posted by Elias Bareinboim

Filed under: Discussion,General,Identification,Opinion — eb @ 4:15 pm

Andrew Gelman and his blog readers followed up on the previous discussion (link here) about how his “hierarchical modeling” framework addresses causal inference and the transportability of causal effects, and I just posted my answer.

This is the general link for the discussion:
http://andrewgelman.com/2012/07/examples-of-the-use-of-hierarchical-modeling-to-generalize-to-new-settings/

Here is my answer:
http://andrewgelman.com/2012/07/examples-of-the-use-of-hierarchical-modeling-to-generalize-to-new-settings/#comment-92499

Cheers,
Bareinboim

July 19, 2012

A note posted by Elias Bareinboim

In the past week, I have been engaged in a discussion with Andrew Gelman and his blog readers regarding causal inference, selection bias, confounding, and generalizability. I was trying to understand how his method, which he calls “hierarchical modelling,” would handle these issues and what guarantees it provides. Unfortunately, I could not reach an understanding of Gelman’s method (probably because no examples were provided).

Still, I think that this discussion, having touched on core issues of scientific methodology, would be of interest to readers of this blog; the link follows:
http://andrewgelman.com/2012/07/long-discussion-about-causal-inference-and-the-use-of-hierarchical-models-to-bridge-between-different-inferential-settings/

Previous discussions took place regarding Rubin and Pearl’s dispute, here are some interesting links:
http://andrewgelman.com/2009/07/disputes_about/
http://andrewgelman.com/2009/07/more_on_pearlru/
http://andrewgelman.com/2009/07/pearls_and_gelm/
http://andrewgelman.com/2012/01/judea-pearl-on-why-he-is-only-a-half-bayesian/

If anyone understands how “hierarchical modeling” can solve a simple toy problem (e.g., M-bias, control of confounding, mediation, generalizability), please share with us.

Cheers,
Bareinboim

September 4, 2011

Comments on an article by Grice, Shlimgen and Barrett (GSB): “Regarding Causation and Judea Pearl’s Mediation Formula”

Filed under: Discussion,Mediated Effects,Opinion — moderator @ 3:00 pm

Stan Mulaik called my attention to a recent article by Grice, Shlimgen and Barrett (GSB) (linked here http://psychology.okstate.edu/faculty/jgrice/personalitylab/OOMMedForm_2011A.pdf ) which is highly critical of structural equation modeling (SEM) in general, and of the philosophy and tools that I presented in “The Causal Foundation of SEM” (Pearl 2011) ( http://ftp.cs.ucla.edu/pub/stat_ser/r370.pdf.)  In particular, GSB disagree with the conclusions of the Mediation Formula — a tool for assessing what portion of a given effect is mediated through a specific pathway.

I responded with a detailed account of the disagreements between us (copied below), which can be summarized as follows:

Summary

1. The “OOM” analysis used by GSB is based strictly on frequency tables (or “multi-grams”) and, as such, cannot assess cause-effect relations without committing to some causal assumptions. Those assumptions are missing from GSB’s account, possibly due to their rejection of SEM.

2. I define precisely what is meant by “the extent to which the effect of X on Y is mediated by a third variable, say Z,” and demonstrate both why such questions are important in decision making and model building and why they cannot be captured by observation-oriented methods such as OOM.

3. Using the same data and a slightly different design, I challenge GSB to answer a simple cause-effect question with their method (OOM), or with any method that dismisses SEM or causal algebra as unnecessary.

4. I further challenge GSB to present us with ONE RESEARCH QUESTION that they can answer and that is not answered swiftly, formally and transparently by the SEM methodology presented in Pearl (2011) (starting, of course, with the same assumptions and the same data).

5. I explain what gives me the assurance that no such research question will ever be found, and why even the late David Freedman, whom GSB lionize for his staunch criticism of SEM, converted to SEM thinking at the end of his life.

6. I alert GSB to two systematic omissions from their writings and posted arguments, without which no comparison can be made to other methodologies:
(a) A clear statement of the research question that the investigator attempts to answer, and
(b) A clear statement of the assumptions that the investigator is willing to make about reality.

Click here for the full response.

Judea

May 31, 2010

An Open Letter from Judea Pearl to Nancy Cartwright concerning “Causal Pluralism”

Filed under: Discussion,Nancy Cartwright,Opinion,structural equations — moderator @ 1:40 pm

Dear Nancy,

This letter concerns the issue of “causal pluralism” which came up in my review of your book “Hunting Causes and Using Them” (Cambridge 2007) and in your recent reply to my review, both in a recent issue of Economics and Philosophy (26:69-77, 2010).

My review:
http://journals.cambridge.org/action/displayFulltext?type=1&fid=7402268&jid=&volumeId=&issueId=&aid=7402260

Cartwright Reply:
http://journals.cambridge.org/action/displayFulltext?type=1&fid=7402292&jid=&volumeId=&issueId=&aid=7402284

I have difficulties understanding causal pluralism because I am a devout mono-theist by nature, especially when it comes to causation. Although I recognize that causes come in various shades (total, direct, and indirect causes; necessary and sufficient causes; actual and generic causes), I have seen them all defined, analyzed and understood within a single formal framework of Structural Causal Models (SCM), as described in Causality (Chapter 7).

So, here I am, a mono-theist claiming that every query related to cause-effect relations can be formulated and answered in the SCM framework, and here you are, a pluralist, claiming exactly the opposite. Quoting:

“There are a variety of different kinds of causal systems; methods for discovering causes differ across different kinds of systems as do the inferences that can be made from causal knowledge once discovered. As to causal models, these must have different forms depending on what they are to be used for and on what kinds of systems are under study.

If causal pluralism is right, Pearl’s demand to tell economists how they ought to think about causation is misplaced; and his own are not the methods to use. They work for special kinds of problems and for special kinds of systems – those whose causal laws can be represented as Pearl represents them. HC&UT argues these are not the only kinds there are, nor uncontroversially the most typical.”

I am very interested in finding out whether, by committing to SCM, I have overlooked important problem areas that are not captured in SCM. But for this we need an example; i.e., an example of ONE problem that cannot be formulated and answered in SCM.

The trouble I have with the examples cited in your reply is that they rest on other examples and concepts that are scattered over many pages of your book, which makes them hard to follow. Can we perhaps see one such example, hopefully with no more than 10 variables, described in the following format:

Example: An agent is facing a decision or a question.

Given: The agent assumes the following about the world: 1. 2. 3. ….
The agent has data about …., taken under the following conditions.
Needed: The agent wishes to find out whether…..

Why use this dry format, you may ask, when your book is full of dozens of imaginative examples, from physics to econometrics? Because if you succeed in showing ONE example in this concise format you will convert one heathen to pluralism, and this heathen will be grateful to you for the rest of his spiritual life.

And if he is converted, he will try to help you convert others (I promise) and then, who knows? Life on this God-given earth would become so much more enlightened.

And, as Aristotle used to say (or should have) May clarity shine on causality land.

Sincerely,

Judea Pearl

May 3, 2010

On Mediation, counterfactuals and manipulations

Filed under: Discussion,Opinion — moderator @ 9:00 pm

Opening remarks

A few days ago, Dan Sharfstein posed a question regarding the “well-definedness” of “direct effects” in situations where the mediating variables cannot be manipulated. Dan’s question triggered a private email discussion that has culminated in a posting by Thomas Richardson and Jamie Robins (below), followed by Judea Pearl’s reply.

We urge more people to join this important discussion.

Thomas Richardson and James Robins’ discussion:

Hello,

There has recently been some discussion of mediation and direct effects.

There are at least two issues here:

(1) Which counterfactuals are well defined?

(2) Even when counterfactuals are well defined, should we include assumptions that identify effects (i.e., the natural direct effect) that could never be confirmed, even in principle, by a Randomized Controlled Trial (RCT)?

As to (1), it is clear to most that all counterfactuals are vague to a certain extent and can be made more precise by carefully describing the (quite possibly only hypothetical) intervention you want the counterfactual to represent. For this reason, whether you take manipulation or causality as ontologically primary, we need to relate causation to manipulation to clarify and make more precise which counterfactual world we are considering.

On (2) we have just finished a long paper on the issue, fleshing out considerably an argument I (Jamie) made at the American Statistical Association (in 2005) discussing a talk by Judea on natural (pure and total) direct effects.

“Alternative Graphical Causal Models and the Identification of Direct Effects”

It is available at
http://www.csss.washington.edu/Papers/wp100.pdf.

Here is a brief summary:

Click here for the full post.

Best wishes,

Jamie Robins  and Thomas Richardson

Judea Pearl’s reply:

1.
As to which counterfactuals are “well defined,” my position is that counterfactuals attain their “definition” from the laws of physics and, therefore, they are “well defined” before one even contemplates any specific intervention. Newton concluded that tides are DUE to lunar attraction without thinking about manipulating the moon’s position; he merely envisioned how water would react to gravitational force in general.

In fact, counterfactuals (e.g., f = ma) earn their usefulness precisely because they are not tied to a specific manipulation, but can serve a vast variety of future interventions, whose details we do not know in advance; it is the duty of the intervenor to make precise how each anticipated manipulation fits into our store of counterfactual knowledge, also known as “scientific theories.”

2.
Regarding the identifiability of mediation, I have two comments to make: one related to your Minimal Causal Models (MCM) and one related to the role of structural equation models (SEM) as the logical basis of counterfactual analysis.

Click here for Judea’s reply.

Best regards,

Judea Pearl

September 23, 2009

Differences Induced by Adding Covariates

Filed under: Opinion — moderator @ 3:00 am

Donald Klein writes:

Unfortunately, much of this is over my head. As a practicing trialist, the apparent disagreements re management of the number of covariates in ANCOVA seem important, but I can’t find a reference to an actual trial where these alternative analyses produced different results. Even a model trial would be helpful to show the import of these theoretical differences.

Similarly with regard to missing data, which turns the experimental randomization towards a naturalistic study.

Cordially,

Don Klein

