Causal Analysis in Theory and Practice

June 28, 2016

On the Classification and Subsumption of Causal Models

Filed under: Causal Effect,Counterfactual,structural equations — bryantc @ 5:32 pm

From Christos Dimitrakakis:

>> To be honest, there is such a plethora of causal models, that it is not entirely clear what subsumes what, and which one is equivalent to what. Is there a simple taxonomy somewhere? I thought that influence diagrams were sufficient for all causal questions, for example, but one of Pearl’s papers asserts that this is not the case.

Reply from J. Pearl:

Dear Christos,

From my perspective, I do not see a plethora of causal models at all, so it is hard for me to answer your question in specific terms. What I do see is a symbiosis of all causal models in one framework, called the Structural Causal Model (SCM), which unifies structural equations, potential outcomes, and graphical models. So, for me, the world appears simple, well organized, and smiling. Perhaps you can tell us which models caught your attention and caused you to see a plethora of models lacking a subsumption taxonomy.

The taxonomy that has helped me immensely is the three-level hierarchy described in chapter 1 of my book Causality: 1. association, 2. intervention, and 3. counterfactuals. It is a useful hierarchy because it provides an objective criterion for classification: you cannot answer questions at level i unless you have assumptions from level i or higher.
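For readers who want the notation, one conventional way to write the typical query at each level (in the notation used throughout the book) is shown below; the third expression is the kind of sentence that only a functional model can evaluate.

\begin{align*}
\text{Level 1 (association):} &\quad P(y \mid x) \\
\text{Level 2 (intervention):} &\quad P(y \mid do(x)),\; P(y \mid do(x), z) \\
\text{Level 3 (counterfactuals):} &\quad P(y_x \mid x', y')
\end{align*}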

As to influence diagrams, their relation to SCM is discussed in Section 11.6 of my book Causality (2009). Influence diagrams belong to the 2nd layer of the causal hierarchy, together with Causal Bayesian Networks. They lack, however, two facilities:

1. The ability to process counterfactuals.
2. The ability to handle novel actions.

To elaborate:

1. Counterfactual sentences (e.g., “Given what I see, I should have acted differently”) require functional models. Influence diagrams are built on conditional and interventional probabilities, that is, p(y|x) or p(y|do(x)). There is no interpretation of E(Y_x | x’) in this framework (a toy illustration appears after item 2 below).

2. The probabilities that annotate links emanating from Action Nodes are of the interventional type, p(y|do(x)), and must be assessed judgmentally by the user. No facility is provided for deriving these probabilities from data together with the structure of the graph. Such a derivation is developed in chapter 3 of Causality, in the context of Causal Bayesian Networks, where every node can be turned into an action node.
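To make the contrast concrete, here is a minimal simulation sketch in Python (the linear structural equations and coefficients are invented purely for illustration). A functional model lets us reuse the same exogenous background under a hypothetical action, which is exactly what a counterfactual such as E(Y_x | x') requires; the conditioning event X > 2 stands in for x' only to keep the Monte Carlo conditioning simple.

import random

random.seed(0)
N = 100_000

# Exogenous background variables. Abduction is trivial here because we
# simulate them directly; with real data they would have to be inferred
# from the evidence, which is only possible in a functional (SCM) model.
u  = [random.gauss(0, 1) for _ in range(N)]   # unobserved confounder
ex = [random.gauss(0, 1) for _ in range(N)]
ey = [random.gauss(0, 1) for _ in range(N)]

# Hypothetical structural equations: X := U + E_X,  Y := 2*X + 3*U + E_Y
x = [u[i] + ex[i] for i in range(N)]
y = [2 * x[i] + 3 * u[i] + ey[i] for i in range(N)]

# Interventional query (layer 2): E[Y | do(X=1)] -- replace the equation for X.
y_do1 = [2 * 1 + 3 * u[i] + ey[i] for i in range(N)]
print("E[Y | do(X=1)]   ~", sum(y_do1) / N)            # about 2

# Counterfactual query (layer 3): E[Y_{X=1} | X > 2] -- keep the background of
# the units that actually had X > 2 (abduction), force X = 1 for those same
# units (action), and recompute Y (prediction).
idx  = [i for i in range(N) if x[i] > 2]
y_cf = [2 * 1 + 3 * u[i] + ey[i] for i in idx]
print("E[Y_{X=1} | X>2] ~", sum(y_cf) / len(idx))      # well above 2

The two printed numbers differ because units with X > 2 tend to carry a large U; no combination of p(y|x) and p(y|do(x)) alone could have produced the second number.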

Using the causal hierarchy, the First Law of Counterfactuals, and the unification provided by SCM, the space of causal models should shine in clarity and simplicity. Try it, and let us know of any remaining questions.

Judea

June 21, 2016

Spring Greeting from the UCLA Causality Blog

Filed under: Announcement — bryantc @ 3:13 am

Dear friends in causality research,
————————————
This Spring Greeting from the UCLA Causality Blog contains:
A. News items concerning causality research,
B. New postings, new problems and some solutions.
————————————

A1.
The American Statistical Association (ASA) has announced recipients of the 2016 “Causality in Statistics Education Award”.
http://www.amstat.org/newsroom/pressreleases/05162016_Causality_Award.pdf
Congratulations go to Onyebuchi Arah and Arvid Sjolander, who will receive this Award in July at the 2016 JSM meeting in Chicago.
For details of purpose and selection criteria, see http://www.amstat.org/education/causalityprize/

A2.
I will be giving another tutorial at the 2016 JSM meeting, titled “Causal Inference in Statistics: A Gentle Introduction.”
Details and Abstract can be viewed here: https://www.amstat.org/meetings/jsm/2016/onlineprogram/AbstractDetails.cfm?abstractid=321839

A3. Causal Inference — A Primer
For the many readers who have inquired, the print version of our new book “Causal Inference in Statistics – A Primer” is now available on Amazon and from Wiley, and is awaiting your reviews, questions, and suggestions. We have posted a book page for this very purpose, http://bayes.cs.ucla.edu/PRIMER/, which includes selected excerpts from each chapter, errata and updates, and a sample homework solution manual.

The errata page was updated recently under the diligent eye of Adamo Vincenzo. Thank you Adamo!

The Solution Manual will be available for instructors and will incorporate software solutions based on the DAGitty R package, authored by Johannes Textor. See http://dagitty.net/primer/

A4.
Vol. 4, Issue 2 of the Journal of Causal Inference (JCI) is scheduled to appear in September 2016. The current issue can be viewed here: http://www.degruyter.com/view/j/jci.2016.4.issue-1/issue-files/jci.2016.4.issue-1.xml My own contribution to the current issue discusses Savage’s Sure Thing Principle and its ramifications for causal reasoning: http://ftp.cs.ucla.edu/pub/stat_ser/r466.pdf

As always, submissions are welcome on all aspects of causal analysis, especially those deemed foundational. Chances of acceptance are inversely proportional to the time it takes a reviewer to figure out what problem the paper attempts to solve. So, please be transparent.

B1.
Recollections from the WCE conference at Stanford.

On May 21, Kosuke Imai and I participated in a panel on Mediation at the annual meeting of the West Coast Experiment Conference, organized by the Stanford Graduate School of Business. http://www.gsb.stanford.edu/facseminars/conferences/west-coast-experiment-conference

Some of my recollections are summarized on our Causality Blog here: http://causality.cs.ucla.edu/blog/index.php/2016/06/20/recollections-from-the-wce-conference-at-stanford/

B2. Generalizing Experimental findings
————————————
In light of new results concerning generalizability and selection bias, our team has updated the “external validity” entry of Wikipedia. Previously, the entry was all about threats to validity, with no word on how those threats can be circumvented. You may wish to check this entry for accuracy and possible extensions.

B3. Causality celebrates its 10,000 citations
————————————
According to Google Scholar, https://scholar.google.com/citations, my book Causality (Cambridge, 2000, 2009) has crossed the symbolic mark of 10,000 citations. To celebrate this numerological event, I wish to invite all readers of this blog to an open online party with the beer entirely on me. I don’t exactly know how to choreograph such a huge party, or how to make sure that each of you gets a fair share of the inspiration (or beer). So, please send creative suggestions for posting on this blog.

On a personal note: I am extremely gratified by this sign of receptiveness, and I thank readers of Causality for their comments, questions, corrections, and reservations, which have helped bring this book to its current shape (see http://bayes.cs.ucla.edu/BOOK-2K/).

Cheers,
Judea

June 20, 2016

Recollections from the WCE conference at Stanford

Filed under: Counterfactual,General,Mediated Effects,structural equations — bryantc @ 7:45 am

On May 21, Kosuke Imai and I participated in a panel on Mediation at the annual meeting of the West Coast Experiment Conference, organized by the Stanford Graduate School of Business: http://www.gsb.stanford.edu/facseminars/conferences/west-coast-experiments-conference. The following are some of my recollections from that panel.

1.
We began the discussion by reviewing causal mediation analysis and summarizing the exchange we had on the pages of Psychological Methods (2014)
http://ftp.cs.ucla.edu/pub/stat_ser/r389-imai-etal-commentary-r421-reprint.pdf

My slides for the panel can be viewed here:
http://web.cs.ucla.edu/~kaoru/stanford-may2016-bw.pdf

We ended with a consensus regarding the importance of causal mediation and the conditions for identifying Natural Direct and Indirect Effects from randomized as well as observational studies.

2.
We proceeded to discuss the symbiosis between the structural and the counterfactual languages. Here I focused on slides 4-6 (page 3), and remarked that only those who are willing to solve a toy problem from beginning to end, using both potential outcomes and DAGs, can understand the tradeoff between the two. Such a toy problem (and its solution) was presented in slide 5 (page 3), titled “Formulating a problem in Three Languages,” and the questions that I asked the audience are still ringing in my ears. Please have a good look at these two sets of assumptions and ask yourself:

a. Have we forgotten any assumption?
b. Are these assumptions consistent?
c. Is any of the assumptions redundant (i.e., does it follow logically from the others)?
d. Do they have testable implications?
e. Do these assumptions permit the identification of causal effects?
f. Are these assumptions plausible in the context of the scenario given?

As I was discussing these questions over slide 5, the audience seemed to be in general agreement with the conclusion that, despite their logical equivalence, the graphical language enables us to answer these questions immediately while the potential outcome language remains silent on all of them.

I consider this example to be pivotal to the comparison of the two frameworks. I hope that questions a-f will be remembered, and that speakers from both camps will be asked to address them squarely and explicitly.

The fact that graduate students made up the majority of the participants gives me hope that questions a-f will finally receive the attention they deserve.

3.
As we discussed the virtues of graphs, I found it necessary to reiterate the observation that DAGs are more than just a “natural and convenient way to express assumptions about causal structures” (Imbens and Rubin, 2013, p. 25). Praising their transparency while ignoring their inferential power misses the main role that graphs play in causal analysis. The power of graphs lies in computing complex implications of causal assumptions (i.e., the “science”), no matter in what language those assumptions are expressed. Typical implications are: conditional independencies among variables and counterfactuals, which covariates need to be controlled to remove confounding or selection bias, whether effects can be identified, and more. These implications could, in principle, be derived from any equivalent representation of the causal assumptions, not necessarily graphical, but not before incurring a prohibitive computational cost. See, for example, what happens when economists try to replace d-separation with graphoid axioms: http://ftp.cs.ucla.edu/pub/stat_ser/r420.pdf.
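To illustrate how mechanically such implications follow from a graph, here is a small, self-contained Python sketch that answers d-separation queries by the classical moralization method (take the ancestral subgraph, moralize it, delete the conditioning set, test reachability). The four-variable graph at the bottom is hypothetical, invented only for this example.

from collections import deque

def ancestors(dag, nodes):
    # Return the given nodes together with all their ancestors in the DAG.
    result, stack = set(nodes), list(nodes)
    while stack:
        child = stack.pop()
        for parent, children in dag.items():
            if child in children and parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

def d_separated(dag, xs, ys, zs):
    # True iff every path between xs and ys is blocked by zs.
    # dag maps each node to the set of its children.
    relevant = ancestors(dag, set(xs) | set(ys) | set(zs))
    undirected = {v: set() for v in relevant}
    # Moralize the ancestral subgraph: undirected parent-child edges ...
    for parent, children in dag.items():
        if parent in relevant:
            for child in children & relevant:
                undirected[parent].add(child)
                undirected[child].add(parent)
    # ... plus edges between parents that share a child.
    for child in relevant:
        parents = [p for p in relevant if child in dag.get(p, set())]
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                undirected[p].add(q)
                undirected[q].add(p)
    # Delete the conditioning set and check whether xs can reach ys.
    blocked = set(zs)
    seen = set(xs) - blocked
    frontier = deque(seen)
    while frontier:
        node = frontier.popleft()
        if node in ys:
            return False
        for nbr in undirected[node] - blocked:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return True

# Hypothetical graph: X -> M -> Y, with W confounding M and Y.
dag = {"X": {"M"}, "W": {"M", "Y"}, "M": {"Y"}, "Y": set()}
print(d_separated(dag, {"X"}, {"Y"}, set()))       # False: X and Y are dependent
print(d_separated(dag, {"X"}, {"Y"}, {"M", "W"}))  # True: {M, W} blocks every path
print(d_separated(dag, {"X"}, {"W"}, set()))       # True: a testable implication

The last query is exactly the kind of testable implication mentioned above: the graph announces that X and W should be independent in the data, with no algebra required.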

4.
Following the discussion of representations, we addressed questions posed to us by the audience, in particular, five questions submitted by Professor Jon Krosnick (Political Science, Stanford).

I summarize them in the following slide:

Krosnick’s Questions to Panel
———————————————-
1) Do you think an experiment has any value without mediational analysis?
2) Is a separate study directly manipulating the mediator useful? How is the second study any different from the first one?
3) Imai’s correlated residuals test seems valuable for distinguishing fake from genuine mediation. Is that so? And how is it related to the traditional mediational test?
4) Why isn’t it easy to test whether participants who show the largest increases in the posited mediator show the largest changes in the outcome?
5) Why is mediational analysis any “worse” than any other method of investigation?
———————————————-
My answers focused on questions 2, 4, and 5, which I summarize below:

2)
Q. Is a separate study directly manipulating the mediator useful?
Answer: Yes, it is useful if physically feasible but, still, it cannot give us an answer to the basic mediation question: “What percentage of the observed response is due to mediation?” The concept of mediation is necessarily counterfactual, i.e., it sits on the top layer of the causal hierarchy (see “Causality,” chapter 1). It therefore cannot be defined in terms of population experiments, however clever. Mediation can be evaluated with the help of counterfactual assumptions such as “conditional ignorability” or “no interaction,” but these assumptions cannot be verified in population experiments.
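For reference, the natural direct and indirect effects are defined by nested counterfactuals, and it is only under counterfactual assumptions of the kind just mentioned (e.g., conditional ignorability of the mediator; covariates are omitted here for brevity) that they reduce to the estimable Mediation Formula:

\begin{align*}
NDE &= E\big[Y_{x_1, M_{x_0}}\big] - E\big[Y_{x_0, M_{x_0}}\big], \qquad
NIE = E\big[Y_{x_0, M_{x_1}}\big] - E\big[Y_{x_0, M_{x_0}}\big] \\
NDE &= \sum_m \big[E(Y \mid x_1, m) - E(Y \mid x_0, m)\big]\, P(m \mid x_0), \qquad
NIE = \sum_m E(Y \mid x_0, m)\, \big[P(m \mid x_1) - P(m \mid x_0)\big]
\end{align*}

The first line lives at the counterfactual layer of the hierarchy; the second involves only conditional expectations and probabilities, which is precisely where the unverifiable assumptions do their work.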

4)
Q. Why isn’t it easy to test whether participants who show the largest increases in the posited mediator show the largest changes in the outcome?
Answer: Translating the question into counterfactual notation, the suggested test requires the existence of a monotonic function f_m such that, for every individual, Y_1 – Y_0 = f_m(M_1 – M_0).

This condition expresses a feature we expect to find in mediation, but it cannot be taken as a DEFINITION of mediation. It is essentially the way indirect effects are defined in the Principal Strata framework (Frangakis and Rubin, 2002), the deficiencies of which are well known. See http://ftp.cs.ucla.edu/pub/stat_ser/r382.pdf.

In particular, imagine a switch S controlling two light bulbs L1 and L2. Positive correlation between L1 and L2 does not mean that L1 mediates between the switch and L2. Many examples of incompatibility are demonstrated in the paper above.
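A minimal simulation of this switch-and-bulbs example (the structural equations are the obvious ones, written out only to make the point vivid): L1 and L2 are perfectly correlated, yet forcing L1 on or off leaves L2 untouched, so L1 does not mediate between S and L2.

import random

def bulbs(s, force_l1=None):
    # Structural equations: L1 := S and L2 := S; force_l1 implements do(L1).
    l1 = s if force_l1 is None else force_l1
    l2 = s
    return l1, l2

samples = [bulbs(random.randint(0, 1)) for _ in range(10_000)]
print("P(L1 == L2) =", sum(l1 == l2 for l1, l2 in samples) / len(samples))   # 1.0

intervened = [bulbs(random.randint(0, 1), force_l1=1) for _ in range(10_000)]
print("P(L2 = 1 | do(L1 = 1)) =", sum(l2 for _, l2 in intervened) / len(intervened))  # about 0.5, unchanged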

The conventional mediation tests (in the Baron and Kenny tradition) suffer from the same problem; they test features of mediation that are common in linear systems, but not the essence of mediation, which is universal to all systems: linear and nonlinear, with continuous as well as categorical variables.
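To see what such tests actually check, write down the linear structural model that the Baron-Kenny tradition implicitly assumes (the coefficients a, b, c' are generic placeholders):

\begin{align*}
M &= a\,X + \varepsilon_M, \qquad Y = c'\,X + b\,M + \varepsilon_Y \\
\text{direct effect} &= c', \qquad \text{indirect effect} = a b, \qquad \text{total effect} = c' + a b
\end{align*}

In this model the product a b and the difference between the total and the direct effect coincide, and the conventional tests exploit that coincidence; both break down once the equations are nonlinear or the variables categorical, whereas the counterfactual definitions given earlier do not.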

5)
Q. Why is mediational analysis any “worse” than any other method of investigation?
Answer: The answer is closely related to the one given to question 2). Mediation is not a “method” but a property of the population, which is defined counterfactually and therefore requires counterfactual assumptions for evaluation. Experiments are not sufficient, and in this sense mediation is “worse” than other properties under investigation, e.g., causal effects, which can be estimated entirely from experiments.

About the only thing we can ascertain experimentally is whether the (controlled) direct effect differs from the total effect, but we cannot evaluate the extent of mediation.

Another way to appreciate why stronger assumptions are needed for mediation is to note that non-confoundedness is not the same as ignorability. For non-binary variables one can construct examples where X and Y are not confounded (i.e., P(y|do(x)) = P(y|x)) and yet are not ignorable (i.e., Y_x is not independent of X). Mediation requires ignorability in addition to non-confoundedness.
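Stated side by side, in the notation of the preceding paragraph, the two conditions are:

\begin{align*}
\text{non-confoundedness:} &\quad P(y \mid do(x)) = P(y \mid x) \quad \text{for all } x, y \\
\text{ignorability:} &\quad Y_x \perp\!\!\!\perp X \quad \text{for all } x
\end{align*}

Ignorability (together with consistency, Y_x = Y whenever X = x) implies non-confoundedness, but, as noted above, the converse can fail for non-binary variables.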

Summary
Overall, the panel was illuminating, primarily due to the active participation of curious students. It gave me good reason to believe that Political Science is destined to become a bastion of modern causal analysis. I wish economists would follow suit, despite the hurdles they face in bringing causal analysis into economics education.
http://ftp.cs.ucla.edu/pub/stat_ser/r391.pdf
http://ftp.cs.ucla.edu/pub/stat_ser/r395.pdf

Judea

June 10, 2016

Post-doc Causality and Machine Learning

Filed under: Announcement — bryantc @ 7:58 am

We received the following announcement from Isabelle Guyon (UPSud/INRIA):

The Machine Learning and Optimization (TAO) group of the Laboratory of Research in Informatics (LRI) is seeking a postdoctoral researcher to work at the interface of machine learning and causal modeling to support scientific discovery and computer-assisted decision making using big data. The researcher will work with an interdisciplinary group including Isabelle Guyon (UPSud/INRIA), Cecile Germain (UPSud), Balazs Kegl (CNRS), Antoine Marot (RTE), Patrick Panciatici (RTE), Marc Schoenauer (INRIA), Michele Sebag (CNRS), and Olivier Teytaud (INRIA).

Some research directions we want to pursue include: extending the formulation of causal discovery as a pattern recognition problem (developed through the ChaLearn cause-effect pairs challenge) to time series and spatio-temporal data; combining feature learning using deep learning methods with the creation of cause-effect explanatory models; furthering the unification of structural equation models and reinforcement learning approaches; and developing interventional learning algorithms.

As part of the exciting applications we are working on, we will be leveraging a long-term collaboration with the company RTE (French Transmission System Operator for electricity). With the current limitations on adding new transmission lines, the opportunity to use demand response, and the advent of renewable energies interfaced to the grid through fast power electronics, there is an urgent need to adapt the historical way of operating the electric power grid. The candidate will have the opportunity to use a combination of historical data (several years of data for the entire RTE network, sampled every 5 minutes) and very accurate simulations (precise at the MW level) to develop causal models capable of identifying strategies to prevent or mitigate the impact of incidents on the network, as well as inferring what would have happened if we had intervened (i.e., counterfactuals). Other applications we are working on with partner laboratories include epidemiology studies about diabetes and happiness in the workplace, modeling embryologic development, modeling high-energy particle collisions, analyzing human behavior in videos, and game playing.

The candidate will also be part of the Paris-Saclay Center of Data Science and will be expected to participate in the mission of the center through its activities, including organizing challenges on machine learning and helping to advise PhD students.

We are accepting candidates with a background in machine learning, reinforcement learning, causality, statistics, scientific modeling, physics, or other neighboring disciplines. The candidate should have the ability to work on cross-disciplinary problems, a strong math background, and experience with, or a strong desire to work on, practical problems.

The TAO group (https://tao.lri.fr) conducts interdisciplinary research in the theory, algorithms, and applications of machine learning and optimization, and it also has strong ties with AppStat, the physics machine-learning group of the Linear Accelerator Laboratory (http://www.lal.in2p3.fr/?lang=fr). Both laboratories are part of the University Paris-Saclay, located on the outskirts of Paris. The position is available for a period of three years, starting in September 2016 at the earliest. The salary is around 2500 Euros per month. Interested candidates should send a motivation letter, a CV, and the names and addresses of three referees to Isabelle Guyon.

Contact: Isabelle Guyon (iguyon@lri.fr)
Deadline: June 30, 2016, then every 2 weeks until the position is filled.
