Causal Analysis in Theory and Practice

February 22, 2017

Winter-2017 Greeting from UCLA Causality Blog

Filed under: Announcement,Causal Effect,Economics,Linear Systems — bryantc @ 6:03 pm

Dear friends in causality research,

In this brief greeting I would like to first call attention to an approaching deadline and then discuss a couple of recent articles.

1.
Causality in Education Award – March 1, 2017

We are informed that the deadline for submitting a nomination for the ASA Causality in Statistics Education Award is March 1, 2017. For purpose, criteria and other information please see http://www.amstat.org/education/causalityprize/ .

2.
The next issue of the Journal of Causal Inference (JCI) is scheduled to appear in March 2017. See https://www.degruyter.com/view/j/jci

My contribution to this issue includes a tutorial paper entitled “A Linear ‘Microscope’ for Interventions and Counterfactuals”. An advance copy can be viewed here: http://ftp.cs.ucla.edu/pub/stat_ser/r459.pdf
Enjoy!

3.
Overturning Econometrics Education (or, do we need a “causal interpretation”?)

My attention was called to a recent paper by Josh Angrist and Jorn-Steffen Pischke titled “Undergraduate econometrics instruction” (an NBER working paper): http://www.nber.org/papers/w23144?utm_campaign=ntw&utm_medium=email&utm_source=ntw

This paper advocates a pedagogical paradigm shift with methodological ramifications beyond econometrics instruction. As I understand it, the shift stands contrary to the traditional teachings of causal inference, as defined by Sewall Wright (1920), Haavelmo (1943), Marschak (1950), Wold (1960), and other founding fathers of econometrics methodology.

In a nutshell, Angrist and Pischke start with a set of favorite statistical routines, such as IV, regression, and differences-in-differences, and then search for “a set of control variables needed to insure that the regression-estimated effect of the variable of interest has a causal interpretation”. Traditional causal inference (including economics) teaches us that asking whether the output of a statistical routine “has a causal interpretation” is the wrong question to ask, for it misses the direction of the analysis. Instead, one should start with the target causal parameter itself and ask whether it is ESTIMABLE (and if so, how), be it by IV, regression, differences-in-differences, or perhaps by some new routine that is yet to be discovered and ordained by name. Clearly, no “causal interpretation” is needed for parameters that are intrinsically causal, for example, “causal effect”, “path coefficient”, “direct effect”, “effect of treatment on the treated”, or “probability of causation”.

In practical terms, the difference between the two paradigms is that estimability requires a substantive model while interpretability appears to be model-free. A model exposes its assumptions explicitly, while statistical routines give the deceptive impression that they run assumptions-free (hence their popular appeal). The former lends itself to judgmental and statistical tests, the latter escapes such scrutiny.
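To make the contrast concrete, here is a minimal linear sketch of the estimability direction: we begin with the target causal parameter (the structural coefficient ALPHA) and ask whether it is estimable — here, via an instrumental variable. All coefficients, noise terms, and sample sizes below are invented for illustration; this is not any particular study's model.

```python
# Hypothetical linear model: y = ALPHA*x + 1.5*u, where u is an unobserved
# confounder of x and y, and z is a randomized instrument acting on x only.
# Regression of y on x is distorted by u; the IV estimand recovers ALPHA.
import random

random.seed(0)
ALPHA = 2.0          # the target causal parameter, known here by construction
n = 200_000

z = [random.gauss(0, 1) for _ in range(n)]           # instrument
u = [random.gauss(0, 1) for _ in range(n)]           # unobserved confounder
x = [0.8 * zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [ALPHA * xi + 1.5 * ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)   # regression slope: biased by the confounder
iv = cov(z, y) / cov(z, x)    # IV estimand: recovers ALPHA
```

The point is the direction of the analysis: the model tells us in advance that `iv` estimates ALPHA and that `ols` does not; no after-the-fact "causal interpretation" of a regression output is involved.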

In conclusion, if an educator needs to choose between the “interpretability” and “estimability” paradigms, I would go for the latter. Since traditional econometrics education is tailored to support the estimability track, I do not believe a paradigm shift is warranted towards an “interpretation seeking” paradigm such as the one proposed by Angrist and Pischke.

I would gladly open this blog for additional discussion on this topic.

I tried to post a comment on NBER (National Bureau of Economic Research), but was rejected for not being an approved “NBER family member”. If any of our readers is an “NBER family member”, feel free to post the above. Note: “NBER working papers are circulated for discussion and comment purposes.” (page 1).

July 9, 2016

The Three Layer Causal Hierarchy

Filed under: Causal Effect,Counterfactual,Discussion,structural equations — bryantc @ 8:57 pm

Recent discussions concerning causal mediation gave me the impression that many researchers in the field are not familiar with the ramifications of the Causal Hierarchy, as articulated in Chapter 1 of Causality (2000, 2009). This note presents the Causal Hierarchy in table form (Fig. 1) and discusses the distinctions between its three layers: 1. Association, 2. Intervention, 3. Counterfactuals.
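As a toy illustration of the three layers, the sketch below evaluates one query from each layer in a four-world structural model. All mechanisms and probabilities are made up; the point is only that the three layers generally assign three different numbers to “the effect of X on Y”, and that the counterfactual query needs the functional model itself, not just interventional probabilities.

```python
# A hypothetical SCM: exogenous u1, u2 are fair coins; X = u1;
# Y = (X and u1) or u2, so u1 determines X and also enters Y directly.
from itertools import product

worlds = [(u1, u2, 0.25) for u1, u2 in product((0, 1), repeat=2)]

def f_x(u1):                 # structural equation for X
    return u1

def f_y(x, u1, u2):          # structural equation for Y
    return (x and u1) or u2

# Layer 1 (association): P(Y=1 | X=1), read off the observed worlds.
obs = [(f_x(u1), f_y(f_x(u1), u1, u2), p) for u1, u2, p in worlds]
assoc = (sum(p for x, y, p in obs if x == 1 and y == 1)
         / sum(p for x, y, p in obs if x == 1))

# Layer 2 (intervention): P(Y=1 | do(X=1)); set X=1 in every world.
do = sum(p for u1, u2, p in worlds if f_y(1, u1, u2) == 1)

# Layer 3 (counterfactual): P(Y_{X=1}=1 | X=0, Y=0); keep only worlds
# consistent with the evidence, then re-evaluate Y under X=1.
evid = [(u1, u2, p) for u1, u2, p in worlds
        if f_x(u1) == 0 and f_y(0, u1, u2) == 0]
cf = (sum(p for u1, u2, p in evid if f_y(1, u1, u2) == 1)
      / sum(p for _, _, p in evid))
```

In this toy model the three queries evaluate to 1.0, 0.75, and 0.0 respectively: three distinct answers to what may sound like one question.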

Judea

June 28, 2016

On the Classification and Subsumption of Causal Models

Filed under: Causal Effect,Counterfactual,structural equations — bryantc @ 5:32 pm

From Christos Dimitrakakis:

>> To be honest, there is such a plethora of causal models, that it is not entirely clear what subsumes what, and which one is equivalent to what. Is there a simple taxonomy somewhere? I thought that influence diagrams were sufficient for all causal questions, for example, but one of Pearl’s papers asserts that this is not the case.

Reply from J. Pearl:

Dear Christos,

From my perspective, I do not see a plethora of causal models at all, so it is hard for me to answer your question in specific terms. What I do see is a symbiosis of all causal models in one framework, called the Structural Causal Model (SCM), which unifies structural equations, potential outcomes, and graphical models. So, for me, the world appears simple, well organized, and smiling. Perhaps you can tell us which models caught your attention and caused you to see a plethora of models lacking a subsumption taxonomy.

The taxonomy that has helped me immensely is the three-level hierarchy described in Chapter 1 of my book Causality: 1. association, 2. intervention, and 3. counterfactuals. It is a useful hierarchy because it rests on an objective classification criterion: you cannot answer questions at level i unless you have assumptions from level i or higher.

As to influence diagrams, the relation between them and SCM is discussed in Section 11.6 of my book Causality (2009). Influence diagrams belong to the 2nd layer of the causal hierarchy, together with Causal Bayesian Networks. However, they lack two facilities:

1. The ability to process counterfactuals.
2. The ability to handle novel actions.

To elaborate,

1. Counterfactual sentences (e.g., “Given what I see, I should have acted differently”) require functional models. Influence diagrams are built on conditional and interventional probabilities, that is, p(y|x) or p(y|do(x)). There is no interpretation of E(Y_x | x’) in this framework.

2. The probabilities that annotate links emanating from Action Nodes are of the interventional type, p(y|do(x)), and must be assessed judgmentally by the user. No facility is provided for deriving these probabilities from data together with the structure of the graph. Such a derivation is developed in Chapter 3 of Causality, in the context of Causal Bayesian Networks, where every node can turn into an action node.
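The kind of derivation referred to in item 2 — obtaining p(y|do(x)) from observational probabilities plus the graph — can be sketched via truncated factorization (Causality, Chapter 3). The graph C → X → Y with C → Y and all probability tables below are hypothetical, chosen only to show the mechanics:

```python
# Truncated factorization in a Causal Bayesian Network C -> X -> Y, C -> Y:
# P(y | do(x)) is obtained from the observational factorization by deleting
# the factor P(x | c). All numbers are made up.

p_c = {0: 0.7, 1: 0.3}                        # P(C=c)
p_x1_given_c = {0: 0.2, 1: 0.6}               # P(X=1 | C=c)
p_y1_given_xc = {(0, 0): 0.1, (0, 1): 0.4,    # P(Y=1 | X=x, C=c)
                 (1, 0): 0.5, (1, 1): 0.8}

def bern(p, v):
    """P(V=v) for a binary V with P(V=1)=p."""
    return p if v == 1 else 1 - p

def p_y1_do(x):
    # Truncated factorization: delete the factor P(x|c), keep the rest.
    return sum(p_c[c] * p_y1_given_xc[(x, c)] for c in (0, 1))

def p_y1_see(x):
    # Ordinary conditioning, for contrast: P(Y=1 | X=x).
    px = sum(p_c[c] * bern(p_x1_given_c[c], x) for c in (0, 1))
    return sum(p_c[c] * bern(p_x1_given_c[c], x) * p_y1_given_xc[(x, c)]
               for c in (0, 1)) / px
```

With these numbers, seeing X=1 and doing X=1 give different answers — the gap an influence diagram would have to fill in by judgment, with no mechanical route from the data.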

Using the causal hierarchy, the 1st Law of Counterfactuals and the unification provided by SCM, the space of causal models should shine in clarity and simplicity. Try it, and let us know of any questions remaining.

Judea

August 11, 2015

Mid-Summer Greeting from the UCLA Causality Blog

Filed under: Announcement,Causal Effect,Counterfactual,General — moderator @ 6:09 pm

Friends in causality research,

This mid-summer greeting of UCLA Causality blog contains:
A. News items concerning causality research
B. Discussions and scientific results

1. The next issue of the Journal of Causal Inference is scheduled to appear this month, and the table of contents can be viewed here.

2. A new digital journal “Observational Studies” is out this month (link) and its first issue is dedicated to the legacy of William Cochran (1909-1980).

My contribution to this issue can be viewed here:
http://ftp.cs.ucla.edu/pub/stat_ser/r456.pdf

See also comment 1 below.

3. A video recording of my Cassel Lecture at the SER conference, June 2015, Denver, CO, can be viewed here:
https://epiresearch.org/about-us/archives/video-archives-2/the-scientific-approach-to-causal-inference/

4. A video of a conversation with Robert Gould concerning the teaching of causality can be viewed on Wiley’s Statistics Views, link (2 parts, scroll down).

5. We are informed of the upcoming publication of a new book: Rex Kline, “Principles and Practice of Structural Equation Modeling,” Fourth Edition (link). Judging by the chapters I read, this book promises to be unique; it treats structural equation models for what they are: carriers of causal assumptions and tools for causal inference. Kudos, Rex.

6. We are informed of another book on causal inference: Imbens, Guido W.; Rubin, Donald B. “Causal Inference in Statistics, Social, and Biomedical Sciences: An Introduction” Cambridge University Press (2015). Readers will quickly realize that the ideas, methods, and tools discussed on this blog were kept out of this book. Omissions include: Control of confounding, testable implications of causal assumptions, visualization of causal assumptions, generalized instrumental variables, mediation analysis, moderation, interaction, attribution, external validity, explanation, representation of scientific knowledge and, most importantly, the unification of potential outcomes and structural models.

Given that the book is advertised as describing “the leading analysis methods” of causal inference, unsuspecting readers will get the impression that the field as a whole is facing fundamental obstacles, and that we are still lacking the tools to cope with basic causal tasks such as confounding control and model testing. I do not believe mainstream methods of causal inference are in such state of helplessness.

The authors’ motivation and rationale for this exclusion were discussed at length on this blog. See
“Are economists smarter than epidemiologists”
http://causality.cs.ucla.edu/blog/?p=1241

and “On the First Law of Causal Inference”
http://causality.cs.ucla.edu/blog/?m=201411

As most of you know, I have spent many hours trying to explain to leaders of the potential outcome school what insights and tools their students would be missing if not given exposure to a broader intellectual environment, one that embraces model-based inferences side by side with potential outcomes.

This book confirms my concerns, and its insularity-based impediments are likely to evoke interesting public discussions on the subject. For example, educators will undoubtedly wish to ask:

(1) Is there any guidance we can give students on how to select covariates for matching or adjustment?

(2) Are there any tools available to help students judge the plausibility of ignorability-type assumptions?

(3) Aren’t there any methods for deciding whether identifying assumptions have testable implications?

I believe that if such questions are asked often enough, they will eventually evoke non-ignorable answers.

7. The ASA issued a press release yesterday recognizing Tyler VanderWeele’s new book “Explanation in Causal Inference,” winner of the 2015 Causality in Statistics Education Award:
http://www.amstat.org/newsroom/pressreleases/JSM2015-CausalityinStatisticsEducationAward.pdf

Congratulations, Tyler.

Information on nominations for the 2016 Award will soon be announced.

8. Since our last Greetings (Spring, 2015) we have had a few lively discussions posted on this blog. I summarize them below:

8.1. Indirect Confounding and Causal Calculus
(How getting too anxious to criticize do-calculus may cause you to miss an easy solution to a problem you thought was hard).
July 23, 2015
http://causality.cs.ucla.edu/blog/?p=1545

8.2. Does Obesity Shorten Life? Or is it the Soda?
(Discusses whether it was the earth that caused the apple to fall, or the gravitational field created by the earth.)
May 27, 2015
http://causality.cs.ucla.edu/blog/?p=1534

8.3. Causation without Manipulation
(Asks whether anyone takes this mantra seriously nowadays, and whether we need manipulations to store scientific knowledge)
May 14, 2015
http://causality.cs.ucla.edu/blog/?p=1518

8.4. David Freedman, Statistics, and Structural Equation Models
(On why Freedman invented the “response schedule.”)
May 6, 2015
http://causality.cs.ucla.edu/blog/?p=1502

8.5. We also had a few breakthroughs posted on our technical report page
http://bayes.cs.ucla.edu/csl_papers.html

My favorites this summer are these two:
http://ftp.cs.ucla.edu/pub/stat_ser/r452.pdf
http://ftp.cs.ucla.edu/pub/stat_ser/r450.pdf
because they deal with the tough and long-standing problem:
“How generalizable are empirical studies?”

Enjoy the rest of the summer
Judea

July 23, 2015

Indirect Confounding and Causal Calculus (On three papers by Cox and Wermuth)

Filed under: Causal Effect,Definition,Discussion,do-calculus — eb @ 4:52 pm

1. Introduction

This note concerns three papers by Cox and Wermuth (2008, 2014, 2015; henceforth WC‘08, WC‘14 and CW‘15) in which they call attention to a class of problems they named “indirect confounding,” where “a much stronger distortion may be introduced than by an unmeasured confounder alone or by a selection bias alone.” We will show that problems classified as “indirect confounding” can be resolved in just a few steps of derivation in do-calculus.

This in itself would not have led me to post a note on this blog, for we have witnessed many difficult problems resolved by formal causal analysis. However, in their three papers, Cox and Wermuth also raise questions regarding the capability and/or adequacy of the do-operator and do-calculus to accurately predict effects of interventions. Thus, a second purpose of this note is to reassure students and users of do-calculus that they can continue to apply these tools with confidence, comfort, and scientifically grounded guarantees.

Finally, I would like to invite the skeptics among my colleagues to re-examine their hesitations and accept causal calculus for what it is: a formal representation of interventions in real world situations, and a worthwhile tool to acquire, use and teach. Among those skeptics I must include colleagues from the potential-outcome camp, whose graph-evading theology is becoming increasingly anachronistic (see discussions on this blog, for example, here).

2 Indirect Confounding – An Example

To illustrate indirect confounding, Fig. 1 below depicts the example used in WC‘08, which involves two treatments, one randomized (X), and the other (Z) taken in response to an observation (W) which depends on X. The task is to estimate the direct effect of X on the primary outcome (Y), discarding the effect transmitted through Z.

As we know from the elementary theory of mediation (e.g., Causality, p. 127), we cannot block the effect transmitted through Z by simply conditioning on Z, for that would open the spurious path X → W ← U → Y, since W is a collider whose descendant (Z) is instantiated. Instead, we need to hold Z constant by external means, through the do-operator do(Z = z). Accordingly, the problem of estimating the direct effect of X on Y amounts to finding P(y|do(x, z)), since Z is the only other parent of Y (see Pearl (2009, p. 127, Def. 4.5.1)).


Figure 1: An example of “indirect confounding” from WC‘08. Z stands for a treatment taken in response to a test W, whose outcome depends on a previous treatment X. U is unobserved. [WC‘08 attribute this example to Robins and Wasserman (1997); an identical structure is treated in Causality, p. 119, Fig. 4.4, as well as in Pearl and Robins (1995).]

Solution:
     P(y|do(x,z))
    = P(y|x, do(z))                       (since X is randomized)
    = ∑w P(y|x,w,do(z)) P(w|x, do(z))     (by Rule 1 of do-calculus)
    = ∑w P(y|x,w,z) P(w|x)                (by Rules 2 and 3 of do-calculus)

We are done, because the last expression consists of estimable factors. What makes this problem appear difficult in the linear model treated by WC‘08 is that the direct effect of X on Y (say α) cannot be identified using a simple adjustment. As we can see from the graph, there is no set S that separates X from Y in Gα. This means that α cannot be estimated as a coefficient in a regression of Y on X and S. Readers of Causality, Chapter 5, would not panic at such a revelation, knowing that there are dozens of ways to identify a parameter, going way beyond adjustment (surveyed in Chen and Pearl (2014)). WC‘08 identify α using one of these methods, and their solution coincides, of course, with the general derivation given above.
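Readers who enjoy checking such derivations numerically can brute-force the last identity on a discretized version of Fig. 1. All probability tables below are invented; the check is that the observational estimand ∑w P(y|x,w,z) P(w|x) reproduces P(y|do(x,z)) exactly, for every (x, z).

```python
# Hypothetical discretization of the WC'08 graph: X randomized, W depends on
# (X, U), Z depends on W, Y depends on (X, Z, U); U is unobserved.
from itertools import product

p_u = {0: 0.6, 1: 0.4}                                   # P(U=u), unobserved
p_x = {0: 0.5, 1: 0.5}                                   # P(X=x), randomized
p_w1 = {(0, 0): 0.2, (0, 1): 0.7,                        # P(W=1 | x, u)
        (1, 0): 0.5, (1, 1): 0.9}
p_z1 = {0: 0.3, 1: 0.8}                                  # P(Z=1 | w)
p_y1 = {(0, 0, 0): 0.1, (0, 0, 1): 0.5, (0, 1, 0): 0.3, (0, 1, 1): 0.7,
        (1, 0, 0): 0.4, (1, 0, 1): 0.6, (1, 1, 0): 0.2, (1, 1, 1): 0.9}
                                                         # P(Y=1 | x, z, u)

def bern(p, v):
    return p if v == 1 else 1 - p

# Observational joint P(u, x, w, z), factored according to the graph.
joint = {(u, x, w, z):
         p_u[u] * p_x[x] * bern(p_w1[(x, u)], w) * bern(p_z1[w], z)
         for u, x, w, z in product((0, 1), repeat=4)}

def direct(x, z):
    """P(Y=1 | do(x, z)): mechanisms of X and Z severed; average over U."""
    return sum(p_u[u] * p_y1[(x, z, u)] for u in (0, 1))

def estimand(x, z):
    """sum_w P(Y=1 | x, w, z) P(w | x) -- every factor observational."""
    p_x_marg = sum(v for (u, xx, w, zz), v in joint.items() if xx == x)
    total = 0.0
    for w in (0, 1):
        p_xwz = sum(joint[(u, x, w, z)] for u in (0, 1))
        p_y_xwz = sum(joint[(u, x, w, z)] * p_y1[(x, z, u)]
                      for u in (0, 1)) / p_xwz
        p_w_x = sum(joint[(u, x, w, zz)]
                    for u in (0, 1) for zz in (0, 1)) / p_x_marg
        total += p_y_xwz * p_w_x
    return total
```

Any other choice of tables (keeping X randomized and the graph intact) gives the same agreement, as the derivation guarantees.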

The example above demonstrates that the direct effect of X on Y (as well as Z on Y ) can be identified nonparametrically, which extends the linear analysis of WC‘08. It also demonstrates that the effect is identifiable even if we add a direct effect from X to Z, and even if there is an unobserved confounder between X and W – the derivation is almost the same (see Pearl (2009, p. 122)).

Most importantly, readers of Causality also know that, once we write the problem as “Find P(y|do(x, z))” it is essentially solved, because the completeness of the do-calculus together with the algorithmic results of Tian and Shpitser can deliver the answer in polynomial time, and, if terminated with failure, we are assured that the effect is not estimable by any method whatsoever.

3 Conclusions

It is hard to explain why tools of causal inference encounter slower acceptance than tools in any other scientific endeavor. Some say that the difference comes from the fact that humans are born with strong causal intuitions and, so, any formal tool is perceived as a threatening intrusion into one’s private thoughts. Still, the reluctance shown by Cox and Wermuth seems to be of a different kind. Here are a few examples:

Cox and Wermuth (CW’15) write:
“…some of our colleagues have derived a ‘causal calculus’ for the challenging
process of inferring causality; see Pearl (2015). In our view, it is unlikely that
a virtual intervention on a probability distribution, as specified in this calculus,
is an accurate representation of a proper intervention in a given real world
situation.” (p. 3)

These comments are puzzling because the do-operator and its associated “causal calculus” operate not “on a probability distribution,” but on a data generating model (i.e., the DAG). Likewise, the calculus is used, not for “inferring causality” (God forbid!!) but for predicting the effects of interventions from causal assumptions that are already encoded in the DAG.

In WC‘14 we find an even more puzzling description of “virtual intervention”:
“These recorded changes in virtual interventions, even though they are often
called ‘causal effects,’ may tell next to nothing about actual effects in real interventions
with, for instance, completely randomized allocation of patients to
treatments. In such studies, independence result by design and they lead to
missing arrows in well-fitting graphs; see for example Figure 9 below, in the last
subsection.” [our Fig. 1]

“Familiarity is the mother of acceptance,” say the sages (or should have said). I therefore invite my colleagues David Cox and Nanny Wermuth to familiarize themselves with the miracles of do-calculus. Take any causal problem for which you know the answer in advance, submit it for analysis through the do-calculus and marvel with us at the power of the calculus to deliver the correct result in just 3–4 lines of derivation. Alternatively, if we cannot agree on the correct answer, let us simulate it on a computer, using a well specified data-generating model, then marvel at the way do-calculus, given only the graph, is able to predict the effects of (simulated) interventions. I am confident that after such experience all hesitations will turn into endorsements.

BTW, I have offered this exercise repeatedly to colleagues from the potential outcome camp, and the response was uniform: “we do not work on toy problems, we work on real-life problems.” Perhaps this note would entice them to join us, mortals, and try a small problem once, just for sport.

Let’s hope,

Judea

References

Chen, B. and Pearl, J. (2014). Graphical tools for linear structural equation modeling. Tech. Rep. R-432, Department of Computer Science, University of California, Los Angeles, CA. Forthcoming, Psychometrika.
Cox, D. and Wermuth, N. (2015). Design and interpretation of studies: Relevant concepts from the past and some extensions. Observational Studies, this issue.
Pearl, J. (2009). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, New York.
Pearl, J. (2015). Trygve Haavelmo and the emergence of causal calculus. Econometric Theory 31 152–179. Special issue on Haavelmo Centennial.
Pearl, J. and Robins, J. (1995). Probabilistic evaluation of sequential plans from causal models with hidden variables. In Uncertainty in Artificial Intelligence 11 (P. Besnard and S. Hanks, eds.). Morgan Kaufmann, San Francisco, 444–453.
Robins, J. M. and Wasserman, L. (1997). Estimation of effects of sequential treatments by reparameterizing directed acyclic graphs. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI ‘97). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 409–420.
Wermuth, N. and Cox, D. (2008). Distortion of effects caused by indirect confounding. Biometrika 95 17–33.
Wermuth, N. and Cox, D. (2014). Graphical Markov models: Overview. ArXiv: 1407.7783.

May 27, 2015

Does Obesity Shorten Life? Or is it the Soda?

Filed under: Causal Effect,Definition,Discussion,Intuition — moderator @ 1:45 pm

Our discussion of “causation without manipulation” (link) acquires an added sense of relevance when considered in the context of public concerns with obesity and its consequences. A Reuters story published on September 21, 2012 (link) cites a report projecting that at least 44 percent of U.S. adults could be obese by 2030, compared to 35.7 percent today, bringing an extra $66 billion a year in obesity-related medical costs. A week earlier, New York City adopted a regulation banning the sale of sugary drinks in containers larger than 16 ounces at restaurants and other outlets regulated by the city health department.

Interestingly, an article published in the International Journal of Obesity ((2008), vol. 32, doi:10.1038/i) questions the logic of attributing consequences to obesity. The authors, M. A. Hernan and S. L. Taubman (both of Harvard’s School of Public Health), imply that the very notion of “obesity-related medical costs” is undefined, if not misleading, and that, instead of speaking of “obesity shortening life” or “obesity raising medical costs,” one should be speaking of manipulable variables like “life style” or “soda consumption” as causing whatever harm we tend to attribute to obesity.

The technical rationale for these claims is summarized in their abstract:
“We argue that observational studies of obesity and mortality violate the condition of consistency of counterfactual (potential) outcomes, a necessary condition for meaningful causal inference, because (1) they do not explicitly specify the interventions on body mass index (BMI) that are being compared and (2) different methods to modify BMI may lead to different counterfactual mortality outcomes, even if they lead to the same BMI value in a given person.”

Readers will surely notice that these arguments stand in contradiction to the structural, as well as closest-world, definitions of counterfactuals (Causality, pp. 202-206, 238-240), according to which consistency is a theorem in counterfactual logic, not an assumption, and, therefore, counterfactuals are always consistent (link). A counterfactual appears to be inconsistent when its antecedent A (as in “had A been true”) is conflated with an external intervention devised to enforce the truth of A. Practical interventions tend to have side effects, and these need to be reckoned with in estimation, but counterfactuals and causal effects are defined independently of those interventions and should not, therefore, be denied existence by the latter’s imperfections. To say that obesity has no intrinsic effects because some interventions have side effects is analogous to saying that stars do not move because telescopes have imperfections.

Rephrased in a language familiar to readers of this blog, Hernan and Taubman claim that the causal effect P(mortality=y|Set(obesity=x)) is undefined, seemingly because the consequences of obesity depend on how we choose to manipulate it. Since the probability of death will generally depend on whether we manipulate obesity through diet versus, say, exercise (we assume that we are able to perfectly define quantitative measures of obesity and mortality), Hernan and Taubman conclude that P(mortality=y|Set(obesity=x)) is not formally a function of x, but a one-to-many mapping.

This contradicts, of course, what the quantity P(Y=y|Set(X=x)) represents. As one who coined the symbols Set(X=x) (Pearl, 1993) [it was later changed to do(X=x)] I can testify that, in its original conception:

1. P(mortality=y|Set(obesity=x)) does not depend on any choice of intervention; it is defined relative to a hypothetical, minimal intervention needed for establishing X=x and, so, it is defined independently of how the event obesity=x actually came about.

2. While it is true that the probability of death will generally depend on whether we manipulate obesity through diet versus, say, exercise, the quantity P(mortality=y|Set(obesity=x)) has nothing to do with diet or exercise; it has to do only with the level x of X and the anatomical or social processes that respond to this level of X. Set(obesity=x) describes a virtual intervention, by which nature sets obesity to x, independent of diet or exercise, while keeping everything else intact, especially the processes that respond to X. The fact that we, mortals, cannot execute such an incisive intervention does not make this intervention (1) undefined, or (2) vague, or (3) replaceable by manipulation-dependent operators.

To elaborate:
(1) The causal effects of obesity are well-defined in the SEM model, which consists of functions, not manipulations.

(2) The causal effects of obesity are as clear and transparent as the concept of functional dependency, and were in fact chosen to serve as standards of scientific communication. (See again the Wikipedia entry on Cholesterol: relationships are defined by the “absence” or “presence” of agents, not by the means through which those agents are controlled.)

(3) If we wish to define a new operator, say Set_a(X=x), where a stands for the means used in achieving X=x (as Larry Wasserman suggested), this can be done within the syntax of the do-calculus. But that would be a new operator altogether, unrelated to do(X=x), which is manipulation-neutral.

There are several ways of loading the Set(X=x) operator with manipulational or observational specificity. In the obesity context, one may wish to consider P(mortality=y|Set(diet=z)), P(mortality=y|Set(exercise=w)), P(mortality=y|Set(exercise=w), Set(diet=z)), P(mortality=y|Set(exercise=w), See(diet=z)), or P(mortality=y|See(obesity=x), Set(diet=z)). The latter corresponds to the studies criticized by Hernan and Taubman, where one manipulates diet and passively observes obesity. All these variants are legitimate quantities that one may wish to evaluate, if called for, but they have nothing to do with P(mortality=y|Set(obesity=x)), which is manipulation-neutral.
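The manipulation-neutrality of Set(obesity=x) can be illustrated with a toy model (hypothetical numbers and variable names throughout): because do() severs every edge into Obesity, P(mortality|Set(obesity=o)) is unchanged if we swap out the mechanism by which diet drives obesity, whereas Set(diet=d), which leaves that mechanism in place and has its own direct path to mortality, is sensitive to it.

```python
# Toy model: Diet -> Obesity -> Mortality, plus a direct edge Diet -> Mortality.
# All probability tables are invented for illustration.

p_d = {0: 0.5, 1: 0.5}                       # P(Diet=d)
p_y1 = {(0, 0): 0.3, (0, 1): 0.2,            # P(Mortality=1 | Obesity=o, Diet=d)
        (1, 0): 0.6, (1, 1): 0.5}            # (diet also acts on Y directly)

def p_y1_do_obesity(o, p_o1_given_d):
    # Truncated factorization: the obesity mechanism is severed, so the
    # argument p_o1_given_d is never consulted -- by design.
    return sum(p_d[d] * p_y1[(o, d)] for d in (0, 1))

def p_y1_do_diet(d, p_o1_given_d):
    # Intervening on Diet leaves the obesity mechanism intact; the answer
    # depends on how diet moves obesity AND on diet's direct effect on Y.
    return sum((p_o1_given_d[d] if o == 1 else 1 - p_o1_given_d[d])
               * p_y1[(o, d)] for o in (0, 1))

mech_a = {0: 0.7, 1: 0.3}    # one hypothetical mechanism P(O=1 | Diet=d)
mech_b = {0: 0.1, 1: 0.9}    # a very different hypothetical mechanism
```

Running both mechanisms through the two operators shows the asymmetry: the diet-specific effect shifts with the mechanism, while Set(obesity=o) does not budge.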

Under certain conditions we can even infer P(mortality=y|Set(obesity=x)) from data obtained in dietary controlled experiments [i.e., data governed by P(mortality=y|See(obesity=x), Set(diet=z)); see R-397]. But these conditions can only reveal themselves to researchers who acknowledge the existence of P(mortality=y|Set(obesity=x)) and are willing to explore its properties.

Additionally, all these variants can be defined and evaluated in SEM and, moreover, the modeler need not think about them in the construction of the model, where only one relation matters: Y LISTENS TO X.

My position on the issues of manipulation and SEM can be summarized as follows:

1. The fact that morbidity varies with the way we choose to manipulate obesity (e.g., diet, exercise) does not diminish our need, or ability to define a manipulation-neutral notion of “the effect of obesity on morbidity”, which is often a legitimate target of scientific investigation, and may serve to inform manipulation-specific effects of obesity.

2. In addition to defining and providing identification conditions for the manipulation-neutral notion of “effect of obesity on morbidity”, the SEM framework also provides formal definitions and identification conditions for each of the many manipulation-specific effects of obesity, and this can be accomplished through a single SEM model provided that the version-specific characteristics of those manipulations are encoded in the model.

I would like to say more about the relationship between knowledge-based statements (e.g., “obesity kills”) and policy-specific statements (e.g., “soda kills”). I wrote a short note about it in the Journal of Causal Inference, http://ftp.cs.ucla.edu/pub/stat_ser/r422.pdf, and I think it would add another perspective to our discussion. A copy of the introduction section is given below.

Is Scientific Knowledge Useful for Policy Analysis?
A Peculiar Theorem Says: No

(from http://ftp.cs.ucla.edu/pub/stat_ser/r422.pdf)

1 Introduction
In her book, Hunting Causes and Using Them [1], Nancy Cartwright expresses several objections to the do(x) operator and the “surgery” semantics on which it is based (pp. 72 and 201). One of her objections concerned the fact that the do-operator represents an ideal, atomic intervention, different from the one implementable by most policies under evaluation. According to Cartwright, for policy evaluation we generally want to know what would happen were the policy really set in place, and the policy may affect a host of changes in other variables in the system, some envisaged and some not.

In my answer to Cartwright [2, p. 363], I stressed two points. First, the do-calculus enables us to evaluate the effect of compound interventions as well, as long as they are described in the model and are not left to guesswork. Second, I claimed that in many studies our goal is not to predict the effect of the crude, non-atomic intervention that we are about to implement but, rather, to evaluate an ideal, atomic policy that cannot be implemented given the available tools, but that represents nevertheless scientific knowledge that is pivotal for our understanding of the domain.

The example I used was as follows: Smoking cannot be stopped by any legal or educational means available to us today; cigarette advertising can. That does not stop researchers from aiming to estimate “the effect of smoking on cancer,” and doing so from experiments in which they vary the instrument — cigarette advertisement — not smoking. The reason they would be interested in the atomic intervention P(Cancer|do(Smoking)) rather than (or in addition to) P(cancer|do(advertising)) is that the former represents a stable biological characteristic of the population, uncontaminated by social factors that affect susceptibility to advertisement, thus rendering it transportable across cultures and environments. With the help of this stable characteristic, one can assess the effects of a wide variety of practical policies, each employing a different smoking-reduction instrument. For example, if careful scientific investigations reveal that smoking has no effect on cancer, we can comfortably conclude that increasing cigarette taxes will not decrease cancer rates and that it is futile for schools to invest resources in anti-smoking educational programs. This note takes another look at this argument, in light of recent results in transportability theory (Bareinboim and Pearl [3], hereafter BP).

Robert Platt called my attention to the fact that there is a fundamental difference between Smoking and Obesity; randomization is physically feasible in the case of smoking (say, in North Korea) — not in the case of obesity.

I agree; it would have been more effective to use Obesity instead of Smoking in my response to Cartwright. An RCT on Smoking can be envisioned (if one is willing to discount the obvious side effects of forced smoking or forced withdrawal), while an RCT on Obesity requires more creative imagination: not a powerful dictator, but an agent such as Lady Nature herself, who can increase obesity by one unit and evaluate its consequences on various body functions.

This is what the do-operator does: it simulates an experiment conducted by Lady Nature who, for all we know, is almighty, and can permit all the organisms that are affected by BMI (and fat content, etc. [I assume here that we can come to some consensus on the vector of measurements that characterizes Obesity]) to respond to a unit increase of BMI in the same way that they responded in the past. Moreover, she is able to do it by an extremely delicate surgery, without touching those variables that we mortals need to change in order to drive BMI up or down.
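As a toy illustration of this surgery (the variables, linear equations, and coefficients below are all invented for this sketch, not taken from any real model of obesity), the do-operator amounts to replacing one variable's equation while leaving every other mechanism, and every noise term, intact:

```python
import random

# Invented toy SEM: Diet -> BMI -> HeartStrain, with Diet also affecting
# HeartStrain directly (a "side effect" of the means used to change BMI).
def sample(do_bmi=None):
    u_d, u_b, u_h = (random.gauss(0, 1) for _ in range(3))
    diet = u_d
    bmi = 0.8 * diet + u_b
    if do_bmi is not None:
        bmi = do_bmi            # surgery: sever BMI from its own causes
    heart = 0.5 * bmi + 0.3 * diet + u_h
    return diet, bmi, heart

# Lady Nature's delicate surgery: raise BMI by one unit without touching
# Diet, then read off the average response of HeartStrain.
random.seed(0)
n = 100_000
effect = (sum(sample(do_bmi=1.0)[2] for _ in range(n)) / n
          - sum(sample(do_bmi=0.0)[2] for _ in range(n)) / n)
print(round(effect, 2))  # ≈ 0.5, the coefficient of BMI in HeartStrain's equation
```

Note that naive conditioning on BMI (observing rather than setting it) would mix in the direct Diet-to-HeartStrain path; the surgery isolates the response to BMI alone.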

This is not a new agent by any means; it is the standard agent of science. For example, consider the ideal gas law, PV = nRT. While volume (V), temperature (T) and the amount of gas (n) are independently manipulable, pressure (P) is not. This means that whenever we talk about the pressure changing, it is always accompanied by a change in V, n and/or T which, like diet and exercise, have their own side effects. Does this prevent us from speaking about the causal effect of tire pressure on how bumpy the ride is? Must we always mention V, T or n when we speak about the effect of air pressure on the size of the balloon we are blowing? Of course not! Pressure has a life of its own (the rate of momentum transfer to a wall that separates two vessels), independent of the means by which we change it.

Aha! The skeptic argues: “Things are nice in physics, but epidemiology is much more complex; we do not know the equations or the laws, and we will never in our lifetime know the detailed anatomy of the human body.” This ignorance-pleading argument always manages to win the hearts of the mystics, especially among researchers who feel uncomfortable encoding partial scientific knowledge in a model. Yet Lady Nature does not wait for us to know things before she makes our heart muscle respond to the fat content in the blood. And we need not know the exact response to postulate that such a response exists.

Scientific thinking is not unique to physics. Consider any standard medical test and let us ask ourselves whether the quantities measured have “well-defined causal effects” on the human body. Does “blood pressure” have any effect on anything? Why do we not hear complaints about “blood pressure” being “not well defined”? After all, following the criterion of Hernan and Taubman (2008), the “effect of X on Y” is ill-defined whenever Y depends on the means we use to change X. So “blood pressure” has no well-defined effect on any organ in the human body. The same goes for “blood count,” “kidney function,” “rheumatoid factor,” and so on. If these variables have no effects on anything, why do we measure them? Why do physicians communicate with each other through these measurements, instead of through the “interventions” that may change these measurements?

My last comment is for epidemiologists who see their mission as that of “changing the world for the better” and, in that sense, only *care* about treatments (causal variables) that are manipulable. I have only admiration for this mission. However, to figure out which of those treatments should be applied in any given situation, we need to understand the situation and, as it happens, “understanding” involves causal relationships between manipulable as well as non-manipulable variables. For instance, if someone offers to sell you a new miracle drug that (provenly) reduces obesity, and your scientific understanding is that obesity has no effect whatsoever on anything that is important to you, then, regardless of other means that are available for manipulating obesity, you would tell the salesman to go fly a kite. And you would do so regardless of whether those other means produced positive or negative results. The basis for rejecting the new drug is precisely your understanding that “obesity has no effect on outcome,” the very quantity that some epidemiologists now wish to purge from science, all in the name of only caring about manipulable treatments.

Epidemiology, like all empirical sciences, needs both scientific and clinical knowledge to sustain and communicate that which we have learned and to advance beyond it. While the effects of diet and exercise are important for controlling obesity, the health consequences of obesity are no less important; they constitute legitimate targets of scientific pursuit, regardless of current shortcomings in clinical knowledge.

Judea

May 14, 2015

Causation without Manipulation

The second part of our latest post “David Freedman, Statistics, and Structural Equation Models” (May 6, 2015) has stimulated a lively email discussion among colleagues from several disciplines. In what follows, I will be sharing the highlights of the discussion, together with my own position on the issue of manipulability.

Many of the discussants noted that manipulability is strongly associated (if not equated) with “comfort of interpretation”. For example, we feel more comfortable interpreting sentences of the type “If we do A, then B would be more likely” compared with sentences of the type “If A were true, then B would be more likely”. Some attribute this association to the fact that empirical researchers (say epidemiologists) are interested exclusively in interventions and preventions, not in hypothetical speculations about possible states of the world. The question was raised as to why we get this sense of comfort. Reference was made to the new book by Tyler VanderWeele, where this question is answered quite eloquently:

“It is easier to imagine the rest of the universe being just as it is if a patient took pill A rather than pill B than it is trying to imagine what else in the universe would have had to be different if the temperature yesterday had been 30 degrees rather than 40. It may be the case that human actions seem sufficiently free that we have an easier time imagining only one specific action being different, and nothing else.”
(T. VanderWeele, “Explanation in Causal Inference,” pp. 453-455)

This sensation of discomfort with non-manipulable causation stands in contrast to the practice of SEM analysis, in which causes are represented as relations among interacting variables, free of external manipulation. To explain this contrast, I note that we should not overlook the purpose for which SEM was created — the representation of scientific knowledge. Even if we agree with the notion that the ultimate purpose of all knowledge is to guide actions and policies, not to engage in hypothetical speculations, the question still remains: how do we encode this knowledge in the mind (or in textbooks) so that it can be accessed, communicated, updated and used to guide actions and policies? By “how” I am concerned with the code, the notation, its syntax and its format.

There was a time when empirical scientists could dismiss questions of this sort (i.e., “how do we encode”) as psychological curiosa, residing outside the province of “objective” science. But now that we have entered the enterprise of causal inference, and we express concerns over the comfort and discomfort of interpreting counterfactual utterances, we no longer have the luxury of ignoring those questions. We must ask how scientists encode knowledge, because this question holds the key to the distinction between the comfortable and the uncomfortable, the clear versus the ambiguous.

The reason I prefer the SEM specification of knowledge over a manipulation-restricted specification comes from the realization that SEM matches the format in which humans store scientific knowledge. (Recall, by “SEM” we mean a manipulation-free society of variables, each listening to the others and each responding to what it hears.) In support of this realization, I would like to copy below a paragraph from Wikipedia’s entry on Cholesterol, section on “Clinical Significance.” (It is about 20 lines long, but worth a serious linguistic analysis.)

——————–from Wikipedia, dated 5/10/15 —————
According to the lipid hypothesis, abnormal cholesterol levels (hypercholesterolemia) or, more properly, higher concentrations of LDL particles and lower concentrations of functional HDL particles are strongly associated with cardiovascular disease because these promote atheroma development in arteries (atherosclerosis). This disease process leads to myocardial infarction (heart attack), stroke, and peripheral vascular disease. Since higher blood LDL, especially higher LDL particle concentrations and smaller LDL particle size, contribute to this process more than the cholesterol content of the HDL particles, LDL particles are often termed “bad cholesterol” because they have been linked to atheroma formation. On the other hand, high concentrations of functional HDL, which can remove cholesterol from cells and atheroma, offer protection and are sometimes referred to as “good cholesterol”. These balances are mostly genetically determined, but can be changed by body build, medications, food choices, and other factors. [54] Resistin, a protein secreted by fat tissue, has been shown to increase the production of LDL in human liver cells and also degrades LDL receptors in the liver. As a result, the liver is less able to clear cholesterol from the bloodstream. Resistin accelerates the accumulation of LDL in arteries, increasing the risk of heart disease. Resistin also adversely impacts the effects of statins, the main cholesterol-reducing drug used in the treatment and prevention of cardiovascular disease.
————-end of quote ——————

My point in quoting this paragraph is to show that, even in “clinical significance” sections, most of the relationships are predicated upon states of variables, as opposed to manipulations of variables. They talk about being “present” or “absent”, being at high concentration or low concentration, smaller particles or larger particles; they talk about variables “enabling,” “disabling,” “promoting,” “leading to,” “contributing to,” etc. Only two of the sentences refer directly to exogenous manipulations, as in “can be changed by body build, medications, food choices…”

This manipulation-free society of sensors and responders that we call “scientific knowledge” is not oblivious to the world of actions and interventions; it was actually created to (1) guide future actions and (2) learn from interventions.

(1) The first frontier is well known. Given a fully specified SEM, we can predict the effect of compound interventions, both static and time-varying, pre-planned or dynamic. Moreover, given a partially specified SEM (e.g., a DAG), we can often use data to fill in the missing parts and predict the effect of such interventions. These require, however, that the interventions be specified by “setting” the values of one or several variables. When the action of interest is more complex, say a disjunctive action like “paint the wall green or blue” or “practice at least 15 minutes a day,” more elaborate machinery is needed to infer its effects from the atomic actions and counterfactuals that the model encodes (see http://ftp.cs.ucla.edu/pub/stat_ser/r359.pdf and Hernan et al. 2011). Such derivations are nevertheless feasible from SEM without enumerating the effects of all disjunctive actions of the form “do A or B” (which is obviously infeasible).
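As a minimal sketch of this first frontier (the three-variable linear SEM and all coefficients below are invented for illustration; nothing here comes from a real study), predicting a compound intervention amounts to “setting” several equations at once:

```python
import random

# Invented linear SEM: X -> Z -> Y, with a direct path X -> Y.
# A compound intervention do(X=x, Z=z) replaces the equations of both X and Z.
def mean_y(n=200_000, do=None, seed=1):
    rng = random.Random(seed)
    do = do or {}
    total = 0.0
    for _ in range(n):
        x = do.get("X", rng.gauss(0, 1))
        z = do.get("Z", 0.6 * x + rng.gauss(0, 1))
        y = 0.7 * z + 0.2 * x + rng.gauss(0, 1)
        total += y
    return total / n

# Atomic:   E[Y | do(X=1)]      = 0.7*0.6 + 0.2 ≈ 0.62
# Compound: E[Y | do(X=1, Z=1)] = 0.7 + 0.2     ≈ 0.90
atomic = mean_y(do={"X": 1.0})
compound = mean_y(do={"X": 1.0, "Z": 1.0})
print(atomic, compound)
```

The two answers differ because the compound intervention severs the X-to-Z mechanism that the atomic intervention leaves intact; the model, not the data alone, licenses both predictions.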

(2) The second frontier, learning from interventions, is less developed. We can of course check, using the methods above, whether a given SEM is compatible with the results of experimental studies (Causality, Def. 1.3.1). We can also determine the structure of an SEM from a systematic sequence of experimental studies. What we are still lacking, though, are methods of incremental updating, i.e., given an SEM M and an experimental study that is incompatible with M, modify M so as to match the new study without violating previous studies, even though only their ramifications are encoded in M.

Going back to the sensation of discomfort that people usually express vis-à-vis non-manipulable causes, should such discomfort bother users of SEM when confronting non-manipulable causes in their model? More concretely, should the difficulty of imagining “what else in the universe would have had to be different if the temperature yesterday had been 30 degrees rather than 40” be a reason for misinterpreting a model that contains variables labeled “temperature” (the cause) and “sweating” (the effect)? My answer is: No. At the deductive phase of the analysis, when we have a fully specified model before us, the model tells us precisely what else would be different if the temperature yesterday had been 30 degrees rather than 40.

Consider the sentence “Mary would not have gotten pregnant had she been a man.” I believe most of us would agree with the truth of this sentence despite the fact that we may not have a clue what else in the universe would have had to be different had Mary been a man. And if the model is any good, it would imply that, regardless of other things being different (e.g., Mary’s education, income, self-esteem, etc.), she would not have gotten pregnant. Therefore, the phrase “had she been a man” should not be automatically rejected by interventionists as meaningless — it is quite meaningful.

Now consider the sentence: “If Mary were a man, her salary would be higher.” Here the discomfort is usually greater, presumably because not only can we not imagine what else in the universe would have had to be different had Mary been a man, but those things (education, self-esteem, etc.) now make a difference in the outcome (salary). Are we justified now in declaring discomfort? Not when we are reading our model. Given a fully specified SEM, in which gender, education, income, and self-esteem are bona fide variables, one can compute precisely how those factors would be affected by a gender change. Complaints of “how do we know” are legitimate at the model-construction phase, but not when we assume a fully specified model is before us and merely ask for its ramifications.
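For readers who want the deductive step spelled out, here is a sketch of the standard three-step counterfactual computation (abduction, action, prediction) on a fully specified toy SEM; the variables, equations, and Mary's numbers are all invented for illustration only:

```python
# Invented toy SEM: G (gender, 0/1) -> E (education) -> S (salary),
# with a direct path G -> S. All coefficients are hypothetical.
def counterfactual_salary(g_obs, e_obs, s_obs, g_new):
    # 1. Abduction: recover the exogenous noises from the observed world.
    u_e = e_obs - 2.0 * g_obs
    u_s = s_obs - 1.0 * e_obs - 3.0 * g_obs
    # 2. Action: perform the surgery, setting G to its counterfactual value.
    g = g_new
    # 3. Prediction: recompute E and S with the same noises held fixed.
    e = 2.0 * g + u_e
    s = 1.0 * e + 3.0 * g + u_s
    return s

# A hypothetical Mary: G=0, E=16, S=50. Had she been a man (G=1), the model
# says her education would also differ, and the same noises yield S=55.
print(counterfactual_salary(0, 16, 50, 1))  # 55.0
```

The point is that once the model is fully specified, the question “what else would have been different?” has a precise answer: exactly those variables downstream of the antecedent, recomputed under the same background conditions.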

To summarize, I believe the discomfort with non-manipulable causes represents a confusion between model utilization and model construction. In the former phase, counterfactual sentences are well defined regardless of whether the antecedent is manipulable. It is only when we are asked to evaluate a counterfactual by intuitive, unaided judgment that we feel discomfort and are provoked to question whether the counterfactual is “well defined.” Counterfactuals are always well defined relative to a given model, regardless of whether the antecedent is manipulable or not.

This takes us to the key question of whether our models should be informed by the manipulability restriction, and how. Interventionists attempt to convince us that the very concept of causation hinges on manipulability and, hence, that a causal model void of manipulability information is incomplete, if not meaningless. We saw above that SEM, as a representation of scientific knowledge, manages quite well without the manipulability restriction. I would therefore be eager to hear from interventionists what their conception is of “scientific knowledge,” and whether they can envision an alternative to SEM which is informed by the manipulability restriction and yet provides a parsimonious account of that which we know about the world.

My appeal to interventionists to provide alternatives to SEM has so far not been successful. Perhaps readers care to suggest some? The comment section below is open for suggestions, disputations and clarifications.

May 6, 2015

David Freedman, Statistics, and Structural Equation Models

Filed under: Causal Effect,Counterfactual,Definition,structural equations — moderator @ 12:40 am

(Re-edited: 5/6/15, 4 pm)

Michael A Lewis (Hunter College) sent us the following query:

Dear Judea,
I was reading a book by the late statistician David Freedman and in it he uses the term “response schedule” to refer to an equation which represents a causal relationship between variables. It appears that he’s using that term as a synonym for “structural equation,” the one you use. In your view, am I correct in regarding these as synonyms? Also, Freedman seemed to be of the belief that response schedules only make sense if the causal variable can be regarded as amenable to manipulation. So variables like race, gender, maybe even socioeconomic status, etc. cannot sensibly be regarded as causes since they can’t be manipulated. I’m wondering what your view is of this manipulation perspective.
Michael


My answer is: Yes. Freedman’s “response schedule” is a synonym for “structural equation.” The reason why Freedman did not say so explicitly has to do with his long and rather bumpy journey from statistical to causal thinking. Freedman, like most statisticians in the 1980s, could not make sense of the Structural Equation Models (SEM) that social scientists (e.g., Duncan) and econometricians (e.g., Goldberger) had adopted for representing causal relations. As a result, he criticized and ridiculed this enterprise relentlessly. In his (1987) paper “As others see us,” for example, he went as far as “proving” that the entire enterprise is grounded in logical contradictions. The fact that SEM researchers at that time could not defend their enterprise effectively (they were as confused about SEM as the statisticians — judging by the way they responded to his paper) only intensified Freedman’s criticism. It continued well into the 1990s, with renewed attacks on anything connected with causality, including the causal search program of Spirtes, Glymour and Scheines.

I have had a long and friendly correspondence with Freedman since 1993 and, going over a file of over 200 emails, it appears that it was around 1994 when he began to convert to causal thinking. First through the do-operator (by his own admission) and, later, by realizing that structural equations offer a neat way of encoding counterfactuals.

I speculate that the reason Freedman could not say plainly that causality is based on structural equations was that it would have been too hard for him to admit that he had erred in criticizing a model that he misunderstood, and one that is so simple to understand. This oversight was not entirely his fault; for someone trying to understand the world from a statistical viewpoint, structural equations do not make any sense: the asymmetric nature of the equations and those slippery “error terms” stand outside the prism of the statistical paradigm. Indeed, even today, very few statisticians feel comfortable in the company of structural equations. (How many statistics textbooks do we know that discuss structural equations?)

So, what do you do when you come to realize that a concept you ridiculed for 20 years is the key to understanding causation? Freedman decided not to say “I erred,” but to argue that the concept was not rigorous enough for statisticians to understand. He thus formalized “response schedule” and treated it as a novel mathematical object. The fact is, however, that if we strip “response schedule” of its superlatives, we find that it is just what you and I call a “function,” i.e., a mapping from the states of one variable onto the states of another. Some of Freedman’s disciples admire this invention (see R. Berk’s 2004 book on regression), but most people that I know just look at it and say: this is what a structural equation is.

The story of David Freedman is the story of statistical science itself, and of the painful journey the field has taken through the causal reformation. Starting with the structural equations of Sewall Wright (1921), and going through Freedman’s “response schedule,” the field still can’t swallow the fundamental building block of scientific thinking, in which Nature is encoded as a society of sensing and responding variables. Funny, econometrics has yet to start its reformation, though it has been housing SEM since Haavelmo (1943). (How many econometrics textbooks do we know which teach students how to read counterfactuals from structural equations?)


I now go to your second question, concerning the mantra “no causation without manipulation.” I do not believe anyone takes this slogan as a restriction nowadays, including its authors, Holland and Rubin. It will remain a relic of an era when statisticians tried to define causation with the only mental tool available to them: the randomized controlled trial (RCT).

I summed it up in Causality (2009, p. 361): “To suppress talk about how gender causes the many biological, social, and psychological distinctions between males and females is to suppress 90% of our knowledge about gender differences.”

I further elaborated on this issue in Bollen and Pearl (2014, p. 313), saying:

“Pearl (2011) further shows that this restriction has led to harmful consequences by forcing investigators to compromise their research questions only to avoid the manipulability restriction. The essential ingredient of causation, as argued in Pearl (2009: 361), is responsiveness, namely, the capacity of some variables to respond to variations in other variables, regardless of how those variations came about.”

In Causality (2009, p. 361) I also find this paragraph: “It is for that reason, perhaps, that scientists invented counterfactuals; it permits them to state and conceive the realization of antecedent conditions without specifying the physical means by which these conditions are established.”

All in all, you have touched on one of the most fascinating chapters in the history of science, featuring a respectable scientific community that clings desperately to an outdated dogma, while resisting, adamantly, the light that shines around it. This chapter deserves a major headline in Kuhn’s book on scientific revolutions. As I once wrote: “It is easier to teach Copernicus in the Vatican than discuss causation with a statistician.” But this was in the 1990s, before causal inference became fashionable. Today, after a vicious 100-year war of reformation, things are beginning to change (see http://www.nasonline.org/programs/sackler-colloquia/completed_colloquia/Big-data.html). I hope your upcoming book further accelerates the transition.

April 29, 2015

Spring Greeting from the UCLA Causality Blog

Filed under: Announcement,Causal Effect,Generalizability — eb @ 12:17 am

Friends in causality research,

This Spring greeting from UCLA Causality blog contains:
A. News items concerning causality research,
B. New postings, new problems and new solutions.

A. News items concerning causality research
A1. Congratulations go to Tyler VanderWeele, winner of the 2015 ASA “Causality in Statistics Education Award” for his book “Explanation in Causal Inference” (Oxford, 2015). Thanks, Tyler. The award ceremony will take place at the 2015 JSM conference, August 8-13, in Seattle.

More good news: Google has joined Microsoft in sponsoring next year’s award, so please upgrade your 2016 nominations. For details of nominations and selection criteria, see http://www.amstat.org/education/causalityprize/

A2. Vol. 3, Issue 1 (March 2015) of the Journal of Causal Inference (JCI) is now in print.
The Table of Contents and full-text pdf can be viewed here. Submissions are welcome on all aspects of causal analysis. One urgent request: please start your article with a crisp description of the research problem addressed.

A3. 2015 Atlantic Causal Inference Conference
The 2015 Atlantic Causal Inference Conference will take place in Philadelphia, May 20-21, 2015. The web site for registration and conference information is http://www.med.upenn.edu/cceb/biostat/conferences/ACIC15/index_acic15.php

A4. A 2-day course, “Causal Inference with Graphical Models,” will be offered in San Jose, CA, on June 15-16 by Professor Felix Elwert (University of Wisconsin). The organizers (BayesiaLab) offer generous academic discounts to students and faculty. See here.

B. New postings, new problems and new solutions.

B1. Causality and Big data

The National Academy of Sciences has organized a colloquium on “Drawing Causal Inference from Big Data”. The colloquium took place March 26-27, in Washington DC, and reflected a growing realization that statistical analysis void of causal explanations would not satisfy users of big data systems. The colloquium program can be viewed here:
http://www.nasonline.org/programs/sackler-colloquia/completed_colloquia/Big-data.html

My talk (with E. Bareinboim) focused on the problem of fusing data from multiple sources so as to provide valid answers to causal questions of interest. The main point was that this seemingly hopeless task can now be reduced to mathematics. See abstract and slides here: http://www.nasonline.org/programs/sackler-colloquia/documents/pearl1.pdf
and a youtube video here: https://www.youtube.com/watch?v=sjtBalq7Ulc

B2. A recent post on our blog deals with one of the most crucial and puzzling questions of causal inference: “How generalizable are our randomized clinical trials?” It turns out that the tools developed for transportability theory in http://ftp.cs.ucla.edu/pub/stat_ser/r400.pdf also provide an elegant answer to this question. Our post compares this answer to the way researchers have attempted to tackle the problem using the language of ignorability, usually resorting to post-stratification. Ignorability-type assumptions prove fairly limited, both in their ability to define conditions that permit generalization and in the way they impede interpretation in specific applications.

B3. We welcome the journal publication of the following research reports. Please update your citations:

B3.1 On the interpretation and Identification of mediation
Link: http://ftp.cs.ucla.edu/pub/stat_ser/r389.pdf

B3.2 On transportability
Link: http://ftp.cs.ucla.edu/pub/stat_ser/r400.pdf

B3.3 Back to mediation
Link: http://ftp.cs.ucla.edu/pub/stat_ser/r421-reprint.pdf

B4. Finally, enjoy our recent fruits on
http://bayes.cs.ucla.edu/csl_papers.html

Cheers,
Judea

January 27, 2015

Winter Greeting from the UCLA Causality Blog

Filed under: Announcement,Causal Effect — eb @ 7:34 am

This Winter greeting from UCLA Causality blog contains:
A. News items concerning causality research,
B. New postings, new problems and new solutions.

A. News items concerning causality research
A1. Reminder: The 2015 ASA “Causality in Statistics Education Award” has an early submission deadline — February 15, 2015. For details of purpose and selection criteria, see here.

A2. Vol. 3, Issue 1 of the Journal of Causal Inference (JCI) is about to appear in March 2015. The Table of Contents will be posted on our blog. For previous issues, see here. As always, submissions are welcome on all aspects of causal analysis, especially those deemed heretical.

A3. How others view “statistical control”.
Last month, columnist Ezra Klein wrote a post about the use and abuse of statistical “controls” (or “adjustments”), especially in studies concerning racial or gender discrimination (link). His bottom line: “sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure.”

Matthew Martin, writing here, echoes Klein’s concern and adds two other flaws of improper control: confounding and selection bias. His bottom line: “be suspicious whenever a paper says ‘controlling for ____’. There is a good chance you can’t actually control for that.”

I am posting these two articles to stimulate discussion on whether we have done enough to educate the general public, as well as the scientific community on what modern causal analysis has to say about “statistical control”.

B. New postings, new problems and new solutions.

B1. Flowers of the First Law of Causal Inference

Our discussion with Guido Imbens on why some economists avoid graphs at all cost (link) has moved on to another question: “Why do some economists refuse to benefit from the First Law?” (link). I am convinced that this refusal reflects resistance to accepting that structural equations constitute the scientific basis for potential outcomes; that fact runs contrary to conventional teachings in some circles.

But resistance aside, the past two postings lay before readers two miracles of the first law, which I labeled “Flowers”. The first tells us how counterfactuals can be seen in the causal graph (link), and the second clarifies questions concerned with conditioning on post-treatment variables. (link).

B2. Causality in Logical Setting

In the past 15 years, most causality research at UCLA has focused on causal reasoning in statistical settings, attempting to infer causal parameters from statistical data. It was refreshing for me to receive a new paper from Bochman and Lifschitz on “Causality in a Logical Setting” (link). The paper reminded me of a whole body of work that has been going on in the logic-based community, where the task is to communicate causal knowledge and reason with it commonsensically, from beliefs to interventions to counterfactuals. Worth our undivided attention.

B3. At the request of many, I am posting a copy of the Epilogue of Causality (2000, 2009) which, so far, was available only as a public lecture (http://bayes.cs.ucla.edu/BOOK-2K/causality2-epilogue.pdf). I am amazed to realize that there are very few things I would change in this text today, almost 20 years after the lecture was written (1996). Still, if you spot a gap, or a need for additional stories, quotes, anecdotes, ideas or personalities, please share.

B4. Dont miss our previous postings on this blog and, of course, our steady outflow of new results, here.

Some are really neat!
Enjoy,
Judea
