Causal Analysis in Theory and Practice

May 27, 2015

Does Obesity Shorten Life? Or is it the Soda?

Filed under: Causal Effect, Definition, Discussion, Intuition — moderator @ 1:45 pm

Our discussion of “causation without manipulation” (link) acquires an added sense of relevance when considered in the context of public concerns with obesity and its consequences. A Reuters story published on September 21, 2012 (link) cites a report projecting that at least 44 percent of U.S. adults could be obese by 2030, compared to 35.7 percent today, bringing an extra $66 billion a year in obesity-related medical costs. A week earlier, New York City adopted a regulation banning the sale of sugary drinks in containers larger than 16 ounces at restaurants and other outlets regulated by the city health department.

Interestingly, an article published in the International Journal of Obesity (2008, vol. 32, doi:10.1038/i) questions the logic of attributing consequences to obesity. The authors, M.A. Hernan and S.L. Taubman (both of Harvard’s School of Public Health), imply that the very notion of “obesity-related medical costs” is undefined, if not misleading, and that, instead of speaking of “obesity shortening life” or “obesity raising medical costs”, one should be speaking of manipulable variables like “life style” or “soda consumption” as causing whatever harm we tend to attribute to obesity.

The technical rationale for these claims is summarized in their abstract:
“We argue that observational studies of obesity and mortality violate the condition of consistency of counterfactual (potential) outcomes, a necessary condition for meaningful causal inference, because (1) they do not explicitly specify the interventions on body mass index (BMI) that are being compared and (2) different methods to modify BMI may lead to different counterfactual mortality outcomes, even if they lead to the same BMI value in a given person.”

Readers will surely notice that these arguments stand in contradiction to the structural, as well as closest-world definitions of counterfactuals (Causality, pp. 202-206, 238-240), according to which consistency is a theorem in counterfactual logic, not an assumption and, therefore, counterfactuals are always consistent (link). A counterfactual appears to be inconsistent when its antecedent A (as in “had A been true”) is conflated with an external intervention devised to enforce the truth of A. Practical interventions tend to have side effects, and these need to be reckoned with in estimation, but counterfactuals and causal effects are defined independently of those interventions and should not, therefore, be denied existence by the latter’s imperfections. To say that obesity has no intrinsic effects because some interventions have side effects is analogous to saying that stars do not move because telescopes have imperfections.

Rephrased in a language familiar to readers of this blog, Hernan and Taubman claim that the causal effect P(mortality=y|Set(obesity=x)) is undefined, seemingly because the consequences of obesity depend on how we choose to manipulate it. Since the probability of death will generally depend on whether we manipulate obesity through diet versus, say, exercise (we assume that we are able to perfectly define quantitative measures of obesity and mortality), Hernan and Taubman conclude that P(mortality=y|Set(obesity=x)) is not formally a function of x, but a one-to-many mapping.

This contradicts, of course, what the quantity P(Y=y|Set(X=x)) represents. As the one who coined the symbol Set(X=x) (Pearl, 1993) [it was later changed to do(X=x)], I can testify that, in its original conception:

1. P(mortality=y|Set(obesity=x)) does not depend on any choice of intervention; it is defined relative to a hypothetical, minimal intervention needed for establishing X=x and, so, it is defined independently of how the event obesity=x actually came about.

2. While it is true that the probability of death will generally depend on whether we manipulate obesity through diet versus, say, exercise, the quantity P(mortality=y|Set(obesity=x)) has nothing to do with diet or exercise; it has to do only with the level x of X and the anatomical or social processes that respond to this level of X. Set(obesity=x) describes a virtual intervention, by which nature sets obesity to x, independent of diet or exercise, while keeping everything else intact, especially the processes that respond to X. The fact that we, mortals, cannot execute such an incisive intervention does not make this intervention (1) undefined, or (2) vague, or (3) replaceable by manipulation-dependent operators.

To elaborate:
(1) The causal effects of obesity are well-defined in the SEM model, which consists of functions, not manipulations.

(2) The causal effects of obesity are as clear and transparent as the concept of functional dependency and were chosen, in fact, to serve as standards of scientific communication (see again the Wikipedia entry on Cholesterol: relationships are defined by the “absence” or “presence” of agents, not by the means through which those agents are controlled).

(3) If we wish to define a new operator, say Set_a(X=x), where a stands for the means used in achieving X=x (as Larry Wasserman suggested), this can be done within the syntax of the do-calculus. But that would be a new operator altogether, unrelated to do(X=x), which is manipulation-neutral.
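To make point (1) concrete, here is a minimal sketch of an SEM as a set of functions; all variable names and coefficients are hypothetical illustrations, not an empirical model. The do-operator is implemented as a "surgery" that replaces the function assigning obesity with a constant while leaving the mortality mechanism untouched, so the resulting expectation is a single-valued function of x, whatever the means of manipulation.

```python
import random

# Minimal SEM sketch (hypothetical names and coefficients): each variable
# is a function of its parents plus exogenous noise.

def f_obesity(lifestyle, u):
    return 20 + 2 * lifestyle + u        # obesity listens to lifestyle

def f_mortality(obesity, u):
    return 0.01 * obesity + u            # mortality listens to obesity only

def expected_risk(do_obesity=None, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        lifestyle = rng.gauss(0, 1)
        # The "surgery": do() replaces f_obesity with a constant;
        # f_mortality is left exactly as it was.
        ob = do_obesity if do_obesity is not None else f_obesity(lifestyle, rng.gauss(0, 1))
        total += f_mortality(ob, rng.gauss(0, 1))
    return total / n

# E[risk | Set(obesity=x)] is a single-valued function of x, independent of
# how obesity came about; the per-unit effect equals the coefficient 0.01.
effect = expected_risk(do_obesity=31) - expected_risk(do_obesity=30)
```

The point of the sketch is that no "means of achieving X=x" appears anywhere: the intervention is defined purely by which function gets replaced.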

There are several ways of loading the Set(X=x) operator with manipulational or observational specificity. In the obesity context, one may wish to consider P(mortality=y|Set(diet=z)), P(mortality=y|Set(exercise=w)), P(mortality=y|Set(exercise=w), Set(diet=z)), P(mortality=y|Set(exercise=w), See(diet=z)), or P(mortality=y|See(obesity=x), Set(diet=z)). The latter corresponds to the studies criticized by Hernan and Taubman, where one manipulates diet and passively observes obesity. All these variants are legitimate quantities that one may wish to evaluate, if called for, but they have nothing to do with P(mortality=y|Set(obesity=x)), which is manipulation-neutral.
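The Set/See distinction can be illustrated in a toy simulation (all numbers are assumptions invented for illustration) in which diet affects mortality risk both through obesity and directly, as a side effect. The mixed quantity P(mortality|See(obesity=x), Set(diet=z)) then absorbs diet's direct path, while the manipulation-neutral P(mortality|Set(obesity=x)) does not:

```python
import random

rng = random.Random(1)

# Toy model (illustrative numbers): diet affects mortality risk both
# through obesity and directly (a "side effect" path).
def sample(set_diet=None, set_obesity=None):
    diet = set_diet if set_diet is not None else rng.gauss(0, 1)
    obesity = set_obesity if set_obesity is not None else 30 - 3 * diet + rng.gauss(0, 1)
    risk = 0.01 * obesity - 0.005 * diet + rng.gauss(0, 0.01)
    return diet, obesity, risk

# P(risk | Set(obesity=30)): manipulation-neutral; diet is left alone.
set_ob = [sample(set_obesity=30)[2] for _ in range(50_000)]
mean_set = sum(set_ob) / len(set_ob)

# P(risk | Set(diet=1), See(obesity ~= 30)): manipulate diet, passively
# observe obesity -- this mixes obesity's effect with diet's side effect.
seen = [r for d, ob, r in (sample(set_diet=1.0) for _ in range(200_000))
        if abs(ob - 30) < 0.5]
mean_see = sum(seen) / len(seen)

# mean_see is pulled away from mean_set by the direct diet -> risk path,
# so the two queries answer genuinely different questions.
```

Both quantities are well defined in the same model; they simply correspond to different surgeries and conditionings.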

Under certain conditions we can even infer P(mortality=y|Set(obesity=x)) from data obtained under dietary controlled experiments [i.e., data governed by P(mortality=y|See(obesity=x), Set(diet=z)); see R-397]. But these conditions can only reveal themselves to researchers who acknowledge the existence of P(mortality=y|Set(obesity=x)) and are willing to explore its properties.

Additionally, all these variants can be defined and evaluated in SEM and, moreover, the modeler need not think about them in the construction of the model, where only one relation matters: Y LISTENS TO X.

My position on the issues of manipulation and SEM can be summarized as follows:

1. The fact that morbidity varies with the way we choose to manipulate obesity (e.g., diet, exercise) does not diminish our need, or ability to define a manipulation-neutral notion of “the effect of obesity on morbidity”, which is often a legitimate target of scientific investigation, and may serve to inform manipulation-specific effects of obesity.

2. In addition to defining and providing identification conditions for the manipulation-neutral notion of “effect of obesity on morbidity”, the SEM framework also provides formal definitions and identification conditions for each of the many manipulation-specific effects of obesity, and this can be accomplished through a single SEM model provided that the version-specific characteristics of those manipulations are encoded in the model.

I would like to say more about the relationship between knowledge-based statements (e.g., “obesity kills”) and policy-specific statements (e.g., “soda kills”). I wrote a short note about it in the Journal of Causal Inference and I think it would add another perspective to our discussion. A copy of the introduction section is given below.

Is Scientific Knowledge Useful for Policy Analysis?
A Peculiar Theorem Says: No


1 Introduction
In her book, Hunting Causes and Using Them [1], Nancy Cartwright expresses several objections to the do(x) operator and the “surgery” semantics on which it is based (pp. 72 and 201). One of her objections concerned the fact that the do-operator represents an ideal, atomic intervention, different from the one implementable by most policies under evaluation. According to Cartwright, for policy evaluation we generally want to know what would happen were the policy really set in place, and the policy may affect a host of changes in other variables in the system, some envisaged and some not.

In my answer to Cartwright [2, p. 363], I stressed two points. First, the do-calculus enables us to evaluate the effect of compound interventions as well, as long as they are described in the model and are not left to guesswork. Second, I claimed that in many studies our goal is not to predict the effect of the crude, non-atomic intervention that we are about to implement but, rather, to evaluate an ideal, atomic policy that cannot be implemented given the available tools, but that represents nevertheless scientific knowledge that is pivotal for our understanding of the domain.

The example I used was as follows: Smoking cannot be stopped by any legal or educational means available to us today; cigarette advertising can. That does not stop researchers from aiming to estimate “the effect of smoking on cancer,” and doing so from experiments in which they vary the instrument — cigarette advertisement — not smoking. The reason they would be interested in the atomic intervention P(Cancer|do(Smoking)) rather than (or in addition to) P(Cancer|do(Advertising)) is that the former represents a stable biological characteristic of the population, uncontaminated by social factors that affect susceptibility to advertisement, thus rendering it transportable across cultures and environments. With the help of this stable characteristic, one can assess the effects of a wide variety of practical policies, each employing a different smoking-reduction instrument. For example, if careful scientific investigations reveal that smoking has no effect on cancer, we can comfortably conclude that increasing cigarette taxes will not decrease cancer rates and that it is futile for schools to invest resources in anti-smoking educational programs. This note takes another look at this argument, in light of recent results in transportability theory (Bareinboim and Pearl [3], hereafter BP).
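Under the strong (and here merely assumed) hypothesis that advertising touches cancer only through smoking, the stable quantity P(Cancer|do(Smoking)) can be backed out of two advertising experiments by a simple ratio. The linear toy simulation below, with invented coefficients and a hidden confounder, sketches the idea; it is an instrumental-variable-style calculation, not the authors' own procedure:

```python
import random

rng = random.Random(3)

# Linear toy world (coefficients invented): advertising -> smoking -> cancer,
# with a hidden confounder U between smoking and cancer.  Advertising is
# assumed to act on cancer only through smoking.
def draw(do_adv):
    u = rng.gauss(0, 1)                              # hidden confounder
    smoking = 0.5 * do_adv + u + rng.gauss(0, 1)
    cancer = 0.2 * smoking + 0.3 * u + rng.gauss(0, 0.1)
    return smoking, cancer

n = 50_000
arm1 = [draw(1.0) for _ in range(n)]   # experiment: advertise
arm0 = [draw(0.0) for _ in range(n)]   # experiment: do not advertise
s1 = sum(s for s, _ in arm1) / n
c1 = sum(c for _, c in arm1) / n
s0 = sum(s for s, _ in arm0) / n
c0 = sum(c for _, c in arm0) / n

# Advertising shifts smoking by (s1 - s0) and cancer by (c1 - c0); since it
# acts on cancer only through smoking, the ratio recovers the per-unit
# effect of do(Smoking) itself (0.2 in this toy world), confounding and all.
effect_of_smoking = (c1 - c0) / (s1 - s0)
```

Note that the recovered quantity is exactly the "stable biological characteristic" of the text: it does not depend on which instrument was varied to obtain it.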

Robert Platt called my attention to the fact that there is a fundamental difference between Smoking and Obesity; randomization is physically feasible in the case of smoking (say, in North Korea) — not in the case of obesity.

I agree; it would have been more effective to use Obesity instead of Smoking in my response to Cartwright. An RCT on Smoking can be envisioned (if one is willing to discount the obvious side effects of forced smoking or forced withdrawal), while an RCT on Obesity requires more creative imagination: not a powerful dictator, but an agent such as Lady Nature herself, who can increase obesity by one unit and evaluate its consequences on various body functions.

This is what the do-operator does: it simulates an experiment conducted by Lady Nature who, for all we know, is almighty, and can permit all the organisms that are affected by BMI (and fat content, etc. [I assume here that we can come to some consensus on the vector of measurements that characterizes Obesity]) to respond to a unit increase of BMI in the same way that they responded in the past. Moreover, she is able to do it by an extremely delicate surgery, without touching those variables that we mortals need to change in order to drive BMI up or down.

This is not a new agent by any means; it is the standard agent of science. For example, consider the ideal gas law, PV = nRT. While volume (V), temperature (T) and the amount of gas (n) are independently manipulable, pressure (P) is not. This means that whenever we talk about the pressure changing, the change is always accompanied by a change in V, n and/or T which, like diet and exercise, have their own side effects. Does this prevent us from speaking about the causal effect of tire pressure on how bumpy the ride is? Must we always mention V, T or n when we speak about the effect of air pressure on the size of the balloon we are blowing? Of course not! Pressure has a life of its own (the rate of momentum transfer to a wall that separates two vessels), independent of the means by which we change it.
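The analogy can be put in code: pressure is a derived, non-manipulable quantity, yet anything that listens to pressure alone responds identically no matter which manipulable variable was used to set it. The balloon response below is a hypothetical illustration of "Y listens to X", not a physical law:

```python
R = 8.314  # J/(mol*K), universal gas constant

def pressure(n_mol, temp_k, vol_m3):
    return n_mol * R * temp_k / vol_m3       # ideal gas law: P = nRT/V

# Two different manipulations that achieve the same pressure:
p_via_temp = pressure(1.0, 600.0, 0.0249)    # double the temperature
p_via_vol = pressure(1.0, 300.0, 0.01245)    # halve the volume

# A hypothetical downstream response that listens to pressure alone
# (illustrative only -- any decreasing function of P would do).
def balloon_radius(p_pa):
    return (3.0 / p_pa) ** (1 / 3)

# The balloon cannot tell which manipulation produced the pressure.
```

The "side effects" of the two manipulations (a hotter gas, a smaller vessel) are real, but they do not enter the definition of pressure's own effect.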

Aha!!! The skeptic argues: “Things are nice in physics, but epidemiology is much more complex; we do not know the equations or the laws, and we will never in our lifetime know the detailed anatomy of the human body.” This ignorance-pleading argument always manages to win the hearts of the mystic, especially among researchers who feel uncomfortable encoding partial scientific knowledge in a model. Yet Lady Nature does not wait for us to know things before she makes our heart muscle respond to the fat content in the blood. And we need not know the exact response to postulate that such a response exists.

Scientific thinking is not unique to physics. Consider any standard medical test and ask whether the quantities measured have “well-defined causal effects” on the human body. Does “blood pressure” have any effect on anything? Why do we not hear complaints about “blood pressure” being “not well defined”? After all, following the criterion of Hernan and Taubman (2008), the “effect of X on Y” is ill-defined whenever Y depends on the means we use to change X. So “blood pressure” has no well-defined effect on any organ in the human body. The same goes for “blood count”, “kidney function”, “Rheumatoid Factor”…. If these variables have no effects on anything, why do we measure them? Why do physicians communicate with each other through these measurements, instead of through the “interventions” that may change these measurements?

My last comment is for epidemiologists who see their mission as that of “changing the world for the better” and, in that sense, care only about treatments (causal variables) that are manipulable. I have only admiration for this mission. However, to figure out which of those treatments should be applied in any given situation, we need to understand the situation, and it so happens that “understanding” involves causal relationships between manipulable as well as non-manipulable variables. For instance, if someone offers to sell you a new miracle drug that (provenly) reduces obesity, and your scientific understanding is that obesity has no effect whatsoever on anything that is important to you, then, regardless of other means that are available for manipulating obesity, you would tell the salesman to go fly a kite. And you would do so regardless of whether those other means produced positive or negative results. The basis for rejecting the new drug is precisely your understanding that “obesity has no effect on the outcome”, the very quantity that some epidemiologists now wish to purge from science, all in the name of caring only about manipulable treatments.

Epidemiology, like all empirical sciences, needs both scientific and clinical knowledge to sustain and communicate that which we have learned and to advance beyond it. While the effects of diet and exercise are important for controlling obesity, the health consequences of obesity are no less important; they constitute legitimate targets of scientific pursuit, regardless of current shortcomings in clinical knowledge.



  1. Anonymous readers have provided several questions on my last posting
    “Does Obesity Shorten Life?”; I will answer them one at a time.

    What logic brings us from
    a “society of listeners” to practical interventions,
    which are the ultimate goal of causal inference research?

    Since a “society of listeners” is defined by passive response
    functions, and invokes no notion of interventions in its
    description, there ought to be some logic which defines
    “interventions” in terms of those response functions.

    Imagine a society of listeners consisting of
    agents equipped with listening devices: each
    member observes the leader, and each time the
    leader raises his right (or left) hand, the agent takes
    a step to the right (or left). The leader too has a listening
    device; he is listening to a thermometer and a barometer
    which are hung on the wall of the room that houses this society.

    So far, all is passive. To introduce interventions,
    assume that we know for every agent who he is listening
    to and what his response would be for any signal.
    Can we predict how the system behaves if we were
    to convince the leader to ignore the reading of
    the thermometer and just raise his right hand?
    The skeptic would say: No, because the agents
    may listen to what we told the leader and change
    their behavior. The faithful would say: ok
    so let us assume we whisper our instruction to the leader.
    Can we now predict the behavior of the system after
    our whispered intervention?
    The answer is YES, because the agents do not
    care if the leader raises his hand as a result of
    observing the thermometer or complying with our
    request. We have thus formed the logical
    connection between the passive listening and
    behavior under intervention.
    (The same holds true if our whispering modifies
    each agent’s response function, as long as we know
    the change).
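    The whispering story can be sketched in a few lines of code (the number of agents and the temperature threshold are arbitrary choices): the intervention replaces only the leader's response function, and the agents, who listen to the hand and not to the reason behind it, behave exactly as their passive response functions dictate.

```python
# Society of listeners: agents listen to the leader's hand;
# the leader listens to a thermometer (threshold is arbitrary).

def leader_hand(thermometer):
    return "right" if thermometer > 20 else "left"   # passive response

def agent_step(hand):
    return +1 if hand == "right" else -1             # step right or left

def society(thermometer, whisper=None):
    # A whispered intervention replaces ONLY the leader's response
    # function; every agent's own function is left untouched.
    hand = whisper if whisper is not None else leader_hand(thermometer)
    return [agent_step(hand) for _ in range(5)]      # five identical agents

observed = society(thermometer=25)                     # passive regime
intervened = society(thermometer=10, whisper="right")  # whispered Set()
# The agents do not care *why* the hand went up, so the passive response
# functions fully determine behavior under the intervention.
```
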

    Q. 2.
    Is anything important for policy
    decisions left behind in a system in
    which ’causes’ are equated with interventions?

    Let Y be “years to death” and X be “Obesity”.
    Assume I go through an SEM exercise and come up
    with the conclusion
    (1) E(Y|set(x+1)) – E(Y|set(x)) = 20
    The question is: what would be missed by someone who ignores this
    conclusion for any reason, say because set(x) is undefined?

    As a red-blooded epidemiologist, I would jump and say:
    “Wow, one unit of Obesity can shorten life by 20 years!
    And this is Obesity ALONE, none of those side effects!
    We must find effective means for weight loss;
    hopefully those means would not have their own side effects,
    but it is surely worth the effort of finding one, even if it
    does have side effects.”

    Compare it with another hypothetical conclusion:
    (2) E(Y|set(x+1)) – E(Y|set(x)) = 0.1
    Here I would say: “Stop all that foolish and expensive research
    on trying to find a ‘weight-loss drug’; in the best
    case, and assuming no side effects, all you are going
    to gain is about a month of life expectancy (per unit);
    it’s not worth it.”

    If epidemiologists’ reactions to conclusion (1) differ so
    profoundly from their reactions to conclusion (2),
    even at the qualitative level, it must be
    that the conclusion is meaningful to rank-and-file
    epidemiologists, and those who ignore it would be
    penalized by spending resources unwisely.

    Note that the meaning lies in the intervention: “should we invest
    in weight-loss research?” Yet the analysis started with
    passive listening, it then continued through a whispering
    intervention Set(X=x), which is hypothetical but not realizable
    and, lo and behold, the final conclusion informs
    real-life, hard policy questions: “should we invest
    in weight-loss research?”

    It is a miracle, one that only logicians can appreciate!!!

    Can we trust the SEM conclusion?

    More specifically, can we trust the SEM
    conclusion if it conflicts drastically with
    the conclusions obtained by pure interventional
    experiments, say on diet and exercise?

    I have answered this question in my last message:
    [SEM conclusions] come equipped with a mathematical
    guarantee that the estimate obtained is no less plausible
    than the assumptions in the model.

    This is a fairly useful guarantee, for a quantity
    that the skeptics label “undefined”, and which expresses precisely
    the research question we have in mind: “What is
    the contribution to life expectancy of obesity ALONE,
    uncontaminated by the side-effects of diet and exercise.”

    There are also the dimensions of scientific
    communication and parsimony of knowledge representation.
    These features do not affect decision making per se,
    but organizing science around manipulations, though doable,
    would result in a complex and non-communicable representation.
    It is sometimes called “apprenticeship”.

    Comment by Judea Pearl — June 2, 2015 @ 3:14 am

  2. Excellent comments by Ian Shrier and Maria Glymour reminded me
    that the issue before us is no different from
    the one confronted in mediation analysis (with a
    non-manipulable mediator), where the effect of the
    mediator on the outcome is meaningful because it provides information on the
    commonalities among the available interventions, i.e., if one of them is effective
    others are likely to be effective as well. Knowing the causal effect of
    a non-manipulable variable X (say obesity) on the outcome, enables us to select among
    competing interventions (e.g., diet, exercise) as well as discover new interventions
    capable of acting on the outcome.

    I conjecture in fact that the great majority of all effective treatments in existence were discovered
    by virtue of their effects on an intermediary variable suspected of causing the outcome. The discovery
    process goes somewhat like this:

    1. I observe correlation between Z and outcome Y
    2. I go experimental, and observe the causal effect of do(Z) on Y
    3. I notice that whenever Z increases Y, there is another variable, X, that tends to increase with Y
    4. I wonder: Is it possible that X mediates the effect of Z on Y?
    If this is the case, then I can think of many ways by which we can raise X, much
    cheaper and more effective than Z.
    (True, most of them have side effects but, if I am right,
    we can perhaps discover a new way of raising X, which has no side effects.)
    5. But how can I test the hypothesis that Z acts on Y through X????
    Unfortunately, X is non-manipulable.
    6. In other words, I need to estimate the quantity E(Y|Set(X=x))
    from the data that I have and determine if it varies significantly
    with x.
    7. This however is heresy. E(Y|Set(X=x)) has been declared “undefined,”
    by many thinkers of our generation, according to whom
    the only thing that makes sense is interventional
    effects like E(Y|Set(Z=z)). I am stuck.
    8. Never mind heresy. I will estimate E(Y|Set(X=x)) by
    whatever means I have, judgmentally if needed and,
    then, if I find it highly sensitive to x, I will proceed to investigate other means of raising x.
    9. I thought about it. I have good reason to believe that
    E(Y|Set(X=x)) is sensitive to x.
    10. Victory. We have found that W1, W2, …, Wn provide wonderful
    means of raising x and, based on RCTs on those W’s, we now recommend
    Wk as the most effective and safe way of controlling Y.
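    Steps 2 and 6 above can be sketched in a toy linear world (all mechanisms invented for illustration). In the simulation we are allowed to "play Lady Nature" and set X directly; that simulated quantity is precisely what the judgmental or formal estimate of step 8 must supply in practice.

```python
import random

rng = random.Random(2)

# Toy world for the discovery loop (mechanisms invented): Z is manipulable,
# X is a non-manipulable mediator, Y is the outcome.
def draw(do_z=None, do_x=None):
    z = do_z if do_z is not None else rng.gauss(0, 1)
    x = do_x if do_x is not None else 2 * z + rng.gauss(0, 1)
    y = 5 * x + rng.gauss(0, 1)
    return z, x, y

n = 20_000
# Step 2: experimental effect of do(Z) on Y (feasible in practice).
dz = (sum(draw(do_z=1)[2] for _ in range(n))
      - sum(draw(do_z=0)[2] for _ in range(n))) / n
# Step 6: effect of the mediator, E(Y|Set(X=x)) -- only "Lady Nature"
# can run this experiment, but the model lets us compute it.
dx = (sum(draw(do_x=1)[2] for _ in range(n))
      - sum(draw(do_x=0)[2] for _ in range(n))) / n
# A large dx tells us that ANY new intervention W that raises X is
# worth investigating, which is the point of step 8.
```
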

    Our discussion on manipulable causation deals with step 8,
    in which I am advocating replacement of the judgmental
    process with a formal method of estimating
    E(Y|Set(X=x)) using prior scientific knowledge and
    the logic of counterfactuals. Being formal, the method
    also provides us with plausibility guarantees that the quantity we
    estimate is equal to the quantity we need to estimate,
    in order to decide whether we should go ahead with
    investigating W1, W2, …, Wn.


    Comment by judea pearl — June 2, 2015 @ 11:10 pm
