# Causal Analysis in Theory and Practice

## June 27, 2009

### Effects of Treatment on the Treated: Identification and Generalization

Filed under: Counterfactual — moderator @ 4:00 am

Ilya Shpitser and Judea Pearl recently presented an article at the UAI conference which offers a solution to the (counterfactual) problem of evaluating the Effect of Treatment on the Treated.

The article may be viewed here: http://ftp.cs.ucla.edu/pub/stat_ser/r349.pdf

## February 27, 2007

### Counterfactuals in linear systems

Filed under: Counterfactual, Linear Systems — judea @ 4:08 pm

What do we know about counterfactuals in linear models?

Here is a neat result concerning the testability of counterfactuals in linear systems.
We know that counterfactual queries of the form P(Yx=y|e) may or may not be empirically identifiable, even in experimental studies. For example, the probability of causation, P(Yx=y|x',y') is in general not identifiable from experimental data (Causality, p. 290, Corollary 9.2.12) when X and Y are binary.1 (Footnote-1: A complete graphical criterion for distinguishing testable from nontestable counterfactuals is given in Shpitser and Pearl (2007, upcoming)).

This note shows that things are much friendlier in linear analysis:

Claim A. Any counterfactual query of the form E(Yx|e) is empirically identifiable in linear causal models, where e is arbitrary evidence.

Claim B. E(Yx|e) is given by

E(Yx|e) = E(Y|e) + T [x – E(X|e)]      (1)

where T is the total effect coefficient of X on Y, i.e.,

T = d E[Yx]/dx = E(Y|do(x+1)) – E(Y|do(x))      (2)

Thus, whenever the causal effect T is identified, E(Yx|e) is identified as well.
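As a sanity check, Claim B can be verified by Monte-Carlo simulation on a toy linear model (the model, coefficient values, and variable names below are illustrative, not from the note). We abduce the exogenous noises of the units consistent with the evidence, set X to a new value, and compare the resulting average of Yx with the right-hand side of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy linear model (illustrative):
#   X = U_X,   Y = T*X + U_Y,   with U_X and U_Y independent.
T = 1.7                                   # total effect of X on Y
u_x = rng.normal(0.0, 1.0, n)
u_y = rng.normal(0.5, 1.0, n)
x_obs = u_x
y_obs = T * x_obs + u_y

# Evidence e: X was observed in a narrow band around x0.
x0, x_new = 1.0, 3.0
e = np.abs(x_obs - x0) < 0.05

# Abduction-action-prediction: keep the noise terms of the units
# satisfying e, then set X := x_new and recompute Y.
lhs = (T * x_new + u_y[e]).mean()         # E(Y_x | e), simulated

# Eq. (1): E(Y_x | e) = E(Y | e) + T * (x_new - E(X | e))
rhs = y_obs[e].mean() + T * (x_new - x_obs[e].mean())

assert abs(lhs - rhs) < 1e-6
```

In this linear model the two sides agree not just approximately but algebraically, since both reduce to T·x_new plus the conditional mean of U_Y given e.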

## February 16, 2006

### The meaning of counterfactuals

Filed under: Counterfactual, Definition — moderator @ 12:00 am

From Dr. Patrik Hoyer (University of Helsinki, Finland):

I have a hard time understanding what counterfactuals are actually useful for. To me, they seem to be answering the wrong question. In your book, you give at least a couple of different reasons why one would need the answer to a counterfactual question, so let me tackle these separately:

1. Legal questions of responsibility. From your text, I infer that the American legal system says that a defendant is guilty if he or she caused the plaintiff's misfortune. You take this to mean that if the plaintiff would not have suffered misfortune had the defendant not acted the way he or she did, then the defendant is to be sentenced. So we have a counterfactual question that needs to be answered to establish responsibility. But in my mind, the law is clearly flawed. Responsibility should rest with the predicted outcome of the defendant's action, not with what actually happened. Let me take a simple example: say that I am playing a simple dice-game for my team. Two dice are to be thrown and I am to bet on either (a) two sixes are thrown, or (b) anything else comes up. If I guess correctly, my team wins a dollar; if I guess wrongly, my team loses a dollar. I bet (b), but am unlucky and two sixes actually come up. My team loses a dollar. Am I responsible for my team's failure? Surely, in the counterfactual sense, yes: had I bet differently my team would have won. But any reasonable person on the team would thank me for betting the way I did. In the same fashion, a doctor should not be held responsible if he administers, for a serious disease, a drug which cures 99.99999% of the population but kills 0.00001%, even if he was unlucky and his patient died. If the law is based on the counterfactual notion of responsibility, then the law is seriously flawed, in my mind.

A further example is that on page 323 of your book: the desert traveler. Surely, both Enemy-1 and Enemy-2 are equally 'guilty' for trying to murder the traveler. Attempted murder should equal murder. In my mind, the only rationale for giving a shorter sentence for attempted murder is that the defendant is apparently not so good at murdering people so it is not so important to lock him away… (?!)

2. The use of context in decision-making. On page 217, you write "At this point, it is worth emphasizing that the problem of computing counterfactual expectations is not an academic exercise; it represents in fact the typical case in almost every decision-making situation." I agree that context is important in decision making, but do not agree that we need to answer counterfactual questions.

In decision making, the thing we want to estimate is P(future | do(action), see(context)). This is of course a regular do-probability, not a counterfactual query. So why do we need to compute counterfactuals?

In your example in section 7.2.1, your query (3): "Given that the current price is P=p0, what would be the expected value of the demand Q if we were to control the price at P=p1?". You argue that this is counterfactual. But what if we introduce into the graph new variables Qtomorrow and Ptomorrow, with parent sets (U1, I, Ptomorrow) and (W, U2, Qtomorrow), respectively, and with the same connection-strengths d1, d2, b2, and b1. Now query (3) reads: "Given that we observe P=p0, what would be the expected value of the demand Qtomorrow if we perform the action do(Ptomorrow=p1)?" This is exactly the same question, but it is not counterfactual; it is just P(Qtomorrow | do(Ptomorrow=p1), see(P=p0)). Obviously, we get the correct answer by doing the counterfactual analysis, but the question per se is no longer counterfactual and can be computed using regular do( )-machinery. I guess this is the idea of your 'twin network' method of computing counterfactuals. In this case, why say that we are computing a counterfactual when what we really want is prediction (i.e. a regular do-expression)?
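The twin construction described here can be checked with a short simulation (the coefficient values and noise distributions below are made up for illustration; only the structure Q = b1·P + d1·I + U1, P = b2·Q + d2·W + U2 comes from the book's model). Today's P and Q solve the two simultaneous equations, while "tomorrow's" copies share the background variables, and the twin-network answer matches the counterfactual E(Q | do(P=p1), P=p0) computed from the total effect of P on Q:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Supply/demand model of section 7.2.1 (coefficient values made up):
#   Q = b1*P + d1*I + U1        (demand)
#   P = b2*Q + d2*W + U2        (price)
b1, b2, d1, d2 = -0.5, -0.2, 0.3, 0.4

income = rng.normal(size=n)   # I
wage = rng.normal(size=n)     # W
u1 = rng.normal(size=n)
u2 = rng.normal(size=n)

# Solve the two simultaneous equations for today's observed P and Q:
p = (b2 * (d1 * income + u1) + d2 * wage + u2) / (1 - b1 * b2)
q = b1 * p + d1 * income + u1

# "Tomorrow's" copies share the background variables U1 and I, and
# under do(Ptomorrow = p1) the demand equation gives Qtomorrow directly.
p0, p1 = 0.0, 1.0
e = np.abs(p - p0) < 0.05          # evidence: today's price is (about) p0
q_tomorrow = b1 * p1 + d1 * income[e] + u1[e]

# The twin-network answer agrees with the counterfactual expectation
# obtained from the total effect of P on Q (here T = b1):
lhs = q_tomorrow.mean()
rhs = q[e].mean() + b1 * (p1 - p[e].mean())
assert abs(lhs - rhs) < 1e-6
```

As the assertion suggests, in this linear setting the two computations coincide exactly, since both reduce to b1·p1 plus the conditional means of d1·I and U1 given the evidence.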

3. In the latter part of your book, you use counterfactuals to define concepts such as 'the cause of X' or 'necessary and sufficient cause of Y'. Again, I can understand that it is tempting to mathematically define such concepts since they are in use in everyday language, but I do not think that this is generally very helpful. Why do we need to know 'the cause' of a particular event? Yes, we are interested in knowing 'causes' of events in the sense that they allow us to predict the future, but this is again a case of point (2) above.

To put it in the most simplified form, my argument is the following: Regardless of whether we represent individuals, businesses, organizations, or governments, we are constantly faced with decisions of how to act (and these are the only decisions we have!). What we want to know is what will likely happen if we act in particular ways. So what we want to know is P(future | do(action), see(context)). We do not want nor need the answers to counterfactuals.

Where does my reasoning go wrong?

## May 18, 2000

### Counterfactual notation

Filed under: Book (J Pearl), Counterfactual — moderator @ 12:00 am

From Jos Lehmann (University of Amsterdam):

Jos Lehmann noticed potential ambiguity in the notation used for counterfactual propositions. Capital letters, like "A" or "B," are sometimes used to denote propositional variables, and sometimes to denote propositions. For example, in the function A = C (Model M, page 209), "A" stands for the variable "whether rifleman-A shoots" and takes on values in {true, false}, while in statements S1-S5 (page 208), A stands for a proposition (e.g., "Rifleman-A shot").
