Causation without Manipulation
The second part of our latest post “David Freedman, Statistics, and Structural Equation Models” (May 6, 2015) has stimulated a lively email discussion among colleagues from several disciplines. In what follows, I will be sharing the highlights of the discussion, together with my own position on the issue of manipulability.
Many of the discussants noted that manipulability is strongly associated (if not equated) with “comfort of interpretation”. For example, we feel more comfortable interpreting sentences of the type “If we do A, then B would be more likely” compared with sentences of the type “If A were true, then B would be more likely”. Some attribute this association to the fact that empirical researchers (say epidemiologists) are interested exclusively in interventions and preventions, not in hypothetical speculations about possible states of the world. The question was raised as to why we get this sense of comfort. Reference was made to the new book by Tyler VanderWeele, where this question is answered quite eloquently:
“It is easier to imagine the rest of the universe being just as it is if a patient took pill A rather than pill B than it is trying to imagine what else in the universe would have had to be different if the temperature yesterday had been 30 degrees rather than 40. It may be the case that human actions seem sufficiently free that we have an easier time imagining only one specific action being different, and nothing else.”
(T. VanderWeele, “Explanation in Causal Inference,” pp. 453-455)
This sensation of discomfort with non-manipulable causation stands in contrast to the practice of SEM analysis, in which causes are represented as relations among interacting variables, free of external manipulation. To explain this contrast, I note that we should not overlook the purpose for which SEM was created — the representation of scientific knowledge. Even if we agree with the notion that the ultimate purpose of all knowledge is to guide actions and policies, not to engage in hypothetical speculations, the question still remains: How do we encode this knowledge in the mind (or in textbooks) so that it can be accessed, communicated, updated and used to guide actions and policies? By “how” I am concerned with the code, the notation, its syntax and its format.
There was a time when empirical scientists could dismiss questions of this sort (i.e., “how do we encode”) as psychological curiosa, residing outside the province of “objective” science. But now that we have entered the enterprise of causal inference, and we express concerns over the comfort and discomfort of interpreting counterfactual utterances, we no longer have the luxury of ignoring such questions; we must ask how scientists encode knowledge, because this question holds the key to the distinction between the comfortable and the uncomfortable, the clear and the ambiguous.
The reason I prefer the SEM specification of knowledge over a manipulation-restricted specification comes from the realization that SEM matches the format in which humans store scientific knowledge. (Recall that by “SEM” we mean a manipulation-free society of variables, each listening to the others and each responding to what it hears.) In support of this realization, I would like to copy below a paragraph from Wikipedia’s entry on Cholesterol, section on “Clinical Significance.” (It is about 20 lines long but worth a serious linguistic analysis).
——————–from Wikipedia, dated 5/10/15 —————
According to the lipid hypothesis, abnormal cholesterol levels (hypercholesterolemia) or, more properly, higher concentrations of LDL particles and lower concentrations of functional HDL particles are strongly associated with cardiovascular disease because these promote atheroma development in arteries (atherosclerosis). This disease process leads to myocardial infarction (heart attack), stroke, and peripheral vascular disease. Since higher blood LDL, especially higher LDL particle concentrations and smaller LDL particle size, contribute to this process more than the cholesterol content of the HDL particles, LDL particles are often termed “bad cholesterol” because they have been linked to atheroma formation. On the other hand, high concentrations of functional HDL, which can remove cholesterol from cells and atheroma, offer protection and are sometimes referred to as “good cholesterol”. These balances are mostly genetically determined, but can be changed by body build, medications, food choices, and other factors. [54] Resistin, a protein secreted by fat tissue, has been shown to increase the production of LDL in human liver cells and also degrades LDL receptors in the liver. As a result, the liver is less able to clear cholesterol from the bloodstream. Resistin accelerates the accumulation of LDL in arteries, increasing the risk of heart disease. Resistin also adversely impacts the effects of statins, the main cholesterol-reducing drug used in the treatment and prevention of cardiovascular disease.
————-end of quote ——————
My point in quoting this paragraph is to show that, even in “clinical significance” sections, most of the relationships are predicated upon states of variables, as opposed to manipulations of variables. They talk about being “present” or “absent”, being at high concentration or low concentration, smaller particles or larger particles; they talk about variables “enabling,” “disabling,” “promoting,” “leading to,” “contributing to,” etc. Only two of the sentences refer directly to exogenous manipulations, as in “can be changed by body build, medications, food choices…”
This manipulation-free society of sensors and responders that we call “scientific knowledge” is not oblivious to the world of actions and interventions; it was actually created to (1) guide future actions and (2) learn from interventions.
(1) The first frontier is well known. Given a fully specified SEM, we can predict the effect of compound interventions, both static and time varying, pre-planned or dynamic. Moreover, given a partially specified SEM (e.g., a DAG) we can often use data to fill in the missing parts and predict the effect of such interventions. These require, however, that the interventions be specified by “setting” the values of one or several variables. When the action of interest is more complex, say a disjunctive action like “paint the wall green or blue” or “practice at least 15 minutes a day”, a more elaborate machinery is needed to infer its effects from the atomic actions and counterfactuals that the model encodes (see http://ftp.cs.ucla.edu/pub/stat_ser/r359.pdf and Hernán et al. 2011). Such derivations are nevertheless feasible from SEM without enumerating the effects of all disjunctive actions of the form “do A or B” (which is obviously infeasible).
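To make the first frontier concrete, here is a minimal sketch of how a fully specified SEM answers interventional queries; the three equations, their coefficients, and the variable names Z, X, W, Y are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def simulate(do=None):
    """Draw from a toy, fully specified SEM. `do` maps variable names to set
    values: the corresponding equations are overridden (the "surgery"),
    while every other mechanism is left untouched."""
    do = do or {}
    z = rng.normal(size=N)
    x = do.get('X', 0.8 * z + rng.normal(size=N))   # X listens to Z
    w = do.get('W', 0.5 * x + rng.normal(size=N))   # W listens to X
    y = 1.2 * w - 0.7 * z + rng.normal(size=N)      # Y listens to W and Z
    return y

# Observational mean of Y versus its mean under the compound intervention
# do(X=1, W=2), in which two equations are overridden at once.
print("E[Y]                =", simulate().mean().round(2))
print("E[Y | do(X=1, W=2)] =", simulate(do={'X': 1.0, 'W': 2.0}).mean().round(2))
```

The interventions are specified, as stated above, by “setting” the values of one or several variables; everything else about the model stays exactly as it was.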
(2) The second frontier, learning from interventions, is less developed. We can of course check, using the methods above, whether a given SEM is compatible with the results of experimental studies (Causality, Def. 1.3.1). We can also determine the structure of an SEM from a systematic sequence of experimental studies. What we still lack, though, are methods of incremental updating, i.e., given an SEM M and an experimental study that is incompatible with M, modify M so as to match the new study without violating previous studies, even though only their ramifications are encoded in M.
Going back to the sensation of discomfort that people usually express vis-à-vis non-manipulable causes, should such discomfort bother users of SEM when confronting non-manipulable causes in their model? More concretely, should the difficulty of imagining “what else in the universe would have had to be different if the temperature yesterday had been 30 degrees rather than 40” be a reason for misinterpreting a model that contains variables labeled “temperature” (the cause) and “sweating” (the effect)? My answer is: No. At the deductive phase of the analysis, when we have a fully specified model before us, the model tells us precisely what else would be different if the temperature yesterday had been 30 degrees rather than 40.
Consider the sentence “Mary would not have gotten pregnant had she been a man”. I believe most of us would agree with the truth of this sentence despite the fact that we may not have a clue what else in the universe would have had to be different had Mary been a man. And if the model is any good, it would imply that regardless of other things being different (e.g., Mary’s education, income, self-esteem, etc.) she would not have gotten pregnant. Therefore, the phrase “had she been a man” should not be automatically rejected by interventionists as meaningless — it is quite meaningful.
Now consider the sentence: “If Mary were a man, her salary would be higher.” Here the discomfort is usually greater, presumably because we cannot imagine what else in the universe would have had to be different had Mary been a man, and, on top of that, those things (education, self-esteem, etc.) now make a difference in the outcome (salary). Are we justified now in declaring discomfort? Not when we are reading our model. Given a fully specified SEM, in which gender, education, income, and self-esteem are bona fide variables, one can compute precisely how those factors should be affected by a gender change. Complaints of “how do we know” are legitimate at the model construction phase, but not when we assume a fully specified model is before us and merely ask for its ramifications.
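To see how a fully specified model removes the guesswork, here is a minimal sketch of the standard three-step counterfactual computation (abduction, action, prediction) on a hypothetical linear SEM; the variables, coefficients, and observed values are invented purely for illustration and carry no empirical claim about gender and salary.

```python
# Toy linear SEM with hypothetical variables: gender G, education E, salary S,
# and unit-specific disturbances u_e, u_s.
#   E = a*G + u_e
#   S = b*G + c*E + u_s
a, b, c = 2.0, 1.0, 3.0

# Observed unit (made-up numbers): G = 0, E = 5, S = 18.
G_obs, E_obs, S_obs = 0.0, 5.0, 18.0

# Step 1 (abduction): recover this unit's disturbances from the evidence.
u_e = E_obs - a * G_obs                 # 5.0
u_s = S_obs - b * G_obs - c * E_obs     # 3.0

# Step 2 (action): perform the surgery set(G = 1) on this very unit.
G_cf = 1.0

# Step 3 (prediction): propagate forward with the recovered disturbances.
E_cf = a * G_cf + u_e                   # education would also have been different
S_cf = b * G_cf + c * E_cf + u_s
print(f"factual salary: {S_obs}, counterfactual salary had G been 1: {S_cf}")
```

The model itself dictates how education, and hence salary, would have responded to the change; no unaided imagination is required.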
To summarize, I believe the discomfort with non-manipulated causes represents a confusion between model utilization and model construction. In the former phase counterfactual sentences are well defined regardless of whether the antecedent is manipulable. It is only when we are asked to evaluate a counterfactual sentence by intuitive, unaided judgment, that we feel discomfort and we are provoked to question whether the counterfactual is “well defined”. Counterfactuals are always well defined relative to a given model, regardless of whether the antecedent is manipulable or not.
This takes us to the key question of whether our models should be informed by the manipulability restriction and, if so, how. Interventionists attempt to convince us that the very concept of causation hinges on manipulability and, hence, that a causal model void of manipulability information is incomplete, if not meaningless. We saw above that SEM, as a representation of scientific knowledge, manages quite well without the manipulability restriction. I would therefore be eager to hear from interventionists what their conception is of “scientific knowledge”, and whether they can envision an alternative to SEM which is informed by the manipulability restriction and yet provides a parsimonious account of what we know about the world.
My appeal to interventionists to provide alternatives to SEM has so far not been successful. Perhaps readers care to suggest some? The comment section below is open for suggestions, disputations and clarifications.
Dear Judea,
Is it possible that the confusion arises from mixing identification (encoded using NPSEM) with estimation (which might require some sort of manipulation)?
So perhaps the question is whether identification is helpful without estimation.
-Conrad.
Comment by Conrad — May 15, 2015 @ 8:59 am
Dear Conrad,
Thanks for joining this discussion.
No. I do not think the confusion arises from mixing identification with estimation.
Why? Because both identification and estimation require no manipulation.
In identification we glance at the graph and come up with an estimand of the causal effect needed,
which is simply an expression about conditional probabilities. In estimation we glance at the estimand
and endeavor to estimate it from a finite sample. No manipulation is involved in either.
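To illustrate the two steps, here is a minimal sketch assuming a hypothetical graph in which a single covariate Z satisfies the back-door criterion for the effect of X on Y, so the estimand read off the graph is sum_z P(y | x, z) P(z); the data are synthetic and the variable names are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 100_000

# Synthetic observational data in which Z confounds binary X and Y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z, n)
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z, n)
df = pd.DataFrame({'Z': z, 'X': x, 'Y': y})

# Identification: the estimand is an ordinary probability expression,
#   P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z).
# Estimation: plug sample frequencies from the finite sample into it.
def backdoor(df, x_val):
    p_z = df['Z'].value_counts(normalize=True)
    return sum(df[(df.X == x_val) & (df.Z == zv)]['Y'].mean() * p_z[zv]
               for zv in p_z.index)

print("P(Y=1 | do(X=1)) ~", round(backdoor(df, 1), 3))   # about 0.6
print("P(Y=1 | do(X=0)) ~", round(backdoor(df, 0), 3))   # about 0.3
```

Neither step manipulates anything; the graph yields a formula, and the sample fills in its numbers.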
Rather, I believe there is some confusion about what SEM is, and how come a collection of functions,
specified in a passive mode, has the capability to predict the effects of so many interventions and combinations
of interventions; so many counterfactuals and combinations of counterfactuals. I call it the “miracle of science”
and will elaborate on it in a subsequent comment.
Best
Judea
Comment by Judea Pearl — May 16, 2015 @ 12:23 am
Could you please help me figure out what a “fully specified SEM” is? As I understand it, it would be a model explicitly specifying all the relationships between all the possible variables in the universe that would make a difference in the outcome. Am I right? And, if that is the case, how does it work in practice to build a fully specified SEM, given that “we cannot imagine what else in the universe would have had to be different had Mary been a man”?
Comment by Hernando Casas — May 16, 2015 @ 12:40 am
Dear all,
Concerning the relationships between SEM and manipulations, it seems that my original post was not specific enough
on how SEM guides actions and policies given that no action or policy participates in the specification of SEM. I will be
more specific by answering a couple of questions that a colleague posed to me in a private email.
Colleague (paraphrased):
The notation p(y|set X=x) is too vague. The operator “set X=x”
should be replaced by a family of operators, representing the different
ways one can “set X to be x”. These correspond to all the possible
different manipulations.
So, there are infinitely many possible SEM’s rather than one. Choosing a
particular SEM (i.e., choosing a “set” operator) corresponds to choosing
the manipulation we have in mind.
Here are a couple of concrete questions to you:
(1) Do you agree there are many possible SEM’s one could choose from?
(2) Do you agree that choosing one SEM (i.e. model construction) essentially amounts
to describing which manipulation we are modeling?
JP answer:
The answer to both questions is NO, and I will explain why.
JP:
Given a society of sensors-responders, which I call SEM, I am defining
a mathematical operation called set(X=x) which has nothing to do with
interventions in the real world. It is defined by removing one equation (the one determining X) and substituting the constant x for X. Call it “incisive surgery”.
Again, it has nothing to do with interventions or potential interventions — truly nothing.
Is it well defined? Sure! It is like taking a partial derivative on a system of
equations, without committing it to be a description of how heat flows in an engine.
Is the partial derivative well defined? Sure. It depends on the nature of the functions
involved, not on the engine.
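A purely symbolic rendition may help; the sketch below performs the surgery on a hypothetical three-equation linear SEM (the symbols a, b, c, x0 and disturbances U_z, U_x, U_y are invented), and the algebra is all there is to it.

```python
import sympy as sp

Z, X, Y, Uz, Ux, Uy = sp.symbols('Z X Y U_z U_x U_y')
a, b, c, x0 = sp.symbols('a b c x0')

# A toy, fully specified SEM: each equation says who the variable listens to.
eq_Z = sp.Eq(Z, Uz)
eq_X = sp.Eq(X, a * Z + Ux)
eq_Y = sp.Eq(Y, b * X + c * Z + Uy)

# Before surgery: solve the full system for Y in terms of the exogenous U's.
pre = sp.solve([eq_Z, eq_X, eq_Y], [Z, X, Y], dict=True)[0]
print('Y before surgery:', sp.expand(pre[Y]))   # a*b*U_z + b*U_x + c*U_z + U_y

# Surgery for set(X = x0): remove eq_X and substitute the constant x0 for X.
post = sp.solve([eq_Z, eq_Y.subs(X, x0)], [Z, Y], dict=True)[0]
print('Y after surgery: ', sp.expand(post[Y]))  # b*x0 + c*U_z + U_y
```

Nothing in the computation refers to an engine, a clinic, or a policy; set(X=x0) is defined on the equations alone.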
Let’s continue to actual interventions:
In my vocabulary, you are concerned about there being many possible ways to intervene to ensure that X=x will be true in the world, each of which may have a different effect on the outcome Y. Note, however, that there is only one way to intervene in the way Set(X=x) was defined: namely, perturbing only one mechanism in the world, the one that enables X to listen to its parent variables. If the intervention of interest complies with this restriction, we say that it can be represented by Set(X=x) and, consequently, that we can predict its consequences by solving the mutilated set of equations resulting from Set(X=x). If it does not, we ask: What other mechanisms does it perturb? We then attempt to represent this compound perturbation within our SEM (in a variety of ways, some involving setting other variables, some involving removing arrows, etc.). If we succeed, we rejoice; if not, we declare: UNDEFINED. Namely, the intervention of interest is too complex to be described by our SEM, and the SEM needs to be enriched with further details to accommodate the idiosyncratic features of the intervention of interest.
This usually happens when the intervention of interest has side effects that the author of the SEM did not contemplate. If this is the case, the SEM must be enriched to describe how those side effects impact the outcome of interest. There is no way of predicting the effect of policies that no one suspected to have side effects.
Let us go to the second question:
(2) Do you agree that choosing one SEM (i.e., model construction) essentially amounts
to describing which manipulation we are modeling?
Nope. When we construct an SEM we need not think about manipulation at all. We merely ask each variable, “Who do you listen to when you change your state? Give me an answer and I will tell you who your ‘parents’ are in the model.” These innocent questions about listening do not involve any manipulation and yet they can predict the effects of millions of potential manipulations, depending on what we know about them.
Neat, isn’t it?
It is actually a miracle that we can represent so parsimoniously the answers to so many potential manipulations. This is the miracle of “scientific knowledge” and why scientists crave to acquire it, i.e., so that they would not need to carry in their minds a new system of equations for every nuance of implementation. One system suffices. This is the miracle of science.
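To make the parsimony concrete: a single table of “who listens to whom” already determines the mutilated model for any choice of intervened variables. In the sketch below the model and its variable names are invented for illustration.

```python
# A hypothetical model recorded purely as parent lists ("who listens to whom").
parents = {
    'gender':        [],
    'education':     ['gender'],
    'qualification': ['education'],
    'hiring':        ['gender', 'qualification'],
}

def mutilate(parents, do_vars):
    """Parent lists of the mutilated model under set(...) on do_vars:
    each intervened variable simply stops listening to its parents."""
    return {v: ([] if v in do_vars else ps) for v, ps in parents.items()}

# One structure serves every intervention query.
print(mutilate(parents, {'education'}))
print(mutilate(parents, {'gender', 'qualification'}))
```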
Comment by Judea Pearl — May 16, 2015 @ 1:19 am
Dear Hernando,
Your question has two parts.
1.
What is a “fully specified SEM”?
It is indeed a model explicitly specifying all the functions between all the variables in the model (not in the universe).
The variables in the model are selected judgmentally by the modeler, and only those deemed relevant are represented.
For example, the modeler should decide whether Mary’s kidneys and blood count qualify as “relevant” or not.
Moreover, in practice we never specify the functions; we merely postulate their existence and specify only the graph, and then ask whether data can make up for our ignorance of those functions. This is what identification is all about.
2.
How does it work in practice to build a fully specified SEM, given that “we cannot imagine what else in the universe would
have had to be different had Mary been a man”?
Ans. We need only specify the graph, and this too is not simple.
In practice, the modeler needs to filter (judgmentally) the relevant from the irrelevant. For example, he/she needs to decide if Mary’s education is relevant, in which case a node named “education” will be part of the graph, with Mary’s gender as one of its parents. The exact relationship between gender and education need not be specified. He/she also needs to decide if the reaction of Mary’s peers to the news that Mary underwent a sex-change operation is relevant. This variable will be deemed relevant if indeed we are contemplating such an operation as part of the policy question. But if we are concerned only with the effect of Mary’s gender, not with the method by which it might change, I would not include this variable as relevant.
I should reiterate here my previous comment. No one can predict the effect of causes that have unspecified side effects, and this holds true regardless of whether the causes are manipulable (like diet) or non-manipulable (like gender). Such information must come from someplace; the model is only helpful in putting such pieces of information together, not in providing them.
I hope this is helpful.
Judea
Comment by Judea Pearl — May 16, 2015 @ 2:22 am
Dear Judea,
Thank you for clarification, it is very helpful. I have two more questions which I think are more relevant at the data analysis stage.
1) Suppose someone is interested in whether the hypothesis x–>y is supported by data (she is not sure if the arrow exists or not). Now she finds that P(Y=y | set[X=x]) = 30%, but she is faced with a dilemma. She can’t figure out how much of the 30% is due to those who would still experience the event if X were set to x’. If that value is close to 30% she might decide to get rid of the arrow (because it doesn’t matter at what level X is, the outcome will still be 30%); otherwise she will keep it. Is there a way to address her problem without getting into the exercise of estimating counterfactuals?
2) This question is on the practicability of statistical tools. Statistical models for drawing causal inference (e.g., PS, MSMs, etc.) usually require a positivity assumption, which means they might not be helpful for the data analyst if the only information s/he has is for X=x and other levels of X (e.g., x’) are missing. How can one use such models without having to estimate potential outcomes?
-Conrad
Comment by Conrad — May 16, 2015 @ 9:35 am
Dear Conrad,
1. Non-existence of an arrow X—>Y means that Y is non-responsive to X. Getting only one point for P(y|do(x)) does not tell us whether Y is responsive or not, so it is not enough for deciding whether the arrow X—>Y makes a difference.
But I do not understand your last sentence: “without getting into the exercise of estimating counterfactuals?”
No such exercise would help us here; if the information is not available, all the king’s horses will not provide it.
Plus, counterfactuals are derivatives of SEM, so whatever cannot be done with SEM is not doable with counterfactuals.
2) Again, if positivity is violated, no technique in the world (especially not estimation techniques) can replenish this information.
And you again express hope through “having to estimate potential outcomes”. Potential outcomes are just abstractions of SEM; they cannot replenish missing information.
Best
JP
Comment by Judea Pearl — May 17, 2015 @ 12:23 am
Dear all,
An epidemiologist colleague sent me the following query (paraphrased):
“Most epidemiologists would never construct a causal model for the purpose of investigating the causal effect of things which are non-manipulable. Of course, they model sex/ethnicity in the DAG, but only as confounders with respect to a main exposure which IS manipulable. I do NOT – in general – write down a DAG for the purpose of assessing the causal effect of sex on some outcome. ”
My Answer:
True, your research question starts and ends with ONE treatment in mind.
But this is not the style in all disciplines.
In economics and ecology, for example, researchers are concerned with a multitude of interventions. One day it is taxation, the next day it is interest rates, unemployment, duties, subsidies, etc.
So, to be prepared for such a plurality of interventions, they just model the entire economy (or a segment of it), and once this is done, every variable can potentially become an “intervention”, be it directly or indirectly.
You say that you would never think of using sex as a “cause”.
Well, I must confess to having committed this sin, and I am proud of the results.
Here it is:
Example : A policy maker wishes to assess the extent to which gender disparity in hiring can be reduced by making hiring decisions gender-blind, rather than eliminating gender inequality in education or job training. The former concerns the “direct effect” of gender on hiring, while the latter concerns the “indirect effect,” or the effect mediated via job qualification.
Here I purposely defied the mantra “no causation without manipulation”, and openly proclaimed the task to be “estimate the direct effect of gender on hiring”, to the sound of protests: “undefined!!” “undefined!!”, coming from loyal interventionists.
The result was a principled answer to the policy maker’s dilemma, which the interventionists would not dare pose, let alone solve. (See
http://ftp.cs.ucla.edu/pub/stat_ser/R273-U.pdf and
http://ftp.cs.ucla.edu/pub/stat_ser/r382.pdf )
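For readers who want the computational content of the direct/indirect distinction, here is a minimal sketch of the mediation formulas for the natural direct and indirect effects (along the lines of the papers linked above), assuming the confounding conditions under which these effects are identified; the conditional probabilities plugged in are placeholders, not data.

```python
# Mediation formulas (X = gender, M = qualification, Y = hiring, all hypothetical):
#   NDE = sum_m [E(Y | x1, m) - E(Y | x0, m)] * P(m | x0)
#   NIE = sum_m  E(Y | x0, m) * [P(m | x1) - P(m | x0)]

def nde_nie(E_y, P_m, x0, x1):
    """E_y[(x, m)] = E(Y | X=x, M=m); P_m[(m, x)] = P(M=m | X=x)."""
    m_values = {m for (m, _) in P_m}
    nde = sum((E_y[(x1, m)] - E_y[(x0, m)]) * P_m[(m, x0)] for m in m_values)
    nie = sum(E_y[(x0, m)] * (P_m[(m, x1)] - P_m[(m, x0)]) for m in m_values)
    return nde, nie

# Placeholder numbers, purely for illustration.
E_y = {(0, 0): 0.2, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.7}  # hiring prob. by (gender, qualification)
P_m = {(0, 0): 0.6, (1, 0): 0.4, (0, 1): 0.3, (1, 1): 0.7}  # qualification dist. by gender
print(nde_nie(E_y, P_m, x0=0, x1=1))  # roughly (0.2, 0.09)
```

The direct effect answers the “gender-blind hiring” question; the indirect effect answers the “equalize qualification” question.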
I then learned that there is profound wisdom in “freedom of expression.”
I have experienced the same feeling of liberation when I took the first class on
complex numbers. Surely, there are no imaginary numbers; we do not need priestly chanting to remind us that they do not exist. Yet by allowing ourselves a temporary
immersion in the freedom of complex analysis we get answers to questions about
real numbers that are inconceivable without that freedom (or sin?).
The manipulability issue is a red herring. I could have easily pacified the alarmists by
agreeing to annotate all my DAGs with red/blue labels, to distinguish manipulable
from non-manipulable variables, and then promise never to call the latter “causes”.
But that would be a betrayal of science and of human thought. Our courts define discrimination in terms of counterfactuals on gender (e.g., “had Mary been of a different sex”) and our minds are wired with unmanipulable counterfactuals.
I would rather maintain a scientific view of the world than allow
alarmists to distort it. The fact that the most vocal alarmists come from a camp
that lacks a representation for scientific knowledge does not add credibility to the alarm.
Judea
Comment by Judea Pearl — May 17, 2015 @ 1:24 am
Dear Prof. Judea
Thank you for your answer. It is very helpful indeed. A couple of further questions though …
1) So, in building a fully specified SEM we should postulate the existence of the functions between the relevant variables and specify those in graphs. So, I guess this should be a 100% conceptual exercise and we should include all relevant variables regardless of their observability or manipulability. Am I right? Then it is a matter of identification. But you could end up (more often than not, I guess) with a nice conceptual model that, in practice, you cannot estimate/identify?
2) At some point above you said: “There is no way of predicting the effect of policies that no one suspected to have side effects”. What about policies that everybody suspected to have one or two side effects but nobody suspected to have a third side effect?
3) Can you point me to a judicious application of the SEM approach for causal inference/prediction? I ask this because, although conceptually I think it is clear and appealing, I have a lot of trouble figuring out how it works in practice for very complex problems, where you have many mediating variables, bi-directional causality, many unobservables, etc.
Sorry if my questions are not so sophisticated but I am still struggling to understand what exactly “The Confusion
of the Century” is all about.
Comment by Hernando Casas — May 17, 2015 @ 8:19 am
Dear Hernando,
(1) You are right; if nature is complex, we cannot uncover it without experiments.
(2) The third side effect is the same as the first. Unsuspected side effects cannot be anticipated by the model. The model just takes the input information and delivers its logical ramifications (these logical ramifications may be surprising, because humans are not too good at logic, but they are still logical consequences of the input information).
(3) The books by Tyler VanderWeele (cited above) and by Morgan and Winship contain examples of complex real-life studies. Actually, almost every modern study in epidemiology or social science starts with a graphical conception; some display it, and some hide it. Stick with the former.
(4) Remind me where I used this phrase “confusion of the century”. I think it was a slide on regression analysis.
Yes, regression is an embarrassment to economists and statisticians. But I have made so many enemies trying
to reform this literature that I must stay silent for a few years, give them a chance to repent on their own; they
will eventually come back with the right understanding, as if they knew it all along. Science is 90% timing.
Best
Judea
Comment by Judea Pearl — May 17, 2015 @ 10:15 pm
A comment from a rank amateur at this game. Re the sentences “If we do A, then B would be more likely” vs “If A were true, then B would be more likely”. To me it seems that we are more comfortable with the first because it is manifestly clear which way the causality goes: A can be manipulated, so it is the cause. In the second version, it’s not so clear: it could still be causal (A causes B), but it could also be inferential rather than causal (from A we *infer* B). For example, the second sentence could be “If the barometer reading is high, then the air pressure is [more likely to be] high”, but the high air pressure is the cause, not the barometer reading. Ambiguity leads to discomfort; manipulability identifies the cause unambiguously.
Comment by June Lester — May 18, 2015 @ 10:03 pm
Dear June,
Excellent question.
But note that the second conditional is phrased in the subjunctive mood (“if it were”), not the indicative. The purpose of the subjunctive is to take us back in time, change the world (minimally) and project back to the future. The indicative, on the other hand, says “if we see A, we should expect B”.
I should have made it clearer.
Thanks,
JP
Comment by Judea Pearl — May 19, 2015 @ 12:08 am
[…] discussion of “causation without manipulation” (link) acquires an added sense of relevance when considered in the context of public concerns with […]
Pingback by Causal Analysis in Theory and Practice » Does Obesity Shorten Life? Or is it the Soda? — May 27, 2015 @ 1:46 pm
Hm, “… Given a fully specified SEM, we can predict the effect of compound interventions, both static and time varying, pre-planned or dynamic. Moreover, given a partially specified SEM (e.g., a DAG) we can often use data to fill in the missing parts and predict the effect of such interventions. …” – I very much doubt that. If you carry out a double-blind study there are hardly any follow-up studies (which would often be prohibitively complex) to see how the drug actually pans out in practice. You cannot eliminate the placebo effect in tablet-form drugs, because, say, a red, oval pill might be the ideal placebo in a certain case while a rectangular blue placebo would have no effect at all in alleviating symptoms. To really find out the efficacy of an active ingredient it should be administered as a blue rectangular pill competing against lactose in a red oval-shaped placebo. And there goes the “blind” part of any study, unless it were injections. And forcing studies to be registered before publication so that the “bad apples” cannot be suppressed just leads to first doing a study and reproducing it once more (after registering it this time) when the unregistered pilot looked promising enough. Against this backdrop, all efforts to come to grips with pharmacological reasoning are a bit like wondering how many angels fit on the tip of a pin …
Comment by Maureen Coffey — June 13, 2015 @ 5:00 am
Dear Judea,
could you elaborate on why manipulability is considered so important for causality?
Obviously, it has been crucial in experimental design that has been the foundation of statistical/probabilistic causality. But there are many aspects of causality that require only ‘hypothetical manipulability’, i.e., us being able to reason about counterfactuals in hypothetical, thought experiments (e.g., what if the Earth stopped spinning?)
Regarding your interesting question on “scientific knowledge” for “interventionists” (although I am unsure about the meaning of the latter term). My understanding is that scientific knowledge is encoded in the experimental phase.
I can see that SEMs can be very useful tools in the design process but they may not be necessary.
Comment by Panos — December 29, 2015 @ 5:25 am
Dear Panos,
Here are my attempts to answer your questions
1. could you elaborate on why manipulability is considered so important for causality?
Ans. I do not consider it “important”, but some people do. This page summarizes the two positions.
2.Obviously, it has been crucial in experimental design that has been the foundation of statistical/probabilistic causality. But there are many aspects of causality that require only ‘hypothetical manipulability’, i.e., us being able to reason about counterfactuals in hypothetical, thought experiments (e.g., what if the Earth stopped spinning?)
Ans. Agreed, this is my position too, with the added reminder that we cannot run “hypothetical thought experiments” unless we have a symbolic representation of reality on which we can run those experiments.
3. Regarding your interesting question on “scientific knowledge” for “interventionists” (although I am unsure about the meaning of the latter term). My understanding is that scientific knowledge is encoded in the experimental phase. I can see that SEMs can be very useful tools in the design process but they may not be necessary.
Ans. Show me one exercise in causal reasoning that does not invoke SEM, and I will show you a set of assumptions that can be derived from SEM.
So, why keep SEM in hiding if our assumptions reside there?
There is one exception: RCTs. Here we can keep SEM in hiding, because the randomness of the coin is common knowledge; it need not be justified case by case.
Judea
Comment by Judea — January 15, 2016 @ 11:21 pm
Dear Judea,
thanks for the reply.
Regarding point 3, what about all the classical work in experimental design that does not invoke SEM?
Or maybe you consider this work as part of “RCT” ?
Comment by Panos — January 16, 2016 @ 1:10 am