Causal Analysis in Theory and Practice

August 13, 2019

Lord’s Paradox: The Power of Causal Thinking

Filed under: Uncategorized — Judea Pearl @ 9:41 pm

Background

This post aims to provide further insight to readers of “Book of Why” (BOW) (Pearl and Mackenzie, 2018) on Lord’s paradox and the simple way this decades-old paradox was resolved when cast in causal language. To recap, Lord’s paradox (Lord, 1967; Pearl, 2016) involves two statisticians, each using what seems to be a reasonable strategy of analysis, yet reaching opposite conclusions when examining the data shown in Fig. 1 (a) below.


Figure 1: Wainer and Brown’s revised version of Lord’s paradox and the corresponding causal diagram.

The story, in the form described by Wainer and Brown (2007), reads:

“A large university is interested in investigating the effects on the students of the diet provided in the university dining halls …. Various types of data are gathered. In particular, the weight of each student at the time of his arrival in September and his weight the following June (WF) are recorded.”

The first statistician (named John) looks at the weight gains associated with the two dining halls, finds them equally distributed, and naturally concludes that Diet has no effect on Gain. The second statistician (named Jane) uses the initial weight (WI) as a covariate and finds that, for every level of WI, the final-weight (WF) distribution for Hall B is shifted above that of Hall A, thus concluding that Diet does have an effect on Gain. Who is right?

The Book of Why resolved this paradox using causal analysis. First, noting that at issue is “the effect of Diet on weight Gain,” a causal model is postulated in the form of the diagram of Fig. 1(b). Second, noting that WI is the only confounder of Diet and Gain, Jane was declared “unambiguously correct” and John “incorrect”.

The Critics

The simplicity of this solution invariably evokes skepticism among statisticians. “But how can we be sure of the diagram?” they ask. This kind of skepticism is natural, since statisticians are not trained in postulating causal assumptions, that is, assumptions that cannot be articulated in the language of mainstream statistics and therefore cannot be tested using the available data. However, after reminding the critics that the contention between John and Jane surrounds the notion of “effect,” and that “effect” is a causal, not statistical, notion, enlightened statisticians accept the idea that diagrams need to be drawn and that the one in Fig. 1(b) is reasonable. Its main assumptions are: Diet does not affect the initial weight, and the initial weight is the only factor affecting both Diet and final weight.

A series of recent posts by S. Senn, however, introduced a new line of criticism into our story (Senn, 2019). It focuses on the process by which the data of Fig. 1(a) were generated, and it invokes RCT considerations such as block design, experiments with many halls, analysis of variance, standard errors, and more. Statisticians among my Twitter followers “liked” Senn’s critiques, and I am not sure whether they were convinced by my argument that Lord’s paradox has nothing to do with experimental procedures. In other words, the conflict between John and Jane persists even when the data are generated by a clean and uncomplicated process, such as the one depicted in Fig. 1(b).

Senn’s critiques can be summarized thus (quoted):

“I applied John Nelder’s experimental calculus [5, 6] … and came to the conclusion that the second statistician’s solution is only correct given an untestable assumption and that even if the assumption were correct and hence the estimate were appropriate, the estimated standard error would almost certainly be wrong.”

My response was:

Lord’s paradox is about causal effects of Diet. In your words: “diet has no effect” according to John and “diet does have an effect” according to Jane. We know that, inevitably, every analysis of “effects” must rely on causal, hence “untestable assumptions”. So BOW did a superb job in calling the attention of analysts to the fact that the nature of Lord’s paradox is causal, hence outside the province of mainstream statistical analysis. This explains why I agree with your conclusion that “the second statistician’s solution is only correct given an untestable assumption”. Had you concluded that we can decide who is correct without relying on “an untestable assumption”, you and Nelder would have been the first mortals to demonstrate the impossible, namely, that assumption-free correlation does imply causation.

Now let me explain why your last conclusion also attests to the success of BOW. You conclude: “even if the assumption were correct, … the estimated standard error would almost certainly be wrong.”

The beauty of Lord’s paradox is that it demonstrates the surprising clash between John and Jane in purely qualitative terms, with no appeal to numbers, standard errors, or confidence intervals. Luckily, the surprising clash persists in the asymptotic limit where Lord’s ellipses represent infinite samples, tightly packed into those two elliptical clouds.

Some people consider this asymptotic abstraction to be a “limitation” of graphical models. I consider it a blessing and a virtue, enabling us, again, to separate things that matter (the clash over causal effects) from those that don’t (sample variability, standard errors, p-values, etc.). More generally, it permits us to separate issues of estimation, that is, going from samples to distributions, from those of identification, that is, going from distributions to cause-effect relationships. BOW goes to great lengths explaining why this last stage presented an insurmountable hurdle to analysts lacking the appropriate language of causation.

Note that BOW declares Jane to be “unambiguously correct” in the context of the causal assumptions displayed in the diagram (Fig. 1(b)), where Diet is shown NOT to influence initial weight, and the initial weight is shown to be the (only) factor that makes students prefer one diet or another. Changing these assumptions may lead to another problem and another resolution but, once we agree with the assumptions, our choice of Jane as the correct statistician is “unambiguously correct.”

As an example (requested on Twitter), if dining halls have their own effect on weight gain (say, Hall A provides free weight-watching instructions to diners), our model will change as depicted in Fig. 2. In this setup, WI is no longer the sole confounder, and both WI and Hall need to be adjusted for to obtain the effect of Diet on Gain. In other words, Jane will no longer be “correct” unless she analyzes each stratum of the Diet-Hall combination and finds a preference of Diet-A over Diet-B.


Figure 2:  Separating Diet from Hall in Lord’s Story

 

New Insights

The upsurge of interest in Lord’s paradox gives me an opportunity to elaborate on another interesting aspect of our Diet-weight model, Fig. 1.

Having concluded that Statistician-2 (Jane) is “unambiguously correct” and that Statistician-1 (John) is wrong, an astute reader would ask: “And what about the sure-thing principle? Isn’t the overall gain just an average of the stratum-specific gains?” (where each stratum represents a level of the initial weight WI). Previously, in the original version of the paradox (Fig. 6.8 of BOW), we dismissed this intuition by noting that WI was affected by the causal variable (Sex); but now, with the arrow pointing from WI to Diet, we can no longer use this argument. Indeed, the diagram tells us (using the back-door criterion) that the causal effect of Diet on Gain can be obtained by adjusting for the (only) confounder, WI, yielding:

P(Y | do(Diet)) = ∑wi P(Y | Diet, WI = wi) P(WI = wi)

In other words, the overall gain resulting from administering a given diet to everyone is none other than the gain observed in a given diet-weight group, averaged over the weight. How is it possible, then, for the latter to be positive (as seen from the shifted ellipses) and, simultaneously, for the former to be zero (as seen from the perfect alignment of the ellipses along the WF = WI line)?

One would be tempted to suggest that data matching the ellipses of Fig. 6.9(a) can never be generated by the model of Fig. 6.9(b), in which WI is the only confounder. But this could not possibly be the case, because we know that the model has no refuting implications, so it cannot be refuted by the position of the two ellipses.

The answer is that the sure-thing principle applies to causal effects, not to statistical associations. The perfect alignment of the ellipses does not mean that the effect of Diet on Gain is zero; it means only that Gain is statistically independent of Diet:

P(Gain|Diet=A) = P(Gain|Diet=B)

not that Gain is causally unaffected by Diet. In other words, the equality above does not imply the equality

P(Gain|do(Diet=A)) = P(Gain|do(Diet=B))

which Statistician-1 (John) wants us to believe.

Our astute student will of course question this explanation and, pointing to Fig. 1(b), will ask: how can Gain be independent of Diet when the diagram shows them connected? The answer is that the three paths connecting Diet and Gain cancel each other in such a way that an overall independence shows up in the data.
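To see this cancellation concretely, here is a minimal simulation sketch (not part of the original analysis; the linear-Gaussian form and all coefficients are illustrative assumptions, chosen so that the paths balance exactly). John’s regression of Gain on Diet alone finds nothing, while Jane’s adjustment for WI recovers the Diet effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Linear-Gaussian version of Fig. 1(b); coefficients are chosen so that
# the three paths connecting Diet and Gain cancel exactly.
WI = rng.normal(size=n)              # initial weight
D = WI + rng.normal(size=n)          # diet choice, driven by initial weight
WF = 0.5 * D + rng.normal(size=n)    # final weight; true Diet effect = 0.5
Gain = WF - WI

def slope(y, regressors):
    """Least-squares coefficient of the last regressor (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0][-1]

print(slope(Gain, [D]))       # John: ~0.0, Gain is independent of Diet
print(slope(Gain, [WI, D]))   # Jane: ~0.5, the causal effect (back-door)
```

With these numbers the direct path contributes +0.5 · Var(D) = +1 to Cov(Gain, Diet), while the path through WI contributes −Cov(WI, D) = −1, so the marginal association vanishes even though the stratum-specific effect is 0.5 everywhere.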

Conclusions

Lord’s paradox starts with a clash between two strong intuitions: (1) to get the effect we want, we must make “proper allowances” for uncontrolled preexisting differences between groups (i.e., initial weights), and (2) the overall effect (of Diet on Gain) is just the average of the stratum-specific effects. Like the bulk of human intuitions, these two are CAUSAL. Therefore, to reconcile the apparent clash between them we need a causal language; statistics alone won’t do.

The difficulties that generations of statisticians have had in resolving this apparent clash stem from lacking a formal language in which to express the two intuitions, as well as the conditions under which they are applicable. Missing were: (1) a calculus of “effects” and its associated causal sure-thing principle, and (2) a criterion (the back-door criterion) for deciding when “proper allowances for preexisting conditions” are warranted. We are now in possession of these two ingredients, and we should enjoy the power of causal analysis to resolve this paradox, which generations of statisticians have found intriguing, if not vexing. We should also feel empowered to resolve all the paradoxes that surface from the causation-association confusion that our textbooks have bestowed upon us.

 

References

Lord, F.M. “A paradox in the interpretation of group comparisons,” Psychological Bulletin, 68(5):304-305, 1967.

Pearl, J. “Lord’s Paradox Revisited — (Oh Lord! Kumbaya!)”, Journal of Causal Inference, Causal, Casual, and Curious Section, 4(2), September 2016. https://ftp.cs.ucla.edu/pub/stat_ser/r436.pdf

Pearl, J. and Mackenzie, D. Book of Why, NY: Basic Books, 2018. http://bayes.cs.ucla.edu/WHY/

Senn, S. “Red herrings and the art of cause fishing: Lord’s Paradox revisited” (Guest post) August 2, 2019. https://errorstatistics.com/2019/08/02/s-senn-red-herrings-and-the-art-of-cause-fishing-lords-paradox-revisited-guest-post/

Wainer, H. and Brown, L.M., “Three statistical paradoxes in the interpretation of group differences: Illustrated with medical school admission and licensing data,” in C.R. Rao and S. Sinharay (Eds.), Handbook of Statistics 26: Psychometrics, North Holland: Elsevier B.V., pp. 893-918, 2007.

June 1, 2019

Graphical Models and Instrumental Variables

Filed under: Uncategorized — Judea Pearl @ 8:09 am

At the request of readers, we re-post below a previous comment from Bryant and Elias (2014) concerning the use of graphical models for determining whether a variable is a valid IV.


Dear Conrad,
Following your exchange with Judea, we would like to present concrete examples of how graphical tools can help determine whether a variable qualifies as an instrument. We use the example of a job training program which Imbens used in his paper on instrumental variables.

In this example, the goal is to estimate the effect of a training program (X) on earnings (Y). Imbens suggested proximity (Z) as a possible instrument for assessing the effect of X on Y. He then mentioned that the assumption that Z is independent of the potential outcomes {Yx} is a strong one, noting that it can be made more plausible by conditioning on covariates.

To illustrate how graphical models can be used  in determining the plausibility of the exclusion restriction, conditional on different covariates, let us consider the following scenarios.

Scenario 1. Suppose that the training program is located in the workplace. In this case, proximity (Z) may affect the number of hours employees spend at the office (W), since they spend less time commuting, and this, in turn, may affect their earnings (Y).

Scenario 2. Suppose further that the efficiency of the workers (unmeasured) affects both the number of hours (W) and their salary (Y). (This is represented in the graph through the inclusion of a bidirected arrow between W and Y.)

Scenario 3. Suppose even further that this is a high-tech industry and workers can easily work from home. In this case, the number of hours spent at the office (W) has no effect on earnings (Y). (This is represented in the graph through the removal of the directed arrow from W to Y.)

Scenario 4. Finally, suppose that worker efficiency also affects whether workers attend the program, because less efficient workers are more likely to benefit from training. (This is represented in the graph through the inclusion of a bidirected arrow between W and X.)

The following figures correspond to the scenarios discussed above. 

IV graphs

The reason we like to work with graphs on such problems is, first, that we can represent these scenarios clearly and unambiguously, and, second, that we can derive the answer in each of these scenarios by inspection of the causal graphs. Here are our answers (we assume a linear model; for the nonparametric case, use LATE):

Scenario 1.
Is the effect of X on Y identifiable? Yes.
How? Using Z as an instrument, conditioning on W; the effect is equal to r_{zy.w} / r_{zx.w}. (A simulation sketch of this scenario follows Scenario 4.)
Testable implications? (W independent of X given Z)

Scenario 2.
Is the effect of X on Y identifiable? No.
How? n/a.
Testable implications? (W independent of X given Z)

Scenario 3.
Is the effect of X on Y identifiable? Yes.
How? Using Z as an instrument; the effect is equal to r_{zy} / r_{zx}.
Remark. Conditioning on W disqualifies Z as an instrument.
Testable implications? (W independent of X given Z)

Scenario 4.
Is the effect of X on Y identifiable? Yes.
How? Using Z as an instrument; the effect is equal to r_{zy} / r_{zx}.
Remark. Conditioning on W disqualifies Z as an instrument.
Testable implications?
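As promised, here is a minimal simulation sketch of Scenario 1 (our own illustration, not part of the original exchange; the linear coefficients are arbitrary, and we add an unobserved confounder U of X and Y, which is what makes an instrument necessary in the first place). Partialling W out and taking the covariance ratio, which matches r_{zy.w} / r_{zx.w} up to standardization, recovers the true effect, while the unconditional ratio is biased by the open path Z → W → Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Linear version of Scenario 1 (coefficients arbitrary). U is an unobserved
# confounder of X and Y; it is the reason an instrument is needed at all.
U = rng.normal(size=n)
Z = rng.normal(size=n)                        # proximity (candidate instrument)
W = 0.8 * Z + rng.normal(size=n)              # hours spent at the office
X = 0.5 * Z + 0.7 * U + rng.normal(size=n)    # training
beta = 1.5                                    # true effect of X on Y
Y = beta * X + 0.6 * W + 0.7 * U + rng.normal(size=n)

def residualize(v, w):
    """Return v minus its least-squares projection on (1, w)."""
    A = np.column_stack([np.ones_like(w), w])
    return v - A @ np.linalg.lstsq(A, v, rcond=None)[0]

# Conditional IV: partial W out, then take the covariance ratio.
Zr, Xr, Yr = (residualize(v, W) for v in (Z, X, Y))
print(np.cov(Zr, Yr)[0, 1] / np.cov(Zr, Xr)[0, 1])   # ~1.5, the true effect

# Unconditional IV is biased here by the open path Z -> W -> Y:
print(np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1])       # ~2.46, not 1.5
```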

In summary, the examples demonstrate Imbens’s point that judging whether a variable (Z) qualifies as an instrument hinges on substantive assumptions underlying the problem being studied. Naturally, these assumptions follow from the causal story about the phenomenon under study. We believe graphs are an attractive language for solving this type of problem, for two reasons. First, a graph is a transparent representation in which researchers can express the causal story and discuss its plausibility. Second, as a formal representation of those assumptions, it allows us to apply mechanical procedures to evaluate the queries of interest: for example, whether a specific set Z qualifies as an instrument, whether there exists a set Z that qualifies as an instrument, and what the testable implications of the causal story are.

We hope the examples illustrate these points.
Bryant and Elias

March 19, 2019

CAUSAL INFERENCE SUMMER SHORT COURSE AT HARVARD

Filed under: Uncategorized — Judea Pearl @ 5:37 am

We are informed of the following short course at Harvard. Readers of this blog will probably wonder what this Harvard-specific jargon is all about, and whether it has a straightforward translation into Structural Causal Models. It has! And one of the challenges of contemporary causal inference is to navigate the literature despite its seeming diversity, and to work towards convergence of ideas, tools and terminology.

Summer Short Course “An Introduction to Causal Inference”

Date: June 3-7, 2019

Instructors: Miguel Hernán, Judith Lok, James Robins, Eric Tchetgen Tchetgen & Tyler VanderWeele

This 5-day course introduces concepts and methods for causal inference from observational data. Upon completion of the course, participants will be prepared to further explore the causal inference literature. Topics covered include the g-formula, inverse probability weighting of marginal structural models, g-estimation of structural nested models, causal mediation analysis, and methods to handle unmeasured confounding. The last day will end with a “capstone” open Q&A session with the instructors.

Prerequisites: Participants are expected to be familiar with basic concepts in epidemiology and biostatistics, including linear and logistic regression and survival analysis techniques.

Tuition: $600/person, to be paid at the time of registration. A limited number of tuition waivers are available for students.

Date/Location: June 3-7, 2019 at the Harvard T.H. Chan School of Public Health. 

Details and registration: https://www.hsph.harvard.edu/causal/shortcourse/

February 12, 2019

Lion Man – Ulm Museum

Filed under: Uncategorized — Judea Pearl @ 6:25 am

Stefan Conrady, Managing Partner of Bayesia, was kind enough to send us an interesting selfie he took with the Lion Man that is featured in Chapter 1 of Book of Why.

He also added that the Ulm Museum (where the Lion Man is on exhibit) is situated near the house where Albert Einstein was born in 1879.

This makes Ulm a home to two revolutions of human cognition.

January 15, 2019

More on Gelman’s views of causal inference

Filed under: Uncategorized — Judea Pearl @ 5:37 pm

In the past two days I have been engaged in discussions regarding Andrew Gelman’s review of Book of Why.

These discussions unveil some of our differences as well as some agreements. I am posting some of the discussions below, because Gelman’s blog represents the thinking of a huge segment of practicing statisticians who are, by and large, not very talkative about causation. It is interesting, therefore, to understand how they think and what makes them tick.


Judea Pearl says: January 12, 2019 at 8:24 am

Andrew,
I appreciate your kind invitation to comment on your blog. Let me start with a Tweet that I posted on https://twitter.com/yudapearl

(updated 1.10.19)
1.8.19 @11:59pm – Gelman’s review of #Bookofwhy should be of interest because it represents an attitude that paralyzes wide circles of statistical researchers. My initial reaction is now posted on https://bit.ly/2H3BH3b Related posts: https://ucla.in/2sgzkPZ and https://ucla.in/2v72QK5

These postings speak for themselves but I would like to respond here to your recommendation: “Similarly, I’d recommend that Pearl recognize that the apparatus of statistics, hierarchical regression modeling, interactions, post-stratification, machine learning, etc etc solves real problems in causal inference.”

It sounds like a mild and friendly recommendation, and your readers would probably get upset at anyone who would be so stubborn as to refuse it.

But I must. Because, from everything I know about causation, the apparatus you mentioned does NOT, and CANNOT solve any problem known as “causal” by the causal-inference community (which includes your favorites Rubin, Angrist, Imbens, Rosenbaum, etc etc.). Why?

Because the solution to any causal problem must rest on causal assumptions and the apparatus you mentioned has no representation for such assumptions.

1. Hierarchical models are based on set-subset relationships, not causal relationships.

2. “interactions” is not an apparatus unless you represent them in some model, and act upon them.

3. “post-stratification” is valid only after you decide what you stratify on, and this requires a causal structure (which you claim above to be an unnecessary “wrapping” and “complication”).

4. “Machine learning” is just fancy curve fitting of data; see https://ucla.in/2umzd65

Thus, what you call “statistical apparatus” is helpless in solving causal problems. We came to this juncture several times in the past and, invariably, you pointed me to books, articles, and elaborate works which, in your opinion, do solve “real life causal problems.” So, how are we going to resolve our disagreement on whether those “real life” problems are “causal” and, if they are, whether your solution of them is valid? I suggested applying your methods to toy problems whose causal character is beyond dispute. You did not like this solution, and I do not blame you, because solving ONE toy problem will turn your perception of causal analysis upside down. It is frightening. So I would not press you. But I will add another Tweet before I depart:

1.9.19 @2:55pm – An ounce of advice to readers who comment on this “debate”: Solving one toy problem in causal inference tells us more about statistics and science than ten debates, no matter who the debaters are. #Bookofwhy

Addendum. Solving ONE toy problem will tell you more than a dozen books, articles, and multi-cited reports. You can find many such toy problems (solved in R) here: https://ucla.in/2KYYviP and a sample of the solution manual here: https://ucla.in/2G11xUE

For your readers’ convenience, I have provided free access to chapter 4 here: https://ucla.in/2G2rWBv It is about counterfactuals and, if I were not inhibited by modesty, I would confess that it is the best text on counterfactuals and their applications that you can find anywhere.

I hope you take advantage of my honesty.
Enjoy
Judea


Andrew says: January 12, 2019 at 11:37 am

Judea:

We are in agreement. I agree that data analysis alone cannot solve any causal problems. Substantive assumptions are necessary too. To take a familiar sort of example, there are people out there who just think that if you fit a regression of the form, y = a + bx + cz + error, that the coefficients b and c can be considered as causal effects. At the level of data analysis, there are lots of ways of fitting this regression model. In some settings with good data, least squares is just fine. In more noisy problems, you can do better with regularization. If there is bias in the measurements of x, z, and y, that can be incorporated into the model also. But none of this legitimately gives us a causal interpretation until we make some assumptions. There are various ways of expressing such assumptions, and these are talked about in various ways in your books, in the books by Angrist and Pischke, in the book by Imbens and Rubin, in my book with Hill, and in many places. Your view is that your way of expressing causal assumptions is better than the expositions of Angrist and Pischke, Imbens and Rubin, etc., that are more standard in statistics and econometrics. You may be right! Indeed, I think that for some readers your formulation of this material is the best thing out there.

Anyway, just to say it again: We agree on the fundamental point. This is what I call in the above post the division of labor, quoting Frank Sinatra etc. To do causal inference requires (a) assumptions about causal structure, and (b) models of data and measurement. Neither is enough. And, as I wrote above:

I agree with Pearl and Mackenzie that typical presentations of statistics, econometrics, etc., can focus way too strongly on the quantitative without thinking at all seriously about the qualitative aspects of the problem. It’s usually all about how to get the answer given the assumptions, and not enough about where the assumptions come from. And even when statisticians write about assumptions, they tend to focus on the most technical and least important ones, for example in regression focusing on the relatively unimportant distribution of the error term rather than the much more important concerns of validity and additivity.

If all you do is set up probability models, without thinking seriously about their connections to reality, then you’ll be missing a lot, and indeed you can make major errors in causal reasoning . . .

Where we disagree is just on terminology, I think. I wrote, “the apparatus of statistics, hierarchical regression modeling, interactions, poststratification, machine learning, etc etc., solves real problems in causal inference.” When I speak of this apparatus, I’m not just talking about probability models; I’m also talking about assumptions that map those probability models to causality. I’m talking about assumptions such as those discussed by Angrist and Pischke, Imbens and Rubin, etc.—and, quite possibly, mathematically equivalent in these examples to assumptions expressed by you.

So, to summarize: To do causal inference, we need (a) causal assumptions (assumptions of causal structure), and (b) models or data analysis. The statistics curriculum spends much more time on (b) than (a). Econometrics focuses on (a) as well as (b). You focus on (a). When Angrist, Pischke, Imbens, Rubin, Hill, me, and various others do causal inference, we do both (a) and (b). You argue that if we were to follow your approach on (a), we’d be doing better work for those problems that involve causal inference. You may be right, and in any case I’m glad you and Mackenzie wrote this book which so many people have found helpful, just as I’m glad that the aforementioned researchers wrote their books on causal inference which so many have found helpful. A framework for causal inference—whatever that framework may be—is complementary to, not in competition with, data-analysis tools such as hierarchical modeling, poststratification, machine learning, etc.

P.S. I’ll ignore the bit in your comment where you say you know what is “frightening” to me.


Judea Pearl says: January 13, 2019 at 6:59 am

Andrew,

I would love to believe that where we disagree is just on terminology. Indeed, I see sparks of convergence in your last post, where you enlighten me to understand that by “the apparatus of statistics, …” you include the assumptions that PO folks (Angrist and Pischke, Imbens and Rubin, etc.) are making, namely, assumptions of conditional ignorability. This is a great relief, because I could not see how the apparatus of regression, interaction, post-stratification or machine learning alone could elevate you from rung-1 to rung-2 of the Ladder of Causation. Accordingly, I will assume that whenever Gelman and Hill talk about causal inference they tacitly or explicitly make the ignorability assumptions that are needed to take them from associations to causal conclusions. Nice. Now we can proceed to your summary and see if we still have differences beyond terminology.

I almost agree with your first two sentences: “So, to summarize: To do causal inference, we need (a) causal assumptions (assumptions of causal structure), and (b) models or data analysis. The statistics curriculum spends much more time on (b) than (a)”.

But we need to agree that just making “causal assumptions” and leaving them hanging in the air is not enough. We need to do something with the assumptions, listen to them, and process them so as to properly guide us in the data analysis stage.

I believe that by (a) and (b) you meant to distinguish identification from estimation. Identification indeed takes the assumptions and translates them into a recipe with which we can operate on the data so as to produce a valid estimate of the research question of interest. If my interpretation of your (a) and (b) distinction is correct, permit me to split (a) into (a1) and (a2), where (a2) stands for identification.

With this refined taxonomy, I have strong reservations about your third sentence: “Econometrics focuses on (a) as well as (b).” Not all of econometrics. The economists you mentioned, while commencing causal analysis with “assumptions” (a1), vehemently resist organizing these assumptions in any “structure,” be it a DAG or structural equations (some even pride themselves on being “model-free”). Instead, they restrict their assumptions to conditional ignorability statements so as to justify familiar estimation routines. [In https://ucla.in/2mhxKdO, I labeled them “experimentalists” or “structure-free economists,” to be distinguished from “structuralists” like Heckman, Sims, or Matzkin.]

It is hard to agree, therefore, that these “experimentalists” focus on (a2) — identification. They actually assume (a2) away rather than use it to guide data analysis.

Continuing with your summary, I read: “You focus on (a).” Agree. I interpret (a) to mean (a) = (a1) + (a2) and I let (b) be handled by smart statisticians, once they listen to the guidance of (a2).

Continuing, I read:
“When Angrist, Pischke, Imbens, Rubin, Hill, me, and various others do causal inference, we do both (a) and (b).” Not really. And it is not a matter of choosing “an approach.” By resisting structure, these researchers a priori deprive themselves of answering causal questions that are identifiable by do-calculus and not by a single conditional ignorability assumption. Each of those questions may require a different estimand, which means that you cannot start the “data analysis” phase before completing the identification phase.

[Currently, even questions that are identifiable by conditional ignorability assumption cannot be answered by structure-free PO folks, because deciding on the conditioning set of covariates is intractable without the aid of DAGs, but this is a matter of efficiency not of essence.]

But your last sentence is hopeful:
“A framework for causal inference — whatever that that framework may be — is complementary to, not in competition with, data-analysis tools such as hierarchical modeling, post-stratification, machine learning, etc.”

Totally agree, with one caveat: the framework has to be a genuine “framework,” i.e., capable of leveraging identification to guide data analysis.

Let us look now at why a toy problem would be frightening, not only to you, but to anyone who believes that the PO folks are offering a viable framework for causal inference.

Let’s take the simplest causal problem possible, say a Markov chain X → Z → Y, with X standing for Education, Z for Skill and Y for Salary. Let Salary be determined by Skill only, regardless of Education. Our research problem is to find the causal effect of Education on Salary given observational data of (perfectly measured) X, Y, Z.

To appreciate the transformative power of a toy example, please try to write down how Angrist, Pischke, Imbens, Rubin, and Hill would go about doing (a) and (b) according to your understanding of their framework. You are busy, I know, so let me ask any of your readers to try and write down, step by step, how the graph-less school would go about it. Any reader who tries this exercise ONCE will never be the same. It is hard to believe unless you actually go through this frightening exercise, so please try.

Repeating my sage-like advice: Solving one toy problem in causal inference tells us more about statistics and science than ten debates, no matter who the debaters are.
Try it.

[Judea Pearl added in editing: I have received no solution thus far, not even an attempt. For readers of this blog, the chain is part of the front-door model, which is treated in Causality, pp. 232-234, in both the graphical and potential-outcome frameworks. I have yet to meet a PO researcher who can formulate this toy story in PO, let alone solve it. Not because they can’t, but because the very idea of listening to their understanding of a problem and translating that understanding into formal assumptions is foreign to them, having been conditioned to assume ignorability and estimate a quantity that is easily estimable.]
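For readers who would like to see a toy of this kind carried from assumptions to conclusions, here is a minimal simulation sketch of the front-door version of the chain (our own construction, with arbitrary parameters): an unobserved confounder U of Education (X) and Salary (Y) is added, so that the raw association is biased, yet the front-door formula recovers the effect from observational data on (X, Z, Y) alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

def simulate(do_x=None):
    """Front-door model: U -> X, U -> Y (U unobserved), X -> Z -> Y."""
    U = rng.random(n) < 0.5                              # e.g., efficiency
    if do_x is None:
        X = rng.random(n) < np.where(U, 0.8, 0.3)        # Education
    else:
        X = np.full(n, do_x, dtype=bool)
    Z = rng.random(n) < np.where(X, 0.9, 0.2)            # Skill
    Y = rng.random(n) < np.where(Z, 0.7, 0.2) + 0.1 * U  # Salary
    return X, Z, Y

X, Z, Y = simulate()        # observational data; U is discarded (unobserved)

def front_door(x):
    """P(Y=1 | do(X=x)) = sum_z P(z|x) * sum_x' P(Y=1|z,x') P(x')."""
    est = 0.0
    for z in (False, True):
        pz_x = np.mean(Z[X == x] == z)
        for xp in (False, True):
            est += pz_x * Y[(Z == z) & (X == xp)].mean() * np.mean(X == xp)
    return est

print(front_door(True) - front_door(False))   # estimated from (X, Z, Y) only

# Ground truth, obtained by actually intervening in the simulation:
_, _, Y1 = simulate(do_x=True)
_, _, Y0 = simulate(do_x=False)
print(Y1.mean() - Y0.mean())                  # ~0.35 with these parameters
```

The two printed numbers agree even though U is never observed; a naive comparison of Y across X groups would not.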


Andrew says: January 13, 2019 at 8:26 pm

Judea:

I think we agree on much of the substance. And I agree with you regarding “not all econometrics” (and, for that matter, not all of statistics, not all of sociology, etc.). As I wrote in my review of your book with Mackenzie, and in my review of Angrist and Pischke’s book, causal identification is an important topic and worth its own books.

In practice, our disagreement is, I think, that we focus on different sorts of problems and different sorts of methods. And that’s fine! Division of labor. You have toy problems that interest you, I have toy problems that interest me. You have applied problems that interest you, I have applied problems that interest me. I would not expect you to come up with methods of solving the causal inference problems that I work on, but that’s OK: your work is inspirational to many people and I can well believe it has been useful in certain applications as well as in developing conceptual understanding. I consider toy problems of my own for that same reason. I’m not particularly interested in your toy problems, but that’s fine; I doubt you’re particularly interested in the problems I focus on. It’s a big world out there.

In the meantime, you continue to characterize me as being frightened or lacking courage. I wish you’d stop doing that.

[Judea Pearl added in editing: Gelman wants to move identification to separate books, because it is important, but the fact that one cannot start estimation before having an identifiable estimand is missing from his comment. Is he aware of it? Does he really do estimation before identification? I do not know; it is a foreign culture to me.]


Judea Pearl says: January 13, 2019 at 10:51 pm

Andrew,
Convergence is in sight, modulo two corrections:
1. You say:
“You [Pearl] have toy problems that interest you, I [Andrew] have toy problems that interest me. …I doubt you’re particularly interested in the problems I focus on. ”
Wrong! I am very interested in your toy problems, especially those with a causal flavor. Why? Because I love to challenge the SCM framework with new tasks and new angles that other researchers have found to be important, and to see if SCM can be enriched with expanded scope. So, by all means, if you have a new twist, shoot. I have not been able to do it in the past, because your shots were not toy-like, e.g., 3-4 variables, a clear task, with the correct answer known.

2. You say:
“you continue to characterize me as being frightened or lacking courage.” This was not my intention. My last remark on frightening toys was general: everyone is frightened by the honesty and transparency of toys — the adequacy of one’s favorite method is undergoing a test of fire. Who wouldn’t be frightened? But, since you prefer, I will stop using this metaphor.

3. Starting afresh, and for the sake of good spirit: how about attacking a toy problem? Just for fun, just for sport.


Andrew says: January 13, 2019 at 11:24 pm

Judea:

I’ve attacked a lot of toy problems.

For an example of a toy problem in causality, see pages 962-963 of this article.

But most of the toy problems I’ve looked at do not involve causality; see for example this paper, item 4 in this post, and this paper.  This article on experimental design is simple enough that I think it could count as a toy problem: it’s a simple example without data which allows us to compare different methods. And here’s a theoretical paper I wrote awhile ago that has three toy examples. Not involving causal inference, though.

I’ve written lots of papers with causal inference, but they’re almost all applied work. This may be because I consider myself much more of a practitioner of causal inference than a researcher on causal inference. To the extent I’ve done research on causal inference, it’s mostly been to resolve some confusions in my mind (as in this paper).

This gets back to the division-of-labor thing. I’m happy for you and Imbens and Hill and Robins and VanderWeele and others to do research on fundamental methods for causal inference, while I do research on statistical analysis. The methods that I’ve learned have allowed my colleagues and me to make progress on a lot of applied problems in causal inference, and have given me some clarity in understanding problems with some naive formulations of causal reasoning (as in the first reference above in this comment).

[Judea Pearl. Added in editing: Can one really make progress on a lot of applied problems in causal inference without dealing with identification? Evidently, PO folks think so, at least those in Gelman’s circles.]

As I wrote in my above post, I think your book with Mackenzie has lots of great things in it; I just can’t go with a statement such as, “Using a calculus of cause and effect developed by Pearl and others, scientists now have the ability to answer such questions as whether a drug cured an illness, when discrimination is to blame for disparate outcomes, and how much worse global warming can make a heat wave”—because scientists have been answering such questions before Pearl came along, and scientists continue to answer such questions using methods other than Pearl’s. For what it’s worth, I don’t think the methods that my colleagues and I have developed are necessary for solving these or any problems. Our methods are helpful in some problems, some of the time, at least until something better comes along—I think that’s pretty much all that any of us can hope for! That, and we can hope that our writings inspire new researchers to come up with new methods that are useful in the future.


Judea Pearl says: January 14, 2019 at 2:18 am

Andrew,
Agree to division of labor: causal inference on one side and statistical analysis on the other.

Assuming that you give me some credibility on the first, let me try and show you that even the publisher’s advertisement that you mock with disdain is actually true and carefully expressed. It reads: “Using a calculus of cause and effect developed by Pearl and others, scientists now have the ability to answer such questions as whether a drug cured an illness, when discrimination is to blame for disparate outcomes, and how much worse global warming can make a heat wave.”

First, note that it includes “Pearl and others,” which theoretically might include the people you have in mind. But it does not; it refers to those who developed mathematical formulations and mathematical tools to answer such questions. So let us examine the first question: “whether a drug cured an illness.” This is a counterfactual, “cause of effect” type question. Do you know when it was first formulated mathematically? [Don Rubin declared it non-scientific.]

Now let’s go to the second: “when discrimination is to blame for disparate outcomes.” This is a mediation problem. Care to guess when this problem was first formulated (see Book of Why, chapter 9) and what the solution is? Bottom line: Pearl is not as thoughtless as your review portrays him to be and, if you advise your readers to control their initial reaction (“Hey, statisticians have been doing it for centuries”), they would value learning how things were first formulated, first solved, and why statisticians were not always the first.


Andrew says: January 14, 2019 at 6:46 pm

Judea:

I disagree with your implicit claim that, before your methods were developed, scientists were not able to answer such questions as whether a drug cured an illness, when discrimination is to blame for disparate outcomes, and how much worse global warming can make a heat wave. I doubt much will be gained by discussing this particular point further so I’m just clarifying that this is a point of disagreement.

Also, I don’t think in my review I portrayed you as thoughtless. My message was that your book with Mackenzie is valuable and interesting even though it has some mistakes. In my review I wrote about the positive part as well as the mistakes. Your book is full of thought!

[Judea Pearl. Added in edit: Why can’t Gelman “go with a statement such as, ‘Using a calculus of cause and effect developed by Pearl and others, scientists now have the ability to answer such questions as whether a drug cured an illness, when discrimination is to blame for disparate outcomes, and how much worse global warming can make a heat wave’”? His answer is: “because scientists have been answering such questions before Pearl came along.” True, by trial and error, but not by mathematical analysis. And my statement marvels at the ability to do it analytically. So why can’t Gelman acknowledge that marvelous progress has been made, not by me, but by several researchers who realized that graph-less PO is a dead end?]

January 9, 2019

Can causal inference be done in statistical vocabulary?

Filed under: Uncategorized — Judea Pearl @ 6:59 am

Andrew Gelman has just posted a review of The Book of Why (https://andrewgelman.com/2019/01/08/book-pearl-mackenzie/); my answer to some of his comments follows below:

“Andrew, 

The hardest thing for people to snap out of is the bubble of their own language. You say: “I find it baffling that Pearl and his colleagues keep taking statistical problems and, to my mind, complicating them by wrapping them in a causal structure (see, for example, here).” 

No way! And again: no way! There is no way to answer causal questions without snapping out of statistical vocabulary. I have tried to demonstrate it to you in the past several years, but was not able to get you to solve ONE toy problem from beginning to end.

This will remain a perennial stumbling block until one of your readers tries honestly to solve ONE toy problem from beginning to end. No links to books or articles, no naming of fancy statistical techniques, no global economics problems, just a simple causal question whose answer we know in advance. (e.g. take Simpson’s paradox: Which data should be consulted? The aggregated or the disaggregated?) 

Even this group of 73 Editors found it impossible, and have issued the following guidelines for reporting observational studies: https://www.atsjournals.org/doi/pdf/10.1513/AnnalsATS.201808-564PS

To readers of your blog: Please try it. The late Dennis Lindley was the only statistician I met who had the courage to admit:  “We need to enrich our language with a do-operator”. Try it, and you will see why he came to this conclusion, and perhaps you will also see why Andrew is unable to follow him.”

Addendum:

In his response to my comment above, Andrew Gelman suggested that we agree to disagree, since science is full of disagreements and there is lots of room for progress using different methods. Unfortunately, the need to enrich statistics with new vocabulary is a mathematical fact, not an opinion. This need cannot be resolved by “there are many ways to skin a cat” without snapping out of traditional statistical language and enriching it  with causal vocabulary.  Neyman-Rubin’s potential outcomes vocabulary is an example of such enrichment, since it goes beyond joint distributions of observed variables.

Andrew further refers us to three chapters in his book (with Jennifer Hill) on causal inference. I am craving instead for one toy problem, solved from assumptions to conclusions, so that we can follow precisely the role played by the extra-statistical vocabulary and why it is absolutely needed. The Book of Why presents dozens of such examples, but readers would do well to choose their own.
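To make the Simpson’s-paradox question quoted above concrete, here is a minimal sketch (ours, for illustration) using the classic drug-recovery counts that circulate in this literature. Within each gender the drug looks helpful; pooled, it looks harmful; and if gender is a confounder of treatment and recovery, the back-door formula says the disaggregated data are the ones to consult:

```python
# Classic Simpson's-paradox counts; counts[z][x] = (recovered, total),
# with z the stratum and x = 1 (drug) or 0 (no drug).
counts = {"men":   {1: (81, 87),   0: (234, 270)},
          "women": {1: (192, 263), 0: (55, 80)}}

def rate(rn):
    recovered, total = rn
    return recovered / total

# Within each stratum, the drug looks better...
for z in counts:
    print(z, round(rate(counts[z][1]), 3), "vs", round(rate(counts[z][0]), 3))

# ...yet pooled over strata it looks worse (the reversal):
def pooled(x):
    return sum(counts[z][x][0] for z in counts) / sum(counts[z][x][1] for z in counts)
print("pooled:", round(pooled(1), 3), "vs", round(pooled(0), 3))

# If Z confounds treatment and recovery, the back-door formula
#   P(y | do(x)) = sum_z P(y | x, z) P(z)
# agrees with the disaggregated comparison:
n = sum(counts[z][x][1] for z in counts for x in (0, 1))
def adjusted(x):
    return sum(rate(counts[z][x]) * sum(counts[z][xx][1] for xx in (0, 1)) / n
               for z in counts)
print("adjusted:", round(adjusted(1), 3), "vs", round(adjusted(0), 3))
```

Had Z been a mediator rather than a confounder, the pooled data would have been the right ones to consult; the numbers alone cannot make that decision, which is precisely the point.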

September 15, 2016

Summer-end Greeting from the UCLA Causality Blog

Filed under: Uncategorized — bryantc @ 4:39 am

Dear friends in causality research,
—————————————
This greeting from UCLA Causality blog contains news and discussion on the following topics:

1. Reflections on 2016 JSM meeting.
2. The question of equivalent representations.
3. Simpson’s Paradox (Comments on four recent papers)
4. News concerning Causal Inference Primer
5. New books, blogs and other frills.

1. Reflections on JSM-2016
—————————————
For those who missed the JSM 2016 meeting, my tutorial slides can be viewed here: http://bayes.cs.ucla.edu/jsm-august2016.ppt

As you can see, I argue that current progress in causal inference should be viewed as a major paradigm shift in the history of statistics and, accordingly, nuances and disagreements are merely linguistic realignments within a unified framework. To support this view, I chose for discussion six specific achievements (called GEMS) that should make anyone connected with causal analysis proud, empowered, and mighty motivated.

The six gems are:
1. Policy Evaluation (Estimating “Treatment Effects”)
2. Attribution Analysis (Causes of Effects)
3. Mediation Analysis (Estimating Direct and Indirect Effects)
4. Generalizability (Establishing External Validity)
5. Coping with Selection Bias
6. Recovering from Missing Data

I hope you enjoy the slides and appreciate the gems.

2. The question of equivalent representations
—————————————
One challenging question that came up from the audience at JSM concerned the unification of the graphical and potential-outcome frameworks. “How can two logically equivalent representations be so different in actual use?”. I elaborate on this question in a separate post titled “Logically equivalent yet way too different.” http://causality.cs.ucla.edu/blog/index.php/2016/09/12/

3. Simpson’s Paradox: The riddle that would not die
(Comments on four recent papers)
—————————————
If you search Google for “Simpson’s paradox,” as I did yesterday, you would get 111,000 results, more than for any other statistical paradox that I could name. What elevates this innocent reversal of associations to “paradoxical” status, and why it has captured the fascination of statisticians, mathematicians and philosophers for over a century, are questions that we have discussed at length on this (and other) blogs. The reason I am back to this topic is the publication of four recent papers that give us a panoramic view of how the understanding of causal reasoning has progressed in communities that do not usually participate in our discussions. http://causality.cs.ucla.edu/blog/index.php/2016/08/24/

4. News concerning Causal Inference – A Primer
—————————————
We are grateful to Jim Grace for his in-depth review on Amazon: https://www.amazon.com/gp/customer-reviews/R2T3OB4WRGRRC0/ref=cm_cr_dp_d_rvw_ttl?ie=UTF8&ASIN=1119186846

For those of you awaiting the solutions to the study questions in the Primer (http://bayes.cs.ucla.edu/PRIMER/), I am informed that the Solution Manual is now available (to instructors) from Wiley. To obtain a copy, see page 2 of http://bayes.cs.ucla.edu/PRIMER/CIS-Manual-PUBLIC.pdf However, rumor has it that a quicker way to get it is through your local Wiley representative, at https://professor.wiley.com/CGI-BIN/LANSAWEB?PROCFUN+PROF1+PRFFN15

If you encounter difficulties, please contact us at causality.ucla@gmail.com and we will try to help. Readers tell me that the solutions are more enlightening than the text. I am not surprised; there is nothing more invigorating than seeing a non-trivial problem solved from A to Z.

5. New books, blogs and other frills
—————————————
5.1
We are informed that a new book by Joseph Halpern, titled “Actual Causality”, is available now from MIT Press. (https://www.amazon.com/Actual-Causality-Press-Joseph-Halpern/dp/0262035022). Readers familiar with Halpern’s fundamental contributions to causal reasoning will not be surprised to find here a fresh and comprehensive solution to the age-old problem of actual causality. Not to be missed.

5.2
Adam Kelleher writes about an interesting math-club and causal-minded blog that he is orchestrating. See his post, http://causality.cs.ucla.edu/blog/index.php/2016/09/11/

5.3
Glenn Shafer just published a review paper, “A Mathematical Theory of Evidence turns 40,” celebrating the 40th anniversary of the publication of his 1976 book “A Mathematical Theory of Evidence”: http://www.glennshafer.com/assets/downloads/MathTheoryofEvidence-turns-40.pdf I have enjoyed reading this article for nostalgic reasons, reminding me of the stormy days in the 1980s, when everyone was arguing for another calculus of evidential reasoning. My last contribution to that storm, just before sailing off to causality land, was this paper: http://ftp.cs.ucla.edu/pub/stat_ser/r136.pdf. Section 10 of Shafer’s article deals with his 1996 book “The Art of Causal Conjecture.” My thought: now that the causal inference field has matured, perhaps it is time to take another look at the way Shafer views causation.

Wishing you a super productive Fall season.

J. Pearl

September 12, 2016

Logically equivalent yet way too different

Filed under: Uncategorized — bryantc @ 2:50 am

Contributor: Judea Pearl

In comparing the tradeoffs between the structural and potential-outcome frameworks, I often state that the two are logically equivalent yet poles apart in terms of transparency and computational efficiency. (See Slide #34 of the JSM tutorial.) Indeed, anyone who examines how the two frameworks solve a specific problem from beginning to end (see, e.g., Slides #35-36) would find the differences astonishing.

The question naturally arises: how can two equivalent frameworks differ so substantially in actual use?

The answer is that epistemic equivalence does not mean representational equivalence. Two representations of the same information may highlight different aspects of the problem and thus differ substantially in how easy it is to solve a given problem. This is a recurrent theme in complexity analysis, but it is not generally appreciated outside computer science. We saw it in our discussions with Guido Imbens, who could not accept the fact that the use of graphical models is a mathematical necessity, not just a matter of taste. (http://causality.cs.ucla.edu/blog/index.php/2014/10/27/are-economists-smarter-than-epidemiologists-comments-on-imbenss-recent-paper/)

The examples usually cited in complexity analysis are combinatorial problems whose solution times depend critically on the initial representation. I hesitated to bring up these examples, fearing that they would not be compelling to readers of this blog who are more familiar with classical mathematics.

Last week I stumbled upon a very simple example that demonstrates representational differences in unambiguous terms; I would like to share it with readers.

Consider the age-old problem of finding a solution to an algebraic equation, say
y(x) = x^3 + ax^2 + bx + c = 0

This is a tough problem for those of us who do not remember Tartaglia’s solution of the cubic. (It can be made much tougher once we go to the quintic equation.)

But there are many syntactic ways of representing the same function y(x). Here is one equivalent representation:
y(x) = x(x^2 + ax) + b(x + c/b) = 0
and here is another:
y(x) = (x-x1)(x-x2)(x-x3) = 0,
where x1, x2, and x3 are some functions of a, b, c.

The last representation permits an immediate solution, which is:
x=x1, x=x2, x=x3.

The example may appear trivial, and some may even call it cheating, saying that finding x1, x2, and x3 is as hard as solving the original problem. This is true, but the purpose of the example was not to produce an easy solution to the cubic. The purpose was to demonstrate that different syntactic ways of representing the same information (i.e., the same polynomial) may lead to substantial differences in the complexity of computing an answer to a query (i.e., find a root).

A preferred representation is one that makes certain desirable aspects of the problem explicit, thus facilitating a speedy solution. Complexity theory is full of such examples.

Note that the complexity is query-dependent. Had our goal been to find a value x that makes the polynomial y(x) equal 4, not zero, the representation above, y(x) = (x-x1)(x-x2)(x-x3), would offer no help at all. For this query, the representation
y(x) = (x-z1)(x-z2)(x-z3) + 4
would yield an immediate solution,
x=z1, x=z2, x=z3,
where z1, z2, and z3 are the roots of another polynomial:
x^3 + ax^2 + bx + (c-4) = 0
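For readers who want to experiment, here is a tiny numerical sketch of this query-dependence (illustrative only; numpy’s generic root-finder stands in for whoever handed us the factored representation, and the coefficients are arbitrary):

```python
import numpy as np

a, b, c = 2.0, -5.0, 1.0   # arbitrary example coefficients

# Query 1: solve y(x) = 0, i.e., recover the x1, x2, x3 of the factored form.
print(np.roots([1, a, b, c]))

# Query 2: solve y(x) = 4. The factoring that answered query 1 is useless here;
# the helpful representation is x^3 + a*x^2 + b*x + (c - 4) = 0.
print(np.roots([1, a, b, c - 4]))
```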

This simple example demonstrates nicely the principle that makes graphical models more efficient than alternative representations of the same causal information, say a set of ignorability assumptions. What makes graphical models efficient is the fact that they make explicit the logical ramifications of the conditional independencies conveyed by the model. Deriving those ramifications by algebraic or logical means takes substantially more work. (See http://ftp.cs.ucla.edu/pub/stat_ser/r396.pdf for the logic of counterfactual independencies.)
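As a small illustration of what “making the ramifications explicit” buys us, here is a sketch that reads conditional independencies mechanically off a DAG, using networkx’s d-separation test (called d_separated in older releases and is_d_separator in newer ones; the two toy graphs, a chain and a collider, are our own choices):

```python
import networkx as nx

# A simple chain X -> Z -> Y:
chain = nx.DiGraph([("X", "Z"), ("Z", "Y")])
print(nx.d_separated(chain, {"X"}, {"Y"}, {"Z"}))    # True:  X _||_ Y | Z
print(nx.d_separated(chain, {"X"}, {"Y"}, set()))    # False: X and Y associated

# A collider X -> W <- Y reverses the pattern:
collider = nx.DiGraph([("X", "W"), ("Y", "W")])
print(nx.d_separated(collider, {"X"}, {"Y"}, set()))   # True:  marginally independent
print(nx.d_separated(collider, {"X"}, {"Y"}, {"W"}))   # False: conditioning opens W
```

Deriving the same facts from the graphoid axioms alone is exactly the kind of pages-long exercise described next.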

A typical example of how nasty such derivations can get is given in Heckman and Pinto’s paper “Causal Inference after Haavelmo” (Econometric Theory, 2015). Determined to avoid graphs at all costs, Heckman and Pinto derived conditional independence relations directly from Dawid’s axioms and the Markov condition (see https://en.wikipedia.org/wiki/Graphoid). The result is pages upon pages of derivations of independencies that are displayed explicitly in the graph. http://ftp.cs.ucla.edu/pub/stat_ser/r420.pdf

Of course, this and other difficulties will not persuade econometricians to use graphs; that would take a scientific revolution of Kuhnian proportions. (See http://ftp.cs.ucla.edu/pub/stat_ser/r391.pdf) Still, awareness of these complexity issues should give inquisitive students the ammunition to hasten the revolution and equip econometrics with modern tools of causal analysis.

They eventually will.

February 12, 2016

Winter Greeting from the UCLA Causality Blog

Friends in causality research,
This greeting from the UCLA Causality blog contains:

A. An introduction to our newly published book, Causal Inference in Statistics – A Primer, Wiley 2016 (with M. Glymour and N. Jewell)
B. Comments on two other books: (1) R. Kline’s Structural Equation Modeling and (2) L. Pereira and A. Saptawijaya’s book on machine ethics.
C. News, Journals, awards and other frills.

A.
Our publisher (Wiley) has informed us that the book “Causal Inference in Statistics – A Primer” by J. Pearl, M. Glymour and N. Jewell is already available on Kindle, and will be available in print Feb. 26, 2016.
http://www.amazon.com/Causality-A-Primer-Judea-Pearl/dp/1119186846
http://www.amazon.com/Causal-Inference-Statistics-Judea-Pearl-ebook/dp/B01B3P6NJM/ref=mt_kindle?_encoding=UTF8&me=

This book introduces core elements of causal inference into undergraduate and lower-division graduate classes in statistics and data-intensive sciences. The aim is to provide students with the understanding of how data are generated and interpreted at the earliest stage of their statistics education. To that end, the book empowers students with models and tools that answer nontrivial causal questions using vivid examples and simple mathematics. Topics include: causal models, model testing, effects of interventions, mediation and counterfactuals, in both linear and nonparametric systems.

The Table of Contents, Preface and excerpts from the four chapters can be viewed here:
http://bayes.cs.ucla.edu/PRIMER/
A book website providing answers to homework problems and interactive computer programs for simulation and analysis (using dagitty) is currently under construction.

B1
We are in receipt of the fourth edition of Rex Kline’s book “Principles and Practice of Structural Equation Modeling”, http://psychology.concordia.ca/fac/kline/books/nta.pdf

This book is unique in that it treats structural equation models (SEMs) as carriers of causal assumptions and tools for causal inference. Gone are the inhibitions and trepidation that characterize most SEM texts in their treatments of causation.

To the best of my knowledge, Chapter 8 in Kline’s book is the first SEM text to introduce graphical criteria for parameter identification — a long-overdue tool in a field that depends on identifiability for model “fitting.” Overall, the book elevates SEM education to new heights and promises to usher in a renaissance for a field that, five decades ago, pioneered causal analysis in the behavioral sciences.

B2
Much has been written lately on computer ethics, morality, and free will. The new book “Programming Machine Ethics” by Luis Moniz Pereira and Ari Saptawijaya formalizes these concepts in the language of logic programming. See book announcement http://www.springer.com/gp/book/9783319293530. As a novice to the literature on ethics and morality, I was happy to find a comprehensive compilation of the many philosophical works on these topics, articulated in a language that even a layman can comprehend. I was also happy to see the critical role that the logic of counterfactuals plays in moral reasoning. The book is a refreshing reminder that there is more to counterfactual reasoning than “average treatment effects”.

C. News, Journals, awards and other frills.
C1.
Nominations are Invited for the Causality in Statistics Education Award (Deadline is February 15, 2016).

The ASA Causality in Statistics Education Award is aimed at encouraging the teaching of basic causal inference in introductory statistics courses. Co-sponsored by Microsoft Research and Google, the prize is motivated by the growing importance of introducing core elements of causal inference into undergraduate and lower-division graduate classes in statistics. For more information, please see http://www.amstat.org/education/causalityprize/ .

Nominations and questions should be sent to the ASA office at educinfo@amstat.org . The nomination deadline is February 15, 2016.

C.2.
Issue 4.1 of the Journal of Causal Inference is scheduled to appear March 2016, with articles covering all aspects of causal analysis. For mission, policy, and submission information please see: http://degruyter.com/view/j/jci

C.3
Finally, enjoy new results and new insights posted on our technical report page: http://bayes.cs.ucla.edu/csl_papers.html

Judea

UAB’s Nutrition Obesity Research Center — Causal Inference Course

Filed under: Announcement,Uncategorized — bryantc @ 1:03 am
We received the following announcement from Richard F. Sarver (UAB):

UAB’s Nutrition Obesity Research Center invites you to join them at one or both of their five-day short courses at the University of Alabama at Birmingham.

June: The Mathematical Sciences in Obesity Research
The mathematical sciences, including engineering, statistics, computer science, physics, econometrics, psychometrics, epidemiology, and mathematics qua mathematics, are increasingly being applied to advance our understanding of the causes, consequences, and alleviation of obesity. These applications do not merely involve routine, well-established approaches easily implemented in widely available commercial software. Rather, they increasingly involve computationally demanding tasks, use and in some cases development of novel analytic methods and software, new derivations, computer simulations, and unprecedented interdigitation of two or more existing techniques. Such advances at the interface of the mathematical sciences and obesity research require bilateral training and exposure for investigators in both disciplines.

July: Strengthening Causal Inference in Behavioral Obesity Research
Identifying causal relations among variables is fundamental to science. Obesity is a major problem for which much progress in understanding, treatment, and prevention remains to be made. Understanding which social and behavioral factors cause variations in adiposity, and which other factors cause variations, is vital to producing, evaluating, and selecting intervention and prevention strategies. In addition, developing a greater understanding of obesity’s causes requires input from diverse disciplines including statistics, economics, psychology, epidemiology, mathematics, philosophy, and in some cases behavioral or statistical genetics. However, applying techniques from these disciplines does not involve routine, well-known “cookbook” approaches but requires an understanding of the underlying principles, so the investigator can tailor approaches to specific and varying situations.

For full details of each of the courses, please refer to our websites below:
Mon 6/13/2016 - Fri 6/17/2016: The Mathematical Sciences in Obesity, http://www.soph.uab.edu/energetics/shortcourse/third
Mon 7/25/2016 - Fri 7/29/2016: Strengthening Causal Inference in Behavioral Obesity Research, http://www.soph.uab.edu/energetics/causal_inference_shortcourse/second

Limited travel scholarships are available to young investigators. Please apply by Fri 4/1/2016; applicants will be notified of acceptance by Fri 4/8/2016. Women, members of underrepresented minority groups, and individuals with disabilities are strongly encouraged to apply. We look forward to seeing you in Birmingham this summer!
