Causal Analysis in Theory and Practice

July 7, 2020

Data versus Science: Contesting the Soul of Data-Science

Filed under: Book (J Pearl),Counterfactual,Data Fusion — judea @ 1:02 pm

Summary
The post below is written for the upcoming Spanish translation of The Book of Why, which was announced today. It expresses my firm belief that the current data-fitting direction taken by “Data Science” is temporary (read my lips!), that the future of “Data Science” lies in causal data interpretation and that we should prepare ourselves for the backlash swing.

Data versus Science: Contesting the Soul of Data-Science
Much has been said about how ill-prepared our health-care system was in coping with catastrophic outbreaks like COVID-19. Yet viewed from the corner of my expertise, the ill-preparedness can also be seen as a failure of information technology to keep track of and interpret the outpouring of data that arrived from multiple and conflicting sources, corrupted by noise and omission, some by sloppy collection and some by deliberate misreporting. AI could and should have equipped society with intelligent data-fusion technology to interpret such conflicting pieces of information and reason its way out of the confusion.

Speaking from the perspective of causal inference research, I have been part of a team that has developed a complete theoretical underpinning for such “data-fusion” problems; a development that is briefly described in Chapter 10 of The Book of Why. A system based on data-fusion principles should be able to attribute disparities between Italy and China to differences in political leadership, reliability of tests and honesty in reporting, adjust for such differences, and automatically infer behavior in countries like Spain or the US. AI is in a position to add such data-interpreting capabilities on top of the data-fitting technologies currently in use and, recognizing that data are noisy, filter the noise and outsmart the noise makers.

“Data fitting” is the name I frequently use to characterize the data-centric thinking that dominates both the statistics and machine learning cultures, in contrast to the “data-interpretation” thinking that guides causal inference. The data-fitting school is driven by the faith that the secret to rational decisions lies in the data itself, if only we are sufficiently clever at data mining. In contrast, the data-interpreting school views data not as the sole object of inquiry but as an auxiliary means for interpreting reality, where “reality” stands for the processes that generate the data.

I am not alone in this assessment. Leading researchers in the “Data Science” enterprise have come to realize that machine learning as it is currently practiced cannot yield the kind of understanding that intelligent decision making requires. However, what many fail to realize is that the transition from data-fitting to data-understanding involves more than a technology transfer; it entails a profound paradigm shift that is traumatic if not impossible. Researchers whose entire productive careers have committed them to the supposition that all knowledge comes from the data cannot easily transfer allegiance to a totally alien paradigm, according to which extra-data information is needed, in the form of man-made causal models of reality. Current machine learning thinking, which some describe as “statistics on steroids,” is deeply entrenched in this self-propelled ideology.

Ten years from now, historians will be asking: How could the scientific leaders of the time allow society to invest almost all its educational and financial resources in data-fitting technologies and so little in data-interpretation science? The Book of Why attempts to answer this question by drawing parallels to historically similar situations where ideological impediments held back scientific progress. But the true answer, and the magnitude of its ramifications, will only be unraveled by in-depth archival studies of the social, psychological and economic forces that are currently governing our scientific institutions.

A related, yet perhaps more critical, topic that came up in handling the COVID-19 pandemic is the issue of personalized care. Much of current health-care methods and procedures are guided by population data, obtained from controlled experiments or observational studies. However, the task of going from these data to the level of individual behavior requires counterfactual logic, which has been formalized and algorithmized in the past two decades (as narrated in Chapter 8 of The Book of Why), and is still a mystery to most machine learning researchers.

The immediate area where this development could have assisted the COVID-19 pandemic predicament concerns the question of prioritizing patients who are in “greatest need” of treatment, testing, or other scarce resources. “Need” is a counterfactual notion (i.e., patients who would have gotten worse had they not been treated) and cannot be captured by statistical methods alone. A recently posted blog page, https://ucla.in/39Ey8sU, demonstrates in vivid colors how counterfactual analysis handles this prioritization problem.
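
To make the counterfactual character of “need” concrete, here is a small sketch (my own, not taken from the cited post) of the kind of bounds that experimental data alone place on the probability of benefit, i.e., the probability that a patient recovers if treated and would not have recovered otherwise. The function name and the trial numbers are invented for illustration:

```python
def benefit_bounds(p_rec_do_treat, p_rec_do_control):
    """Bounds on P(benefit) = P(recovery if treated AND no recovery if
    untreated), derivable from experimental recovery rates alone."""
    lower = max(0.0, p_rec_do_treat - p_rec_do_control)
    upper = min(p_rec_do_treat, 1.0 - p_rec_do_control)
    return lower, upper

# Hypothetical trial: 75% recover under treatment, 25% under control.
lo, hi = benefit_bounds(0.75, 0.25)
print(lo, hi)  # 0.5 0.75
```

Note that the two treatment arms never pin P(benefit) down to a point; prioritizing by “need” therefore requires counterfactual logic on top of the trial data, not more data fitting.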

The entire enterprise known as “personalized medicine” and, more generally, any enterprise requiring inference from populations to individuals, rests on counterfactual analysis, and AI now holds the key theoretical tools for operationalizing this analysis.

People ask me why these capabilities are not part of the standard tool sets available for handling health-care management. The answer lies again in training and education. We have been rushing too eagerly to reap the low-hanging fruit of big data and data-fitting technologies, at the cost of neglecting data-interpretation technologies. Data-fitting is addictive, and building more “data-science centers” only intensifies the addiction. Society is waiting for visionary leadership to balance this over-indulgence by establishing research, educational and training centers dedicated to “causal science.”

I hope it happens soon, for we must be prepared for the next pandemic outbreak and the information confusion that will probably come in its wake.

Comments (1)

December 13, 2018

Winter Greetings from the UCLA Causality Blog

Filed under: Announcement,Book (J Pearl),General — Judea Pearl @ 11:37 pm

Dear friends in causality research,

In the past 5 months, since the publication of The Book of Why http://bayes.cs.ucla.edu/WHY/, I have been involved in conversations with many inquisitive readers on Twitter @yudapearl and have not been able to update our blog as frequently as I should. I am glad to return to this forum and update it with the major developments since July 2018.

1.
Initial reviews of The Book of Why are posted on its trailer page http://bayes.cs.ucla.edu/WHY/. They vary from technical discussions to philosophical speculations, from its relationship to machine learning to debates about the supremacy of randomized controlled trials.

2.
A searchable file of all my 750 tweets is available here: https://ucla.in/2Kz0FoY. It can be used for (1) extracting talking points, adages and arguments in defense of causal inference, and (2) understanding the thinking of neighboring cultures, e.g., statistics, epidemiology, economics, deep learning and reinforcement learning, primarily on issues of transparency, testability, manipulability, do-expressions and counterfactuals.

3.
The 6th printing of The Book of Why is now available, with corrections to all errors and typos discovered up to Oct. 29, 2018. To check that you have the latest printing, make sure the last line on the copyright page ends with … 8 7 6

4.
Please examine the latest papers and reports from our brewery:

R-484 Pearl, “Causal and Counterfactual Inference,” Forthcoming section in The Handbook of Rationality, MIT press. https://ucla.in/2Iz9myt

R-484 Pearl, “A note on oxygen, matches and fires, On Non-manipulable Causes,” September 2018. https://ucla.in/2Qb1h6v

R-483 Pearl, “Does Obesity Shorten Life? Or is it the Soda? On Non-manipulable Causes,” Journal of Causal Inference, 6(2), online, September 2018. https://ucla.in/2EpxcNU

R-481 Pearl, “The Seven Tools of Causal Inference with Reflections on Machine Learning,” July 2018. https://ucla.in/2umzd65 Forthcoming, Communications of the ACM.

R-479 Cinelli and Pearl, “On the utility of causal diagrams in modeling attrition: a practical example,” April 2018. https://ucla.in/2L8KAWw Forthcoming, Journal of Epidemiology.

R-478 Pearl and Bareinboim, “A note on ‘Generalizability of Study Results’,” April 2018. Forthcoming, Journal of Epidemiology. https://ucla.in/2NIsI6B

Earlier papers can be found here: http://bayes.cs.ucla.edu/csl_papers.html

5.
I wish in particular to call attention to the introduction of R-478, https://ucla.in/2NIsI6B. It provides a “three bullets” recipe for comparing the structural and potential outcome frameworks:

* To determine if there exist sets of covariates $W$ that satisfy “conditional exchangeability,”
* To estimate causal parameters at the target population in cases where such sets $W$ do not exist, and
* To decide if one’s modeling assumptions are compatible with the available data.

I have listed the “three bullets” above in the hope that they serve to facilitate and concretize future conversations with our neighbors from the potential outcome framework.

6. We are informed of a most relevant workshop: AAAI-WHY 2019, March 26-27, Stanford, CA. The 2019 AAAI Spring Symposium will host a new workshop: Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI. See https://why19.causalai.net. Submissions are due December 17, 2018.

Greetings and Happy Holidays
Judea

Comments (0)

June 15, 2018

A Statistician’s Re-Reaction to The Book of Why

Filed under: Book (J Pearl),Discussion,Simpson's Paradox — Judea Pearl @ 2:29 am

Responding to my June 11 comment, Kevin Gray posted a reply on kdnuggets.com in which he doubted the possibility that the Causal Revolution has solved problems that generations of statisticians and philosophers have labored over and could not solve. Below is my reply to Kevin’s Re-Reaction, which I have also submitted to kdnuggets.com:

Dear Kevin,
I am not suggesting that you are only superficially acquainted with my works. You actually show much greater acquaintance than most statisticians in my department, and I am extremely appreciative that you are taking the time to comment on The Book of Why. You are showing me what other readers with your perspective would think about the Book, and what they would find unsubstantiated or difficult to swallow. So let us go straight to these two points (i.e., unsubstantiated and difficult to swallow) and give them an in-depth examination.

You say that I have provided no evidence for my claim: “Even today, only a small percentage of practicing statisticians can solve any of the causal toy problems presented in the Book of Why.” I believe that I did provide such evidence, in each of the Book’s chapters, and that the claim is valid once we agree on what is meant by “solve.”

Let us take the first example that you bring, Simpson’s paradox, which is treated in Chapter 6 of the Book, and which is familiar to every red-blooded statistician. I characterized the paradox in these words: “It has been bothering statisticians for more than sixty years – and it remains vexing to this very day” (p. 201). This was, as you rightly noticed, a polite way of saying: “Even today, the vast majority of statisticians cannot solve Simpson’s paradox,” a fact which I strongly believe to be true.

You find this statement hard to swallow, because “generations of researchers and statisticians have been trained to look out for it [Simpson’s Paradox],” an observation that seems to contradict my claim. But I beg you to note that being “trained to look out for it” does not make researchers capable of “solving it,” namely capable of deciding what to do when the paradox shows up in the data.

This distinction appears vividly in the debate that took place in 2014 on the pages of The American Statistician, which you and I both cite. However, whereas you see the disagreements in that debate as evidence that statisticians have several ways of resolving Simpson’s paradox, I see them as evidence that they did not even come close. In other words, none of the other participants presented a method for deciding whether the aggregated data or the segregated data give the correct answer to the question: “Is the treatment helpful or harmful?”

Please pay special attention to the article by Keli Liu and Xiao-Li Meng, both from Harvard’s Department of Statistics (Xiao-Li is a senior professor and a Dean), so they cannot be accused of misrepresenting the state of statistical knowledge in 2014. Please read their paper carefully and judge for yourself whether it would help you decide whether treatment is helpful or not, in any of the examples presented in the debate.

It would not! And how do I know? Listen to their conclusions:

  1. They disavow any connection to causality (p.18), and
  2. They end up with the wrong conclusion. Quoting: “less conditioning is most likely to lead to serious bias when Simpson’s Paradox appears.” (p.17) Simpson himself brings an example where conditioning leads to more bias, not less.

I don’t blame Liu and Meng for erring on this point; it is not entirely their fault (Rosenbaum and Rubin made the same error). The correct solution to Simpson’s dilemma rests on the back-door criterion, which is almost impossible to articulate without the aid of DAGs. And DAGs, as you are probably aware, are forbidden from entering a 5-mile no-fly zone around Harvard [North side, where the Statistics Department is located].
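
To see numerically what “solving” means here, consider a toy illustration of my own, with invented counts in the spirit of the Chapter 6 example: gender Z confounds the choice of drug X and recovery Y, so the back-door criterion instructs us to adjust for Z, and it is the segregated rates, not the aggregated ones, that answer the treatment question:

```python
# (gender, treatment) -> (recovered, total); invented counts
counts = {
    ('male', 'drug'): (81, 87),     ('male', 'none'): (234, 270),
    ('female', 'drug'): (192, 263), ('female', 'none'): (55, 80),
}

def aggregated_rate(x):
    """Recovery rate for treatment x, ignoring gender."""
    rec = sum(r for (z, t), (r, n) in counts.items() if t == x)
    tot = sum(n for (z, t), (r, n) in counts.items() if t == x)
    return rec / tot

def adjusted_rate(x):
    """Back-door adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    total = sum(n for _, (r, n) in counts.items())
    rate = 0.0
    for z in ('male', 'female'):
        r, n = counts[(z, x)]
        p_z = sum(nn for (zz, _), (_, nn) in counts.items() if zz == z) / total
        rate += (r / n) * p_z
    return rate

print(aggregated_rate('drug'), aggregated_rate('none'))  # ~0.78 vs ~0.83
print(adjusted_rate('drug'), adjusted_rate('none'))      # ~0.83 vs ~0.78
```

The data alone cannot tell you which table to trust: had Z been a mediator rather than a confounder, the very same numbers would license the opposite, aggregated answer. That is why the decision requires a causal model, not more statistics.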

So, here we are. Most statisticians believe that everyone knows how to “watch for” Simpson’s paradox, but those who seek an answer to “Should we treat or not?” realize that “watching” is far from “solving.” Moreover, they also realize that there is no solution without stepping outside the comfort zone of statistical analysis and entering the forbidden city of causation and graphical models.

On one thing I do agree with you: your warning about the implausibility of the Causal Revolution. Quoting: “to this day, philosophers disagree about what causation is, thus to suggest he has found the answer to it is not plausible.” It is truly not plausible that someone, especially a semi-outsider, has found a Silver Bullet. It is hard to swallow. That is why I am so excited about the Causal Revolution and that is why I wrote the Book. The Book does not offer a Silver Bullet to every causal problem in existence, but it offers a solution to a class of problems that generations of statisticians and philosophers tried and could not crack. It is implausible, I agree, but it happened. It happened not because I am smarter but because I took Sewall Wright’s idea seriously and milked it to its logical conclusions as much as I could.

I took quite a risk in sounding pretentious and calling this development a Causal Revolution. I thought it was necessary. Now I am asking you to take a few minutes and judge for yourself whether the evidence justifies such a risky characterization.

It would be nice if we could alert practicing statisticians, deeply invested in the language of statistics, to the possibility that paradigm shifts can occur even in the 21st century, and that centuries of unproductive debates do not make such shifts impossible.

You were right to express doubt and disbelief in the need for a paradigm shift, as would any responsible scientist in your place. The next step is to let the community explore:

  1. How many statisticians can actually answer Simpson’s question, and
  2. How to make that number reach 90%.

I believe The Book of Why has already doubled that number, which is some progress. It is in fact something that I was not able to do in the past thirty years through laborious discussions with the leading statisticians of our time.

It is some progress, let’s continue,
Judea

Comments (4)

June 11, 2018

A Statistician’s Reaction to The Book of Why

Filed under: Book (J Pearl) — Judea Pearl @ 12:37 am

Carlos Cinelli brought to my attention a review of The Book of Why, written by Kevin Gray, who disagrees with my claim that statistics has been delinquent in neglecting causality; see https://www.kdnuggets.com/2018/06/gray-pearl-book-of-why.html. I have received similar reactions from statisticians in the past, and I expect more in the future. These reactions reflect a linguistic dissonance which The Book of Why describes thus: “Many scientists have been quite traumatized to learn that none of the methods they learned in statistics is sufficient even to articulate, let alone answer, a simple question like ‘What happens if we double the price?'” (p. 31).

I have asked Carlos to post the following response on Kevin’s blog:

————————————————
Kevin’s prediction that many statisticians may find my views “odd or exaggerated” is accurate. This is exactly what I have found in numerous conversations I have had with statisticians in the past 30 years. However, if you examine my views closely, you will find that they are not as thoughtless or exaggerated as they may appear at first sight.

Of course many statisticians will scratch their heads and ask: “Isn’t this what we have been doing for years, though perhaps under a different name or no name at all?” And here lies the essence of my views. Doing it informally, under various names, while refraining from doing it mathematically under uniform notation, has had a devastating effect on progress in causal inference, both in statistics and in the many disciplines that look to statistics for guidance. The best evidence for this lack of progress is the fact that, even today, only a small percentage of practicing statisticians can solve any of the causal toy problems presented in the Book of Why.

Take for example:

  1. Selecting a sufficient set of covariates to control for confounding
  2. Articulating assumptions that would enable consistent estimates of causal effects
  3. Finding if those assumptions are testable
  4. Estimating causes of effects (as opposed to effects of causes)
  5. More and more.
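
As a rough sketch of what problem 1 involves, here is a minimal back-door checker over a toy DAG. This is my own illustration, not code from the Book; the graph encoding (node to list of parents) and all function names are invented:

```python
def ancestors(dag, nodes):
    """All ancestors of `nodes` (dag maps node -> list of parents),
    including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, x, y, z):
    """Is X independent of Y given Z? Uses the moral ancestral graph:
    keep ancestors of {x, y} and z, marry co-parents, drop arrow
    directions, delete z, and check whether x and y are disconnected."""
    keep = ancestors(dag, {x, y} | z)
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in dag.get(v, ()) if p in keep]
        for p in ps:                      # parent-child edges
            adj[v].add(p); adj[p].add(v)
        for i, a in enumerate(ps):        # marry co-parents
            for b in ps[i + 1:]:
                adj[a].add(b); adj[b].add(a)
    stack, seen = [x], {x}
    while stack:                          # search around z
        for n in adj[stack.pop()] - z - seen:
            if n == y:
                return False
            seen.add(n); stack.append(n)
    return True

def satisfies_backdoor(dag, x, y, z):
    """Back-door criterion for P(y | do(x)): no member of z descends
    from x, and z d-separates x from y once x's outgoing edges are cut."""
    if any(v != x and x in ancestors(dag, {v}) for v in z):
        return False
    cut = {v: [p for p in dag.get(v, ()) if v == x or p != x] for v in dag}
    return d_separated(cut, x, y, set(z))

# Z confounds X and Y: adjusting for {Z} suffices; adjusting for nothing fails.
dag = {'Z': [], 'X': ['Z'], 'Y': ['X', 'Z']}
print(satisfies_backdoor(dag, 'X', 'Y', {'Z'}))  # True
print(satisfies_backdoor(dag, 'X', 'Y', set()))  # False
```

The point of the exercise: the answer is decided entirely by the diagram, before any data are consulted, which is precisely the vocabulary that probability theory alone lacks.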

Every chapter of The Book of Why brings with it a set of problems that statisticians were deeply concerned about, and have been struggling with for years, albeit under the wrong name (e.g., ANOVA or MANOVA) “or no name at all.” The result has been many deep concerns but no solutions.

A valid question to be asked at this point is what gives humble me the audacity to state so sweepingly that no statistician (in fact no scientist) was able to properly solve those toy problems prior to the 1980s. How can one be so sure that some bright statistician or philosopher did not come up with the correct resolution of Simpson’s paradox or a correct way to distinguish direct from indirect effects? The answer is simple: we can see it in the syntax of the equations that scientists used in the 20th century. To properly define causal problems, let alone solve them, requires a vocabulary that resides outside the language of probability theory. This means that all the smart and brilliant statisticians who used joint density functions, correlation analysis, contingency tables, ANOVA, entropy, risk ratios, etc., etc., and did not enrich them with either diagrams or counterfactual symbols have been laboring in vain — orthogonally to the question — you can’t answer a question if you have no words to ask it. (Book of Why, page 10)

It is this notational litmus test that gives me the confidence to stand behind each one of the statements that you were kind enough to cite from the Book of Why. Moreover, if you look closely at this litmus test, you will find that it is not just notational but conceptual and practical as well. For example, Fisher’s blunder of using ANOVA to estimate direct effects is still haunting the practices of present-day mediation analysts. Numerous other examples are described in the Book of Why and I hope you weigh seriously the lesson that each of them conveys.

Yes, many of your friends and colleagues will be scratching their heads saying: “Hmmm… Isn’t this what we have been doing for years, though perhaps under a different name or no name at all?” What I hope you will be able to do after reading The Book of Why is to catch some of the head-scratchers and tell them: “Hey, before you scratch further, can you solve any of the toy problems in the Book of Why?” You will be surprised by the results — I was!
————————————————

To me, solving problems is the test of understanding, not head scratching. That is why I wrote this Book.

Judea

Comments (5)

June 7, 2018

Updates on The Book of Why

Filed under: Announcement,Book (J Pearl) — Judea Pearl @ 11:54 pm

Dear friends in causality research,

Three months ago, I sent you a special greeting, announcing the forthcoming publication of The Book of Why (Basic Books, co-authored with Dana MacKenzie). Below please find an update.

The Book came out on May 15, 2018, and has since been featured by the Wall Street Journal, Quanta Magazine, and The Times of London. You can view these articles here:
http://bayes.cs.ucla.edu/WHY/

Eager to allay public fears of the dangers of artificial intelligence, these three articles interpreted my criticisms of model-blind learning as general impediments to AI and machine learning. This has probably helped put the Book on Amazon’s #1 bestseller lists in several categories.

However, the limitations of current machine learning techniques are only part of the message conveyed in the Book of Why. The second, and more important, part of the Book describes how these limitations are circumvented through the use of causal models, however qualitative or incomplete. The impact that causal modeling has had on the social and health sciences makes it only natural that a similar ‘revolution’ will soon sweep machine learning research and liberate it from its current predicaments of opaqueness, forgetfulness and lack of explainability. (See, for example, http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy and https://arxiv.org/pdf/1801.00631.pdf)

I was happy therefore to see that this positive message was understood by many readers who wrote to me about the book, especially readers coming from a traditional machine learning background (see, for example, www.inference.vc/untitled). It was also recognized by a more recent review in the New York Times
https://www.nytimes.com/2018/06/01/business/dealbook/review-the-book-of-why-examines-the-science-of-cause-and-effect.html which better reflects my optimism about what artificial intelligence can achieve.

I am hoping that you and your students will find inspiration in the optimistic message of the Book of Why, and that you will take an active part in the ongoing development of “model-assisted machine learning.”

Sincerely,

Judea

Comments (1)

February 12, 2016

Winter Greeting from the UCLA Causality Blog

Filed under: Announcement,Book (J Pearl),General,structural equations,Uncategorized — bryantc @ 5:04 pm

Friends in causality research,
This greeting from the UCLA Causality blog contains:

A. An introduction to our newly published book, Causal Inference in Statistics – A Primer, Wiley 2016 (with M. Glymour and N. Jewell)
B. Comments on two other books: (1) R. Kline’s Structural Equation Modeling and (2) L. Pereira and A. Saptawijaya’s book on machine ethics.
C. News, Journals, awards and other frills.

A.
Our publisher (Wiley) has informed us that the book “Causal Inference in Statistics – A Primer” by J. Pearl, M. Glymour and N. Jewell is already available on Kindle, and will be available in print Feb. 26, 2016.
http://www.amazon.com/Causality-A-Primer-Judea-Pearl/dp/1119186846
http://www.amazon.com/Causal-Inference-Statistics-Judea-Pearl-ebook/dp/B01B3P6NJM/ref=mt_kindle?_encoding=UTF8&me=

This book introduces core elements of causal inference into undergraduate and lower-division graduate classes in statistics and data-intensive sciences. The aim is to provide students with an understanding of how data are generated and interpreted at the earliest stage of their statistics education. To that end, the book empowers students with models and tools that answer nontrivial causal questions using vivid examples and simple mathematics. Topics include: causal models, model testing, effects of interventions, mediation and counterfactuals, in both linear and nonparametric systems.

The Table of Contents, Preface and excerpts from the four chapters can be viewed here:
http://bayes.cs.ucla.edu/PRIMER/
A book website providing answers to homework problems and interactive computer programs for simulation and analysis (using dagitty) is currently under construction.

B1
We are in receipt of the fourth edition of Rex Kline’s book “Principles and Practice of Structural Equation Modeling”, http://psychology.concordia.ca/fac/kline/books/nta.pdf

This book is unique in that it treats structural equation models (SEMs) as carriers of causal assumptions and tools for causal inference. Gone are the inhibitions and trepidation that characterize most SEM texts in their treatments of causation.

To the best of my knowledge, Chapter 8 in Kline’s book is the first SEM text to introduce graphical criteria for parameter identification — a long overdue tool
in a field that depends on identifiability for model “fitting”. Overall, the book elevates SEM education to new heights and promises to usher in a renaissance for a field that, five decades ago, pioneered causal analysis in the behavioral sciences.

B2
Much has been written lately on computer ethics, morality, and free will. The new book “Programming Machine Ethics” by Luis Moniz Pereira and Ari Saptawijaya formalizes these concepts in the language of logic programming. See book announcement http://www.springer.com/gp/book/9783319293530. As a novice to the literature on ethics and morality, I was happy to find a comprehensive compilation of the many philosophical works on these topics, articulated in a language that even a layman can comprehend. I was also happy to see the critical role that the logic of counterfactuals plays in moral reasoning. The book is a refreshing reminder that there is more to counterfactual reasoning than “average treatment effects”.

C. News, Journals, awards and other frills.
C1.
Nominations are Invited for the Causality in Statistics Education Award (Deadline is February 15, 2016).

The ASA Causality in Statistics Education Award is aimed at encouraging the teaching of basic causal inference in introductory statistics courses. Co-sponsored by Microsoft Research and Google, the prize is motivated by the growing importance of introducing core elements of causal inference into undergraduate and lower-division graduate classes in statistics. For more information, please see http://www.amstat.org/education/causalityprize/ .

Nominations and questions should be sent to the ASA office at educinfo@amstat.org. The nomination deadline is February 15, 2016.

C.2.
Issue 4.1 of the Journal of Causal Inference is scheduled to appear March 2016, with articles covering all aspects of causal analysis. For mission, policy, and submission information please see: http://degruyter.com/view/j/jci

C.3
Finally, enjoy new results and new insights posted on our technical report page: http://bayes.cs.ucla.edu/csl_papers.html

Judea

Comments (2)

December 20, 2014

A new book out, Morgan and Winship, 2nd Edition

Filed under: Announcement,Book (J Pearl),General,Opinion — judea @ 2:49 pm

Here is my book recommendation for the month:
Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research) Paperback – November 17, 2014
by Stephen L. Morgan (Author), Christopher Winship (Author)
ISBN-13: 978-1107694163 ISBN-10: 1107694167 Edition: 2nd

My book-cover blurb reads:
“This improved edition of Morgan and Winship’s book elevates traditional social sciences, including economics, education and political science, from a hopeless flirtation with regression to a solid science of causal interpretation, based on two foundational pillars: counterfactuals and causal graphs. A must for anyone seeking an understanding of the modern tools of causal analysis, and a must for anyone expecting science to secure explanations, not merely descriptions.”

But Gary King puts it in a more compelling historical perspective:
“More has been learned about causal inference in the last few decades than the sum total of everything that had been learned about it in all prior recorded history. The first comprehensive survey of the modern causal inference literature was the first edition of Morgan and Winship. Now with the second edition of this successful book comes the most up-to-date treatment.” Gary King, Harvard University

King’s statement is worth repeating here to remind us that we are indeed participating in an unprecedented historical revolution:

“More has been learned about causal inference in the last few decades than the sum total of everything that had been learned about it in all prior recorded history.”

It is the same revolution that Miquel Porta noted to be transforming the discourse in Epidemiology (link).

Social science and epidemiology have been spearheading this revolution, but I don’t think other disciplines will sit idle for too long.

In a recent survey (here), I attributed the revolution to “a fruitful symbiosis between graphs and counterfactuals that has unified the potential outcome framework of Neyman, Rubin, and Robins with the econometric tradition of Haavelmo, Marschak, and Heckman. In this symbiosis, counterfactuals emerge as natural byproducts of structural equations and serve to formally articulate research questions of interest. Graphical models, on the other hand, are used to encode scientific assumptions in a qualitative (i.e. nonparametric) and transparent language and to identify the logical ramifications of these assumptions, in particular their testable implications.”
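
To give a flavor of how counterfactuals emerge as byproducts of structural equations, here is a toy sketch of my own (not from the survey): for a single linear equation Y = b·X + U_Y with the coefficient b assumed known, the abduction-action-prediction steps reduce to two lines of code.

```python
def counterfactual_y(b, x_obs, y_obs, x_new):
    """What Y would have been for this unit, had X been x_new,
    in the model Y = b*X + U_Y with b known."""
    u_y = y_obs - b * x_obs   # abduction: recover this unit's U_Y
    return b * x_new + u_y    # action + prediction: set X = x_new

# A unit observed at X=1, Y=5 under b=2: had X been 3, Y would have been 9.
print(counterfactual_y(2.0, 1.0, 5.0, 3.0))  # 9.0
```

The same three steps carry over to nonparametric structural models, where the point value of U_Y is replaced by a posterior distribution over the exogenous variables.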

Other researchers may wish to explain the revolution in other ways; still, Morgan and Winship’s book is a perfect example of how the symbiosis can work when taken seriously.

Comments (3)

A new review of Causality

Filed under: Book (J Pearl),General,Opinion — eb @ 2:46 pm

A new review of Causality (2nd Edition, 2013 printing) has appeared in Acta Sociologica 2014, Vol. 57(4) 369-375.
http://bayes.cs.ucla.edu/BOOK-2K/elwert-review2014.pdf
Reviewed by Felix Elwert, University of Wisconsin-Madison, USA.

Elwert highlights specific sections of Causality that can empower social scientists with new insights or new tools for applying modern methods of causal inference in their research. Coming from a practical social science perspective, this review is a welcome addition to the list of 33 other reviews of Causality, which tend to be more philosophical; see http://bayes.cs.ucla.edu/BOOK-2K/book_review.html

I am particularly gratified by Elwert’s final remarks:
“Pearl’s language empowers social scientists to communicate causal models with each other across sub-disciplines…and enables social scientists to communicate more effectively with statistical methodologists.”

Comments (1)

September 11, 2009

Recent Activities in Causality

Filed under: Announcement,Book (J Pearl),Discussion — moderator @ 4:00 am

Judea Pearl writes:

Dear colleagues in causality research,

  1. I am pleased to announce that the 2nd Edition of Causality is out now (I saw a real copy), and should hit your bookstore any day. Thanks for waiting patiently, and I apologize for not having books to sign at the JSM meeting in DC.
  2. You may be pleased to know that, after a long and heated discussion on Andrew Gelman’s website, a provisional resolution (truce?) has been declared on the question: Is there such a thing as overadjustment? Click for details…
  3. A new survey paper, gently summarizing everything I know about causation (in 40 pages) is now posted. Comments are welcome.
  4. A new paper answering the question: “When are two measurements equally valuable for effect estimation?” has been posted. Confession: It is really a neat result.

Wishing you a fruitful new school year and may clarity reign in causality land.

Judea Pearl

June 28, 2009

Joint Statistical Meetings 2009

Filed under: Announcement,Book (J Pearl),JSM — moderator @ 10:00 am

Tutorial
Judea Pearl will be presenting a tutorial at the JSM meeting (Washington, DC, August 5, 2009, 2–4pm) on "Causal Analysis in Statistics: A Gentle Introduction".

Additional information about the session may be obtained by clicking here.

Book Signing
Just before the tutorial, at 12 noon, there will be a book-signing gathering at the Cambridge University Press booth, where J. Pearl will be signing copies of the 2nd Edition of Causality and engaging in gossip and debates about where causality is heading.
