Measurement Cost and Estimator’s Variance
Sander Greenland from UCLA writes:
The machinery in your book addresses only issues of identification and unbiasedness. Of equal concern for practice is variance, which comes to the fore when (as usual) one has many estimators with similar bias to choose from, for within that set of estimators the variance becomes the key driver of expected loss (usually taken as the MSE, mean-squared error = variance + bias^2). Thus, for example, you may identify many (almost-)sufficient subsets in a graph, but the minimum MSE attainable with each may span an order of magnitude. On top of that, the financial costs of obtaining each subset may span orders of magnitude. So your identification results, while important and useful, are just a start on working out which variables to spend the money to measure and adjust for. The math of the subsequent MSE and cost considerations is harder, but no less important.
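As a small numerical illustration of this point (the estimators and numbers below are hypothetical, not from the correspondence): two estimators with the same bias but very different variances, so that the variance term dominates the comparison of MSE = variance + bias^2.

```python
import random

random.seed(0)
TRUE_VALUE = 1.0

def mse_decomposition(samples, truth):
    # Monte-Carlo estimates of MSE and its two components: variance and bias^2.
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    bias_sq = (mean - truth) ** 2
    return var + bias_sq, var, bias_sq

# Estimator A: bias 0.1, standard deviation 0.1.
# Estimator B: the same bias 0.1, but standard deviation 0.5.
a = [TRUE_VALUE + 0.1 + random.gauss(0, 0.1) for _ in range(100_000)]
b = [TRUE_VALUE + 0.1 + random.gauss(0, 0.5) for _ in range(100_000)]
print("A: MSE %.3f = var %.3f + bias^2 %.3f" % mse_decomposition(a, TRUE_VALUE))
print("B: MSE %.3f = var %.3f + bias^2 %.3f" % mse_decomposition(b, TRUE_VALUE))
```

Both estimators carry the same squared bias (0.01), yet their MSEs differ by roughly an order of magnitude, which is exactly the situation in which variance, and the cost of reducing it, drives the choice among estimators.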
Judea Pearl replies:
You are absolutely right; it is just a start, as stated in Causality, page 95. The reason I did not emphasize the analysis of variance in this book was my assumption that, after a century of extremely fruitful statistical research, one would have little to add to this area.
My hypothesis was:
Once we identify a causal parameter and produce an estimand of that parameter in closed mathematical form, a century of statistical research can be harnessed to the problem, rendering the estimation task a routine exercise in data analysis. Why spend energy on well-researched areas when so much needs to be done in areas of neglect?
However, the specific problem you raised, that of choosing among competing sufficient sets, happens to be one that Tian, Paz and Pearl (1998) did tackle and solve. See Causality page 80, reading: “The criterion also enables the analyst to search for an optimal set of covariates — a set Z that minimizes measurement cost or sampling variability (Tian et al., 1998).” [Available at http://ftp.cs.ucla.edu/pub/stat_ser/r254.pdf] By “solution” I mean, of course, an analytical solution, assuming that cost is additive and well defined for each covariate. The paper provides a polynomial-time algorithm that identifies the minimal (or minimum-cost) sets of nodes that d-separate two nodes in a graph. When applied to a graph purged of outgoing arrows from the treatment node, the algorithm will enumerate all minimal sufficient sets, i.e., sets of measurements that de-confound the causal relation between treatment and outcome.
Readers who deem such an algorithm useful should have no difficulty implementing it from the description given in the paper; the introduction of variance considerations, though, would require some domain-specific expertise.
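For readers who want a concrete feel for the task before consulting the paper, here is a minimal brute-force sketch (it is not the polynomial-time algorithm of Tian, Paz and Pearl; the DAG, the costs, and the function names are hypothetical). It purges the treatment node's outgoing arrows, tests candidate covariate sets for d-separation via the standard moralized-ancestral-graph construction, and returns the cheapest sufficient set under the additive-cost assumption mentioned above.

```python
from itertools import combinations

def ancestors(dag, nodes):
    """Ancestors of `nodes` in `dag` (a dict: node -> set of parents),
    including the nodes themselves."""
    result, frontier = set(nodes), list(nodes)
    while frontier:
        for parent in dag.get(frontier.pop(), set()):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

def d_separated(dag, x, y, z):
    """Test whether the set z d-separates x and y, using the
    moralized ancestral graph of {x, y} | z."""
    keep = ancestors(dag, {x, y} | z)
    adj = {v: set() for v in keep}              # undirected moral graph
    for child in keep:
        parents = [p for p in dag.get(child, set()) if p in keep]
        for p in parents:                       # child-parent edges
            adj[child].add(p); adj[p].add(child)
        for p, q in combinations(parents, 2):   # "marry" co-parents
            adj[p].add(q); adj[q].add(p)
    frontier, seen = [x], {x} | z               # delete z, then search for an x-y path
    while frontier:
        for nbr in adj[frontier.pop()] - seen:
            if nbr == y:
                return False
            seen.add(nbr)
            frontier.append(nbr)
    return True

def cheapest_backdoor_set(dag, x, y, cost):
    """Cheapest covariate set satisfying the back-door criterion for (x, y)."""
    # "Purge" arrows emanating from x by deleting x from every parent set.
    purged = {v: parents - {x} for v, parents in dag.items()}
    # Candidates: every node other than x, y that is not a descendant of x.
    candidates = [v for v in dag
                  if v not in (x, y) and x not in ancestors(dag, {v})]
    best, best_cost = None, float("inf")
    for r in range(len(candidates) + 1):
        for zs in combinations(candidates, r):
            z = set(zs)
            if d_separated(purged, x, y, z):
                c = sum(cost[v] for v in z)
                if c < best_cost:
                    best, best_cost = z, c
    return best, best_cost

if __name__ == "__main__":
    # Hypothetical DAG (node -> parents) with two back-door paths,
    # X <- U1 -> Y and X <- U2 -> W -> Y; measuring W is cheaper than measuring U2.
    dag = {"X": {"U1", "U2"}, "W": {"U2"},
           "Y": {"X", "U1", "W"}, "U1": set(), "U2": set()}
    cost = {"U1": 1.0, "U2": 5.0, "W": 1.0}
    print(cheapest_backdoor_set(dag, "X", "Y", cost))
    # The cheapest sufficient set is {U1, W} at cost 2.0: it blocks both
    # back-door paths and is cheaper than the alternative {U1, U2} at cost 6.0.
```

On graphs of realistic size this enumeration grows exponentially; the polynomial-time algorithm described in the paper is what makes the search practical.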
[…] Our causality blog has been enriched with two recent discussions: “The intuition behind ‘Inverse Probability Weighting’” and “Accounting for measurement cost and estimator's variance”. […]
Pingback by Causal Analysis in Theory and Practice » Message from Judea Pearl — December 11, 2009 @ 5:56 pm
It seems that, in addition to the minimal-cost solution within the set of solutions that have no confounding bias, there could be an interest in allowing some bias when this reduces cost overall.
Comment by David Farrar — January 10, 2011 @ 11:46 am