In a recent issue of the American Journal of Epidemiology (http://aje.oxfordjournals.org/content/176/7/608), Tyler VanderWeele argues that SEM should be used in epidemiology only when 1) the interest is in a wide range of effects, or 2) the purpose of the analysis is to generate hypotheses. If, however, the interest is in a single fixed exposure, he regards traditional regression methods as superior.
According to him, the latter rely on fewer assumptions (e.g., we need not know the functional form of the association between a confounder and the exposure or outcome during estimation) and are hence less prone to bias. How valid is this argument, given that some (if not all) of the causal modeling methods are simply special cases of SEM (e.g., Robins's G-methods, and even the regression methods he is discussing)?
Thank you for raising these questions about Tyler’s article. I believe several of Tyler’s statements stand the risk of being misinterpreted by epidemiologists, for they may create the impression that the use of SEM, including its nonparametric variety, is somehow riskier than the use of other techniques. This is not the case. I believe Tyler’s criticism was aimed specifically at parametric SEMs, such as those used in Arlinghaus et al. (2012), not at the nonparametric SEMs which he favors and calls “causal diagrams”. Indeed, nonparametric SEMs are blessed with unequaled transparency, which ensures that each and every assumption is visible and subjected to the scrutiny of scientific judgment.
While it is true that SEMs have the capacity to make bolder assumptions, some not discernible from experiments (e.g., no confounding between mediator and outcome), this does not mean that investigators, acting properly, would make such assumptions when they stand contrary to scientific judgment, nor does it mean that investigators are under weaker protection from the ramifications of unwarranted assumptions. Today we know precisely which of SEM’s claims are discernible from experiments (i.e., reducible to do(x) expressions) and which are not (see Shpitser and Pearl, 2008, http://ftp.cs.ucla.edu/pub/stat_ser/r334-uai.pdf).
I therefore take issue with Tyler’s statement: “SEMs themselves tend to make much stronger assumptions than these other techniques” (from his abstract) when applied to nonparametric analysis. SEMs do not make assumptions, nor do they “tend to make assumptions”; investigators do. I am inclined to believe that Tyler’s criticism was aimed at a specific application of SEM rather than at SEM as a methodology.
Purging SEM from epidemiology would amount to purging counterfactuals from epidemiology — the latter draws its legitimacy from the former.
I also reject occasional calls to replace SEM and causal diagrams with weaker types of graphical models which presumably make weaker assumptions. No matter how we label the alternative models (e.g., interventional graphs, agnostic graphs, causal Bayesian networks, FFRCISTG models, influence diagrams, etc.), they all must rest on judgmental assumptions, and people think in terms of science (read: SEM), not experiments. In other words, when an investigator asks him/herself whether an arrow from X to Y is warranted, the investigator does not ask whether an intervention on X would change the probability of Y (i.e., whether P(y|do(x)) differs from P(y)), but whether the function f in the mechanism y = f(x, u) depends on x for some u. Claims that the stronger assumptions made by SEMs (compared with interventional graphs) may have unintended consequences are supported by a few contrived cases where people can craft a nontrivial f(x, u) despite the equality P(y|do(x)) = P(y). (See an example in Causality, page 24.)
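The kind of contrived case alluded to above can be made concrete with a short simulation. This is a sketch of my own (not taken from Causality page 24, though it is in the same spirit): the mechanism y = f(x, u) is an exclusive-or of the exposure x with an unobserved fair coin u, so f genuinely depends on x for every value of u, yet intervening on x leaves the distribution of y unchanged, P(y|do(x)) = P(y). All function and variable names here are illustrative.

```python
import random

random.seed(0)

def f(x, u):
    # Structural mechanism: for every fixed u, changing x flips y,
    # so f depends on x in the structural (SEM) sense.
    return x ^ u

def p_y1_do(x, n=100_000):
    # Monte Carlo estimate of P(y = 1 | do(x)): hold x fixed and
    # draw the background variable u ~ Bernoulli(0.5).
    return sum(f(x, random.random() < 0.5) for _ in range(n)) / n

# Structurally, x matters: with u held at 0, toggling x toggles y.
print(f(0, 0), f(1, 0))  # 0 1

# Interventionally, x is invisible: both estimates hover near 0.5,
# i.e., P(y|do(x)) = P(y) for both settings of x.
print(round(p_y1_do(0), 2), round(p_y1_do(1), 2))
```

The point of the exercise is that an interventional criterion for drawing the arrow X → Y would delete it here, while the structural criterion (does f depend on x for some u?) keeps it, which is exactly the judgment investigators actually exercise.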
For a formal distinction between SEMs and interventional graphs (also known as “causal Bayes networks”), see Causality, pages 23-24 and 33-36. For more philosophical discussions defending counterfactuals and SEM against false alarms, see:
I hope this helps clarify the issue.