I have a question about page 37 of the book PRIMER.

…

we expand on our definition of causation: A variable X is a direct cause of a variable Y if X appears in the function that assigns Y’s value. X is a cause of Y if it is a direct cause of Y, or of any cause of Y.

…

the function fX for a variable X contains within it the variable Y (i.e., if X depends on Y for its value), then, in G, there will be a directed edge from Y to X.

…

From the above definition, we can infer that the edges in an SCM represent the direct causal relationships between nodes.

But on page 37, in the revised version, “Z and Y are dependent” is changed to “Z and Y are likely dependent”.

An arrow in an SCM represents a causal relationship. Does this mean that a causal relationship does not guarantee a dependent relationship?

What is a dependent relationship?
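To make my question concrete, here is a toy simulation I put together (the coefficients are made up, not from the book): Z is a cause of Y along two paths, but the paths cancel, so Z and Y end up uncorrelated. I take it this is the kind of case the revision to “likely dependent” allows for.

```python
import random

# Toy SCM (made-up coefficients): Z -> X -> Y and Z -> Y directly.
# Total effect of Z on Y along both paths: 2 * 3 + (-6) = 0.
def sample():
    z = random.gauss(0, 1)                   # exogenous
    x = 2 * z + random.gauss(0, 1)           # f_X contains Z
    y = 3 * x - 6 * z + random.gauss(0, 1)   # f_Y contains X and Z
    return z, y

random.seed(0)
data = [sample() for _ in range(50_000)]
mz = sum(z for z, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
cov = sum((z - mz) * (y - my) for z, y in data) / len(data)
print(round(cov, 3))  # close to 0: Z causes Y, yet they are uncorrelated
```

For generic coefficients the covariance would be nonzero, which is presumably why cause and effect are *likely*, but not guaranteed, dependent.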


I have been examining some of your articles, and it’s pretty clever stuff. I will surely bookmark your blog. I think the subject matter here is really superb. Thank you for your efforts.

As a practicing statistician, I am very interested in the Causal Revolution and like to read and talk about it.

Most of my projects involve causal inference based on some combination of observational and experimental data.

I am expected to provide causal insights because somebody wants to either *do* something or *blame* something.

I always consult my (imaginary) board of trustees, consisting of Judea Pearl, Nancy Cartwright, Eyal Shahar, and Andrew Gelman, when launching a project.

Judea exhorts me to organize causal inference as a DAG (Causality, 2000).

This is fruitful because my projects usually pertain to physical processes that fit the DAG analogy and whose causes and effects are painted bright red.

Nancy exhorts me to be cautious toward the causal Markov condition (What’s wrong with Bayes Nets? 2001).

Even if the DAG accurately depicts the topology of the system, variable definitions and intervention mechanisms must be carefully articulated.

Eyal exhorts me to avoid thought bias: mistaking derived variables and constructs (which only exist as thoughts) for cause variables in the physical world (Causal diagrams and three pairs of biases, 2018).

Statistical model parameters are such fiction.

For me, Eyal’s exhortation is the most difficult to heed.

I see the modeling process as the marriage of a mathematical object to a physical reality, such that the properties of the object become synonymous with the reality.

In a good marriage, the statistical model parameters give insights beyond what the data alone can provide.

I have peace by affirming that the desired model parameters *explain* and not merely predict.

Andrew provides my Bayesian inference machinery conveniently organized according to a DAG (Bayesian Data Analysis, 2013).

Even more conveniently, the causal inference DAG is seamlessly augmented by the statistical inference DAG.

Experience has taught me to always draw the statistical DAG.

We have prevented many errors by using the statistical DAG as a blueprint for our code.
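As a minimal illustration of what using a DAG as a blueprint for code can look like (the node names and assignment functions here are hypothetical, not Jeff’s actual project): the graph is written down once as data, and the sampling code is derived mechanically from it, so the code cannot silently disagree with the drawn DAG.

```python
import random

# Minimal sketch of a DAG as a blueprint for sampling code.
# Node names and assignment functions are hypothetical.
parents = {"Z": [], "X": ["Z"], "Y": ["X", "Z"]}

functions = {
    "Z": lambda pa: random.gauss(0, 1),                       # exogenous
    "X": lambda pa: 2 * pa["Z"] + random.gauss(0, 1),         # f_X uses Z
    "Y": lambda pa: pa["X"] + pa["Z"] + random.gauss(0, 1),   # f_Y uses X, Z
}

def topo_order(parents):
    """Order nodes so that every parent precedes its children."""
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for p in parents[v]:
                visit(p)
            order.append(v)
    for v in parents:
        visit(v)
    return order

def draw(parents, functions):
    """Ancestral sampling: evaluate each node's function in DAG order."""
    values = {}
    for v in topo_order(parents):
        values[v] = functions[v]({p: values[p] for p in parents[v]})
    return values

random.seed(1)
print(draw(parents, functions))
```

Because both the graph and the functions live in one place, a mismatch between the drawn DAG and the code shows up immediately.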

Why augment the causal DAG with the statistical DAG?

The clean division of labor makes it apparent that there is no need.

Yet by doing so, the interplay of the physical process and the statistical inference is manifest.

I find this to be very satisfying.

Best regards,

Jeff

First, I would like to thank you for writing this book, as it (still) takes a lot of courage to question prevailing scientific views in such a significant way (an interesting dynamic, given that this has always been the case and that science makes progress in this way).

One question that comes to mind, even after reading just a couple of chapters, is this:

Given that mainstream AI today still relies on the statistical method of correlation, which holds AI at level one of the Ladder (the animal level), what consequences do you think this would have in the event of the Singularity (or even for reaching the Singularity in the first place), as opposed to having an AI that is able to reach the third rung of the Ladder and behave more like a human child does?

Best,

Ismar