Testing and Fault Localization Part 4


Necessity is not Sufficiency

Theorem 1 from the last post works because simpler models are usually sufficient to explain causal relationships, without resorting to more complicated ones. Judea Pearl uses the term minimal causal structures for such simpler models. Further, these simpler models are a direct application of the problem-solving principle known as Occam’s razor.

However, even though Theorem 1 provides a necessary criterion for identifying and isolating irrelevant properties – which is itself an effective fault localization method – it is not sufficient when causal models are more complicated.

In particular, Theorem 1 is not sufficient in preemptive situations, in which the values of a certain set of variables, V, cause other variables, Y, to become irrelevant, even though those variables in Y might also be causes at other times and with different values.
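
To make this concrete, here is a minimal Python sketch of preemption (all names are hypothetical): a cache short-circuits a slow code path, so a bug in that path looks irrelevant even though it is a genuine cause whenever the cache is disabled.

    def response_ok(enable_cache: bool, slow_path_bug: bool) -> bool:
        if enable_cache:              # preempting cause: short-circuits the slow path
            return True
        return not slow_path_bug      # only matters when the cache is off

    # With enable_cache held True, varying slow_path_bug never changes the
    # outcome, so it appears irrelevant -- yet it is a real cause whenever
    # the cache is off.
    print(response_ok(True, True), response_ok(True, False))    # True True
    print(response_ok(False, True), response_ok(False, False))  # False True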


Model Stability

To overcome this limitation, we use Pearl’s Test for Stable No-Confounding, an extension of the Associational No-Confounding criterion introduced in part 2 of this series that adds a key observation based on the concept of model stability.

In short, stable causal structures do not have causal relationships created or destroyed simply by varying the conditions of the environment. In contrast, unstable causal models contain temporary causal relationships that might emerge simply because certain conditions are “just right,” as in the case of preemption and race conditions.
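
As an illustration, consider a hypothetical race condition: whether a retry causes a duplicate write depends on a timing window that only sometimes exists, so the causal relationship appears and disappears as the environment changes. The names and thresholds below are made up.

    def duplicate_write(retry: bool, server_latency_ms: int) -> bool:
        # The retry only races with the original request when latency falls
        # inside a narrow window, i.e. when conditions are "just right".
        race_window_open = 40 <= server_latency_ms <= 60
        return retry and race_window_open

    # In a fast environment the retry is harmless; in a slightly slower one
    # it suddenly becomes the cause of the failure.
    print(duplicate_write(True, 10))   # False -- no causal effect here
    print(duplicate_write(True, 50))   # True  -- retry now causes the fault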

Unstable causal structures mask the root cause (or contributory causes) in seemingly random ways. Such confoundedness is also called incidental confoundedness.

Therefore, Pearl’s theorem, which provides a test for Stable No-Confounding, helps us determine whether confoundedness is at play and, if so, whether it is stable or unstable.

Pearl’s theorem forms the other part of the effective fault localization methodology presented in this blog series.


Test for Stable No-Confounding

Remember that we have variables of interest, called X and Y, that we suspect have some causal relationship (particularly that X might cause Y).

Let A_Z denote the assumptions that (i) the data are generated by some (unspecified) acyclic causal model M and (ii) Z is a variable in M that is unaffected by X but may possibly affect Y.

If both the associational criteria in the Associational No-Confounding definition are violated, then X and Y are not stably unconfounded given A_Z.
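
As a rough illustration, the sketch below operationalizes the test with simple linear (in)dependence checks. It assumes the two associational criteria from part 2 are (i) Z is unassociated with X and (ii) Z is unassociated with Y given X; the correlation-based checks and the threshold are illustrative assumptions, not part of Pearl’s result.

    import numpy as np

    def associated(a, b, threshold=0.1):
        # Crude marginal association check via Pearson correlation.
        return abs(np.corrcoef(a, b)[0, 1]) > threshold

    def associated_given(a, b, given, threshold=0.1):
        # Crude conditional association check: linear partial correlation,
        # i.e. correlate the residuals after regressing a and b on `given`.
        g = np.column_stack([np.ones_like(given), given])
        resid_a = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
        resid_b = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
        return associated(resid_a, resid_b, threshold)

    def stable_no_confounding_test(x, y, z):
        # Necessary test, assuming A_Z holds for z (z is unaffected by x but
        # may affect y): if z is associated with x AND with y given x, then
        # x and y are not stably unconfounded.
        u1 = not associated(z, x)           # criterion (i)
        u2 = not associated_given(z, y, x)  # criterion (ii)
        if u1 or u2:
            return "consistent with stable no-confounding"
        return "not stably unconfounded given A_Z"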


Another perspective

In other words, the test for stable no-confounding states that if at least one of the two conditions is satisfied (under the assumptions A_Z), then X and Y are not confounded and, further, that non-confoundedness is stable.

However, if both conditions are violated, then X and Y are not stably unconfounded, meaning that they might be incidentally (un)confounded simply by changing the conditions of the environment, which removes or creates confoundedness in some situations but not others.


Why does this matter?

The power of the test for stable no-confounding stems from the fact that, according to Pearl, it “does not require us to know the causal structure of the variables in the domain or even to enumerate the set of relevant variables” as the Associational No-Confounding definition does, for which we needed information about every variable Z in T.

Further, to test whether or not X and Y are stably unconfounded, we only need to find a single variable Z (satisfying A_Z) and use the conditions of the Associational No-Confounding definition as the test.

Indeed, as Pearl explains: “the qualitative assumption that a variable may have influence on Y and is not affected by X suffices to produce a necessary statistical test for stable no-confounding.”
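
For example, using the sketch above with synthetic data and a single candidate Z (all numbers and names are made up): suppose X is a feature flag’s rollout level, Y is an error rate, and Z is a deployment-region indicator that cannot be affected by X but plausibly affects Y, so A_Z is a reasonable assumption.

    rng = np.random.default_rng(0)
    z = rng.normal(size=5000)                  # region / environment indicator
    x = 0.8 * z + rng.normal(size=5000)        # rollout happens to track region
    y = 1.5 * x + 0.9 * z + rng.normal(size=5000)

    # Z is associated with X, and with Y given X, so both criteria are
    # violated and the (necessary) test flags the pair.
    print(stable_no_confounding_test(x, y, z))  # not stably unconfounded given A_Z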


Other positive side-effects

Just as with Theorem 1, the test for stable no-confounding introduced above can also indicate when further investigation is needed.

For instance, when no irrelevancy can be determined by applying Theorem 1 to the information at hand, one would then proceed to determine whether the variables of interest are stably unconfounded. However, since the test for stable no-confounding requires a third variable beyond the two we suspect are at play, it signals the need for further investigation to find such a variable Z in the problem space.

That is, to determine whether or not two variables of interest are confounded – and whether that (un)confoundedness is stable – a third variable is needed that meets the requirements for the test for stable no-confounding, thus prompting the search for such a variable.
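
A hypothetical way to drive that search in code, reusing the test sketch from earlier: enumerate candidate variables, keep only those for which the engineer judges the A_Z requirements to hold (a domain judgement, not something the data can decide), and run the test on each.

    def scan_for_z(x, y, candidates):
        # `candidates` maps a variable name to (values, satisfies_a_z), where
        # satisfies_a_z records the engineer's judgement that the variable is
        # unaffected by X but may affect Y.
        for name, (values, satisfies_a_z) in candidates.items():
            if not satisfies_a_z:
                continue
            print(f"{name}: {stable_no_confounding_test(x, y, values)}")

    # Example: the deployment region qualifies; the response time does not,
    # because it is plausibly affected by X.
    # scan_for_z(x, y, {"region": (z, True), "response_time": (y * 0.5, False)})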

The test for stable no-confounding, therefore, is useful as a second step in a robust fault localization methodology (after first applying Theorem 1) for two important reasons: first, it offers a test for stability, an important property of causal models; second, it is capable of indicating when further investigation is required elsewhere in the problem space.

I encourage you to read the official paper, which includes the proofs as well as many examples of how to apply the methodology.

That’s a wrap for now. Until next time!