Ignorability

In statistics, ignorability is a feature of an experiment design whereby the method of data collection (and the nature of missing data) does not depend on the missing data. A missing data mechanism such as a treatment assignment or survey sampling strategy is "ignorable" if the missingness indicator matrix, which records which variables are observed or missing, is independent of the missing data conditional on the observed data.

This idea is part of the Rubin causal model, developed by Donald Rubin in the 1970s and later extended in collaboration with Paul Rosenbaum in the early 1980s.

Pearl [2000] devised a simple graphical criterion, called back-door, that entails ignorability and identifies sets of covariates that achieve this condition.

Ignorability (better called exogeneity) simply means we can ignore how people ended up in one group versus the other ('treated', Tx = 1, or 'control', Tx = 0) when it comes to the potential outcomes (say Y). The condition has also been called unconfoundedness, selection on the observables, or no omitted variable bias.[1]

Formally, ignorability is written [Y_i^1, Y_i^0] ⊥ Tx_i; in words, the potential outcomes of person i, had s/he been treated or not, do not depend on whether s/he actually was treated. We can, in other words, ignore how people ended up in one condition versus the other and treat their potential outcomes as exchangeable. While this seems dense, it becomes clear if we add superscripts for the 'ideal' (potential) world and subscripts for the 'realized' one (notation suggested by David Freedman). So Y^1_1 and *Y^1_0 are the potential Y outcomes had the person been treated (superscript 1), when in reality s/he actually was treated (Y^1_1, subscript 1) or was not (*Y^1_0: the * signals that this quantity can never be realized or observed; it is fully contrary-to-fact, or counterfactual, CF).

Similarly, *Y^0_1 and Y^0_0 are the potential Y outcomes had the person not been treated (superscript 0), when in reality s/he actually was treated (*Y^0_1, subscript 1) or was not (Y^0_0).
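The four quantities can be laid out per person as a small table; for any one person, only the cell matching the actual assignment is ever observable. A minimal illustrative sketch (the labels are just the notation above, nothing more):

```python
# 2x2 layout of potential outcomes for a single person:
# rows = actual assignment (subscript), columns = potential world (superscript).
# Entries marked '*' are counterfactual and can never be observed.
layout = {
    ("treated (Tx=1)",   "world: treated (^1)"):   "Y^1_1  (observable)",
    ("treated (Tx=1)",   "world: untreated (^0)"): "*Y^0_1 (counterfactual)",
    ("untreated (Tx=0)", "world: treated (^1)"):   "*Y^1_0 (counterfactual)",
    ("untreated (Tx=0)", "world: untreated (^0)"): "Y^0_0  (observable)",
}
for (actual, world), quantity in layout.items():
    print(f"{actual:>16} | {world}: {quantity}")
```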

Note that only one of each pair of potential outcomes (POs) can be realized for a given assignment to condition; the other cannot. So when we try to estimate treatment effects, we need something to replace the fully contrary-to-fact quantities with observables (or to estimate them). When ignorability/exogeneity holds, as when people are randomized into treatment or control, we can 'replace' *Y^1_0 with its observable counterpart Y^1_1, and *Y^0_1 with its observable counterpart Y^0_0, not at the level of individual Y_i's, but for averages like E[Y_i^1 − Y_i^0], which is exactly the causal treatment effect (TE) one tries to recover.
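This replacement under randomization can be checked in a small Monte Carlo sketch. The numbers below are hypothetical: potential outcomes are simulated with a constant individual effect of 2, so the true average treatment effect is exactly 2, and the observable difference in group means should land close to it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated potential outcomes: Y^1 = Y^0 + 2, so the true effect is exactly 2.
y0 = rng.normal(loc=10, scale=3, size=n)
y1 = y0 + 2

# Randomized assignment: Tx is independent of (Y^1, Y^0), so ignorability holds.
tx = rng.integers(0, 2, size=n)

# Consistency rule: only the potential outcome matching the assignment is realized.
y_obs = np.where(tx == 1, y1, y0)

# True ATE uses both potential outcomes per person (unobservable in practice).
true_ate = (y1 - y0).mean()

# Observable difference in group means: E[Y^1_1] - E[Y^0_0].
naive = y_obs[tx == 1].mean() - y_obs[tx == 0].mean()

print(true_ate)   # exactly 2.0 by construction
print(naive)      # close to 2.0 under randomization
```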

Note also that, because of the 'consistency rule', the realized potential outcomes are the values actually observed, so we can write Y_i^0 = Y^0_0 and Y_i^1 = Y^1_1 for the units observed in each condition ("the consistency rule states that an individual's potential outcome under a hypothetical condition that happened to materialize is precisely the outcome experienced by that individual",[2] p. 872). The observable contrast is therefore E[Y^1_1 − Y^0_0], the treated group's mean minus the control group's mean. Now, by simply adding and subtracting the same fully counterfactual quantity *Y^0_1, we get: E[Y^1_1 − Y^0_0] = E[Y^1_1 − *Y^0_1 + *Y^0_1 − Y^0_0] = E[Y^1_1 − *Y^0_1] + E[*Y^0_1 − Y^0_0] = ATT + {selection bias}, where ATT is the average treatment effect on the treated [3] and the second term is the bias introduced when people can choose whether to belong to the 'treated' or the 'control' group. Ignorability, either plain or conditional on some other variables, implies that this selection bias vanishes, so one can recover (or estimate) the causal effect.
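The decomposition above is an algebraic identity, so it can be verified numerically even when ignorability fails. A hypothetical sketch: units with low Y^0 self-select into treatment, the naive contrast is then biased away from the true effect of 2, and naive = ATT + selection bias holds exactly (this simulation can also compute the counterfactual terms directly, which real data never allows).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical potential outcomes with a constant individual effect of 2.
y0 = rng.normal(loc=10, scale=3, size=n)
y1 = y0 + 2

# Self-selection: units with low Y^0 are more likely to take treatment,
# so ignorability fails and the two groups are not exchangeable.
p_treat = 1.0 / (1.0 + np.exp(y0 - 10))
tx = rng.random(n) < p_treat

y_obs = np.where(tx, y1, y0)

naive = y_obs[tx].mean() - y_obs[~tx].mean()      # E[Y^1_1] - E[Y^0_0]
att = (y1 - y0)[tx].mean()                         # E[Y^1_1 - *Y^0_1], equals 2 here
selection_bias = y0[tx].mean() - y0[~tx].mean()    # E[*Y^0_1 - Y^0_0], negative here

# The identity naive = ATT + selection bias holds up to floating-point error.
print(abs(naive - (att + selection_bias)) < 1e-8)  # True
```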

See also

Missing at random

References

  1. Yamamoto, Teppei (2012). "Understanding the Past: Statistical Analysis of Causal Attribution". American Journal of Political Science. 56 (1): 237–256. doi:10.1111/j.1540-5907.2011.00539.x.
  2. Pearl, Judea (2010). "On the consistency rule in causal inference: axiom, definition, assumption, or theorem?". Epidemiology. 21 (6): 872–875. doi:10.1097/EDE.0b013e3181f5d3fd. PMID 20864888.
  3. Imai, Kosuke; King, Gary; Stuart, Elizabeth A. (2008). "Misunderstandings between experimentalists and observationalists about causal inference". Journal of the Royal Statistical Society: Series A (Statistics in Society). 171 (2): 481–502. doi:10.1111/j.1467-985X.2007.00527.x.
  • Andrew Gelman, John B. Carlin, Hal S. Stern and Donald B. Rubin. Bayesian Data Analysis. Chapman & Hall/CRC: New York, 2004.
  • Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.