Harris chain

In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times.[1] Harris chains are regenerative processes and are named after Theodore Harris. The theory of Harris chains and Harris recurrence is useful for treating Markov chains on general (possibly uncountably infinite) state spaces.

Definition

Let {Xn} be a Markov chain on a general state space Ω with stochastic kernel K. The kernel represents a generalized one-step transition probability law, so that P[Xn+1 ∈ C | Xn = x] = K(x, C) for all states x in Ω and all measurable sets C ⊆ Ω. The chain {Xn} is a Harris chain[2] if there exist A ⊆ Ω, ε > 0, and a probability measure ρ with ρ(Ω) = 1 such that

  1. If τA := inf {n ≥ 0 : Xn ∈ A}, then P(τA < ∞ | X0 = x) = 1 for all x ∈ Ω.
  2. If x ∈ A and C ⊆ Ω (where C is measurable), then K(x, C) ≥ ερ(C).

The first part of the definition ensures that the chain returns to the set A with probability 1, regardless of where it starts; it follows that the chain visits A infinitely often (with probability 1). The second part implies that once the Markov chain is in A, its next state can be generated with the help of an independent Bernoulli coin flip. To see this, first note that the parameter ε must be between 0 and 1 (this can be shown by applying the second part of the definition to the set C = Ω). Now let x be a point in A and suppose Xn = x. To choose the next state Xn+1, independently flip a biased coin with success probability ε. If the coin flip is successful, choose Xn+1 ∈ Ω according to the probability measure ρ. Otherwise (and if ε < 1), choose Xn+1 according to the measure P[Xn+1 ∈ C | Xn = x] = (K(x, C) − ερ(C))/(1 − ε), defined for all measurable subsets C ⊆ Ω.
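
A minimal Python sketch of this coin-flip (splitting) construction, for a hypothetical toy chain chosen here purely for illustration (it is not taken from the references): X_{n+1} | X_n = x ~ Normal(x/2, 1), with A = [−1, 1], ρ = Uniform(−1, 1) and ε = 0.25. Part 2 of the definition holds for these choices because the transition density is at least 0.129 ≥ ε/2 whenever x and y both lie in A; the residual measure is sampled by rejection from K(x, ·).

  import math, random

  EPS = 0.25  # a valid splitting constant for this particular toy kernel

  def phi(z):
      """Standard normal density."""
      return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

  def in_A(x):
      return -1.0 <= x <= 1.0

  def k(x, y):
      """Transition density of the toy chain: X_{n+1} ~ Normal(x/2, 1)."""
      return phi(y - x / 2)

  def rho_density(y):
      return 0.5 if in_A(y) else 0.0          # Uniform(-1, 1)

  def sample_rho(rng=random):
      return rng.uniform(-1.0, 1.0)

  def sample_residual(x, rng=random):
      """Draw from (K(x, .) - eps*rho)/(1 - eps) by rejection,
      proposing from K(x, .) itself."""
      while True:
          y = rng.gauss(x / 2, 1.0)
          if rng.random() < 1.0 - EPS * rho_density(y) / k(x, y):
              return y

  def step(x, rng=random):
      """One step of the chain; the coin flip is used whenever x lies in A."""
      if not in_A(x):
          return rng.gauss(x / 2, 1.0)        # outside A: draw from K directly
      if rng.random() < EPS:                  # coin success: draw from rho
          return sample_rho(rng)
      return sample_residual(x, rng)          # coin failure: residual measure

  x = 3.0
  for _ in range(20):
      x = step(x)
  print(x)

Note that neither the ε-coin nor the ρ-draw depends on the current point x, which is exactly what makes the coupling described next possible.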

Two random processes {Xn} and {Yn} that have the same probability law and are Harris chains according to the above definition can be coupled as follows: suppose that Xn = x and Yn = y, where x and y are points in A. Using the same coin flip to decide the next state of both processes, it follows that the next states are the same with probability at least ε.
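
Continuing the toy sketch above (same assumed kernel, set A, ρ and ε), the two copies can share the coin flip and, on success, the ρ-draw; each copy still moves according to K, but the copies land on the same point with probability at least ε whenever both are in A.

  def coupled_step(x, y, rng=random):
      """One step of two coupled copies of the toy chain."""
      if in_A(x) and in_A(y):
          if rng.random() < EPS:              # shared coin success
              z = sample_rho(rng)             # shared rho-draw: the copies meet
              return z, z
          # shared coin failure: each copy makes its own residual draw
          return sample_residual(x, rng), sample_residual(y, rng)
      return step(x, rng), step(y, rng)       # otherwise move separately

Once the copies have met they can be run identically from then on, which is a standard route to the convergence theorem stated further below.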

Examples

Example 1: Countable state space

Let Ω be a countable state space. The kernel K is defined by the one-step conditional transition probabilities P[Xn+1 = y | Xn = x] for x, y ∈ Ω. The measure ρ is a probability mass function on the states, so that ρ(x) ≥ 0 for all x ∈ Ω and the sum of the ρ(x) probabilities is equal to one. Suppose the above definition is satisfied for a given set A ⊆ Ω and a given parameter ε > 0. Then P[Xn+1 = c | Xn = x] ≥ ερ(c) for all x ∈ A and all c ∈ Ω.
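
For a finite state space this condition can be checked mechanically. The Python sketch below (an illustration only, not part of the cited sources) takes a transition matrix P and a candidate set A and returns the largest possible ε for that choice of A, together with the corresponding ρ, obtained by taking in each column c the minimum of P(x, c) over x ∈ A and normalizing.

  def minorization(P, A):
      """Return (eps, rho) with P[x][c] >= eps * rho[c] for all x in A,
      or (0.0, None) if only eps = 0 works for this choice of A."""
      n = len(P)
      m = [min(P[x][c] for x in A) for c in range(n)]   # column-wise minimum over A
      eps = sum(m)
      if eps == 0:
          return 0.0, None
      return eps, [mc / eps for mc in m]

  # Example: a 3-state chain with A = {0, 1}.  Part 1 of the definition
  # (reaching A from anywhere) is clear here, since state 2 jumps to
  # state 1 with probability 0.5 at every step.
  P = [[0.5, 0.3, 0.2],
       [0.2, 0.4, 0.4],
       [0.0, 0.5, 0.5]]
  eps, rho = minorization(P, A=[0, 1])
  print(eps, rho)    # eps = 0.7 and rho = [2/7, 3/7, 2/7]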

Example 2: Chains with continuous densities

Let {Xn}, Xn ∈ Rd, be a Markov chain with a kernel that is absolutely continuous with respect to Lebesgue measure:

K(x, dy) = K(x, y) dy

such that K(x, y) is a continuous function.

Pick (x0, y0) such that K(x0, y0) > 0, and let A and Ω be open sets containing x0 and y0 respectively that are sufficiently small so that K(x, y) ≥ ε > 0 on A × Ω. Letting ρ(C) = |Ω ∩ C|/|Ω|, where |Ω| denotes the Lebesgue measure of Ω, we get K(x, C) ≥ ε|Ω|ρ(C) for all x ∈ A, so (2) in the above definition holds (with ε|Ω| in place of ε). If (1) holds as well, then {Xn} is a Harris chain.
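
A small numerical check of this construction, again using the assumed toy kernel from the earlier sketch (X_{n+1} | X_n = x ~ Normal(x/2, 1)); the small set around y0 is called B below to avoid clashing with the symbol Ω used for the state space in the definition.

  import math

  def k(x, y):
      """Transition density of the toy chain: X_{n+1} ~ Normal(x/2, 1)."""
      return math.exp(-0.5 * (y - x / 2) ** 2) / math.sqrt(2 * math.pi)

  # Pick (x0, y0) with k(x0, y0) > 0 and small intervals around them:
  # A = (-0.5, 0.5) around x0 and B = (-0.5, 0.5) around y0.
  x0, y0 = 0.0, 0.0
  grid = [i / 100.0 for i in range(-50, 51)]

  # Crude grid search for a lower bound on k over A x B; here the minimum
  # sits at a corner of the grid, so the value is exact: about 0.30.
  eps = min(k(x, y) for x in grid for y in grid)
  print(eps)

  # With rho = Uniform on B and |B| = 1, K(x, C) >= eps * |B ∩ C| = eps * rho(C)
  # for every x in A, which is condition (2) of the definition.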

Reducibility and periodicity

In the following, R := inf {n ≥ 1 : Xn ∈ A}; i.e., R is the first time after time 0 that the process enters the region A.

Definition: If for every initial distribution L(X0), P(R < ∞ | X0 ∈ A) = 1, then the Harris chain is called recurrent.

Definition: A recurrent Harris chain Xn is aperiodic if ∃N such that ∀n ≥ N and ∀L(X0), P(Xn ∈ A | X0 ∈ A) > 0.

Theorem: Let Xn be an aperiodic recurrent Harris chain with stationary distribution π. If P(R < ∞ | X0 = x) = 1, then as n → ∞, distTV(L(Xn | X0 = x), π) → 0.
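
A quick numerical illustration of the conclusion of the theorem, using the same toy 3-state matrix as in the earlier sketch (a hypothetical example, not drawn from the references): the total variation distance between the law of Xn started from a fixed state and the stationary distribution π shrinks toward 0.

  P = [[0.5, 0.3, 0.2],
       [0.2, 0.4, 0.4],
       [0.0, 0.5, 0.5]]

  def step_dist(mu, P):
      """One step of the distribution: (mu P)(c) = sum_x mu(x) P(x, c)."""
      return [sum(mu[x] * P[x][c] for x in range(len(P))) for c in range(len(P))]

  def tv(mu, nu):
      """Total variation distance between two probability vectors."""
      return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

  # Approximate pi by pushing a distribution through many steps.
  pi = [1.0, 0.0, 0.0]
  for _ in range(1000):
      pi = step_dist(pi, P)

  mu = [1.0, 0.0, 0.0]          # X_0 = state 0, deterministically
  for n in range(1, 21):
      mu = step_dist(mu, P)
      if n % 5 == 0:
          print(n, tv(mu, pi))  # the distance decreases toward 0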

References

  1. Asmussen, Søren (2003). "Further Topics in Renewal Theory and Regenerative Processes". Applied Probability and Queues. Stochastic Modelling and Applied Probability. 51. pp. 186–219. doi:10.1007/0-387-21525-5_7. ISBN 978-0-387-00211-8.
  2. Durrett, R. (2005). Probability: Theory and Examples. Thomson. ISBN 0-534-42441-4.