Maximal entropy random walk

Maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen according to the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. While the standard random walk chooses for every vertex a uniform probability distribution among its outgoing edges, locally maximizing the entropy rate, MERW maximizes it globally (average entropy production) by assuming a uniform probability distribution among all paths in a given graph.

MERW is used in various fields of science. A direct application is choosing probabilities to maximize the transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties have also made it useful, for example, in the analysis of complex networks,[1] like link prediction,[2] community detection[3] and centrality measures,[4] and in image analysis, for example for detecting visual saliency regions,[5] object localization,[6] tampering detection[7] or the tractography problem.[8]

Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, like Anderson localization.[9]

Basic model

Left: basic concept of the generic random walk (GRW) and maximal entropy random walk (MERW)
Right: example of their evolution on the same inhomogeneous 2D lattice with cyclic boundary conditions – probability density after 10, 100 and 1000 steps while starting from the same vertex. The small boxes represent defects: all vertices but the marked ones have an additional self-loop (edge to itself). For regular lattices (no defects), GRW and MERW are identical. While defects do not strongly affect the local behavior, they lead to a completely different global stationary probability here. While GRW (and, based on it, standard diffusion) leads to a nearly uniform stationary density, MERW has a strong localization property, imprisoning the walkers in entropic wells, in analogy to electrons in the defected lattice of a semiconductor.

Imagine there is a graph given by an adjacency matrix $A$: $A_{ij} = 1$ if there is an edge from vertex $i$ to $j$, and 0 otherwise. For simplicity, assume it is an undirected graph, which corresponds to a symmetric $A$; however, MERW can also be generalized to directed and weighted graphs (obtaining a Boltzmann distribution among paths instead of a uniform one).

We would like to choose a random walk as a Markov process on this graph: for every vertex $i$ and its outgoing edge to $j$, choose the probability $S_{ij}$ of the walker randomly using this edge after visiting $i$. Formally, choose a stochastic matrix $S$ such that $S_{ij} = 0$ whenever $A_{ij} = 0$ and $\sum_j S_{ij} = 1$ for every $i$. Assuming this graph is connected and not periodic, ergodic theory says that evolution of this stochastic process leads to some stationary probability distribution $\rho$ such that $\rho S = \rho$, i.e. $\sum_i \rho_i S_{ij} = \rho_j$.

Using Shannon entropy for every vertex and averaging over the probability of visiting this vertex (to be able to use its entropy), we get the following formula for average entropy production (entropy rate) of the stochastic process:

$$H(S) = \sum_i \rho_i \sum_j S_{ij} \log\frac{1}{S_{ij}}$$

This definition turns out to be equivalent to the asymptotic average entropy (per step) of the probability distribution in the space of paths for this stochastic process.
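For concreteness, here is a minimal numerical sketch (in Python with NumPy; the function names are illustrative, not taken from any library) of computing the stationary distribution and the entropy rate of a given stochastic matrix:

```python
import numpy as np

def stationary_distribution(S):
    """Left eigenvector of S for eigenvalue 1: rho with rho @ S = rho, normalized to sum 1."""
    w, v = np.linalg.eig(S.T)
    rho = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return rho / rho.sum()

def entropy_rate(S):
    """Average entropy production H(S) = sum_i rho_i sum_j S_ij log(1/S_ij)."""
    rho = stationary_distribution(S)
    with np.errstate(divide="ignore", invalid="ignore"):
        contrib = np.where(S > 0, -S * np.log(S), 0.0)  # 0 log 0 treated as 0
    return float(rho @ contrib.sum(axis=1))
```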

In the standard random walk, referred to here as the generic random walk (GRW), we naturally choose each outgoing edge with equal probability: $S_{ij} = A_{ij} / \sum_k A_{ik}$. For a symmetric $A$ it leads to a stationary probability distribution proportional to the vertex degree: $\rho_i = \sum_j A_{ij} / \sum_{jk} A_{jk}$. It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal averaged global entropy rate $H(S)$.
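As a sketch (reusing the NumPy import above), the GRW transition matrix is simply the adjacency matrix with each row normalized to sum to 1:

```python
def grw_transition(A):
    """Generic random walk: every outgoing edge of a vertex is equally probable."""
    A = np.asarray(A, dtype=float)
    return A / A.sum(axis=1, keepdims=True)
```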

MERW chooses the stochastic matrix which maximizes $H(S)$, or equivalently assumes a uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue $\lambda$ and the corresponding eigenvector $\psi$ of the adjacency matrix: $A\psi = \lambda\psi$, i.e. $\sum_j A_{ij}\psi_j = \lambda\psi_i$. Then the stochastic matrix and stationary probability distribution are given by:

$$S_{ij} = \frac{A_{ij}}{\lambda}\,\frac{\psi_j}{\psi_i}, \qquad \rho_i = \frac{\psi_i^2}{\sum_j \psi_j^2},$$

for which every possible path of length $l$ from the $i$-th to the $j$-th vertex has probability $\frac{1}{\lambda^l}\frac{\psi_j}{\psi_i}$. Its entropy rate is $\log\lambda$.
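A corresponding sketch of the MERW construction, assuming a symmetric (undirected) adjacency matrix and reusing the helpers above; the small example graph at the end is arbitrary:

```python
def merw_transition(A):
    """MERW: S_ij = (A_ij / lambda) * psi_j / psi_i and rho_i = psi_i^2 / sum_j psi_j^2."""
    A = np.asarray(A, dtype=float)
    w, v = np.linalg.eigh(A)              # eigendecomposition of the symmetric adjacency matrix
    lam, psi = w[-1], np.abs(v[:, -1])    # dominant eigenvalue; eigenvector chosen positive
    S = (A / lam) * np.outer(1.0 / psi, psi)
    rho = psi**2 / np.sum(psi**2)
    return S, rho, lam

# Example on a small connected, aperiodic graph: the MERW entropy rate equals log(lambda)
# and is never smaller than the GRW entropy rate.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
S, rho, lam = merw_transition(A)
print(entropy_rate(S), np.log(lam))        # equal (up to numerical error)
print(entropy_rate(grw_transition(A)))     # smaller or equal
```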

In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph (they are nonlocal). Hence, they should not be imagined as directly applied by the walker – if the walker makes randomly looking decisions based on the local situation, like a person would, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we do not have any additional knowledge about the system – for example, to model our knowledge about an object performing some complex dynamics, not necessarily random, like a particle.

Sketch of derivation

Assume for simplicity that the considered graph is undirected, connected and aperiodic, which allows us to conclude from the Perron–Frobenius theorem that the dominant eigenvector $\psi$ is unique. Hence $A^l$ can be asymptotically ($l \to \infty$) approximated by $\lambda^l \psi\psi^T$ (or $\lambda^l |\psi\rangle\langle\psi|$ in bra–ket notation), where $\psi$ is normalized so that $\sum_i \psi_i^2 = 1$.

MERW corresponds to the uniform distribution among paths. The number of paths of length $2l$ with the $i$-th vertex in the center is

$$\sum_{j,k} (A^l)_{ji}\,(A^l)_{ik} \approx \sum_{j,k} \lambda^l \psi_j \psi_i\, \lambda^l \psi_i \psi_k = \lambda^{2l}\,\psi_i^2 \Big(\sum_j \psi_j\Big)^2,$$

which asymptotically grows like $\lambda^{2l}\psi_i^2$, getting the $\rho_i \propto \psi_i^2$ behavior.

Analogously calculating the probability distribution for two succeeding vertices $i$ and $j$, we get that it is proportional to $\psi_i A_{ij} \psi_j$. Dividing by $\rho_i \propto \psi_i^2$ and normalizing so that $\sum_j S_{ij} = 1$, we get $S_{ij} = \frac{A_{ij}}{\lambda}\frac{\psi_j}{\psi_i}$.
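A quick numerical sanity check of this derivation, reusing the example matrix $A$ and the helpers above (the choice of $l$ is arbitrary, just large enough for the asymptotics to kick in):

```python
l = 30
Al = np.linalg.matrix_power(np.asarray(A, dtype=float), l)
# number of length-2l paths with vertex i in the middle: sum_jk (A^l)_ji (A^l)_ik
centre_counts = Al.sum(axis=0) * Al.sum(axis=1)
print(centre_counts / centre_counts.sum())  # approaches rho_i = psi_i^2 / sum_j psi_j^2
print(rho)
```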

Examples

Left: choosing the optimal probability $q$ of using symbol 0 after symbol 0 in Fibonacci coding. Right: one-dimensional defected lattice and its stationary density for a cycle of length 1000 (it has three defects). While in the standard random walk the stationary density is proportional to the degree of a vertex, leading to a 3/2 difference here, in MERW the density is nearly completely localized in the largest defect-free region, analogous to the ground state predicted by quantum mechanics.

Let us first look at probably the simplest nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but not using two successive 1s – after a 1 there has to be a 0. To maximize the amount of information transmitted in such a sequence, we should assume a uniform probability distribution in the space of all possible sequences fulfilling this constraint. To practically use such long sequences, after a 1 we have to use a 0, but there remains the freedom of choosing the probability of a 0 after a 0. Let us denote this probability by $q$; then entropy coding would allow encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given $q$ turns out to be $(\rho_0, \rho_1) = \left(\frac{1}{2-q}, \frac{1-q}{2-q}\right)$. Hence, entropy production is $H(q) = \rho_0\left(q\log\frac{1}{q} + (1-q)\log\frac{1}{1-q}\right)$, which is maximized for $q = \frac{\sqrt{5}-1}{2} \approx 0.618$, known from the golden ratio $\varphi$ (the maximal entropy rate is then $\log\varphi$, in agreement with $\log\lambda$, as $\varphi$ is the dominant eigenvalue of the corresponding adjacency matrix). In contrast, the standard random walk would choose the suboptimal $q = 0.5$. While choosing a larger $q$ reduces the amount of information produced after a 0, it also reduces the frequency of 1s, after which we cannot write any information.
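A short numerical check of this optimization (a sketch; the grid search over $q$ is purely illustrative):

```python
import numpy as np

q = np.linspace(0.001, 0.999, 9999)
# Probability of being at symbol 0 is 1/(2-q); only there a choice is made,
# contributing the binary entropy of q. After symbol 1 the next symbol is forced.
H = (-q * np.log2(q) - (1 - q) * np.log2(1 - q)) / (2 - q)
q_opt = q[np.argmax(H)]
print(q_opt, (np.sqrt(5) - 1) / 2)              # both about 0.618
print(H.max(), np.log2((1 + np.sqrt(5)) / 2))   # maximal rate: log2 of the golden ratio
```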

A more complex example is a defected one-dimensional cyclic lattice: say 1000 nodes connected in a ring, of which all nodes but the defects have a self-loop (edge to itself). In the standard random walk (GRW) the stationary probability of a defect is 2/3 of the probability of the remaining vertices – there is nearly no localization, and analogously for standard diffusion, which is the infinitesimal limit of GRW. For MERW we have to first find the dominant eigenvector $\psi$ of the adjacency matrix – maximizing $\lambda$ in:

$$\lambda\,\psi_x = \psi_{x-1} + (1 - V_x)\,\psi_x + \psi_{x+1}$$

for all positions $x$ (taken cyclically), where $V_x = 1$ for defects and 0 otherwise. Substituting $E = 3 - \lambda$ and multiplying the equation by $-1$, we get:

$$E\,\psi_x = -(\psi_{x-1} - 2\psi_x + \psi_{x+1}) + V_x\,\psi_x$$

where $E$ is minimized now, becoming the analog of energy. The expression in the bracket is a discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation. As in quantum mechanics, MERW predicts that the probability distribution should be exactly that of the quantum ground state, $\rho_x \propto \psi_x^2$, with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, we can get the standard continuous stationary Schrödinger equation ($E\psi = -C\psi_{xx} + V\psi$ for $C = \hbar^2/2m$) here.[10]
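The localization can be reproduced numerically with the helpers sketched earlier; this is only an illustration, with a shorter ring and arbitrarily placed defects:

```python
import numpy as np

N = 200
defects = [0, 70, 120]                 # arbitrary defect positions on the ring
A = np.zeros((N, N))
for x in range(N):
    A[x, (x - 1) % N] = A[x, (x + 1) % N] = 1.0   # ring edges
    if x not in defects:
        A[x, x] = 1.0                              # self-loop on every non-defect vertex

rho_grw = A.sum(axis=1) / A.sum()      # GRW: density proportional to degree (2 vs 3)
S, rho_merw, lam = merw_transition(A)  # MERW: density = square of dominant eigenvector

print(rho_grw[defects[0]] / rho_grw[1])      # 2/3 -- almost no localization
print(rho_merw.max() / rho_merw.mean())      # much larger than 1 -- strong localization
```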

See also

References
