Percolation
In this blog post we will look at percolation. Starting with some motivating examples, we will solve the percolation problem on both a 1d lattice and an infinite-dimensional lattice. We will use percolation to help understand how power-law behaviour arises from criticality near phase transitions.
- Introduction
- Mathematical Formulation
- Why Should We Care?
- Percolation in 1 dimension
- Bethe Lattice
- Percolation in d dimensions with 1 < d < $\infty$
- Summary of Critical Exponents
- Conclusion
In this blog post we shall take a first look at percolation and how we can study it mathematically.
Introduction
Percolation theory is concerned with a simple question: how does a substance pass through a random medium? Speaking in generalities like this can obscure the concept, so let's consider a real-world example: we have a pool of oil sitting below a porous rock. We would like to know whether we can "push" the oil through this rock or whether the rock is not "porous enough" for this to happen. (We could also ask a follow-up question: if the oil can pass, can it do so in a timely manner?) Another real-world example is a coffee percolator: here we have a mass of ground coffee that we wish to pass (near) boiling water through in order to extract all its caffeine-related goodness. From experience we know that if we grind very finely and pack very tightly the water has a hard time passing, and we may need a lot of heat and pressure to get any coffee out (in some cases the water may never pass).
We can also consider percolation in a slightly different way. Imagine we have a sheet of paper and a hole punch; we randomly start punching holes in the paper - at some point the paper will break into 2 (or more) smaller separate pieces of paper. What can we say about the number of holes required for this to happen?
In all the examples above we are observing criticality, or a phase transition. At some porosity the rock allows oil to pass; if the rock is any less porous no oil can pass. Similarly, grind the coffee too fine and the water will not pass through, but coarsen it ever so slightly and we can make (perhaps a small) cup of coffee. In the paper example the phase transition is even more obvious, as the paper physically breaks into multiple parts when we punch the last hole. Perhaps the most familiar examples of phase transitions, however, are changes in the states of matter - in particular water changing to ice at 0 Celsius or to water vapour at 100 Celsius (this is unrelated to percolation, however). The point at which the phase of these systems changes is called the critical point, and it is a very common behaviour in both physical and social systems. Studying these can be quite illuminating, as behaviour often "gets weird" when a system nears a critical point.
Mathematical Formulation
Now that we have some real-world intuition for what percolation theory is all about, let's start to phrase it in a more mathematical setting so that we are better able to study it.
In percolation theory we are usually concerned with lattices. A lattice is a form of connected graph that can extend to infinity. A classic example would be a 2d square lattice, where each site in the lattice has an "up", "down", "left" and "right" neighbour; we can imagine this as a sheet of squared paper. Other types of lattice exist too: higher dimensions, different geometries and so on.
With our lattice we can then consider "populating" it: for each site in the lattice we randomly select whether it is occupied or not, typically independently of all the other sites and with the same probability for each. We denote this probability $p$. One question we might ask is: at what occupation probability is it possible to move from one side of the lattice to the other, where at each step we can only move to a nearest neighbour? Or we could ask: what is the distribution of cluster sizes? What is the average cluster size? And so on.
In general we do this in relation to an "infinite" lattice, since these are often easier to work with mathematically. When dealing with finite sizes we typically want to assume the lattice is sufficiently large that any "edge effects" (weirdness relating to edge sites having fewer neighbours than interior sites) can be ignored. That said, investigations of smaller-scale percolation are also an active field of research due to their practical applications.
So far we have phrased the percolation problem in terms of site percolation - whereby the sites themselves are occupied or vacant. We could equally consider bond percolation, whereby we consider the "edges" between sites in the lattice: with probability $p$ we select whether each edge exists. We can then consider "paths" travelling along the edges - can we move from the top of the lattice to the bottom? And so on. In our examples above, the oil and coffee problems are examples of bond percolation, while the paper problem is a site percolation.
One might expect the two formulations to be equivalent to one another, however that is not always the case; it depends on the geometry of the lattice.
Given the formulation here we can see that percolation problems lend themselves to being studied computationally. We can randomly generate finite lattices and analyse them to get intuition about how these things behave, as in the sketch below. However, this approach alone will never give us a definitive "result" without a corresponding mathematical proof.
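To make this concrete, here is a minimal sketch (in Python, using `numpy` and `scipy.ndimage.label`) of a site-percolation simulation on a finite 2d square lattice: it checks whether any cluster of occupied sites spans from the top row to the bottom row. The lattice size, occupation probabilities and number of trials are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import label

def spans(lattice):
    """True if some cluster of occupied sites touches both the top and bottom rows."""
    labels, _ = label(lattice)                    # 4-connected clusters
    top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
    return len(top & bottom) > 0

def spanning_probability(p, L=50, trials=200, rng=np.random.default_rng(0)):
    """Estimate the probability that a random L x L site configuration percolates."""
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for p in [0.4, 0.55, 0.59, 0.65, 0.8]:
        print(p, spanning_probability(p))
```

For a lattice of this size the spanning probability climbs rapidly somewhere around $p \approx 0.59$, a first hint of the critical behaviour discussed below.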
Some extensions we may wish to consider include:
- What if the sites/bonds are not independent (or uniformly) distributed?
- What about the edge effects?
- What about directed networks?
- What if connectivity is not static over time?
- How do we find the "path of least resistance" to pass through a realisation of a lattice?
Why Should We Care?
We have seen one potential application of percolation theory: predicting whether we can extract oil through a porous rock. In fact, when extracting oil, a small sample of the rock is taken and analysed to see whether percolation is possible. The coffee and paper examples, while providing intuition, are not applications in the sense that we would not actually use percolation theory to solve them!
What are some other examples we might care about?
- Forest Fires - we could imagine a forest being made up of sites on a 2d lattice, some occupied with trees and some empty. We could investigate how forest density leads to fire spreading, we would not want to run real-world experiments on this for obvious reasons! We can build in more sophistication through modelling damp/dry patches, wind, air temperature and so on. The results from models like these can influence policy around back-burning, controlled cut-downs etc. to reduce the likelihood of larger more devastating wild fires. (This is a site percolation problem)
- Social Networks - We may be interested in how information spreads through a social network. We can study how network connectivity influences this. If we were running a viral marketing campaign we might wish to understand the penetration required of the social network for an idea to take off. (This is a bond percolation problem)
- Communication Networks - Suppose we are a telecom provider interested in keeping a service running 24/7. We have a network that allows this to happen with a number of connections between the individual nodes. We may wish to know how many connections (or nodes) need to "go down" for the entire system to fall over. We can then grow the network in such a way as to ensure robustness. (This is a bond percolation problem)
- Electrical Engineering - If we imagine impregnating an insulating material with (for example) carbon nano-tubes, we could ask how much carbon we would need to add to turn the insulator into a conductor? Essentially we are asking what is the proportion of carbon required to make a "bridge" across the material to allow for electron flow. (This is a site percolation problem).
- Fault Lines - Another application is the study of fault lines for use in predicting earthquakes. We can study the degree of "cracking" required for a "split" using tools based on percolation.
- Biology/Medicine - Percolation theory has been used to understand virus fragmentation as well as modelling disease spread and fragmentation of ecological systems.
Percolation in 1 dimension
As with many physical and mathematical systems it makes sense to start with 1 dimension since this is often easy to visualize, the mathematics tends to be simple and we can generally use intuition to "solve" the system.
Our lattice in 1 dimension is essentially a line segment made up of a number of sites joined together. (We could consider a bond percolation in the exact same way; in this 1d example the 2 formulations are equivalent - though this is not necessarily true in more complex scenarios!)
We assume that each site is occupied (independently) with fixed probability $p$. If we imagine taking this line segment and adding extra sites to the end one by one (to the limit of infinity), it does not take much to realise that if $p<1$ then eventually we will come across an unoccupied site, and so there will not exist a path from one end of the lattice to the other (i.e. there is no percolation!) Clearly if $p=1$ then every site is occupied and such a path exists. The occupation probability at which percolation first becomes possible is denoted $p_c$ (the critical probability) and is one of the quantities we wish to study; for the 1d lattice $p_c = 1$. In more complicated lattices we will not be able to use intuition quite so easily!
We will now step back and solve the system mathematically. In this case it is not strictly required, however it will provide a blueprint for how to approach more complicated lattices. It also allows us to introduce some of the notation.
We will fix a lattice size $L$, which we will assume is suitably large so that we can ignore any edge-effect complications. The probability that a given site is the left-hand end of a cluster of size $s<L$ (that is, a group of $s$ adjacent sites that are all occupied) is: $$n_s = p^s(1-p)^2 $$ This is because we need $s$ occupied sites (probability $p^s$) surrounded by 2 empty sites (probability $(1-p)^2$) - since we are assuming independence the probability factorises as desired. The expected number of such clusters is then approximately $Ln_s$, since we assume any edge effects are minimal (we could work this out exactly but it is not necessary here). Typically however we want to scale $L \to \infty$, so we will work with probabilities instead. Quite clearly if $p<1$ then $n_s \to 0$ as $s \to \infty$ - which is just a way of expressing our intuition above through mathematics.
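As a quick sanity check of this formula, here is a small sketch that estimates the cluster-size distribution on a long 1d lattice and compares it with $p^s(1-p)^2$; the lattice length and value of $p$ are arbitrary illustrative choices.

```python
import numpy as np

def cluster_sizes_1d(occupied):
    """Sizes of runs of consecutive occupied sites in a 1d boolean array."""
    # Pad with empty sites so every cluster has a well-defined start and end.
    padded = np.concatenate(([False], occupied, [False])).astype(int)
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    return ends - starts

rng = np.random.default_rng(1)
L, p = 1_000_000, 0.7
sizes = cluster_sizes_1d(rng.random(L) < p)

for s in range(1, 6):
    empirical = np.sum(sizes == s) / L      # clusters of size s per site
    theory = p**s * (1 - p)**2              # n_s = p^s (1-p)^2
    print(s, round(empirical, 5), round(theory, 5))
```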
We can further note that the probability that a given site belongs to a cluster of size $s$ is $s n_s$ (it can be positioned in any 1 of the $s$ sites in the cluster). We then have (for an infinite lattice) the relation: $$ \sum_s s n_s = p $$ To see this we can note: \begin{align} \sum_s s n_s &= \sum_s s p^s (1-p)^2 \\ &= (1-p)^2 p \sum_s \frac{d}{dp} p^s \\ &= (1-p)^2 p \frac{d}{dp} \sum_s p^s \\ &= (1-p)^2 p \frac{d}{dp} \left( \frac{1-p^{L'}}{1-p} \right) \\ &= (1-p)^2 p \left(\frac{1-p^{L'}}{(1-p)^{2}} - \frac{L'p^{L'-1}}{1-p} \right) \\ &\xrightarrow{L' \to \infty} (1-p)^2 \, p \, (1-p)^{-2} \\ &= p \end{align} Note: here we imagine a smaller "window" within our lattice such that $L' \leq L-2$ to avoid edge effects (this is a common "trick" when dealing with asymptotic behaviours; the constant term of the geometric series makes no difference once we differentiate). There are other ways to deal with this, such as using periodic boundary conditions. In this example we could compute everything exactly if we wanted to; the extra terms would "disappear" when taking the limit.
This formula is actually fairly intuitive despite the derivation looking "complicated": it essentially states that every occupied site has to belong to some cluster, so the sum over all cluster sizes must equal the probability that a site is occupied! In some cases seemingly intuitive explanations can turn out to be wrong, so it is a good habit to prove these results rather than take them for granted and only later realise they are incorrect. This seems especially true in probability theory.
We can now define a new variable: $$ w_s = \frac{s n_s}{\sum_{s'} s' n_{s'}}$$ Which is in some sense a "weighting" of relative cluster sizes. We can then calculate the average cluster size using: $$ S_L = \sum_s s w_s $$ By taking the limit we can observe the asymptotic behaviour: $$ S_{\infty} = \lim_{L \to \infty} \sum_s s w_s $$
Can we find a formula for this value? In the case of the 1d lattice we can, in other more complicated geometries we may not be able to. In the derivation below I exchange limits/sums/derivatives quite freely since this is valid in this case. \begin{align} S_{\infty} &= \lim_{L \to \infty} \sum_s \frac{s^2 n_s}{\sum_{s'} s' n_{s'}} \\ &= \lim_{L \to \infty} \sum_s \frac{s^2 n_s}{p} \\ &= \frac{(1-p)^2}{p} \left( p \frac{d}{dp} \right)^2 \lim_{L \to \infty} \sum_s p^s \\ &= \frac{(1-p)^2}{p} \left( p \frac{d}{dp} \right)^2 \frac{1}{1-p} \\ &= \frac{(1-p)^2}{p} \left( p \frac{d}{dp} \right) \frac{p}{(1-p)^2} \\ &= \frac{(1-p)^2}{p} \left( \frac{p}{(1-p)^2} + \frac{2p^2}{(1-p)^3} \right) \\ &= 1 + \frac{2p}{1-p} \\ &= \frac{1+p}{1-p} \\ &= \frac{p_c+p}{p_c-p} \end{align} For values of $p$ close to $p_c = 1$ we have: $$ S_{\infty} \approx \frac{2p_c}{p_c - p} \propto |p_c - p|^{-1}$$ In general percolation problems the behaviour of cluster size around the critical probability is defined by the "critical exponent" which we denote $\gamma$: $$ S_{\infty} \propto |p_c - p|^{-\gamma} $$ In the case of the 1d lattice percolation this is $\gamma = 1$.
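To see this divergence numerically, the sketch below measures the mean size of the cluster containing a randomly chosen occupied site on a long 1d lattice and compares it with $\frac{1+p}{1-p}$; the lattice length and values of $p$ are again arbitrary choices, and the estimate gets noisier as $p \to p_c$.

```python
import numpy as np

def mean_cluster_size_1d(p, L=2_000_000, rng=np.random.default_rng(2)):
    """Average size of the cluster containing a randomly chosen occupied site."""
    occupied = rng.random(L) < p
    padded = np.concatenate(([False], occupied, [False])).astype(int)
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    sizes = ends - starts
    # Each cluster of size s contains s occupied sites, so weight clusters by s.
    return np.sum(sizes**2) / np.sum(sizes)

for p in [0.5, 0.9, 0.95]:
    print(p, round(mean_cluster_size_1d(p), 2), round((1 + p) / (1 - p), 2))
```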
Another concept we have is the "correlation function" $g(r)$, which represents the probability that 2 occupied sites a distance $r$ apart belong to the same cluster. In the 1d case, given an occupied site, the site a distance $r$ away belongs to the same cluster precisely when all $r$ sites up to and including it are occupied, so: $$g(r) = p^r$$ Which we can re-express in the form: $$ g(r) = \exp\left(\frac{-r}{\xi}\right)$$ Where $\xi = \frac{-1}{\ln(p)}$ is called the correlation length. For $p$ close to $p_c = 1$ we have $\ln(p) \approx -(1-p)$, and so $\xi \approx (p_c - p)^{-1} \approx S_{\infty}$. This gives another critical exponent: $\xi \propto (p_c - p)^{-\nu}$. In this case $\nu = \gamma$, but this is not necessarily true in general.
We can also express the mean cluster size via the correlation function: $$ S_L = \sum_r g(r) $$
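As a quick consistency check, summing the 1d correlation function over all sites (positive and negative $r$, with $g(0)=1$) recovers the mean cluster size computed above: $$ \sum_{r=-\infty}^{\infty} p^{|r|} = 1 + 2\sum_{r=1}^{\infty} p^r = 1 + \frac{2p}{1-p} = \frac{1+p}{1-p} = S_{\infty} $$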
This completely solves the percolation problem in the 1d lattice case. Unfortunately things are not always this simple.
Bethe Lattice
We now move on to another lattice that can be solved exactly: the Bethe lattice. A Bethe lattice is an infinite connected cycle-free graph where each node has exactly $z$ neighbours. It is also sometimes called a Cayley tree. We can think of the 1d lattice case above as being a Bethe lattice with $z=2$.
If we arrange the sites in "shells" around a central site (the central site's $z$ neighbours form the first shell, their neighbours form the second shell, and so on), the number of sites in the kth shell is: $$N_k = z(z-1)^{k-1}$$
We can think of the Bethe lattice as an infinite-dimensional lattice. Why is this? If we consider a 3d space, we note that for a generic shape of linear size $L$: $$Area \propto L^2$$ while for the volume we have: $$Vol \propto L^3 $$ For example the volume of a sphere is $\frac{4}{3}\pi r^3$ and the surface area of the sphere is $4 \pi r^2$, and similar formulae exist for cubes, etc. Eliminating $L$ we notice that: $$ Area_{3d} \propto \left(L^3\right)^{2/3} \propto Vol^{1 - \frac{1}{3}}$$ If we repeat the same procedure with a 4d space we get: $$ Area_{4d} \propto Vol^{1 - \frac{1}{4}}$$ And in general for a $D$ dimensional space: $$ Area_{Dd} \propto Vol^{1 - \frac{1}{D}}$$
What about the Bethe lattice? From above we know the number of sites in the kth shell is $z(z-1)^{k-1}$ - this plays the role of the surface area. We can then imagine each shell "encompassing" all preceding shells to get a "volume", the total number of sites within the first $k$ shells: $$Vol = 1 +\sum_{i=1}^k z(z-1)^{i-1} = 1 + \frac{z((z-1)^k - 1)}{z-2} = \frac{z(z-1)^k - 2}{z-2}$$ And so: $$ \frac{Area}{Vol}_{Bethe} = \frac{z(z-1)^{k-1}}{\frac{z(z-1)^k - 2}{z-2}} \xrightarrow{k \to \infty} \frac{z-2}{z-1} $$ The ratio tends to a non-zero constant, in other words: $$ Area_{Bethe} \propto Vol^{1}$$ Which only matches the general formula $Area \propto Vol^{1 - \frac{1}{D}}$ if we take $D=\infty$ - in this sense the Bethe lattice behaves like an "infinite dimensional" space.
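A tiny numerical check of this convergence (with an arbitrary choice of $z$):

```python
def bethe_area_over_vol(z, k):
    """Ratio of sites in the k-th shell to the total number of sites within k shells."""
    area = z * (z - 1) ** (k - 1)
    vol = 1 + sum(z * (z - 1) ** (i - 1) for i in range(1, k + 1))
    return area / vol

z = 4
for k in [1, 2, 5, 10, 20]:
    print(k, round(bethe_area_over_vol(z, k), 4))
print("limit (z-2)/(z-1):", round((z - 2) / (z - 1), 4))
```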
We can find the percolation threshold probability through a relatively simple argument. We start at the origin (the centre site) and work outwards. Each occupied site in a given shell has $(z-1)$ possible "new" neighbours in the next shell, and each of these is occupied with probability $p$, so the expected number of new occupied neighbours per site is $p(z-1)$. For an infinite cluster to exist we require this quantity to be greater than $1$, and so we have: $$p_c = \frac{1}{z-1}$$ This ties in with our finding for the 1d lattice ($z=2$) as we would expect. Also notice that again in this case we could exchange "bond" for "site" and the argument still holds, so the critical probability is the same for both site and bond percolation problems. This does not hold true in general lattices.
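Because the Bethe lattice has no cycles, growing a cluster outwards from the origin is exactly a branching process, which makes it easy to simulate. The sketch below estimates the probability that a cluster grown from an occupied origin survives for some (arbitrary) large number of generations, for a $z$ and set of $p$ values chosen purely for illustration; above $p_c = 1/(z-1)$ this probability stays away from zero.

```python
import numpy as np

def survives(p, z, max_gen=50, rng=None):
    """Grow a cluster from an occupied origin on the Bethe lattice (a branching process).
    Return True if it survives for max_gen shells."""
    rng = rng or np.random.default_rng()
    branches = z                       # the origin has z outgoing branches
    for _ in range(max_gen):
        # Each branch reaches a new site, occupied independently with probability p;
        # each newly occupied site contributes (z - 1) further branches.
        occupied = rng.binomial(branches, p)
        if occupied == 0:
            return False
        branches = occupied * (z - 1)
    return True

rng = np.random.default_rng(3)
z, trials = 3, 1000
for p in [0.3, 0.45, 0.5, 0.55, 0.7]:   # p_c = 1 / (z - 1) = 0.5 here
    hits = sum(survives(p, z, rng=rng) for _ in range(trials))
    print(p, hits / trials)
```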
We now introduce a new concept. In the 1d case we could only approach $p_c$ from below (at least I know of no way to consider probabilities in excess of 1!), whereas now we can consider what happens above the percolation threshold. From common-sense intuition we know that if $p=1$ the infinite cluster will in some sense be "stronger" than if $p$ is only slightly above $p_c$. We would like a metric to illustrate this concept. The infinite cluster strength represents the probability that a given site is part of the infinite cluster (that is, we can follow a path from this site to infinity). Clearly the strength is zero below the percolation threshold and $1$ for $p=1$. We denote the cluster strength by $P(p)$ - this is an example of an order parameter.
To derive the cluster strength we introduce a quantity $Q(p)$: the probability that a given branch leaving a site does not connect it to the infinite cluster (i.e. the neighbour at the end of that branch is not connected to infinity through its own sub-tree). We can then write: $$P(p) = p(1-Q^z(p))$$ which is the probability that a site is occupied and at least one of its $z$ branches leads to infinity. We can further write: $$Q(p) = (1-p) + pQ^{z-1}(p)$$ the probability that the neighbouring site is not occupied, or it is occupied but none of its $(z-1)$ onward branches lead to infinity.
We find that there is always the trivial solution $Q(p)=1$. Expanding $Q^{z-1}$ to second order about $Q=1$ (this is exact when $z=3$) gives the non-trivial solution valid near $p_c$: $$Q(p) \approx 1 - \frac{2p(z-1)-2}{p(z-1)(z-2)} $$ And so: $$P(p) \approx p\left(1 - \left( 1 - \frac{2p(z-1)-2}{p(z-1)(z-2)} \right)^z \right) $$ Via a rather messy Taylor expansion we can show that just above the threshold: $$ P(p) \propto (p - p_c) $$ This defines another critical exponent for the Bethe lattice. In general lattices we shall find, for $p$ just above $p_c$: $$ P(p) \propto (p - p_c)^{\beta} $$ For some $\beta$; for the Bethe lattice $\beta = 1$.
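The self-consistency equation is also easy to solve numerically for any $z$, which lets us check this behaviour directly. Below is a minimal sketch using fixed-point iteration (the choice of $z$, iteration count and values of $p$ are arbitrary): $P(p)$ stays at zero up to $p_c$ and then rises roughly linearly. Fixed-point iteration works here because, for $p > p_c$, the map $q \mapsto (1-p) + pq^{z-1}$ has a stable fixed point below 1.

```python
import numpy as np

def strength(p, z, iters=10_000):
    """Infinite-cluster strength P(p) on the Bethe lattice via fixed-point iteration."""
    q = 0.0                      # start away from the trivial fixed point Q = 1
    for _ in range(iters):
        q = (1 - p) + p * q ** (z - 1)
    return p * (1 - q ** z)

z = 3
pc = 1 / (z - 1)
for p in [0.45, 0.5, 0.55, 0.6, 0.7, 1.0]:
    print(p, round(strength(p, z), 4))
```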
We now move on to looking at the average cluster size $S(p)$. To do this we will introduce an intermediate variable $T$ which represents the contribution from an individual branch leaving a site. The average size of the cluster containing the origin is then: $$S(p) = 1 + zT$$ since there are $z$ branches leaving the origin. For each branch we note that either the neighbouring site is unoccupied (probability $(1-p)$, contributing 0 to the average) or it is occupied (probability $p$), in which case it contributes itself plus its own $(z-1)$ onward branches, so we can express $T$ as: $$ T = p(1 + (z-1)T) \iff T = \frac{p}{1-(z-1)p}$$ And so: $$S(p) = 1 + \frac{zp}{1-(z-1)p} = \frac{p_c(1+p)}{p_c - p}$$ Where $p_c = \frac{1}{z-1}$. Thus the critical exponent is $\gamma = 1$, as in the 1d case. We can also extend our identity from the 1d case to give: $$P(p) + \sum_{s=1}^{\infty} s n_s(p) = p $$ This equation holds for all lattices, not just the Bethe lattice. We may naturally ask: what form does $n_s(p)$ take on the Bethe lattice? Clearly the formula from the 1d lattice is not correct. We can show that $n_s$ is of the form: $$ n_s(p) = g_{s,z} p^s (1-p)^{2+s(z-2)} $$ Where $g_{s,z}$ is a combinatorial factor counting the distinct cluster shapes, fixed for each cluster size ($s$) and coordination number ($z$). This factor can be hard to calculate for large $s$ since there are many possible configurations of sites. We can however look at ratios to remove the pesky constant: \begin{align} \frac{n_s(p)}{n_s(p_c)} &= \frac{p^s (1-p)^{2+s(z-2)}}{p_c^s (1-p_c)^{2+s(z-2)}} \\ &= \left(\frac{1-p}{1-p_c}\right)^2 \exp\left(s \ln\left[ \frac{p(1-p)^{z-2}}{p_c(1-p_c)^{z-2}} \right] \right)\\ &= \left(\frac{1-p}{1-p_c}\right)^2 \exp\left( \frac{-s}{s_\xi} \right) \end{align} With: $$ s_\xi = \frac{-1}{\ln \left( \frac{p(1-p)^{z-2}}{p_c(1-p_c)^{z-2}} \right)} $$ Through a Taylor expansion we can show: $$ s_\xi \propto |p_c - p|^{\frac{-1}{\sigma}} $$ With another critical exponent parameter $\sigma$. In the case of the Bethe lattice $\sigma = \frac{1}{2}$.
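One way to see where $\sigma = \frac{1}{2}$ comes from, without grinding through the Taylor expansion by hand, is to let a computer algebra system do it. The sketch below (using `sympy`, with $z$ fixed to an arbitrary value) expands $\ln\left[ \frac{p(1-p)^{z-2}}{p_c(1-p_c)^{z-2}} \right]$ about $p = p_c$: the linear term vanishes, so $1/s_\xi \propto (p - p_c)^2$ and hence $s_\xi \propto |p_c - p|^{-2}$, i.e. $\sigma = \frac{1}{2}$.

```python
import sympy as sp

p = sp.symbols("p", positive=True)
z = 4                                  # arbitrary choice of coordination number
pc = sp.Rational(1, z - 1)

# The exponent appearing in n_s(p)/n_s(p_c) = const * exp(s * f(p)), i.e. f = -1/s_xi.
f = sp.log(p * (1 - p) ** (z - 2)) - sp.log(pc * (1 - pc) ** (z - 2))

series = sp.series(f, p, pc, 4).removeO()
print(sp.simplify(series))   # leading term is quadratic in (p - pc): the linear term vanishes
```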
We can now use $s_\xi$ to investigate $n_s(p)$. Through some more work we can show: $$n_s(p) \propto s^{-\tau} \exp \left( \frac{-s}{s_{\xi}} \right) $$ For another critical exponent $\tau$. In the case of the Bethe lattice we have $\tau = \frac{5}{2}$. The quantity $s_{\xi}$ is often called the "cutoff cluster size" since for clusters much smaller than this we have $n_s(p) \propto s^{-\tau}$, while above it the cluster numbers decay very quickly.
Percolation in d dimensions with 1 < d < $\infty$
We have seen percolation in 1 dimension and in infinite dimensions (the Bethe lattice) - what about other dimensions? Unfortunately things get (much) more complicated. To see why, recall that we could "count" the clusters of a given size in the 1d lattice fairly easily, and in the Bethe lattice we could rely on the cycle-free structure to help us count the arrangements of sites. This is not so easy in other lattices. For example consider the 2d square lattice: if we just look at the different ways a site (in red) can contribute to a cluster of size 3 (in black) we have:
And so (by counting the empty neighbouring sites) the probability that a given site belongs to a cluster of size 3 is: $$3 n_3(p) = 6 p^3 (1-p)^8 + 12 p^3 (1-p)^7 $$ (the factor of 3 appears because the marked site can sit in any of the 3 positions of each cluster shape, so the number of such clusters per site is $n_3(p) = 2 p^3 (1-p)^8 + 4 p^3 (1-p)^7$). These formulae only get more complicated as the cluster size increases and as the dimension increases. We cannot easily find a general solution to this problem, and even if we could, it is unlikely we would be able to make much traction using the techniques we have discussed so far.
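We can at least check small cases like this by brute-force simulation. The sketch below estimates the probability that a given site belongs to a cluster of exactly 3 sites on a 2d square lattice (using `scipy.ndimage.label`, with an arbitrary lattice size and value of $p$) and compares it with the formula above; the small discrepancy comes from edge effects on the finite lattice.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(4)
L, p, trials = 200, 0.3, 50
count = 0

for _ in range(trials):
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)                      # 4-connected clusters
    sizes = np.bincount(labels.ravel())[1:]          # size of each cluster (drop background)
    # Number of sites sitting in a cluster of exactly 3 sites.
    count += 3 * np.sum(sizes == 3)

empirical = count / (trials * L * L)
theory = 6 * p**3 * (1 - p)**8 + 12 * p**3 * (1 - p)**7
print(round(empirical, 5), round(theory, 5))
```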
In the lattices we have studied so far, site and bond percolation behaved in the same way. In general this is not the case - it is not hard to see this by comparing bond and site percolation on the 2d square lattice. We note however that any bond percolation problem can be reformulated as a site percolation problem (in most cases on a different lattice geometry). As such, most study is focussed on site percolation.
In many problems an exact analytic solution to the percolation problem is not possible (or is very difficult), so many results are uncovered using computer simulations.
The table below summarises some of the critical probabilities for various lattice dimensions and geometries:
When not expressed as a fraction or trigonometric function probabilities are expressed to 3 significant figures.
Summary of Critical Exponents
Due to its simplicity, in the 1d case we only met a couple of critical exponents. In more complicated lattices we can describe 5 different critical behaviours $(\gamma, \beta, \nu, \sigma, \tau)$: \begin{align} S(p) &\propto |p_c - p|^{-\gamma} \\ P(p) &\propto (p - p_c)^{\beta} \\ \xi(p) &\propto |p_c - p|^{-\nu} \\ s_\xi(p) &\propto |p_c - p|^{\frac{-1}{\sigma}} \\ n_s(p) &\propto s^{-\tau} \exp \left( \frac{-s}{s_{\xi}} \right) \end{align} Where these proportionalities only hold close to the critical percolation probability (and the $P(p)$ relation only makes sense for $p$ above $p_c$). The values of these exponents vary with the dimensionality of the lattice.
We can see that these quantities behave as power laws around the critical point. This is a universal behaviour of "criticality" and is what makes studying behaviour around phase transitions so interesting. The behaviours can become highly unusual, and the critical exponents provide a way of classifying them.
Another exponent of interest is $\alpha$. We define the probability that a site (e.g. the origin) is connected to another site a distance $r$ away, evaluated at the critical percolation probability, as: $$q(r) \propto r^{-\alpha} $$ so that the connection probability decays as a power law with distance.
We can summarise these exponents in the table below:
We notice that the critical exponent behaviour depends on the dimension only and not any specific geometry, unlike the critical probability.
Conclusion
In this blog post we have looked at the very basics of percolation theory. Through some motivating examples we should now have an idea of why this phenomenon is worth studying. We gained some intuition by analysing 2 "simple" examples in 1d and infinite dimensions. We have also seen how the percolation problem gives rise to criticality, and how behaviour around a critical point gives rise to power-law behaviours.
This is just a brief introduction and there are many areas we did not touch upon here. Since percolation theory can provide some truly interesting mathematics and insight I'm sure I will visit some of these topics in a later blog post.