Planar maps, random walks and circle packing

These are lecture notes of the 48th Saint-Flour summer school, July 2018, on the topic of planar maps, random walks and the circle packing theorem.


Preface
These lecture notes are intended to accompany a single semester graduate course. They are meant to be entirely self-contained. All the theory required to prove the main results is presented and only basic knowledge in probability theory is assumed.
In Chapter 1 we describe the main storyline of this text. It is meant to be light bedtime reading exposing the reader to the main results that will be presented and providing some background. Chapter 2 introduces the theory of electric networks and discusses their highly useful relations to random walks. It is roughly based on Chapter 8 of Yuval Peres' excellent lecture notes [68]. We then discuss the circle packing theorem and present its proof in Chapter 3. Chapter 4 discusses the beautiful theorem of He and Schramm [38] relating the circle packing type of a graph to recurrence and transience of the random walk on it. To the best of our knowledge, their work is the first to form connections between the circle packing theorem and probability theory. Next in Chapter 5 we present the highly influential theorem of Benjamini and Schramm [12] about the almost sure recurrence of the simple random walk in planar graph limits of bounded degrees. The notion of a local limit (also known as distributional limit or Benjamini-Schramm limit) of a sequence of finite graphs was introduced there for the first time to our knowledge (and also studied by Aldous-Steele [3] and Aldous-Lyons [2]); this notion is highly important in probability theory as well as other mathematical disciplines (see [2] and the references within). In Chapter 6 we provide a theorem from which one can deduce the almost sure recurrence of the simple random walk on many models of random planar maps. This theorem was obtained by Ori Gurel-Gurevich and the author in [30]. Chapter 7 discusses uniform spanning forests on planar maps and appeals to the circle packing theorem to show that the free uniform spanning forest on proper planar maps is almost surely connected, i.e., it is in fact a tree. This theorem was obtained by Tom Hutchcroft and the author in [43]. We close these notes in Chapter 8 with a description of some related contemporary developments in this field that are not presented in this text.
We have made an effort to add value beyond what is in the published papers. Our proof of the circle packing theorem in Chapter 3 is inspired by Thurston's argument [81] and Brightwell-Scheinerman [13] but we have made what we think are some simplifications; the proof also employs a neat argument due to Ohad Feldheim and Ori Gurel-Gurevich (Theorem 3.14) which makes the drawing part of the argument rather straightforward and avoids topological considerations that are used in the classical proofs. The original proof of the He-Schramm Theorem [38] is based on the notion of discrete extremal length which is essentially a form of effective resistance in electric networks (in fact, the edge extremal length is precisely effective resistance, see [60, Exercise 2.78]). We find that our approach in Chapter 4 using electric networks is somewhat more robust and intuitive to probabilists. We obtain a quantitative version of the He-Schramm Theorem in Chapter 4, as well as of the Benjamini-Schramm Theorem [12] in Chapter 5.

1 :: Introduction

The circle packing theorem
A planar graph is a graph that can be drawn in the plane, with vertices represented by points and edges represented by non-crossing curves. There are many different ways of drawing any given planar graph and it is not clear what is a canonical method. One very useful and widely applicable method of drawing a planar graph is given by Koebe's 1936 circle packing theorem [50], stated below. As we will see, various geometrical properties of the circle packing drawing (such as existence of accumulation points and their structure, bounds on the radii of circles and so on) encode important probabilistic information (such as the recurrence/transience of the simple random walk, connectivity of the uniform spanning forest and much more). This deep connection is especially fruitful to the study of random planar maps. Indeed, one of the main goals of these notes is to present a self-contained proof that the so-called uniform infinite planar triangulation (UIPT) is almost surely recurrent [30].
A circle packing is a collection of discs P = {C v } in the plane C such that any two distinct discs in P have disjoint interiors. That is, distinct discs in P may be tangent, but may not overlap. Given a circle packing P , we define the tangency graph G(P ) of P to be the graph with vertex set P and with two vertices connected by an edge if and only if their corresponding circles are tangent. The tangency graph G(P ) can be drawn in the plane by drawing straight lines between the centers of tangent circles in P , and is therefore planar. It is also clear from the definition that G(P ) is simple, that is, any two vertices are connected by at most one edge and there are no edges beginning and ending at the same vertex. See Fig. 1.1.
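To make the definition concrete, here is a small illustration of our own (not code from the text): given a finite packing as a list of center-radius triples, it recovers the tangency graph G(P) by checking, for each pair of circles, whether the distance between centers equals the sum of the radii up to a numerical tolerance. The function name `tangency_graph` is ours.

```python
import math

def tangency_graph(circles, tol=1e-9):
    """Return the edge set of G(P) for a packing given as [(x, y, r), ...]."""
    edges = set()
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            x1, y1, r1 = circles[i]
            x2, y2, r2 = circles[j]
            d = math.hypot(x2 - x1, y2 - y1)
            # distinct discs must have disjoint interiors
            assert d >= r1 + r2 - tol, "interiors must be disjoint"
            if abs(d - (r1 + r2)) < tol:
                edges.add((i, j))
    return edges

# Three mutually tangent unit circles: the tangency graph is a triangle.
packing = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, math.sqrt(3.0), 1.0)]
edges = tangency_graph(packing)
```

Running this on the three unit circles above yields the triangle on vertices {0, 1, 2}, as the definition predicts.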
We call a circle packing P a circle packing of a planar graph G if G(P ) is isomorphic to G. Theorem 1.1 (Koebe '36). Every finite simple planar graph G has a circle packing. That is, there exists a circle packing P such that G(P ) is isomorphic to G.
One immediate consequence of the circle packing theorem is Fáry's Theorem [24], which states that every finite simple planar graph can be drawn so that all the edges are represented by straight lines.
The circle packing theorem was first discovered by Koebe [50], who established it as a corollary to his work on the generalization of the Riemann mapping theorem to finitely connected domains; a brief sketch of Koebe's argument is given in Fig. 1.2. The theorem was rediscovered and popularized in the '70s by Thurston [81], who showed that it follows as a corollary to the work of Andreev on hyperbolic polyhedra (see also [62]). Thurston also initiated a popular program of understanding circle packing as a form of discrete complex analysis, a viewpoint which has been highly influential in the subsequent development of the subject and which we discuss in more detail below (see [78] for a review of a different form of discrete complex analysis with many applications to probability).

Fig. 1.2 (a sketch of Koebe's argument). Koebe's uniformization theorem states that every domain with at most finitely many boundary components is conformally equivalent to a circle domain, that is, a domain all of whose boundary components are circles or points.
Step 1: We begin by drawing the finite simple planar graph G in the plane in an arbitrary way.
Step 2: If we remove the 'middle ε' of each edge, then the complement of the resulting drawing is a domain with finitely many boundary components.
Step 3: Finding a conformal map from this domain to a circle domain gives an 'approximate circle packing' of G.
Step 4: Taking the limit as ε ↓ 0 can be proven to yield a circle packing of G.

There are now many proofs of the circle packing theorem available including, remarkably, four distinct proofs discovered by Oded Schramm. In Chapter 3 we will give an entirely combinatorial proof, which is adapted from the proofs of Thurston [62,81] and Brightwell and Scheinerman [13]. The uniqueness of circle packing was first proven by Thurston, who noted that it follows as a corollary to Mostow's rigidity theorem. Since then, many different proofs have been found. In Chapter 3 we will give a very short and elementary proof of uniqueness due to Oded Schramm that is based on the maximum principle.

Infinite planar graphs
So far, we have only discussed the existence and uniqueness of circle packings of finite planar triangulations. What happens with infinite triangulations? To address this question, we will need to introduce some more definitions.

Definition 1.4. We say that a graph G is one-ended if the removal of any finite set of vertices leaves at most one infinite connected component.

Definition 1.5. Let P = {C_v} be a circle packing of a triangulation. We define the carrier of P to be the union of the closed discs bounded by the circles of P together with the spaces bounded between any three circles that form a face (i.e., the interstices). We say that P is in D if its carrier is D.
See Fig. 1.3 for examples where the carrier is a disc or a square. The circle packing of the standard triangular lattice (see Fig. 4.2) has the whole plane C as its carrier. It is not too hard to see that if G(P ) is an infinite triangulation, then it is one-ended if and only if the carrier of P is simply connected, see Lemma 4.1.
It can be shown via a compactness argument that any simple infinite planar triangulation can be circle packed in some domain. Indeed, one can simply take subsequential limits of circle packings of finite subgraphs (the fact that such subsequential limits can be taken is a consequence of the ring lemma, Lemma 4.2). This is performed in Claim 4.3. However, this compactness argument does not give us any control of the domain we end up with as the carrier of our circle packing. The following theorems of He and Schramm [38,39] give us much better control; they can be thought of as discrete analogues of the Poincaré-Koebe uniformization theorem for Riemann surfaces.

Theorem 1.6 (He and Schramm '93). Any one-ended infinite triangulation can be circle packed such that the carrier is either the plane or the open unit disk, but not both.
This theorem will be proved in Chapter 4 (with the added assumption of finite maximal degree). The proofs in [38,39] are based on the notion of discrete extremal length. We will present our own approach to the proof in Chapter 4 based on a very similar notion of electric resistance discussed in Chapter 2. This approach is somewhat more appealing to a probabilist and allows for quantitative versions of the He-Schramm Theorem that will be used later for the study of random planar maps in Chapter 6.
In view of Theorem 1.6, we call an infinite one-ended simple planar triangulation CP parabolic if it can be circle packed in C, and call it CP hyperbolic if it can be circle packed in the open unit disk U. What about uniqueness? Theorem 1.7 shows that, in general, we have much more flexibility when choosing a circle packing of an infinite planar triangulation than we have in the finite case, see Fig. 1.3 again. Indeed, it implies that the circle packing of a CP hyperbolic triangulation is not determined up to Möbius transformations and reflections, since, for example, we can circle pack the same triangulation in both the unit disc and the unit square, and these two packings are clearly not related by a Möbius transformation. Fortunately, the following theorem of Schramm [72] shows that we recover Möbius rigidity if we restrict the packing to be in C or U.

Theorem 1.8 (Schramm '91). Let T be a one-ended infinite planar triangulation.
• If T is CP parabolic, then its circle packing in C is unique up to dilations, rotations, translations and reflections.
• If T is CP hyperbolic, then its circle packing in U is unique up to Möbius transformations or reflections fixing U.

Relation to conformal mapping
A central motivation behind Thurston's popularization of circle packing was its role as a discrete analogue of conformal mapping. The resulting theory is somewhat tangential to the main thrust of these notes, but is worth reviewing for its beauty, and for the intuition it gives about circle packing. A more detailed treatment of this and related topics is given in [80].
Recall that a map φ : D → D′ between two domains D, D′ ⊆ C is conformal if and only if it is holomorphic and one-to-one. Intuitively, we can think of the latter condition as saying that φ maps infinitesimal circles to infinitesimal circles. Thus, it is natural to wonder, as Thurston did, whether conformal maps can be approximated by graph isomorphisms between circle packings of the corresponding domains, which literally map circles to circles. For each ε > 0, let

T_ε = {εn + ε(1 + √3 i)m/2 : n, m ∈ Z} ⊆ C

be the triangular lattice with lattice spacing ε, which we make into a simple planar triangulation by connecting two vertices if and only if they are at distance ε from each other. This triangulation is naturally circle packed in the plane by placing a disc of radius ε/2 around each point of T_ε: this is known as the hexagonal packing. Now, let D be a simply connected domain, and take z_0 to be a marked point in the interior of D. For each ε > 0 let u_ε be an element of T_ε of minimal distance to z_0, and let v_ε = u_ε + ε and w_ε = u_ε + (1 + √3 i)ε/2. For each ε > 0, let T_ε(D) be the subgraph of T_ε induced by the vertices of distance at least 2ε from ∂D (i.e., the subgraph containing all such vertices and all the edges between them), and let T′_ε(D) be the connected component of T_ε(D) containing u_ε. Finally, let T̂_ε(D) be the triangulation obtained from T′_ε(D) by placing a single additional vertex ∂_ε in the outer face of T′_ε(D) and connecting this vertex to every vertex in the outer boundary of T′_ε(D).

By the circle packing theorem, T̂_ε(D) has a circle packing P_ε, which we may normalize so that:
• the boundary vertex ∂_ε is represented by the unit circle, so that the circles of all the remaining vertices are contained in the unit disc U;
• the vertex u_ε is represented by a circle centered at the origin;
• the vertex v_ε is represented by a circle centered on the positive real axis;
• the vertex w_ε is represented by a circle centered in the upper half-plane.

The function sending each vertex of T′_ε(D) to the center of the circle representing it in P_ε can be extended affinely on each triangle. Call the resulting function φ_ε.
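The lattice-restriction step of the construction above can be sketched in a few lines; this is our own toy illustration for the unit disc D = {|z| < 1} (function names are ours): it generates the points of T_ε lying at distance at least 2ε from the boundary circle, and connects points at distance exactly ε.

```python
import math

def lattice_in_disc(eps):
    """Points of the triangular lattice T_eps at distance >= 2*eps from the unit circle."""
    pts = []
    n_max = int(2.0 / eps) + 2
    for n in range(-n_max, n_max + 1):
        for m in range(-n_max, n_max + 1):
            # the lattice point eps*n + eps*(1 + sqrt(3)i)*m/2
            x = eps * n + eps * m / 2.0
            y = eps * m * math.sqrt(3.0) / 2.0
            if math.hypot(x, y) <= 1.0 - 2.0 * eps:  # dist to boundary >= 2*eps
                pts.append((x, y))
    return pts

def lattice_edges(pts, eps, tol=1e-9):
    """Two lattice points are adjacent iff they are at distance exactly eps."""
    edges = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
            if abs(d - eps) < tol:
                edges.append((i, j))
    return edges

pts = lattice_in_disc(0.25)
edges = lattice_edges(pts, 0.25)
```

For ε = 0.25 the origin survives the boundary cut and keeps all six of its lattice neighbours, consistent with the fact that the restriction only deletes vertices near ∂D.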
The following theorem was conjectured by Thurston and proven by Rodin and Sullivan [69].

Theorem 1.9 (Rodin and Sullivan '87). Let φ be the unique conformal map from D to U with φ(z_0) = 0 and φ′(z_0) > 0. Then φ_ε converges to φ as ε ↓ 0, uniformly on compact subsets of D.
See Fig. 1.4. The key to the proof of Theorem 1.9 was to establish that the hexagonal packing is the only circle packing of the triangular lattice, which is now a special case of Theorem 1.8.

Probabilistic applications
Why should we be interested in circle packing as probabilists? At a very heuristic level, when we uniformize the geometry of a triangulation by applying the circle packing theorem, we also uniformize the random walk on the triangulation, allowing us to compare it to a standard reference process that we understand very well, namely Brownian motion. Indeed, since Brownian motion is conformally invariant and circle packings satisfy an approximate version of conformality, it is not unreasonable to expect that the random walk on a circle packed triangulation will behave similarly to Brownian motion. This intuition turns out to be broadly correct, at least when the triangulation has bounded degrees, although it is more accurate to say that the random walk behaves like a quasi-conformal image of Brownian motion, that is, the image of Brownian motion under a function that distorts angles by a bounded amount.
Although it is possible to make the discussion in the paragraph above precise, in these notes we will be interested primarily in much coarser information that can be extracted from circle packings, namely effective resistance estimates for planar graphs. This fundamental topic is thoroughly discussed in Chapter 2. One of the many definitions of the effective resistance R_eff(A ↔ B) between two disjoint sets A and B in a finite graph is

R_eff(A ↔ B) = ( Σ_{v∈A} deg(v) P_v(τ_B < τ_A^+) )^{−1},

where P_v is the law of the simple random walk started at v, τ_B is the first time the walk hits B, and τ_A^+ is the first positive time the walk visits A. Good enough control of effective resistances allows one to understand most aspects of the random walk on a graph. We can also define effective resistances on infinite graphs, although issues arise with boundary conditions. An infinite graph is recurrent if and only if the effective resistance from a vertex to infinity is infinite.
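This quantity is easy to compute on small examples. The following is a hedged sketch of our own (all function names are ours), assuming unit conductances: it solves the harmonicity equations for the voltage with h(a) = 0, h(z) = 1 by Gaussian elimination, and reads off R_eff(a ↔ z) as the reciprocal of the current strength out of a.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def effective_resistance(adj, a, z):
    """adj: dict vertex -> list of neighbours; unit conductances assumed."""
    verts = [v for v in adj if v not in (a, z)]
    idx = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for v in verts:
        A[idx[v]][idx[v]] = float(len(adj[v]))   # deg(v) * h(v) = sum of h over nbrs
        for y in adj[v]:
            if y == z:
                b[idx[v]] += 1.0                 # boundary value h(z) = 1
            elif y != a:
                A[idx[v]][idx[y]] -= 1.0         # boundary value h(a) = 0
    sol = solve(A, b) if n else []
    h = {v: sol[idx[v]] for v in verts}
    h[a], h[z] = 0.0, 1.0
    strength = sum(h[y] - h[a] for y in adj[a])  # current flowing out of a
    return (h[z] - h[a]) / strength

# Two opposite corners of a 4-cycle: two parallel 2-edge paths, so R_eff = 1.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

On the 4-cycle between opposite corners this returns 1, and on a path of two unit edges it returns 2, matching the series and parallel rules discussed in Chapter 2.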
The effective resistance can also be computed via either of two variational principles: Dirichlet's principle and Thomson's principle, see Section 2.4. The first expresses the effective resistance as a supremum of energies of a certain set of functions, while the second expresses the effective resistance as an infimum of energies of a certain set of flows. Thus, we can bound effective resistances from above by constructing flows, and from below by constructing functions.
A central insight is that we can use the circle packing to construct these functions and flows. This idea leads fairly easily to various statements such as the following:
• The effective resistance across a Euclidean annulus of fixed modulus is at most a constant. If the triangulation has bounded degrees, then the resistance is at least a constant.
• The effective resistance between the left and right sides of a Euclidean square is at most a constant. If the triangulation has bounded degrees, then the resistance is at least a constant.
See for instance Lemma 4.8. We will use these ideas to prove the following remarkable theorem of He and Schramm [38], which pioneered the connection between circle packing and random walks. Theorem 1.10 (He and Schramm, '95). Let T be a one-ended infinite triangulation. If T has bounded degrees, then it is CP parabolic if and only if it is recurrent for simple random walk, that is, if and only if the simple random walk on T visits every vertex infinitely often almost surely.
This has been extended to the multiply-ended cases in [31], see also Chapter 8, item 4.

Recurrence of distributional limits of random planar maps
Random planar maps is a widely studied field lying at the intersection of probability, combinatorics and statistical physics. It aims to answer the vague question "what does a typical random surface look like?" We provide here a very quick account of this field, referring the readers to the excellent lecture notes [57] by Le Gall and Miermont, and the many references within for further reading. The enumerative study of planar maps (answering questions of the form "how many simple triangulations on n vertices are there?") began with the work of Tutte in the 1960's [82] who enumerated various classes of finite planar maps, in particular triangulations. Cori and Vauquelin [18], Schaeffer [71] and Chassaing and Schaeffer [16] have found beautiful bijections between planar maps and labeled trees and initiated this fascinating topic in enumerative combinatorics. The bijections themselves are model dependent and extremely useful since many combinatorial and metric aspects of random planar maps can be inferred from them. This approach has spurred a new line of research: limits of large random planar maps.
Two natural notions of such limits come to mind: scaling limits and local limits. In the first notion, one takes a random planar map M_n on n vertices, scales the distances appropriately (in most models the correct scaling turns out to be n^{−1/4}), and aims to show that this random metric space converges in distribution in the Gromov-Hausdorff sense. The existence of such limits was suggested by Chassaing and Schaeffer [16], Le Gall [54], and Marckert and Mokkadem [61], who coined the term the Brownian map for such a limit. The recent landmark work of Le Gall [55] and Miermont [64] establishes the convergence of random p-angulations for p = 3 and all even p to the Brownian map.
The study of local limits of random planar maps, initiated by Benjamini and Schramm [12], while bearing many similarities, is independent of the study of scaling limits. The local limit of a random planar map M n on n vertices is an infinite random rooted graph (U, ρ) with the property that neighborhoods of M n around a random vertex converge in distribution to neighborhoods of U around ρ. The infinite random graph (U, ρ) captures the local behavior of M n around typical vertices. We develop this notion precisely in Chapter 5.
In their pioneering work, Angel and Schramm [7] showed that the local limit of a uniformly chosen random triangulation on n vertices exists and that it is a one-ended infinite planar triangulation. They termed the limit the uniform infinite planar triangulation (UIPT). The uniform infinite planar quadrangulation (UIPQ), that is, the local limit of a uniformly chosen random quadrangulation (i.e., each face has 4 edges) on n vertices, was later constructed by Krikun [51].
The questions in this line of research concern the almost sure properties of this limiting geometry. It is a highly fractal geometry that is drastically different from Z². Angel [4] proved that the volume of a graph-distance ball of radius r in the UIPT is almost surely of order r^{4+o(1)} and that the boundary component separating this ball from infinity has volume r^{2+o(1)} almost surely. For the UIPQ this is proved in [16].
Due to the various combinatorial techniques of generating random planar maps, many of the metric properties of the UIPT/UIPQ are firmly understood. Surface properties of these maps are somewhat harder to understand using enumerative methods. Recall that a non-compact simply connected Riemannian surface is either conformally equivalent to the disc or the whole plane and that this is determined according to whether Brownian motion on the surface is transient or recurrent. Hence, the behavior of the simple random walk on the UIPT/UIPQ is considered here as a "surface property" (see also [29]).
As mentioned earlier, one of the main objectives of these notes is to answer the question of the almost sure recurrence/transience of the simple random walk on the UIPT/UIPQ. We provide a general statement, Theorem 6.1 of these notes, which has the following corollary.

Theorem 1.11 ([30]). The UIPT and UIPQ are almost surely recurrent.
The proof heavily relies on the circle packing theorem and can be viewed as an extension of the remarkable theorem of Benjamini and Schramm [12] stating that a local limit of finite planar maps with uniformly bounded maximum degree is almost surely recurrent. The maximum degree of the UIPT is unbounded, and so one cannot apply [12] directly. A combination of the techniques presented in Chapters 4 to 6 is required to overcome this difficulty.
Recently, there have been terrific new developments studying further surface properties of the UIPT/UIPQ. Lee [58] has given an exciting new proof of Theorem 1.11 based on a spectral analysis and an embedding theorem for planar maps due to [47]. His proof also yields that the spectral dimension of the UIPT/UIPQ is at most 2 and applies to local limits of sphere-packable graphs in higher dimensions as well. Gwynne and Miller [32] provided the converse bound showing that the spectral dimension of the UIPT equals 2 and calculated other exponents governing the behavior of the random walk. Their results are based on the deep work of Gwynne, Miller and Sheffield [33] (see also Chapter 8, item 9).

2 :: Random walks and electric networks
An extremely useful tool and viewpoint for the study of random walks is Kirchhoff's theory of electric networks. Our treatment here roughly follows [68, Chapter 8]; we also refer the reader to [60] for an in-depth comprehensive study.
Definition 2.1. A network is a connected graph G = (V, E) endowed with positive edge weights, {c e } e∈E (called conductances). The reciprocals r e = 1/c e are called resistances.
In Sections 2.1-2.4 below we discuss finite networks. We extend our treatment to infinite networks in Section 2.5.

Harmonic functions and voltages
Let G = (V, E) be a finite network. In physics classes it is taught that when we impose specific voltages at fixed vertices a and z, then current flows through the network according to certain laws (such as the series and parallel laws). An immediate consequence of these laws is that the function from V to R giving the voltage at each vertex is harmonic at each x ∈ V \ {a, z}.
Instead of starting with the physical laws and proving that voltage is harmonic, we now take the axiomatically equivalent approach of defining voltage to be a harmonic function and deriving the laws as corollaries.

Definition 2.3. Given a network G = (V, E) and two distinct vertices a, z ∈ V, a voltage is a function h : V → R that is harmonic at any x ∈ V \ {a, z}, that is,

h(x) = (1/π_x) Σ_{y : y∼x} c_xy h(y),  where π_x := Σ_{y : y∼x} c_xy.   (2.1)
Proof 1. We write n = |V|. Observe that a voltage h with h(a) = α and h(z) = β is determined by a system of n − 2 linear equations of the form (2.1) in the n − 2 variables h(x), x ∈ V \ {a, z}. Corollary 2.7 guarantees that the matrix representing that system has trivial kernel, hence it is invertible.
We present an alternative proof of existence based on the random walk on the network. Consider the Markov chain {X_t} on the state space V with transition probabilities

P(X_{t+1} = y | X_t = x) = c_xy / π_x  for y ∼ x.   (2.2)

This Markov chain is a weighted random walk (note that if the c_xy are all 1 then the described chain is the so-called simple random walk). We write P_x and E_x for the probability and expectation, respectively, conditioned on X_0 = x. For a vertex x, define the hitting time of x by τ_x := min{t ≥ 0 | X_t = x}.
Proof 2. We will find a voltage g satisfying g(a) = 0 and g(z) = 1 by setting

g(x) := P_x(τ_z < τ_a).

Indeed, g is harmonic at x ≠ a, z, since by the law of total probability and the Markov property we have

g(x) = (1/π_x) Σ_{y : y∼x} c_xy P_x(τ_z < τ_a | X_1 = y) = (1/π_x) Σ_{y : y∼x} c_xy P_y(τ_z < τ_a) = (1/π_x) Σ_{y : y∼x} c_xy g(y).
For general boundary conditions α, β we define h by

h := α + (β − α) g.

By Claim 2.4, h is a voltage, and clearly h(a) = α and h(z) = β, concluding the proof.
This proof justifies the equality between simple random walk probabilities and voltages that was discussed at the start of this chapter: since the function x → P x (τ z < τ a ) is harmonic on V \ {a, z} and takes values 0, 1 at a, z respectively, it must be equal to the voltage at x when voltages 0, 1 are imposed at a, z.
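This equality is easy to test numerically; the following is a quick Monte Carlo sanity check on a toy example of our own (not from the text). On the path 0 - 1 - 2 - 3 with unit conductances, a = 0 and z = 3, the voltage with h(a) = 0, h(z) = 1 is h(x) = x/3, so P_x(τ_z < τ_a) should be approximately x/3.

```python
import random

nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
a, z = 0, 3

def hit_z_before_a(x):
    # run the simple random walk from x until it first hits a or z
    while x not in (a, z):
        x = random.choice(nbrs[x])
    return x == z

random.seed(0)
trials = 20000
est = sum(hit_z_before_a(1) for _ in range(trials)) / trials  # ~ h(1) = 1/3
```

With 20000 trials the standard error is about 0.003, so the estimate lands well within a few percent of the voltage value 1/3.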
Proof. This follows directly from the construction of h in Proof 2 of Claim 2.8 and the uniqueness statement of Corollary 2.7. Alternatively, one can argue as in the proof of Claim 2.6 that the level sets {x : h(x) = min h} and {x : h(x) = max h} must each contain at least one element of {a, z}.
To prove the second assertion, we note that by Claim 2.8 and Corollary 2.7 it is enough to check the case where h is the voltage with boundary values h(a) = 0 and h(z) = 1. In this case, the condition on x guarantees that the random walk started at x visits a before z with positive probability, and visits z before a with positive probability. By Proof 2 of Claim 2.8 we find that h(x) ∈ (0, 1).
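Proof 1 shows that the voltage solves a linear system; a standard way to compute it, sketched below with our own function names and test values, is the method of relaxations: repeatedly replace the value at each vertex x ≠ a, z by the conductance-weighted average of its neighbours, as in (2.1). The harmonic function is the fixed point of this iteration.

```python
def relax_voltage(conductances, a, z, alpha=0.0, beta=1.0, sweeps=10000):
    """conductances: dict {frozenset({x, y}): c_xy} on a connected graph."""
    nbrs = {}
    for e, c in conductances.items():
        x, y = tuple(e)
        nbrs.setdefault(x, []).append((y, c))
        nbrs.setdefault(y, []).append((x, c))
    h = {v: 0.0 for v in nbrs}
    h[a], h[z] = alpha, beta                  # boundary values stay fixed
    for _ in range(sweeps):
        for x in nbrs:
            if x not in (a, z):
                pi_x = sum(c for _, c in nbrs[x])
                # harmonicity: h(x) = (1/pi_x) * sum of c_xy h(y)
                h[x] = sum(c * h[y] for y, c in nbrs[x]) / pi_x
    return h

# Path a - m - z with resistances 1 and 2 (conductances 1 and 1/2):
# the voltage drop splits proportionally to resistance, so h(m) = 1/3.
net = {frozenset({'a', 'm'}): 1.0, frozenset({'m', 'z'}): 0.5}
h = relax_voltage(net, 'a', 'z')
```

On this two-edge path the iteration converges immediately: one third of the unit voltage drop occurs across the resistance-1 edge and two thirds across the resistance-2 edge.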

Flows and currents
For a graph G = (V, E), denote by E⃗ the set of oriented edges of G: each edge of E is endowed with both of its possible orientations, so that (x, y) ∈ E⃗ if and only if {x, y} ∈ E (and in that case (y, x) ∈ E⃗ as well).

Definition 2.10. Given two distinct vertices a, z ∈ V, a flow from a to z is a function θ : E⃗ → R that is antisymmetric, that is, θ(xy) = −θ(yx), and satisfies the node law Σ_{y : y∼x} θ(xy) = 0 at every x ∈ V \ {a, z}. The strength of θ is defined as ‖θ‖ := Σ_{y : y∼a} θ(ay).
Definition 2.12. Given a voltage h, the current flow θ associated with h is defined on oriented edges by

θ(xy) := c_xy (h(y) − h(x)).

In other words, the voltage difference across an edge is the product of the current flowing along the edge with the resistance of the edge. This is known as Ohm's law. According to this definition, the current flows from vertices with lower voltage to vertices with higher voltage. We will use this convention throughout, but the reader should be advised that some other sources use the opposite convention.
Claim 2.13. The current flow associated with a voltage is indeed a flow.
Proof. The current flow is clearly antisymmetric by definition. To show that it satisfies the node law, observe that for x ≠ a, z, since h is harmonic at x,

Σ_{y : y∼x} θ(xy) = Σ_{y : y∼x} c_xy (h(y) − h(x)) = π_x h(x) − π_x h(x) = 0.

Claim 2.14. The current flow associated with a voltage h satisfies Kirchhoff's cycle law, that is, for every directed cycle e_1, . . . , e_m,

Σ_{i=1}^m r_{e_i} θ(e_i) = 0.

Claim 2.15. Given a flow θ which satisfies the cycle law, there exists a voltage h = h_θ such that θ is the current flow associated with h. Furthermore, this voltage is unique up to an additive constant.
Proof. For every vertex x, let e_1, . . . , e_k be a path from a to x, and define

h(x) := Σ_{i=1}^k r_{e_i} θ(e_i).   (2.3)

Note that since θ satisfies the cycle law, the right-hand side of (2.3) does not depend on the choice of the path, hence h(x) is well defined. Let x ∈ V, and consider a given path e_1, . . . , e_k from a to x (if x = a we take the empty path). To evaluate h(y) for y ∼ x, consider the path e_1, . . . , e_k, (x, y) from a to y, so that h(y) = h(x) + r_xy θ(xy). It follows that h(y) − h(x) = r_xy θ(xy), hence θ(xy) = c_xy (h(y) − h(x)), meaning that θ is indeed the current flow associated with h.
Since θ(xy) = c_xy (h(y) − h(x)) for any x ∼ y, the node law of θ immediately implies that h is harmonic at every x ≠ a, z, that is, h is a voltage. To show that h is unique up to an additive constant, suppose that g : V → R is another function satisfying r_xy θ(xy) = g(y) − g(x). It follows that g(y) − h(y) = g(x) − h(x) for any x ∼ y. Since G is connected, it follows that g − h is a constant function on V.

Claim 2.17. For any flow θ from a to z, the flow out of a equals the flow into z, that is, Σ_{y : y∼a} θ(ay) = Σ_{y : y∼z} θ(yz). Proof. Summing over all vertices,

0 = Σ_{x∈V} Σ_{y : y∼x} θ(xy) = Σ_{y : y∼a} θ(ay) + Σ_{y : y∼z} θ(zy),

where the first equality is due to antisymmetry, and the second equality is due to the node law at every x ≠ a, z. The claim follows again by antisymmetry.
Claim 2.18. If θ_1 and θ_2 are current flows from a to z of equal strength, then θ_1 = θ_2.

Proof. Let θ̂ = θ_1 − θ_2. According to Claim 2.11, θ̂ is a flow. It also satisfies the cycle law, as for every cycle e_1, . . . , e_m,

Σ_{i=1}^m r_{e_i} θ̂(e_i) = Σ_{i=1}^m r_{e_i} θ_1(e_i) − Σ_{i=1}^m r_{e_i} θ_2(e_i) = 0.

Observe in addition that ‖θ̂‖ = ‖θ_1‖ − ‖θ_2‖ = 0. Now, let h = h_θ̂ be the voltage defined in Claim 2.15, chosen so that h(a) = 0. Note that it is harmonic at a, since

Σ_{y : y∼a} c_ay (h(y) − h(a)) = Σ_{y : y∼a} θ̂(ay) = ‖θ̂‖ = 0.

Similarly, using Claim 2.17 it is also harmonic at z. Since h is harmonic everywhere, it is constant by Claim 2.5, and thus h ≡ 0, hence θ̂ ≡ 0 and so θ_1 = θ_2. This last claim prompts the following useful definition.
Definition 2.19. The unit current flow from a to z is the unique current flow from a to z of strength 1.
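Here is a small hedged example of this definition on our own toy network: on the 4-cycle 0-1-2-3 with unit conductances, a = 0 and z = 2, we take the voltage h = (0, 1/2, 1, 1/2), form θ(xy) = c_xy(h(y) − h(x)), verify the node law at the interior vertices, and normalise by the strength to obtain the unit current flow.

```python
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
a, z = 0, 2
h = {0: 0.0, 1: 0.5, 2: 1.0, 3: 0.5}  # harmonic at vertices 1 and 3

# current flow theta(xy) = c_xy * (h(y) - h(x)) with unit conductances
theta = {(x, y): h[y] - h[x] for x in nbrs for y in nbrs[x]}

# node law: the total flow out of any x != a, z should vanish
net_out = {x: sum(theta[(x, y)] for y in nbrs[x]) for x in nbrs}
strength = net_out[a]                   # flow out of the source a
unit_flow = {e: v / strength for e, v in theta.items()}
```

For this symmetric voltage the strength is already 1, so the unit current flow coincides with θ: half a unit of current passes through each of the two arcs of the cycle.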

The effective resistance of a network
Suppose we are given a voltage h on a network G with fixed vertices a and z. Scaling h by a constant multiple causes the associated current flow to scale by the same multiple, while adding a constant to h does not change the current flow at all. Therefore, the strength of the current flow is proportional to the difference h(z) − h(a).

Claim 2.20. Let h be a non-constant voltage and let θ be its associated current flow. Then the ratio

(h(z) − h(a)) / ‖θ‖   (2.4)

does not depend on the choice of h, and is a positive constant.

Proof. Let h_1, h_2 be two non-constant voltages, and let θ_1, θ_2 be their associated current flows. For i = 1, 2, let h̄_i = h_i/‖θ_i‖ and let θ̄_i be the current flow associated with h̄_i (note that since h_i is non-constant, ‖θ_i‖ ≠ 0). Thus ‖θ̄_i‖ = 1. By Claim 2.18 we get θ̄_1 = θ̄_2 and therefore h̄_1 = h̄_2 + c for some constant c by Claim 2.15. It follows that h̄_1(z) − h̄_1(a) = h̄_2(z) − h̄_2(a), that is, the ratio (2.4) takes the same value for h_1 and h_2. To see that this constant is positive, it is enough to check one particular choice of a voltage. By Claim 2.8, let h be the voltage with h(a) = 0 and h(z) = 1. By Claim 2.9 and since G is connected, we have that h(x) > 0 for at least one neighbor x of a. Thus the corresponding current flow θ has ‖θ‖ > 0, making (2.4) positive.
Claim 2.20 is the mathematical manifestation of Ohm's law which states that the voltage difference across an electric circuit is proportional to the current through it. The constant of proportionality is usually called the effective resistance of the circuit.
Definition 2.21. The number defined in (2.4) is called the effective resistance between a and z in the network, and is denoted R_eff(a ↔ z). Its reciprocal is called the effective conductance between a and z and is denoted C_eff(a ↔ z). For examples of computing the effective resistances of networks, see Fig. 2.2.
Notation In most cases we write R eff (a ↔ z) and suppress the notation of which network we are working on. However, when it is important to us what the network is, we will write R eff (a ↔ z; G) for the effective resistance in the network G with unit edge conductances and R eff (a ↔ z; (G, {r e })) for the effective resistance in the network G with edge resistances {r e } e∈E . Furthermore, given disjoint subsets A and Z of vertices in a graph G, we write R eff (A ↔ Z) for the effective resistance between a and z in the network obtained from the original network by identifying all the vertices of A into a single vertex a, and all the vertices of Z into a single vertex z.
Probabilistic interpretation For a vertex x we write τ_x^+ for the stopping time

τ_x^+ := min{t ≥ 1 | X_t = x},

where X_t is the weighted random walk on the network, as defined in (2.2). Note that if X_0 ≠ x then τ_x = τ_x^+ with probability 1.

Claim 2.22. R_eff(a ↔ z) = 1 / (π_a P_a(τ_z < τ_a^+)).
Proof. Consider the voltage h satisfying h(a) = 0 and h(z) = 1, and let θ be the current flow associated with h. Due to uniqueness of h (Corollary 2.7) we have that for x ≠ a, z,

h(x) = P_x(τ_z < τ_a).

Hence the strength of θ satisfies

‖θ‖ = Σ_{y : y∼a} c_ay (h(y) − h(a)) = π_a Σ_{y : y∼a} (c_ay/π_a) P_y(τ_z < τ_a) = π_a P_a(τ_z < τ_a^+).

Since h(z) − h(a) = 1, the definition (2.4) of the effective resistance gives R_eff(a ↔ z) = 1/‖θ‖ = 1/(π_a P_a(τ_z < τ_a^+)).
Network Simplifications Sometimes a network can be replaced by a simpler network, without changing the effective resistance between a pair of vertices.
Claim 2.23. [Parallel law] Conductances add in parallel. Suppose e_1, e_2 are parallel edges between a pair of vertices, with conductances c_1 and c_2, respectively. If we replace them with a single edge e of conductance c_1 + c_2, then the effective resistance between a and z is unchanged.
A demonstration of the parallel law appears in Fig. 2.3.
Proof. Let G′ be the graph in which e_1 and e_2 are replaced by e with conductance c_1 + c_2.
Then it is immediate that if h is any voltage function on G, then it remains a voltage function on the network G′. The claim follows.

Claim 2.24. [Series law] Resistances add in series. Suppose that u ∉ {a, z} is a vertex of degree 2 and that e_1 = (u, v_1) and e_2 = (u, v_2) are the two edges touching u, with edge resistances r_1 and r_2, respectively. If we erase u and replace e_1 and e_2 by a single edge e = (v_1, v_2) of resistance r_1 + r_2, then the effective resistance between a and z is unchanged.
The series law is depicted in Fig. 2.4.
Proof. Denote by G′ the graph in which u is erased and e_1 and e_2 are replaced by a single edge (v_1, v_2) of resistance r_1 + r_2. Let θ be a current flow from a to z in G, and define a flow θ′ from a to z in G′ by putting θ′(e) = θ(e) for any e ≠ e_1, e_2 and θ′(v_1, v_2) = θ(v_1, u). Since u has degree 2, it must be that θ(v_1, u) = θ(u, v_2). Thus θ′ satisfies the node law at any x ∉ {a, z} and ‖θ′‖ = ‖θ‖. Furthermore, since θ satisfies the cycle law, so does θ′. We conclude that θ′ is a current flow of the same strength as θ, and that the voltage differences they induce are the same.
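The parallel and series laws can be verified numerically on a three-vertex network; a minimal sketch in Python (the resistance and conductance values are arbitrary choices for illustration):

```python
# Verify the series and parallel laws on a 3-vertex network:
# a -- u -- z is a series path with resistances r1, r2, and there are
# two parallel edges a -- z with conductances c1, c2.
r1, r2 = 2.0, 3.0
c1, c2 = 0.5, 1.5

# Predicted by the reduction laws: the series path becomes one edge of
# resistance r1 + r2, the parallel pair one edge of conductance c1 + c2,
# and the two resulting parallel edges have conductances that add.
C_pred = (c1 + c2) + 1.0 / (r1 + r2)
R_pred = 1.0 / C_pred

# First-principles computation: solve for the voltage h with h(a) = 0,
# h(z) = 1.  The only interior vertex is u, where h must be harmonic.
c_au, c_uz = 1.0 / r1, 1.0 / r2
h_u = (c_au * 0.0 + c_uz * 1.0) / (c_au + c_uz)

# Strength of the associated current flow = net current leaving a.
strength = c_au * (h_u - 0.0) + (c1 + c2) * (1.0 - 0.0)
R_direct = 1.0 / strength

assert abs(R_pred - R_direct) < 1e-12
print(R_pred)   # about 0.4545
```

Both routes give R_eff(a ↔ z) = 1/2.2, as the reduction laws predict.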
The operation of gluing a subset of vertices S ⊂ V consists of identifying the vertices of S into a single vertex while keeping all the edges and their conductances. In this process we may generate parallel edges or loops.

Claim 2.25. Gluing a set of vertices on which the voltage is constant does not change the effective resistance between a and z.

Proof. This is immediate since the voltage on the glued graph is still harmonic.
Example: Spherically Symmetric Tree. Let Γ be a spherically symmetric tree, that is, a rooted tree in which all vertices at the same distance from the root have the same number of children. Denote by ρ the root of the tree, and let {d_n}_{n≥1} be a sequence of positive integers such that every vertex at distance n − 1 from ρ has d_n children. Denote by Γ_n the set of all vertices of height n, so that |Γ_n| = d_1 ⋯ d_n. We would like to calculate R_eff(ρ ↔ Γ_n). Due to the tree's symmetry, all vertices at the same level have the same voltage, and therefore by Claim 2.25 we may identify them. Our simplified network now has one vertex for each level, denoted {v_i}_{i∈N} (where ρ = v_0), with |Γ_{n+1}| edges between v_n and v_{n+1}. Using the parallel law (Claim 2.23), we can reduce each such set of parallel edges to a single edge of resistance 1/|Γ_{n+1}|; then, using the series law (Claim 2.24), we get

R_eff(ρ ↔ Γ_n) = Σ_{i=1}^{n} 1/|Γ_i| = Σ_{i=1}^{n} 1/(d_1 ⋯ d_i),    (2.6)

see Figure 2.5. By Claim 2.22 we learn that P_ρ(τ_n < τ_ρ^+) = 1/(π_ρ R_eff(ρ ↔ Γ_n)), where τ_n is the hitting time of Γ_n for the random walk on Γ. Observe that

P_ρ(τ_n < τ_ρ^+ for all n) = P_ρ(X_t never returns to ρ),

so by (2.6) we reach an interesting dichotomy. If Σ_{i=1}^{∞} 1/(d_1 ⋯ d_i) = ∞, then the random walker returns to ρ with probability 1, and hence returns to ρ infinitely often almost surely. If Σ_{i=1}^{∞} 1/(d_1 ⋯ d_i) < ∞, then with positive probability the walker never returns to ρ, and hence visits ρ only finitely many times almost surely.
The former graph is called a recurrent graph and the latter is called transient. We will get back to this dichotomy in Section 2.5.
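The dichotomy is easy to explore numerically: the partial sums of (2.6) either diverge (recurrence) or stay bounded (transience). A short sketch in Python, with the degree sequences chosen for illustration:

```python
from itertools import repeat, islice

def resistance_to_level(degrees, n):
    """R_eff(rho <-> Gamma_n) for a spherically symmetric tree in which
    every vertex at distance i - 1 from the root has d_i children, so
    |Gamma_i| = d_1 * ... * d_i.  By the parallel and series laws this is
    sum_{i=1}^{n} 1 / (d_1 * ... * d_i)."""
    total, prod = 0.0, 1.0
    for d in islice(degrees, n):
        prod *= d
        total += 1.0 / prod
    return total

# One child per vertex (a ray): partial sums grow without bound, so
# R_eff(rho <-> infinity) = infinity and the walk is recurrent.
print(resistance_to_level(repeat(1), 100))   # 100.0

# Binary tree: the partial sums converge (to 1), so the walk is transient.
print(resistance_to_level(repeat(2), 100))   # close to 1.0
```

Any degree sequence can be passed as an iterable, so the boundary cases of the dichotomy are easy to probe.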

The commute time identity
The following lemma shows that the effective resistance between a and z is proportional to the expected time it takes the random walk started at a to visit z and then return to a; in other words, the expected commute time between a and z. We will use this lemma only in Chapter 6, so the impatient reader can skip this section and return to it later.

Lemma 2.26 (Commute time identity). Let G be a finite network and let a ≠ z be two vertices. Then

E_a[τ_z] + E_z[τ_a] = 2c_G · R_eff(a ↔ z),

where c_G = Σ_{e∈E} c_e is the total conductance of the network.

Proof. Let G_z(a, x) be the Green's function, that is, the expected number of visits to x of the walk started at a and stopped at z, and note that E_a[τ_z] = Σ_{x∈V} G_z(a, x). It is straightforward to show that the function ν(x) = G_z(a, x)/π_x is harmonic in V \ {a, z}. Also, we have that G_z(a, z) = 0 and G_z(a, a) = 1/P_a(τ_z < τ_a^+) = π_a R_eff(a ↔ z), by Claim 2.22. Thus, ν is a voltage function with boundary conditions ν(z) = 0 and ν(a) = R_eff(a ↔ z), which satisfies E_a[τ_z] = Σ_{x∈V} π_x ν(x). Similarly, the same analysis for E_z[τ_a] yields the same formula with the voltage function η, which has boundary conditions η(z) = R_eff(a ↔ z) and η(a) = 0. Therefore, η(x) = ν(a) − ν(x) for all x ∈ V, since both sides are harmonic functions in V \ {a, z} that receive the same boundary values. This implies that

E_a[τ_z] + E_z[τ_a] = Σ_{x∈V} π_x (ν(x) + η(x)) = ν(a) Σ_{x∈V} π_x = 2c_G · R_eff(a ↔ z),

since Σ_x π_x = 2c_G.
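The commute time identity E_a[τ_z] + E_z[τ_a] = 2c_G · R_eff(a ↔ z), with c_G the total edge conductance, can be verified exactly on a small unit-conductance network; a sketch in Python (the graph and the choice a = 1, z = 3 are ours for illustration):

```python
# Commute time identity on a 4-cycle 0-1-2-3 with the chord 0-2,
# unit conductances, a = 1 and z = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
nbrs = {v: [] for v in range(n)}
for u, v in edges:
    nbrs[u].append(v); nbrs[v].append(u)

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(m):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][m] / M[i][i] for i in range(m)]

def hitting_times(z):
    """E_x[tau_z] for all x, via first-step analysis: t(z) = 0 and
    t(x) = 1 + average of t over the neighbours of x otherwise."""
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for x in range(n):
        A[x][x] = 1.0
        if x != z:
            for y in nbrs[x]:
                A[x][y] -= 1.0 / len(nbrs[x])
            b[x] = 1.0
    return solve(A, b)

a, z = 1, 3
commute = hitting_times(z)[a] + hitting_times(a)[z]
# By symmetry the chord 0-2 carries no current, so R_eff(1 <-> 3) = 1
# (two parallel paths of resistance 2), and c_G = 5 unit edges.
print(commute)   # close to 10 = 2 * 5 * 1
```

Solving the two hitting-time systems gives E_1[τ_3] = E_3[τ_1] = 5, so the commute time is exactly 2 · c_G · R_eff = 10.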

Energy
So far we have seen how to compute the effective resistance of a network via harmonic functions and current flows. However, in typical situations it is hard to find a flow satisfying the cycle law. Luckily, an extremely useful property of the effective resistance is that it can be represented by a variational problem. Our physics intuition asserts that the energy of the unit current flow is minimal among all unit flows from a to z. The notion of energy can be made precise and will allow us to obtain valuable monotonicity properties. For instance, removing any edge from an electric network can only increase its effective resistance. Hence, any recurrent graph remains recurrent after removing any subset of edges from it. Two variational problems govern the effective resistance: Thomson's principle, which is typically used to bound the effective resistance from above, and Dirichlet's principle, allowing us to bound it from below.

Definition 2.27. The energy of a flow θ is

E(θ) = ½ Σ_{x∼y} r_xy θ(xy)² = Σ_{e∈E} r_e θ(e)².

Note that in the second sum we sum over undirected edges, but since θ(xy)² = θ(yx)², this is well defined.

Theorem 2.28 (Thomson's principle). Let G be a finite network and let a ≠ z be two vertices. Then

R_eff(a ↔ z) = inf{E(θ) : ‖θ‖ = 1, θ is a flow from a to z},

and the unique minimizer is the unit current flow.
Proof. First, we will show that the energy of the unit current flow is the effective resistance. Let I be the unit current flow, and h the corresponding (Claim 2.15) voltage function, so that I(xy) = c_xy(h(y) − h(x)). Summing over directed edges,

E(I) = ½ Σ_{x∼y} I(xy)(h(y) − h(x)) = −½ Σ_x h(x) Σ_{y:y∼x} I(xy) + ½ Σ_y h(y) Σ_{x:x∼y} I(xy).

Observe that in the second term of the right hand side, for every y ≠ a, z the sum over all x ∼ y is 0 due to the node law, hence the entire term equals −½(h(a) − h(z)). From antisymmetry of I, the first term on the right hand side also equals −½(h(a) − h(z)), hence the right hand side equals altogether h(z) − h(a) = R_eff(a ↔ z).
We will now show that every other flow J with ‖J‖ = 1 has E(J) ≥ E(I). Set θ = J − I, which is a flow of strength 0. Then

E(J) = Σ_e r_e (I(e) + θ(e))² = E(I) + E(θ) + 2 Σ_e r_e I(e)θ(e) = E(I) + E(θ),

where the last equality follows from the same reasoning as before: writing r_e I(e) in terms of the voltage differences of h and summing by parts, the cross term vanishes since θ satisfies the node law at every vertex and has strength 0. We conclude that E(J) ≥ E(I), as required, and that equality holds if and only if E(θ) = 0, that is, if and only if J = I.

Corollary 2.29 (Rayleigh monotonicity). Increasing the resistance of any edge cannot decrease R_eff(a ↔ z). In particular, removing an edge cannot decrease it.

Proof. Increasing resistances pointwise can only increase the energy of any fixed flow. This inequality is preserved while taking the infimum over all flows of strength 1. Applying Theorem 2.28 finishes the proof.
Corollary 2.30. Gluing vertices cannot increase the effective resistance between a and z.
Proof. Denote by G the original network and by G′ the network obtained by gluing a subset of vertices. Then every flow θ on G (viewed as a function on the edges) is also a flow on G′. Hence the infimum in Theorem 2.28 for G′ is taken over a larger set of flows.
Definition 2.31. The energy of a function h : V → R, denoted by E(h), is defined to be

E(h) = Σ_{e=(x,y)∈E} c_e (h(y) − h(x))².

Compare the following lemma with Thomson's principle (Theorem 2.28).
Lemma 2.32 (Dirichlet's principle). Let G be a finite network with source a and sink z. Then

1/R_eff(a ↔ z) = inf{E(h) : h(a) = 0, h(z) = 1}.

Proof. The infimum is attained when h is the harmonic function taking the values 0 and 1 at a, z respectively. The reason is that if there exists v ≠ a, z with

h(v) ≠ (1/π_v) Σ_{y:y∼v} c_vy h(y),    (2.7)

then we can change the value of h at v to be the right hand side of (2.7) and the energy will only decrease. One way to see this is that if X is a random variable, then the value E[X] minimizes the function f(x) = E[(X − x)²].
Let h be that harmonic function and let I be its current flow, so I(xy) = c_xy(h(y) − h(x)) and ‖I‖ = 1/R_eff(a ↔ z). Write Î = R_eff(a ↔ z) · I, so that ‖Î‖ = 1. By Thomson's principle, E(Î) = R_eff(a ↔ z), since Î is the unit current flow. Hence

E(h) = E(I) = E(Î)/R_eff(a ↔ z)² = 1/R_eff(a ↔ z),

completing the proof.
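Both variational principles can be checked on a concrete network by computing the voltage, the unit current flow, and their energies; a sketch in Python (the network and the perturbation are our choices for illustration):

```python
# Thomson's and Dirichlet's principles on the network with edges
# 0-1, 1-2, 2-3, 3-0, 0-2 (unit conductances), a = 0, z = 3.
# Series/parallel reductions give R_eff(0 <-> 3) = 5/8.
nbrs = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Voltage h with h(0) = 0, h(3) = 1, harmonic elsewhere (Gauss-Seidel).
h = {0: 0.0, 1: 0.5, 2: 0.5, 3: 1.0}
for _ in range(200):
    for x in (1, 2):
        h[x] = sum(h[y] for y in nbrs[x]) / len(nbrs[x])

strength = sum(h[y] - h[0] for y in nbrs[0])     # net current leaving a
R_eff = (h[3] - h[0]) / strength                 # Ohm's law (Claim 2.20)

I = {e: (h[e[1]] - h[e[0]]) / strength for e in edges}  # unit current flow
thomson = sum(I[e] ** 2 for e in edges)          # energy of the unit flow
dirichlet = sum((h[e[1]] - h[e[0]]) ** 2 for e in edges)

assert abs(R_eff - 5 / 8) < 1e-9
assert abs(thomson - R_eff) < 1e-9        # Thomson: E(I) = R_eff
assert abs(dirichlet - 1 / R_eff) < 1e-9  # Dirichlet: E(h) = 1/R_eff

# Thomson minimality: adding a cycle flow keeps strength 1 but adds energy.
eps = 0.1
J = dict(I)
for e, s in [((0, 1), 1), ((1, 2), 1), ((0, 2), -1)]:   # cycle 0-1-2-0
    J[e] += s * eps
assert sum(J[e] ** 2 for e in edges) > thomson
print(R_eff)   # about 0.625
```

The cycle perturbation preserves the node law and the strength, so its strictly larger energy illustrates uniqueness of the minimizer.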

Infinite graphs
Let G = (V, E) be an infinite connected graph with edge resistances {r_e}_{e∈E}. We assume henceforth that this network is locally finite, that is, for any vertex x ∈ V we have Σ_{y:y∼x} c_xy < ∞. Let {G_n} be a sequence of finite subgraphs of G such that ⋃_{n∈N} G_n = G and G_n ⊂ G_{n+1}; we call such a sequence an exhaustive sequence of G. Identify all vertices of G \ G_n with a single vertex z_n.
Claim 2.33. Given an exhaustive sequence {G_n} of G, the limit lim_{n→∞} R_eff(a ↔ z_n; G_n ∪ {z_n}) exists.
Proof. The graph G_n ∪ {z_n} can be obtained from G_{n+1} ∪ {z_{n+1}} by gluing the vertices of G_{n+1} \ G_n with z_{n+1} and labeling the new vertex z_n. By Corollary 2.30, the effective resistance R_eff(a ↔ z_n; G_n ∪ {z_n}) is increasing in n, so the limit exists (possibly ∞).

Claim 2.34. The limit in Claim 2.33 does not depend on the choice of the exhaustive sequence.

Proof. Indeed, let {G_n} and {G′_n} be two exhaustive sequences of G. We can find subsequences {i_k}_{k≥1} and {j_k}_{k≥1} such that G_{i_1} ⊂ G′_{j_1} ⊂ G_{i_2} ⊂ G′_{j_2} ⊂ ⋯. Since the interlaced sequence G_{i_1}, G′_{j_1}, G_{i_2}, G′_{j_2}, … is itself an exhaustive sequence of G, the limit of effective resistances for this sequence exists and equals the limits of effective resistances for the subsequences {G_{i_k}} and {G′_{j_k}}. In turn, these are equal to the limits of effective resistances for the original sequences {G_n} and {G′_n}, respectively.
Definition 2.35. In an infinite network, the effective resistance between a vertex a and ∞ is

R_eff(a ↔ ∞) = lim_{n→∞} R_eff(a ↔ z_n; G_n ∪ {z_n})

for any exhaustive sequence {G_n}. We are now able to address the question of recurrence versus transience of a graph systematically. Recall the definition of τ_x^+ in (2.5). In an infinite network we define τ_a^+ = ∞ when there is no time t ≥ 1 such that X_t = a.

Definition 2.36. A network (G, {r_e}_{e∈E}) is called recurrent if P_a(τ_a^+ = ∞) = 0, that is, if the probability that the random walker started at a never returns to a is 0. Otherwise, it is called transient.
Observe that since G is connected, if P_a(τ_a^+ = ∞) = 0 for one vertex a, then it holds for all vertices of the network. As we have seen, if n is large enough so that a ∈ G_n, then P_a(τ_{z_n} < τ_a^+) = 1/(π_a R_eff(a ↔ z_n; G_n ∪ {z_n})), and taking n → ∞ gives

R_eff(a ↔ ∞) = 1/(π_a P_a(τ_a^+ = ∞)),

with the convention that 1/0 = ∞.
Definition 2.37. Let G be an infinite network. A function θ : E(G) → R is a flow from a to ∞ if it is antisymmetric and satisfies the node law at each vertex v ≠ a.
The following follows easily from Theorem 2.28; we omit the proof.

Corollary 2.39. Let G be an infinite graph. The following are equivalent:

1. G is transient.
2. There exists a vertex a ∈ V such that R_eff(a ↔ ∞) < ∞ (and hence all vertices satisfy this).
3. There exists a vertex a ∈ V (and hence all vertices) and a unit flow θ from a to ∞ with E(θ) < ∞.
We will now develop a useful method for bounding effective resistances from below. This will lead us to a popular sufficient criterion for recurrence in Corollary 2.43.

Definition 2.40. A cutset Γ ⊆ E(G) separating a from z is a set of edges such that every path from a to z must use an edge of Γ.

Claim 2.41. Let θ be a flow from a to z and let Γ be a cutset separating a from z. Then Σ_{e∈Γ} |θ(e)| ≥ ‖θ‖.

Proof. Denote by Z the set of vertices separated from a by Γ. Denote by G′ the network in which Z is identified to a single vertex x and all edges having both endpoints in Z are removed. Now, the restriction of θ to the edges of the new network is a flow from a to x. By Claim 2.17, we have Σ_{y:y∼x} θ(yx) = ‖θ‖. Also, all edges incident to x must be in Γ, since otherwise x is not separated from a by Γ. Therefore Σ_{e∈Γ} |θ(e)| ≥ Σ_{y:y∼x} θ(yx) = ‖θ‖.

Theorem 2.42 (Nash-Williams inequality). Let {Γ_n} be a collection of disjoint cutsets, each separating a from z. Then

R_eff(a ↔ z) ≥ Σ_n (Σ_{e∈Γ_n} c_e)^{−1}.

Proof. Let θ be a unit flow from a to z. By the Cauchy-Schwarz inequality, for each n,

(Σ_{e∈Γ_n} c_e)(Σ_{e∈Γ_n} r_e θ(e)²) ≥ (Σ_{e∈Γ_n} |θ(e)|)² ≥ ‖θ‖² = 1,

where the last inequality holds since Γ_n is a cutset, by Claim 2.41. Summing over n and using that the cutsets are disjoint gives E(θ) ≥ Σ_n (Σ_{e∈Γ_n} c_e)^{−1}, and the theorem follows from Thomson's principle (Theorem 2.28).

Consider now an infinite network G = (V, E). We say that Γ ⊂ E is a cutset separating a from ∞ if any infinite simple path from a must intersect Γ.

Corollary 2.43 (Nash-Williams criterion). Let {Γ_n} be a collection of disjoint cutsets, each separating a from ∞. If Σ_n (Σ_{e∈Γ_n} c_e)^{−1} = ∞, then the network is recurrent.

Proof. For each m, all but finitely many of the cutsets separate a from z_m in G_m ∪ {z_m}, so Theorem 2.42 gives R_eff(a ↔ ∞) ≥ Σ_n (Σ_{e∈Γ_n} c_e)^{−1} = ∞, and Corollary 2.39 finishes the proof.

Example 2.44 (Z² is recurrent). Define Γ_n as the set of vertical edges {(x, y), (x, y + 1)} with |x| ≤ n and min{|y|, |y + 1|} = n, together with the horizontal edges {(x, y), (x + 1, y)} with |y| ≤ n and min{|x|, |x + 1|} = n; see Figure 2.6. Then {Γ_n} is a collection of disjoint cutsets separating 0 from ∞. Also, |Γ_n| = 4(2n + 1) and therefore Σ_n (Σ_{e∈Γ_n} c_e)^{−1} = Σ_n 1/(4(2n + 1)) = ∞. We deduce by Corollary 2.43 that Z² is recurrent.
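The cutset bound of Example 2.44 is easy to evaluate numerically: the lower bound on R_eff(0 ↔ ∞) grows logarithmically and hence diverges. A short Python sketch:

```python
# Nash-Williams lower bound for Z^2 (Example 2.44): the disjoint cutsets
# Gamma_n consist of |Gamma_n| = 4(2n + 1) unit-conductance edges, so
#   R_eff(0 <-> infinity) >= sum_n 1/(4(2n + 1)).
def nash_williams_bound(N):
    """Partial sum of the cutset bound over the first N cutsets."""
    return sum(1.0 / (4 * (2 * n + 1)) for n in range(N))

for N in (10, 100, 1000, 10000):
    print(N, round(nash_williams_bound(N), 3))

# The partial sums grow like (1/8) * log N and hence diverge,
# which is exactly the recurrence of Z^2.
assert nash_williams_bound(10000) > nash_williams_bound(100) + 0.5
```

Between N = 100 and N = 10000 the bound grows by roughly (1/8)·log(100) ≈ 0.58, matching the logarithmic rate.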

Random paths
We now present the method of random paths, which is one of the most useful methods for generating unit flows on a network and bounding their energy. In fact, it is possible to show that the electric flow can be represented by such a random path. Suppose G is a network with fixed vertices a, z and μ is a probability measure on the set of paths from a to z.

Claim 2.46. For each directed edge e, set

θ(e) = μ(the path traverses e) − μ(the path traverses the reversal of e),

where by e and its reversal we mean the two orientations of an edge of G. Then θ is a flow from a to z with ‖θ‖ = 1.
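Claim 2.46 can be checked directly on a toy example: any probability measure on paths from a to z induces an antisymmetric edge function that satisfies the node law and has strength 1. A sketch in Python (the three paths and their probabilities are arbitrary choices for illustration):

```python
from collections import defaultdict

# A probability measure mu on three directed paths from 'a' to 'z'.
paths = {
    ('a', 'b', 'z'): 0.5,
    ('a', 'c', 'z'): 0.3,
    ('a', 'b', 'c', 'z'): 0.2,
}

# theta(e) = mu(e traversed) - mu(reversal of e traversed).
theta = defaultdict(float)
for path, p in paths.items():
    for x, y in zip(path, path[1:]):
        theta[(x, y)] += p
        theta[(y, x)] -= p

def net_out(v):
    """Net flow out of vertex v."""
    return sum(val for (x, y), val in theta.items() if x == v)

assert abs(net_out('a') - 1.0) < 1e-12   # strength ||theta|| = 1
assert abs(net_out('z') + 1.0) < 1e-12
for v in ('b', 'c'):
    assert abs(net_out(v)) < 1e-12       # node law at interior vertices
print(dict(theta))
```

The construction is linear in μ, so any mixture of path measures again yields a unit flow.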
An example of the use of this method is the following classical result.
Theorem 2.47 (Pólya). Z³ is transient.

Proof. Write V_R for the set of vertices of Z³ within Euclidean distance R of the origin, and ∂V_R for its outer boundary. We construct a random path from 0 to ∂V_R by choosing a uniform random point p in ∂B_R = {(x, y, z) : x² + y² + z² = R²}, drawing the straight line between 0 and p in R³, considering the set of points at distance at most 10 in R³ from this line, and then choosing (in some arbitrary fashion) a path in Z³ which is contained inside this set; denote by μ the law of this path. The non-optimal constant 10 was chosen in order to guarantee that such a discrete path exists for any point p ∈ ∂B_R. By Claim 2.46, the measure μ corresponds to a unit flow from 0 to ∂V_R. To estimate the energy of this flow, we note that if e is an edge at distance r ≤ R from the origin, then the probability that it is traversed by a path drawn from μ is O(r^{−2}). Furthermore, there are O(r²) such edges. Hence the energy of the flow is at most

Σ_{r=1}^{R} O(r²) · O(r^{−2})² = O(Σ_{r=1}^{R} r^{−2}) ≤ C

for some constant C < ∞ which does not depend on R. By Claim 2.46 and Theorem 2.28 we learn that R_eff(0 ↔ ∂V_R) ≤ C for all R, and so by Corollary 2.39 we deduce that Z³ is transient.

Exercises
1. Let G_z(a, x) be the Green's function, that is, the expected number of visits to x of the weighted random walk started at a and stopped at z: G_z(a, x) = E_a[Σ_{t=0}^{τ_z − 1} 1{X_t = x}]. Show that the function h(x) = G_z(a, x)/π(x) is a voltage.
2. Show that the effective resistance satisfies the triangle inequality. That is, for any three vertices x, y, z we have R_eff(x ↔ z) ≤ R_eff(x ↔ y) + R_eff(y ↔ z).

3. Let a, z be two vertices of a finite network and let τ_a, τ_z be the first visit times to a and z, respectively, of the weighted random walk. Show that for any vertex x

P_x(τ_z < τ_a) = (R_eff(x ↔ a) + R_eff(a ↔ z) − R_eff(x ↔ z)) / (2 R_eff(a ↔ z)).

4. Consider the following tree T. At height n it has 2^n vertices (the root is at height n = 0), and if (v_1, …, v_{2^n}) are the vertices at level n, we make it so that v_k has 1 child at level n + 1 if 1 ≤ k ≤ 2^{n−1}, and v_k has 3 children at level n + 1 for all other k.
(a) Show that T is recurrent.
(b) Show that for any disjoint edge cutsets {Π_n} we have Σ_n |Π_n|^{−1} < ∞. (So the Nash-Williams criterion for recurrence is not sharp.)

5. (a) Let G be a finite planar graph with two distinct vertices a ≠ z such that a, z are on the outer face. Consider an embedding of G so that a is the leftmost point on the real axis and z is the rightmost point on the real axis. Split the outer face of G into two by adding the ray from a to −∞ and the ray from z to +∞. Consider the dual graph G* of G and write a* and z* for the two vertices of G* corresponding to the two parts of the split outer face. Assume that all edge resistances are 1. Show that R_eff(a ↔ z; G) · R_eff(a* ↔ z*; G*) = 1.
3 :: The circle packing theorem

3.1 Planar graphs, maps and embeddings

Definition 3.1. A graph G = (V, E) is planar if it can be properly drawn in the plane, that is, if there exists a mapping sending distinct vertices to distinct points of R² and edges to continuous curves between the corresponding vertices so that no two curves intersect, except at the vertices they share. We call such a mapping a proper drawing of G.

Remark 3.2.
A single planar graph has infinitely many drawings. Intuitively, some may seem similar to one another, while others seem different: for example, two drawings may induce different clockwise orders on the edges emanating from a given vertex. The following definition gives a precise sense to this intuitive equivalence / non-equivalence of drawings.

Definition 3.3.
A planar map is a graph endowed with a cyclic permutation of the edges incident to each vertex, such that there exists a proper drawing in which the clockwise order of the curves touching the image of a vertex respects that cyclic permutation.
The combinatorial structure of a planar map allows us to define faces directly (that is, without mentioning the drawing). Consider each edge of the graph as directed in both ways, and say that a directed edge e precedes f (or, equivalently, f succeeds e) if there exist vertices v, x, y such that e = (x, v), f = (v, y), and y is the successor of x in the cyclic permutation σ_v. We say that e, f belong to the same face if there exists a finite directed path e_1, …, e_m in the graph with e_i preceding e_{i+1} for i = 1, …, m − 1 and such that either e = e_1 and f = e_m, or f = e_1 and e = e_m. This is readily seen to be an equivalence relation and we call each equivalence class a face. Even though a face is a set of directed edges, we frequently ignore the orientations and consider a face as the set of corresponding undirected edges. Each (undirected) edge is hence incident to either one or two faces.

We will use the following classical formula.

Theorem 3.4 (Euler's formula). Let G be a finite connected planar map with n vertices, m edges and f faces. Then n − m + f = 2.

We now state the main theorem we will discuss and use throughout this course. Its proof is presented in the next section.
Theorem 3.5 (The circle packing theorem, [50]). Given any finite simple planar map G = (V, E), V = {v 1 , . . . , v n }, there exist n circles in R 2 , C 1 , . . . , C n , with disjoint interiors, such that C i is tangent to C j if and only if {i, j} ∈ E. Furthermore, for every vertex v i , the clockwise order of the circles tangent to C i agrees with the cyclic permutation of v i 's neighbors in the map. First note that it suffices to prove the theorem for triangulations, that is, simple planar maps in which every face has precisely three edges. Indeed, in any planar map we may add a single vertex inside each face and connect it to all vertices bounding that face. The obtained map is a triangulation, and after applying the circle packing theorem for triangulations, we may remove the circles corresponding to the added vertices, obtaining a circle packing of the original map which respects its cyclic permutations. This is depicted in Fig. 3.3.
Thus, it suffices to prove Theorem 3.5 for finite triangulations. In this case an important uniqueness statement also holds.   Theorem 3.6. Let G = (V, E) be a finite triangulation on vertex set V = {v 1 , . . . , v n } and assume that {v 1 , v 2 , v 3 } form a face. Then for any three positive numbers ρ 1 , ρ 2 , ρ 3 , there exists a circle packing C 1 , . . . , C n as in Theorem 3.5 with the additional property that C 1 , C 2 , C 3 are mutually tangent, form the outer face, and have radii ρ 1 , ρ 2 , ρ 3 , respectively. Furthermore, this circle packing is unique, up to translations and rotations of the plane.

Proof of the circle packing theorem
We here prove Theorem 3.6, which implies Theorem 3.5 as explained above. We therefore assume from now on that our map is a triangulation. Denote by n, m and f the number of vertices, edges and faces of the map respectively, and observe that 3f = 2m, since each edge is counted in exactly two faces and each face is bounded by exactly three edges. Therefore, by Euler's formula (Theorem 3.4), we have that m = 3n − 6 and f = 2n − 4. We assume the vertex set is {v_1, …, v_n}, that {v_1, v_2, v_3} is the outer face, and that ρ_1, ρ_2, ρ_3 are three positive numbers that will eventually be the radii of the outer circles C_1, C_2, C_3. Denote by F• the set of faces of the map except the outer face, and for a subset of vertices A let F(A) be the set of inner faces with at least one vertex in A. We write F(v) when we mean F({v}).
Given a vector r = (r_1, …, r_n) ∈ (0, ∞)^n and an inner face f bounded by the vertices v_i, v_j, v_k, we write α_f^r(v_j) for the angle of v_j in the triangle v_i v_j v_k created by connecting the centers of three mutually tangent circles C_i, C_j, C_k of radii r_i, r_j and r_k (that is, in a triangle with side lengths r_i + r_j, r_j + r_k and r_k + r_i). This number can be calculated using the cosine formula; however, we will not use this formula directly. For every j ∈ {1, …, n} we define

σ_r(v_j) = Σ_{f ∈ F(v_j)} α_f^r(v_j)

to be the sum of angles at v_j with respect to r. Let θ_1, θ_2, θ_3 be the angles formed at the centers of three mutually tangent circles C_1, C_2, C_3 of radii ρ_1, ρ_2, ρ_3. Equivalently, these are the angles of a triangle with edge lengths ρ_1 + ρ_2, ρ_2 + ρ_3 and ρ_3 + ρ_1. If the vector r were indeed the vector of radii of a circle packing of the map satisfying Theorem 3.6, then we would have

σ_r(v_j) = 2π for every internal vertex v_j, and σ_r(v_i) = θ_i for i = 1, 2, 3,    (3.2)

and additionally (r_1, r_2, r_3) = (ρ_1, ρ_2, ρ_3). The proof is split into three parts: 1. Show that there exists a vector r ∈ (0, ∞)^n satisfying (3.2); 2. Given such r, show that a circle packing with these radii exists and that (r_1, r_2, r_3) is a positive multiple of (ρ_1, ρ_2, ρ_3); furthermore, this circle packing is unique up to translations and rotations.
3. Show that r is unique up to scaling all entries by a constant factor.
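The angle α_f^r(v_j) is concretely computable with the cosine formula, and the monotonicity behind Observation 3.8 can be observed numerically; a sketch in Python (the radii values are arbitrary choices for illustration):

```python
import math

def angle(r_j, r_i, r_k):
    """Angle at the vertex of radius r_j in the triangle formed by the
    centers of three mutually tangent circles of radii r_j, r_i, r_k
    (side lengths r_i + r_j, r_j + r_k, r_k + r_i; cosine formula)."""
    a, b, c = r_i + r_j, r_j + r_k, r_k + r_i   # c is opposite v_j
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

r = (1.0, 2.0, 3.0)
total = (angle(r[0], r[1], r[2]) + angle(r[1], r[2], r[0])
         + angle(r[2], r[0], r[1]))
assert abs(total - math.pi) < 1e-12    # angles of a triangle sum to pi

# Increasing a vertex's radius while its neighbours' radii stay put
# strictly decreases the angle at that vertex.
assert angle(2.5, 2.0, 3.0) < angle(1.0, 2.0, 3.0)

print(angle(1.0, 2.0, 3.0))   # pi/2: radii 1, 2, 3 give a 3-4-5 triangle
```

Radii 1, 2, 3 produce side lengths 3, 4, 5, so the angle at the smallest circle is exactly a right angle, a convenient sanity check.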
Proof of Theorem 3.6, step 1: Finding the radii vector r

Observation 3.7. For every r, Σ_{j=1}^{n} σ_r(v_j) = π|F•| = (2n − 5)π.

Figure 3.4: When the radius of a circle corresponding to a vertex increases, while the radii of the circles corresponding to its two neighbors in a given face decrease, the vertex's angle in the corresponding triangle decreases (see Observation 3.8).

Proof. Follows immediately since each inner face f bounded by the vertices v_i, v_j, v_k contributes α_f^r(v_i) + α_f^r(v_j) + α_f^r(v_k) = π to the sum, and |F•| = f − 1 = 2n − 5.
We now set

δ_r(v_j) = σ_r(v_j) − 2π for every internal vertex v_j, and δ_r(v_i) = σ_r(v_i) − θ_i for i = 1, 2, 3.

Using this notation, our goal is to find r for which δ_r ≡ 0. Since θ_1 + θ_2 + θ_3 = π, it follows from Observation 3.7 that for every r,

Σ_{j=1}^{n} δ_r(v_j) = (2n − 5)π − 2π(n − 3) − π = 0.

We define E_r = Σ_{j=1}^{n} |δ_r(v_j)|. We would like to find r for which E_r = 0. We will use the following geometric observation; see Fig. 3.4.

Observation 3.8. Let r = (r_1, …, r_n) and r′ = (r′_1, …, r′_n), and let f ∈ F• be bounded by v_i, v_j, v_k.

• If r′_j > r_j, r′_i ≤ r_i and r′_k ≤ r_k, then α_f^{r′}(v_j) < α_f^r(v_j).
• If r′_j ≥ r_j, r′_i ≤ r_i and r′_k ≤ r_k, then α_f^{r′}(v_j) ≤ α_f^r(v_j).

Proof. A proof using the cosine formula is routine and is omitted.
We now define an iterative algorithm whose input and output are both vectors of radii normalized to have ℓ1-norm 1. We start with the vector r^(0) = (1/n, …, 1/n), and given r = r^(t) we construct r′ = r^(t+1). Write δ = δ_r and δ′ = δ_{r′}, and similarly E = E_r and E′ = E_{r′}. We begin by ordering the set of reals {δ(v_i) : 1 ≤ i ≤ n}. If δ ≡ 0 we are done; otherwise, we may choose s ∈ R such that both the set S = {v : δ(v) > s} and its complement V \ S are non-empty, and such that the gap

gap_δ(S) = min_{v∈S} δ(v) − max_{v∉S} δ(v)

is maximal over all such s. See Fig. 3.5 for an illustration.
Once we choose S, a step of the algorithm consists of two parts:

1. For some λ ∈ (0, 1) to be chosen later, we set r_λ(v) = λr(v) for v ∉ S and r_λ(v) = r(v) for v ∈ S.

2. We normalize r_λ so that the sum of its entries is 1, letting r̄_λ be the normalized vector. Note that this normalization does not change the vector δ.
We will choose an appropriate λ that will decrease all values of δ(v) for v ∈ S, increase all values of δ(v) for v / ∈ S, and will close the gap. This will be made formal in the following two claims.
Claim 3.9. For every λ ∈ (0, 1), we have δ_{r_λ}(v) ≤ δ_r(v) for every v ∈ S and δ_{r_λ}(v) ≥ δ_r(v) for every v ∉ S.

Claim 3.10. There exists λ ∈ (0, 1) for which min_{v∈S} δ_{r_λ}(v) ≤ max_{v∉S} δ_{r_λ}(v), that is, for which the gap between S and V \ S closes.

Proof of Claim 3.9. Consider v_j ∉ S and an inner face bounded by v_i, v_j, v_k. We check three cases.

Case I: v_i, v_k ∉ S. In this case, the radii of C_i, C_j, C_k are all multiplied by the same number λ, so the angles are unchanged.

Case II: v_i, v_k ∈ S. In this case, the radii of C_i, C_k remain unchanged and the radius of C_j decreases, thus by Observation 3.8, α_f^{r_λ}(v_j) ≥ α_f^r(v_j).

Case III: exactly one of v_i, v_k is in S, say v_k. In this case the radii of C_i, C_j are multiplied by λ and the radius of C_k is unchanged. The angles of v_i v_j v_k remain unchanged if we multiply all radii by λ^{−1}, thus we could just as easily have left C_i, C_j unchanged and increased the radius of C_k. By Observation 3.8, we get that α_f^{r_λ}(v_j) ≥ α_f^r(v_j).

Summing over the faces of F(v_j) shows that δ(v_j) cannot decrease for v_j ∉ S; the symmetric argument shows that δ(v) cannot increase for v ∈ S. In order to prove Claim 3.10, we present another claim.

Figure 3.5: Left: Finding the maximum gap between two consecutive values of δ, and splitting the set of values into S and its complement. Right: Moving from r to r′ closes the gap between S and V \ S.
Proof of Claim 3.11. We first show that for each face f ∈ F(V \ S) bounded by v_i, v_j, v_k, the sum of the angles at the vertices belonging to V \ S converges to π as λ ↓ 0. We show this by a case analysis on the number of vertices of the face lying in S. The statements in cases II and III can be justified by drawing a picture or appealing to the cosine formula.
In the rest of the proof, we fix an embedding of G in the plane with (v_1, v_2, v_3) as the outer face. Let G[S] be the subgraph of G induced by S. Partition S into equivalence classes S = S_1 ∪ ⋯ ∪ S_k, where two vertices are equivalent if they are in the same connected component of G[S]. Let F̂ be the set of inner faces of G all of whose vertices lie in S, and let F̂_j be the set of faces of F̂ that appear as faces of G[S_j], so that we have the disjoint union F̂ = F̂_1 ∪ ⋯ ∪ F̂_k.
Since S is nonempty, it is enough to show that for all 1 ≤ j ≤ k,

π|F̂_j| < Σ_{v∈S_j} t(v),    (3.8)

where t(v) = 2π for internal vertices v and t(v_i) = θ_i for i = 1, 2, 3. Let m_j and f_j denote the number of edges and faces, respectively, of G[S_j]. Each G[S_j] has at least one inner face, and since it is a simple graph, every face must have degree at least 3. (The degree of a face is the number of directed edges that make up its boundary.) Because the sum of the degrees of all the faces equals twice the number of edges, we have 2m_j ≥ 3f_j. Euler's formula now gives |S_j| − m_j + f_j = 2, and hence f_j ≤ 2|S_j| − 4. Thus, the left side of (3.8) satisfies π|F̂_j| ≤ π(f_j − 1) ≤ (2|S_j| − 5)π. If S_j contains all of v_1, v_2, v_3, then the right side of (3.8) is 2π(|S_j| − 3) + θ_1 + θ_2 + θ_3 = (2|S_j| − 5)π. Otherwise, at least one of the θ_i is replaced by 2π, and so the right side of (3.8) is strictly greater than the left side. In fact, equality of the two sides can occur only if v_1, v_2, v_3 ∈ S_j and |F̂_j| = f_j − 1. We now show that this situation cannot occur.
The equality |F̂_j| = f_j − 1 means that every inner face of G[S_j] is an element of F̂_j and therefore a face of G. Moreover, since v_1, v_2, v_3 ∈ S_j, the outer face of G[S_j] is bounded by the triangle v_1 v_2 v_3, which is the same as the outer face of G. So, every face of G[S_j] is also a face of G. But this is impossible: if we choose any v ∈ V \ S, then v must lie in some face of G[S_j], which then cannot be a face of G. Therefore, it cannot be true that v_1, v_2, v_3 ∈ S_j and also |F̂_j| = f_j − 1, so we conclude that (3.8) always holds.
We now analyse the algorithm. Let λ ∈ (0, 1) be the one guaranteed by Claim 3.10, and set r′ = r̄_λ.
Proof. As depicted in Fig. 3.5, since gap_δ(S) was chosen to be the maximal gap between consecutive values of δ, we may bound the total change of Σ_v |δ(v)| when moving from δ to δ′, and we conclude the claimed bound on E′.

Write E^(t) = E_{r^(t)}. By iterating the described algorithm, we obtain from Claim 3.12 that lim_{t→∞} E^(t) = 0. By our normalization, ‖r^(t)‖_1 = 1. Thus, by compactness, there exists a subsequence {t_k} and a vector r^∞ such that r^(t_k) → r^∞ as k → ∞. From continuity of E we have that E(r^∞) = 0, meaning that (3.2) is satisfied. For r^∞ to be feasible as a vector of radii, we also have to argue that it is positive (the fact that no coordinate is ∞ follows since ‖r^∞‖_1 = 1).
Set S = {v ∈ V : r_v^∞ > 0}. Because of the normalization of r, we know that S is nonempty. Assume for contradiction that S ⊊ V. We repeat the exact same argument used in the proof of Claim 3.11, showing first by a case analysis that the sum of the angles at the vertices of V \ S in each face of F(V \ S) converges to π, and then deducing that lim inf_{t→∞} Σ_{v∈V\S} |δ_{r^(t)}(v)| > 0. This contradicts that lim_{t→∞} E^(t) = 0, so we conclude that S = V.
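Condition (3.2) can be solved explicitly in the smallest example: the triangulation K_4, whose single internal vertex must have angle sum 2π. The sketch below uses a simple multiplicative update (a heuristic in the spirit of the algorithm above, not the gap-closing scheme itself) and recovers the closed-form radius 2/√3 − 1:

```python
import math

def angle(r_j, r_i, r_k):
    """Angle at the vertex of radius r_j in the triangle of side lengths
    r_i + r_j, r_j + r_k, r_k + r_i (cosine formula)."""
    a, b, c = r_i + r_j, r_j + r_k, r_k + r_i
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

# K4: outer face v1 v2 v3 with fixed radii rho_1 = rho_2 = rho_3 = 1 and
# a single internal vertex v4 lying in three faces (v4, vi, vk).
# Each angle at v4 equals 2 * asin(1/(1 + r4)), so sigma_r(v4) = 2*pi
# forces r4 = 1/sin(pi/3) - 1 = 2/sqrt(3) - 1.
r4 = 1.0
for _ in range(100):
    sigma = 3 * angle(r4, 1.0, 1.0)   # current angle sum at v4
    # If the angle sum is below 2*pi then r4 is too large, so shrink it;
    # if above, grow it.  This is a contraction near the fixed point.
    r4 *= sigma / (2 * math.pi)

exact = 2 / math.sqrt(3) - 1
print(round(r4, 6), round(exact, 6))   # both 0.154701
assert abs(r4 - exact) < 1e-8
```

The same update applied per internal vertex is a common practical heuristic for computing packing radii of larger triangulations.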
Proof of Theorem 3.6, step 2: Drawing the circle packing described by r^∞

Given the vector of radii r^∞ satisfying (3.2), we now show that the corresponding circle packing can be drawn, uniquely up to translations and rotations. In fact, we provide a slightly more general statement, which is due to Ori Gurel-Gurevich and Ohad Feldheim [personal communication, 2018].
Let G = (V, E) be a finite planar triangulation on vertex set {v_1, …, v_n} and assume that {v_1, v_2, v_3} is the outer face. A vector of positive real numbers ℓ = {ℓ_e}_{e∈E} indexed by the edge set E is called feasible if for any face enclosed by edges e_1, e_2, e_3, the lengths ℓ_{e_1}, ℓ_{e_2}, ℓ_{e_3} can be made to form a triangle; in other words, these lengths satisfy the three triangle inequalities. Given a feasible edge length vector ℓ we may again use the cosine formula to compute, for each face f, the angle at each vertex of the triangle formed by the three corresponding edge lengths. We denote these angles, as before, by α_f^ℓ(v), and define σ_ℓ(v) = Σ_{f ∈ F(v)} α_f^ℓ(v) to be the sum of angles at a vertex v.
Theorem 3.14. Let G be a finite triangulation and ℓ a feasible vector of edge lengths. Assume that σ_ℓ(v) = 2π for every internal vertex v. Then there is a drawing of G in the plane so that each edge e is drawn as a straight line segment of length ℓ_e and no two edges cross. Furthermore, this drawing is unique up to translations and rotations.
It is easy to use the theorem above to draw the circle packing given the radii vector r^∞ satisfying (3.2). Indeed, given r^∞ we set ℓ by putting ℓ_e = r_i^∞ + r_j^∞ for any edge e = {v_i, v_j} of the graph. Condition (3.2) implies that ℓ is feasible. We now apply Theorem 3.14, obtain the guaranteed drawing, and draw a circle C_i of radius r_i^∞ around v_i for each i. Theorem 3.14 guarantees that for any edge {v_i, v_j} the distance between v_i and v_j is precisely r_i^∞ + r_j^∞, and thus C_i and C_j are tangent. Conversely, assume that v_i, v_j do not form an edge. For each vertex v let A_v be the union of the faces touching v in the drawing of Theorem 3.14. Since v_i and v_j are not adjacent, we have that A_{v_i} and A_{v_j} have disjoint interiors. Furthermore, C_i ⊂ Int(A_{v_i}), since the boundary of A_{v_i} is composed of straight line segments between consecutive neighbors of v_i, and each such segment is disjoint from the interior of C_i. By the same token C_j ⊂ Int(A_{v_j}), and we conclude that C_i and C_j are not tangent.
Step 2 of the proof of Theorem 3.6 is now concluded.
Figure 3.6: On the left, we first draw the polygon surrounding v. On the right, we then erase v and the edge emanating from it, replacing it with diagonals that triangulate the polygon while recording the lengths of the diagonals in . The latter is the input to the induction hypothesis.
Proof of Theorem 3.14. We prove this by induction on the number of vertices n. The base case n = 3 is trivial since the feasibility of guarantees that the edge lengths of the three edges of the outer face can form a triangle. Any two triangles with the same edge lengths can be rotated and translated to be identical, so the uniqueness statement holds for n = 3.
Assume now that n > 3, so that there exists an internal vertex v. Denote by v_1, …, v_m the neighbors of v ordered clockwise. We begin by placing v at the origin and drawing all the faces to which v belongs; see Fig. 3.6, left. This determines the locations of v_1, …, v_m in the plane and allows us to "complete" the triangles by drawing the straight line segments between v_t and v_{t+1} for each t (indices modulo m). Denote these edges by e_1, …, e_m.
Since σ_ℓ(v) = 2π, we learn that these m triangles have disjoint interiors and that the edges e_1, …, e_m form a closed polygon containing the origin in its interior. It is a classical fact [63] that every closed polygon can be triangulated by drawing some diagonals as straight line segments in the interior of the polygon. We fix such a choice of diagonals and use it to form a new graph G′ on n − 1 vertices and |E(G)| − 3 edges by erasing v and the m edges emanating from it and adding the new m − 3 edges corresponding to the diagonals we added. Furthermore, we generate a new edge length vector ℓ′ corresponding to G′ by assigning the new edges lengths equal to the Euclidean lengths of the drawn diagonals and leaving the other edge lengths unchanged. See Fig. 3.6, right.
It is clear that ℓ′ is feasible and that the angle sum at each internal vertex of G′ is 2π. Therefore we may apply the induction hypothesis and draw the graph G′ according to the edge lengths ℓ′. This drawing is unique up to translations and rotations by induction. Note that in this drawing of G′, the polygon corresponding to e_1, …, e_m must be the exact same polygon as before, up to translations and rotations, since it has the same edge lengths and the same angles between its edges. Since it is the same polygon, we can now erase the diagonals in this drawing and place a new vertex in the same relative location where we drew v previously, along with the straight line segments connecting it to v_1, …, v_m. Thus we have obtained the desired drawing of G. The uniqueness up to translations and rotations of this drawing follows from the uniqueness of the drawing of G′ and the fact that the location of v is uniquely determined in that drawing.
Proof of Theorem 3.6, step 3: Uniqueness of r

Proof. We have already seen in step 2 that given the radii vector r, the drawing we obtain is unique up to translations and rotations. Thus, we only need to show the uniqueness of r given ρ_1, ρ_2, ρ_3.
To that aim, suppose that r^a and r^b are two vectors satisfying (3.2). Since in both vectors the outer face corresponds to a triangle of angles θ_1, θ_2, θ_3, we may rescale so that r_i^a = r_i^b = ρ_i for i = 1, 2, 3. After this rescaling, assume by contradiction that r^a ≠ r^b and let v be the interior vertex which maximizes r_v^a / r_v^b. We can assume without loss of generality that this quantity is strictly larger than 1, as otherwise we can swap r^a and r^b.
We claim that σ_{r^a}(v) ≤ σ_{r^b}(v). This is a direct consequence of Observation 3.8. Indeed, scale all the radii in r^b by a factor of r_v^a / r_v^b to get a new vector r′ such that r_v^a = r′_v and r_u^a ≤ r′_u for all u ≠ v. The second bullet point in Observation 3.8 implies that σ_{r^a}(v) ≤ σ_{r′}(v) = σ_{r^b}(v). As well, if either r_{u_1}^a < r′_{u_1} or r_{u_2}^a < r′_{u_2} for some face containing v and the neighbors u_1, u_2, then the cosine formula yields the strict inequality σ_{r^a}(v) < σ_{r^b}(v). Since v is internal, σ_{r^a}(v) = 2π = σ_{r^b}(v), so equality must hold, and hence every neighbor u of v also attains the maximal ratio r_u^a / r_u^b. Because the graph is connected, the ratio r_u^a / r_u^b must therefore be constant for all vertices u ∈ V(G). But this contradicts the normalization r_i^a = r_i^b = ρ_i for i = 1, 2, 3, since the constant ratio would then equal both 1 and r_v^a / r_v^b > 1. We conclude that r^a = r^b.

Infinite planar maps
In this chapter we discuss countably infinite locally finite (that is, the vertex degrees are finite) connected simple graphs. In a similar fashion to the previous chapter, an infinite planar graph is a connected infinite graph such that there exists a drawing of it in the plane. We recall that a drawing is a correspondence sending vertices to points of R 2 and edges to continuous curves between the corresponding vertices such that no two edges cross. An infinite planar map is an infinite planar graph equipped with a set of cyclic permutations {σ v : v ∈ V } of the neighbors of each vertex v, such that there exists a drawing of the graph which respects these permutations, that is, the clockwise order of edges emanating from a vertex v coincides with σ v .
Unlike the finite case, one cannot define faces as the connected components of the plane with the edges removed since the drawing may have a complicated set of accumulation points. This is the reason that we have defined faces in Section 3.1 combinatorially, that is, based solely on the edge set and the cyclic permutation structure. This definition makes sense in both the finite and infinite case.
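The combinatorial definition of faces can be made concrete by a short computation. The following sketch (the function, orientation convention and example are ours, not from the text) traces the faces of a finite map from its rotation system: starting from a directed edge (u, v), the next edge of the face is taken to be (v, w), where w is the successor of u in the cyclic order at v.

```python
# Sketch: extract the combinatorial faces of a map from its rotation system.
# Convention (an assumption of this sketch): sigma[v] lists the neighbours of v
# in cyclic order, and the face traversal moves from the dart (u, v) to (v, w),
# where w follows u in sigma[v].

def faces(sigma):
    darts = {(u, v) for u in sigma for v in sigma[u]}
    seen, result = set(), []
    for start in sorted(darts):
        if start in seen:
            continue
        face, d = [], start
        while d not in seen:          # trace one face until we return to start
            seen.add(d)
            face.append(d)
            u, v = d
            nbrs = sigma[v]
            w = nbrs[(nbrs.index(u) + 1) % len(nbrs)]
            d = (v, w)
        result.append(face)
    return result

# A triangle: each vertex sees the other two in cyclic order.
sigma = {0: [1, 2], 1: [2, 0], 2: [0, 1]}
fs = faces(sigma)
assert len(fs) == 2 and all(len(f) == 3 for f in fs)
```

For the triangle the two traced faces each have three edges, consistent with Euler's formula V − E + F = 3 − 3 + 2 = 2. The same procedure makes sense for infinite maps, one face at a time.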
A (finite or infinite) planar map is a triangulation if each of its faces has exactly 3 edges. Given a drawing of a triangulation, the Jordan curve theorem implies that the edges of each face bound a connected component of the plane minus the edges. We will often refer to the faces as these connected components. A triangulation is called a plane triangulation if there exists a drawing of it such that every point of the plane is contained in either a face or an edge, and any compact subset of the plane intersects at most finitely many edges and vertices. The term disk triangulation is also used in the literature and means that there exists a drawing in the open unit disk in R^2 such that every point of the disk is contained in either a face or an edge, and any compact subset of the disk intersects at most finitely many edges and vertices. Since the plane and the open disk are homeomorphic, these two definitions are equivalent. For example, take the product of the complete graph K_3 on 3 vertices with the infinite ray N and add a diagonal edge in each face that has 4 edges; the result is a plane triangulation. However, the product of K_3 with the bi-infinite path Z together with the same diagonals is a triangulation but not a plane triangulation, since it cannot be drawn in the plane without an accumulation point.
It turns out that there is a combinatorial criterion for a triangulation to be a plane/disk triangulation. We say that an infinite graph is one-ended if the removal of any finite set of its vertices leaves exactly one infinite connected component.

Lemma 4.1. An infinite triangulation is a plane triangulation if and only if it is one-ended.

Proof. Suppose G = (V, E) is a plane triangulation and consider a drawing of the graph with no accumulation points in the plane such that every point of the plane belongs to either an edge or a face. Let A ⊆ V be a finite set of vertices and take B ⊂ R^2 to be a ball around the origin which contains every vertex of A, every edge touching a vertex of A and every face incident to such an edge. Let u ≠ v be two vertices drawn outside of B and take a continuous curve γ between them in R^2 \ B. By definition of B, this curve only touches faces and edges that are not incident to the vertices of A, and hence one can trace a discrete path from u to v in the graph that "follows" γ and avoids A. Since B intersects only finitely many edges and vertices, we learn that G \ A has a unique infinite component.
Conversely, assume now that G is one-ended and consider a drawing of G in the plane. By the stereographic projection we project the drawing to the unit sphere S^2 in R^3. Denote by I the complement in S^2 of the union of all faces and edges. Since G is an infinite triangulation this union is an open set, hence I is a closed set and its boundary ∂I is precisely the set of accumulation points of the drawing. Since I is closed, each connected component of I must be closed as well and hence contain at least one accumulation point. Since G is one-ended, I cannot have more than one connected component, since otherwise we would be able to separate the two components by a finite set of vertices and obtain two infinite connected components. Now choose a point p ∈ I and rotate the sphere so that p is the north pole. Project the rotated sphere back to the plane and consider the drawing in the plane. In this drawing the union of all faces and edges must be a simply connected set. By the Riemann mapping theorem this set is homeomorphic to the whole plane, and we deduce that the triangulation is a plane triangulation.
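The two examples above, K_3 × N versus K_3 × Z, can be probed on finite windows. In this sketch (ours; the diagonal edges of the text are omitted since they do not affect the number of ends) we delete one triangular level and count the large components that remain: two for the bi-infinite prism, one for the one-sided prism.

```python
from collections import deque

# Finite-window sketch of one-endedness: in K3 x Z, deleting the three vertices
# at level 0 leaves two large components (two ends), while in K3 x N it leaves
# a single large component.

def components(vertices, adj):
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, queue = [], deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj(u):
                if w in vertices and w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def k3_prism(levels):
    V = {(i, n) for i in range(3) for n in levels}
    def adj(v):
        i, n = v
        return [((i + 1) % 3, n), ((i + 2) % 3, n), (i, n - 1), (i, n + 1)]
    return V, adj

# window [-50, 50] standing in for K3 x Z; delete level 0:
V, adj = k3_prism(range(-50, 51))
big = [c for c in components(V - {(i, 0) for i in range(3)}, adj) if len(c) > 10]
assert len(big) == 2
# window [0, 100] standing in for K3 x N; delete level 0:
V, adj = k3_prism(range(0, 101))
big = [c for c in components(V - {(i, 0) for i in range(3)}, adj) if len(c) > 10]
assert len(big) == 1
```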

The Ring Lemma and infinite circle packings
The circle packing theorem (Theorem 3.5) is stated for finite planar maps. However, it is not hard to argue that any infinite planar map also has a circle packing. To this aim we will prove what is known as Rodin and Sullivan's Ring Lemma [69]; we will use it many times throughout these notes. Given circles C_0, C_1, ..., C_M with disjoint interiors, we say that C_1, ..., C_M completely surround C_0 if they are all tangent to C_0 and C_i is tangent to C_{i+1} for i = 1, ..., M (where C_{M+1} is set to be C_1).

Lemma 4.2 (Ring Lemma [69]). There exists a constant A = A(M) < ∞ such that if C_1, ..., C_M completely surround C_0, then rad(C_i) ≥ rad(C_0)/A for all i = 1, ..., M.

Proof. We may scale the picture so that r_0 = 1. Assume that the radius of C_2 is small and consider the circles C_1 and C_3 to its left and right. It cannot be that both C_1 and C_3 have large radii compared to C_2, since in this case they would intersect; see Fig. 4.1. Hence, one of them has to be small as well. Assume without loss of generality that it is C_3. By similar reasoning, one of C_1 and C_4 has to be small. Continuing this argument, we get a path of circles of small radii; thus, for the circles C_1, ..., C_M to completely surround C_0, the number M must be large.

For a circle packing P and a vertex v, denote by C_v the circle corresponding to v, by cent(v) the center of that circle, and by rad(v) its radius. We write G(P) for the tangency graph of the packing P, that is, the graph whose vertices are the circles of P, with two circles forming an edge when they are tangent.

Claim 4.3. Any infinite planar map G has a circle packing P such that G(P) is isomorphic to G.

Proof. If G is not a triangulation, then it is always possible to add new vertices inside each face, together with edges touching them, so that the resulting graph is a planar triangulation (in an infinite face we have to put infinitely many vertices). After circle packing this new graph, we can remove all the circles corresponding to the added vertices and remain with a circle packing of G. Thus, we may assume without loss of generality that G is a triangulation.
Fix a vertex x, and let G n be the graph distance ball of radius n around x. Apply the circle packing theorem to G n to obtain a packing P n , and scale and translate it so that rad(x) = 1 and cent(x) is the origin.
Consider a neighbor y of x. By the Ring Lemma (Lemma 4.2), there exists a constant A = A(x, y) > 0 such that A −1 ≤ rad(y) ≤ A. By compactness there exists a subsequence of packings P n k for which rad n k (y) and cent n k (y) both converge. By taking further subsequences for the rest of x's neighbors, and then for the rest of the graph's vertices, it follows by a diagonalization argument that there exists a subsequence such that the radii and centers of all vertices converge. The limiting packing P ∞ satisfies that G(P ∞ ) is isomorphic to G.
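The quantitative content of the Ring Lemma can be illustrated in the special case of equal radii (a computation of ours, not the lemma's proof): if r_0 = 1 and all surrounding circles have radius r, two adjacent tangent circles subtend an angle of exactly 2 arcsin(r/(1+r)) at the center of C_0, so completely surrounding C_0 requires at least π/arcsin(r/(1+r)) circles. Small radii force M to be large, which is the contrapositive of the Ring Lemma.

```python
import math

# For equal radii r around a unit circle, adjacent tangent circles have centres
# at distance 1 + r from the origin and 2r from each other, so the angle
# between their centres is exactly 2*asin(r / (1 + r)).  A full surrounding
# ring needs these angles to sum to 2*pi.

def min_surrounding(r):
    angle = 2 * math.asin(r / (1 + r))
    return math.ceil(2 * math.pi / angle - 1e-9)  # tolerance for float round-off

assert min_surrounding(1.0) == 6    # hexagonal packing: six unit circles
assert min_surrounding(0.1) >= 30   # small circles force a large ring
```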

Statement of the He-Schramm theorem
Given a circle packing P of a graph G, we obtain a drawing of G as follows: plot each vertex as the center of its corresponding circle in P and connect adjacent vertices by straight lines. It is immediate that this is a drawing of G. When G is a triangulation and P is a circle packing of G, we define the carrier of P , denoted Carrier(P ), to be the open subset of the plane obtained by taking the union of all faces (seen as open subsets of the plane) and all edges. When P is a circle packing of an infinite one-ended triangulation, the argument in Lemma 4.1 shows that Carrier(P ) is simply connected.
We say that G is circle packed in R 2 when Carrier(P ) = R 2 . Denote by U the disk {z ∈ R 2 : |z| < 1}; we say that G is circle packed in U when Carrier(P ) = U. See Fig. 4.2. Let G be a plane triangulation. Then G can be drawn in the plane R 2 or alternatively in the disk U (since they are homeomorphic), but can it be circle packed both in R 2 and in U? A celebrated theorem of He and Schramm [38] states that this cannot be done: each plane triangulation can be circle packed in either the plane or the disk, but not both. In fact, the combinatorial property of G that determines on which side of the dichotomy we are is the recurrence or transience of the simple random walk on G (assuming also that G has bounded degrees, that is, sup x∈V (G) deg(x) < ∞). This is the content of the He-Schramm theorem, which we are now ready to state. 1. If G is recurrent, then there exists a circle packing P of G such that Carrier(P ) = R 2 .
2. If G is transient, then there exists a circle packing P of G such that Carrier(P ) = U.
3. If P is a circle packing of G with Carrier(P ) = R 2 , then G is recurrent.
4. If P is a circle packing of G with Carrier(P ) = U, then G is transient. Remark 4.6. In fact, it is proved in [38] that the corollary above holds without the assumption of bounded degree. Furthermore, in [38] Theorem 4.4 (1) and Theorem 4.4 (4) are proved without the bounded degrees assumption, but the other two statements require this assumption.
The following example demonstrates why the bounded degree condition is necessary for Theorem 4.4 (2) and Theorem 4.4 (3).
Example 4.7. Let P be a triangular lattice circle packing (as in Fig. 4.3), and let C 0 , C 1 , C 2 , . . . be an infinite horizontal path of circles in P going (say) to the right. In the upper face shared by C n and C n+1 , draw 2 n circles which form a vertical path and each of them tangent both to C n and C n+1 ; the last circle of these is also tangent to the upper neighbor of C n and C n+1 . See Fig. 4.3.
The resulting graph is a plane triangulation and the carrier of the packing is R 2 . However, it is an easy exercise to verify that the tangency graph of this circle packing is transient. In the rest of this chapter we prove Theorem 4.4. We begin by proving parts 3 and 4, in which a circle packing is given and one uses its geometry to deduce estimates about the effective resistance. Afterwards we prove parts 1 and 2, in which we use electrical estimates to deduce facts about the geometry of the circle packing.
Proof. We begin with part (i). For every v ∈ V_R it holds that rad(v) ≤ R, since C_{v_0} is centered at the origin. By the Ring Lemma (Lemma 4.2), there exists A = A(∆) such that rad(u) ≤ AR for every u ∼ v, and therefore |cent(u)| ≤ (A + 2)R. Hence (i) holds with C = A + 2.
To prove part (ii), note that by the triangle inequality, for an edge {x, y} with both endpoints in V_{CR} \ V_R both circles C_x and C_y are contained in the ball B(0, 2CR), and it is straightforward to check that the same bound holds also when one of the edge's endpoints is in V_R or V \ V_{CR}. Thus, using the Ring Lemma's (Lemma 4.2) constant A = A(∆) from part (i), the energy of any unit flow from V_R to V \ V_{CR} is bounded below in terms of Σ_x area(C_x), where area(C_x) is the area that C_x encloses (that is, π rad(x)^2). We have that Σ_x area(C_x) ≤ area(B(0, 2CR)) = 4πC^2R^2, hence if C = A + 2, then the result follows for c = (4∆C^2)^{-1}.
Proof of Theorem 4.4 (3). Consider the unit current flow I from v 0 to ∞ and fix any R ≥ 1.
Restricting this flow to the edges which have at least one endpoint in the annulus V_{CR} \ V_R gives a unit flow from V_R to V \ V_{CR}, by part (i) of Lemma 4.8. Hence, by part (ii) of that lemma and by Thomson's principle (Theorem 2.28), the energy contributed to E(I) from these edges is at least c. In the same manner, for each k ≥ 1 the edges which have at least one endpoint in the annulus V_{C^{2k+1}R} \ V_{C^{2k}R} contribute at least c to E(I). Part (i) of Lemma 4.8 implies that all these edge sets are disjoint, hence E(I) = ∞ and we learn that G is recurrent (Corollary 2.39).

Proof of Theorem 4.4 (4)
We will use the given circle packing of G to create a random path to infinity with finite energy. This gives transience by Claim 2.46. This proof strategy is similar to that of Theorem 2.47.
Proof of Theorem 4.4 (4). Let v 0 be a fixed vertex of the graph, and apply a Möbius transformation to make the circle of P corresponding to v 0 be centered at the origin 0. We now use Claim 2.46 to construct a flow θ from v 0 to ∞ by choosing a uniform random point p on ∂U, taking the straight line from 0 to p and considering the set of all circles in the packing P that intersect this line in the order that they are visited; this set forms an infinite simple path in the graph which starts at v 0 .
To bound the energy of the flow, we claim that there exists some constant C (which may depend on the graph G and the packing P) such that the probability that the random path uses the vertex v is bounded above by C rad(v). Indeed, since there are only finitely many vertices with centers at distance at most 1/2 from 0, we may assume that the center of v is at distance at least 1/2 from 0. In this case, in order for v to be included in the random path the circle of v must intersect the line between 0 and p. By the Ring Lemma (Lemma 4.2) the neighbors of v have circles of radii comparable to rad(v), and so the probability of the line touching them is at most C rad(v). Since the vertex degree is bounded by ∆, we find that E(θ) ≤ ∆ Σ_v (C rad(v))^2 ≤ (∆C^2/π) Σ_v area(C_v) ≤ ∆C^2 < ∞, since the circles have disjoint interiors and are all contained in U, and we deduce by Corollary 2.39 that G is transient.
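The key estimate in this proof, that the random path uses a vertex v with probability at most C rad(v), can be checked numerically in the simplest instance (our sketch, not from the text): a uniform ray from the origin hits a disjoint circle of radius r centered at distance d > r with probability arcsin(r/d)/π, which is at most r/(2d) since arcsin(x) ≤ πx/2, hence at most r once d ≥ 1/2.

```python
import math
import random

# Monte Carlo check: probability that a uniform ray from 0 hits a circle of
# radius r centred at (d, 0) equals asin(r/d)/pi.

def hit_probability(d, r):
    return math.asin(r / d) / math.pi

def monte_carlo(d, r, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        phi = rng.uniform(0, 2 * math.pi)
        # the ray {t*(cos phi, sin phi) : t >= 0} meets the circle iff its
        # distance d*|sin phi| to the centre is at most r and the centre lies
        # in the forward direction (cos phi > 0)
        if d * abs(math.sin(phi)) <= r and math.cos(phi) > 0:
            hits += 1
    return hits / n

d, r = 1.0, 0.2
exact = hit_probability(d, r)
est = monte_carlo(d, r)
assert abs(est - exact) < 0.005   # Monte Carlo agrees with the formula
assert exact <= r                 # an instance of the C*rad(v) bound
```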

Proof of Theorem 4.4 (1)
We apply Claim 4.3 to obtain a circle packing P of G. We claim that Carrier(P) = R^2. Fix some vertex v and rescale and translate so that P(v) is the unit circle ∂U. Assume by contradiction that Carrier(P) ≠ R^2 and let p ∈ R^2 \ Carrier(P) be a point not in the carrier. Rotate the packing so that p = R for some real number R > 1. Let U ∈ [−1, 1] and consider the circle C_U = {z : |z − p| = R − U}. We traverse C_U from the point U counterclockwise and consider all the circles of P which intersect C_U, in the order that they are visited. The circles of P we obtain this way form a simple path in the graph G starting from v. The argument in Lemma 4.1 shows that Carrier(P) is simply connected, and since p ∉ Carrier(P) it cannot be that C_U is contained in Carrier(P). Thus, as we traverse C_U counterclockwise we must hit the boundary of Carrier(P). We conclude that the path in G we obtained in this manner is an infinite simple path starting at v.
We now let U be a uniform random variable in [−1, 1] and let µ denote the probability measure on random infinite paths starting at v obtained as described above. Let θ be the flow induced by µ as in Claim 2.46. We wish to bound the energy E(θ). Consider a vertex w ∈ G and its circle P(w), and let B be the Euclidean ball of radius R + 1 around p. If P(w) does not intersect B, it cannot be included in the random path by our construction. If it does intersect this ball, then the probability that the random path intersects it is bounded above by its radius. Thus E(θ) ≤ ∆ Σ_{w : P(w) ∩ B ≠ ∅} rad(w)^2, where ∆ is the maximal degree of G. We learn that E(θ) is bounded above by a constant multiple of the total area of the circles of P that intersect B. Since p ∉ Carrier(P), by the Ring Lemma (Lemma 4.2), any circle of P that intersects B cannot have radius more than AR for some large constant A (since otherwise all the circles surrounding this vertex would have radius more than R + 1, contradicting the fact that p ∉ Carrier(P)). We learn that all the circles counted in the sum above are contained in the Euclidean ball of radius (A + 1)R + 1 around p. Since these circles have disjoint interiors, the sum of their areas is bounded above by the area of that Euclidean ball. We conclude that E(θ) < ∞, hence G is transient by Corollary 2.39 and we have reached a contradiction.

Proof of Theorem 4.4 (2)
We will use the following simple corollary of the circle packing theorem, Theorem 3.5.
Claim 4.9. Let G be a finite simple planar map such that all faces have three edges except for one face (which we can think of as the outer face). Then, there is a circle packing P of G such that all circles of the outer face are internally tangent to ∂U and all other circles of P are contained in U.
Proof. Denote by v_1, ..., v_m the vertices of the outer face in clockwise order. Add a new vertex v* to the graph and connect it to v_1, ..., v_m according to their order. We obtain a triangulation G*. Apply Theorem 3.5 to obtain a circle packing P = {C_v}_{v ∈ V(G*)}. By translating and dilating we may assume that C_{v*} is centered at the origin and has radius 1. Apply the map z → 1/z to this packing. Since this map preserves circles, the image of the circles {C_v}_{v ∈ V(G*) \ {v*}} under this map is precisely the desired circle packing.
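The map z → 1/z used in the proof indeed sends circles to circles, which can be verified explicitly (the formula below is classical; the numerical check is ours): a circle of center a and radius r with |a| > r is mapped to the circle of center conj(a)/(|a|² − r²) and radius r/(|a|² − r²).

```python
import cmath
import math

# The image of the circle {z : |z - a| = r} (with 0 outside it, |a| > r)
# under z -> 1/z is again a circle; this is why inverting the packing of G*
# yields a circle packing.

def invert_circle(a, r):
    d = abs(a) ** 2 - r ** 2        # positive since |a| > r
    return a.conjugate() / d, r / abs(d)

a, r = 2 + 1j, 0.5
c, s = invert_circle(a, r)
for k in range(12):
    z = a + r * cmath.exp(2j * math.pi * k / 12)   # sample points on the circle
    assert abs(abs(1 / z - c) - s) < 1e-9          # images lie on the new circle
```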
Furthermore, we will require an auxiliary general estimate. Given a circle packing P and a set of vertices A, we write diam P (A) for the Euclidean diameter of the union of all circles in P corresponding to the vertices of A.
Lemma 4.10. Let P be a circle packing in U, with maximum degree ∆, of a finite map all of whose faces are triangles except the outer face, such that the circle of the vertex v_0 is centered at the origin and has radius r_0. Assume that r_0 ∈ (r_min, r_max) for some constants 0 < r_min < r_max < 1. Then there exists a constant c = c(r_min, r_max, ∆) > 0 such that for any connected set A of vertices,

R_eff(v_0 ↔ A) ≥ c log(1/diam_P(A)).    (4.1)
If in addition all circles of the outer face are tangent to ∂U and A contains a vertex of the outer face, then

R_eff(v_0 ↔ A) ≤ c^{-1} log(1/diam_P(A)).    (4.2)

Proof. Write ε = diam_P(A) and let z(A) denote the union of all circles corresponding to the vertices of A. We begin with the proof of (4.1), which goes along similar lines to the proof of Lemma 4.8. We note that even though for some values of r the relevant disk {z : |z − z_0| ≤ r} is not contained in U (unlike in the proof of Lemma 4.8, where the carrier is all of R^2), this only works to our benefit. The proof of (4.1) now proceeds similarly to the proof of Theorem 4.4 (3). By the Ring Lemma (Lemma 4.2), the Euclidean distance between the circle corresponding to v_0 and A is at least some constant (which depends on ∆, r_min, r_max), so that v_0 ∈ V_{C^K ε} for some K = Ω(log(1/ε)). For each k = 0, 2, 4, ..., K the sets of edges which have at least one endpoint in the annulus V_{C^{k+1}ε} \ V_{C^k ε} are disjoint by (i). By (ii), each of these sets of edges contributes at least C^{-1} to the energy of the unit current flow from A to v_0, concluding the proof of (4.1) using Thomson's principle (Theorem 2.28).
For the proof of (4.2) we construct a unit flow from v_0 to A that has energy O(log(1/ε)). The construction is in the same spirit as the proof of Theorem 4.4 (4), but there are some technical difficulties. Since A contains a vertex whose circle is tangent to ∂U, we may choose z_0 ∈ ∂U that belongs to a circle of A. By rotating the packing we may assume that z_0 = e^{iε/4}. We now treat two cases separately. In the first case we assume that there exists z_1 in z(A) with arg(z_1) ∈ [0, ε/2] and |z_1| ≤ 1 − ε/2 such that the path in z(A) from z_0 to z_1 remains in the sector arg(z) ∈ [0, ε/2]. Consider points x_0, x_1, y_0, y_1 as in Fig. 4.4, left, where x_0, x_1 are the leftmost and rightmost points on the circle of v_0. Let C_0 and C_1 be the upper half plane semi-circles in which x_0, y_0 and x_1, y_1 are antipodal points, respectively. The choice of y_0, y_1 is made so that the path between z_0 and z_1 in z(A) must cross the region bounded by C_0, C_1 and the intervals [x_0, x_1], [y_1, y_0], by our assumption on z_1, as long as ε is small enough. See Fig. 4.4, left.
For each t ∈ [0, 1] write C t for the upper half plane semi-circle in which ty 1 + (1 − t)y 0 and tx 1 + (1 − t)x 0 are antipodal points, so that C t continuously interpolates between C 0 and C 1 . See Fig. 4.4, left. Choose t ∈ [0, 1] uniformly at random and consider the random path γ which traces C t from left to right. This random path starts at the circle of v 0 and must hit the path between z 0 and z 1 by our previous discussion. Hence, the circles of P that intersect γ must contain a path in the graph from v 0 to A. By Claim 2.46 we obtain a flow I from v 0 to A whose energy E(I) we now bound.
For an angle θ ∈ [0, π] we denote by w_θ(t) the point at angle θ on the semi-circle C_t. It is an exercise to see that the set of points {w_θ(t) : t ∈ [0, 1]} forms a straight line interval ℓ_θ. Furthermore, when t is chosen uniformly in [0, 1], the intersection of C_t and ℓ_θ is a uniformly chosen point on ℓ_θ. Fix some constant A > 1 and set θ_0 = 0 and θ_i = A^{i−1}ε for i = 1, ..., K, where K = Θ(log(1/ε)) is such that θ_K = π. We will obtain the bound E(I) = O(log(1/ε)) by bounding from above by a constant the contribution to E(I) coming from edges which intersect the quadrilateral Q_i of R^2 bounded by ℓ_{θ_i}, ℓ_{θ_{i+1}}, C_0, C_1; see Fig. 4.4, right. The random path γ restricted to Q_i can be sampled by choosing a uniform random point on ℓ_{θ_i}, setting t ∈ [0, 1] to be the unique number such that C_t intersects ℓ_{θ_i} at the chosen point, and tracing the part of C_t from ℓ_{θ_i} to ℓ_{θ_{i+1}}. The lengths of the four curves bounding Q_i are all of order A^i ε, and so we deduce that if v corresponds to a circle of radius O(A^i ε) which intersects Q_i, then the probability that it is visited by γ is O(rad(v)/A^i ε). The sum of rad(v)^2 over such v's is at most a constant multiple of the area of Q_i (some of these circles need not be contained in Q_i), which has order A^{2i}ε^2. Since the degrees are bounded, we deduce that the contribution to the energy from edges touching such v's is O(1). Lastly, if v corresponds to a larger circle, then we bound its probability of being visited by γ by 1 and note that there can only be O(1) many such v's whose circles intersect Q_i. Thus the contribution from these is another O(1). Since there are O(log(1/ε)) such i, we learn that E(I) = O(log(1/ε)), finishing our proof in this case using Thomson's principle (Theorem 2.28).
In the second case, we assume that there exists z_1 ∈ z(A) such that arg(z_1) ∈ [0, ε/2] and z_0

Figure 4.4: Left: For any t ∈ [0, 1] the semi-circle C_t must intersect the path in A between z_0 and z_1. Right: The quadrilateral Q_i is bounded between ℓ_{θ_i}, ℓ_{θ_{i+1}}, C_0 and C_1.
It is clear that since diam_P(A) = ε either the first or the second case must occur. Denote z_0′ = |z_1| e^{iε/4} and let x_0, x_1 be antipodal points on the circle of v_0 such that the straight line between x_0 and x_1 is parallel to the straight line between z_0′ and z_1. Consider the trapezoid on the vertices z_0′, z_1, x_0, x_1. We choose a uniform random point t ∈ [0, 1] and draw a straight line from t x_0 + (1 − t)x_1 to t z_1 + (1 − t)z_0′. We then add to it a straight line from t z_1 + (1 − t)z_0′ to the point w ∈ ∂U with arg(w) = arg(t z_1 + (1 − t)z_0′). For any t ∈ [0, 1], this path γ starts from the circle of v_0 and must hit the path between z_0 and z_1 in z(A). Thus, the set of all circles which intersect γ must contain a path in the graph that starts at v_0 and ends at A; this random choice of γ gives us as usual a unit flow from v_0 to A. See Fig. 4.5. By repeating the same argument as in the previous case (that is, splitting the trapezoid into O(log(1/ε)) many trapezoids of constant aspect ratio), we see that the contribution to the energy of the flow induced by γ of the edges in the trapezoid is O(log(1/ε)). Furthermore, the same argument gives that the edges in the quadrilateral formed by the vertices z_0, z_0′, z_1 and e^{i arg(z_1)} contribute at most a constant to the energy, concluding our proof by Thomson's principle in this case as well.
Proof of Theorem 4.4 (2). As usual we denote by d_G(u, v) the graph distance between the vertices u and v, and for a fixed vertex v_0 and j ≥ 1 we let V_j be the union of the ball {v : d_G(v_0, v) ≤ j} with the finite connected components of its complement, and E_j the set of edges with both endpoints in V_j. Observe that since G is one-ended, the finite map (V_j, E_j) is a triangulation except for the outer face, which we denote by ∂V_j. We apply Claim 4.9 to pack (V_j, E_j) inside the unit disk U such that the circles of ∂V_j are tangent to ∂U. By applying a Möbius transformation from U onto U, we may assume that the circle corresponding to v_0 is centered at the origin 0. We denote this packing by P_j and let r_0^j be the radius of the circle of v_0 in P_j. Since G is transient, it follows by Corollary 2.39 that there exists some c = c(∆) > 0 such that r_0^j ≥ c for all j. Indeed, if r_0^j ≤ ε, we learn by Lemma 4.8 and the proof of (4.1) that the effective resistance from v_0 to ∂V_j is large, contradicting transience. As we did in Claim 4.3, we now take a subsequence in which the centers and radii of all vertices converge. Denote the resulting limiting packing by P_∞. This packing has all circles inside U and we therefore deduce that Carrier(P_∞) ⊆ U. It is a priori possible that Carrier(P_∞) is some strict subset of U, i.e., that all the circles stabilize inside some strict subset of U. We now argue, however, that this is not possible.
Let Z be the set of accumulation points of the circles of P_∞; it suffices to show that Z ⊂ ∂U (since any simply connected open set D ⊂ U for which ∂D ⊂ ∂U must equal U). Since Z is a compact set, let z be a point of Z minimizing |z|; it suffices to show that z ∈ ∂U. Fix ε > 0 and let U_ε(z) be the set of vertices whose circles in P_∞ intersect the ball B(z, ε). The set U_ε(z) may not be connected (graph-wise), yet by our choice of z it is clear that U_ε(z) has an infinite connected component. Indeed, one can draw a straight line from the origin to z without intersecting Z and consider the set of all circles intersecting this line; from some point onwards the vertices corresponding to these circles will reside in U_ε(z). Therefore, let W_ε(z) be an infinite connected component of the graph spanned by U_ε(z). Let J = J(z, ε) be the first integer such that V_J ∩ W_ε(z) ≠ ∅, and fix a vertex v_J ∈ V_J ∩ W_ε(z). Since the V_j's are increasing sets and W_ε(z) is an infinite connected set, we have that ∂V_j ∩ W_ε(z) ≠ ∅ for all j ≥ J. Consider now, for j ≥ J, the connected component A of the graph spanned by the vertices V_j ∩ W_ε(z) that contains v_J. Denote by P_j^∞ the finite circle packing obtained from P_∞ by taking only the circles of V_j.
Since A ⊂ W_ε(z), it follows that diam_{P_j^∞}(A) ≤ 4ε. By Lemma 4.10, Eq. (4.1), applied to the set A in the packing P_j^∞, we deduce that R_eff(v_0 ↔ A; V_j) ≥ c log(1/ε). By Rayleigh's monotonicity (Corollary 2.29) we have that R_eff(v_0 ↔ A; (V_j, E_j)) ≥ c log(1/ε). Since A is a connected component of V_j ∩ W_ε(z) and since W_ε(z) is an infinite connected set of vertices in G, it follows that A must contain a vertex of ∂V_j. Thus, we may apply Lemma 4.10, Eq. (4.2), to the set A in the packing P_j, and combining the two resistance bounds we get that there exists some c = c(G) > 0 such that

diam_{P_j}(A) ≤ ε^c.    (4.3)

Since A contains a vertex whose circle in P_j touches ∂U, we learn by (4.3) that the distance of the circle of v_J in P_j from ∂U is at most ε^c for all j ≥ J.
Since the circle corresponding to v_J in P_∞ is the limit of its circles in the P_j's, we deduce that the distance of cent_{P_∞}(v_J) from ∂U is at most ε^c. Hence the distance of z from ∂U is at most ε + ε^c. Since ε was arbitrary, we learn that z ∈ ∂U, as required.

Exercises
1. Let G be a finite simple planar map such that all of its faces have 3 edges except for the outer face which is a simple cycle. Show that there exists a circle packing of G such that all the circles are inside the unit disc {z : |z| ≤ 1} and all the circles corresponding to the vertices of the outer face are tangent to the unit circle {z : |z| = 1}.
2. Let G be a triangulation of the plane with maximal degree at most 6. Prove that the simple random walk on G is recurrent.

Planar local graph limits

Local convergence of graphs and maps
In order to study large random graphs it is mathematically appealing and natural to introduce an infinite limiting object and study its properties. In their seminal paper, Benjamini and Schramm [12] introduced the notion of locally convergent graph sequences, which we now describe.
We will consider random variables taking values in the space G• of rooted locally finite connected graphs viewed up to root preserving graph isomorphisms. That is, G• is the space of pairs (G, ρ) where G is a graph (finite or infinite) and ρ ∈ V(G) is a vertex of G, where two elements (G_1, ρ_1), (G_2, ρ_2) are considered equivalent if there is a root preserving graph isomorphism between them (that is, a bijection ϕ : V(G_1) → V(G_2) with ϕ(ρ_1) = ρ_2 such that {u, v} ∈ E(G_1) if and only if {ϕ(u), ϕ(v)} ∈ E(G_2)). In a similar fashion we define M• to be the set of equivalence classes of rooted maps; in this case we require the graph isomorphism in addition to preserve the cyclic permutations of the neighbors of each vertex, that is, to be a map isomorphism. Let us describe the topology on G• and M•. For convenience we discuss G•, but every statement in the following holds for M• as well. Given an element (G, ρ) of G•, the finite graph B_G(ρ, R) is the subgraph of (G, ρ), rooted at ρ, spanned by the vertices of distance at most R from ρ. We endow G• with the local metric d_loc((G_1, ρ_1), (G_2, ρ_2)) = 2^{−R}, where R is the largest integer for which B_{G_1}(ρ_1, R) and B_{G_2}(ρ_2, R) are isomorphic as rooted graphs. This is a separable topological space (the finite graphs form a countable base for the topology) and it is easily seen to be complete, i.e., it is a Polish space. The distances are bounded by 1 but the space is not compact. Indeed, the sequence G_n of stars with n leaves emanating from the root ρ has no converging subsequence.
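The local metric can be computed directly for small examples. The following sketch (our code; the brute-force isomorphism test is only practical for tiny balls) computes d_loc for two finite rooted graphs, capping the search at a radius R_max so that identical graphs return 2^{−R_max} rather than 0.

```python
from collections import deque
from itertools import permutations

def ball(adj, root, R):
    """Vertices within graph distance R of root, with the induced edges."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == R:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    edges = {frozenset((u, v)) for u in dist for v in adj[u] if v in dist}
    return set(dist), edges, root

def rooted_isomorphic(b1, b2):
    """Brute-force root-preserving isomorphism test (fine for tiny balls)."""
    (v1, e1, r1), (v2, e2, r2) = b1, b2
    if len(v1) != len(v2) or len(e1) != len(e2):
        return False
    o1, o2 = sorted(v1 - {r1}), list(v2 - {r2})
    for perm in permutations(o2):
        phi = dict(zip(o1, perm))
        phi[r1] = r2
        if {frozenset((phi[u], phi[v])) for u, v in map(tuple, e1)} == e2:
            return True
    return False

def d_loc(adj1, r1, adj2, r2, R_max=10):
    """2**(-R) for the largest R <= R_max with isomorphic R-balls."""
    R = 0
    while R < R_max and rooted_isomorphic(
            ball(adj1, r1, R + 1), ball(adj2, r2, R + 1)):
        R += 1
    return 2.0 ** (-R)

# a long path rooted at its middle vertex versus a 6-cycle:
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 10] for i in range(11)}
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert d_loc(path, 5, cycle, 0) == 0.25   # the balls agree up to radius R = 2
```

At radius 2 both balls are paths of five vertices rooted at the center; at radius 3 the cycle's ball closes up, so the distance is 2^{−2}.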
Since G • is a Polish space, we can discuss convergence in distribution of a sequence of random variables {X n } ∞ n=1 taking values in G • . We say that X n converges in distribution to a random variable X, and denote it by X n d −→ X, if for every bounded continuous function f : G • → R we have that E(f (X n )) → E(f (X)). We will be focused here on the particular situation in which X n is a finite rooted random graph (G n , ρ n ) such that given G n , the root ρ n is uniformly distributed among the vertices of G n . It is a very common setting and justifies the following definition.
Definition 5.1. Let {G_n} be a sequence of (possibly random) finite graphs. We say that G_n converges locally to a (possibly infinite) random rooted graph (U, ρ) ∈ G•, and denote it by G_n →_loc (U, ρ), if for every integer r ≥ 1 the ball B_{G_n}(ρ_n, r) converges in distribution to B_U(ρ, r), where ρ_n is a uniformly chosen vertex of G_n.
It is straightforward to see that this definition is equivalent to saying that the random variables (G n , ρ n ) converge in distribution to (U, ρ).

Examples
• The sequence {G n } of paths of length n converges locally to the graph (Z, 0) (note that the root vertex can be chosen to be any vertex of Z since (Z, i) and (Z, j) are equivalent for all i, j ∈ Z).
• The sequence {G n } of the n × n square grid converges locally to the graph (Z 2 , 0) (again the root can be chosen to be any vertex of Z 2 ).
• Let λ > 0 be fixed and let {G(n, λ/n)} be the sequence of random graphs obtained from the complete graph K_n by retaining each edge with probability λ/n and erasing it otherwise, independently for all edges. This is known as the Erdős–Rényi random graph. One can verify that this sequence converges locally to a branching process with progeny distribution Poisson(λ).
• If G_n is the binary tree of height n, then its local limit is not the infinite binary tree with any distribution on the root vertex. Instead, it is the so-called canopy tree depicted in Fig. 5.1, in which the root is at distance k ≥ 0 from the leaves with probability 2^{−k−1}. Note that the distance of the root from the leaves determines the isomorphism class of the rooted graph. It is easy to see that the canopy tree is not the infinite binary tree (for example, it has leaves); in fact, it is recurrent.
• Consider G n to be a path of length n, glued via one of its leaves into a √ n × √ n grid. The local limit of G n is (U, ρ), where (U, ρ) is (Z, 0) with probability 1/2, and (Z 2 , 0) otherwise.
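The root distribution of the canopy tree in the binary-tree example above can be checked by counting: the full binary tree of height n has 2^{n+1} − 1 vertices, of which 2^{n−k} are at distance k from the leaves, so a uniform root is at distance k with probability tending to 2^{−k−1}. A minimal numerical confirmation:

```python
# Distribution of the distance from a uniform root to the leaves in the full
# binary tree of height n; it converges to 2**(-k-1) as n grows.

def root_distance_law(n):
    total = 2 ** (n + 1) - 1
    return [2 ** (n - k) / total for k in range(n + 1)]

law = root_distance_law(30)
assert abs(law[0] - 1 / 2) < 1e-8    # half the vertices are leaves
assert abs(law[3] - 1 / 16) < 1e-8   # matches 2**(-k-1) for k = 3
```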
Our goal in this chapter is to prove the following pioneering result.

Theorem 5.2 (Benjamini-Schramm [12]). Let {G_n} be a sequence of finite planar graphs with degrees bounded by some M < ∞, and suppose that G_n converges locally to a random rooted graph (U, ρ). Then the simple random walk on U is almost surely recurrent.

For instance, the 3-regular tree cannot arise as a local limit of finite planar graphs (it can, however, be obtained as a local limit of random 3-regular graphs). The bounded degree assumption is necessary for this theorem. Indeed, suppose we start with a binary tree of height n and replace each edge (u, v) at distance k ≥ 0 from the leaves by 2^k parallel edges. By the same reasoning as in the local convergence of binary trees to the canopy tree, the modified graph sequence converges locally to a modified canopy tree in which an edge at distance k from the leaves is replaced with 2^k parallel edges. Using the parallel law it is immediate to see that this graph is transient, and that the effective resistance from a leaf to ∞ is at most 2 (in fact it equals 2).

The Magic Lemma
Given a finite set of points C ⊂ R^2 and w ∈ C, write ρ_w = inf{|v − w| : v ∈ C \ {w}}. We call ρ_w the isolation radius of w. Given δ ∈ (0, 1), s ≥ 2 and w ∈ C, we say that w is (δ, s)-supported if in the disk of radius δ^{−1}ρ_w around w there are at least s points of C outside any given disk of radius δρ_w. Formally, w is (δ, s)-supported if

inf_{p ∈ R^2} |C ∩ (B(w, δ^{−1}ρ_w) \ B(p, δρ_w))| ≥ s .

The proof of Theorem 5.2 is based on the following lemma, which has been dubbed "the Magic Lemma".

Lemma 5.3 ([12]). There exists a universal constant A > 0 such that for every finite set C ⊂ R^2, every δ ∈ (0, 1) and every s ≥ 2, the number of (δ, s)-supported points of C is at most A δ^{−2} log(δ^{−1}) |C| / s.
Remark 5.4. We prove the lemma for R^2, but it holds in R^d and, more generally, in any doubling metric space. In fact, a metric space for which the lemma holds must be doubling; see [28].

Proof of Lemma 5.3
Let k ≥ 3 be an integer (later we will take k = k(δ)). Let G_0 be a tiling of R^2 by 1 × 1 squares, rooted at some point p, and for every n ∈ Z let G_n be a tiling of R^2 by k^n × k^n squares such that each square of G_n is tiled by k × k squares of G_{n−1}. We may choose p so that no point of C lies on the edge of a square.
We say that a square S ∈ G_n is s-supported if for every smaller square S′ ∈ G_{n−1} with S′ ⊂ S we have |C ∩ (S \ S′)| ≥ s.

Claim 5.5. For any s ≥ 2, the total number of s-supported squares in G = ∪_{n∈Z} G_n is at most 2|C|/s.

Proof. Define a "flow" f : G × G → R as follows: Let us make two initial observations. First we have that Therefore, using (5.1) and (5.2), we deduce that there are at most 2|C|/s s-supported squares in ∪_{n>a} G_n. Sending b → ∞ finishes the proof.
The above claim is very close to the statement of Lemma 5.3 which we are pursuing. However, we need to move from squares to circles. We use a technique called random padded partitions.
We choose k = 20δ^{−2} and let β ∼ Unif([0, ln k]). Let G_0 be a tiling by squares of side length e^β, based at the origin. Suppose we have defined G_n as a tiling by squares of side length e^β k^n; then G_{n+1} is a tiling by squares of side length e^β k^{n+1} that is based uniformly at one of the k^2 possible points of G_n. Because the desired statement is invariant under translation and dilation of C, we may assume that C does not intersect the edges of G_n (for every n) and that ρ_w ≥ k for every w ∈ C. We call a point w ∈ C a city in a square S ∈ G if:

• the side length of S is in the interval [4δ^{−1}ρ_w, 5δ^{−1}ρ_w], and
• the distance from w to the center of S is at most δ^{−1}ρ_w.
As for the second item, it holds with positive probability (independent of δ) over the k^2 basing choices for G_n, given that β satisfies the requirement posed by the first item.
Claim 5.7. If w is a city in S and is (δ, s)-supported, then S is s-supported.
Proof. If S ∈ G_n, any little square S′ ∈ G_{n−1} has side length at most 5δ^{−1}ρ_w / k = δρ_w/4. Hence, S′ is contained in a ball of radius δρ_w. Thus, for every S′ ∈ G_{n−1} with S′ ⊆ S there exists a point p such that S′ ⊆ B(p, δρ_w); since S ⊇ B(w, δ^{−1}ρ_w) and w is (δ, s)-supported, we get |C ∩ (S \ S′)| ≥ s, that is, S is s-supported.

Now note that the expected number of pairs (w, S) such that S is s-supported, w is (δ, s)-supported, and w is a city, is at least c ln^{−1}(δ^{−1}) N, where N is the number of (δ, s)-supported points. Also, no more than cδ^{−2} points of C can be cities in a single square S. It follows from Claim 5.5 that

c ln^{−1}(δ^{−1}) N ≤ cδ^{−2} · 2|C|/s ,

concluding the proof of Lemma 5.3.

Recurrence of bounded degree planar graph limits
Theorem 5.2 follows immediately from the following theorem which gives a quantitative estimate on the growth of the resistance in local limits of bounded degree planar maps. In particular, it states that the resistance grows logarithmically in the Euclidean distance of the corresponding circle packing.
Theorem 5.8. Let (U, ρ) be a local limit of finite planar maps with maximum degree at most D. Then, almost surely, there exist a constant c > 0 and a sequence {B_k}_{k≥1} of subsets of U such that for each k we have |B_k| ≤ k and R_eff(ρ ↔ U \ B_k) ≥ c log k. In particular, (U, ρ) is almost surely recurrent.
We write B_euc(p, r) for the Euclidean ball of radius r around a point p ∈ R^2. As before, for a subset O ⊂ R^2 and a given circle packing we write V_O for the set of vertices whose corresponding circles have centers in O. In order to prove Theorem 5.8 we will need the following immediate corollary of the Magic Lemma (Lemma 5.3).

Corollary 5.9. Let G be a finite simple planar triangulation and P a circle packing of G. Let ρ be a uniform random vertex and P′ a dilation and translation of P such that the circle of ρ is a unit circle centered at the origin 0. Then there exists a universal constant A > 0 such that in the packing P′, for every real r ≥ 2 and integer s ≥ 2,

P( ∀p ∈ R^2  |V_{B_euc(0,r) \ B_euc(p,1/r)}| ≥ s ) ≤ A r^2 log r / s .
Proof. Apply the Magic Lemma with δ = 1/r and s = s, taking the centers of the circles of P′ as the point set C. Note that there exists a constant C > 0 such that for all w ∈ V the isolation radius ρ_w satisfies rad(C_w) ≤ ρ_w ≤ C rad(C_w) (without appealing to the Ring Lemma).
The following lemma provides the main estimate needed to prove Theorem 5.8. Once it has been shown, Theorem 5.8 will follow by a Borel-Cantelli argument.
Lemma 5.10. Let G be a finite simple planar map with maximum degree at most D and let ρ be a uniform random vertex of G. Then there exists a constant C = C(D) < ∞ such that for all k ≥ 1,

P( ∃ B ⊂ V with |B| ≤ k and R_eff(ρ ↔ V \ B) ≥ C^{−1} log k ) ≥ 1 − C k^{−1/3} log k .

Proof. We first assume that G is a triangulation and consider a circle packing of it where the circle of ρ is a unit circle centered at the origin 0. Applying Corollary 5.9 with r = k^{1/3} and s = k, we find that with probability at least 1 − Ak^{−1/3} log(k)/3 there exists p ∈ R^2 with |V_{B_euc(0,r) \ B_euc(p,1/r)}| < k.
Now, if |V_{B_euc(p,1/r)}| ≤ 1, we set B = V_{B_euc(0,r)}. We then have |B| ≤ k, and the required resistance lower bound follows by applying Lemma 4.8. Assume now that |V_{B_euc(p,1/r)}| ≥ 2. By the Ring Lemma, there exists a c′ = c′(D) > 0 such that |p| ≥ 1 + c′. Since |V_{B_euc(p,1/r)}| ≥ 2, there is a vertex in that set whose circle has radius at most r^{−1}. Therefore B_euc(p, 2/r) contains at least one full circle C_v. Hence, by scaling and translating so that C_v = U, we get (again by Lemma 4.8) that for some other constant c_2 = c_2(D) > 0, since ρ ∈ V \ V_{B_euc(p,c′/2)}, we obtain R_eff(ρ ↔ V_{B_euc(p,2/r)}) ≥ c_2 log k. By Lemma 4.8 we also have that for some c_3 = c_3(D) > 0. By Claim 2.22 this means that

By the union bound
hence by Claim 2.22 again concluding the proof when G is a triangulation.
If G is not a triangulation, we would like to add edges to make it a triangulation while ensuring that the maximal degree does not increase by too much. We also have to ensure that the graph remains simple, which may require us to add some additional vertices as well. Let f be a face of G with vertices v_1, . . . , v_k. Suppose first that there are no edges between non-consecutive vertices of the face. In this case we draw the edges in a zig-zag fashion, as in Fig. 5.3. In the case where edges between non-consecutive vertices of the face exist, we draw a cycle u_1, . . . , u_k inside f. Then we connect u_i to v_i and v_{i+1} for each i < k, and u_k to v_k and v_1. Finally, we triangulate the inner face created by the new cycle by zig-zagging as in the previous case (see Fig. 5.4).
Since each vertex of the original graph is a member of at most D faces, and for each face at most 2 edges are added, the maximal degree of the resulting graph is at most 3D. Similarly, the number of vertices in the resulting graph is at most D times the number of vertices in the original graph, hence the probability of a random vertex being a vertex of the original graph is at least D^{−1}. If this occurs, then it is straightforward to see that the existence of a subset of vertices B in the new graph which satisfies the required conditions implies the existence of such a set in the old graph, concluding our reduction to the triangulation case and finishing our proof. We are ready to deduce Theorem 5.8.
Proof of Theorem 5.8. Assume that G_n are finite planar maps with maximum degree at most D such that G_n loc−→ (U, ρ). If the G_n are not simple graphs, we erase loops and merge parallel edges into single edges to obtain a sequence {G′_n}. It is immediate that G′_n loc−→ (U′, ρ′), where (U′, ρ′) is distributed as (U, ρ) after removing from U all loops and merging parallel edges into single edges. Since the maximum degree is bounded, U is recurrent if and only if U′ is recurrent. Thus we may assume that the G_n are simple graphs, so the previous estimates may be used.

Denote by A_k the event that there exists a set B_k ⊂ U with |B_k| ≤ k and R_eff(ρ ↔ U \ B_k) ≥ C^{−1} log k, where C = C(D) < ∞ is the constant from Lemma 5.10. By Lemma 5.10 we have P(A_k^c) ≤ C k^{−1/3} log k. Looking at the sequence {A_{2^j}}_{j≥1}, by Borel–Cantelli, almost surely there exists j_0 such that for all j ≥ j_0 the event A_{2^j} holds. Thus we have proved the required assertion for k a power of 2. To prove it for all k sufficiently large, let B_{2^j} be the set guaranteed to exist in the definition of A_{2^j}, and take B_k = B_{2^j} for the unique j for which 2^j ≤ k < 2^{j+1}. It is immediate that these sets satisfy the assertion of the theorem, concluding our proof.
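The interpolation from powers of 2 to all large k amounts to the following two-line estimate: if B_k := B_{2^j} for the unique j with 2^j ≤ k < 2^{j+1}, then

```latex
\[
|B_k| = |B_{2^j}| \le 2^j \le k, \qquad
R_{\mathrm{eff}}\bigl(\rho \leftrightarrow U \setminus B_k\bigr)
\ge c \log 2^j \ge \frac{c}{2}\,\log k ,
\]
```

where the last inequality holds because k < 2^{j+1} gives log k < (j+1) log 2 ≤ 2j log 2 = 2 log 2^j for j ≥ 1.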

Exercises
1. For a graph G, let G^2 be the graph on the same vertex set as G in which vertices u, v form an edge if and only if the graph distance in G between u and v is at most 2. Show that if G has uniformly bounded degrees, then G is recurrent if and only if G^2 is recurrent.

2. Construct an example of a local limit (U, ρ) of finite planar graphs such that U is almost surely recurrent, but U^2 is almost surely transient.
3. Let G(n, p) be the random graph on n vertices in which each of the n(n − 1)/2 possible edges appears with probability p independently of all other edges. Let c > 0 be a constant; show that G(n, c/n) converges locally to a branching process with progeny distribution Poisson(c).

4. Fix an integer k ≥ 1. Construct an example of a sequence of finite simple planar maps G_n such that G_n converges locally to (U, ρ) with the property that E[deg^k(ρ)] < ∞ and U is almost surely transient.

5. (*) Suppose that G_n is a sequence of finite trees converging locally to (U, ρ). Show that U is almost surely recurrent. (Note that the degrees may be unbounded.)

Recurrence of random planar maps
Our main goal in this chapter is to remove the bounded degrees assumption in Theorem 5.2 and replace it with the assumption that the degree of the root has an exponential tail.
Theorem 6.1 ([30]). Let G_n be a sequence of (possibly random) planar graphs such that G_n loc−→ (U, ρ), and suppose there exist C, c > 0 such that P(deg(ρ) ≥ k) ≤ Ce^{−ck} for every k. Then U is almost surely recurrent.
As discussed in Section 1.2, the last theorem is immediately applicable in the setting of random planar maps. It is well known that the degree of the root in the UIPT and the UIPQ has an exponential tail; see [7, Lemmas 4.1 and 4.2] or [25] for the UIPT and [8, Proposition 9] for the UIPQ.

Star-tree transform
We present here a transformation mapping any planar map G to a planar map G* with maximal degree at most 3. We call this transformation G → G* the star-tree transform. Recall that a balanced rooted tree is a finite rooted tree in which every non-leaf vertex has precisely two children and the distances of the leaves from the root differ by at most 1. The transformation is performed as follows.
1. Subdivide each edge e by adding a new vertex w_e of degree two in the "middle". See Fig. 6.1b. Denote the resulting graph by G′.
2. For every vertex v ∈ V(G), replace all edges incident to v in G′ by a balanced binary tree rooted at v whose leaves are the neighbors of v in G′. We perform this in a fashion which preserves the cyclic order of these neighbors and thus preserves planarity. Furthermore, add two extra vertices and attach them to the root. Denote this tree by T_v. See Fig. 6.1d.
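Ignoring the planar embedding data (the construction chooses the tree so as to preserve the cyclic order of neighbors, which this toy code does not track), the tree T_v can be sketched combinatorially as follows; all names are illustrative:

```python
from itertools import count

def star_tree(d):
    """Sketch of T_v for a vertex v of degree d: a balanced binary tree
    whose d leaves stand for the subdivision vertices w_e, plus two extra
    vertices attached to the root.  Returns (edges, root, leaves)."""
    fresh = count()                        # names for internal vertices
    leaves = [("leaf", i) for i in range(d)]
    edges = []

    def build(vs):
        # Split the leaf list in half recursively, so that all leaf
        # depths differ by at most 1 (a balanced rooted tree).
        if len(vs) == 1:
            return vs[0]
        mid = len(vs) // 2
        node = ("inner", next(fresh))
        for child in (build(vs[:mid]), build(vs[mid:])):
            edges.append((node, child))
        return node

    root = build(leaves)
    for i in range(2):                     # the two extra vertices
        edges.append((root, ("extra", i)))
    return edges, root, leaves

# A full binary tree with d leaves has 2d - 2 edges; with the two extra
# edges, T_v has exactly 2d edges -- the count used later in Claim 6.12.
for d in range(2, 12):
    edges, _, leaves = star_tree(d)
    assert len(edges) == 2 * d and len(leaves) == d
```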
Denote the resulting graph by G*. Note that each edge e of G* belongs to precisely one tree T_v for a vertex v of G.

Lemma 6.3. Let G be a planar map and G* its star-tree transform. We set edge resistances on G* by putting R_e = 1/d_G(v), where v is the vertex of G for which e ∈ T_v and d_G(v) is the degree of v in G. If the network (G*, R_e) is recurrent, then G is recurrent as well.

Proof. It is clear that, from the point of view of recurrence versus transience, the two edges leading to the two "extra" neighbors of each root do not matter and can be removed; hence for the rest of the proof we write T_v for the previously defined tree with these two edges removed. The purpose of these extra edges will become apparent later, in the proof of Theorem 6.1.
Assume G is transient and let a ∈ V(G) be some vertex. There is a flow θ from a to ∞ such that E(θ) < ∞. We will construct a flow θ* on (G*, R_e) from a to ∞ with finite energy, showing that (G*, R_e) is transient and giving the lemma. We first provide some notation. We denote by A the set of vertices that were added to form G′ in the first step of the star-tree transform (that is, the white vertices in Fig. 6.1). Each vertex w ∈ A is a leaf of precisely two trees T_u and T_v, where {u, v} was the edge of G that w divided; we call u and v the tree roots of w. We denote by B the set of vertices that were added to G* in the second step of the star-tree transform, that is, the gray vertices in Fig. 6.1d. The vertices of V(G) are the black discs in Fig. 6.1. Each vertex x ∈ V(G) ∪ B is a member of a single tree T_v; we call v the tree root of x. Lastly, for any x ∈ V(G) ∪ B we denote by C_x ⊂ A the set of leaves of T_v, where v is the tree root of x, for which the path from the leaf to the root of T_v passes through x; in other words, C_x is the set of leaves of T_v which are "descendants" of x. If x ∈ A, we set C_x = {x}.
To define θ*, let e = (x, y) be an edge of T_v and assume that x is closer to the root of T_v than y in graph distance. We set

θ*(x, y) = Σ_{w ∈ C_y} θ(v, v_w) ,

where, for a leaf w, v_w is the tree root of w that is not v. The construction of θ* is depicted in Fig. 6.2.
We will now show that E(θ*) ≤ 4E(θ), where the energy of θ* is taken in the network (G*, R_e). Let v ∈ V(G) and write h for the height of T_v, that is, h is the maximal graph distance from a leaf of T_v to its root. Note that since the tree is balanced, the distances from the leaves to the root vary by at most 1. Let e = (x, y) be an edge of T_v, assume that x is closer than y to the root of T_v, and let ℓ ∈ {1, . . . , h} be the graph distance of y from the root. By the construction of θ*, the contribution of e to E(θ*) is

R_e θ*(x, y)^2 = (1/d_G(v)) ( Σ_{w ∈ C_y} θ(v, v_w) )^2 .

Since y is at distance ℓ from the root, |C_y| ≤ 2^{h−ℓ}. Hence, by Cauchy–Schwarz,

R_e θ*(x, y)^2 ≤ (2^{h−ℓ}/d_G(v)) Σ_{w ∈ C_y} θ(v, v_w)^2 .

Summing over all edges of T_v at distance ℓ from the root, we go over each leaf of T_v at most once. Thus,

Σ_{e at distance ℓ} R_e θ*(e)^2 ≤ (2^{h−ℓ}/d_G(v)) Σ_{w leaf of T_v} θ(v, v_w)^2 .

We now sum over all edges of T_v by summing over ℓ ∈ {1, . . . , h}. We get

Σ_{e ∈ T_v} R_e θ*(e)^2 ≤ (2^h/d_G(v)) Σ_{w leaf of T_v} θ(v, v_w)^2 ≤ 2 Σ_{w leaf of T_v} θ(v, v_w)^2 ,

since h ≤ log_2(d_G(v)) + 1. Lastly, we sum this over all v ∈ V(G) and note that each term of the form θ(v, v_w)^2 in the last sum appears twice. Hence E(θ*) ≤ 4E(θ), concluding our proof.

Stationary random graphs and markings

Stationary random graphs
Recall that Theorem 6.1 and the entire setup of Chapter 5 are adapted to the case when G_n is itself random. The reason is that in Definition 5.1 we consider the graph distance ball B_{G_n}(ρ_n, r) as a random variable in the probability space (G•, d_loc), where ρ_n conditioned on G_n is a uniformly chosen random vertex.
Let us emphasize that this is not the same as drawing a sample of {G_n} and claiming that almost surely G_n loc−→ (U, ρ). For example, let G_n be a path of length n with probability 1/2 and an n × n square grid with probability 1/2, independently for all n. In this case G_n loc−→ (U, ρ), where U = Z with probability 1/2 and U = Z^2 with probability 1/2; however, almost surely on the sequence {G_n}, the local limit of G_n does not exist.
In many cases it is useful to take a random root drawn from the stationary distribution on G_n, that is, the probability distribution on vertices giving each vertex v probability deg_{G_n}(v)/(2|E(G_n)|). In a similar fashion to Definition 5.1, we define this type of local convergence.
Definition 6.4. Let {G_n} be a sequence of (possibly random) finite graphs. We say that G_n loc−→π (U, ρ) if (G_n, ρ_n) converges in distribution to (U, ρ) in the space (G•, d_loc), where ρ_n is a random vertex of G_n drawn according to the stationary distribution on G_n. We call such a limit a stationary local limit.
Note that this notion of convergence is genuinely different from the one of Definition 5.1. Indeed, let G_n be a path of length n attached to a complete graph on √n vertices. Then the local limit of G_n is Z; however, the limit according to a stationary random root does not exist.
The reason for taking the loc−→π limit rather than the uniform limit as before is that the random walk on the limit (U, ρ) started at ρ is then stationary.

Claim 6.5. Assume that G_n loc−→π (U, ρ). Conditioned on (U, ρ), let X_1 be a uniformly chosen neighbor of ρ. Then (U, X_1) is equal in law to (U, ρ). Similarly, if {X_n}_{n≥0} is the simple random walk on (U, ρ), then for each n ≥ 0 the law of (U, X_n) coincides with the law of (U, ρ).
Proof. If H is a finite graph and v is a vertex chosen from the stationary distribution, then it is immediate that a uniformly chosen random neighbor of v is distributed according to the stationary distribution. Thus for any fixed r > 0 we have that B_{G_n}(ρ_n, r) has the same distribution as B_{G_n}(X_1, r), where ρ_n is drawn from the stationary distribution on G_n and X_1 is a uniform neighbor of ρ_n. The claim now follows by definition.

Definition 6.6. A random rooted graph (G, ρ) is called a stationary random graph if (G, X_1) has the same distribution as (G, ρ), where the vertex X_1 is a uniform neighbor of ρ (conditioned on (G, ρ)).
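The finite-graph fact at the start of the proof of Claim 6.5 can be checked exactly on a small example (the graph here is ours): if the root is drawn from the stationary distribution π(v) = deg(v)/2|E| and X_1 is a uniform neighbor of it, then X_1 is again π-distributed.

```python
from fractions import Fraction

# A small asymmetric graph: a triangle (0,1,2) with a pendant vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
two_E = sum(len(ns) for ns in adj.values())          # = 2|E|
pi = {v: Fraction(len(adj[v]), two_E) for v in adj}  # stationary distribution

# Law of a uniform neighbor X1 of a pi-distributed root.
law_X1 = {v: Fraction(0) for v in adj}
for v, ns in adj.items():
    for u in ns:
        law_X1[u] += pi[v] * Fraction(1, len(ns))

assert law_X1 == pi   # the stationary root stays stationary
```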
We would like to develop a simple abstract framework that will allow us to move comfortably between loc−→ convergence and loc−→π convergence. This is straightforward when {G_n} is a sequence of deterministic graphs with uniformly bounded average degree, but is less obvious when the G_n themselves are random. For this we need to degree bias our random graphs.

Definition 6.7. Denote by P the law of a random rooted graph (G, ρ) and assume that E deg(ρ) < ∞. The probability measure µ on (G•, d_loc) defined by

µ(A) = E[ deg(ρ) 1_{(G,ρ)∈A} ] / E[deg(ρ)]

for any event A ⊂ (G•, d_loc) is called the degree biasing of P. Similarly, the probability measure ν defined by

ν(A) = E[ deg(ρ)^{−1} 1_{(G,ρ)∈A} ] / E[deg(ρ)^{−1}]

is called the degree unbiasing of P. Note that to define ν we do not need to require that E deg(ρ) < ∞.
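A minimal sketch of the two operations of Definition 6.7, with atoms keyed by name rather than by rooted isomorphism classes (all names are illustrative); note that unbiasing the biased law recovers the original law:

```python
from fractions import Fraction

def degree_bias(law):
    """law: dict atom -> (prob, degree of the root in that atom).
    Returns the degree-biased law  mu(a) = deg(a) P(a) / E[deg]."""
    Z = sum(p * d for p, d in law.values())
    return {a: ((p * d) / Z, d) for a, (p, d) in law.items()}

def degree_unbias(law):
    """nu(a) = deg(a)**-1 P(a) / E[deg**-1] -- the inverse operation."""
    Z = sum(p / d for p, d in law.values())
    return {a: ((p / d) / Z, d) for a, (p, d) in law.items()}

# A toy law on two rooted graphs, with root degrees 1 and 3.
P = {"path-end": (Fraction(1, 2), 1), "star-center": (Fraction(1, 2), 3)}
assert degree_bias(P)["star-center"][0] == Fraction(3, 4)
assert degree_unbias(degree_bias(P)) == P
```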
Lemma 6.8. Assume that (G, ρ) is a random rooted graph such that G is almost surely finite, that the distribution of ρ given G is uniform and that E deg(ρ) < ∞. Then the degree biasing of (G, ρ) is a stationary random graph.
Conversely, assume that (G^π, ρ^π) is a stationary random graph such that G^π is almost surely finite. Then its degree unbiasing (G, ρ) is such that G is almost surely finite and ρ conditioned on G is uniformly distributed.
Proof. We will prove only the first statement; the second is similar. Denote by (G^π, ρ^π) a random variable drawn according to the degree biasing of (G, ρ). Let H be a fixed finite graph and denote by deg_H(v) the degree of a vertex v in H. By definition we have that

P((G^π, ρ^π) = (H, v)) = deg_H(v) P((G, ρ) = (H, v)) / E deg(ρ) .    (6.1)
Let X_1 be a uniformly chosen neighbor of ρ^π. Then by (6.1),

P((G^π, X_1) = (H, u)) = Σ_{v : v ∼ u} (1/deg_H(v)) · deg_H(v) P((G, ρ) = (H, v)) / E deg(ρ) = Σ_{v : v ∼ u} P((G, ρ) = (H, v)) / E deg(ρ) .
Since ρ is uniformly distributed given G, the quantity P((G, ρ) = (H, v)) is the same for all v. So

P((G^π, X_1) = (H, u)) = deg_H(u) P((G, ρ) = (H, u)) / E deg(ρ) ,

and by (6.1) the required assertion follows.

Corollary 6.9. Assume that {G_n} is a sequence of random finite graphs such that G_n loc−→ (U, ρ), and denote by ρ_n a uniformly chosen vertex of G_n and by (G^π_n, ρ^π_n) the degree biasing of (G_n, ρ_n). Assume further that E deg(ρ) < ∞ and that E deg(ρ_n) → E deg(ρ). Then G^π_n loc−→π (U^π, ρ^π), where (U^π, ρ^π) is the degree biasing of (U, ρ).
Conversely, assume that {G^π_n} is a sequence of random finite graphs such that G^π_n loc−→π (U^π, ρ^π), denote by ρ^π_n a random vertex of G^π_n drawn according to the stationary distribution, and let (G_n, ρ_n) be the degree unbiasing of (G^π_n, ρ^π_n). Then G_n loc−→ (U, ρ), where (U, ρ) is the degree unbiasing of (U^π, ρ^π). Furthermore, (U, ρ) and (U^π, ρ^π) are absolutely continuous with respect to each other.
Proof. Indeed, let (H, v) be a finite rooted graph and let r > 0 be a fixed integer. Then
where the last equality is by definition. The absolute continuity of (U, ρ) and (U π , ρ π ) follows immediately from the definition.
The second statement follows by the same proof; note that by the dominated convergence theorem the normalizing constants converge as well. We end this subsection by addressing the somewhat technical issue of verifying the condition E deg(ρ_n) → E deg(ρ) in Corollary 6.9. It is not guaranteed just by requiring sup_n E deg(ρ_n) < ∞, as we see in the example of a path of length n together with √n loops attached to √n arbitrary vertices of the path; in this example deg(ρ) = 2 almost surely, and E deg(ρ_n) = 3 + o(1). However, we now show that it is always possible to "truncate" the finite graphs G_n by removing edges touching vertices of large degree so that the limit is unchanged and the average degrees converge to the expected degree of the limit. Given a finite graph G and an integer k ≥ 1, we denote by G ∧ k the graph obtained from G by erasing all the edges touching vertices of degree at least k.

Lemma 6.10. Let {G_n} be a sequence of random finite graphs such that G_n loc−→ (U, ρ) and E deg(ρ) < ∞. Then there exists a sequence k(n) → ∞ such that G_n ∧ k(n) loc−→ (U, ρ) and E[deg(ρ_n) ∧ k(n)] → E deg(ρ), where ρ_n is a uniformly chosen vertex of G_n.
Proof. We first show that for any sequence k(n) → ∞ we have G_n ∧ k(n) loc−→ (U, ρ).
Secondly, since deg(ρ_n) converges in distribution to deg(ρ), there exists a sequence k(n) → ∞ such that E[deg(ρ_n) ∧ k(n)] → E deg(ρ). Indeed, by dominated convergence we have that E[deg(ρ) ∧ k] → E deg(ρ) as k → ∞. Furthermore, by bounded convergence, for any fixed k we have E[deg(ρ_n) ∧ k] → E[deg(ρ) ∧ k] as n → ∞. Hence for any ε > 0 there exist k and n_0 such that for all n ≥ n_0 we have |E[deg(ρ_n) ∧ k] − E deg(ρ)| ≤ ε, and a standard diagonal argument finishes the proof.

Markings
Given a locally convergent sequence of (possibly random) graphs G_n, we wish to apply the star-tree transform to create a sequence G*_n and take its local limit while "remembering", in light of Lemma 6.3, the original degrees of G_n. The approach is a rather straightforward extension of the abstract setting of Section 5.1; see also [2]. We consider the space of triples (G, ρ, M) where G = (V, E) is a graph, ρ ∈ V is a vertex and M : E → R is a function assigning real values to the edges. We endow the space with a metric by setting the distance between (G_1, ρ_1, M_1) and (G_2, ρ_2, M_2) to be 2^{−R}, where R is the maximal value such that there exists a rooted graph isomorphism ϕ between B_{G_1}(ρ_1, R) and B_{G_2}(ρ_2, R) with |M_1(e) − M_2(ϕ(e))| ≤ R^{−1} for all edges e ∈ E(G_1) both of whose endpoints are in B_{G_1}(ρ_1, R). It is easy to check that this space is again a Polish space, so once more we may define convergence in distribution of random variables taking values in it.
We say that such a random triplet (U, ρ, M) is stationary if, conditioned on (U, ρ, M), a uniformly chosen random neighbor X_1 of ρ satisfies that (U, X_1, M) has the same law as (U, ρ, M) in the space of isomorphism classes of rooted graphs with markings (that is, rooted isomorphisms that preserve the markings). Given a marking M we extend it to M : E(U) ∪ V(U) → R by setting M(v) = max_{e : v ∈ e} M(e) for any v ∈ V(U). We say that (U, ρ, M) has an exponential tail if for some A < ∞ and β > 0 we have P(M(ρ) ≥ s) ≤ Ae^{−βs} for all s ≥ 0.
In the following lemma we consider a stationary triplet (U, ρ, M) that has an exponential tail and compare the hitting probabilities of certain sets when we endow the graph with two sets of edge resistances: the first are the usual unit resistances, while in the second we may change the edge resistances arbitrarily, but only on edges with high M values. We tailored the lemma this way in order to show that (G*, R_e) from Lemma 6.3 is recurrent.

Lemma 6.11. Let (U, ρ, M) be a stationary, bounded degree rooted random graph with markings which has an exponential tail. Conditioned on (U, ρ, M), fix some finite set B ⊂ U. Let P_ρ denote the unit-resistance random walk on U starting from ρ and let P′_ρ denote the random walk on U with edge resistances R_e satisfying R_e = 1 whenever M(e) ≤ 21β^{−1} log |B|. Then, almost surely on (U, ρ, M), there exists K < ∞ such that for any finite subset B ⊂ U with |B| ≥ K we have

Proof. For all integers T, s ≥ 1 we set Since (U, ρ, M) is stationary and has an exponential tail, for any t ≥ 0 we have hence by the union bound Thus by Markov's inequality P(A^c_{T,s}) ≤ AT^{−2} e^{−βs/2}. By Borel–Cantelli we deduce that almost surely A_{T,s} occurs for all but finitely many pairs T, s. Conditioned on (U, ρ, M), we may consider only finite subsets B ⊂ U which contain ρ, since otherwise both probabilities in the statement of the lemma are 1. Let B be such a subset. By the commute time identity (Lemma 2.26), and since the maximum degree of U is bounded, for some constant C > 0. The last inequality holds since the resistance is bounded by |B|, as there is a path of length at most |B| from ρ to U \ B. By Markov's inequality, For every T, s for which A_{T,s} occurs we have We now choose T = 2C|B|^3 and s = 21β^{−1} log |B|, so that the right hand side of the last inequality is at most |B|^{−1} when |B| is sufficiently large. It is clear that we can couple two random walks starting from ρ, one walking on U with unit resistances and the other on (U, R_e), so that they remain together until they visit a vertex of S. Hence, when |B| is large enough so that the chosen T, s are such that A_{T,s} holds, we deduce from the last inequality that with probability at least 1 − |B|^{−1} the simple random walk on U visits {ρ} ∪ (U \ B) before visiting S, concluding our proof.

Proof of Theorem 6.1
We now proceed to wrap up the proof of Theorem 6.1. Recall that we have a sequence of finite planar graphs {G_n} such that G_n loc−→ (U, ρ), where P(deg(ρ) ≥ k) ≤ Ce^{−ck}. Our goal is to prove that (U, ρ) is almost surely recurrent.
By Lemma 6.10 and Corollary 6.9 we may truncate and degree bias G_n and (U, ρ), so we may assume without loss of generality that G_n loc−→π (U, ρ). It is an easy computation using Definition 6.7 that we still have P(deg(ρ) ≥ k) ≤ Ce^{−ck} (possibly with different positive constants C, c). Thus, from now on, we assume that G_n loc−→π (U, ρ) and that deg(ρ) has an exponential tail.
Recall now the definitions and notation of Section 6.1. Consider the star-tree transform G*_n of G_n and let ρ*_n be a random vertex of T_{ρ_n} drawn according to the stationary distribution of T_{ρ_n}. Similarly, conditioned on (U, ρ), let U* be the star-tree transform of U and let ρ* be a random vertex of T_ρ drawn according to the stationary distribution of T_ρ. Furthermore, we put markings on G*_n and U* by marking each edge e of G*_n or U* with deg(v) whenever e is in the tree T_v, where deg(v) is the degree of v in G_n or U, respectively. Denote these markings by M_n and M, respectively.

Claim 6.12. The triplets (G*_n, ρ*_n, M_n) for each n and (U*, ρ*, M) are stationary, and (G*_n, ρ*_n, M_n) converges to (U*, ρ*, M).

Proof. Since for any fixed integer r > 0 the laws of B_{G*_n}(ρ*_n, r) and B_{U*}(ρ*, r) are determined by B_{G_n}(ρ_n, r) and B_U(ρ, r), respectively, we obtain the claimed convergence. Secondly, it is immediate to check that for each v ∈ G_n the number of edges in T_v is precisely 2 deg_{G_n}(v); this is the reason we added the two "extra" neighbors to the root of T_v in the star-tree transform described in Section 6.1. Thus, conditioned on G_n, for any x ∈ G*_n such that x ∈ T_v for some v ∈ G_n we have that

P(ρ*_n = x | ρ_n = v) = deg_{T_v}(x) / (4 deg_{G_n}(v)) ,

or in other words, (G*_n, ρ*_n, M_n) is a stationary random graph, and since it converges to (U*, ρ*, M), the latter is also stationary.

Lemma 6.13. The triplet (U*, ρ*, M) has an exponential tail.
Proof. We observe that M(ρ*) = deg(v), where v is either ρ or one of its neighbors (the latter can happen if ρ* was chosen to be a leaf of T_ρ). Hence it suffices to show that if (U, ρ) is a stationary local limit such that deg(ρ) has an exponential tail, then the random variable D(ρ) = max_{v : {ρ,v} ∈ E(U)} deg(v) has an exponential tail. We have

P(D(ρ) ≥ k) ≤ P(deg(ρ) ≥ k) + P(D(ρ) ≥ k, deg(ρ) < k) .    (6.2)

The first term on the right hand side decays exponentially in k by our assumption on (U, ρ). Conditioned on (U, ρ), let X_1 be a uniformly chosen random neighbor of ρ. Then clearly

P(D(ρ) ≥ k, deg(ρ) < k) ≤ k P(deg(X_1) ≥ k) .

However, by stationarity P(deg(X_1) ≥ k) = P(deg(ρ) ≥ k), which decays exponentially. We conclude that the second term on the right hand side of (6.2) decays exponentially as well.
Consider the stationary random graph (U*, ρ*, M); by Lemma 6.13 it has an exponential tail. Consider the edge resistances R^mark_e = 1/M(e). In view of Lemma 6.3, it suffices to show that the network (U*, R^mark) is almost surely recurrent, for then it will follow that U is almost surely recurrent. To prove the former, we apply the second assertion of Corollary 6.9, which allows us to assume without loss of generality that (U*, ρ*) is a local limit of finite planar maps (rather than a stationary local limit). Since (U*, ρ*) is now a local limit of finite planar maps with degrees bounded by 3, we may apply Theorem 5.8 to obtain, almost surely, a constant c > 0 and a sequence of sets B_k ⊂ U* such that

|B_k| ≤ k , |B_k| ≥ ck and R_eff(ρ* ↔ U* \ B_k) ≥ c log k ,

where we added to the conclusion of Theorem 5.8 the requirement |B_k| ≥ ck, since adding vertices to B_k only improves the lower bound on the resistance.
We now define one extra set of edge resistances on U*, which will allow us to interpolate between the edge resistances R^unit and R^mark. For each integer k ≥ 1 we define where C > 0 is some large constant that will be chosen later. We will use P, P^mark and P^mid to denote the probability measures, conditioned on (U*, ρ*, M), of the random walks on U* with edge resistances {R^unit_e}, {R^mark_e} and {R^mid_e}, respectively.
Lemma 6.14. For some other constant c > 0 we have

Proof. We may assume k is large enough so that M(e) ≤ C log k for every edge e incident to ρ*. By Claim 2.22 we have by our assumption on B_k above. By Lemma 6.11 it follows that when k is large enough and the constant C > 0 in the definition of {R^mid_e} is chosen large enough with respect to β. Using Claim 2.22 again and the fact that U* has degrees bounded by 3 concludes the proof.
We need yet another easy general fact about electric networks.

Claim 6.15. Consider a finite network G in which all resistances are bounded above by 1. Then for any integer m ≥ 1 and any two vertices a ≠ z we have

Proof. Let θ_m be the unit current flow from B(a, m) to z. For a vertex v ∈ B(a, m) denote α_v (θ_m + θ_{a,v}).

By Thomson's principle (Theorem 2.28), Jensen's inequality and since
We are finally ready to conclude the proof of the main theorem of this chapter.
Proof of Theorem 6.1. By Lemma 6.14 and Claim 6.15, the sets B_k obtained earlier satisfy, for any m ≥ 0, Moreover, for every edge e, By taking k → ∞ we deduce that there exists c > 0 such that for any m ≥ 1 Consider the unit current flow from ρ* to ∞ in (U*, {R^mark_e}). If this flow had finite energy, then for any ε > 0 there would exist m ≥ 1 such that R_eff(B_{U*}(ρ*, m) ↔ ∞; {R^mark_e}) ≤ ε, contradicting the above. Hence the network (U*, {R^mark_e}) is almost surely recurrent, concluding the proof.

Introduction
Let G be a finite connected graph. A spanning tree T of G is a connected subgraph of G that contains no cycles and such that every vertex of G is incident to at least one edge of T . The set of spanning trees of a given finite connected graph is obviously finite and hence we may draw one uniformly at random. This random tree is called the uniform spanning tree (UST) of G. This model was first studied by Kirchhoff [48] who gave a formula for the number of spanning trees of a given graph and provided a beautiful connection with the theory of electric networks. In particular, he showed that the probability that a given edge {x, y} of G is contained in the UST equals R eff (x ↔ y; G); we prove this fundamental equality in Section 7.2 (see Theorem 7.2).
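Kirchhoff's equality P({x, y} ∈ UST) = R_eff(x ↔ y; G) can be verified by brute force on a small example; the graph and helper names below are ours, not the text's:

```python
from fractions import Fraction
from itertools import combinations

VERTICES = [0, 1, 2, 3]
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle plus a chord

def spanning_trees():
    """All spanning trees of (VERTICES, EDGES), by brute force."""
    trees = []
    for sub in combinations(EDGES, len(VERTICES) - 1):
        parent = list(range(len(VERTICES)))          # union-find forest
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for a, b in sub:
            ra, rb = find(a), find(b)
            if ra == rb:
                acyclic = False
                break
            parent[ra] = rb
        if acyclic:              # |V|-1 edges and no cycle => spanning tree
            trees.append(sub)
    return trees

def gauss(A, b):
    """Solve A x = b by Gaussian elimination over the rationals."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        M[i] = [c / M[i][i] for c in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i]
                M[r] = [c - f * d for c, d in zip(M[r], M[i])]
    return [M[r][-1] for r in range(n)]

def effective_resistance(x, y):
    """R_eff(x <-> y) with unit conductances: set v(x)=1, v(y)=0, require
    the voltage to be harmonic elsewhere, return 1/(current out of x)."""
    nbrs = {v: [] for v in VERTICES}
    for a, b in EDGES:
        nbrs[a].append(b)
        nbrs[b].append(a)
    interior = [v for v in VERTICES if v not in (x, y)]
    A, rhs = [], []
    for v in interior:
        row = {u: Fraction(0) for u in interior}
        row[v] = Fraction(len(nbrs[v]))
        c = Fraction(0)
        for u in nbrs[v]:
            if u == x:
                c += 1
            elif u in row:
                row[u] -= 1
        A.append([row[u] for u in interior])
        rhs.append(c)
    volt = {x: Fraction(1), y: Fraction(0)}
    volt.update(dict(zip(interior, gauss(A, rhs))))
    return 1 / sum(volt[x] - volt[u] for u in nbrs[x])

trees = spanning_trees()
for e in EDGES:
    p_edge = Fraction(sum(e in t for t in trees), len(trees))
    assert p_edge == effective_resistance(*e)   # Kirchhoff's formula
```

For this graph there are 8 spanning trees and, for instance, P((0, 1) ∈ UST) = R_eff(0 ↔ 1) = 5/8.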
Is there a natural way of defining a UST probability measure on an infinite connected graph? It will soon become clear that we have set the framework to answer this positively in Section 2.3. Let G = (V, E) be an infinite connected graph and assume that {G_n} is a finite exhaustion of G as defined in Section 2.5. That is, {G_n} is a sequence of finite graphs, G_n ⊂ G_{n+1} for all n, and ∪G_n = G. Russell Lyons conjectured that the UST probability measure on G_n converges weakly to some probability measure on subsets of E, and in his pioneering work Pemantle [67] showed that this is indeed the case.
More precisely, denote by T_n a UST of G_n. It is shown in [67] that the limit lim_{n→∞} P(A ⊂ T_n, B ∩ T_n = ∅) exists (7.1) for any two finite subsets of edges A, B of G. Thus, the law of a limiting random edge set F is determined and we denote it by µ^F. The superscript F stands for free and will be explained momentarily. Let us explore some properties of µ^F that are immediate from its definition.
Since every vertex of G is touched by at least one edge of T n with probability 1 when n is large enough (so that G n contains the vertex), we learn that the edges of F almost surely touch every vertex of G, that is, F is almost surely spanning. Similarly, the probability that the edges of a given cycle in G are contained in T n (once n is large enough so that G n contains the cycle) is 0. Since G has countably many cycles we deduce that almost surely there are no cycles in F. By a similar reasoning we deduce that almost surely any connected component of F is infinite. However, a moment's reflection shows that this kind of reasoning cannot be used to determine that F is almost surely connected.
It turns out, perhaps surprisingly, that F need not be connected almost surely. A remarkable result of Pemantle [67] shows that a sample of µ^F on Z^d is almost surely connected when d = 1, 2, 3, 4 and almost surely disconnected when d ≥ 5. Since it may be the case that a sample of µ^F is disconnected with positive probability, we call µ^F the free uniform spanning forest (rather than tree) of G, denoted henceforth FUSF_G. The term free corresponds to the fact that we have not imposed any boundary conditions when taking the limit. It will be very useful to take other boundary conditions, such as the wired boundary condition; see Section 7.3. The seminal paper of Benjamini, Lyons, Peres and Schramm [9] explores many properties of these infinite random forests (such as the number of components, and connectivity in particular, the size of the trees, recurrence or transience of the trees, and many others) on various underlying graphs, with an emphasis on Cayley graphs. We refer the reader to [9] and to [60, Chapters 4 and 10] for a comprehensive treatment.
The question of connectivity of the FUSF is therefore fundamental, and unfortunately it is not even known that connectivity is an event of probability 0 or 1 on every graph G, see [9, Question 15.7]. In [44] the circle packing theorem (Theorem 3.5) is used to prove that FUSF_G is almost surely connected when G is a bounded-degree proper planar map, answering a question of [9, Question 15.2]. Our goal in this chapter is to present a proof of the following special case.

Theorem 7.1. Let G be a bounded-degree, transient, one-ended planar triangulation. Then FUSF_G is almost surely connected.

Even though this is a particular case of the general theorem, the argument we present here contains most of the key ideas. We refer the interested reader to [44] for the general statement. The rest of this chapter is organized as follows. In Section 7.2 we discuss two basic properties of USTs on finite graphs, namely Kirchhoff's effective resistance formula mentioned earlier and the spatial Markov property of the UST. In Section 7.3 we prove Pemantle's [67] result (7.1) showing that FUSF_G exists. We will also define there the wired uniform spanning forest, which is obtained by taking a limit of the UST probability measures over exhaustions with wired boundary, and present some fairly basic notions of electric networks on infinite graphs that were not discussed in Section 2.5. Next, in Section 7.4 we restrict to the setting of planar graphs and employ planar duality to obtain an extremely useful connection between the free and wired spanning forests. Using the tools we have collected, we prove Theorem 7.1 in Section 7.5.

Basic properties of the UST
Kirchhoff's effective resistance formula

Theorem 7.2 (Kirchhoff [48]). Let G be a finite connected graph and denote by T a uniformly drawn spanning tree of G. Then for any edge e = (x, y) we have P(e ∈ T) = R_eff(x ↔ y).
Proof. Let a ≠ z be two distinct vertices of G (later we will take a = x and z = y) and note that any spanning tree of G contains precisely one path connecting a and z. Thus, a uniformly drawn spanning tree induces a random path from a to z. By Claim 2.46 we obtain a unit flow θ from a to z. To be concrete, for each edge e we have that θ(e) is the probability that the random path from a to z traverses e in its direction minus the probability that it traverses e in the opposite direction. We will now show that θ satisfies the cycle law (see Claim 2.14), so it is in fact the unit current flow (see Definition 2.19).
Let e_1, …, e_m be the directed edges of a cycle, and for each i denote by T_i^+ (respectively T_i^−) the set of pairs (t, i) where t is a spanning tree whose unique path from a to z traverses e_i in its direction (respectively, in the opposite direction); verifying the cycle law for θ amounts to showing that

|∪_i T_i^+| = |∪_j T_j^−|. (7.2)

Let (t, i) ∈ T_i^+. The graph t \ {e_i} has two connected components. Let e_j be the first edge after e_i, in the order of the cycle e_1, …, e_m, that is incident to both connected components, and consider the spanning tree t′ = t ∪ {e_j} \ {e_i}. Note that the unique path in t′ from a to z traverses e_j, so (t′, j) ∈ T_j^−. This procedure defines a bijection from ∪_i T_i^+ to ∪_j T_j^−. Indeed, given (t′, j) as above, we can erase e_j and go along the cycle in the opposite order until we reach e_i, which has to be the first edge incident to the two connected components of t′ \ {e_j}. This shows (7.2) and concludes the proof.

Spatial Markov property of the UST
We would like to study the UST probability measure conditioned on the event that some edges are present in the UST and others not. It turns out that sampling from this conditional distribution amounts to drawing a UST on a modified graph.
Let G = (V, E) be a finite connected graph and let A and B be two disjoint subsets of edges. We write (G − B)/A for the graph obtained from G by erasing the edges of B and contracting the edges of A. We identify the edges of (G − B)/A with the edges E \ B. Denote by T_G and T_{(G−B)/A} a UST of G and of (G − B)/A, respectively, and assume that there exists a spanning tree of G containing every edge of A and no edge of B. This assumption is equivalent to G − B being connected and A containing no cycles.
Then, conditioned on the event that T_G contains the edges of A and does not contain any edge of B, the distribution of T_G is equal to that of the union of A with T_{(G−B)/A}. In other words, for any set 𝒜 of spanning trees of G we have that

P(T_G ∈ 𝒜 | A ⊂ T_G, B ∩ T_G = ∅) = P(A ∪ T_{(G−B)/A} ∈ 𝒜). (7.3)

The proof of (7.3) follows immediately from the observation that the set of spanning trees of G not containing any edge of B is simply the set of spanning trees of G − B. Similarly, the set of spanning trees of G containing all the edges of A is obtained by adjoining A to each spanning tree of G/A, and (7.3) follows.
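The spatial Markov property can likewise be verified by enumeration on a small example. In the sketch below (an illustration of ours, with a hypothetical choice of conditioning sets A and B), the conditional spanning trees of G coincide exactly with A joined to the spanning trees of (G − B)/A.

```python
from itertools import combinations

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle with chord (0, 2)
A = [(0, 1)]   # condition: these edges ARE in the tree
B = [(2, 3)]   # condition: these edges are NOT in the tree

def spanning_trees(vertices, edges, endpoints=lambda e: e):
    # All spanning trees, as frozensets of edge labels.  `endpoints` maps an
    # edge label to (possibly relabelled) endpoints, which lets us enumerate
    # trees of a contracted graph while keeping original edge identities.
    trees = []
    for cand in combinations(edges, len(vertices) - 1):
        parent = {v: v for v in vertices}
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        acyclic = True
        for e in cand:
            u, v = endpoints(e)
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            trees.append(frozenset(cand))
    return trees

# Left side of (7.3): condition the UST of G on A and B, then remove A.
conditioned = {t - frozenset(A) for t in spanning_trees(V, E)
               if frozenset(A) <= t and not (frozenset(B) & t)}

# Right side: spanning trees of (G - B)/A, edges identified with E \ B.
rep = {v: v for v in V}
for u, v in A:                       # contract each edge of A
    old = rep[v]
    rep = {x: rep[u] if r == old else r for x, r in rep.items()}
ends = lambda e: (rep[e[0]], rep[e[1]])
cV = sorted(set(rep.values()))
cE = [e for e in E if e not in A + B and ends(e)[0] != ends(e)[1]]
contracted = set(spanning_trees(cV, cE, ends))

assert conditioned == contracted
print("spatial Markov property verified:", len(contracted), "conditional trees")
```

Both sides consist of the same two edge sets, so the conditional law of T_G \ A is exactly the UST of (G − B)/A here.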

Limits over exhaustions: the free and wired USF
Let G be an infinite connected graph and let {G_n} be a finite exhaustion of it. In this section we will show that (7.1) holds and that the UST measures with wired boundary conditions also converge. Let us first explain the latter. Denote by G*_n the graph obtained from G by identifying the infinite set of vertices G \ G_n to a single vertex z_n and erasing the loops at z_n formed by this identification. We say that {G*_n} is a wired finite exhaustion of G.

Theorem 7.3. Let G be an infinite connected graph, let {G_n} be a finite exhaustion and {G*_n} a wired finite exhaustion of G, and denote by T_n and T*_n USTs of G_n and G*_n respectively. Then for any two finite subsets of edges A, B of G the limits lim_{n→∞} P(A ⊂ T_n, B ∩ T_n = ∅) and lim_{n→∞} P(A ⊂ T*_n, B ∩ T*_n = ∅) exist and do not depend on the choice of exhaustion.

We postpone the proof for a little longer and first discuss some of its implications. As mentioned earlier, Theorem 7.3 together with Kolmogorov's extension theorem [23, Theorem A.3.1] implies that there exist two probability measures µ^F and µ^W on subsets of the edges E arising as the unique limits of the laws of T_n and T*_n. That is, the samples F_f and F_w of µ^F and µ^W satisfy P(A ⊂ F_f, B ∩ F_f = ∅) = lim_{n→∞} P(A ⊂ T_n, B ∩ T_n = ∅) for all finite edge sets A, B, and similarly for F_w with T*_n. We call µ^F and µ^W the free uniform spanning forest and the wired uniform spanning forest and denote them by FUSF_G and WUSF_G respectively. We have seen earlier (one paragraph below (7.1)) that both F_f and F_w are almost surely spanning forests, that is, spanning subgraphs of G with no cycles, and that every connected component of them is infinite. Thus µ^F and µ^W are supported on what are known as essential spanning forests of G, that is, spanning forests of G in which every component is infinite.
Are the probability measures FUSF G and WUSF G equal? Not necessarily. It is easy to see that on the infinite path Z the WUSF Z and the FUSF Z are equal and are the entire graph Z with probability 1. Conversely, it is not very difficult to see that they are different on a 3-regular tree, see the exercise below Theorem 7.5. Pemantle [67] has shown that FUSF Z d = WUSF Z d for any d ≥ 1 and a very useful criterion for determining whether there is equality was developed in [9]. We refer the reader to [60,Chapter 10] for further reading.
Before presenting the proof of Theorem 7.3 let us make a few short observations regarding the effective resistance between two vertices in an infinite graph, extending what we proved in Section 2.5.

Effective resistance in infinite networks
Let G be an infinite connected graph. We have seen in Section 2.5 that for any vertex v the electric resistance R_eff(v ↔ ∞) from v to ∞ is well defined as the limit of R_eff(v ↔ z_n; G*_n), where {G*_n} is a wired finite exhaustion and z_n is the vertex resulting from the identification of the vertices G \ G_n.
To define the electric resistance between two vertices v, u of an infinite graph, one has to take exhaustions and specify boundary conditions since the limits may differ depending on them.
Claim 7.4. Let G be an infinite connected graph, {G_n} a finite exhaustion and {G*_n} a wired finite exhaustion. Then for any two vertices u, v of G the limits R^F_eff(u ↔ v; G) := lim_{n→∞} R_eff(u ↔ v; G_n) and R^W_eff(u ↔ v; G) := lim_{n→∞} R_eff(u ↔ v; G*_n) exist and do not depend on the exhaustion {G_n}.
Proof. For the first limit we note that by Rayleigh's monotonicity (Corollary 2.29) the sequence R_eff(u ↔ v; G_n) is non-increasing and non-negative, since G_n ⊂ G_{n+1}; hence it converges. A sandwiching argument, as in the exhaustion-independence proofs of Section 2.5, shows that the limit does not depend on the exhaustion {G_n}.
For the second limit, since G n can be obtained by gluing vertices of G n+1 we deduce by Corollary 2.30 that the sequence R eff (u ↔ v; G * n ) is non-decreasing and bounded (by the graph distance in G between u and v for instance), hence it converges. The limit does not depend on the exhaustion by an identical sandwiching argument.
We call R^F_eff(u ↔ v; G) and R^W_eff(u ↔ v; G) the free effective resistance and wired effective resistance between u and v, respectively.
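On the 3-regular tree T_3 these two limits are genuinely different, and one can watch this happen numerically. The following sketch (our own illustration; float linear algebra, so the values are approximate) computes R_eff(u ↔ v; G_n) and R_eff(u ↔ v; G*_n) for balls of growing radius around an edge (u, v): the free values stay equal to 1 (each ball is a tree), while the wired values increase towards 2/3, the value of the edge in parallel with two unit-resistance routes through infinity.

```python
def t3_ball(n):
    # Ball of radius n in the 3-regular tree T_3, rooted at vertex 0.
    # Returns (num_vertices, edge list, list of depth-n boundary vertices).
    edges, level, nxt = [], [0], 1
    for depth in range(1, n + 1):
        new = []
        for v in level:
            for _ in range(3 if v == 0 else 2):   # the root has 3 children
                edges.append((v, nxt))
                new.append(nxt)
                nxt += 1
        level = new
    return nxt, edges, level

def effective_resistance(n, edges, x, y):
    # Voltages 1 at x and 0 at y; float Gauss-Jordan elimination.
    # Parallel edges simply accumulate conductance.
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    A[x][x], b[x], A[y][y] = 1.0, 1.0, 1.0
    for u, w in edges:
        for s, o in ((u, w), (w, u)):
            if s not in (x, y):
                A[s][s] += 1.0
                A[s][o] -= 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(n):
            if r != i and A[r][i] != 0.0:
                f = A[r][i] / A[i][i]
                b[r] -= f * b[i]
                A[r] = [arc - f * aic for arc, aic in zip(A[r], A[i])]
    h = [b[i] / A[i][i] for i in range(n)]
    out = sum(h[x] - h[w if u == x else u]
              for u, w in edges if x in (u, w))
    return 1.0 / out

free_vals, wired_vals = [], []
for n in range(2, 6):
    N, edges, boundary = t3_ball(n)
    free_vals.append(effective_resistance(N, edges, 0, 1))
    # Wired exhaustion: glue everything outside the ball to one vertex z = N;
    # each boundary vertex has two edges leading outside, hence two parallel
    # unit edges to z.
    wired = edges + [(v, N) for v in boundary for _ in range(2)]
    wired_vals.append(effective_resistance(N + 1, wired, 0, 1))
print("free: ", [round(v, 4) for v in free_vals])
print("wired:", [round(v, 4) for v in wired_vals])
```

The wired sequence is non-decreasing, as Corollary 2.30 predicts, and stays strictly below the free value 1; this gap is exactly what makes FUSF and WUSF differ on T_3 (see the exercise below Theorem 7.5).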

Proof of Theorem 7.3
We will prove the assertion regarding the first limit; the second is almost identical. Write A = {e_1, …, e_k} and e_i = (x_i, y_i) for each 1 ≤ i ≤ k. Assume without loss of generality that G_n contains A for all n. As before, denote by T_n a UST of G_n. By (7.3) and Theorem 7.2 we have that

P(A ⊂ T_n) = ∏_{i=1}^{k} P(e_i ∈ T_n | e_1, …, e_{i−1} ∈ T_n) = ∏_{i=1}^{k} R_eff(x_i ↔ y_i; G_n/{e_1, …, e_{i−1}}).

Note that G_n/{e_1, …, e_{i−1}} is a finite exhaustion of the infinite graph G/{e_1, …, e_{i−1}}, and so by Claim 7.4 each factor converges; hence the limit lim_{n→∞} P(A ⊂ T_n) exists and does not depend on the exhaustion.
Since we know this limit exists for all finite edge sets A, it follows by the inclusion-exclusion formula that P(A ⊂ T n , B ∩T n = ∅) converges for any finite sets A, B, concluding our proof.
It is now quite pleasant to see that the symbiotic relationship between electric network and UST theories continues to flourish in the infinite setting. Indeed, by combining Theorem 7.3, Theorem 7.2 and Claim 7.4 we obtain the extension of Kirchhoff's formula for infinite connected graphs.
Theorem 7.5. Let G be an infinite connected graph and denote by F_F and F_W samples from FUSF_G and WUSF_G respectively. Then for any edge e = (x, y) of G we have that P(e ∈ F_F) = R^F_eff(x ↔ y; G) and P(e ∈ F_W) = R^W_eff(x ↔ y; G).
Exercise: Use this to show that on the 3-regular tree T_3 the probability measures FUSF_{T_3} and WUSF_{T_3} are distinct.

Planar duality
When G is planar there is a very useful relationship between FUSF G and WUSF G . Recall that given a planar map G, the dual graph of G is the graph G † whose vertex set is the set of faces of G and two faces are adjacent in G † if they share an edge in G. Thus, G † is locally-finite if and only if every face of G has finitely many edges. To each edge e ∈ E(G) corresponds a dual edge e † ∈ E(G † ) which is the pair of faces of G incident to e; this is clearly a one-to-one correspondence.
When G is a finite planar graph, this correspondence induces a one-to-one correspondence between the set of spanning trees of G and the set of spanning trees of G†. Given a spanning tree t of G, we slightly abuse notation and write t† for the set of edges {e† : e ∈ G \ t}; that is, e ∈ t ⟺ e† ∉ t†.
If t† has a cycle, then t is disconnected. Furthermore, if there is a vertex of G† not incident to any edge of t†, then all the edges of the corresponding face of G are present in t, hence t contains a cycle. We deduce that if t is a spanning tree of G, then t† is a spanning tree of G†. The converse also holds since (t†)† = t and (G†)† = G.
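The correspondence t ↦ t† can be checked mechanically on a small planar map. In this sketch (ours; the face labels a, b, o and the dual incidences are hard-coded from a drawing of the square with a diagonal) every complement of a spanning tree of G is indeed a spanning tree of G†, and the map is a bijection.

```python
from itertools import combinations

# Planar map: the square 0-1-2-3 with diagonal (0, 2).  Its faces are the
# two triangles a = 012, b = 023 and the outer face o; each primal edge is
# matched with the dual edge joining the two faces on its sides.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
dual_V = ['a', 'b', 'o']
dual_ends = {(0, 1): ('a', 'o'), (1, 2): ('a', 'o'),
             (2, 3): ('b', 'o'), (3, 0): ('b', 'o'),
             (0, 2): ('a', 'b')}
primal_ends = {e: e for e in E}

def is_tree(vertices, edge_labels, endpoints):
    # Spanning tree <=> |V| - 1 edges and no cycle (union-find check).
    if len(edge_labels) != len(vertices) - 1:
        return False
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for e in edge_labels:
        ru, rv = find(endpoints[e][0]), find(endpoints[e][1])
        if ru == rv:
            return False
        parent[ru] = rv
    return True

trees = [set(t) for t in combinations(E, 3) if is_tree(V, t, primal_ends)]
duals = []
for t in trees:
    t_dual = [e for e in E if e not in t]      # t^dagger := {e^dagger : e not in t}
    assert is_tree(dual_V, t_dual, dual_ends)  # complement is a dual spanning tree
    duals.append(frozenset(t_dual))
assert len(set(duals)) == len(trees)           # the map t -> t^dagger is injective
print(len(trees), "spanning trees of G, and", len(set(duals)), "dual spanning trees")
```

Note that the dual here is a multigraph (two parallel edges between each triangle and the outer face), which is why the edges are tracked by their primal labels rather than by their endpoints.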
Now assume that G is an infinite planar map such that G† is locally finite. Given an essential spanning forest F of G we similarly define F† as the set of edges {e† : e ∈ G \ F}. A similar argument shows that F† is an essential spanning forest of G†. This raises a natural question: when F is a sample of FUSF_G, what is the law of F†? The answer in general is an object known as the transboundary uniform spanning forest [44, Proposition 5.1]. However, when G is additionally assumed to be one-ended (in particular, in the setting of Theorem 7.1), it turns out that F† is distributed as WUSF_{G†}: Proposition 7.6. Let G be an infinite, one-ended planar map with a locally finite dual G† and let F be a sample of FUSF_G. Then the law of F† is WUSF_{G†}.
Proof. Let {G_n} be a finite exhaustion of G. Let F_n be the set of faces f of G such that every vertex of G incident to f belongs to G_n, and let G†_n be the graph obtained from G† by identifying G† \ F_n into a single vertex, which corresponds to the outer face of G_n (one-endedness of G guarantees that this identification yields the wired boundary condition). Thus {G†_n} is a wired finite exhaustion of G† and the statement follows.
We use Proposition 7.6 to obtain an important criterion for connectivity of FUSF_G in the planar case.

Proposition 7.7. Let G be an infinite, one-ended planar map with a locally finite dual G† and let F be a sample of FUSF_G. Then F is almost surely connected if and only if every component of WUSF_{G†} is almost surely one-ended.

Proof. By Proposition 7.6 it suffices to show that if F is an essential spanning forest of G, then F is connected if and only if every component of F† is one-ended. Indeed, if F is disconnected, then the boundary of a connected component of F induces a bi-infinite path in F†. Conversely, if F† contains a bi-infinite path, then by the Jordan curve theorem F is disconnected.

Connectivity of the free forest

Last note on infinite networks
We make two more useful and natural definitions. Given two disjoint finite sets A and B in an infinite connected graph G we define the free and wired effective resistance between them R W eff (A ↔ B; G) and R F eff (A ↔ B; G) as the free and wired effective resistance between a and b in the graph obtained from G by identifying A and B to the vertices a and b.
Lastly, given a graph G, a wired finite exhaustion {G*_n} of G and two disjoint finite sets A and B, we define R^W_eff(A ↔ B ∪ {∞}; G) as the limit of R_eff(A ↔ B ∪ {z_n}; G*_n), where the limit exists since the sequence is non-increasing once n is large enough that G_n contains A and B. In the proof of Theorem 7.1 we will require the following estimate.
Lemma 7.8. Let A and B be two finite sets of vertices in an infinite connected graph G. Then Proof. For any three distinct vertices u, v, w in a finite network we have by the union bound that P u (τ {v,w} < τ + u ) ≤ P u (τ v < τ + u ) + P u (τ w < τ + u ). Hence by Claim 2.22 we get that Let {G * n } be a wired finite exhaustion of G and assume without loss of generality that A and B are contained in G * n for all n. Then by the previous estimate Denote by M the maximum in the statement of the lemma and take n → ∞ in the last inequality. We obtain that Rearranging gives that By symmetry, the same inequality holds when we replace the roles of A and B. We put this together with the triangle inequality for effective resistances (2.9) and get that which by rearranging gives the desired inequality.

Method of random sets
We present the following weakening of the method of random paths of Section 2.6. Let µ be the law of a random subset W of vertices of G. Define the energy of µ as E(µ) = Σ_v P(v ∈ W)².

Lemma 7.9 (Method of random sets). Let A, B be two disjoint finite sets of vertices in an infinite graph G. Let W be a random subset of vertices of G and denote by µ its law. Assume that the subgraph of G induced by W almost surely contains a simple path starting at A that is either infinite, or finite and ends at B. Then R^W_eff(A ↔ B ∪ {∞}; G) ≤ E(µ).

Proof. Given W let γ be a simple path, contained in W, connecting A to B or an infinite path starting at A. We choose γ according to some prescribed lexicographical ordering. Letting ν be the law of γ, we will bound the sum Σ_e ν(e ∈ γ)², where by e ∈ γ we mean that the directed edge e is traversed (in its direction) by γ; this sum bounds E(ν), the energy of the flow induced by γ as in Claim 2.46.
Let γ′ be an independent random path having the same law as γ. Then the sum above is precisely the expected number of directed edges traversed by both γ and γ′. Since these are simple paths, they each contain at most one directed edge emanating from each vertex v ∈ W. Thus, the expected number of directed edges used by both paths is at most the expected number of vertices used by both paths, which is at most E(µ). Hence the flow induced by γ has energy at most E(µ), and the proof is concluded by Thomson's principle (Theorem 2.28).
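The chain of inequalities in this proof — effective resistance at most the energy of the path flow, which is at most the expected number of common vertices of two independent paths — can be made concrete for uniformly random monotone paths across a 3 × 3 grid (our own toy example, not from the text):

```python
from fractions import Fraction
from itertools import combinations

# gamma: uniformly random monotone (right/up) path across the 3x3 grid,
# from a = (0,0) to z = (2,2); there are 6 such paths.
paths = []
for up_steps in combinations(range(4), 2):   # which 2 of the 4 steps go up
    p, (x, y) = [(0, 0)], (0, 0)
    for step in range(4):
        x, y = (x, y + 1) if step in up_steps else (x + 1, y)
        p.append((x, y))
    paths.append(p)

# theta(e) = P(gamma traverses e) is a unit flow from a to z (Claim 2.46);
# every path crosses each edge in the same direction, so no cancellation.
edge_p, vertex_p = {}, {}
for p in paths:
    for e in zip(p, p[1:]):
        edge_p[e] = edge_p.get(e, Fraction(0)) + Fraction(1, len(paths))
    for v in p:
        vertex_p[v] = vertex_p.get(v, Fraction(0)) + Fraction(1, len(paths))

# energy  = expected number of edges shared by two independent copies;
# overlap = expected number of vertices shared by two independent copies.
energy = sum(q * q for q in edge_p.values())
overlap = sum(q * q for q in vertex_p.values())
assert energy <= overlap     # the simple-path argument of the proof
print("flow energy =", energy, " vertex overlap =", overlap)
```

Here the energy is 14/9 and the vertex overlap is 7/2, and one can check by hand that the true corner-to-corner resistance of this grid is 3/2 ≤ 14/9, as Thomson's principle requires.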

Proof of Theorem 7.1
In Theorem 7.1 we assume that G = (V, E) is a bounded-degree, transient, one-ended planar triangulation. Hence G† is a bounded-degree (in fact, 3-regular), one-ended and transient planar map with faces of uniformly bounded size. We leave this verification as an exercise for the reader. To avoid carrying the † symbol around, and with a slight abuse of notation, we henceforth let G = (V, E) be a graph satisfying these assumptions on G†; that is, we assume that G is a one-ended, transient, infinite planar map with bounded degrees and face sizes. We will prove under these assumptions that every component of WUSF_G is one-ended almost surely, which implies Theorem 7.1 by Proposition 7.7.
Let T be the bounded-degree, one-ended triangulation obtained from G by adding a vertex inside each face of G and connecting it by edges to the vertices of that face according to their cyclic ordering. By Theorem 4.4 there exists a circle packing of T in the unit disc U. We identify the vertices of T with the vertices V(G) and faces F(G) of G, and denote this circle packing by P = {P(v)}. Given z ∈ U and r′ ≥ r > 0 denote by A_z(r, r′) the annulus {w ∈ C : r ≤ |w − z| ≤ r′}.

Definition 7.10. Write V_z(r, r′) for the set of vertices v of G such that either P(v) intersects A_z(r, r′), or P(f) intersects A_z(r, r′) for some face f of G incident to v.

We emphasize that V_z(r, r′) contains only vertices of G (that is, no vertices of T that correspond to faces of G belong to it).
Lemma 7.11. There exists a constant C < ∞ depending only on the maximal degree such that for any z ∈ U and any positive integer n satisfying |z| ≥ 1 − C^{−n}, the sets V_z(C^{−i}, 2C^{−i}), for i = 1, …, n, are disjoint.
Proof. By the Ring Lemma (Lemma 4.2) there exists a constant B < ∞ such that for any C > 1, any z satisfying |z| ≥ 1 − C^{−n} and any 1 ≤ i ≤ n, if a circle of P intersects A_z(C^{−i}, 2C^{−i}) or is tangent to a circle that intersects A_z(C^{−i}, 2C^{−i}), then its radius is at most BC^{−i}. Hence, this set of circles is contained in the disc of radius (2 + 4B)C^{−i} around z. Furthermore, since |z| ≥ 1 − C^{−n}, by the Ring Lemma again there exists b > 0 such that any such circle must be at distance at least bC^{−i} from z. Hence, any fixed C > (4 + 4B)/b satisfies the assertion of the lemma.
Lemma 7.12. Let z ∈ U and r > 0. Let U be a uniform random variable in [1, 2] and denote by µ_r the law of the random set V_z(Ur, Ur) (as defined in Definition 7.10). Then there exists a constant C < ∞ depending only on the maximal degree such that E(µ_r) ≤ C.

Proof. For each vertex v, the event v ∈ V_z(Ur, Ur) implies that the circle {w ∈ C : |w − z| = Ur} intersects the circle P(v) or intersects P(f) for some face f incident to v. The union of P(v) and P(f) over all such faces f is contained in the Euclidean ball around the center of P(v) of radius r(v) + 2 max_{f : v ∈ f} r(f). Since T has finite maximal degree we have that r(f) ≤ Cr(v) for all f with v ∈ f, where C < ∞ depends only on the maximal degree by the Ring Lemma (Lemma 4.2). Hence,

P(v ∈ V_z(Ur, Ur)) ≤ C′ min{r(v), r}/r, (7.6)

for a constant C′ < ∞ depending only on the maximal degree. We claim that

Σ_{v ∈ V_z(r, 2r)} min{r(v), r}² ≤ 16r². (7.7)

Indeed, consider a vertex v ∈ V_z(r, 2r) for which the corresponding circle P(v) has radius larger than r. By Definition 7.10 this circle must intersect {w ∈ C : |w − z| ≤ 2r}. We replace each such P(v) with a circle of radius r that is contained in the original circle and intersects {w ∈ C : |w − z| ≤ 2r}. The circles in this new set still have disjoint interiors and are contained in {w ∈ C : |w − z| ≤ 4r}. Therefore their total area is at most 16πr², and (7.7) follows. The proof of the lemma is now concluded by combining (7.6) and (7.7).
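The area argument behind (7.7) is robust enough to test on random configurations. The sketch below (ours; a greedy rejection sampler, so the packings are arbitrary disjoint circles rather than circle packings of a triangulation) generates circles with disjoint interiors that all meet the disc of radius 2r and confirms the bound Σ min{r_v, r}² ≤ 16r².

```python
import math
import random

# Monte Carlo check of the packing bound (7.7): if circles with pairwise
# disjoint interiors all meet the disc {|w - z| <= 2r}, then
#   sum over circles of min(radius, r)^2  <=  16 r^2.
random.seed(2018)
z, r = complex(0, 0), 1.0
totals = []
for trial in range(10):
    packing = []            # greedy rejection sampling of a random packing
    for _ in range(2000):
        rad = random.uniform(0.05, 3.0)
        # center at distance <= 2r + rad from z, so the circle meets the disc
        d = random.uniform(0, 2 * r + rad)
        ang = random.uniform(0, 2 * math.pi)
        c = z + d * complex(math.cos(ang), math.sin(ang))
        if all(abs(c - c2) >= rad + rad2 for c2, rad2 in packing):
            packing.append((c, rad))
    totals.append(sum(min(rad, r) ** 2 for _, rad in packing))
assert all(t <= 16 * r * r for t in totals)
print("max of sum min(r_v, r)^2 over trials:", round(max(totals), 3), "<= 16")
```

The observed totals stay well below 16r²: the replacement-by-smaller-circles argument loses a factor of π in area, and random packings waste further space.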
Proof of Theorem 7.1. Let F be a sample of WUSF_G and, given an edge e = (x, y), define A_e to be the event that x and y are in two distinct infinite connected components of F \ {e}. It is clear that every component of F is one-ended almost surely if and only if P(e ∈ F, A_e) = 0 (7.8) for every edge e of G. Consider the triangulation T described above Definition 7.10 and its circle packing P in U. By applying a suitable Möbius transformation we may assume that the tangency point between P(x) and P(y) is the origin, and that the centers of P(x) and P(y) lie on the negative and positive real axis, respectively.

Figure 7.1. Left: on the event A_e^ε, the paths η_x and η_y split V_ε into two pieces, L and R. Right: we define a random set containing a path (solid blue) from η_x to η_y ∪ {∞} in G − K_c using a random circle (dashed blue); in one example the path ends at η_y, and in the other it ends at the boundary (i.e., at infinity).
Fix now an arbitrary ε > 0 and let V_ε be the set of vertices of G whose circle center z(v) satisfies |z(v)| ≤ 1 − ε. Denote by B_e^ε the event that every connected component of F \ {e} intersects V \ V_ε. Note that A_e ⊂ ∩_{ε>0} B_e^ε, but this containment may be strict, since it is possible that e ∈ F and that x is connected to y in F \ {e} inside V_ε.
Assume that B e ε holds. Let η x be the rightmost path in F \ {e} from x to V \ V ε when looking at x from y, and let η y be the leftmost path in F \ {e} from y to V \ V ε when looking at y from x. As mentioned above, the paths η x and η y are not necessarily disjoint. Nonetheless, concatenating the reversal of η x with e and η y separates V ε into two sets of vertices, L and R, which are to the left and right of e (when viewed from x to y) respectively. See Figure 7.1 for an illustration of the case when η x and η y are disjoint (when they are not, R is a "bubble" separated from V \ V ε ).
On the event B e ε , let K be the set of edges that are either incident to a vertex in L or belong to the path η x ∪ η y , and set K = E off of this event. Note that the edges of K do not touch the vertices of R. The condition that η x and η y are the rightmost and leftmost paths to V \ V ε from x and y is equivalent to the condition that K does not contain any open path from x to V \ V ε other than η x , and does not contain any open path from y to V \ V ε other than η y . We note that K can be explored algorithmically, without querying the status of any edge in E \ K, by performing a right-directed depth-first search of x's component in F and a left-directed depth-first search of y's component in F, stopping each search when it first leaves V ε .
Denote by A_e^ε the event that η_x and η_y are disjoint, or equivalently, that K does not contain an open path from x to y (and in particular, no open path starting at η_x and ending at η_y). The event A_e^ε is measurable with respect to the random set K and A_e = ∩_{ε>0} A_e^ε. Hence P(e ∈ F, A_e) ≤ P(e ∈ F, A_e^ε) for every ε > 0. Denote by K_o the open edges of K (that is, the edges of K in F) and by K_c the closed edges of K (that is, the edges of K not belonging to F). In particular, η_x and η_y are contained in K_o. Then by the spatial Markov property (7.3), conditioned on K and the event A_e^ε, the law of F is equal to the union of K_o with a sample of the WUSF on (G − K_c)/K_o. In particular, by Kirchhoff's formula (Theorem 7.5) we have that

P(e ∈ F | K, A_e^ε) = R^W_eff(x ↔ y; (G − K_c)/K_o) ≤ R^W_eff(η_x ↔ η_y; G − K_c),

where in the last inequality we used the fact that gluing cannot increase the resistance (Corollary 2.30).
We will show that the last quantity tends to 0 as ε → 0, which gives (7.8). To that aim, let v_x be the endpoint of the path η_x and let z_0 be the center of P(v_x). On the event A_e^ε, for each 1 − |z_0| ≤ r ≤ 1/4, we claim that the set V_{z_0}(r, r), as defined in Definition 7.10, contains a path in G from η_x to η_y that is contained in R ∪ η_x ∪ η_y, or an infinite simple path starting at η_x that is contained in R ∪ η_x. Either of these is therefore a path in G − K_c.
To see this, consider the arc A′(z_0, r) = {z ∈ U : |z − z_0| = r}, viewed in the clockwise direction, and let A(z_0, r) be the subarc beginning at the last intersection of A′(z_0, r) with a circle corresponding to a vertex in the trace of η_x, and ending at the first intersection after this time of A′(z_0, r) with either ∂U or a circle corresponding to a vertex in the trace of η_y (see Fig. 7.1). Hence, if A_e^ε holds, then the set of vertices of T whose circles in P intersect A(z_0, r) contains a path in T starting at η_x and either ending at η_y or not ending at all, for every 1 − |z_0| ≤ r ≤ 1/4. To obtain a path in G rather than in T we divert the path counterclockwise around each face of G. That is, whenever the path passes from a vertex u of G to a face f of G and then to a vertex v of G, we replace this section of the path with the list of vertices of G incident to f that are between u and v in the counterclockwise order. By Definition 7.10 this diverted path is in V_{z_0}(r, r), and so this construction shows that the subgraph of G − K_c induced by the set V_{z_0}(r, r) contains a path from η_x to η_y or an infinite path from η_x, as claimed.
Let r_i = C^{−i} for i = 1, …, N, where C < ∞ is the constant from Lemma 7.11 and N = ⌊log_C(1/ε)⌋. Assume without loss of generality that C ≥ 4, so that ε ≤ r_i ≤ 1/4 for all i = 1, …, N. By Lemma 7.11 the measures µ_{r_i} defined in Lemma 7.12 are supported on sets that are contained in the disjoint sets V_{z_0}(r_i, 2r_i). Thus, by Lemma 7.9 and Lemma 7.12 we have

R^W_eff(η_x ↔ η_y ∪ {∞}; G − K_c) ≤ B / log(1/ε),

where B < ∞ is a constant depending only on the maximum degree. By symmetry we also have R^W_eff(η_y ↔ η_x ∪ {∞}; G − K_c) ≤ B / log(1/ε). Combining these two bounds using Lemma 7.8 shows that R^W_eff(η_x ↔ η_y; G − K_c) tends to 0 as ε → 0, concluding the proof.

Related topics
In this chapter we briefly review some aspects of the literature on circle packing that unfortunately we do not have space to get into in depth in this course. We hope this will be useful as a guide to further reading.
1. Double circle packing. If one wishes to study planar graphs that are not triangulations, it is often convenient to work with double circle packings, which enjoy similar rigidity properties to usual circle packings, but for the larger class of polyhedral planar graphs.
Here, a planar graph is polyhedral if it is both simple and 3-connected, meaning that the removal of any two vertices cannot disconnect it. A double circle packing of a polyhedral planar map G consists of two collections of discs, {P_v : v a vertex of G} and {P_f : f a face of G}, the discs within each collection having pairwise disjoint interiors, satisfying the following conditions:

(a) (Primal circles correspond to vertices.) For each pair of vertices u and v of G, the discs P_u and P_v are tangent if and only if u and v are adjacent in G.

(b) (Dual circles correspond to faces.) For each pair of faces f and g of G, the discs P_f and P_g are tangent if and only if f and g are adjacent in the dual G†.

(c) (Primal and dual circles are perpendicular.) For each vertex v and face f of G, the discs P_f and P_v have non-empty intersection if and only if f is incident to v, and in this case the boundary circles of P_f and P_v intersect at right angles.

Double circle packings also satisfy a version of the ring lemma [43, Theorem 4.1], which means that they can be used to produce good straight-line embeddings of polyhedral planar graphs that have bounded face degrees but which are not necessarily triangulations.
See Fig. 8.1 for an illustration.
Thurston's proof of the circle packing theorem also implies that every finite polyhedral planar graph admits a double circle packing. This was also shown by Brightwell and Scheinerman [13]. As with circle packings of triangulations, the double circle packing of any finite polyhedral planar map is unique up to Möbius transformations or reflections. The theory of double circle packings in the infinite setting follows from the work of He [36], and is exactly analogous to the corresponding theory for triangulations. Indeed, essentially everything we have to say in these notes about circle packings of simple triangulations can be generalized to double circle packings of polyhedral planar maps (sometimes under the additional assumption that the faces are of bounded degree).

2.
Packing with other shapes. A very powerful generalization of the circle packing theorem, known as the monster packing theorem, was proven by Oded Schramm in his PhD thesis [75]. Roughly speaking, it implies that if we prescribe a smooth convex shape for each vertex of a finite triangulation, then the triangulation can be represented by a packing of homothetic copies of the prescribed shapes. In other words, we can represent the triangulation by a packing with arbitrary smooth convex shapes that are specified up to homothety (it is quite surprising at first that rotations are not needed). The full monster packing theorem also allows one to relax the smoothness and convexity assumptions above in various ways. The proof of the monster packing theorem is based upon Brouwer's fixed point theorem, and does not give an algorithm for computing the packing.
3. Square tiling. Another popular method of embedding planar graphs is the square tiling, in which vertices are represented by horizontal line segments and edges by squares; such square tilings can take place either in a rectangle, the plane, or a cylinder. Square tiling was introduced by Brooks, Smith, Stone, and Tutte [14], and generalized to infinite planar graphs by Benjamini and Schramm [11]. Like circle packing, square tiling can be thought of as a discrete version of conformal mapping, and in particular can be used to approximate the uniformizing map from a simply connected domain with four marked boundary points to a rectangle. For studying the random walk, a very nice feature of the square tiling that is not enjoyed by the circle packing is that the height of a vertex in the cylinder is a harmonic function, so that the height of a random walk is a martingale. Furthermore, Georgakopoulos [27] observed that if one stops the random walk at the first time it hits some height, then its horizontal coordinate at this time is uniform on the circle (this takes some interpretation to make precise). Further works on square tiling include [1,27,45]. Unlike circle packing, however, square tilings do not enjoy an analogue of the ring lemma, and can be geometrically very degenerate. Indeed, it is possible for edges to be represented by squares of zero area, and is also possible for two distinct planar graphs to have the same square tiling. Furthermore, square tilings are typically defined with reference to a specified root vertex, and it is difficult to compare the two different square tilings of the same graph that are computed with respect to different root vertices. These differences tend to mean that square tilings are best suited to quite different problems than circle packing.
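The electric description of square tilings can be carried out by hand on a tiny network (our own example, not from the text: the square with a diagonal, with poles at the two endpoints of the diagonal). Vertex heights are harmonic potentials, and each edge receives a square whose side is the current through it; here the five squares genuinely tile a 1 × 2 rectangle.

```python
from fractions import Fraction

# Network: square 0-1-2-3 with the diagonal (0, 2); pole a = 0 at height 1,
# pole z = 2 at height 0, unit resistances.  In the Brooks-Smith-Stone-Tutte
# construction, each edge gets a square whose side is the current through it.
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
h = {0: Fraction(1), 2: Fraction(0)}
# Interior vertices 1 and 3 each neighbour exactly 0 and 2, so harmonicity
# forces their heights to be the average of the pole heights:
h[1] = (h[0] + h[2]) / 2
h[3] = (h[0] + h[2]) / 2

side = {e: abs(h[e[0]] - h[e[1]]) for e in E}       # square side = |current|
width = sum(h[0] - h[w if u == 0 else u]            # total current out of a
            for u, w in E if 0 in (u, w))

assert width == 2                                   # rectangle is 1 x 2
assert sum(s * s for s in side.values()) == width   # square areas fill it
print("square sides:", sorted(side.values()), "; rectangle: 1 x", width)
```

The tiling consists of one side-1 square (the diagonal edge, carrying current 1) next to a 2 × 2 block of side-1/2 squares; the check that the squares' total area equals the rectangle's area is the discrete identity "energy = voltage × total current".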
We also remark that a different sort of square tiling in which vertices are represented by squares was introduced independently by Cannon, Floyd, and Parry [15] and Schramm [73].

4.
Multiply-connected triangulations. Several works have studied generalizations of the circle packing theorem to triangulations that are either not simply connected or not planar. Most notably, He and Schramm [39] proved that every triangulation of a domain with countably many boundary components can be circle packed in a circle domain, that is, a domain all of whose boundary components are either circles or points.

5. Isoperimetry of planar graphs. In [65], Miller, Teng, Thurston, and Vavasis used circle packing to give a new proof of the Lipton-Tarjan planar separator theorem [59], which concerns sparse cuts in planar graphs. Precisely, the theorem states that for any n-vertex planar graph, one can find a set of vertices of size at most O(√n) such that if this vertex set is deleted from the graph then every connected component that remains has size at most 3n/4. The proof in [65] shows that if one circle packs a planar graph in the unit sphere of R³, normalizes by applying an appropriate Möbius transformation, and takes a uniformly random plane passing through the origin in R³, then the set of vertices whose corresponding discs intersect the plane has the desired properties with high probability.
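On the grid, the separator guarantee can be seen directly: for the k × k grid (n = k² vertices) the middle column is a separator of size √n leaving components of size at most n/2 ≤ 3n/4. A short sketch (ours) verifying this for k = 9:

```python
from collections import deque

# The 9x9 grid has n = 81 vertices; deleting the middle column (9 = sqrt(n)
# vertices) leaves two components of 36 vertices each, 36 <= 3n/4 = 60.
k = 9
vertices = {(i, j) for i in range(k) for j in range(k)}
separator = {(k // 2, j) for j in range(k)}
remaining = vertices - separator

def component_sizes(verts):
    # BFS over the induced subgraph of the grid on `verts`.
    sizes, seen = [], set()
    for s in verts:
        if s in seen:
            continue
        queue, count = deque([s]), 0
        seen.add(s)
        while queue:
            x, y = queue.popleft()
            count += 1
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in verts and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        sizes.append(count)
    return sizes

sizes = component_sizes(remaining)
assert len(separator) == 9
assert all(s <= 3 * len(vertices) // 4 for s in sizes)
print("separator size", len(separator), "; component sizes", sorted(sizes))
```

Of course, the content of the theorem is that such a small balanced separator exists for every planar graph, not just for grids; the circle packing proof supplies it via a random great circle on the sphere.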
A related result of Jonasson and Schramm [46] concerns the cover time of planar graphs, i.e., the expected number of steps for a random walk on the graph to visit every vertex of the graph. They used circle packing to prove that the cover time of an n-vertex planar graph with maximum degree M is always at least c_M n log^2 n for some positive constant c_M depending only on M. This bound is attained (up to the constant) for large boxes [−n, n]^2 in Z^2. In general, it is possible for n-vertex graphs to have cover time as small as (1 + o(1))n log n.
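To get a feel for the n log^2 n growth, one can estimate cover times by direct simulation. The sketch below (ours, not from [46]) samples the cover time of a simple random walk on the n-by-n torus, a convenient stand-in for the box:

```python
import random

def cover_time(n, seed=0):
    """One sample of the number of steps a simple random walk on the
    n-by-n torus takes to visit all n^2 vertices."""
    rng = random.Random(seed)
    x = y = 0
    seen = {(0, 0)}
    steps = 0
    while len(seen) < n * n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % n, (y + dy) % n
        seen.add((x, y))
        steps += 1
    return steps

# Averaging many samples for growing n should exhibit growth of order
# N log^2 N in the number of vertices N = n^2.
print([cover_time(n) for n in (4, 8, 16)])
```

Averaging over many seeds and fitting against N log^2 N is a pleasant exercise; a single sample per size, as above, is noisy.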
6. Boundary theory. Benjamini and Schramm [10] proved that if P is a circle packing of a bounded degree triangulation in the unit disc U, then the simple random walk on the circle packed triangulation converges to a point in the boundary of U, and that the law of the limit point is non-atomic and has full support. (That is, the walk has probability zero of converging to any specific boundary point, and has positive probability of converging to any positive-length interval.) They used this result to deduce that a bounded degree planar graph admits non-constant bounded harmonic functions if and only if it is transient (equivalently, the invariant σ-algebra of the random walk on the triangulation is nontrivial if and only if the walk is transient), and in this case it also admits non-constant bounded harmonic functions of finite Dirichlet energy. They also gave an alternative proof of the same result using square tiling instead of circle packing in [11].
Indeed, given the result of Benjamini and Schramm, one may construct a non-constant bounded harmonic function h on T by taking any bounded, measurable function f : ∂U → R and defining h to be the harmonic extension of f, that is,

h(v) = E_v[ f( lim_{n→∞} z(X_n) ) ],

where E_v denotes expectation taken with respect to the random walk X = (X_n)_{n≥0} started at v, and z(u) denotes the center of the circle in P corresponding to u. Angel, Barlow, Gurel-Gurevich, and the current author [5] proved that, in fact, every bounded harmonic function on a bounded degree triangulation can be represented in this way. In other words, the boundary ∂U can be identified with the Poisson boundary of the triangulation. Probabilistically, this means that the entire invariant σ-algebra of the random walk coincides with the σ-algebra generated by the limit point. They also proved the stronger result that ∂U can be identified with the Martin boundary of the triangulation. Roughly speaking, this means that every positive harmonic function on the triangulation admits a representation as the harmonic extension of some measure on ∂U. A related representation theorem for harmonic functions of finite Dirichlet energy on bounded degree triangulations was established by Hutchcroft [42].
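A finite toy analogue of this representation (our illustration, not the setting of [5]): on the path {0, ..., L} with prescribed boundary values, the harmonic extension h(v) = E_v[f(X_τ)] can be estimated by running walks until they exit, and gambler's ruin gives the exact linear answer for comparison:

```python
import random

def h_estimate(v, L, f0, fL, trials=20000, seed=0):
    """Monte Carlo estimate of the harmonic extension of the boundary
    values f0 = f(0), fL = f(L) at the interior vertex v of the path
    {0, ..., L}: run walks from v until they hit a boundary point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = v
        while 0 < x < L:
            x += rng.choice((-1, 1))
        total += f0 if x == 0 else fL
    return total / trials

# Gambler's ruin: the exact harmonic extension on the path is linear,
# h(v) = f0 + (fL - f0) * v / L, so h(3) = 0.3 here.
print(h_estimate(3, 10, 0.0, 1.0))  # approx 0.3
```

In the theorem the walk instead converges to a point of ∂U and f is a bounded measurable function on the circle, but the expectation formula has the same shape.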
The results of [5] regarding the Poisson boundary followed earlier work by Georgakopoulos [27], which established a corresponding result for square tilings. Both results were revisited in the work of Hutchcroft and Peres [45], which gave a simplified and unified proof that works for both embeddings.
A parallel boundary theory for circle packings of unimodular random triangulations of unbounded degree was developed by Angel, Hutchcroft, the current author, and Ray in [6].
7. Harnack inequalities, Poincaré inequalities, and comparison to Brownian motion. The work of Angel, Barlow, Gurel-Gurevich and the current author [5] also established various quite strong estimates for random walk on circle packings of bounded degree triangulations. Roughly speaking, these estimates show that the random walk behaves similarly to the image of a Brownian motion under a quasi-conformal map, that is, a bijective map that distorts angles by at most a bounded amount (i.e., maps infinitesimal circles to infinitesimal ellipses of bounded eccentricity). These estimates were central to their result concerning the Martin boundary of the triangulation, and are also interesting in their own right. Further related estimates have also been established by Chelkak [17].
Recent work of Murugan [66] has built further upon these methods to establish very precise control of the random walk on (graphical approximations of) various deterministic self-similar fractal surfaces.
8. Liouville quantum gravity and the KPZ correspondence. Statistical physics in two dimensions has been one of the hottest areas of probability theory in recent years. The introduction of Schramm's SLE [74] and further breakthrough developments by Lawler, Schramm and Werner (see [52,53] and the references within) on the one hand, and the application of discrete complex analysis, pioneered by Smirnov [78], on the other, have led to several breakthroughs and to the resolution of a number of long-standing conjectures. These include the conformally invariant scaling limits of critical percolation [76] and Ising models [77], and the determination of critical exponents and dimensions of sets associated with planar Brownian motion [52] (such as the frontier and the set of cut points). It is manifest that much progress will follow, possibly including the treatment of self-avoiding walk (the connective constant of the hexagonal lattice was calculated in the breakthrough work [21]), the O(n) loop model and the Potts model. While the bulk of this body of work applies to specific lattices, there are many fascinating problems in extending results to arbitrary planar graphs.
The next natural step is to study the classical models of statistical physics in the context of random planar maps (see Le Gall's 2014 ICM proceedings [56]). There are deep conjectured connections between the behaviour of the models in the random setting versus the Euclidean setting, most significantly the KPZ formula of Knizhnik, Polyakov and Zamolodchikov [49] from conformal field theory. This formula relates the dimensions of certain sets in Euclidean geometry to the dimensions of corresponding sets in the random geometry. It may provide a systematic way to analyze models on the two-dimensional Euclidean lattice: first study the model in the random geometry setting, where the Markovian properties of the underlying space make the model tractable; then use the KPZ formula to translate the critical exponents from the random setting to the Euclidean one.
Much of this picture is conjectural, but a definite step towards this goal was taken in the influential paper of Duplantier and Sheffield [22]. Let us describe their formulation. Let G_n be a random triangulation on n vertices and consider its circle packing (or any other "natural" embedding) in the unit sphere. The embedding induces a random measure µ_n on the sphere by setting µ_n(A) to be the proportion of circle centers that lie in A. The Duplantier-Sheffield conjecture asserts that the measures µ_n converge in distribution to a random measure µ on the sphere whose density is given by an exponential of the Gaussian free field; the latter is carefully defined and constructed in [22]. This measure is what is known as Liouville quantum gravity (LQG).
Next, given a deterministic or random set K on the sphere, one can calculate its expected dimension using the random measure given by LQG, and using the usual Lebesgue measure; one gets two different numbers. Duplantier and Sheffield [22] obtain a quadratic formula allowing one to compute one number from the other in the spirit of [49]; this is the first rigorous instance of the KPZ correspondence. It allows one to compute the dimension of random sets in the Z^2 lattice (corresponding to Lebesgue measure) by first calculating the corresponding dimension in the random geometry setting and then appealing to the KPZ formula.
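For orientation, in one common normalization the quadratic relation of [22] reads as follows, where Δ is the quantum scaling exponent of the set, x is its Euclidean scaling exponent, and γ ∈ [0, 2) is the parameter of the Gaussian free field (we refer to [22] for the precise definitions of these exponents):

x = (γ^2/4) Δ^2 + (1 − γ^2/4) Δ.

For γ = 0 the relation degenerates to x = Δ, while for γ > 0 it is a genuine quadratic correction.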
Many difficult models of statistical physics are tractable on a random planar map due to the inherent randomness of the space. For instance, it can be shown that the self-avoiding walk on the UIPT behaves diffusively, that is, the endpoint of a self-avoiding walk of length n is typically at distance n^{1/2+o(1)} from the origin [19,34]. A straightforward calculation with the KPZ formula then predicts that the typical displacement of the self-avoiding walk of length n on the lattice Z^2 is n^{3/4+o(1)}; this is a notoriously hard open problem supported by extensive simulations.
LQG and the KPZ correspondence thus pose a path to solving many difficult problems in classical two-dimensional statistical physics. We refer the interested reader to Garban's excellent survey [26] of the topic.