Operator Theory Advances and Applications 293

# Pavel Kurasov

# Spectral Geometry of Graphs

# **Operator Theory: Advances and Applications**

#### **Volume 293**

#### **Founded in 1979 by Israel Gohberg**

#### **Series Editors:**

Joseph A. Ball (Blacksburg, VA, USA) Albrecht Böttcher (Chemnitz, Germany) Harry Dym (Rehovot, Israel) Heinz Langer (Wien, Austria) Christiane Tretter (Bern, Switzerland)

#### **Associate Editors:**

Vadim Adamyan (Odessa, Ukraine) Wolfgang Arendt (Ulm, Germany) Raul Curto (Iowa, IA, USA) Kenneth R. Davidson (Waterloo, ON, Canada) Fritz Gesztesy (Waco, TX, USA) Pavel Kurasov (Stockholm, Sweden) Vern Paulsen (Houston, TX, USA) Mihai Putinar (Santa Barbara, CA, USA) Ilya Spitkovsky (Abu Dhabi, UAE)

#### **Honorary and Advisory Editorial Board:**

Lewis A. Coburn (Buffalo, NY, USA) J.William Helton (San Diego, CA, USA) Marinus A. Kaashoek (Amsterdam, NL) Thomas Kailath (Stanford, CA, USA) Peter Lancaster (Calgary, Canada) Peter D. Lax (New York, NY, USA) Bernd Silbermann (Chemnitz, Germany)

**Subseries Linear Operators and Linear Systems**  *Subseries editors:*  Daniel Alpay (Orange, CA, USA) Birgit Jacob (Wuppertal, Germany) André C.M. Ran (Amsterdam, The Netherlands)

#### **Subseries Advances in Partial Differential Equations**

*Subseries editors:*  Bert-Wolfgang Schulze (Potsdam, Germany) Jerome A. Goldstein (Memphis, TN, USA) Nobuyuki Tose (Yokohama, Japan) Ingo Witt (Göttingen, Germany)

Pavel Kurasov

# Spectral Geometry of Graphs

Pavel Kurasov Department of Mathematics Stockholm University Stockholm, Sweden

ISSN 0255-0156 ISSN 2296-4878 (electronic) Operator Theory: Advances and Applications ISBN 978-3-662-67870-1 ISBN 978-3-662-67872-5 (eBook) https://doi.org/10.1007/978-3-662-67872-5

The publication of this open access book was funded by the Alexander von Humboldt Stiftung/Foundation and Stockholm University.

© The Editor(s) (if applicable) and The Author(s) 2024. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer-Verlag GmbH, DE

The registered company address is: Heidelberger Platz 3, 14197 Berlin, Germany

Paper in this product is recyclable.

*Tools are sometimes more important than results. Jan Boman*

*This book is dedicated to my parents Natalia Morozova and Boris Kurasov and my grandmother Elena Rusanova, who invested all their love, knowledge and life experience in raising two professors. My brother Viktor Kurasov played an extremely important role, not only by inspiring my work and discussing related questions, but through clever guidance through the academic world. This book would never have appeared without both the personal and mathematical support of Annemarie Luger, who was ready to answer my questions at any time of the day and created an atmosphere allowing me to work on the project. This home is unthinkable for me without our daughter Elena-Sofia, who not only tried to solve the problem of the seven bridges of Königsberg together with me, but helped by bringing fruit to the office. Many ideas described here could not have grown without training from my two scientific fathers Boris Pavlov and Jan Boman, who together with Sergio Albeverio shaped me as a mathematician during the decades we had the privilege of working together.*

*It took me several years to accomplish this treatise, and during this time I found out that my grandfather Vladimir Razumov graduated from Kazan university in 1908, specialising in mathematics, before his life was plowed by the Russian revolution and he was finally executed in 1938 for listening to a legend attributed to Hypatia, the legend that probably inspired Maeterlinck's L'Oiseau bleu. All memories of him were to be erased, and his own grandchildren had to restore his life story from a few still remaining shards.*

# **Notations**

- $\mathbf{S}_{\mathrm{e}}(k)$—the global edge scattering matrix (see (5.37));
- $\mathbf{S}_{\mathrm{v}}(k)$—the global vertex scattering matrix (see (3.54));
- $\mathbf{S} \equiv \mathbf{S}_{\mathrm{v}}(1)$—the general unitary matrix used to parametrise the vertex conditions at all vertices at once (see (3.52) and (3.53));
- $S_{\mathrm{v}}^{m}(k)$—the vertex scattering matrix for the single vertex $V^m$, given by (3.15);
- $\mathbf{M}_{\Gamma}(\lambda)$—the matrix-valued M-function associated with the contact set $\partial\Gamma$ and the magnetic Schrödinger operator $L_{q,a}^{\mathbf{S}}(\Gamma)$ (both $\partial\Gamma$ and $L_{q,a}^{\mathbf{S}}(\Gamma)$ are assumed to be known) (see Chap. 17);
- $\mathbb{M}_{\Gamma}(\lambda)$—diagonal blocks of the corresponding M-functions;
- $\mathbf{M}_{\Gamma}(\lambda, \Phi)$, $\mathbb{M}_{\Gamma}(\lambda, \Phi)$—the corresponding M-functions with the indicated dependence on the magnetic fluxes $\Phi$ through the cycles (see page 532).

## **Conventions**

• We use the following convention for the **Fourier transform**:

$$
\hat{\mu}(\mathbf{p}) = \int_{\mathbb{R}^n} e^{-i \langle \mathbf{x}, \mathbf{p} \rangle} \mu(\mathbf{x})\, d\mathbf{x}.
$$

Then the inversion formula is

$$\mu(\mathbf{x}) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i \langle \mathbf{x}, \mathbf{p} \rangle} \hat{\mu}(\mathbf{p})\, d\mathbf{p}.$$

• The **square root** is fixed by the following conventions:

$$
\lambda \in \mathbb{C} \Rightarrow k = \sqrt{\lambda}, \quad \operatorname{Im} k \ge 0,
$$

and

$$
\lambda > 0 \Rightarrow k = \sqrt{\lambda} > 0.
$$

• The **scalar product** is linear with respect to the second argument:

$$
\langle \alpha f, \beta g \rangle = \overline{\alpha} \beta \langle f, g \rangle.
$$
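These conventions can be checked numerically in dimension $n=1$. The sketch below (my own illustration, not from the book) applies the stated Fourier convention to the Gaussian $\mu(x) = e^{-x^2/2}$, whose transform is $\hat\mu(p) = \sqrt{2\pi}\, e^{-p^2/2}$, and then recovers $\mu$ through the inversion formula:

```python
import cmath
import math

# Numerical check of the Fourier convention for n = 1, applied to the
# Gaussian mu(x) = exp(-x^2/2) (an illustrative choice, not from the
# book); under this convention hat{mu}(p) = sqrt(2*pi) * exp(-p^2/2).
H = 0.05                                  # grid spacing
GRID = [i * H for i in range(-400, 401)]  # [-20, 20] carries all the mass

def mu(x):
    return math.exp(-x * x / 2)

def mu_hat(p):
    """hat{mu}(p) = int_R exp(-i x p) mu(x) dx, by a Riemann sum."""
    return sum(cmath.exp(-1j * x * p) * mu(x) for x in GRID) * H

def mu_inv(x0):
    """Inversion: (1/2pi) int_R exp(i x0 p) hat{mu}(p) dp = mu(x0)."""
    total = sum(cmath.exp(1j * x0 * p) * mu_hat(p) for p in GRID) * H
    return total / (2 * math.pi)

print(abs(mu_hat(0.0) - math.sqrt(2 * math.pi)) < 1e-8)   # True
print(abs(mu_inv(1.0) - mu(1.0)) < 1e-6)                  # True
```

Because both integrands decay super-exponentially, a plain Riemann sum on a truncated grid already reproduces the analytic answers to high accuracy.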

# **Contents**








# **Chapter 1 Very Personal Introduction**

**Where to Start?** Differential operators on metric graphs appear naturally in numerous applications where one is interested in describing transport or the propagation of waves on a metric graph—a set of edges (bonds) joined at their endpoints to form vertices (summits). Such problems have become popular in the last two decades and are nowadays known as **quantum graphs**, even though the investigated system is not required to have any quantum mechanical interpretation. Thinking about quantum graphs I often imagine a set of strings joined together into a structure reminiscent of a spiderweb; in this picture the spectrum is just the set of its eigenfrequencies. The systems being modelled are not necessarily locally one-dimensional; it is enough that the dynamics is essentially restricted to neighbourhoods of a few low-dimensional manifolds. Imagine, for example, the channels in Venice or Amsterdam: if you are interested in the shortest path, then approximation by a one-dimensional network is completely sufficient. If you look at how different boats miss one another in the narrow channels, then their shape starts to play a role.

*If one has enough imagination*<sup>1</sup> one may trace research on quantum graphs back to the works of Klaus Ruedenberg and Charles W. Scherr [456] in the 1950s, or even of Linus Pauling in the 1930s [425]. Research on differential operators on graph-like structures experienced a renaissance in the 1980s, pushed forward by the fabrication of nano-electronic devices, and since then it has been present at numerous conferences on mathematical physics, operator theory and differential equations. Quantum graphs nowadays form an attractive chapter of modern mathematical physics lying on the border between differential equations, discrete mathematics and algebraic geometry. Even other neighbouring areas of mathematics, such as number theory and different types of zeta functions, play a very important role in the area.
What is most important is that this area is still rapidly developing with many unexpected results to surprise us. Let me just mention the recent discovery

<sup>1</sup> I owe this citation to Evans Harrell.


that Laplacians on metric graphs lead to explicit examples of crystalline measures, which occupied specialists in Fourier analysis for decades (see Chap. 10).

**Meeting Quantum Graphs** Let me present here the history of my personal relation to quantum graphs, without any desire to be impartial or fair. I met quantum graphs (a long time before they got their name) for the first time in April 1987, when Pavel Exner and Petr Šeba came all the way from Dubna to Leningrad to visit Boris Pavlov's group and invite us into a completely new, unexplored field of mathematical physics. It appeared that these problems fitted perfectly with our interests connected with exactly solvable models in quantum mechanics. In particular, extension theory for symmetric operators, well developed in Leningrad, played a very important role. One should mention the publications by Yossi Avron [45–47], which influenced the whole later development of the field. Research in the area had just started, and the focus was on simple examples and the straightforward transfer of methods and results from classical mathematical physics. One example of this development can be found in [205], where a *one-dimensional* wire is coupled to a *two-dimensional* plane. I still remember Robert Schrader explaining to me that such a coupling contradicts physical intuition and that therefore the result in [205] is remarkable. Another example is the papers [235, 236] by Nikolai Gerasimenko and Boris Pavlov solving the inverse potential problem for the star graph. It was hard to expect at that stage that somebody would raise general questions relating the geometry of graphs to spectral properties of the corresponding Schrödinger operators. Surprisingly, no one was even interested in giving a rigorous mathematical definition. It was not until 1999 that the paper [309] by V. Kostrykin and R. Schrader provided a clear definition of a rather general quantum graph, although many ideas can already be found in [188, 206].
Reading the paper I realised that the connection to geometry had to be clarified, so we started discussing it with two Master students, Erik von Schwerin and Fredrik Stenberg, which led to the paper [355], where our most important contribution was a clear relation between the self-adjoint operator and the topological structure of the underlying metric graph. The key point is which vertex conditions should be allowed in order to properly reflect the connections between the edges.

In January 2002 Uzy Smilansky visited Lund University and spoke about nodal domains for metric graphs. The talk was very inspiring, putting operators on graphs into a more general perspective. By the way, it was the first time I heard the name *quantum graph*, although it had probably appeared a few years earlier. I liked the idea behind the trace formula, originally suggested by Boris Gutkin, Tsampikos Kottos, and Uzy Smilansky [252, 320, 321] and rigorously derived in our papers with Marlena Nowaczyk [346] a few years later.

It is not hard to understand that potentials on the edges play a secondary role if one is interested in the relation between geometry and spectrum. Hence one possible strategy is to study the spectral properties of graph Laplacians first and bring the potentials back later, using perturbation theory or the language of inverse problems, as we shall do.

**Graph Laplacians** The standard Laplacian on a metric graph is uniquely determined by the metric graph itself and is usually called the graph Laplacian; its spectrum is identified as the spectrum of the graph.

The eigenfunctions of the Laplacian are given by exponential functions on the edges; hence the spectrum of a graph can be identified with the zeroes of a certain trigonometric polynomial. This allows one to apply the apparatus of almost periodic functions and builds a bridge to the theory of quasicrystals.

Another major idea is connected with the trace formula, which relates the spectrum to the set of periodic orbits on the graph. It can be used to prove that the spectrum uniquely determines metric graphs with rationally independent edge lengths [252, 346] and to obtain a formula for the graph's Euler characteristic directly from the graph's spectrum [332, 333]. I cannot resist mentioning that this formula was even checked experimentally several years later in the group of Leszek Sirko [364, 365].

Felipe Barra and Pierre Gaspard [63] noticed that the spectrum of a graph Laplacian can be obtained as the intersection between the zero set of a certain multivariate secular polynomial (the determinant manifold) and a line. In this picture the secular polynomial depends entirely on the discrete graph, while the edge lengths give the direction vector of the line. This elegant description explains which features of the spectrum depend on the topology and which on the geometry of the metric graph. Moreover, it is possible to relate spectral properties of graphs to the reducibility of the secular polynomials.
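As a toy illustration of this picture (my own example, not taken from the book), take two edges of lengths $\ell_1, \ell_2$ joined in series, with standard conditions at the middle vertex and Neumann conditions at the loose ends. The secular equation $\sin\big(k(\ell_1+\ell_2)\big) = 0$ is the restriction of the length-independent function $F(t_1, t_2) = \sin(t_1 + t_2)$ on the torus to the line $(t_1, t_2) = k(\ell_1, \ell_2)$:

```python
import math

def secular_on_torus(t1, t2):
    """Secular function of the discrete graph (two edges in series with
    Neumann outer ends); it does not involve the edge lengths."""
    return math.sin(t1 + t2)

def spectrum(l1, l2, kmax, step=1e-3):
    """Restrict F to the line (t1, t2) = k*(l1, l2) and locate the zeros
    by a sign-change scan followed by bisection."""
    f = lambda k: secular_on_torus(k * l1, k * l2)
    ks, k = [], step
    while k < kmax:
        if f(k) * f(k + step) < 0:
            a, b = k, k + step
            for _ in range(60):
                m = (a + b) / 2
                a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
            ks.append((a + b) / 2)
        k += step
    return ks

l1, l2 = 1.0, math.sqrt(2)
ks = spectrum(l1, l2, 10.0)
# Standard conditions at a degree-two vertex just glue the edges smoothly,
# so the graph is an interval of length l1 + l2 and k_n = n*pi/(l1 + l2):
expected = [n * math.pi / (l1 + l2) for n in range(1, len(ks) + 1)]
print(max(abs(a - b) for a, b in zip(ks, expected)) < 1e-9)  # True
```

The combinatorics fixes $F$ once and for all; changing $\ell_1, \ell_2$ only tilts the line, exactly as in the Barra–Gaspard description.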

To understand how spectral properties of graph Laplacians depend on topology and geometry, one may study the behaviour of the spectrum under "small" topological and geometric perturbations. These studies, which grew out of our paper with Sergey Naboko [343], led to effective spectral estimates involving geometric and topological characteristics such as the number of independent cycles, the total length, the diameter and the girth. I had the privilege to work on this subject with Gregory Berkolaiko, James Kennedy, Gabriela Malenova, and Delio Mugnolo [87, 88, 294].

**Inverse Problems** The area of inverse problems grew out of a remarkable article by Viktor Ambartsumian from 1929, where it is proved that the zero potential is uniquely determined by the spectrum of the corresponding Schrödinger operator on a bounded interval. The geometric version of this theorem for metric graphs was proven in collaboration with Sergey Naboko, while the result for arbitrary graphs and zero potential is due to Brian Davies. Trying to combine these results for arbitrary graphs, it was realised that the spectrum of the Schrödinger operator on a metric graph uniquely determines the spectrum of the graph Laplacian. The proof was based on the fact that the spectrum of the graph Laplacian is given by the zeroes of a trigonometric polynomial, and therefore used the theory of almost periodic functions. In this way it was proven that the zeroes of two such functions coincide provided their asymptotics coincide. This result is now proved for arbitrary holomorphic almost periodic functions and illustrates how research on quantum graphs influences neighbouring, but seemingly independent, areas of mathematics [220, 357].

Our studies of inverse problems started after the conference in Snowbird in 2005, where, speaking with Sergey Avdonin, we realised that the Boundary Control method can be applied to solve the inverse problem on trees. The main difficulty in solving the inverse problem for general graphs is the interplay between the metric graph, the vertex conditions and the potential. The fact that non-tree graphs may carry magnetic potentials, but operators with magnetic potentials are unitarily equivalent to operators without magnetic potentials with different vertex conditions, does not make the problem easier. Solving the inverse problem, my main concern was that the procedure should be realisable in practice, rather than an academic result of little interest for applications. In this respect the possibility to consider spectral data depending on the magnetic fluxes through the cycles appeared very attractive; this procedure is specific for quantum graphs. It was Boris Altshuler who suggested that I look at quantum graphs with magnetic fields, during Boris Pavlov's retirement conference in the Bay of Islands, 2007.

**Crystalline Measures** Let us return to 2005 and mention another important observation coming from the trace formula:

For any fixed metric graph the trace formula provides an explicit example of a positive crystalline measure.

This observation looked so strange that we contacted Jean-Pierre Kahane, inquiring whether such measures are possible other than the classical Dirac comb leading to the Poisson summation formula. The answer was supportive but did not show much interest in obtaining such measures:

Playing with Poisson formulas in several dimensions gives a lot of formulas on the line.

Most probably Kahane did not realise that the new measures are positive and can be uniformly discrete. I, in turn, did not understand the general importance of the obtained measures for specialists in Fourier analysis, who had put a lot of effort into understanding whether such measures exist. In fact, as I learned later, the working hypothesis was that the only positive uniformly discrete measures are those given by Dirac combs or their finite combinations.

In December 2019 Yves Meyer came to Stockholm and spoke about crystalline measures at Institut Mittag-Leffler. He explained the status of the problem and presented a rather sophisticated construction of a signed crystalline measure assuming the Riemann hypothesis. It was hard to wait until the end of the lecture to tell him about our measures. It was clear that the measures associated with metric graphs are positive and uniformly discrete crystalline measures, but how to prove that these measures are not trivial? It was natural to take the simplest metric graph, the lasso graph, and study its spectrum. The non-trivial part of the spectrum is given by the equation

$$
3 \sin Ak + \sin k = 0, \tag{1.1}
$$

where *A* depends on the edge lengths and only the case of irrational *A* is of interest. It was sufficient to prove that the solutions of the equation do not belong to a union of a finite number of lattices. Denoting the zeroes by $k_n$, an even stronger sufficient condition can be formulated:

$$\dim_{\mathbb{Q}} \mathcal{L}_{\mathbb{Q}} \{ k_n \} = \infty,$$

where $\mathcal{L}_{\mathbb{Q}}$ denotes the linear span with rational coefficients and the dimension is taken over the rationals. My expectation was that an abstract formulation could help to find an easy solution to the problem. After trying myself, I started to tease colleagues, first in Stockholm and later all over the world. Boris Shapiro passed my question to Peter Sarnak, who saw the connection to Lang's conjecture and showed that for this particular equation the set of zeroes does not contain any arithmetic sequence. He requested an acknowledgement in the forthcoming paper. But the problem became even more exciting when we tried to prove the result for arbitrary metric graphs. It appeared that the reducibility of the secular multivariate polynomials is reflected in the arithmetic structure of the spectrum. The reducibility of secular polynomials is well described by Colin de Verdière's conjecture, which we proved on the way. We used the elegant description of the spectrum due to Barra and Gaspard [63].

Our first elementary example of a positive uniformly discrete crystalline measure, based on Eq. (1.1), was generalised using stable polynomials [350]. Developing these ideas, all such crystalline measures in one dimension have since been characterised [414]. This is the second very encouraging example showing the usefulness of quantum graphs for other areas of mathematics.
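The flavour of these arithmetic questions can be explored numerically. The sketch below (my own illustration; the secular function $f(k) = 3\sin Ak + \sin k$ with $A = \sqrt2$ is an illustrative lasso-type choice, and the numerical thresholds are heuristic) locates the zeros $k_n$ and checks that they form a uniformly discrete set whose gaps are not all equal, so the zeros are not a single arithmetic progression:

```python
import math

A = math.sqrt(2)                       # an irrational length ratio
f = lambda k: 3 * math.sin(A * k) + math.sin(k)

def zeros(kmax, step=1e-3):
    """Zeros of f on (0, kmax): sign-change scan plus bisection.
    Since |3 sin(Ak)| > 1 away from the zeros of sin(Ak), all zeros of
    f are simple and well separated, so the scan cannot miss any."""
    ks, k = [], step
    while k < kmax:
        if f(k) * f(k + step) < 0:
            a, b = k, k + step
            for _ in range(60):
                m = (a + b) / 2
                a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
            ks.append((a + b) / 2)
        k += step
    return ks

ks = zeros(200.0)
gaps = [b - a for a, b in zip(ks, ks[1:])]
print(min(gaps) > 1.0)               # uniformly discrete zero set: True
print(max(gaps) - min(gaps) > 0.1)   # gaps vary, not one lattice: True
```

Each zero sits near a zero of $\sin(Ak)$, shifted by an amount that depends on $\sin k$; it is precisely this slow, non-periodic modulation of the gaps that makes the arithmetic structure of the $k_n$ subtle.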

**Final Remarks** This book is the result of my long journey through the country of differential operators on metric graphs. I have tried to collect results reflecting my understanding of the subject and to underline connections between different, seemingly independent areas of research. For example, it is hard to separate the area of inverse problems from spectral estimates for graph Laplacians—this connection is straightforward. But the relation of the spectrum to the topology of the graph, or the dependence of arithmetic properties of the spectrum on the reducibility of secular polynomials, surprised me a lot.

You can see that destiny and pure luck played a decisive role in my relations with quantum graphs. What would have happened if Uzy Smilansky had not visited Lund, or if a department meeting had prevented me from attending Yves Meyer's lecture at Institut Mittag-Leffler? Many of the results described in the monograph would not have been discovered without interaction with my dear collaborators, who often not only worked together with me but also contributed to my own studies. The influence of two of my colleagues was decisive for the appearance of the book:

• **Sergey Naboko**, with whom I spent a lot of time setting up numerous mathematical experiments and who infected me with his love of analysis in exchange for my spoiling him with the geometry of graphs. It is probably Sergey's fault that the book did not appear earlier—he read and criticised the first chapters, showing that the book was not yet mature.

• **Peter Sarnak**, who taught me to think from a broad perspective and fly high. One quotation from Peter helped me several times on my way:

Mathematics is simple, one should just understand why.

It is due to Peter that the book was lifted to a new level, connecting quantum graphs to other areas of mathematics.

All of you, first of all my Master and PhD students, helped me during my work and often suffered from listening to unfinished arguments and vague explanations:


• Gabriela Malenova

My special thanks go to Ask, Jacob, Jan, Jonathan, Matthew, and Rune, who read and commented on the manuscript at different stages. If you find incorrect passages, this means that they were introduced by me alone after Jacob and Matthew finished reading the manuscript.

This book would have never appeared without support from **The Swedish Research Council (VR)** through several individual research grants (in particular 2013-4973, 2020-03780).

I would also like to thank all participants in the research group **Discrete and Continuous Models in the Theory of Networks**, which we organised together with Fatihcan Atay and Delio Mugnolo at the Zentrum für interdisziplinäre Forschung in Bielefeld (2012–2017).

Special thanks go to the **Alexander von Humboldt Foundation** (Germany) and the **Department of Mathematics, Stockholm University**, who generously financed the Open Access publication of the book.

#### **Further Reading**

Writing the book I tried to make it self-contained and clear, but it is always helpful to consult other sources providing an introduction to the area:




These books cover in particular the following important and very interesting topics not discussed in the monograph:


It is unreasonable nowadays to provide a complete list of references to papers where these and other subjects related to differential operators on graphs are treated, especially since the area is growing exponentially and new papers will appear weeks after the publication of this book. Interested readers may read the proceedings volumes


or simply consult publications by the researchers who appeared above. Let me just add one name missing from the list above—that of Uzy Smilansky—who is not only responsible for the name *quantum graph*, which grew out of the paper [320], but who by his innovative ideas and brilliant observations shaped research in the area.


# **Chapter 2 How to Define Differential Operators on Metric Graphs**

# **2.1 Schrödinger Operators on Metric Graphs**

The main subject of our studies will be magnetic Schrödinger operators on metric graphs. Every such operator is determined by a triple consisting of


The three components of such a triple are not completely independent, and we are going to describe each of them in detail. Our studies will mostly be restricted to self-adjoint operators, but the same methods can be successfully applied even to certain non-self-adjoint problems.

## *2.1.1 Metric Graphs*

In discrete mathematics, graphs are usually defined as ordered pairs of vertices and edges with the emphasis on the *first component*—the set of vertices. This reflects the fact that one is interested in a certain jump process between the vertices, almost neglecting the dynamics on the edges. For differential operators the edges play the crucial role, which makes it necessary to turn the standard definition of graphs 'upside down' and start the whole construction with the edges.

Let us recall first the definition of a graph used in discrete mathematics:

*A graph G consists of a set V of vertices and a set E of edges. Every edge connects two vertices and therefore can be seen as an element of V* × *V. Then E is a subset of V* × *V* .

To represent a graph, one often makes a drawing like it is done in Figs. 2.2 and 2.3. Here fat points correspond to vertices, while lines represent the edges. It is natural to generalise this definition and consider every edge as an interval on the real line having a certain length. Only the lengths of the intervals will be important for us. In this way we obtain a new object called a **metric graph**. Any two points on such a graph have a distance—the length of the shortest path connecting the points. Even in this approach the edges seem to play a secondary role, as their lengths appear merely as weights attached to the edges of a discrete graph. We prefer to change the point of view and start the whole construction with the edges. Here we are going to give a rigorous definition for graphs formed by a finite number of edges.

Consider *N* compact or semi-infinite intervals *En*, each belonging to a separate copy of the real line R:

$$E_n = \begin{cases} [\mathbf{x}_{2n-1}, \mathbf{x}_{2n}], & n = 1, 2, \dots, N_c, \\ [\mathbf{x}_{2n-1}, \infty), & n = N_c + 1, \dots, N_c + N_i = N, \end{cases} \tag{2.1}$$

where $N_c$ (respectively $N_i$) denotes the number of compact (respectively semi-infinite) intervals. Each of these numbers may be equal to zero. The intervals $E_n$ are called **edges**. It will be convenient to assume that $x_{2n-1} \leq x_{2n}$.

Consider the set $\mathbf{V} = \{x_j\} = \bigcup_{n=1}^{N_c} \{x_{2n-1}, x_{2n}\} \cup \bigcup_{n=N_c+1}^{N} \{x_{2n-1}\}$ of all endpoints, and an arbitrary partition of $\mathbf{V}$ into $M$ equivalence classes $V^m$, $m = 1, 2, \dots, M$, called **vertices**. In other words, we divide the set $\mathbf{V}$ into $M$ non-intersecting sets $V^m$:

$$\begin{aligned} \mathbf{V} &= V^1 \cup V^2 \cup \dots \cup V^M, \\ V^{m_1} \cap V^{m_2} &= \emptyset, \text{ provided } m_1 \neq m_2. \end{aligned} \tag{2.2}$$

The endpoints belonging to the same equivalence class will be identified

$$\mathbf{x}, \mathbf{y} \in V^{m} \Rightarrow \mathbf{x} \sim \mathbf{y},\tag{2.3}$$

where ∼ denotes the equivalence relation induced by the partition.

**Definition 2.1** Consider $N$ finite or semi-infinite closed intervals $E_n$ belonging to separate (disjoint) copies of $\mathbb{R}$, called **edges**, and a partition of the set $\mathbf{V}$ of their endpoints into equivalence classes $V^m$, called **vertices**: $\mathbf{V} = \bigcup_{m=1}^{M} V^m$. The corresponding **metric graph** $\Gamma$ is the union of the edges with the endpoints belonging to the same vertex identified.

Two points $x$ and $y$ are **equivalent** ($x \sim y$) if and only if either they belong to the same edge $E_n$ and are equal, or they belong to the same vertex $V^m$:

$$\mathbf{x} \sim \mathbf{y} \Leftrightarrow \begin{cases} \exists E_n : \mathbf{x}, \mathbf{y} \in E_n \text{ and } \mathbf{x} = \mathbf{y}, \\ \exists V^m : \mathbf{x}, \mathbf{y} \in V^m. \end{cases} \tag{2.4}$$

With this notation the graph can formally be seen as the quotient metric space

$$
\Gamma = \bigcup_{n=1}^{N} E_n \, \Big/ \sim. \tag{2.5}
$$

A metric graph is called **connected** if any two points $x$ and $y$ in $\Gamma$ can be connected by a **path**—a finite sequence of compact intervals $I_j = [y_{2j-1}, y_{2j}]$, $j = 1, 2, \dots, J$, each belonging to a certain edge $E_n$, $n = n(j)$, such that the endpoints of subsequent intervals are equivalent (belong to the same vertex):

$$
y_{2j} \sim y_{2j+1}, \quad j = 1, 2, \dots, J - 1,
$$

and

$$\mathbf{x} = y_1, \quad \mathbf{y} = y_{2J}.$$

Note that such a path need not be unique. If the graph is not connected, then it is straightforward to define the **number** *β*<sup>0</sup> **of connected components** (the zero-th Betti number). We shall often restrict our studies to connected graphs.

The **distance** $d(x, y)$ between any two points $x, y \in \Gamma$ is the length of the shortest path connecting the points. Note that the distance between two points on the same edge may be less than $|x - y|$. We formally put $d(x, y) = \infty$ if $x$ and $y$ belong to different connected components.
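As a concrete sketch of the last remark (my own minimal encoding, not the book's formalism): represent each edge by (left vertex, right vertex, length), a point by (edge index, coordinate), and compute $d(x, y)$ as the minimum over the direct route along a shared edge and all routes that leave via an endpoint, travel between vertices, and enter the other point's edge. On a loop of length 6, the points with coordinates 1 and 5 satisfy $|x - y| = 4$, yet $d(x, y) = 2$:

```python
import heapq

# Edges as (left_vertex, right_vertex, length); a loop repeats the vertex.
edges = [(0, 0, 6.0)]                 # one loop edge of length 6 at vertex 0

def vertex_dist(src):
    """Dijkstra on the vertices; edge weights are the edge lengths."""
    nv = 1 + max(max(a, b) for a, b, _ in edges)
    dist = [float("inf")] * nv
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for a, b, l in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and d + l < dist[w]:
                    dist[w] = d + l
                    heapq.heappush(pq, (dist[w], w))
    return dist

def distance(x, y):
    """d(x, y): direct along a shared edge, or out of x's edge through
    the vertices and into y's edge, whichever is shorter."""
    (ex, tx), (ey, ty) = x, y
    best = abs(tx - ty) if ex == ey else float("inf")
    ax, bx, lx = edges[ex]
    ay, by, ly = edges[ey]
    for vx, off_x in ((ax, tx), (bx, lx - tx)):
        dv = vertex_dist(vx)
        for vy, off_y in ((ay, ty), (by, ly - ty)):
            best = min(best, off_x + dv[vy] + off_y)
    return best

print(distance((0, 1.0), (0, 5.0)))   # 2.0, although |x - y| = 4
```

Any path between two points either stays inside one edge or decomposes as edge segment, vertex-to-vertex walk, edge segment, which is why the combination of a direct check and Dijkstra between vertices suffices.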

Every metric graph can also be seen as a singular one-dimensional manifold with the singular set given by the vertices.

The number $d^m$ of elements in the class $V^m$ will be called the **valence** or **degree of** $V^m$. If the graph has no loops (edges attached by both endpoints to the same vertex), then the degree of a vertex is equal to the number of edges joined together at it. In what follows we are going to identify the set of all endpoints $\mathbf{V} = \{x_{2n-1}, x_{2n}\}_{n=1}^{N_c} \cup \{x_{2n-1}\}_{n=N_c+1}^{N}$ with the set of all vertices $\{V^m\}_{m=1}^{M}$. It is clear that

$$D := \sum_{m=1}^{M} d^m = 2N_c + N_i = \#\mathbf{V}, \tag{2.6}$$

where #**V** denotes the total number of endpoints.
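A minimal sketch checking (2.6) on a toy graph (my own encoding, not the book's: endpoints carry the labels from the text, compact edges own two labels, semi-infinite edges one, and a vertex is a block of the partition):

```python
# Toy graph with Nc = 3 compact and Ni = 1 semi-infinite edges.  Endpoint
# labels follow the text: x_{2n-1}, x_{2n} for compact edges and x_{2n-1}
# for the semi-infinite one, so the labels are {1,...,6} and {7}.
Nc, Ni = 3, 1
endpoints = sorted([2 * n - 1 for n in range(1, Nc + Ni + 1)] +
                   [2 * n for n in range(1, Nc + 1)])       # [1, ..., 7]

# An arbitrary partition of the endpoints into M = 3 vertices V^m:
vertices = [{1, 4}, {2, 3, 5}, {6, 7}]
assert sorted(x for V in vertices for x in V) == endpoints  # a partition

degrees = [len(V) for V in vertices]      # d^m = number of endpoints in V^m
D = sum(degrees)
print(degrees)                             # [2, 3, 2]
print(D == 2 * Nc + Ni == len(endpoints))  # True -- formula (2.6)
```

The check is immediate: each compact edge contributes two endpoints and each semi-infinite edge one, and the partition merely redistributes them among the vertices without changing the total.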

The main part of this book will be devoted to **compact finite graphs**, which occur when $N_i = 0$, i.e. when all edges are finite closed intervals: $N = N_c$. On the other hand, non-compact graphs appear in applications related to scattering phenomena, and it is hard to avoid such graphs when speaking about vertex conditions (Fig. 2.1).

**Fig. 2.2** Connected compact graph

For compact graphs, the **Euler characteristic** *χ* is given by the formula

$$
\chi = M - N,\tag{2.7}
$$

where *M* and *N* are the number of vertices and edges respectively. The Euler characteristic determines the first Betti number *β*1—the number of (homotopically) independent cycles in the graph

$$
\beta\_1 = \beta\_0 - \chi.\tag{2.8}
$$

It coincides with the number of generators in the fundamental group.

For metric **trees**—connected graphs without cycles—the Euler characteristic is equal to one. For all other connected graphs, *χ* is nonpositive: *χ* ≤ 0*.*

The graphs presented in Figs. 2.2 and 2.3 have the same Euler characteristic, equal to −3, but different numbers of independent cycles, namely 4 and 6, since they do not have the same number of connected components.
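The relations $\chi = M - N$ and $\beta_1 = \beta_0 - \chi$ are easy to verify computationally. The following sketch is my own (the union-find component count and the sample graph are illustrative); it reproduces the case $\chi = -3$ for a connected graph:

```python
def betti_numbers(M, edges):
    """beta_0 = number of connected components, beta_1 = beta_0 - chi,
    where chi = M - N for a graph with M vertices and edge list `edges`."""
    parent = list(range(M))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for u, v in edges:
        parent[find(u)] = find(v)
    beta0 = len({find(i) for i in range(M)})
    chi = M - len(edges)
    return beta0, beta0 - chi

# five parallel edges between two vertices: chi = 2 - 5 = -3, one component
print(betti_numbers(2, [(0, 1)] * 5))   # (1, 4) — four independent cycles
```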

Only the lengths $\ell_n = x_{2n} - x_{2n-1}$, $n = 1, 2, \dots, N_c$, of the finite edges are going to play a role in our studies, not their particular parametrization. Therefore graphs with equal lengths of the edges will be identified, provided of course that the edges are connected in the same way. For compact graphs we define their **total length** $\mathcal{L}$ as

$$\mathcal{L} = \sum\_{n=1}^{N} \ell\_n. \tag{2.9}$$

It is clear that isometric compact graphs have the same total length.

A parametrization of the edges naturally induces a measure on the graph $\Gamma$. Consider complex-valued functions on $\Gamma$ and the corresponding Hilbert space

$$L\_2(\Gamma) = \bigoplus\_{j=1}^N L\_2(\mathbf{x}\_{2j-1}, \mathbf{x}\_{2j}).\tag{2.10}$$

Note that the functions from the Hilbert space are not defined pointwise, and therefore the Hilbert space does not reflect how the edges connect to each other. Particular values that a function attains at the vertices (forming a set of measure zero) do not play any role. In particular, we shall use

$$\int\_{\Gamma} f(\mathbf{x})d\mathbf{x} = \sum\_{n=1}^{N} \int\_{E\_n} f(\mathbf{x})d\mathbf{x}.\tag{2.11}$$

Even if the endpoints belonging to the same equivalence class $V^m$ are identified, in the case of functions that are continuous on the edges it appears natural to introduce their values at the endpoints using the limits from inside the edges

$$
u(x_j) = \lim_{x \to x_j} u(x).\tag{2.12}
$$

Note that these limits may be different for $x_j$ belonging to the same vertex $V^m$, so that the value of the function at the vertex, $u(V^m)$, is in general not well-defined. It is well-defined only if all $u(x_j)$, $x_j \in V^m$, are equal, that is, if *u* is continuous not only on the edges but at the vertex $V^m$ as well. If the function is continuously differentiable on the edges, we introduce the **normal derivatives**

$$\partial_\mathbf{n} u(x_j) = \begin{cases} \displaystyle\lim_{x \to x_j} \frac{d}{dx} u(x), & x_j \text{ is the left endpoint}, \\[2mm] \displaystyle-\lim_{x \to x_j} \frac{d}{dx} u(x), & x_j \text{ is the right endpoint}. \end{cases} \tag{2.13}$$

The limits are taken from inside the corresponding interval. The normal derivatives are independent of the direction in which the edge is parametrised and always point inside the interval. Note that, in contrast to function values, it makes no sense to speak about continuity of normal derivatives at a vertex, as the normal derivatives are not defined inside the edges.

The introduced limiting values $u(x_j)$, $\partial_\mathbf{n} u(x_j)$ will be used in the vertex conditions.

Consider for example the circle graph $\Gamma_{(1.2)}$<sup>1</sup> formed by one interval $[x_1, x_2]$ with the endpoints identified. Then the normal derivatives are given by

$$
\partial_\mathbf{n} u(x_1) = u'(x_1), \qquad \partial_\mathbf{n} u(x_2) = -u'(x_2).
$$

Then the vertex conditions

$$\begin{cases} u(x_1) = u(x_2), \\[1mm] \partial_\mathbf{n} u(x_1) + \partial_\mathbf{n} u(x_2) = 0, \end{cases} \tag{2.14}$$

usually called standard (to be discussed below in Sect. 2.1.3), imply that

$$\begin{cases} u(x_1) = u(x_2), \\[1mm] u'(x_1) = u'(x_2), \end{cases}$$

*i.e.* that the function and its first derivative are continuous at the vertex.

## *2.1.2 Differential Operators*

The differential operator describes the dynamics of waves or particles travelling along the edges. One may consider different operators depending on the particular phenomena one would like to describe. In this book we shall limit our consideration to the Schrödinger operator, which is standard for quantum mechanical problems. Other differential operators can also be studied. Some developed ideas can be used, but often serious modifications are necessary.

<sup>1</sup> Here and in what follows, considering graphs on one, two, and three edges we shall refer to their classification presented in Fig. 6.4 below. It is natural to use the same classification for metric and discrete graphs, so that the metric graph $\Gamma_{(i.j)}$ corresponds to the discrete graph $G_{(i.j)}$. Remember that graphs with different enumerations of edges and vertices are considered to be equivalent.

More precisely, we are going to consider the following three differential operators:

• the **Laplace** operator

$$
\tau = -\frac{d^2}{dx^2};\tag{2.15}
$$

• the **Schrödinger** operator

$$
\tau_q = -\frac{d^2}{dx^2} + q(x);\tag{2.16}
$$

• the **magnetic Schrödinger** operator

$$
\tau_{q,a} = \left( i \frac{d}{dx} + a(x) \right)^2 + q(x). \tag{2.17}
$$

The magnetic Schrödinger operator *τq,a* describes quantum particles moving under the influence of the electric potential *q* and the magnetic potential *a.* If the magnetic potential is identically equal to zero, then we get the usual Schrödinger operator *τq* ≡ *τq,*0. The case where both magnetic and electric potentials vanish corresponds to free motion and is described by the Laplace operator *τ* ≡ *τ*<sup>0</sup> ≡ *τ*0*,*0*.*

In our studies we are going to assume that the potentials satisfy the following natural assumptions:

(1) the potentials are real

$$q(\mathbf{x}), a(\mathbf{x}) \in \mathbb{R};\tag{2.18}$$

(2) the electric potential *q* is absolutely integrable

$$q \in L\_1(\Gamma),$$

and satisfies the Faddeev condition

$$\int\_{\Gamma} (1+|x|) \cdot |q(x)| dx < \infty;$$

(3) the magnetic potential *a* is continuous

$$a \in C(\Gamma \backslash \mathbf{V}).$$

The Faddeev condition holds automatically for any $q \in L_1$ if the graph is compact.

In the introductory chapters, we are going to use slightly stronger assumptions on the potentials. The reason is that without such assumptions the domain of the operator may depend on the potential, which makes presentation more involved. We postpone this discussion to Chap. 4 and assume here in addition to (2.18) that the (electric) potential is essentially bounded

$$q \in L\_{\infty}(\Gamma) \tag{2.19}$$

and the magnetic potential is continuously differentiable on each edge

$$a \in C^1(\Gamma \backslash \mathbf{V}).\tag{2.20}$$

The differential expressions (2.15), (2.16), or (2.17) do not determine unique self-adjoint operators, since one needs to specify the operator domain. We are going to see that the freedom in selecting the domain is not very broad, and is limited to selecting different vertex conditions. One may define certain maximal and minimal operators, so that the domain of any self-adjoint operator associated with the differential expression *τq,a* is always contained in the domain of the maximal operator, and includes the domain of the minimal one.

The **minimal** operator is defined on the domain $\text{Dom}\,(L^{\min}_{q,a}) = C_0^\infty(\Gamma \setminus \mathbf{V})$ consisting of smooth (*i.e.* infinitely many times differentiable) functions with compact support separated from the vertices. This domain is dense in the Hilbert space $L_2(\Gamma)$, and the operator defined by the differential expression $\tau_{q,a}$ on this domain is symmetric, but not self-adjoint.<sup>2</sup> (We shall prove the symmetry of $L^{\min}_{q,a}$ by integrating by parts in (2.25).) The **maximal** operator is defined on the domain of all functions from the Hilbert space $L_2(\Gamma)$ whose images (under the differential expression) are still in the Hilbert space:

$$\text{Dom}\left(L_{q,a}^{\max}\right) = \left\{ u \in L_2(\Gamma) : \tau_{q,a} u \in L_2(\Gamma) \right\}.\tag{2.21}$$

Since we assume here that $q \in L_\infty(\Gamma)$, the operator of multiplication by *q* is bounded, so $\tau_{q,a} u \in L_2(\Gamma)$ if and only if $\left(i \frac{d}{dx} + a\right)^2 u \equiv -u'' + 2iau' +$

<sup>2</sup> We remind that an operator *A* in the Hilbert space $\mathcal{H}$ is called **symmetric** if

$$
\langle Au, v\rangle = \langle u, Av\rangle
$$

holds for any $u, v \in \text{Dom}\,(A)$. A densely defined symmetric operator is called **self-adjoint** if the estimate $|\langle Au, v\rangle| \leq C \| u \|$, with a certain $C = C(v) \in \mathbb{R}_+$ and all $u \in \text{Dom}\,(A)$, holds only for $v \in \text{Dom}\,(A)$. Note that due to the Riesz representation theorem this inequality implies that there exists a certain $w \in \mathcal{H}$ such that $\langle Au, v\rangle = \langle u, w\rangle$ holds. In other words, a densely defined symmetric operator is self-adjoint if the domain of the adjoint operator $A^*$ coincides with the domain of *A*; the action of the adjoint operator then necessarily coincides with the action of the original operator. We reserve the name **Hermitian** for symmetric and self-adjoint operators in a finite dimensional Hilbert space. For details see any course on the theory of (unbounded) self-adjoint operators in Hilbert spaces, for example [90, 442] or Volume 4, Chapter 7 of [474].

$(ia' + a^2)u \in L_2(\Gamma)$. The function $(ia' + a^2)u$ belongs to $L_2(\Gamma)$ since *a* is continuously differentiable on the edges (2.20).

It follows that $iu' + au \in W_2^1(\Gamma \setminus \mathbf{V})$, and hence $u \in W_2^2(\Gamma \setminus \mathbf{V})$, where $W_p^q$ denotes the Sobolev space. We have proven that

$$\text{Dom}\,(L\_{q,a}^{\text{max}}) = W\_2^2(\Gamma \backslash \mathbf{V}),\tag{2.22}$$

since every function from this domain is mapped by $\tau_{q,a}$ into a function from the Hilbert space. We stress once more that formula (2.22) holds under the assumption that $q \in L_\infty(\Gamma)$. The maximal operator is not symmetric, but it is an extension of the minimal operator $L^{\min}_{q,a}$, since the differential expression $\tau_{q,a}$ is formally symmetric. Therefore any self-adjoint operator $L_{q,a}$ associated with the differential expression $\tau_{q,a}$ satisfies:

$$\text{Dom}\,(L\_{q,a}^{\text{min}}) \subset \text{Dom}\,(L\_{q,a}) \subset \text{Dom}\,(L\_{q,a}^{\text{max}}).\tag{2.23}$$

Our task is to specify the domain of the self-adjoint operator $L_{q,a}$. To solve such a problem it is standard to use von Neumann extension theory for symmetric operators, which characterizes all possible extensions, but our goal is to describe those extensions that correspond to the graph $\Gamma$. Note that neither the minimal nor the maximal operator respects how different edges are connected to each other, since each of these operators can be written as an orthogonal sum of operators in $L_2(E_n)$:

$$L\_{q,a}^{\min}(\Gamma) = \bigoplus\_{n=1}^{N} L\_{q,a}^{\min}(E\_n), \qquad L\_{q,a}^{\max}(\Gamma) = \bigoplus\_{n=1}^{N} L\_{q,a}^{\max}(E\_n), \tag{2.24}$$

where to define $L^{\max/\min}_{q,a}(E_n)$ we consider each edge $E_n$ as a graph formed by one edge. Thus, selecting the domain of $L_{q,a}$, one has to respect the topological structure. Therefore we prefer to use a constructive approach to describe the self-adjoint extensions.<sup>3</sup>

To understand whether an operator is symmetric or not it is useful to calculate the boundary form, which vanishes if the operator is symmetric. Let us calculate the boundary form for the maximal operator:<sup>4</sup>

<sup>3</sup> Another possibility would be to use the theory of boundary triples, since the minimal operator has finite deficiency indices (equal to the number of endpoints). In fact, differential operators on metric graphs are an area where the theory of boundary triples can be applied.

<sup>4</sup> Here we use the convention from mathematical physics that the scalar product in a Hilbert space is linear in the second argument and anti-linear in the first one: $\langle \alpha u, \beta v\rangle = \overline{\alpha}\beta \langle u, v\rangle$, $\alpha, \beta \in \mathbb{C}$.

$$\begin{aligned}
\langle L_{q,a}^{\max} u, v \rangle - \langle u, L_{q,a}^{\max} v \rangle
&= \sum_{n=1}^{N} \left\{ \int_{E_n} \overline{\left[ \left( i \frac{d}{dx} + a(x) \right)^{2} + q(x) \right] u(x)} \cdot v(x)\, dx \right. \\
&\qquad \left. - \int_{E_n} \overline{u(x)} \cdot \left[ \left( i \frac{d}{dx} + a(x) \right)^{2} + q(x) \right] v(x)\, dx \right\} \\
&= \sum_{n=1}^{N} \left\{ -\,\overline{\left( \frac{d}{dx} - ia(x) \right) u(x)} \cdot v(x) \Big|_{x=x_{2n-1}}^{x_{2n}}
+ \int_{E_n} \overline{\left( \frac{d}{dx} - ia(x) \right) u(x)} \cdot \left( \frac{d}{dx} - ia(x) \right) v(x)\, dx \right. \\
&\qquad + \overline{u(x)} \left( \frac{d}{dx} - ia(x) \right) v(x) \Big|_{x=x_{2n-1}}^{x_{2n}}
\left. - \int_{E_n} \overline{\left( \frac{d}{dx} - ia(x) \right) u(x)} \cdot \left( \frac{d}{dx} - ia(x) \right) v(x)\, dx \right\} \\
&= \sum_{x_j \in \mathbf{V}} \left\{ \overline{\partial u(x_j)} \cdot v(x_j) - \overline{u(x_j)} \cdot \partial v(x_j) \right\},
\end{aligned} \tag{2.25}$$

where we used notation (2.12) and the **extended normal derivatives** determined by

$$\partial u(\mathbf{x}\_{j}) = \begin{cases} \lim\_{\mathbf{x} \to \mathbf{x}\_{j}} \left( \frac{d}{dx} u(\mathbf{x}) - i a(\mathbf{x}) u(\mathbf{x}) \right), & \mathbf{x}\_{j} \text{ is the left endpoint}, \\ - \lim\_{\mathbf{x} \to \mathbf{x}\_{j}} \left( \frac{d}{dx} u(\mathbf{x}) - i a(\mathbf{x}) u(\mathbf{x}) \right), & \mathbf{x}\_{j} \text{ is the right endpoint}. \end{cases} \tag{2.26}$$

The limits are taken as *x* approaches the endpoint $x_j$ from inside the edge. In the case of zero magnetic potential, the extended normal derivatives coincide with the normal derivatives $\partial_\mathbf{n} u(x_j)$ introduced earlier (2.13). The limits in (2.25) exist since the functions *u* and *v* belong to the Sobolev space $W_2^2(E_n)$. In formula (2.25), one should ignore the right endpoints in the case of semi-infinite edges. We see that the boundary form does not necessarily vanish if no further conditions on the functions are introduced.
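For a single edge with $a \equiv 0$ and $q \equiv 0$, the identity (2.25) can be verified symbolically. The following sketch is my own check (the test functions are arbitrary smooth choices): it compares the boundary form of the Laplacian on $[0, 1]$ with the endpoint expression built from the normal derivatives.

```python
import sympy as sp

x = sp.symbols("x", real=True)
# smooth complex-valued test functions on the single edge [0, 1]
u = sp.exp(2 * sp.I * x) + x**2
v = sp.cos(3 * x) + sp.I * x

# boundary form <Lu, v> - <u, Lv> for L = -d^2/dx^2 (a = 0, q = 0)
lhs = sp.integrate(sp.conjugate(-sp.diff(u, x, 2)) * v
                   - sp.conjugate(u) * (-sp.diff(v, x, 2)), (x, 0, 1))

du, dv = sp.diff(u, x), sp.diff(v, x)
# extended normal derivatives (2.26) with a = 0:
# +u' at the left endpoint, -u' at the right endpoint
rhs = (sp.conjugate(du.subs(x, 0)) * v.subs(x, 0)
       - sp.conjugate(u.subs(x, 0)) * dv.subs(x, 0)) \
    + (sp.conjugate(-du.subs(x, 1)) * v.subs(x, 1)
       - sp.conjugate(u.subs(x, 1)) * (-dv.subs(x, 1)))

print(abs(complex(sp.N(lhs - rhs))) < 1e-10)  # True
```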

The minimal operator $L^{\min}_{q,a}$ is symmetric, since it is given by the same differential expression on the domain $\text{Dom}\,(L^{\min}_{q,a}) = C_0^\infty(\Gamma \setminus \mathbf{V}) \subset W_2^2(\Gamma \setminus \mathbf{V}) = \text{Dom}\,(L^{\max}_{q,a})$, and the values $u(x_j)$, $\partial u(x_j)$ are equal to zero:

$$\langle L\_{q,a}^{\min}u,v\rangle = \langle u, L\_{q,a}^{\min}v\rangle.$$

The differential operator *τq,a* can be made self-adjoint by restricting the maximal operator using certain vertex conditions described below: in the following subsection we give an example of such conditions, while the most general case is discussed in Chap. 3.

## *2.1.3 Standard Vertex Conditions*

The vertex conditions are needed in order to make the differential operator self-adjoint, but their role is not limited to this. These conditions should relate only the values $u(x_j)$, $\partial u(x_j)$ associated with the same vertex. They should also be irreducible, so that the vertex cannot be divided into two or more smaller vertices (in which case the vertex conditions would connect the values belonging to each of the smaller vertices separately). Then the vertex conditions correctly reflect how different edges in $\Gamma$ are connected to each other.

The set of all appropriate vertex conditions is well understood and will be described in Chap. 3. We start here by describing the most natural conditions, to be called **standard vertex conditions**, imposed at each vertex $V^m$:

$$\begin{cases} x_i, x_j \in V^m \Rightarrow u(x_i) = u(x_j) & \text{— continuity condition}, \\[2mm] \displaystyle\sum_{x_j \in V^m} \partial u(x_j) = 0 & \text{— Kirchhoff condition}. \end{cases} \tag{2.27}$$

For every *m*, formula (2.27) gives $d^m$ independent conditions on the function *u*, where $d^m$ is the valence of the vertex $V^m$. These two conditions together are sometimes called **Kirchhoff, Neumann, natural**, or **free** in the literature, but we prefer to reserve the name Kirchhoff for the balance condition on the derivatives alone.

For degree one vertices, these conditions are reduced to just one Neumann condition as follows

$$
\partial u(x_j) = 0, \quad x_j \in V^m, \quad d^m = 1. \tag{2.28}
$$

This fact explains why standard vertex conditions are often called Neumann.

In the case of two intervals joined together, the standard vertex conditions imply that the function and its extended derivative are continuous at this vertex. The vertex can be removed in this case and two intervals may be substituted by one with the length equal to the sum of the lengths of the two original intervals. This property explains why standard conditions are sometimes called free. Any other condition at such vertex corresponds to a certain point interaction or separates the two intervals.

We have just described so-called standard vertex conditions. These conditions are often chosen when it is not known which particular properties of the vertex are required from the model.

## *2.1.4 Definition of the Operator*

In this subsection, we are going to sum up our discussions and define the standard magnetic Schrödinger operator under stronger assumptions (2.19) and (2.20) on the potentials.

**Definition 2.2** The **standard magnetic Schrödinger operator** $L^{\text{st}}_{q,a}$ is defined by the differential expression (2.17) on the domain of functions from the Sobolev space $W_2^2(\Gamma \setminus \mathbf{V})$ satisfying the standard vertex conditions (2.27) at all vertices.

The standard Schrödinger and Laplace operators are defined similarly. In what follows we shall often use the simplified notation *Lq,a* for the operator with standard vertex conditions. This is motivated by the fact that the standard operators are uniquely determined by the metric graphs and the differential expressions. Moreover, the standard Laplacian is determined by the metric graph alone.

The standard operators are self-adjoint. This fact will be proved in the following chapter, but already now we can see that these operators are symmetric. Consider the boundary form given by (2.25), and rearrange the summation as follows

$$\langle L_{q,a} u, v\rangle - \langle u, L_{q,a} v\rangle = \sum_{m=1}^{M} \left\{ \underbrace{\left(\sum_{x_j\in V^m} \overline{\partial u(x_j)}\right)}_{=0} v(V^m) - \overline{u(V^m)} \underbrace{\left(\sum_{x_j\in V^m} \partial v(x_j)\right)}_{=0} \right\} = 0.$$

Note that the values *u(V m), v(V m)* are well-defined, since the functions satisfying the standard vertex conditions are continuous even at the vertices. The sum of the extended normal derivatives over each particular vertex is zero due to the Kirchhoff condition in (2.27), hence the boundary form is zero implying that the operator is symmetric.

In what follows we are going to call any Schrödinger operator on a metric graph a **quantum graph**, referring to its spectrum as the **spectrum of the quantum graph**. The standard Laplacian is uniquely determined by the metric graph, therefore we are going to refer to its spectrum as the **spectrum of the metric graph**. Several chapters below will be devoted to the spectral analysis of metric graphs.

## **2.2 Elementary Examples**

In this section we are going to look at a few examples of quantum graphs and calculate their spectra. A function $\psi \in L_2(\Gamma)$ is an eigenfunction of the operator if it satisfies the differential equation


**Fig. 2.4** The circle graph $\Gamma_{(1.2)}$

$$\left(i\frac{d}{d\mathbf{x}} + a(\mathbf{x})\right)^2 \psi(\mathbf{x}) + q(\mathbf{x})\psi(\mathbf{x}) = \lambda\psi(\mathbf{x})\tag{2.29}$$

on every edge and the vertex conditions at every vertex, in our case the standard vertex conditions (2.27). Here, *λ* is the spectral parameter. Generalised eigenfunctions corresponding to the continuous spectrum are not necessarily from $L_2(\Gamma)$ but satisfy the same differential equation and vertex conditions. We are going to discuss spectral properties of quantum graphs in more detail later on, but consider elementary examples here. In all these examples the potentials will be identically equal to zero, hence we are going to look at spectral properties of metric graphs.

**The Ring Graph** Consider the ring graph $\Gamma_{(1.2)}$ depicted in Fig. 2.4. The corresponding standard Laplacian *L* has purely discrete spectrum, and we are going to calculate it. It is clear that only the length of the interval is important, hence let us identify the edge with the interval $[x_1, x_2] = [-\ell_1/2, \ell_1/2]$, where $\ell_1$ is the length of the ring.

The differential equation (2.29) takes the form

$$-\psi'' = k^2 \psi, \quad \lambda = k^2,\tag{2.30}$$

and the standard vertex conditions (2.27) are

$$ \begin{cases} \psi(-\ell\_1/2) = \psi(\ell\_1/2), \\ \psi'(-\ell\_1/2) = \psi'(\ell\_1/2). \end{cases} $$

Consider first the case $\lambda = 0$; then the general solution to the differential equation is given by

$$
\psi(x) = A x + B.
$$

Substitution into the vertex conditions gives a unique (up to a multiplier) eigenfunction

$$
\psi_1(x) \equiv 1.
$$

Assume now that $\lambda \neq 0$; then any solution to the differential equation (2.30) can be written as

$$\psi(\mathbf{x}) = c\_1 \cos k\mathbf{x} + c\_2 \sin k\mathbf{x}, \quad c\_1, c\_2 \in \mathbb{C}. \tag{2.31}$$

Substituting (2.31) into the vertex conditions, we get the homogeneous linear system

$$\begin{cases} \cos k\ell\_1/2 \cdot c\_1 - \sin k\ell\_1/2 \cdot c\_2 = \cos k\ell\_1/2 \cdot c\_1 + \sin k\ell\_1/2 \cdot c\_2, \\ k\sin k\ell\_1/2 \cdot c\_1 + k\cos k\ell\_1/2 \cdot c\_2 = -k\sin k\ell\_1/2 \cdot c\_1 + k\cos k\ell\_1/2 \cdot c\_2, \end{cases}$$

$$\Rightarrow \begin{cases} \sin k\ell\_1/2 \cdot c\_2 = 0, \\\\ \sin k\ell\_1/2 \cdot c\_1 = 0. \end{cases}$$

The eigenvalues $\left(\frac{2\pi}{\ell_1}\right)^2 n^2$, $n = 1, 2, 3, \dots$, have multiplicity 2 with the eigenfunctions

$$
\psi\_n^{\mathbf{e}}(\mathbf{x}) = \cos\left(\frac{2\pi}{\ell\_1}n\mathbf{x}\right), \quad \psi\_n^{\mathbf{o}}(\mathbf{x}) = \sin\left(\frac{2\pi}{\ell\_1}n\mathbf{x}\right).
$$

The eigenfunctions can be divided into even and odd ones due to the fact that the operator is invariant under the change of variables $x \mapsto -x$. The appearance of multiple eigenvalues distinguishes this operator from any Schrödinger operator on a finite interval. Of course, the eigenvalues satisfy Weyl's asymptotic law (see (4.25) below).
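The eigenvalues of the ring, together with their double multiplicity, can be checked numerically. The sketch below is my own (the circumference $\ell_1 = 2$ and the grid size are assumed values chosen for illustration); it discretises $-d^2/dx^2$ with periodic conditions by second-order finite differences:

```python
import numpy as np

ell = 2.0                     # circumference of the ring (assumed value)
M = 800                       # grid points, spacing h
h = ell / M
# second-order finite-difference Laplacian with periodic conditions,
# discretising -psi'' on the circle of length ell
A = (2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h**2
A[0, -1] = A[-1, 0] = -1.0 / h**2
evals = np.sort(np.linalg.eigvalsh(A))

# exact spectrum: 0 (simple) and (2*pi*n/ell)^2 with multiplicity 2
exact = [0.0] + [(2 * np.pi * n / ell) ** 2 for n in (1, 1, 2, 2, 3, 3)]
print(np.allclose(evals[:7], exact, rtol=1e-3, atol=1e-8))  # True
```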

**The Lasso Graph** Next consider the non-compact lasso graph $\Gamma$ formed by a ring and one semi-infinite interval attached to it. This graph can be defined as a union of two intervals $[x_1, x_2]$ and $[x_3, \infty)$ with the endpoints $x_1, x_2,$ and $x_3$ identified, *i.e.* the graph has one vertex $V^1 = \{x_1, x_2, x_3\}$ (Fig. 2.5).

Consider the standard Laplace operator *L* defined on the functions from $W_2^2(\Gamma \setminus V^1)$ satisfying standard vertex conditions at the vertex $V^1$.

The spectrum of *L* is formed by the branch $[0, \infty)$ of absolutely continuous spectrum and an infinite sequence of (embedded) eigenvalues $\lambda_n$ tending to $+\infty$. The generalised eigenfunctions corresponding to the absolutely continuous spectrum are bounded solutions to the differential equation (2.30) satisfying the vertex conditions; they do not belong to the domain of the operator or even

**Fig. 2.5** Non-compact lasso graph

to the Hilbert space. These functions are given by a combination of incoming and outgoing waves on the semi-infinite edges.5

For the sake of convenience, let us choose the parametrization of the edges so that

$$E\_1 = [\mathbf{x}\_1, \mathbf{x}\_2] = [-\ell\_1/2, \ell\_1/2], \quad E\_2 = [\mathbf{x}\_3, \infty) = [0, \infty).$$

The operator is invariant under the change of variables

$$J: x \mapsto \begin{cases} -x, \ x \in E\_1, \\ x, \quad x \in E\_2, \end{cases}$$

defined on functions as

$$Jf(\mathbf{x}) = f(J\mathbf{x}).$$

This change of variables preserves all points on the semi-infinite interval and reflects the finite interval. Even and odd eigenfunctions can be calculated separately. The eigenfunctions including generalised eigenfunctions corresponding to the absolutely continuous spectrum are solutions to (2.30) on every edge satisfying the standard vertex conditions at the vertex.

The odd eigenfunctions satisfying *J u* = −*u* are necessarily equal to zero on the semi-infinite interval. On the loop these functions are given by

$$
\psi = c \sin kx.
$$

Substitution into the vertex conditions gives:

$$\begin{cases} c\sin k\ell\_1/2 = c\sin k(-\ell\_1)/2 = \psi(\mathbf{x}\_3) = 0; \\ kc\cos k\ell\_1/2 - kc\cos k(-\ell\_1)/2 + \psi'(\mathbf{x}\_3) \\ \quad = kc\cos k\ell\_1/2 - kc\cos k(-\ell\_1)/2 + 0 = 0. \end{cases}$$

The second condition is satisfied for any *k*, while the first condition gives us the quantization rule:

$$\sin k\ell\_1/2 = 0 \Rightarrow k = \frac{2\pi}{\ell\_1}n, \ n = 1, 2, \dots$$

The corresponding eigenvalues are

$$\left(\frac{2\pi}{\ell\_1}\right)^2 n^2, \quad n = 1, 2, 3, \dots$$

<sup>5</sup> For example, the Dirichlet Laplacian on $[0, \infty)$ has generalised eigenfunctions $\psi = \sin kx$, $k \in \mathbb{R}_+$, which are uniformly bounded solutions of (2.30) that in addition are equal to zero at the origin.

The even eigenfunctions are scattered waves *ψ* and we use the usual representation with the reflection coefficient *R*

$$\psi(\mathbf{x}) = \begin{cases} c \cos k\mathbf{x}, & \mathbf{x} \in E\_1, \\ \exp(-ik\mathbf{x}) + R(k)\exp(ik\mathbf{x}), & \mathbf{x} \in E\_2. \end{cases} \tag{2.32}$$

The component $\psi|_{E_2}$ is written as a sum of the incoming wave $\exp(-ikx)$ with unit amplitude and the outgoing wave $\exp(ikx)$ with amplitude *R*, in order to recall the scattering theory for the one-dimensional Schrödinger equation, where this representation holds asymptotically for large *x* [442]. The coefficient *R* is then called the reflection coefficient. Substitution into the vertex conditions gives the 2 × 2 linear system

$$\begin{cases} c\cos k\ell/2 = 1+R\\ 2ck\sin k\ell/2 + ik(-1+R) = 0 \end{cases} \Rightarrow \begin{pmatrix} \cos k\ell/2 & -1\\ 2\sin k\ell/2 & i \end{pmatrix} \begin{pmatrix} c\\ R \end{pmatrix} = \begin{pmatrix} 1\\ i \end{pmatrix}.$$

Solving the system we get the reflection coefficient

$$R(k) = \frac{\cos k\ell/2 + 2i \sin k\ell/2}{\cos k\ell/2 - 2i \sin k\ell/2}. \tag{2.33}$$

The reflection coefficient has modulus one on the real axis $k \in \mathbb{R}$. The singularities are situated at the points where $\tan k\ell/2 = -i/2$. These points are different from the $k_n$ calculated earlier.
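Both claims about (2.33) are easy to confirm numerically. The sketch below is my own (the loop length $\ell = 1$ and the sample points are assumed values); it evaluates $R(k)$ on the real axis and at the first eigenvalue point $k_1 = 2\pi$:

```python
import cmath
import math

def R(k, ell=1.0):
    """Reflection coefficient (2.33) for the lasso graph, loop length ell."""
    z = k * ell / 2
    return (cmath.cos(z) + 2j * cmath.sin(z)) / (cmath.cos(z) - 2j * cmath.sin(z))

# modulus one on the real axis: the numerator and denominator are
# complex conjugate up to sign-free modulus, so |R(k)| = 1 for real k
print(all(abs(abs(R(k)) - 1) < 1e-12 for k in (0.3, 1.7, 4.2, 9.9)))  # True

# the eigenvalue points k_n = 2*pi*n of the embedded discrete spectrum are
# regular points of R: the singularities sit at complex k with tan(k*ell/2) = -i/2
k1 = 2 * math.pi
print(abs(abs(R(k1)) - 1) < 1e-12)  # True — no pole at k_1
```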

It follows that the discrete spectrum of the operator on a graph cannot always be determined from the corresponding scattering coefficient, as it can be for the Schrödinger operator in $\mathbb{R}^n$. In our example, the discrete spectrum eigenfunctions and the scattered waves belong to different subspaces of the Hilbert space $L_2(\Gamma)$, *i.e.* the eigenfunctions possess different symmetries, hence it is not surprising that the singularities of the scattering coefficient do not coincide with the discrete spectrum. One may think that this phenomenon occurs just due to the symmetry of the graph $\Gamma$. In my opinion, the symmetry merely facilitates the occurrence of this phenomenon, but is not necessary. The reason the discrete spectrum eigenfunctions are not *seen* from the scattering coefficient is that they vanish at the vertex $V^1$.

**Figure Eight Graph and Isoscattering** Using symmetries of metric graphs, one may construct interesting examples of isospectral and isoscattering graphs. Two quantum graphs are called **isoscattering** if the corresponding scattering matrices<sup>6</sup> are equal. Of course the isoscattering property depends heavily on the potential. Since the Laplace operator on a metric graph is uniquely determined by the metric graph, one speaks about isoscattering graphs if the scattering matrices for the two

<sup>6</sup> If you are not familiar with the definition of the stationary scattering matrix, consult Sect. 3.3.1 (where the vertex scattering matrix is introduced) or Sect. 18.3.2 (where formal definition can be found).

Laplacians coincide. The first example of isoscattering graphs was constructed in [355] and is presented in Fig. 2.8. It is assumed that the following relations between the edge lengths of the graphs hold:

$$\begin{aligned} \ell\_1 &= \ell\_2 = \ell\_1' + \ell\_3' = \ell\_2' + \ell\_4';\\ \ell\_1' &= \ell\_2'. \end{aligned} \tag{2.34}$$

It is remarkable that these two graphs have different topological structures. This counterexample shows that the scattering matrix does not determine the number of cycles in the graph or its size, since the lengths $\ell'_1 = \ell'_2$ can be chosen arbitrarily.

**Problem 1** Prove that *λ*<sup>1</sup> = 0 is an eigenvalue for the standard Laplacian on any compact finite graph. What is the multiplicity of this eigenvalue?

**Problem 2** Calculate the spectrum of the standard Laplacian of the compact star graph formed by three intervals of length 1, shown in Fig. 2.6.

**Problem 3** Calculate the spectrum of the standard Laplacian on the figure eight graph *-(*2*.*4*)* shown in Fig. 2.7, assuming that

(a) the lengths of the loops are equal *ℓ*<sup>1</sup> = *ℓ*<sup>2</sup> = *π,*

(b) the lengths *ℓ*<sup>1</sup> and *ℓ*<sup>2</sup> are arbitrary.

**Problem 4** Consider any compact metric graph and the standard Laplacian on it. What happens to the spectrum if one doubles the lengths of all edges?

**Problem 5 (Kurasov-Stenberg)** [355] Consider the two graphs $\Gamma$ and $\Gamma'$ presented in Fig. 2.8. Prove that the scattering matrices for the Laplace operators on the graphs $\Gamma$ and $\Gamma'$ are equal under the assumption that

$$\begin{array}{l} \ell\_1 = \ell\_2 = \ell\_1' + \ell\_3' = \ell\_2' + \ell\_4';\\ \ell\_1' = \ell\_2'; \ \ell\_3' = \ell\_4'.\end{array}$$

**Fig. 2.6** Compact star graph *-(*3*.*2*)*

**Fig. 2.7** Figure eight graph *-(*2*.*4*)*

Calculate the scattering matrix for the graph $\Gamma$. Calculate the spectra of the Laplacians on $\Gamma$ and $\Gamma'$.

**Isospectral Graphs** Two operators are called **isospectral** if they have the same spectrum. We present here one example of isospectral graphs developed by B. Gutkin and U. Smilansky in [252]. This example grew out of the famous counterexample to M. Kac's question 'Can one hear the shape of a drum?' formulated in 1966. The counterexample constructed by C. Gordon, D.L. Webb, and S. Wolpert [244] may be modified in order to simulate metric graphs. As the counterexample provided two drums having precisely the same Laplacian spectrum, the two obtained metric graphs are isospectral. The two isospectral graphs (trees), presented first in [252], are shown in Fig. 2.9.

**Fig. 2.11** Kurasov-Muller graphs

**Problem 6 (Gutkin-Smilansky [252])** The spectra of the Laplacians on the graphs presented in Fig. 2.9 are given by zeroes of the following two functions [252]

$$\begin{aligned} Z\_I(k) &= \tan(2(a+b)k) \\ &\quad + \frac{2\tan ak + 2\tan bk + \tan(2a+b)k + \tan(a+2b)k}{1 - (2\tan ak + \tan bk)\left(\tan bk + \tan(2a+b)k + \tan(a+2b)k\right)}, \\ Z\_{II}(k) &= \tan 2ak \\ &\quad + \frac{2\tan ak + 2\tan bk + \tan(a+2b)k + \tan(2a+3b)k}{1 - (\tan ak + \tan bk + \tan(a+2b)k)(\tan ak + \tan bk + \tan(2a+3b)k)}. \end{aligned} \tag{2.35}$$

Show that the zeroes of the two functions *Z<sub>I</sub>(k)* and *Z<sub>II</sub>(k)* coincide.

**Problem 7 (Parzanchevski-Band [424])** Consider the Laplace operator defined on the graphs depicted in Fig. 2.10. Dirichlet and Neumann conditions<sup>7</sup> (indicated by the letters *D* and *N*) are introduced at different degree one vertices and standard vertex conditions at all internal vertices. Prove that the corresponding operators are isospectral assuming the indicated lengths of the edges.

**Problem 8 (Kurasov-Muller)** The two graphs depicted in Fig. 2.11 are equilateral, *i.e.* all edge lengths are equal. Calculate the spectra of these graphs and show that they are isospectral.

There is no general understanding of how isospectral graphs may be constructed. It is clear that the edge lengths of such graphs have to be rationally dependent (as will be shown in Sect. 9.4), and it is believed that symmetry arguments should play an important role in the classification of isospectral families. The Gutkin-Smilansky and Parzanchevski-Band examples arise by reducing large graphs with symmetries

<sup>7</sup> The Dirichlet and Neumann conditions at degree one vertices are *u(x<sub>j</sub>)* = 0, *x<sub>j</sub>* ∈ *V<sup>m</sup>*, and *∂*<sub>n</sub>*u(x<sub>j</sub>)* = 0, *x<sub>j</sub>* ∈ *V<sup>m</sup>*, respectively.

to certain fundamental domains. The last example (Kurasov-Muller) [342] can be constructed by cutting one of the vertices of the watermelon graph **W**<sub>4</sub> in two different ways (Fig. 2.12).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 3 Vertex Conditions**

The goal of this chapter is to describe the most general vertex conditions for Schrödinger operators on metric graphs and how these conditions are connected to the graph's topology. As we already mentioned, different types of vertex conditions may be required in order to reflect special properties of the vertices. Considering only standard and Dirichlet conditions is often sufficient, so one may get the impression that this chapter can be skipped by readers not aiming to study differential operators on metric graphs in full detail. This is not completely true, since the ideas developed in this chapter will be used later on, for example when deriving the trace formula.

# **3.1 Preliminary Discussion**

We have seen that differential operators on metric graphs require introducing special conditions connecting limiting values of functions and their normal derivatives at the vertices. The role of such vertex conditions is two-fold:


The Hilbert space *L*<sub>2</sub>(Γ) and the formal differential expression (2.17) do not reflect how different edges are connected to each other. It is the vertex conditions that determine the connectivity of the graph, and therefore this question requires more attention than one might expect at first glance.

Assume that a metric graph is given and we are interested in studying all appropriate vertex conditions. Our experience tells us that we need as many conditions as the number of endpoints—the sum of degrees of all vertices. In order to reflect the graph's connectivity properly, these conditions should connect together only the limit values associated with each vertex separately. It follows that

each vertex can be considered independently, and therefore it is wise to write the boundary form (2.25) collecting together the terms corresponding to each vertex:

$$
\langle L\_{q,a}^{\max} u, v \rangle - \langle u, L\_{q,a}^{\max} v \rangle = \sum\_{m=1}^{M} \left( \sum\_{\mathbf{x}\_{j} \in V^{m}} \left\{ \overline{\partial u(\mathbf{x}\_{j})} \cdot v(\mathbf{x}\_{j}) - \overline{u(\mathbf{x}\_{j})} \cdot \partial v(\mathbf{x}\_{j}) \right\} \right). \tag{3.1}$$

For every vertex of valence *d<sup>m</sup>* one writes precisely *d<sup>m</sup>* linearly independent conditions so that the corresponding expression

$$\sum\_{\mathbf{x}\_{j}\in V^{m}} \left\{ \overline{\partial u(\mathbf{x}\_{j})} \cdot \upsilon(\mathbf{x}\_{j}) - \overline{u(\mathbf{x}\_{j})} \cdot \partial \upsilon(\mathbf{x}\_{j}) \right\}$$

$$= \left\langle \partial \vec{u}(V^{m}), \vec{v}(V^{m}) \right\rangle\_{\mathbb{C}^{d^{m}}} - \left\langle \vec{u}(V^{m}), \partial \vec{v}(V^{m}) \right\rangle\_{\mathbb{C}^{d^{m}}} \tag{3.2}$$

vanishes for each *m*, ensuring that the operator is symmetric. Here,

$$
\vec{u}(V^m) = \{u(\mathbf{x}\_j)\}\_{j=1}^{d^m} \quad \text{and} \quad \partial \vec{u}(V^m) = \{\partial u(\mathbf{x}\_j)\}\_{j=1}^{d^m}, \tag{3.3}
$$

denote the *d<sup>m</sup>*-dimensional vectors of limit values at the vertex *V<sup>m</sup>*. It is not hard to give examples of vertex conditions that guarantee that the boundary form vanishes:

• Dirichlet conditions:

$$
\vec{u}(V^m) = \vec{0},
$$

• Neumann conditions:

$$
\partial \vec{u} (V^m) = \vec{0},
$$

• (generalised) Robin conditions:

$$
\partial \vec{u}(V^m) = A^m \vec{u}(V^m),
$$

where *A<sup>m</sup>* is a Hermitian matrix in **C**<sup>*d<sup>m</sup>*</sup>.

However, these families do not cover all possible vertex conditions. In order to obtain all possible conditions, one needs to consider a certain combination of Robin and Dirichlet conditions (as will be shown in the following section).

One may think that any set of *d<sup>m</sup>* such conditions guaranteeing zero boundary form is appropriate, but it is necessary to take into account one more aspect. Assume that the endpoints in the vertex *V<sup>m</sup>* can be divided into two non-intersecting

**Fig. 3.1** Splitting a vertex

classes *V<sup>m'</sup>* and *V<sup>m''</sup>*,

$$V^{m'} \cup V^{m''} = V^m, \quad V^{m'} \cap V^{m''} = \emptyset,$$

so that the vertex conditions connect just the limit values associated with each of these subclasses separately (see Fig. 3.1). Then such vertex conditions correspond to the graph with two vertices *V<sup>m'</sup>* and *V<sup>m''</sup>*, rather than with one vertex *V<sup>m</sup>*. If such a separation is impossible, the vertex conditions will be called **properly connecting**. In what follows we consider only properly connecting conditions, unless something else is required for specific reasons. If the separation described above is possible, we say that the vertex *V<sup>m</sup>* **splits** into the two vertices *V<sup>m'</sup>* and *V<sup>m''</sup>*.

In this chapter, we are going to describe all appropriate vertex conditions for star graphs. Such a parametrisation can be done in different (equivalent) ways, and we collect the most widely used parametrisations employed in the book. We are convinced that the parametrisation using the irreducible unitary matrix *S* (3.21) is the most appropriate, since this parameter has a clear physical interpretation—it coincides with the vertex scattering matrix. Moreover, this parametrisation is unique and guarantees that the vertex conditions are properly connecting.

## **3.2 Vertex Conditions for the Star Graph**

Consider any star graph formed by *d* semi-infinite edges *E<sub>n</sub>* = [*x<sub>n</sub>*, ∞), *n* = 1, 2, ..., *d*, joined together at one central vertex *V* = {*x*<sub>1</sub>, *x*<sub>2</sub>, ..., *x<sub>d</sub>*} (having degree *d*). The boundary form of the maximal operator is given by:

$$\begin{aligned} \langle L^{\max}u, v \rangle\_{L\_2(\Gamma)} - \langle u, L^{\max}v \rangle\_{L\_2(\Gamma)} &= \langle \partial \vec{u}, \vec{v} \rangle\_{\mathbb{C}^d} - \langle \vec{u}, \partial \vec{v} \rangle\_{\mathbb{C}^d} \\ &=: B[U, V], \end{aligned} \tag{3.4}$$

where *U* = (*u*, *∂u*) ∈ **C**<sup>2*d*</sup>. The (sesquilinear) form *B* introduced above does not depend on the behaviour of the functions *u* and *v* inside the edges, but is given via their limit values at the vertex.

We have seen that in order to determine a self-adjoint operator corresponding to the formal expression (2.17), one has to introduce precisely *d* linearly independent conditions connecting the limit values *U* = (*u*, *∂u*) ∈ **C**<sup>2*d*</sup>. These conditions should be chosen so that the boundary form *B*[*U*, *V*] vanishes whenever both *U* and *V* satisfy the conditions. In other words, in the space **C**<sup>2*d*</sup> one has to select a *d*-dimensional subspace *M* such that *B*[*U*, *V*] vanishes, provided *U*, *V* ∈ *M*. This is a standard problem from linear algebra and it is not hard to give examples of such subspaces, but we would like to describe all possible such subspaces. The corresponding conditions will be called Hermitian.

**Definition 3.1** Conditions relating the limit values (*u*, *∂u*) ∈ **C**<sup>2*d*</sup> at a vertex *V* of degree *d* are called **Hermitian** if and only if the boundary form *B*[*U*, *V*] given by (3.4) vanishes for all *U*, *V* satisfying the conditions.


Every *d*-dimensional subspace *M* ⊂ **C**<sup>2*d*</sup> can be described as the image of a linear map from **C**<sup>*d*</sup> to **C**<sup>2*d*</sup>, and hence as the set of (*Et*, *Ft*) for *t* ∈ **C**<sup>*d*</sup>, where *E* and *F* are *d* × *d* matrices. For reasons that will become clear in a moment, we shall write *E* = *B*<sup>∗</sup> and *F* = *A*<sup>∗</sup> for suitable matrices *A* and *B*.

The subspace

$$M := \left\{ U = (B^\*t, A^\*t) : t \in \mathbb{C}^d \right\} \tag{3.5}$$

has dimension *d* only if the *d* × 2*d* matrix *(A, B)* has maximal rank:

$$\text{rank}\,(A,B) = d.\tag{3.6}$$

In fact, the dimension of *M* is less than *d* if and only if there exists a vector *t*<sub>0</sub> ∈ **C**<sup>*d*</sup>, *t*<sub>0</sub> ≠ 0, such that *B*<sup>∗</sup>*t*<sub>0</sub> = *A*<sup>∗</sup>*t*<sub>0</sub> = 0. Hence, for any *s* ∈ **C**<sup>*d*</sup>, we have

$$
\langle B^\* t\_0, s \rangle = \langle A^\* t\_0, s \rangle = 0 \Leftrightarrow \langle t\_0, Bs \rangle = \langle t\_0, As \rangle = 0,
$$

*i.e.* the ranges of *A* and *B* are both orthogonal to *t*<sub>0</sub>, so rank *(A, B) < d*.

The boundary form *B* vanishes on *M* × *M* provided the matrix *AB*<sup>∗</sup> is Hermitian, *i.e.*

$$AB^\* = BA^\*.\tag{3.7}$$

To prove this statement, let us consider two arbitrary vectors *U,V* ∈ *M*

$$U = (B^\*t, A^\*t), \quad V = (B^\*s, A^\*s),$$

where *t*, *s* ∈ **C**<sup>*d*</sup>. The boundary form can be expressed using *s*, *t* as follows:

$$\begin{split} B[U, V] &= \langle \partial \vec{u}, \vec{v} \rangle\_{\mathbb{C}^d} - \langle \vec{u}, \partial \vec{v} \rangle\_{\mathbb{C}^d} \\ &= \langle A^\*t, B^\*s \rangle\_{\mathbb{C}^d} - \langle B^\*t, A^\*s \rangle\_{\mathbb{C}^d} \\ &= \langle BA^\*t, s \rangle\_{\mathbb{C}^d} - \langle AB^\*t, s \rangle\_{\mathbb{C}^d}, \end{split} \tag{3.8}$$

which vanishes if and only if *AB*<sup>∗</sup> is Hermitian. Thus we have proven that all self-adjoint operators on the star graph can be parametrised by *d*-dimensional subspaces *M* of the form (3.5). But this description of self-adjoint extensions is not convenient, since in order to determine whether a function *u* belongs to the domain of the operator, one has to check whether its limit values *U* can be presented as *U* = (*B*<sup>∗</sup>*t*, *A*<sup>∗</sup>*t*) with a certain vector *t* ∈ **C**<sup>*d*</sup>.

It turns out that *M* can be described as the set of all vectors *U* ∈ **C**<sup>2*d*</sup> satisfying the vertex conditions [309]

$$A\vec{u} = B\,\partial\vec{u}.\tag{3.9}$$

It is trivial that every *U* ∈ *M* satisfies (3.9), as the matrix *AB*<sup>∗</sup> is Hermitian and therefore *AB*<sup>∗</sup>*t* = *BA*<sup>∗</sup>*t*. Moreover, due to (3.6), the set of vectors satisfying (3.9) forms a *d*-dimensional subspace, which has to be equal to *M*, since *M* is also *d*-dimensional. Formula (3.9) explains our unusual choice of the matrices *B*<sup>∗</sup> and *A*<sup>∗</sup> instead of *E* and *F* in the definition of *M*.

We have proved the following theorem:

**Theorem 3.2** *Any Hermitian vertex condition at the vertex V of degree d can be written in the form* 

$$A\,\vec{u} = B\,\partial\vec{u},\tag{3.10}$$

*where u and ∂u denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex. The d* × *d matrices A and B can be chosen arbitrarily, provided that the rank of the d* ×2*d matrix (A, B) is maximal, and the matrix AB*∗ *is Hermitian* 

$$\text{rank}\,(A,B) = d \quad \text{and} \quad AB^\* = BA^\*. \tag{3.11}$$

The subspace *M* (and therefore the self-adjoint operator) is not changed if the matrices *A* and *B* are replaced with *CA* and *CB*, where *C* is any *d* × *d* nonsingular matrix. It follows that there is no one-to-one correspondence between the pairs of matrices and the self-adjoint operators. This fact makes it difficult to use this parametrisation when inverse problems are discussed. It is also not straightforward to check whether the corresponding conditions are properly connecting or not. It is clear that if *A* and *B* are block-diagonal with the same block structure, then the

vertex conditions are not properly connecting. Consider just the following explicit example.

**Example 3.3** Let *Γ* be the star graph formed by three semi-axes joined together at the vertex *V* = {*x*<sub>1</sub>, *x*<sub>2</sub>, *x*<sub>3</sub>} (see Fig. 3.2), and let the vertex conditions be given by

$$
\underbrace{\begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}}\_{=:A} \begin{pmatrix} u(\boldsymbol{x}\_{1}) \\ u(\boldsymbol{x}\_{2}) \\ u(\boldsymbol{x}\_{3}) \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}}\_{=:B} \begin{pmatrix} \partial u(\boldsymbol{x}\_{1}) \\ \partial u(\boldsymbol{x}\_{2}) \\ \partial u(\boldsymbol{x}\_{3}) \end{pmatrix}.

It is clear that *AB*<sup>∗</sup> = 0 = *BA*<sup>∗</sup> and the rank of *(A, B)* is 3*.* Therefore the corresponding vertex conditions are Hermitian.

But both *A* and *B* are block-diagonal matrices with blocks of size 2×2 and 1×1*,* which allows one to write the same vertex conditions in the form:

$$
\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} u(\mathbf{x}\_1) \\ u(\mathbf{x}\_2) \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} \partial u(\mathbf{x}\_1) \\ \partial u(\mathbf{x}\_2) \end{pmatrix} \quad \& \quad u(\mathbf{x}\_3) = 0,
$$

or even as

$$\begin{cases} u(\mathbf{x}\_1) = u(\mathbf{x}\_2) \\ \partial u(\mathbf{x}\_1) = -\partial u(\mathbf{x}\_2) \end{cases} \quad \& \quad u(\mathbf{x}\_3) = 0.$$

These conditions are not properly connecting and correspond to a line and a half-line, which are independent of each other, rather than to the star graph formed by three semi-axes.
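The algebra behind Example 3.3 is easy to verify numerically. The sketch below (helper names are ours) checks that *AB*<sup>∗</sup> = *BA*<sup>∗</sup> = 0 and that rank (*A*, *B*) = 3, so the conditions are Hermitian even though they are not properly connecting.

```python
from fractions import Fraction

A = [[1, -1, 0], [0, 0, 0], [0, 0, 1]]
B = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]

def matmul_adj(X, Y):
    # X * Y^T, which equals X * Y^* for real matrices
    return [[sum(X[i][k] * Y[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rank(rows):
    # Gaussian elimination over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

assert matmul_adj(A, B) == [[0] * 3] * 3            # AB* = 0
assert matmul_adj(B, A) == [[0] * 3] * 3            # BA* = 0
assert rank([A[i] + B[i] for i in range(3)]) == 3   # rank (A, B) = d = 3
```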

Multiplication of the matrices *A* and *B* by a non-singular matrix *C* may destroy the block-diagonal structure, in which case it is hard to see that these conditions can be written so that they connect only the limit values corresponding to two subvertices.

## **3.3 Vertex Conditions Via the Vertex Scattering Matrix**

In this section we are going to describe another possible equivalent parametrisation of all vertex conditions using the scattering matrix—a unitary matrix describing how the waves are transmitted by the vertex. This parametrisation has the following advantages:


In what follows, we are going to mainly use this parametrisation in our studies.

## *3.3.1 The Vertex Scattering Matrix*

We introduce here the notion of the vertex scattering matrix. Consider the Laplace operator *L*(*A*, *B*) on the star graph, defined by −*d*<sup>2</sup>/*dx*<sup>2</sup> on the domain of functions satisfying (3.10). The absolutely continuous spectrum of this operator is the same as for the Dirichlet Laplacian *L<sup>D</sup>*—the second derivative operator defined on functions satisfying Dirichlet conditions at the vertex—and coincides with the interval [0, ∞); the multiplicity is *d*. The corresponding generalised eigenfunctions of *L*(*A*, *B*), often called scattered waves, are uniformly bounded solutions to the differential equation

$$-\frac{d^2}{dx^2}\psi = \lambda\psi$$

satisfying the vertex conditions (3.10). Every solution to this differential equation on each interval [*x<sub>j</sub>*, ∞) can be written in the form

$$\psi(\mathbf{x})|\_{E\_j=[\mathbf{x}\_j,\infty)} = \mathbf{e}^{-ik(\mathbf{x}-\mathbf{x}\_j)}b\_j + \mathbf{e}^{ik(\mathbf{x}-\mathbf{x}\_j)}a\_j, \ k \in \mathbb{R}\_+.\tag{3.12}$$

One should think of the wave e<sup>−i*k*(*x*−*x<sub>j</sub>*)</sup>*b<sub>j</sub>* as a certain incoming wave, which after the interaction with the vertex is reflected as the outgoing wave e<sup>i*k*(*x*−*x<sub>j</sub>*)</sup>*a<sub>j</sub>*. Of course, the amplitudes *b<sub>j</sub>* of the incoming waves are arbitrary, while the amplitudes *a<sub>j</sub>* of the outgoing waves are determined by the whole set of *b<sub>j</sub>*, *j* = 1, 2, ..., *d*. This relation can be written in the matrix form as

$$
\vec{a} = S\_{\mathbf{v}}(k)\vec{b},\tag{3.13}
$$

where *S***v***(k)* is called the **vertex scattering matrix** corresponding to the energy *λ* = *k*<sup>2</sup>. In our case, the relation between the amplitudes of incoming and outgoing waves is obtained by inserting the function given by (3.12) into the vertex conditions.<sup>1</sup>

Let us calculate *S***v***(k)* determined by the vertex conditions (3.10). The limit values of the function *ψ* are

$$\begin{aligned} \vec{\psi} &= \vec{b} + S\_{\mathbf{v}}(k)\vec{b}, \\ \partial\vec{\psi} &= -\mathrm{i}k\vec{b} + \mathrm{i}kS\_{\mathbf{v}}(k)\vec{b}. \end{aligned}$$

Substitution into (3.10) gives the relation

$$A(I + S\_{\mathbf{v}}(k))\vec{b} = \mathrm{i}kB(-I + S\_{\mathbf{v}}(k))\vec{b},$$

leading to

$$A + \mathrm{i}kB = -(A - \mathrm{i}kB)S\_{\mathrm{V}}(k),\tag{3.14}$$

where one takes into account that the vector *b* of amplitudes of incoming waves is arbitrary. The matrix *A* − i*kB* is invertible, since otherwise the adjoint matrix *A*<sup>∗</sup> + i*kB*<sup>∗</sup> has a nontrivial kernel, *i.e.* there exists *t* such that (*A*<sup>∗</sup> + i*kB*<sup>∗</sup>)*t* = 0. But then, multiplying by *A* and taking the scalar product with *t*, we arrive at

$$\|A^\*t\|^2 - \mathrm{i}k \langle AB^\*t, t \rangle = 0.$$

Since both ‖*A*<sup>∗</sup>*t*‖<sup>2</sup> and ⟨*AB*<sup>∗</sup>*t*, *t*⟩ are real (*AB*<sup>∗</sup> is Hermitian), it follows that *A*<sup>∗</sup>*t* = 0. In a similar way we may prove that *B*<sup>∗</sup>*t* = 0, which contradicts the first assumption in (3.11), rank (*A*, *B*) = *d*.

The vertex scattering matrix can now be calculated from (3.14)

$$S\_{\mathbf{v}}(k) = -(A - \mathrm{i}k\,B)^{-1} \left( A + \mathrm{i}k\,B \right). \tag{3.15}$$

<sup>1</sup> The vertex scattering matrix introduced in this way coincides with the formal scattering matrix given as a product of wave operators associated with the self-adjoint operators *LD* (the unperturbed operator) and *L(A,B)* (the perturbed operator) as is done in abstract scattering theory [90, 442, 506].
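Formula (3.15) can be tested numerically. The sketch below (the helper names are ours) computes *S***v***(k)* for the standard conditions at a degree-3 vertex, i.e. continuity of *u* together with a vanishing sum of derivatives, and checks the well-known result that the entries equal 2/*d* − *δ<sub>ij</sub>* independently of the energy.

```python
def mat_inv(M):
    # Gauss-Jordan inversion for a small complex matrix
    n = len(M)
    a = [[complex(x) for x in row] + [1.0 + 0j if i == j else 0j for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        a[c] = [x / a[c][c] for x in a[c]]
        for r in range(n):
            if r != c:
                f = a[r][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [row[n:] for row in a]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vertex_scattering(A, B, k):
    # formula (3.15): S_v(k) = -(A - ikB)^{-1} (A + ikB)
    n = len(A)
    minus = [[A[i][j] - 1j * k * B[i][j] for j in range(n)] for i in range(n)]
    plus = [[A[i][j] + 1j * k * B[i][j] for j in range(n)] for i in range(n)]
    return [[-x for x in row] for row in mat_mul(mat_inv(minus), plus)]

# Standard (Kirchhoff) conditions at a degree-3 vertex:
# continuity of u plus vanishing sum of derivatives.
A = [[1, -1, 0], [0, 1, -1], [0, 0, 0]]
B = [[0, 0, 0], [0, 0, 0], [1, 1, 1]]

S1 = vertex_scattering(A, B, 1.0)
S2 = vertex_scattering(A, B, 7.3)
for i in range(3):
    for j in range(3):
        expected = 2 / 3 - (1 if i == j else 0)  # entries 2/d - delta_ij
        assert abs(S1[i][j] - expected) < 1e-9
        assert abs(S1[i][j] - S2[i][j]) < 1e-9   # energy independent
```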

It is easy to see that the matrix *S***v***(k)* is unitary:

$$\begin{aligned} S\_{\mathbf{v}}(k)S\_{\mathbf{v}}(k)^{\*} &= (A - \mathrm{i}kB)^{-1}(A + \mathrm{i}kB)(A^{\*} - \mathrm{i}kB^{\*})(A^{\*} + \mathrm{i}kB^{\*})^{-1} \\ &= (A^{\*} + \mathrm{i}kB^{\*})\left[(A - \mathrm{i}kB)(A^{\*} + \mathrm{i}kB^{\*})\right]^{-1} \\ &\qquad\times \left[(A + \mathrm{i}kB)(A^{\*} - \mathrm{i}kB^{\*})\right](A^{\*} + \mathrm{i}kB^{\*})^{-1} \\ &= (A^{\*} + \mathrm{i}kB^{\*})\left[AA^{\*} - \mathrm{i}kBA^{\*} + \mathrm{i}kAB^{\*} + k^{2}BB^{\*}\right]^{-1} \\ &\qquad\times \left[AA^{\*} + \mathrm{i}kBA^{\*} - \mathrm{i}kAB^{\*} + k^{2}BB^{\*}\right](A^{\*} + \mathrm{i}kB^{\*})^{-1} \\ &= I, \end{aligned}$$

where we used that *BA*<sup>∗</sup> is Hermitian due to (3.11). Note that we were able to prove that *S***v***(k)* is unitary only because *A* and *B* satisfy both conditions (3.11) and *k* is real. As we shall see later, the vertex scattering matrix has norm less than 1 if Im *k* > 0.

Unitarity of *S***v***(k)* implies that not only the vectors *b* of incoming amplitudes span the whole **C**<sup>*d*</sup>, but also the vectors *a* of outgoing amplitudes. In other words, given any *a* ∈ **C**<sup>*d*</sup> one may find the set of incoming amplitudes such that (3.13) holds. On the other hand, some entries in the scattering matrix may vanish; for example, if (*S***v***(k)*)<sub>12</sub> is zero, then the amplitude of the outgoing wave on the first edge is independent of the amplitude of the incoming wave on the second edge.

# *3.3.2 Scattering Matrix as a Parameter in the Vertex Conditions*

Our idea is to use the vertex scattering matrix to parameterise the set of vertex conditions. It is easy to see that the values of *S***v***(k)* for different *k* ∈ **R** determine each other. In particular, we are going to prove the following explicit formula (which probably appeared for the first time in [310]):

$$S\_{\mathbf{V}}(k) = \frac{(k+k\_0)S\_{\mathbf{V}}(k\_0) + (k-k\_0)I}{(k-k\_0)S\_{\mathbf{V}}(k\_0) + (k+k\_0)I},\tag{3.17}$$

where *I* denotes the *d* × *d* unit matrix. In what follows we are going to identify *α* with *αI*. There is no significance to the particular value of *k*<sub>0</sub> chosen in our parametrisation, so let us use *k*<sub>0</sub> = 1 in what follows and introduce the notation:

$$S := S\_{\mathbf{v}}(1) = -(A - \mathrm{i}B)^{-1} \left( A + \mathrm{i}B \right).\tag{3.18}$$

The unitary matrix *S* is uniquely determined by *A* and *B*, but not *vice versa*. The matrices *A* and *B* can be chosen equal to

$$\begin{cases} A = \text{i}(S - I) \\ B = \text{ } S + I \end{cases} \tag{3.19}$$

It is an easy exercise to check that the corresponding *S***v***(*1*)* = *S*. One may also prove that such a pair (*A*, *B*) satisfies conditions (3.11). The first condition can be shown by taking into account that the matrix *S* is unitary:

$$\begin{cases} AB^\* = \mathrm{i}(S - I)(S^\* + I) = \mathrm{i}(\underbrace{SS^\*}\_{=I} - S^\* + S - I) = \mathrm{i}(S - S^\*)\\ BA^\* = (S + I)(-\mathrm{i})(S^\* - I) = -\mathrm{i}(\underbrace{SS^\*}\_{=I} + S^\* - S - I) = -\mathrm{i}(S^\* - S) \\\\ \Rightarrow AB^\* = BA^\*. \end{cases}$$

The second condition follows from

$$\text{rank}\,(A,B) = \text{rank}\,(S-I,S+I) = d,$$

which holds for any unitary *S.*

To prove formula (3.17) we substitute *(A, B)* from (3.19) into formula (3.15) for the scattering matrix:

$$\begin{aligned} S\_{\mathbf{v}}(k) &= -\left(\mathrm{i}S - \mathrm{i} - \mathrm{i}kS - \mathrm{i}k\right)^{-1}\left(\mathrm{i}S - \mathrm{i} + \mathrm{i}kS + \mathrm{i}k\right) \\ &= \left((k-1)S + (k+1)\right)^{-1}\left((k+1)S + (k-1)\right), \end{aligned}$$

which is essentially (3.17) in the special case *k*<sub>0</sub> = 1. One just needs to take into account that the matrices commute, so that *S***v***(k)* can be written as a quotient.

In what follows we shall need the special case of (3.17), which expresses the vertex scattering matrix through the unitary parameter *S*:

$$S\_{\mathbf{V}}(k) = \frac{(k+1)S + (k-1)I}{(k-1)S + (k+1)I}.\tag{3.20}$$
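The consistency of (3.20) with (3.15) and (3.19) is easy to check numerically in the scalar case *d* = 1, where *S* reduces to a single eigenvalue e<sup>i*θ*</sup>; the sketch below is our own illustration.

```python
import cmath

theta = 0.7  # any angle; S = e^{i*theta} is a 1x1 unitary matrix
S = cmath.exp(1j * theta)

for k in (0.5, 1.0, 2.3, 10.0):
    # (3.19): A = i(S - I), B = S + I, inserted into (3.15):
    A, B = 1j * (S - 1), S + 1
    sv_315 = -(A + 1j * k * B) / (A - 1j * k * B)
    # (3.20) directly:
    sv_320 = ((k + 1) * S + (k - 1)) / ((k - 1) * S + (k + 1))
    assert abs(sv_315 - sv_320) < 1e-12
    assert abs(abs(sv_320) - 1) < 1e-12  # unitary for real k
```

In particular, at *k* = 1 both expressions reduce to *S* itself, in agreement with (3.18).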

## *3.3.3 On Properly Connecting Vertex Conditions*

We are going to discuss now which matrices *S* lead to properly connecting vertex conditions. Let us recall that vertex conditions are called **properly connecting** if and only if the vertex cannot be divided into two (or more) vertices, so that the vertex conditions connect only limit values belonging to each of the new vertices

separately. We have seen that one faces certain difficulties in characterising all possible properly connecting conditions when the description (3.10) via the pair (*A*, *B*) is used. On the other hand, it is clear that all not properly connecting vertex conditions lead to vertex scattering matrices *S***<sup>v</sup>** having block-diagonal form. Conversely, every such matrix leads to not properly connecting vertex conditions.

A matrix is called **reducible** if and only if it can be transformed into block upper-triangular form by a permutation of coordinates. But every unitary block upper-triangular matrix is block-diagonal, so all properly connecting vertex conditions are in one-to-one correspondence with irreducible unitary matrices *S*. Therefore, without loss of generality, we are going to restrict ourselves to irreducible unitary matrices *S* parameterising the vertex conditions.

Theorem 3.2 can be reformulated as follows

**Theorem 3.4** *The set of Hermitian properly connecting vertex conditions at a vertex V of degree d can be uniquely parameterised by d* × *d irreducible unitary matrices S by writing conditions* (3.10) *in the form*

$$\mathrm{i}\left(S - I\right)\vec{u} = \left(S + I\right)\partial\vec{u},\tag{3.21}$$

*where u and ∂u denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex.* 

Since every self-adjoint extension of the minimal operator *L*min leads to a certain unitary vertex scattering matrix *S***v***(k)*, the vertex conditions (3.21) describe all possible self-adjoint extensions [90, 442, 506].

In what follows, the self-adjoint operator corresponding to the differential expression *τ<sub>q,a</sub>* given by (2.17) on a metric graph Γ and vertex conditions (3.21) will be denoted by *L<sup>S</sup><sub>q,a</sub>*(Γ).<sup>2</sup> We shall often omit certain indices, hoping that no misunderstanding occurs.

A few other possible parametrisations of vertex conditions are described in Appendix 2. In our opinion, the parametrisation (3.21) is the most appropriate, and we are going to use it in what follows. We are going to illustrate the advantages of this parametrisation in the following section, where different properties of vertex scattering matrices are addressed.

Let us consider just one (rather applied) example that illustrates the power of this parametrisation.

**Example 3.5 ([338])** Experimental physicists [470] considered transport properties of the system of nano-wires depicted in Fig. 3.3. This problem can be described by Schrödinger equation on *-<sup>B</sup>* and requires Hermitian vertex conditions in the vertex *V* = {*x*1*, x*2*, x*3*, x*4}*.* The main question is: how does one select these conditions in order to reflect the geometry of the coupling? It is clear that, in the

<sup>2</sup> Note that we have now defined the self-adjoint operator *L<sup>S</sup><sub>q,a</sub>* only in the case of regular potentials *q* and *a* satisfying assumptions (2.19) and (2.20). The case of more general potentials will be treated in Chap. 4.

**Fig. 3.3** The graph Γ<sub>B</sub>. A bounded wire with an Aharonov-Bohm ring attached

ballistic regime, the probabilities of transport between the points *x*<sub>1</sub> and *x*<sub>3</sub>, as well as between *x*<sub>2</sub> and *x*<sub>4</sub>, are negligible. Hence it is natural to look for vertex conditions that guarantee that the following entries in the vertex scattering matrix are zero:

$$s\_{31} = s\_{13} = s\_{24} = s\_{42} = 0.\tag{3.22}$$

One may also assume that the reflection is small, leading to

$$s\_{11} = s\_{22} = s\_{33} = s\_{44} = 0.\tag{3.23}$$

If a certain entry in the vertex scattering matrix is equal to zero for one particular energy, one cannot be sure that it remains zero for all other values of the energy, since the vertex scattering matrices in general depend on the energy (see (3.20)). One may show that the vertex scattering matrix is independent of the energy if and only if the parameter *S* is not only unitary, but also Hermitian: *S* = *S*<sup>−1</sup> = *S*<sup>∗</sup> (see Sect. 3.5.1).

Every 4 × 4 real unitary Hermitian matrix satisfying conditions (3.22)–(3.23) is of the form

$$S = \begin{pmatrix} 0 & \alpha & 0 & \beta \\ \alpha & 0 & \sigma\beta & 0 \\ 0 & \sigma\beta & 0 & -\sigma\alpha \\ \beta & 0 & -\sigma\alpha & 0 \end{pmatrix},\tag{3.24}$$

where *σ* = ±1 and *α*, *β* ∈ **R** are subject to

$$
\alpha^2 + \beta^2 = 1.\tag{3.25}
$$

We required the matrix to be real in order to guarantee that all eigenfunctions may be chosen real. To ensure that the vertex conditions are properly connecting, one should require that

$$
\alpha \neq 0 \neq \beta. \tag{3.26}
$$
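The claimed properties of the matrix (3.24) can be verified directly; the sketch below uses the sample values *α* = 0.6, *β* = 0.8, *σ* = 1 (our choice, satisfying (3.25) and (3.26)).

```python
# alpha = 0.6, beta = 0.8 satisfy (3.25); sigma = 1
a, b, s = 0.6, 0.8, 1
S = [[0,   a,    0,    b  ],
     [a,   0,    s*b,  0  ],
     [0,   s*b,  0,   -s*a],
     [b,   0,   -s*a,  0  ]]

# real and symmetric, hence Hermitian
assert all(S[i][j] == S[j][i] for i in range(4) for j in range(4))
# the entries excluded by (3.22) and (3.23) vanish (0-based indices)
assert S[2][0] == S[0][2] == S[1][3] == S[3][1] == 0
assert all(S[i][i] == 0 for i in range(4))
# unitarity: for a real symmetric matrix this means S^2 = I
S2 = [[sum(S[i][k] * S[k][j] for k in range(4)) for j in range(4)]
      for i in range(4)]
assert all(abs(S2[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(4) for j in range(4))
```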

## **3.4 Parametrisation Via Hermitian Matrices**

Consider the eigenprojector *P*<sub>−1</sub> associated with the eigenvalue −1 (if any) of the unitary matrix *S* appearing in the parametrisation (3.21). The complementary projector *P*<sup>⊥</sup><sub>−1</sub> = *I* − *P*<sub>−1</sub> projects onto the linear span of the eigensubspaces associated with all other eigenvalues of *S*. Multiplying (3.21) by *P*<sub>−1</sub> from the left we arrive at

$$-2\mathrm{i}\,P\_{-1}\vec{u} = 0 \Leftrightarrow P\_{-1}\vec{u} = 0.$$

This condition means that the vector *u* has to be orthogonal to the eigenvectors of *S* associated with the eigenvalue −1.

The second condition is obtained by multiplying (3.21) by *P*<sup>⊥</sup><sub>−1</sub>:

$$\mathrm{i}(S - I)P\_{-1}^{\perp}\vec{u} = (S + I)P\_{-1}^{\perp}\partial\vec{u},$$

where we used that *S* commutes with its eigenprojectors. The matrix (*S* + *I*) is invertible on the range of *P*<sup>⊥</sup><sub>−1</sub>, hence we have

$$\mathrm{i}\left(S + I\right)^{-1}(S - I)P\_{-1}^{\perp}\vec{u} = P\_{-1}^{\perp}\partial\vec{u}.$$

The ranges of *P*<sub>−1</sub> and *P*<sup>⊥</sup><sub>−1</sub> span the space **C**<sup>*d*</sup>, hence condition (3.21) is equivalent to

$$\begin{cases} P\_{-1}\vec{u} = 0, \\ (I - P\_{-1})\partial\vec{u} = A\_S(I - P\_{-1})\vec{u}, \end{cases} \tag{3.27}$$

where

$$A\_S = \mathrm{i}\frac{S-I}{S+I}P\_{-1}^\perp,\text{ with }P\_{-1}^\perp := I - P\_{-1}.\tag{3.28}$$

The matrix *A<sub>S</sub>* appearing in this parametrisation is Hermitian and its eigenvectors coincide with the eigenvectors of the unitary matrix *S* (not corresponding to the eigenvalue −1). To prove this, let us write *A<sub>S</sub>* in the form

$$A\_S = \mathrm{i}P\_{-1}^\perp (S+I)^{-1}(S-I)P\_{-1}^\perp$$

and take the adjoint

$$\begin{aligned} A\_S^\* &= -\mathrm{i}\, P\_{-1}^\perp (S^\* - I)(S^\* + I)^{-1} P\_{-1}^\perp \\ &= -\mathrm{i}\, P\_{-1}^\perp (S^\* - SS^\*)(S^\* + S^\*S)^{-1} P\_{-1}^\perp \\ &= -\mathrm{i}\, P\_{-1}^\perp (I - S)S^\*(S^\*)^{-1}(I + S)^{-1} P\_{-1}^\perp \\ &= A\_S. \end{aligned}$$
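In the scalar case *S* = e<sup>i*θ*</sup> with *θ* ≠ *π*, formula (3.28) reduces to *A<sub>S</sub>* = i(e<sup>i*θ*</sup> − 1)/(e<sup>i*θ*</sup> + 1) = −tan(*θ*/2), which is indeed real; a quick numerical check (our own illustration):

```python
import cmath, math

for theta in (0.3, 1.1, -2.0):
    S = cmath.exp(1j * theta)        # eigenvalue of S different from -1
    A_S = 1j * (S - 1) / (S + 1)     # scalar version of (3.28)
    assert abs(A_S.imag) < 1e-12                        # A_S is real
    assert abs(A_S.real + math.tan(theta / 2)) < 1e-12  # A_S = -tan(theta/2)
```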

This parametrisation shows that the most general vertex conditions at a vertex can be considered as a combination of Dirichlet and Robin type conditions, cf. (3.27).


This form of vertex conditions will be extremely useful when quadratic forms of operators are discussed (see Chap. 11).

## **3.5 Scaling-Invariant and Standard Conditions**

## *3.5.1 Energy Dependence of the Vertex S-matrix*

Let us now discuss how the vertex scattering matrix depends on the energy. Since the matrix *S* is unitary, it is convenient to use its spectral representation

$$S = \sum\_{n=1}^{d} \mathbf{e}^{i\theta\_n} \langle \vec{e}\_n, \cdot \rangle\_{\mathbb{C}^d} \vec{e}\_n,\tag{3.29}$$

where $\theta_n \in (-\pi, \pi]$, $\vec{e}_n \in \mathbb{C}^d$, $S\vec{e}_n = \mathrm{e}^{\mathrm{i}\theta_n}\vec{e}_n$. We use that $S_{\mathbf{v}}(k)$ is a rational function of $S$, hence formula (3.20) implies

$$\begin{split} S\_{\mathbf{v}}(k) &= \sum\_{n=1}^{d} \frac{(k+1)\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + (k-1)}{(k-1)\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + (k+1)} \left< \vec{e}\_{n}, \cdot \right>\_{\mathbb{C}^{d}} \vec{e}\_{n} \\ &= \sum\_{n=1}^{d} \frac{k(\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + 1) + (\mathbf{e}^{i\boldsymbol{\theta}\_{n}} - 1)}{k(\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + 1) - (\mathbf{e}^{i\boldsymbol{\theta}\_{n}} - 1)} \left< \vec{e}\_{n}, \cdot \right>\_{\mathbb{C}^{d}} \vec{e}\_{n} \\ &= \sum\_{n:\boldsymbol{\theta}\_{n}=\boldsymbol{\pi}} (-1) \left< \vec{e}\_{n}, \cdot \right>\_{\mathbb{C}^{d}} \vec{e}\_{n} + \sum\_{n:\boldsymbol{\theta}\_{n}\neq\boldsymbol{\pi}} \frac{k(\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + 1) + (\mathbf{e}^{i\boldsymbol{\theta}\_{n}} - 1)}{k(\mathbf{e}^{i\boldsymbol{\theta}\_{n}} + 1) - (\mathbf{e}^{i\boldsymbol{\theta}\_{n}} - 1)} \left< \vec{e}\_{n}, \cdot \right>\_{\mathbb{C}^{d}} \vec{e}\_{n}. \end{split} \tag{3.30}$$

The unitary matrix *S***v***(k)* has the same eigenvectors as the matrix *S*, but the corresponding eigenvalues in general depend on the energy. The eigenvalues ±1 are invariant; all other eigenvalues (*i.e.* different from ±1) tend to 1 as *k* → ∞*.*

If $S$ is not Hermitian, the vertex scattering matrix genuinely depends on $k$, but one may still calculate both the high and the low energy limits of $S_{\mathbf{v}}(k)$:

$$\begin{aligned} S_{\mathbf{v}}(\infty) &= \lim_{k \to \infty} S_{\mathbf{v}}(k) = -P_{-1} + (I - P_{-1}) = I - 2P_{-1}, \\ S_{\mathbf{v}}(0) &= \lim_{k \to 0} S_{\mathbf{v}}(k) = P_{1} - (I - P_{1}) = 2P_{1} - I. \end{aligned} \tag{3.31}$$

Here we used the notation $P_{\pm 1}$ for the spectral projectors associated with the eigenvalues $\pm 1$:

$$P_{-1} = \sum_{\theta_n = \pi} \langle \vec{e}_n, \cdot \rangle_{\mathbb{C}^d} \vec{e}_n, \quad P_{1} = \sum_{\theta_n = 0} \langle \vec{e}_n, \cdot \rangle_{\mathbb{C}^d} \vec{e}_n.$$
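The limits (3.31) are easy to confirm numerically. The following sketch (ours, not from the book) builds a unitary $S$ with prescribed eigenvalues, forms $S_{\mathbf{v}}(k)$ from (3.20), and checks the high and low energy behaviour as well as the invariance of the eigenvalues $\pm 1$.

```python
import numpy as np

def S_v(S, k):
    """Vertex scattering matrix (3.20): ((k+1)S + (k-1)I)((k-1)S + (k+1)I)^{-1}."""
    I = np.eye(S.shape[0])
    return ((k + 1) * S + (k - 1) * I) @ np.linalg.inv((k - 1) * S + (k + 1) * I)

# Unitary S with eigenvalues 1, -1, e^{±i pi/3} in a random orthonormal basis.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
thetas = np.array([0.0, np.pi, np.pi / 3, -np.pi / 3])
S = Q @ np.diag(np.exp(1j * thetas)) @ Q.conj().T

# Spectral projectors onto the eigenvalues +1 and -1.
P1 = np.outer(Q[:, 0], Q[:, 0].conj())
Pm1 = np.outer(Q[:, 1], Q[:, 1].conj())
I = np.eye(4)

# High energy limit: S_v(∞) = I - 2 P_{-1}; low energy limit: S_v(0) = 2 P_1 - I.
assert np.allclose(S_v(S, 1e8), I - 2 * Pm1, atol=1e-6)
assert np.allclose(S_v(S, 1e-8), 2 * P1 - I, atol=1e-6)

# The eigenvalues ±1 are invariant for every k.
assert np.allclose(S_v(S, 2.7) @ Q[:, 0], Q[:, 0])
assert np.allclose(S_v(S, 2.7) @ Q[:, 1], -Q[:, 1])
```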

The vertex scattering matrix is independent of the energy if and only if the vertex conditions are non-Robin, or scaling-invariant as described in the following section.

## *3.5.2 Scaling-Invariant, or Non-Robin Vertex Conditions*

For the star graph formed by the edges $E_n = [x_n, \infty)$, $n = 1, 2, \dots, d$, consider the scaling transformation

$$[x_n, \infty) \ni x \mapsto y = x_n + c(x - x_n) \in [x_n, \infty).$$

This transformation naturally induces the function transformation

$$u \mapsto u\_c$$

so that if $y \in E_n = [x_n, \infty)$ then

$$u_c(y) = u(\underbrace{x_n + (y - x_n)/c}_{\in E_n}).$$

It is natural to call vertex conditions **scaling invariant** if and only if any function $u$ and its scaling $u_c$ satisfy the conditions simultaneously.

It is clear that the limit values of $u$ and $u_c$ are related via

$$
\vec{u} = \vec{u}\_c, \quad \partial \vec{u} = c \, \partial \vec{u}\_c,\tag{3.32}
$$

provided the magnetic potential is zero. Vertex conditions (3.27) are invariant under scaling if and only if the matrix $A_S$ is identically zero. As one can see from (3.28), this means that the parameter matrix $S$ has just the eigenvalues $1$ and $-1$, hence $S$ is not only unitary but also Hermitian. Hence any scaling-invariant vertex condition can be written in the form:

$$\begin{cases} P_{-1}\vec{u} = 0, \\ P_1 \partial\vec{u} = 0, \end{cases} \tag{3.33}$$

where $P_{\pm 1}$ are the eigenprojectors onto the two orthogonal eigensubspaces spanning $\mathbb{C}^d$. These conditions can be seen as a combination of Dirichlet and Neumann conditions. The corresponding matrix $A_S$ appearing in the Hermitian parametrisation is zero, therefore scaling-invariant vertex conditions are often called **non-Robin**. In the two extreme cases $P_{-1} = I$ ($P_1 = 0$) and $P_{-1} = 0$ ($P_1 = I$) the conditions reduce to the usual Dirichlet and Neumann ones.

A characteristic property of scaling-invariant vertex conditions is that the corresponding vertex scattering matrix is independent of the energy (as can be seen from (3.30)) and can be written as the difference of two projectors

$$S_{\mathbf{v}}(k) \equiv S = P_{1} - P_{-1}. \tag{3.34}$$
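A quick numerical illustration (our sketch, with made-up projectors): for a Hermitian unitary matrix $S = P_1 - P_{-1}$ the vertex scattering matrix is the same for every $k$, in agreement with (3.34).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

# Hermitian unitary S = P_1 - P_{-1}: random orthonormal basis, eigenvalue +1
# on the first three basis vectors, -1 on the remaining three.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
P1 = Q[:, :3] @ Q[:, :3].conj().T
Pm1 = Q[:, 3:] @ Q[:, 3:].conj().T
S = P1 - Pm1

def S_v(S, k):
    """Vertex scattering matrix (3.20)."""
    I = np.eye(S.shape[0])
    return ((k + 1) * S + (k - 1) * I) @ np.linalg.inv((k - 1) * S + (k + 1) * I)

# S_v(k) = S for every k: the scattering matrix is energy independent.
for k in (0.3, 1.0, 5.0, 40.0):
    assert np.allclose(S_v(S, k), S)
```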

## *3.5.3 Standard Vertex Conditions*

Standard vertex conditions (2.27)

$$\begin{cases} \boldsymbol{u}(\mathbf{x}\_1) = \boldsymbol{u}(\mathbf{x}\_2) = \dots = \boldsymbol{u}(\mathbf{x}\_d) - \text{continuity condition}, \\\\ \sum\_{j=1}^d \partial \boldsymbol{u}(\mathbf{x}\_j) = 0 & - \text{Kirchhoff condition}, \end{cases}$$

appear naturally if we impose the requirement that the functions are continuous at the nodes.<sup>3</sup> Continuity of the wave-function is a natural requirement and is usually welcomed in applications. It is customary to use these conditions if there is no preference or it is not known which particular vertex conditions should be used, which explains the name. We have already considered standard vertex conditions in

<sup>3</sup> We have already mentioned that these two conditions are sometimes also called **Kirchhoff**, **free**  or **Neumann**.

Sect. 2.1.3. Writing these conditions using matrices *A* and *B* is not difficult

$$\begin{pmatrix} 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \vec{u} = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{pmatrix} \partial\vec{u}, \tag{3.35}$$

The first $d - 1$ equations imply that the function $u$ is continuous, while the last equation corresponds to the Kirchhoff condition.

Let us discuss how to describe the standard conditions using the scattering matrix. To this end we calculate the vertex scattering matrix. Substituting Ansatz (3.12) into (3.35) and taking into account that the ranges of the two matrices are orthogonal, we get:

$$\begin{pmatrix} 1 & -1 & 0 & \cdots & 0 & 0\\ 0 & 1 & -1 & \cdots & 0 & 0\\ 0 & 0 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 1 & -1\\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} (\vec{b} + \vec{a}) = 0;\tag{3.36}$$

$$\mathrm{i}k \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{pmatrix} (-\vec{b} + \vec{a}) = 0.$$

These conditions can be written as

$$\begin{cases} a_i + b_i = a_j + b_j, \quad i, j = 1, 2, \dots, d, \\ \displaystyle\sum_{j=1}^d (a_j - b_j) = 0. \end{cases} \tag{3.37}$$

It is then clear that the edges are indistinguishable, *i.e.* the conditions are invariant under permutations of the edges. Therefore the vertex scattering matrix should satisfy the equation

$$\mathcal{S}\_{\mathbf{V}}(k) = P\_{\sigma} \mathcal{S}\_{\mathbf{V}}(k) P\_{\sigma}^{-1}$$

for any permutation matrix $P_\sigma$ and therefore be of the form:<sup>4</sup>

$$S_{ij}(k) = \begin{cases} T, & i \neq j, \\ R, & i = j, \end{cases} \Rightarrow S(k) = \begin{pmatrix} R & T & T & \cdots \\ T & R & T & \cdots \\ T & T & R & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{3.38}$$

At this stage we cannot exclude the possibility that the transmission *T* and reflection *R* coefficients depend on the spectral parameter *k.* Let us assume that there is just one incoming wave arriving along the edge *E*1: the corresponding scattered wave is given by the Ansatz

$$\psi(x) = \begin{cases} \mathrm{e}^{-\mathrm{i}k(x - x_1)} + R\,\mathrm{e}^{\mathrm{i}k(x - x_1)}, & x \in E_1 = [x_1, \infty), \\ T\,\mathrm{e}^{\mathrm{i}k(x - x_n)}, & x \in E_n = [x_n, \infty), \ n = 2, 3, \dots, d. \end{cases}$$

Substituting this Ansatz into the standard conditions (2.27) leads to the following linear system

$$\begin{cases} 1 + R = T \\ ik \left( -1 + R + (d - 1)T \right) = 0. \end{cases}$$

$$\Rightarrow \begin{cases} T - R = 1 \\ (d - 1)T + R = 1. \end{cases} \tag{3.39}$$

Solving the linear system we get the transition and reflection coefficients

$$\begin{cases} T = 2/d, \\ R = -1 + 2/d. \end{cases} \tag{3.40}$$

The matrix $S^{\mathrm{st}}$ corresponding to standard vertex conditions is then given by

$$\mathbf{S}^{\rm st} = \mathbf{S}\_d^{\rm st} = \begin{pmatrix} -1 + 2/d & 2/d & 2/d & \cdots \\ 2/d & -1 + 2/d & 2/d & \cdots \\ 2/d & 2/d & -1 + 2/d & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},\tag{3.41}$$

<sup>4</sup> We are going to return to this question in Sect. 3.8.2.

which allows one to write the standard vertex conditions in the form (3.21)

$$\mathrm{i} \begin{pmatrix} -2 + 2/d & 2/d & 2/d & \cdots \\ 2/d & -2 + 2/d & 2/d & \cdots \\ 2/d & 2/d & -2 + 2/d & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \vec{u} = \begin{pmatrix} 2/d & 2/d & 2/d & \cdots \\ 2/d & 2/d & 2/d & \cdots \\ 2/d & 2/d & 2/d & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \partial\vec{u}. \tag{3.42}$$

The scattering matrix is independent of the energy and therefore can be written using two projectors. One may also introduce the eigensubspaces $N_1 = \mathfrak{L}\{(1, 1, 1, \dots, 1)\}$ and $N_{-1} = N_1^{\perp}$ corresponding to the eigenvalues $\pm 1$. The orthogonal projectors $P_{\pm 1} = P_{N_{\pm 1}}$ allow one to write standard vertex conditions also in the form (3.33).
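These properties of $S^{\mathrm{st}}$ are easy to confirm numerically. The sketch below (ours) assembles $S^{\mathrm{st}}_d = \tfrac{2}{d}J - I$, where $J$ is the all-ones matrix, and checks that it is unitary and Hermitian, with the simple eigenvalue $1$ on $(1, \dots, 1)$ and the eigenvalue $-1$ of multiplicity $d - 1$.

```python
import numpy as np

d = 5
J = np.ones((d, d))
S_st = 2 / d * J - np.eye(d)   # R = -1 + 2/d on the diagonal, T = 2/d elsewhere

# Unitary and Hermitian, hence S_st squared is the identity.
assert np.allclose(S_st, S_st.T)
assert np.allclose(S_st @ S_st, np.eye(d))

# Eigenvalue 1 on the constant vector (spanning N_1), eigenvalue -1 on its
# orthogonal complement N_{-1}.
ones = np.ones(d)
assert np.allclose(S_st @ ones, ones)
assert np.allclose(np.sort(np.linalg.eigvalsh(S_st)), [-1] * (d - 1) + [1])
```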

Standard vertex conditions for degree two vertices mean that the function and its first derivative are continuous at the vertex. As a result the corresponding vertex scattering matrix describes free passage through the vertex

$$S_2^{\mathrm{st}} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

Hence degree two vertices with standard conditions can always be removed: the two edges joined at the vertex can be substituted with one edge of length equal to the sum of the lengths of the two edges.

Conversely, every point inside an edge can be seen as a degree two vertex with standard conditions.

## **3.6 Signing Conditions for Degree Two Vertices**

The signing conditions resemble the standard conditions, differing by two extra signs, hence the name

$$\begin{cases} u(x_1) = -u(x_2), \\ \partial u(x_1) - \partial u(x_2) = 0. \end{cases} \tag{3.43}$$

These conditions correspond to multiplication of the function by $-1$ while crossing the vertex. The corresponding vertex scattering matrix is

$$\mathcal{S}^{\text{sign}} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} = -\mathcal{S}\_2^{\text{st}}.$$

These conditions will play a very important role when discussing the solution of the inverse problem using magnetic flux dependent spectral data.

For example, introducing signing conditions connecting the endpoints of the same interval corresponds to the loop graph with magnetic flux equal to *π*.

We borrow the name signing conditions from the discrete graph theory, see for example [89, 384].

## **3.7 Generalised Delta Couplings**

In this section we present yet another class of vertex conditions. These conditions were introduced in order to guarantee that the ground state eigenfunction may be chosen positive. They are characterised by the property that the domain of the quadratic form is invariant under taking the absolute value and the value of the quadratic form does not increase (see Sect. 4.5).

With any vertex $V$ of degree $d$ we associate $n \le d$ arbitrary vectors $\vec{a}_j$ with the following properties:

• all coordinates of $\vec{a}_j$ are non-negative numbers<sup>5</sup>

$$
\vec{a}\_j \in \mathbb{R}\_+^d;
$$

• the vectors have disjoint supports so that

$$\vec{a}_j(x_l)\,\vec{a}_i(x_l) = 0, \text{ provided } j \neq i, \ x_l \in V,$$

holds.

Without loss of generality we assume that the vectors $\vec{a}_j$ are normalised:

$$\|\vec{a}_j\|^2 := \sum_{x_l \in V} |\vec{a}_j(x_l)|^2 = 1.$$

The coordinates of the vectors $\vec{a}_j$ will be called **weights**.

<sup>5</sup> We study only the case where the weights $\vec{a}_j(x_l)$ are non-negative reals, but in principle complex values may be allowed.


**Fig. 3.4** Generalised delta couplings when *d* = 9 and *n* = 3

In addition to the vectors $\vec{a}_j$ we pick a Hermitian $n \times n$ matrix $\mathbf{A}$ playing the role of a Robin parameter. Then the **generalised delta couplings** are written as follows

$$\begin{cases} \vec{u} \in \mathfrak{L}\{\vec{a}\_1, \vec{a}\_2, \dots, \vec{a}\_n\}; \\ \langle \vec{a}\_j, \partial \vec{u} \rangle = \sum\_{l=1}^n A\_{jl} \langle \vec{a}\_l, \vec{u} \rangle. \end{cases} \tag{3.44}$$

The dimension *n* of the subspace

$$\mathcal{B} := \mathfrak{L}\{\vec{a}_1, \vec{a}_2, \dots, \vec{a}_n\}$$

will be referred to as the **order** of the generalised delta-condition (Fig. 3.4).

The first condition in (3.44) is a weighted continuity condition, since it can be written as follows:

$$\frac{u(x_k)}{\vec{a}_j(x_k)} = \frac{u(x_l)}{\vec{a}_j(x_l)} =: \mathbf{u}_j, \quad x_k, x_l \in \operatorname{supp} \vec{a}_j, \ j = 1, 2, \dots, n. \tag{3.45}$$

The difference to the classical delta coupling (see Appendix 1) is that the function is not necessarily continuous at the vertex. In the case $n = 1$, where the corresponding vector $\vec{a}_1$ has maximal support, any coordinate of $\vec{u}$ determines all other coordinates: the value of $u$ at one endpoint determines its values at all other endpoints. But the values may be different if the weights are different. One may say that the weighted function is continuous in this case. If $n \ge 2$, then the entries of $\vec{u}$ are determined by $n$ arbitrary parameters. Every coordinate of $\vec{u}$ belongs to the support of at most one vector $\vec{a}_j$ and thus determines all other coordinates in the support of $\vec{a}_j$. The wave function $u$ attains $n$ independent weighted values associated with different groups of endpoints joined at the vertex. One should think about this condition as a weighted continuity of $u$ at each group of endpoints.

Changing the order $n$, $1 \le n \le d$, of the delta coupling allows one to interpolate between the classical delta coupling and the most general vertex conditions, so that $n = 1$ corresponds to the weighted delta coupling and $n = d$ to the most general Robin condition of the form $\partial\vec{u} = A\,\vec{u}$.

Note that in Eq. (3.45) we introduced a new vector $\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n)$, the reduced vector containing the common weighted values of the vector $\vec{u}$. Its dimension coincides with the dimension $n$ of the linear subspace $\mathcal{B}$.

The second equation in (3.44) is a balance equation for the normal derivatives. The sum of normal derivatives connected with endpoints from the support of one of the vectors *a<sup>j</sup>* is connected via the coupling matrix **A** to the common values of *u* at all other groups of endpoints, since we have

$$\langle \vec{a}_j, \vec{u} \rangle = \sum_{x_l \in \operatorname{supp} \vec{a}_j} \vec{a}_j(x_l)\, u(x_l) = \sum_{x_l \in \operatorname{supp} \vec{a}_j} |\vec{a}_j(x_l)|^2 \frac{u(x_l)}{\vec{a}_j(x_l)} = \mathbf{u}_j.$$

Here we used that the vector $\vec{a}_j$ is normalised.
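The reduction $\langle \vec{a}_j, \vec{u}\rangle = \mathbf{u}_j$ can be illustrated concretely. The sketch below uses our own example weights (loosely matching Fig. 3.4 with $d = 9$, $n = 3$): it builds a vector $\vec{u}$ satisfying the weighted continuity (3.45) and recovers the reduced vector.

```python
import numpy as np

d, n = 9, 3
# Three normalised weight vectors with disjoint supports covering all 9 endpoints.
a = np.zeros((n, d))
a[0, 0:4] = [0.5, 0.5, 0.5, 0.5]             # ||a_1|| = 1
a[1, 4:7] = np.array([1.0, 2.0, 2.0]) / 3.0  # ||a_2|| = 1
a[2, 7:9] = np.array([3.0, 4.0]) / 5.0       # ||a_3|| = 1
assert np.allclose((a ** 2).sum(axis=1), 1)

# Weighted continuity (3.45): u(x_k) = u_j * a_j(x_k) on the support of a_j.
u_red = np.array([2.0, -1.0, 0.5])           # the reduced vector (u_1, u_2, u_3)
u = sum(u_red[j] * a[j] for j in range(n))

# <a_j, u> recovers the common weighted value u_j.
for j in range(n):
    assert np.isclose(a[j] @ u, u_red[j])
```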

For generalised delta couplings to be properly connecting, two requirements should be fulfilled:

(1) The union of supports of the vectors *a<sup>j</sup>* coincides with all endpoints in *V* :

$$\cup\_{j=1}^{n} \text{supp } (\vec{a}\_{j}) = \{\mathbf{x}\_{l}\}\_{\mathbf{x}\_{l}\in V}.\tag{3.46}$$

(2) The matrix $\mathbf{A} = \{A_{ji}\}_{j,i=1}^{n}$ is irreducible, *i.e.* it cannot be put into a block-diagonal form by permutations.

If the first condition is not satisfied, then we have classical Dirichlet conditions at certain endpoints:

$$u(x_l) = 0, \text{ provided } x_l \notin \cup_{j=1}^n \operatorname{supp}(\vec{a}_j).$$

Dirichlet endpoints always form separate vertices.

If the second condition is not satisfied, then the vertex *V* can be chopped into two (or more) vertices preserving the vertex conditions. Such conditions correspond to the metric graph, where the vertex *V* is divided.

As we already pointed out, the described vertex conditions will play a crucial role in proving that the ground state eigenfunction can be chosen positive. For that purpose, all the weights should be real and the matrix $\mathbf{A}$ should be not only Hermitian but real with non-positive entries outside the diagonal. You will read more about generalised delta couplings in Sect. 4.5, where, in particular, the corresponding quadratic form is calculated and its properties are discussed.

# **3.8 Vertex Conditions for Arbitrary Graphs and Definition of the Magnetic Schrödinger Operator**

## *3.8.1 Scattering Matrix Parametrisation of Vertex Conditions*

In this section we discuss the most general vertex conditions for arbitrary compact finite graphs generalizing Sect. 3.3. Our main focus will be on which properties of these conditions guarantee their admissibility, and therefore we still assume that the potentials satisfy (2.19) and (2.20).

The standard self-adjoint operator $L^{\mathrm{st}}_{q,a}$ associated with a symmetric differential expression on a metric graph $\Gamma$ has already been defined in Sect. 2.1 (Definition 2.2). This operator is selected by introducing standard vertex conditions (2.27) at the vertices. Let us discuss how to introduce other types of vertex conditions, so that the vertex structure of the graph $\Gamma$ is respected. The boundary form of the maximal operator $L^{\max}_{q,a}$ can be written as

$$\begin{split} \langle L\_{q,a}^{\max} u, v \rangle - \langle u, L\_{q,a}^{\max} v \rangle \\ = \sum\_{n=1}^{N} \int\_{E\_n} \left\{ \overline{\left( \mathbf{i} \frac{d}{dx} + a(\mathbf{x}) \right)^2 u(\mathbf{x})} v(\mathbf{x}) - \overline{u(\mathbf{x})} \left( \mathbf{i} \frac{d}{dx} + a(\mathbf{x}) \right)^2 v(\mathbf{x}) \right\} d\mathbf{x} \\ = \sum\_{\mathbf{x}\_j} \left( \overline{\partial u(\mathbf{x}\_j)} v(\mathbf{x}\_j) - \overline{u(\mathbf{x}\_j)} \partial v(\mathbf{x}\_j) \right). \end{split} \tag{3.47}$$

Let us introduce the vectors $\vec{U}, \partial\vec{U}$ of limit values of the function $u$ at all endpoints:

$$\vec{U} = \left(u(x_1), u(x_2), \dots\right), \qquad \partial\vec{U} = \left(\partial u(x_1), \partial u(x_2), \dots\right). \tag{3.48}$$

The dimension of these vectors coincides with the number $D$ of endpoints in $\mathbf{V}$.

In vector notation the boundary form (3.47) looks as follows

$$\langle L^{\max}_{q,a}u, v\rangle - \langle u, L^{\max}_{q,a}v\rangle = \left\langle \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \begin{pmatrix} \vec{U} \\ \partial\vec{U} \end{pmatrix}, \begin{pmatrix} \vec{V} \\ \partial\vec{V} \end{pmatrix} \right\rangle_{\mathbb{C}^{2D}}, \tag{3.49}$$

and coincides with the standard symplectic form in the space $\mathbb{C}^{2D} \ni (\vec{U}, \partial\vec{U})$. The set of self-adjoint restrictions of the maximal operator $L^{\max}_{q,a}$ can be described by Lagrangian planes, *i.e.* maximal isotropic<sup>6</sup> subspaces in $\mathbb{C}^{2D}$. But not all such Lagrangian subspaces respect the vertex structure of the underlying metric graph. In order to select proper conditions let us re-write the boundary form as follows

$$\begin{aligned} & \langle L^{\max}_{q,a}u, v\rangle - \langle u, L^{\max}_{q,a}v\rangle \\ &= \sum_{m=1}^{M} \left\{ \sum_{x_j \in V^m} \left( \overline{\partial u(x_j)}\, v(x_j) - \overline{u(x_j)}\, \partial v(x_j) \right) \right\} \\ &= \sum_{m=1}^{M} \left\langle \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \begin{pmatrix} \vec{u}(V^m) \\ \partial\vec{u}(V^m) \end{pmatrix}, \begin{pmatrix} \vec{v}(V^m) \\ \partial\vec{v}(V^m) \end{pmatrix} \right\rangle_{\mathbb{C}^{2d_m}}. \end{aligned} \tag{3.50}$$

Each subspace $\mathbb{C}^{2d_m}$ associated with the vertex $V^m$ can be considered separately. The corresponding appropriate Lagrangian planes, or vertex conditions, have already been discussed in Sect. 3.3 in the context of star graphs.

With every vertex $V^m$ we associate a $d_m \times d_m$ unitary irreducible matrix $S^m$ and introduce the vertex conditions

$$\mathrm{i}(S^m - I)\vec{u}(V^m) = (S^m + I)\partial\vec{u}(V^m), \quad m = 1, 2, \dots, M. \tag{3.51}$$

In what follows, we are going to limit our studies to the case of irreducible matrices *Sm.* The corresponding vertex conditions will be called **admissible**.

It will be convenient to consider the vectors $\vec{u}(V^m)$ as elements of $\mathbb{C}^D$, extending them by zero at all endpoints not belonging to $V^m$. Then the unitary matrices $S^m$ are identified with the $D \times D$ matrices obtained by setting to zero all entries with indices $ij$ such that $x_i \notin V^m$ or $x_j \notin V^m$. Then the matrix $\mathbf{S}$ given by

$$\mathbf{S} = \bigoplus\_{m=1}^{M} \mathbf{S}^{m} \tag{3.52}$$

is unitary and describes the vertex conditions at all vertices via

$$\mathrm{i}\,(\mathbf{S} - \mathbf{I})\vec{U} = (\mathbf{S} + \mathbf{I})\partial\vec{U}. \tag{3.53}$$

Note that the sum in (3.52) is orthogonal, since the matrices $S^m$ act on limit values at different vertices. The matrix $\mathbf{S}$ is in general reducible, and its invariant subspaces are determined by the vertices.
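In code, the global matrix $\mathbf{S}$ of (3.52) is a plain block-diagonal assembly of the vertex matrices. A minimal sketch (ours, with made-up vertex data: one degree-three vertex with standard conditions and one degree-two vertex with signing conditions from Sect. 3.6):

```python
import numpy as np

# Vertex matrices: S^st for d = 3 and S^sign for d = 2.
S1 = 2 / 3 * np.ones((3, 3)) - np.eye(3)
S2 = np.array([[0.0, -1.0], [-1.0, 0.0]])

# Global S = S1 ⊕ S2, cf. (3.52); D = 5 endpoints in total.
S = np.zeros((5, 5))
S[:3, :3] = S1
S[3:, 3:] = S2

# S is unitary, and its block structure reflects the vertices: the invariant
# subspaces are spanned by the endpoints of each vertex.
assert np.allclose(S @ S.T, np.eye(5))
assert np.allclose(S[:3, 3:], 0)
assert np.allclose(S[3:, :3], 0)
```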

<sup>6</sup> A subspace is called **isotropic** if and only if the symplectic form vanishes for any two vectors from the subspace. Every such maximal subspace has dimension *D.*

Then the self-adjoint operator is defined as the restriction of the maximal operator to the domain of functions satisfying vertex conditions (3.51).

**Definition 3.6** The **magnetic Schrödinger operator** $L^{\mathbf{S}}_{q,a}$ is defined by the differential expression (2.17) on the domain of functions from the Sobolev space $W_2^2(\Gamma \setminus \mathbf{V})$ satisfying the vertex conditions (3.51) at each vertex.

In this definition it is important that each matrix $S^m$ is irreducible, while the matrix $\mathbf{S}$ is reducible by construction (assuming, of course, that $\Gamma$ has more than one vertex). The case where at least one of the matrices $S^m$ is reducible corresponds to a different metric graph. The corresponding graph can be obtained from the graph $\Gamma$ by splitting one of the vertices into two or more equivalence classes, i.e. new vertices (see Fig. 3.1). Thus taking $\mathbf{S} = -\mathbf{I}$ we get the Dirichlet operator $L^{\mathrm{D}}_{q,a}$ corresponding to the graph consisting of disconnected edges.

**Theorem 3.7** *The operator $L^{\mathbf{S}}_{q,a}$ is self-adjoint, provided that the matrix $\mathbf{S}$ is unitary.*

*Proof* Consider the minimal operator associated with the differential expression $L_{q,a}$ in $L_2(\Gamma)$. The adjoint operator is determined by the same differential expression on the domain $W_2^2(\Gamma \setminus \mathbf{V})$. This follows directly from the fact that the differential expression $L_{q,a}$ is formally symmetric.

To prove that $L^{\mathbf{S}}_{q,a}$ is self-adjoint, one may repeat step-by-step the proof of Theorems 3.2 and 3.4.

The boundary form of the operator is given by (3.47) and it vanishes due to vertex conditions (3.51), since it can be re-written as

$$
\langle L\_{q,a}^{\max} u, v \rangle - \langle u, L\_{q,a}^{\max} v \rangle = \sum\_{m=1}^{M} \left( \sum\_{\mathbf{x}\_j \in V^m} \left( \overline{\partial u(\mathbf{x}\_j)} v(\mathbf{x}\_j) - \overline{u(\mathbf{x}\_j)} \partial v(\mathbf{x}\_j) \right) \right).
$$

Each term in the sum vanishes separately. Calculating the adjoint operator $(L^{\mathbf{S}}_{q,a})^*$, all vertices may also be treated separately, and therefore the corresponding calculations can be repeated without any major changes.

Following (3.20), it is natural to introduce the corresponding (global) vertex scattering matrix

$$\mathbf{S}\_{\mathbf{V}}(k) = \frac{(k+1)\mathbf{S}\_{\mathbf{V}}(1) + (k-1)\mathbf{I}}{(k-1)\mathbf{S}\_{\mathbf{V}}(1) + (k+1)\mathbf{I}}.\tag{3.54}$$

This matrix coincides with the scattering matrix for the vertex of valency *D* with the vertex conditions given by formula (3.53). This matrix will be used in what follows to calculate the positive spectrum and to establish the corresponding trace formulas.

## *3.8.2 Quadratic Form Parametrisation of Vertex Conditions*

In mathematical physics one often determines self-adjoint operators via their quadratic, or more precisely sesquilinear, form. The reason is two-fold:


All operators we discuss here are semibounded, so let us look at their quadratic forms. The sesquilinear form of the operator $L^{\mathbf{S}}_{q,a}$ can be calculated explicitly:

$$\begin{aligned}
Q^{\mathbf{S}}_{q,a}(u, u) &\equiv \langle L^{\mathbf{S}}_{q,a}u, u\rangle_{L_2(\Gamma)} \\
&= \sum_{n=1}^{N} \left\{ \int_{E_n} \overline{-\left(\frac{d}{dx} - \mathrm{i}a(x)\right)^2 u(x)}\, u(x)\, dx + \int_{E_n} q(x)|u(x)|^2\, dx \right\} \\
&= \sum_{x_j} \overline{\partial u(x_j)}\, u(x_j) + \sum_{n=1}^{N} \left\{ \int_{E_n} \left| \left(\frac{d}{dx} - \mathrm{i}a(x)\right)u(x) \right|^2 dx + \int_{E_n} q(x)|u(x)|^2\, dx \right\} \\
&= \sum_{m=1}^{M} \langle \partial\vec{u}(V^m), \vec{u}(V^m)\rangle_{\mathbb{C}^{d_m}} + \sum_{n=1}^{N} \left\{ \int_{E_n} \left| \left(\frac{d}{dx} - \mathrm{i}a(x)\right)u(x) \right|^2 dx + \int_{E_n} q(x)|u(x)|^2\, dx \right\} \\
&= \sum_{m=1}^{M} \langle A_{S^m}\vec{u}(V^m), \vec{u}(V^m)\rangle_{\mathbb{C}^{d_m}} + \sum_{n=1}^{N} \left\{ \int_{E_n} \left| \left(\frac{d}{dx} - \mathrm{i}a(x)\right)u(x) \right|^2 dx + \int_{E_n} q(x)|u(x)|^2\, dx \right\}.
\end{aligned} \tag{3.55}$$

The domain $\operatorname{Dom} Q^{\mathbf{S}}_{q,a}$ of the sesquilinear form is obtained by closing the domain $\operatorname{Dom}(L^{\mathbf{S}}_{q,a})$ with respect to the norm $Q^{\mathbf{S}}_{q,a}(u, u) + C\|u\|^2$, where the constant $C$ is chosen sufficiently large to ensure positivity. Let us remember that we assume that $q$ and $a$ satisfy assumptions (2.19) and (2.20) respectively. Under these assumptions $Q^{\mathbf{S}}_{q,a}(u, u)$ is bounded if and only if $u \in W_2^1(\Gamma \setminus \mathbf{V})$, since $au \in L_2(\Gamma)$ and $q|u|^2 \in L_1(\Gamma)$. It remains to understand what happens to the vertex conditions. Every function from $W_2^1(\Gamma \setminus \mathbf{V})$ is continuous on every edge, but the first derivatives are not continuous anymore; in other words, the functionals $u \mapsto u'(x)$ are not bounded with respect to the norm in the Sobolev space $W_2^1(\Gamma \setminus \mathbf{V})$. It follows that the Robin part of the vertex conditions, that is the second equation in (3.27), is not preserved. On the other hand, every function from the closure of $\operatorname{Dom}(L^{\mathbf{S}}_{q,a})$ with respect to the $W_2^1$-norm satisfies the Dirichlet part, *i.e.* the first equation in (3.27).

Summing up, the domain of the quadratic form consists of all functions from the Sobolev space $W_2^1(\Gamma \setminus \mathbf{V})$ satisfying just the first conditions in (3.27)

$$P^m_{-1}\vec{u}(V^m) = 0, \quad m = 1, 2, \dots, M. \tag{3.56}$$

The second condition is not preserved, since the functionals $u \mapsto u'(x)$ are not bounded with respect to the norm in the Sobolev space $W_2^1(\Gamma \setminus \mathbf{V})$.

The Robin part of the vertex conditions is not preserved in the description of the quadratic form domain; nevertheless it can be reconstructed. In other words, the quadratic form $Q^{\mathbf{S}}_{q,a}$ determines the vertex conditions uniquely. The domain of the quadratic form determines the projectors $P^m_{-1}$ and hence the subspaces $(I - P^m_{-1})\mathbb{C}^{d_m}$. The quadratic forms $\langle A^m \vec{u}(V^m), \vec{u}(V^m)\rangle_{\mathbb{C}^{d_m}}$ determine the Hermitian matrices $A^m$. Therefore the unitary matrices $S^m$ are given by the formula

$$S^m = \frac{\mathrm{i}I + A^m}{\mathrm{i}I - A^m}\, P^{m\,\perp}_{-1} \oplus \left(-P^m_{-1}\right), \text{ where } P^{m\,\perp}_{-1} := I - P^m_{-1}. \tag{3.57}$$
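The reconstruction can be tested numerically. The sketch below (ours) starts from a unitary $S$ without eigenvalue $-1$ (so that $P_{-1} = 0$ and $P_{-1}^{\perp} = I$), forms $A_S$ via (3.28), and inverts the Cayley transform to recover $S$.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

# Unitary S with eigenvalues away from -1.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
thetas = rng.uniform(-2.0, 2.0, size=d)      # all phases inside (-pi, pi)
S = Q @ np.diag(np.exp(1j * thetas)) @ Q.conj().T

I = np.eye(d)
A = 1j * (S - I) @ np.linalg.inv(S + I)      # A_S from (3.28)
assert np.allclose(A, A.conj().T)            # Hermitian

# Inverse Cayley transform: S = (iI + A)(iI - A)^{-1}.
S_rec = (1j * I + A) @ np.linalg.inv(1j * I - A)
assert np.allclose(S_rec, S)
```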

The standard vertex conditions correspond to the quadratic form

$$Q_{q,a}(u, u) = \sum_{n=1}^{N} \left\{ \int_{E_n} \left| \left(\frac{d}{dx} - \mathrm{i}a(x)\right)u(x) \right|^2 dx + \int_{E_n} q(x)|u(x)|^2\, dx \right\}, \tag{3.58}$$

where vertex terms are absent. The domain is given by all $W_2^1(\Gamma \setminus \mathbf{V})$ functions which are in addition continuous at the vertices. Starting from this quadratic form, which is the most natural candidate from the physical point of view, we get the Schrödinger operator determined by the standard vertex conditions. Hence standard vertex conditions appear if one requires that the functions from the domain of the operator are continuous at the vertices and the quadratic form contains no vertex terms.

Consider the quadratic form given by the same formula (3.58) on the domain of functions from $W_2^1(\Gamma \setminus \{V^m\}_{m=1}^M)$ without requiring any continuity at the vertices. The corresponding Schrödinger operator is defined on the domain of functions satisfying Neumann conditions at all endpoints of the edges, *i.e.* the corresponding graph consists of $N$ completely disconnected intervals.

## **Appendix 1: Important Classes of Vertex Conditions**

#### *δ- and δ′-Couplings*

It is probably worth mentioning that the continuity requirement does not necessarily lead to standard vertex conditions. All self-adjoint operators described by conditions other than standard vertex conditions are usually considered as certain point perturbations of standard operators. For each vertex the following one-parameter family of vertex conditions is usually called a *δ*-**coupling** at the vertex

$$\begin{cases} u \text{ is continuous at the vertex } V, \\ \displaystyle\sum_{x_j \in V} \partial u(x_j) = \alpha \cdot u(V), \end{cases} \quad \alpha \in \mathbb{R}. \tag{3.59}$$

Since the function $u$ is continuous at the vertex, its value $u(V)$ is well-defined. The real parameter $\alpha$ describes the strength of the $\delta$-coupling.

Another one-parameter family is sometimes called the $\delta'$-**coupling** and is in some sense dual to the $\delta$-coupling. It is described by the conditions

$$\begin{cases} \partial u(x_j) = \partial u(x_l), \quad x_j, x_l \in V, \\ \displaystyle\sum_{x_j \in V} u(x_j) = \beta \cdot \partial u(V), \end{cases} \quad \beta \in \mathbb{R}. \tag{3.60}$$

The first condition substitutes the continuity condition, while the second condition contains the parameter *β* describing the strength of the *δ*' -coupling.

The *δ*- and *δ*'-couplings can formally be considered for infinite values of the coupling parameters. The *δ*-coupling with $\alpha = \infty$ corresponds to the Dirichlet condition $u(V) = 0$, *i.e.* $u(x_j) = 0$, whereas $\beta = \infty$ leads to the Neumann condition $\partial u(V) = 0$, *i.e.* $\partial u(x_j) = 0$. Note that these Dirichlet and Neumann conditions describe completely independent edges and therefore are not properly connecting (unless of course the valence is trivial, $d = 1$).
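For readers who prefer to experiment, the *δ*-coupling can be written in the linear-relation form $A\vec u + B\,\partial\vec u = 0$ discussed in Appendix 2, where one standard self-adjointness check is that $AB^*$ be Hermitian. The following Python sketch is our own illustration, not part of the text; the matrix layout for a degree-3 vertex is one convenient choice, and the check singles out real $\alpha$:

```python
# Sketch (not from the text): the delta-coupling (3.59) at a degree-d vertex
# written in the linear-relation form  A u + B du = 0.  A standard
# self-adjointness check for such conditions is that A B* is Hermitian;
# for this layout it holds exactly when the coupling constant alpha is real.

def delta_coupling_matrices(alpha, d=3):
    A = [[0j] * d for _ in range(d)]
    B = [[0j] * d for _ in range(d)]
    for i in range(d - 1):            # continuity: u_i - u_{i+1} = 0
        A[i][i], A[i][i + 1] = 1, -1
    A[d - 1][0] = -alpha              # sum of derivatives = alpha * u(V)
    for j in range(d):
        B[d - 1][j] = 1
    return A, B

def times_adjoint(X, Y):
    """X @ Y* for square matrices stored as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * complex(Y[j][k]).conjugate() for k in range(n))
             for j in range(n)] for i in range(n)]

def is_hermitian(M, tol=1e-12):
    n = len(M)
    return all(abs(M[i][j] - complex(M[j][i]).conjugate()) < tol
               for i in range(n) for j in range(n))
```

A purely imaginary $\alpha$ breaks the Hermiticity of $AB^*$, mirroring the requirement $\alpha \in \mathbb{R}$ in (3.59).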

## *Circulant Conditions*

In many applications it is important to choose vertex conditions satisfying certain additional assumptions. In this section we shall study the case where the vertex conditions are invariant under cyclic permutations of the edges: the limit values $(\vec u, \partial\vec u)$ satisfy the vertex conditions whenever $(\mathcal R\vec u, \mathcal R\,\partial\vec u)$ satisfy the same conditions, where $\mathcal R$ is the rotation matrix

$$\mathcal{R} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}. \tag{3.61}$$

Substitution of the limit values $(\mathcal R\vec u, \mathcal R\,\partial\vec u)$ into the original vertex conditions (3.21) gives

$$i(S - I)\mathcal{R}\vec{u} = (S + I)\mathcal{R}\,\partial\vec{u}.$$

Multiplying the last equality by $\mathcal{R}^{-1}$ from the left we get

$$i(\mathcal{R}^{-1} S \mathcal{R} - I)\vec{u} = (\mathcal{R}^{-1} S \mathcal{R} + I)\,\partial\vec{u}.$$

Since the parametrisation (3.21) is one-to-one, the two vertex conditions are equivalent if and only if

$$S = \mathcal{R}^{-1} S \mathcal{R}.$$

It follows that the matrix $S$ is circulant, as was probably expected by the reader:

$$S = \begin{pmatrix} s_0 & s_{n-1} & s_{n-2} & \cdots & s_2 & s_1 \\ s_1 & s_0 & s_{n-1} & \cdots & s_3 & s_2 \\ s_2 & s_1 & s_0 & \cdots & s_4 & s_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ s_{n-2} & s_{n-3} & s_{n-4} & \cdots & s_0 & s_{n-1} \\ s_{n-1} & s_{n-2} & s_{n-3} & \cdots & s_1 & s_0 \end{pmatrix}. \tag{3.62}$$

We have already seen the following important examples of circulant vertex conditions: standard (3.42), *δ*- and *δ*' -couplings (3.59), (3.60). Circulant conditions in connection with PT -symmetric operators on graphs have been discussed in [34].
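The defining property $S = \mathcal R^{-1} S \mathcal R$ is easy to test numerically. The Python sketch below (our own illustration, using plain lists rather than any linear-algebra library) builds a circulant matrix from its first column and checks that it commutes with the shift matrix (3.61); it also confirms that the standard vertex scattering matrix, with entries $2/d - \delta_{ij}$, is circulant:

```python
# Sketch (our own illustration, plain Python lists): a circulant matrix
# built from its first column commutes with the cyclic-shift matrix R of
# (3.61), and the standard scattering matrix 2/d - delta_ij is circulant.

def shift_matrix(n):
    """R from (3.61): (R v)_i = v_{(i+1) mod n}."""
    return [[1 if j == (i + 1) % n else 0 for j in range(n)] for i in range(n)]

def circulant(first_column):
    """Entry (i, j) equals s_{(i - j) mod n}, matching (3.62)."""
    n = len(first_column)
    return [[first_column[(i - j) % n] for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

Since the entries here are exact integers and Gaussian rationals, the commutation test can be done with exact equality rather than a tolerance.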

## *'Real' Conditions*

The standard Schrödinger equation possesses an important property: its eigenfunctions can always be chosen real, since if $\psi$ is an eigenfunction, then $\overline{\psi}$ is also an eigenfunction. This property is also known as time-reversal symmetry. Let us study which vertex conditions possess this property. We have to check under which conditions the limit values $(\overline{\vec u}, \partial\overline{\vec u})$ satisfy (3.21) whenever $(\vec u, \partial\vec u)$ satisfy the same equation.

The limit values $(\overline{\vec u}, \partial\overline{\vec u})$ satisfy (3.21) if and only if

$$-i(\overline{S} - I)\vec{u} = (\overline{S} + I)\,\partial\vec{u}$$

holds.

holds. Multiplying the last equality by *(S)*−<sup>1</sup> <sup>=</sup> *<sup>S</sup>* <sup>∗</sup> <sup>=</sup> *<sup>S</sup>*<sup>t</sup> we arrive at

$$i(S^{\mathrm{t}} - I)\vec{u} = (S^{\mathrm{t}} + I)\,\partial\vec{u}.$$

These vertex conditions are equivalent to (3.21) if and only if

$$S = S^{\mathrm{t}},$$

*i.e. S* is a complex symmetric matrix (but not necessarily Hermitian).

Let us note that all 'real' vertex conditions leading to energy independent vertex scattering matrices are described by real symmetric matrices. This fact is important for physical applications: physically relevant models are usually described by matrices with real entries, and for such models the requirement that the corresponding Hamiltonian is time-reversal invariant leads directly to scaling-invariant vertex scattering matrices.

## *Indistinguishable Edges*

Let us study which class of vertex conditions corresponds to indistinguishable edges, *i.e.* vertex conditions invariant under arbitrary permutations of the edges. The corresponding matrices *S* satisfy the equation

$$SP\_{\sigma} = P\_{\sigma}S,\tag{3.63}$$

where *Pσ* is any permutation matrix corresponding to permutation *σ*. Every matrix *S* satisfying (3.63) is of the form

$$S = \begin{pmatrix} R & T & T & \cdots \\ T & R & T & \cdots \\ T & T & R & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$

where *R, T* are arbitrary complex numbers. In order for *S* to be unitary and irreducible one has to require that

$$\begin{cases} |R|^2 + (d-1)|T|^2 = 1, \\ R\overline{T} + T\overline{R} + (d-2)|T|^2 = 0, \\ T \neq 0. \end{cases}$$

The reflection coefficient $R$ may be equal to zero only if $d = 2$, since otherwise the second equality would imply $T = 0$ as well.

Consider the case of real $T$ and $R$. The corresponding system

$$\begin{cases} R^2 + (d-1)T^2 = 1, \\ 2R + (d-2)T = 0, \end{cases}$$

has just two solutions

$$\begin{cases} T = \dfrac{2}{d}, \\[1mm] R = -\dfrac{d-2}{d} = -1 + T, \end{cases} \qquad\text{and}\qquad \begin{cases} T = -\dfrac{2}{d}, \\[1mm] R = \dfrac{d-2}{d} = 1 + T. \end{cases}$$

The first solution corresponds to standard vertex conditions (3.41) (which coincides with the *δ*-coupling with *α* = 0), the second solution—to *δ*' -coupling (3.60) with *β* = 0*.* This fact underlines the importance of the family of *δ*' -couplings, which was introduced originally just as a certain dual to the family of *δ*-couplings. One can read more about such vertex conditions in [486].
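A quick numerical sanity check (our own sketch, not from the text) confirms that these two families solve the system above for every degree $d$:

```python
# Sketch verifying numerically (not proving) that T = 2/d, R = -(d-2)/d and
# T = -2/d, R = (d-2)/d solve the system for indistinguishable edges.

def solves_system(R, T, d, tol=1e-12):
    eq1 = abs(R * R + (d - 1) * T * T - 1) < tol     # R^2 + (d-1)T^2 = 1
    eq2 = abs(2 * R + (d - 2) * T) < tol             # 2R + (d-2)T = 0
    return eq1 and eq2 and T != 0
```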

## *Equi-transmitting Vertices*

In quantum mechanics, transition probabilities *ρij* are given by squared absolute values of the scattering coefficients *ρij* = |*sij* | <sup>2</sup>*.* Therefore the edges meeting at a vertex are equivalent, from the quantum mechanical point of view, if all nondiagonal entries of the vertex scattering matrix have the same absolute value. The diagonal elements have equal absolute values as well.

**Definition 3.8** ([263]) A $d \times d$ unitary Hermitian matrix $S$ is called **equi-transmitting** if and only if all its non-diagonal entries have the same absolute value:

$$|s_{ij}| = |s_{kl}|, \qquad i \neq j, \ k \neq l.$$
Equi-transmitting unitary matrices have attracted attention in recent years with the hope of *repairing* the apparently non-physical behaviour of the vertex scattering matrices (3.41) corresponding to standard vertex conditions:

$$S^{\rm st} \sim -I \quad \text{ for} \quad d \gg 1. \tag{3.64}$$

In other words, vertices of large degree are similar to Dirichlet vertices. This goes against the physical intuition that by increasing the number of edges one increases the penetrability of the vertex.
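The effect is easy to see numerically. The following Python sketch (an illustration of ours, not from the text) uses the standard vertex scattering matrix with entries $s_{ij} = 2/d - \delta_{ij}$ from (3.41): it is unitary for every $d$, yet the reflection probability $|s_{jj}|^2$ tends to $1$ and each transmission probability tends to $0$ as $d$ grows:

```python
# Sketch of (3.64), our own illustration: the standard vertex scattering
# matrix s_ij = 2/d - delta_ij (formula (3.41)) is unitary for every d,
# while for large d the reflection probability tends to 1 and each
# transmission probability tends to 0: the vertex becomes almost Dirichlet.

def standard_scattering(d):
    return [[2 / d - (1 if i == j else 0) for j in range(d)] for i in range(d)]

def is_unitary(S, tol=1e-12):
    d = len(S)
    return all(abs(sum(S[i][k] * S[j][k] for k in range(d))   # real entries
                   - (1 if i == j else 0)) < tol
               for i in range(d) for j in range(d))

def reflection_probability(d):
    return abs(standard_scattering(d)[0][0]) ** 2

def transmission_probability(d):
    return abs(standard_scattering(d)[0][1]) ** 2
```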

As a first step, **reflectionless equi-transmitting matrices** leading to scaling-invariant vertex conditions were studied [358]. Reflectionless means that all diagonal elements are zero. Such matrices exist only in even dimensions: the trace is zero, while the eigenvalues of a Hermitian unitary matrix are just $\pm 1$, and their sum can vanish only if the dimension $d$ is even. It is relatively easy to characterise these matrices in low dimensions $d = 2, 4, 6$, which is done in the article mentioned above.
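The parity argument can be illustrated on the smallest case (a sketch of ours; the $d = 2$ matrix below simply swaps the two edges):

```python
# Sketch (our own): a Hermitian unitary matrix has eigenvalues +1 and -1,
# so its trace equals 2 d^+ - d and can vanish only for even d; the
# smallest reflectionless example, d = 2, swaps the two edges.

S2 = [[0, 1],
      [1, 0]]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def is_involution(M):
    """For a real symmetric matrix, M^2 = I <=> M is Hermitian and unitary."""
    n = len(M)
    M2 = [[sum(M[i][k] * M[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return M2 == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```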

**Equi-transmitting matrices** leading to scaling-invariant vertex conditions were investigated in [263, 348, 486, 487]. The class of equi-transmitting matrices is invariant under multiplication by −1, hence without loss of generality we may assume the number *ν*+ of positive diagonal elements is not less than the number of negative ones. In this case the trace of *S* is equal to

$$\operatorname{Tr}(\mathcal{S}) = (2\nu^+ - d)r \ge 0,$$

where *r* = |*sjj* | and *ν*<sup>+</sup> ≥ *d/*2*.* On the other hand, the matrix *S* is unitary and Hermitian and therefore its spectrum is given by ±1*.* Denoting by *d*<sup>+</sup> the multiplicity of +1 we calculate the trace using the spectrum

$$\operatorname{Tr}\left(\mathcal{S}\right) = 2d^+ - d,$$

implying *d*<sup>+</sup> ≥ *d/*2, since the trace is non-negative as calculated above. Comparing these formulas we get

$$r = \frac{2d^+ - d}{2\nu^+ - d}.\tag{3.65}$$

In the special case *ν*<sup>+</sup> = *d*<sup>+</sup> = *d/*2 the reflection amplitude *r* remains undetermined.

If $\nu^+ = d^+ > d/2$, then $r = 1$, which means that the corresponding unitary matrix $S$ is diagonal and determines vertex conditions which are not properly connecting (unless $d = 1$ of course). Moreover, one needs $d^+ < \nu^+$ in order to guarantee that $r < 1$. Hence all possible values of $r$ are given by formula (3.65), where the parameters $d^+$ and $\nu^+$ should satisfy:

$$d/2 < d^+ < \nu^+ \le d.$$

All possible values of $r$ in odd dimensions are obtained by letting $d^+$ and $\nu^+$ run through the natural numbers satisfying the above inequalities. Surprisingly, not all cases described by (3.65) can be realised. If $d$ is even, then $r$ may in addition be arbitrary, provided $d^+ = \nu^+ = d/2$.

Equi-transmitting matrices in low dimensions (*d* ≤ 6) are completely described in [348]. It turns out that matrices equivalent to those corresponding to standard vertex conditions play a very exceptional role. For example for *d* = 5 admissible values of *(d*+*, ν*+*)* are *(*4*,* 5*), (*3*,* 5*)* and *(*3*,* 4*)* leading to the following possible values of *r* respectively:

$$r = 3/5, \ 1/5, \ 1/3.$$

The case $r = 3/5$ corresponds to standard vertex conditions and hence is realisable. The other two cases $r = 1/5$ and $r = 1/3$ do not lead to any equi-transmitting matrix. The same phenomenon is observed when $d = 3$. For more details see [411, 412]. In [136, 137], approximations of low-dimensional equi-transmitting matrices are discussed.
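Enumerating the admissible pairs for formula (3.65) is mechanical; the following Python sketch (our own, using exact rational arithmetic) reproduces the three candidate values for $d = 5$ quoted above:

```python
# Sketch: enumerating the admissible pairs d/2 < d^+ < nu^+ <= d and
# evaluating (3.65) with exact rational arithmetic; for d = 5 this yields
# the three candidate amplitudes r = 1/5, 1/3, 3/5 (of which only r = 3/5
# is realised by an equi-transmitting matrix).

from fractions import Fraction

def candidate_r(d):
    values = set()
    for dplus in range(d + 1):
        for nuplus in range(dplus + 1, d + 1):      # d^+ < nu^+ <= d
            if 2 * dplus > d:                       # d/2 < d^+
                values.add(Fraction(2 * dplus - d, 2 * nuplus - d))
    return sorted(values)
```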

The case of large dimensions is much less studied. Equi-transmitting unitary matrices can be constructed using Dirichlet characters [263], but the construction heavily depends on the dimension. Standard vertex conditions lead to equi-transmitting matrices, proving that such matrices exist in any dimension. Reflectionless equi-transmitting matrices may exist in even dimensions only, as discussed above, but it is not clear whether they are realisable in every even dimension.

Studies of equi-transmitting matrices may be extended by considering unitary symmetric (not necessarily Hermitian) matrices. An interesting example of such a matrix for $d = 5$ was constructed in [263]:

$$S = \frac{1}{2} \begin{pmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & \omega & \omega^2 \\ 1 & 1 & 0 & \omega^2 & \omega \\ 1 & \omega & \omega^2 & 0 & 1 \\ 1 & \omega^2 & \omega & 1 & 0 \end{pmatrix}, \quad \text{where } \omega = e^{2\pi i/3}.$$
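The claimed properties of this matrix can be verified numerically; the following check (our own sketch, using only the standard library) confirms that it is symmetric, unitary and equi-transmitting with zero diagonal:

```python
# Numerical check that the 5 x 5 matrix above is symmetric, unitary and
# equi-transmitting: zero diagonal and all off-diagonal moduli equal 1/2.

import cmath

w = cmath.exp(2j * cmath.pi / 3)          # omega = e^{2 pi i / 3}
S = [[x / 2 for x in row] for row in [
    [0, 1,     1,     1,     1],
    [1, 0,     1,     w,     w * w],
    [1, 1,     0,     w * w, w],
    [1, w,     w * w, 0,     1],
    [1, w * w, w,     1,     0],
]]

def unitary_defect(S):
    """Largest entry of S S* - I in absolute value."""
    d = len(S)
    return max(abs(sum(S[i][k] * S[j][k].conjugate() for k in range(d))
                   - (1 if i == j else 0))
               for i in range(d) for j in range(d))
```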

# **Appendix 2: Parametrisation of Vertex Conditions: Historical Remarks**

It is almost impossible to mention all articles where vertex conditions for differential operators on graphs are considered. As we already mentioned the whole set of vertex conditions giving all possible self-adjoint extensions of *L*min can be described either using von Neumann formulas, or the theory of boundary triplets [165, 166, 243, 301, 445, 446], or Lagrangian planes corresponding to the symplectic form given by (3.4). We shall just mention here the most important parametrisations.

## *Parametrisation Via Linear Relations*

V. Kostrykin and R. Schrader [309] suggested the following explicit parametrisation of vertex conditions

$$A_1 \vec{u} + B_1 \,\partial\vec{u} = 0, \tag{3.66}$$

where $A_1$ and $B_1$ are two $d \times d$ matrices satisfying the following conditions:

$$A_1 B_1^* = B_1 A_1^*, \qquad \operatorname{rank}\,(A_1\ B_1) = d.$$
The first condition is needed to guarantee that the operator is symmetric. The second condition says that formula (3.66) imposes sufficiently many independent conditions on the functions.

A similar parametrisation of all possible vertex conditions was given by T. Aktosun, M. Klaus, and R. Weder in [22]:

$$-B_2^* \vec{u} + A_2^* \,\partial\vec{u} = 0, \tag{3.67}$$

where $A_2$ and $B_2$ are two $v \times v$ matrices satisfying the following relations:

$$A_2^* B_2 = B_2^* A_2, \qquad A_2^* A_2 + B_2^* B_2 > 0.$$
These two parametrisations are completely equivalent and parametrize all possible self-adjoint extensions of the minimal operator. Their advantage is that the matrices *A* and *B* can often be chosen with integer entries (making calculations easier). For example, the standard vertex conditions can be written as (3.35) using just integers.
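As a small illustration (our own sketch) of the integer-matrix advantage, the standard conditions at a degree-3 vertex can be written in the form (3.66) with the matrices below; that $AB^*$ is Hermitian and that the pair has maximal rank can then be checked with exact integer arithmetic:

```python
# Sketch (our own choice of matrices): the standard conditions at a
# degree-3 vertex in the form (3.66), A u + B du = 0, with integer entries.
# We check that A B^T is symmetric (the Hermitian condition for real
# matrices) and that A A^T + B B^T is invertible, which is equivalent to
# (A, B) having maximal rank.

A = [[1, -1, 0],    # u_1 = u_2
     [0, 1, -1],    # u_2 = u_3
     [0, 0, 0]]
B = [[0, 0, 0],
     [0, 0, 0],
     [1, 1, 1]]     # du_1 + du_2 + du_3 = 0

def mul_T(X, Y):
    """X @ Y^T for real matrices stored as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
```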

Both conditions (3.66) and (3.67) can be multiplied on the left by any invertible matrix without changing the set of admissible functions. It follows that such parametrisations are not unique, and therefore their use for inverse problems is limited. Moreover, it might be difficult to determine whether vertex conditions written in the form (3.66) or (3.67) really connect together all limiting values of $u$ at the vertex $V$, or whether the vertex can be split into two (see the discussion in [355] for details).

## *Parametrisation Using Hermitian Operators*

Formula (3.66) determines a certain linear relation for the limit values $\vec u, \partial\vec u$. Therefore, it is natural to parameterise all such linear relations using the linear subspace $(I - P_{-1})\mathbb{C}^v = \left(\operatorname{Ker}\,(S + I)\right)^{\perp}$ and the Hermitian operator $A_S = (I - P_{-1})\, i\, \frac{I - S}{I + S}\, (I - P_{-1})$ acting on this subspace. Such a parametrisation was suggested by P. Kuchment [326], and it is given by formula (3.27) (in our notations).

## *Unitary Matrix Parametrisation*

The first explicit parametrisation of vertex conditions using unitary matrices was suggested by M. Harmer [255–258]. It is almost identical to the parametrisation (3.21), but its relation to the vertex scattering matrix remained hidden. The idea to parametrise vertex conditions via the vertex scattering matrix is clear from the physical point of view, and it was realised independently by P. Kurasov and M. Nowaczyk [347], leading to the parametrisation (3.21). As we already mentioned, this parametrisation of vertex conditions is the most suitable from our point of view.

**Problem 9** Consider the star graph formed by three semi-infinite edges $[x_j, \infty)$, $j = 1, 2, 3$. Express the standard vertex conditions

$$\begin{cases} u(x_1) = u(x_2) = u(x_3), \\ \partial u(x_1) + \partial u(x_2) + \partial u(x_3) = 0, \end{cases}$$

in the form (3.21).
Are these vertex conditions properly connecting and scaling-invariant?

**Problem 10** Consider the lasso graph depicted in Fig. 2.5 with the magnetic Schrödinger operator satisfying the standard vertex conditions at the vertex, *i.e.* the operator $L^{\mathrm{st}}_{0,a}$. Assume that the electric potential is zero, $q(x) = 0$, everywhere on $\Gamma$, while the magnetic potential is zero on the semi-infinite edge. Let us denote by $\Phi$ the flux of the magnetic field through the loop: $\Phi = \int_{x_1}^{x_2} a(x)\, dx$. Let $U_a$ be the unitary transformation $u(x) \mapsto \exp\left(-i \int_{x_1}^{x} a(y)\, dy\right) u(x)$ removing the magnetic potential on the loop. Consider the Laplacian

$$L_{\Phi} = U_a L^{\mathrm{st}}_{0,a} U_a^{-1}.$$

Determine the vertex conditions satisfied by the functions from the domain of $L_{\Phi}$.


**Problem 11** Vertex conditions can be written in the form (3.21), where $S = S_{\mathrm v}(1)$ is used as a parameter. How should formula (3.21) be modified so that $S_{\mathrm v}(k_0)$ is used as a parameter instead of $S_{\mathrm v}(1)$, for $k_0 \in \mathbb{R}$, $k_0 \neq 1$?

**Problem 12** Let $\Gamma_5$ be a graph formed by 4 edges $[x_{2j-1}, x_{2j}]$, $j = 1, 2, 3, 4$. Let $L$ be the corresponding Laplace operator defined on the domain of functions satisfying the vertex conditions:

$$A \begin{pmatrix} u(x_1) \\ u(x_2) \\ \vdots \\ u(x_8) \end{pmatrix} = B \begin{pmatrix} u'(x_1) \\ -u'(x_2) \\ u'(x_3) \\ -u'(x_4) \\ u'(x_5) \\ -u'(x_6) \\ u'(x_7) \\ -u'(x_8) \end{pmatrix}, \tag{3.68}$$

where $A$ and $B$ are explicit $8 \times 8$ integer matrices.

The corresponding vertex scattering matrix is energy independent. Reconstruct the metric graph taking into account that the vertex conditions respect connectivity of the graph.

Write the vertex conditions using the other two standard parametrisations.


*Hint*: Use the fact that the vertex conditions lead to an energy independent vertex scattering matrix and, therefore, can be written using projectors as (3.34) or (3.33). Hence it is enough to calculate the kernels of the matrices on the two sides of (3.68). The corresponding kernels should be orthogonal and span $\mathbb{C}^8$.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Elementary Spectral Properties of Quantum Graphs**

Our first step will be to give a rigorous, self-contained definition of a quantum graph: a Schrödinger operator on a metric graph. We shall also start looking at its spectral properties depending on whether the underlying finite metric graph contains non-compact edges or not. We shall do that without deriving the secular equation for the spectrum, using general spectral theory methods instead. Our main tool will be a comparison between the differential operator on the metric graph and the Dirichlet operator determined by the same differential expression on the set of independent edges with Dirichlet conditions at all endpoints. In this way we shall be able to prove that the spectrum of finite compact graphs is discrete and satisfies the Weyl asymptotics independently of the particular choice of the potentials and vertex conditions. In the case where non-compact edges are present we shall describe the absolutely continuous component of the spectrum. The last section will be devoted to the properties of the ground state eigenfunction; in particular we derive the most general form of vertex conditions that guarantee that the ground state can be chosen positive.

# **4.1 Quantum Graphs as Self-adjoint Operators**

As we already mentioned, quantum graphs should be considered as triples formed by a metric graph, a differential operator and vertex conditions. Let us provide a formal definition of the operator $L^{\mathbf S}_{q,a}(\Gamma)$ in the case of the most general parameters considered in this book. Assume that the parameters $\Gamma$, $q$, $a$ and $\mathbf S$ satisfy the following assumptions:

• the graph $\Gamma$ is a metric graph formed by $N_c$ compact and $N_i$ infinite edges;

• the (electric) potential $q$ is a real-valued absolutely integrable potential on $\Gamma$ satisfying in addition the Faddeev condition (4.2):

$$q \in L\_1(\Gamma),\tag{4.1}$$

$$\int\_{\Gamma} (1+|\mathbf{x}|) \cdot |q(\mathbf{x})| dx < \infty;\tag{4.2}$$

• the magnetic potential *a* is a real-valued uniformly bounded function, continuous on every edge

$$a \in C(\Gamma \setminus \mathbf{V});\tag{4.3}$$

• the vertex matrix $\mathbf S$ is a unitary properly connecting matrix; in other words, $\mathbf S$ is a collection of $M$ irreducible unitary $d^m \times d^m$ matrices $S^m$, where $d^m$ is the degree of the vertex $V^m$.

Consider first any edge $E_n$ and the differential expression (2.17)

$$\tau_{q,a} = \left( i\frac{d}{dx} + a(x) \right)^2 + q(x)$$

defined on the functions $u$ from the Sobolev space $W_2^1(E_n)$ which are mapped by $\tau_{q,a}$ to a function from $L_2(E_n)$,

$$\tau_{q,a} u = f \in L_2(E_n). \tag{4.4}$$

Every such function $u$ is continuous as a function from $W_2^1(E_n)$ and satisfies the differential equation

$$\left(i\frac{d}{dx} + a\right)^2 u + \underbrace{q \underbrace{u}_{\in C(E_n)}}_{\in L_{1,loc}(E_n)} = \underbrace{f}_{\in L_2(E_n)}\tag{4.5}$$

with a certain $f \in L_2(E_n)$.<sup>1</sup> The function $u$ is continuous, hence $qu$ is locally absolutely integrable. Moreover, every square integrable function is locally absolutely

<sup>1</sup> If the graph $\Gamma$ is compact (no semi-infinite edges are present), then one may drop the subscript *loc* in the formula above.

integrable. Summing up we have

$$i\frac{d}{dx}\left(i\frac{d}{dx} + a\right)u = f - qu - a\underbrace{\left(i\frac{d}{dx} + a\right)u}_{\in L_2(E_n)}.\tag{4.6}$$

The magnetic potential $a$ is a continuous function and $\left(i\frac{d}{dx} + a\right)u$ is square integrable, hence their product is locally square integrable and locally absolutely integrable. It follows that the derivative of $\left(i\frac{d}{dx} + a\right)u$ is a locally absolutely integrable function, hence $\left(i\frac{d}{dx} + a\right)u$ is continuous, and therefore $u'$ is also continuous.

Since every function *u* from the domain of the differential operator is continuous and has continuous first derivative, *i.e.* 

$$u, u' \in C(E\_n),$$

we may introduce the following vectors associated with each vertex $V^m$:

$$\begin{aligned} \vec{u}(V^m) &:= \{ u(x_j) \}_{x_j \in V^m} \in \mathbb{C}^{d^m}, \\ \partial \vec{u}(V^m) &:= \{ \partial u(x_j) \}_{x_j \in V^m} \in \mathbb{C}^{d^m}, \end{aligned} \tag{4.7}$$

where the extended normal derivatives *∂u(xj )* were defined in (2.26) and *u(xj )* denotes the limit value of the function *u* from inside the corresponding edge. Then following (3.21) we impose the vertex conditions that any function from the domain of the operator should satisfy

$$i(S^m - I)\vec{u}(V^m) = (S^m + I)\partial\vec{u}(V^m), \ \ m = 1, 2, \dots, M,\tag{4.8}$$

where $S^m$ are the irreducible unitary $d^m \times d^m$ matrices forming the $D \times D$ matrix $\mathbf S$ ($D = 2N_c + N_i$). **Unitarity** of $S^m$ is needed to ensure that the differential operator is symmetric and even self-adjoint. **Irreducibility** of the matrices $S^m$ is needed to ensure that none of the vertices can be divided so that the set of functions satisfying the vertex conditions is preserved, *i.e.* that the vertex conditions reflect how different edges in $\Gamma$ are connected to each other. We call such vertex conditions *properly connecting*.

**Definition 4.1** The operator $L^{\mathbf S}_{q,a}(\Gamma)$ acting in the Hilbert space $L_2(\Gamma)$ is defined by the differential expression $\tau_{q,a} = \left(i\frac{d}{dx} + a(x)\right)^2 + q(x)$ on the domain consisting of functions from the Sobolev space $W_2^1(\Gamma \setminus \mathbf V) = \oplus_{n=1}^N W_2^1(E_n)$ such that their image under $\tau_{q,a}$ lies inside the Hilbert space $L_2(\Gamma)$, *i.e.*

$$\tau_{q,a} u = \left(\left(i\frac{d}{dx} + a(x)\right)^2 + q(x)\right) u \in L_2(\Gamma),$$

and the vertex conditions (4.8) are satisfied.

Let us prove that the operator $L^{\mathbf S}_{q,a}(\Gamma)$ defined above is symmetric. Given any two functions $u$ and $v$ from the domain of the operator, the following holds:

$$\begin{aligned}
&\int_{E_n} \overline{\left(\left(i\frac{d}{dx} + a(x)\right)^2 + q(x)\right)u(x)}\, v(x)\, dx
- \int_{E_n} \overline{u(x)} \left(\left(i\frac{d}{dx} + a(x)\right)^2 + q(x)\right) v(x)\, dx \\
&= -\int_{E_n} \overline{\left(\frac{d}{dx} - ia(x)\right)^2 u(x)}\, v(x)\, dx
+ \int_{E_n} \overline{u(x)} \left(\frac{d}{dx} - ia(x)\right)^2 v(x)\, dx \\
&= -\overline{\left(u'(x_{2n}) - ia(x_{2n})u(x_{2n})\right)}\, v(x_{2n})
+ \overline{\left(u'(x_{2n-1}) - ia(x_{2n-1})u(x_{2n-1})\right)}\, v(x_{2n-1}) \\
&\quad + \overline{u(x_{2n})} \left(v'(x_{2n}) - ia(x_{2n})v(x_{2n})\right)
- \overline{u(x_{2n-1})} \left(v'(x_{2n-1}) - ia(x_{2n-1})v(x_{2n-1})\right) \\
&= \overline{\partial u(x_{2n})}\, v(x_{2n}) + \overline{\partial u(x_{2n-1})}\, v(x_{2n-1})
- \overline{u(x_{2n})}\, \partial v(x_{2n}) - \overline{u(x_{2n-1})}\, \partial v(x_{2n-1}).
\end{aligned}$$

Integration by parts is possible, since we have proven that the first derivatives are continuous. Note that in the case of semi-infinite interval we get a contribution from just one endpoint. Putting all *N* edges together, we arrive at

$$\begin{aligned}
&\langle \tau_{q,a} u, v \rangle_{L_2(\Gamma)} - \langle u, \tau_{q,a} v \rangle_{L_2(\Gamma)} \\
&= \sum_{m=1}^{M} \left( \langle \partial \vec{u}(V^m), \vec{v}(V^m) \rangle_{\mathbb{C}^{d^m}} - \langle \vec{u}(V^m), \partial \vec{v}(V^m) \rangle_{\mathbb{C}^{d^m}} \right).
\end{aligned} \tag{4.9}$$

The terms associated with each vertex cancel separately, since both $(\vec u(V^m), \partial\vec u(V^m))$ and $(\vec v(V^m), \partial\vec v(V^m))$ satisfy the vertex conditions (4.8). The cancellation follows from Theorem 3.4; one may also use the proof of Theorem 3.2 with $A$ and $B$ given by (3.19). An alternative proof of the symmetry of $L^{\mathbf S}_{q,a}$ can be given using the Hermitian parameterisation of vertex conditions described in Sect. 3.4.

**Problem 13** Show that the operator *L***<sup>S</sup>** *q,a* is symmetric, using Hermitian parameterisation of the vertex conditions (3.27) instead of (4.8).

Thus we have proven that the operator $L^{\mathbf S}_{q,a}(\Gamma)$ is symmetric, but in quantum mechanics one usually deals with self-adjoint operators only. Let $L$ be a densely defined linear operator in the Hilbert space $\mathcal H = L_2(\Gamma)$. Then the adjoint operator $L^*$ is defined on the domain $\operatorname{Dom}(L^*)$ formed by all functions $v \in \mathcal H$ such that $\langle Lu, v \rangle$ is a bounded linear functional with respect to $u \in \operatorname{Dom}(L)$.<sup>2</sup> Every bounded functional, in accordance with the F. Riesz representation theorem, is given by the scalar product with a certain element $h \in \mathcal H$

$$
\langle L\mu, \upsilon\rangle = \langle \mu, h\rangle.
$$

Then the action of the adjoint operator is given by $L^* v = h$. The vector $h$ is unique, since the functions $u$ span $\operatorname{Dom}(L)$, which in turn is dense in the Hilbert space. An operator $L$ is called self-adjoint if and only if $L^* = L$, where it is required that not only the actions but also the domains of the operators $L$ and $L^*$ coincide. Every self-adjoint operator is symmetric (this follows directly from the definition of the adjoint operator), but the opposite implication does not always hold. Our next step is to prove that the whole family of operators $L^{\mathbf S}_{q,a}(\Gamma)$ is self-adjoint. To make the formulas more transparent, it is wise to eliminate the magnetic potential $a$ first and prove that the operators $L^{\mathbf S}_q(\Gamma) \equiv L^{\mathbf S}_{q,0}(\Gamma)$ are self-adjoint for any $q$ and $\mathbf S$. Indeed, consider the unitary transformation in $L_2(E_n)$ given as multiplication by a unimodular function calculated using the magnetic potential

$$U_a^n : u(x) \mapsto \exp\left(i \int_{x_{2n-1}}^{x} a(y)\, dy\right) u(x). \tag{4.10}$$

Then the corresponding differential expressions are related via

$$\tau_{q,a} = U_a^n\, \tau_{q,0}\, (U_a^n)^{-1}. \tag{4.11}$$

It follows that any magnetic Schrödinger operator $L^{\mathbf S}_{q,a}(\Gamma)$ is unitarily equivalent to a certain (non-magnetic) Schrödinger operator $L^{\tilde{\mathbf S}}_{q}$, which may be parameterised by a different vertex matrix $\tilde{\mathbf S}$, but with the same electric potential $q$.<sup>3</sup>

<sup>2</sup> Remember that we are dealing with unbounded operators, hence <*Lu, v*<sup>&</sup>gt; determines a linear functional with respect to *u*, but it might be unbounded with respect to the original Hilbert space norm. The functional is bounded with respect to *u* if and only if there exists a constant *Cv* such that |<*Lu, v*>| ≤ *Cv*║*u*║*.*

<sup>3</sup> The matrices $\tilde{\mathbf S}$ can easily be calculated from $\mathbf S$ using the integrals of the magnetic potential along the edges; we are going to return to this question in Chap. 16.
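The gauge identity (4.11) can also be checked numerically. The finite-difference sketch below is our own construction (the potentials, the test function and the step size are arbitrary choices): it applies $\tau_{q,a}$ directly and compares the result with conjugating $\tau_{q,0}$ by the multiplication operator $U_a$:

```python
# Finite-difference sketch (our own construction; potentials, test function
# and step size are arbitrary choices) of the gauge identity (4.11):
# applying tau_{q,a} directly agrees with conjugating tau_{q,0} by the
# multiplication operator U_a, up to discretisation error.

import math, cmath

h = 1e-3                                            # finite-difference step

a = lambda x: 0.7 + 0.3 * math.sin(x)               # magnetic potential
q = lambda x: math.cos(2 * x)                       # electric potential
u = lambda x: cmath.exp(1.3j * x) * math.sin(x)     # smooth test function
A = lambda x: 0.7 * x - 0.3 * (math.cos(x) - 1)     # primitive of a, A' = a

def ddx(f):
    """Central-difference derivative."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def tau(qf, af, f):
    """Discretised  tau_{q,a} f = (i d/dx + a)^2 f + q f."""
    g = lambda x: 1j * ddx(f)(x) + af(x) * f(x)     # (i d/dx + a) f
    return lambda x: 1j * ddx(g)(x) + af(x) * g(x) + qf(x) * f(x)

w = lambda x: cmath.exp(-1j * A(x)) * u(x)          # w = U_a^{-1} u
lhs = tau(q, a, u)                                  # tau_{q,a} u
rhs = lambda x: cmath.exp(1j * A(x)) * tau(q, lambda t: 0.0, w)(x)

gauge_error = max(abs(lhs(x) - rhs(x))
                  for x in [0.05 * k for k in range(1, 40)])
```

With the step $h = 10^{-3}$ the two sides agree to roughly the square of the step size, as expected for central differences.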

**Theorem 4.2** *The Schrödinger operator $L^{\mathbf S}_{q,a}(\Gamma)$ is a self-adjoint operator in the Hilbert space $L_2(\Gamma)$, provided $\Gamma$ is a finite metric graph, the potentials $q$ and $a$ satisfy conditions (4.1), (4.2) and (4.3), and the matrix $\mathbf S$ is unitary.*

*Proof* We are going to prove the theorem by calculating the adjoint operator directly from the definition. As already mentioned, it is enough to prove the theorem for the zero magnetic potential.

We prove first that the operator is densely defined. On each edge $E_n = [x_{2n-1}, x_{2n}]$ introduce the function

$$\sigma(x) := \int_{x_{2n-1}}^{x} q(y)\, dy,$$

which is continuous. Then the set of functions

$$v(x) = e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy}\, w(x), \quad w \in C_0^\infty(\Gamma \setminus \mathbf{V}),$$

is dense and belongs to the domain of the operator since using the identity

$$v' - \sigma v = e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy}\, w'(x)$$

we have

$$\begin{aligned}
-v'' + q(x)v &= -\left[v' - \sigma(x)v\right]' - \sigma(x)\left[v' - \sigma(x)v\right] - \sigma^2(x)v \\
&= -\left(e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy}\, w'\right)' - \sigma(x)\, e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy}\, w' - \sigma^2(x)\, e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy}\, w \\
&= -e^{\int_{x_{2n-1}}^{x} \sigma(y)\, dy} \left(w'' + 2\sigma(x) w' + \sigma^2(x) w\right),
\end{aligned}$$

where the right hand side clearly belongs to $L_2(\Gamma)$. Here we also took into account that every function from $C_0^\infty(\Gamma \setminus \mathbf{V})$ has zero limit values at every vertex, and therefore trivially satisfies the vertex conditions.

Consider now any single edge, say $E_1 = [x_1, x_2]$, and a function $u \in C_0^\infty(x_1, x_2)$. We need to establish which functions $v \in L_2(E_1)$ determine a bounded linear functional via

$$\langle -u'' + qu, v \rangle_{L_2(E_1)} = \int_{x_1}^{x_2} \overline{(-u'' + qu)}\, v\, dx = \int_{x_1}^{x_2} \overline{u}\, h\, dx \equiv \langle u, h \rangle_{L_2(E_1)},$$

where *h* is a certain element from *L*2*(E*1*).* It follows that *v* is a solution of the differential equation

$$-v'' + \underbrace{q(x)}_{= \overline{q(x)}} v = h(x)$$

in the distributional (generalised) sense. Let us rewrite the equation using the continuous function $\sigma(x) = \int_{x_1}^{x} q(y)\, dy$:

$$-\left[v' - \sigma(\mathbf{x})v\right]' - \sigma(\mathbf{x})\left[v' - \sigma(\mathbf{x})v\right] - \sigma^2(\mathbf{x})v = h(\mathbf{x}).$$

It follows that the extended first derivative *v*[1] *(x)* := *v*' − *σ (x)v* satisfies the first order ordinary differential equation

$$-\frac{d}{dx}v^{[1]} - \sigma(\mathbf{x})v^{[1]} = \underbrace{h(\mathbf{x}) + \sigma^2(\mathbf{x})v}\_{=:\,\mathbf{g}(\mathbf{x})} \in L\_2(E\_1) \subset L\_1(E\_1).$$

Every solution to the equation is a continuous function, hence we have

$$v^{[1]}(x) \equiv v' - \sigma(x)v = -e^{-\int\_{x\_1}^{x} \sigma(y)dy} \left(\int\_{x\_1}^{x} e^{\int\_{x\_1}^{s} \sigma(y)dy}\, g(s)\,ds + C\right).$$

It follows that the function $v$ solves a first order ordinary differential equation of the same type, now with a continuous right hand side. Repeating the same analysis we conclude that the function $v$ is continuous, which in turn implies that even its first derivative is continuous. In particular $v$ belongs to $W\_2^1(E\_1)$. The same analysis applies to all edges in $\Gamma$.

Summing up, $v$ should be a function from $W\_2^1(\Gamma \setminus \mathbf{V})$ such that $-v'' + q(x)v \in L\_2(\Gamma)$. Hence in order to show that $\mathrm{Dom}\left((L\_q^{\mathbf{S}}(\Gamma))^\*\right) = \mathrm{Dom}\left(L\_q^{\mathbf{S}}(\Gamma)\right)$, it remains to prove that $v$ satisfies the same vertex conditions. Consider now functions $u$ whose support contains just one of the vertices, say $V^1$. Then, integrating by parts twice, we get the formula

$$
\langle \tau\_q u, v \rangle\_{L\_2(\Gamma)} = \langle \partial \vec{u}(V^1), \vec{v}(V^1) \rangle\_{\mathbb{C}^{d^1}} - \langle \vec{u}(V^1), \partial \vec{v}(V^1) \rangle\_{\mathbb{C}^{d^1}} + \langle u, \tau\_q v \rangle\_{L\_2(\Gamma)},
$$

where the last term is a bounded functional with respect to $u$, since $\tau\_q v \in L\_2(\Gamma)$. The functionals

$$
u \mapsto \vec{u}(V^1), \quad u \mapsto \partial \vec{u}(V^1)
$$

are not bounded on $L\_2(\Gamma)$. Hence $\langle -u'' + qu, v\rangle\_{L\_2(\Gamma)}$ determines a bounded linear functional if and only if the boundary terms vanish identically for any $u \in \mathrm{Dom}\left(L\_q^{\mathbf{S}}(\Gamma)\right)$:

$$\langle \partial \vec{u}(V^1), \vec{v}(V^1) \rangle\_{\mathbb{C}^{d^1}} - \langle \vec{u}(V^1), \partial \vec{v}(V^1) \rangle\_{\mathbb{C}^{d^1}} \equiv 0.$$

Following (3.19) and putting $A = i(S^1 - I)$, $B = S^1 + I$, the limiting values $\vec{u}(V^1), \partial\vec{u}(V^1)$ satisfy the vertex conditions described by $S^1$ if and only if they can be represented as

$$\vec{u}(V^1) = \underbrace{\left((S^1)^\* + I\right)}\_{=B^\*} \vec{t}, \quad \partial\vec{u}(V^1) = \underbrace{-i\left((S^1)^\* - I\right)}\_{=A^\*} \vec{t},$$

where the vector $\vec{t} \in \mathbb{C}^{d^1}$ is arbitrary. Therefore we necessarily get

$$\begin{split} 0 &= \left\langle -i\left((S^1)^\* - I\right)\vec{t}, \vec{v}(V^1)\right\rangle\_{\mathbb{C}^{d^1}} - \left\langle \left((S^1)^\* + I\right)\vec{t}, \partial\vec{v}(V^1)\right\rangle\_{\mathbb{C}^{d^1}} \\ &= \left\langle \vec{t},\, i\left(S^1 - I\right)\vec{v}(V^1) - \left(S^1 + I\right)\partial\vec{v}(V^1)\right\rangle\_{\mathbb{C}^{d^1}}. \end{split}$$

Arbitrariness of *t* implies that

$$i\left(S^1 - I\right)\vec{v}(V^1) - \left(S^1 + I\right)\partial\vec{v}(V^1) = 0,$$

*i.e.* that $v$ satisfies the vertex condition (4.8) at $V^1$. Since the same analysis is applicable to any vertex in $\Gamma$, we conclude that $\mathrm{Dom}\left((L\_q^{\mathbf{S}}(\Gamma))^\*\right) = \mathrm{Dom}\left(L\_q^{\mathbf{S}}(\Gamma)\right)$.

Since all contributions from the vertices vanish, integration by parts gives

$$\langle L\_q^{\mathbf{S}}(\Gamma)u, v \rangle\_{L\_2(\Gamma)} = \langle u, \tau\_q v \rangle\_{L\_2(\Gamma)},$$

which implies that the operator $(L\_q^{\mathbf{S}}(\Gamma))^\*$ is given by the same differential expression $\tau\_q$. We conclude that $L\_q^{\mathbf{S}}(\Gamma)$ is a self-adjoint operator in $L\_2(\Gamma)$. ⨅⨆

Self-adjointness of the operators allows us to apply the whole machinery of quantum mechanics and spectral theory, in particular perturbation theory, to study spectral properties of our models.

## **4.2 The Dirichlet Operator and Weyl's Law**

To discuss spectral properties of Schrödinger operators on metric graphs, let us, in addition to the operator $L\_{q,a}^{\mathbf{S}}(\Gamma)$, consider the corresponding Dirichlet operator: the operator defined by the same differential expression, but on the functions satisfying Dirichlet conditions at all vertices.

**Definition 4.3** The **Dirichlet operator** $L\_{q,a}^D(\Gamma)$ is defined by the differential expression $\tau\_{q,a}$ (see (2.17)) on the domain of functions $u$ from the Sobolev space $W\_2^1(\Gamma \setminus \mathbf{V})$ with $\tau\_{q,a}u \in L\_2(E\_n)$, $n = 1, 2, \dots, N$, satisfying the Dirichlet conditions at all endpoints:

$$
u(x\_j) = 0, \quad x\_j \in \mathbf{V}.\tag{4.12}
$$


Every such operator can be written as the orthogonal sum

$$L\_{q,a}^D(\Gamma) = \bigoplus\_{n=1}^N L\_{q,a}^{D}(E\_n),\tag{4.13}$$

where $L\_{q,a}^D(E\_n)$ are the differential operators determined by the expression (2.17) and Dirichlet conditions on the edges $E\_n$. It follows that the Dirichlet operator corresponds to the graph with all edges separated from each other. In other words, the corresponding conditions are not properly connecting (unless of course the metric graph $\Gamma$ is a collection of disjoint intervals). It is clear that the Dirichlet operator is self-adjoint as an orthogonal sum of (self-adjoint) Schrödinger operators on compact and semi-infinite intervals.

To investigate the spectrum of the Dirichlet operator one may consider each operator $L\_{q,a}^D(E\_n)$ separately. The magnetic potential $a$ can be eliminated using the same unitary transformation (4.10):

$$L\_{q,a}^{D}(E\_n) = U\_a^{-1} L\_{q,0}^{D}(E\_n) U\_a. \tag{4.14}$$

Here we used that the unitary transformation not only maps the differential expressions as in (4.11), but also preserves the Dirichlet conditions. Any unitary transformation preserves the spectrum, hence the operators $L\_{q,a}^D(E\_n)$ and $L\_{q,0}^D(E\_n)$ are isospectral, *i.e.* their spectra coincide (including multiplicity and type).

If the edge $E\_n$ is compact, then the spectrum of the Schrödinger operator $L\_q^D(E\_n)$ is purely discrete and consists of an infinite sequence of simple eigenvalues with unique accumulation point $+\infty$. The standard way to prove this fact is to look at the resolvent $(L\_q^D(E\_n) - \lambda)^{-1}$ and show that it is compact.

If the edge $E\_n$ is semi-infinite, then one has to take into account that the potential satisfies the Faddeev condition (4.2), which implies that the spectrum contains the continuous branch $[0,\infty)$ and possibly a finite number of negative eigenvalues [133, 218, 219, 397].

The Dirichlet operator is a finite orthogonal sum of operators on the edges and therefore its spectrum is the union of the spectra of $L\_{q,a}^D(E\_n)$. Hence the spectrum of the Dirichlet operator $L\_{q,a}^D(\Gamma)$ contains the branch of continuous spectrum $[0,\infty)$ with multiplicity equal to the number $N\_i$ of infinite edges, and discrete eigenvalues with unique accumulation point $+\infty$. Of course there is no reason for the eigenvalues to be simple, but their multiplicity cannot exceed the number of edges.

If both compact and non-compact edges are present, then we observe eigenvalues embedded into the continuous spectrum, but these eigenvalues and the continuous spectrum correspond to different operators in the decomposition (4.13).

If compact edges are absent ($N\_c = 0$), then the spectrum above zero is purely continuous. One may have negative eigenvalues, but their number is finite.

It follows that spectral properties of compact and non-compact graphs differ drastically.

*Compact Graphs* Let us prove that the spectrum of the Dirichlet Schrödinger operator satisfies Weyl's asymptotic law, provided semi-infinite edges are absent ($N\_i = 0$).

**Theorem 4.4** *Let $L\_{q,a}^D(\Gamma)$ denote the Dirichlet Schrödinger operator on a compact finite graph $\Gamma$. The spectrum of $L\_{q,a}^D(\Gamma)$ is purely discrete and satisfies Weyl's law*

$$
\lambda\_n(L\_{q,a}^D(\Gamma)) = \left(\frac{\pi}{\mathcal{L}}\right)^2 n^2 + \mathcal{O}(n), \quad n \to \infty,\tag{4.15}
$$

*where $\mathcal{L}$ is the total length given by (2.9).*

*Proof* We are going to prove the theorem assuming that $q$ is uniformly bounded (satisfies condition (2.19)); the theorem also holds for $L\_1$-potentials (under the general assumptions (4.1)). Summable potentials require the use of quadratic forms, to be considered in Chap. 11.

The proof is simplified if, instead of working with asymptotic formulas like (4.15), one introduces the eigenvalue counting function for an interval $\Delta \subset \mathbb{R}$:

$$E\_A(\Delta) = \text{the number of eigenvalues of } A \text{ in the interval } \Delta. \tag{4.16}$$

We shall also use the simplified notation which is useful provided *A* is semibounded:

$$E\_A(\lambda) = \text{the number of eigenvalues of } A \text{ less than or equal to } \lambda. \tag{4.17}$$

We start by looking at the Dirichlet Laplacian $L^D(\Gamma)$. The spectrum of $L^D(E\_n)$ on the edge $E\_n$ of length $\ell\_n$ is easy to calculate:

$$\lambda\_j(L^D(E\_n)) = \left(\frac{\pi}{\ell\_n}\,j\right)^2, \quad j = 1, 2, \dots$$

The corresponding eigenvalue counting function is then given by the following formula for positive values of *λ*:

$$E\_{L^D(E\_n)}(\lambda) = \left[\frac{\ell\_n}{\pi}\sqrt{\lambda}\right],$$

where $[\,\cdot\,]$ denotes the integer part. Of course it is equal to zero for negative $\lambda$. Since the functions on the edges $E\_n$, $n = 1, 2, \dots, N$, are independent as far as the Dirichlet operator is concerned, the counting function for $L^D(\Gamma)$ is equal to the sum of the counting functions for single edges and can easily be calculated:

$$E\_{L^D(\Gamma)}(\lambda) = \begin{cases} 0, & \lambda < 0\\ \sum\_{n=1}^N \left[ \frac{\ell\_n}{\pi} \sqrt{\lambda} \right], & \lambda \ge 0. \end{cases} \tag{4.18}$$

For any *N* positive numbers *an,* the following elementary estimate holds:

$$\sum\_{n=1}^{N} a\_n - N < \left[ \sum\_{n=1}^{N} a\_n \right] - N + 1 \le [a\_1] + [a\_2] + \dots + [a\_N] \le \left[ \sum\_{n=1}^{N} a\_n \right] \le \sum\_{n=1}^{N} a\_n. \tag{4.19}$$

Taking into account that $E\_{L^D(\Gamma)}(\lambda\_n) = n$, we derive that the eigenvalue counting function satisfies the two-sided estimate

$$\frac{\mathcal{L}}{\pi}\sqrt{\lambda} - N < E\_{L^D(\Gamma)}(\lambda) \le \frac{\mathcal{L}}{\pi}\sqrt{\lambda}.\tag{4.20}$$

It follows that the spectrum of the Dirichlet Laplacian satisfies (4.15):

$$
\lambda\_n(L^D(\Gamma)) = \left(\frac{\pi}{\mathcal{L}}\right)^2 n^2 + \mathcal{O}(n), \ n \to \infty. \tag{4.21}
$$
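The counting argument above is easy to probe numerically. The following sketch (with assumed, purely illustrative edge lengths) implements the counting function (4.18) and checks the two-sided estimate (4.20):

```python
import math

# Hypothetical edge lengths for a compact graph with N = 3 edges (illustrative values).
lengths = [1.0, 0.5, math.pi / 4]
L_total = sum(lengths)  # total length of the graph
N = len(lengths)

def counting_function(lam):
    """Eigenvalue counting function (4.18) for the Dirichlet Laplacian:
    each edge of length l contributes floor(l/pi * sqrt(lam)) eigenvalues up to lam."""
    if lam < 0:
        return 0
    return sum(math.floor(l / math.pi * math.sqrt(lam)) for l in lengths)

# Verify the two-sided estimate (4.20) on a sample of spectral parameters.
for lam in [0.5, 7.0, 100.0, 12345.6]:
    E = counting_function(lam)
    weyl = L_total / math.pi * math.sqrt(lam)
    assert weyl - N < E <= weyl
```

The check confirms that the counting function stays within the window of width $N$ around the leading Weyl term, which is exactly what yields the $\mathcal{O}(n)$ remainder in (4.21).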

Let us now discuss the Schrödinger operator $L\_q^D$ assuming (2.19). The operator $L\_q^D$ is a bounded perturbation of $L^D(\Gamma)$ with the norm of the perturbation bounded by the $L\_\infty$-norm of the potential $\|q\|\_\infty$. Hence the following estimate holds for the eigenvalues:

$$|\lambda\_j(L\_q^D(\Gamma)) - \lambda\_j(L^D(\Gamma))| \le \|q\|\_\infty,\tag{4.22}$$

implying that

$$
\lambda\_j(L\_q^D(\Gamma)) = \lambda\_j(L^D(\Gamma)) + \mathcal{O}(1), \ j \to \infty. \tag{4.23}
$$

Taking into account (4.21) and the fact that the magnetic potential can be eliminated on each single interval, we get the asymptotic formula (4.15) for the Dirichlet (Schrödinger) eigenvalues. ⨅⨆

To prove the theorem for $q \in L\_1(\Gamma)$ one may use that the potential is infinitesimally small in the quadratic form sense with respect to the Dirichlet Laplacian, hence the asymptotics (4.15) follows. The theorem holds even for singular (first order distributional) potentials, as the same asymptotics holds on any compact interval [272, 273, 461, 462].

*Non-compact Graphs* Investigating the Dirichlet Schrödinger operator on a graph containing several semi-infinite edges, it is reasonable to represent it as the following orthogonal sum:

$$L\_q^D(\Gamma) = \left(\bigoplus\_{n=1}^{N\_c} L\_q^D(E\_n)\right) \oplus \left(\bigoplus\_{n=N\_c+1}^{N=N\_c+N\_i} L\_q^D(E\_n)\right). \tag{4.24}$$

The spectrum of the first operator, acting on the compact part of $\Gamma$, has already been studied: it is discrete and satisfies Weyl's law with the total length $\mathcal{L}$ substituted with the length of the compact part $\mathcal{L}\_c := \sum\_{n=1}^{N\_c} \ell\_n$.

The second operator is a sum of $N\_i$ Schrödinger operators on semi-axes and therefore its spectrum is given by the branch of absolutely continuous spectrum $[0,\infty)$ of multiplicity $N\_i$ and possibly a finite number of negative eigenvalues.

The spectrum of $L\_q^D(\Gamma)$ is the union of the spectra of these two operators and therefore contains eigenvalues embedded into the continuous spectrum, provided $N\_c \neq 0$.

## **4.3 Spectra of Quantum Graphs**

**Compact Graphs** Let us now discuss the spectra of the Schrödinger operators $L\_{q,a}^{\mathbf{S}}(\Gamma)$ by comparing them with the Dirichlet operators introduced above. In operator theory one would use the following proposition, which relates the spectra of two operators that are close in the resolvent sense.

**Proposition 4.5 (Theorem 3, p. 215 in [90])** *Let $A = A^\*$, $B = B^\*$ satisfy*

$$T := (B - \zeta I)^{-1} - (A - \zeta I)^{-1}, \quad \text{rank } T = \dim \mathcal{R} \,(T) = r < \infty.$$

*If the spectrum of A in the interval Δ is discrete then so is the spectrum of B and* 

$$N(\Delta, A) - r \le N(\Delta, B) \le N(\Delta, A) + r.$$

Here $\mathcal{R}(T)$ denotes the range of the operator $T$. The main assumption of the proposition is that the difference between the resolvents of the two operators has finite rank. Interested readers should consult the book by M.S. Birman and M.Z. Solomjak [90], or any other standard text on the spectral theory of self-adjoint operators. Our immediate goal is to check that the difference between the resolvents of $L\_{q,a}^{\mathbf{S}}(\Gamma)$ and $L\_{q,a}^D(\Gamma)$ satisfies the assumptions of the above proposition. This will allow us to prove Weyl's asymptotic law for finite compact graphs. Note that we are going to give an alternative proof of this fact in Chap. 5 without using abstract operator theory.

**Theorem 4.6** *Let the metric graph be finite and compact. Then the spectrum of the magnetic Schrödinger operator $L\_{q,a}^{\mathbf{S}}(\Gamma)$ is purely discrete, satisfying Weyl's asymptotic law*

$$
\lambda\_n(L\_{q,a}^{\mathbb{S}}(\Gamma)) = \left(\frac{\pi}{\mathcal{L}}\right)^2 n^2 + \mathcal{O}(n), \quad n \to \infty,\tag{4.25}
$$

*and thus having unique accumulation point* +∞*.*

*Proof* In what follows we are going to identify the minimal operator $L\_{q,a}^{\min}$ defined on $C\_0^\infty(\Gamma \setminus \mathbf{V})$ functions with its closure, defined on all functions from $W\_2^1(\Gamma \setminus \mathbf{V})$ with $\tau\_{q,a}u \in L\_2(E\_n)$, $n = 1, 2, \dots, N$, satisfying both Dirichlet and Neumann conditions at all endpoints:

$$
u(x\_j) = 0 = \partial u(x\_j), \quad x\_j \in \mathbf{V}. \tag{4.26}
$$

The operators $L\_{q,a}^{\mathbf{S}}$ and $L\_{q,a}^D$ are two different self-adjoint extensions of this minimal operator $L\_{q,a}^{\min}$, which is symmetric.

Since the operators $L\_{q,a}^{\mathbf{S}}$ and $L\_{q,a}^D$ are given by the same differential expression we have

$$u \in \mathrm{Dom}\left(L\_{q,a}^{\min}(\Gamma)\right) \Rightarrow (L\_{q,a}^{\mathbf{S}} - \lambda)u = (L\_{q,a}^D - \lambda)u.$$

This formula can also be written in the following form:

$$\left.(L\_{q,a}^{\mathbf{S}} - \lambda)^{-1} - (L\_{q,a}^{D} - \lambda)^{-1}\right|\_{\mathcal{R}\left(L\_{q,a}^{\min} - \lambda\right)} = 0.$$

Hence to prove that the difference between the resolvents is a finite rank operator, it is enough to determine the dimension of the orthogonal complement $\mathcal{R}\left(L\_{q,a}^{\min}(\Gamma) - \lambda\right)^\perp$. Assume that $f \in \mathcal{R}\left(L\_{q,a}^{\min}(\Gamma) - \lambda\right)^\perp$. Then

$$\langle (L\_{q,a}^{\min} - \lambda)u, f\rangle = 0$$

for any $u \in \mathrm{Dom}\left(L\_{q,a}^{\min}\right)$. Integrating by parts twice we get

$$
\langle u, (\tau\_{q,a} - \overline{\lambda})f \rangle = 0.
$$

No boundary terms appear because every $u \in \mathrm{Dom}\left(L\_{q,a}^{\min}(\Gamma)\right)$ satisfies (4.26). Taking into account that the domain of $L\_{q,a}^{\min}(\Gamma)$ is dense in $L\_2(\Gamma)$, we conclude that

$$(\tau\_{q,a} - \overline{\lambda})f = 0.\tag{4.27}$$

Note that no conditions are imposed at the endpoints of the intervals, hence the dimension of the subspace $\mathcal{R}\left(L\_{q,a}^{\min}(\Gamma) - \lambda\right)^\perp$ is equal to $2N$, since on each interval (4.27) is a second order differential equation and has two linearly independent solutions.<sup>4</sup>

Proposition 4.5 implies that the spectrum of $L\_{q,a}^{\mathbf{S}}$ is purely discrete, and the numbers of eigenvalues of the two operators in any interval $(-\infty, \lambda]$ differ by at most $2N$. Since the eigenvalues of $L\_{q,a}^D$ satisfy Weyl's law (4.15) (identical to (4.25)), the same holds for the eigenvalues of $L\_{q,a}^{\mathbf{S}}$. ⨅⨆

The obtained result can be seen as one of the first observations to be used in solving inverse problems:

**Observation 4.7** *The total length of the graph can be deduced from the asymptotics of the eigenvalues*

$$\mathcal{L} = \pi \lim\_{n \to \infty} \frac{n}{\sqrt{\lambda\_n(L\_{q,a}^{\mathbf{S}}(\Gamma))}},\tag{4.28}$$

*provided the graph is finite and compact.* 
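Formula (4.28) can be tested numerically. The sketch below uses the Dirichlet Laplacian of a hypothetical three-edge graph, whose eigenvalues on each edge are known explicitly and which obeys the same Weyl law, and recovers the total length from a single high-index eigenvalue:

```python
import math

# Hypothetical compact graph: three edges, with Dirichlet eigenvalues
# (pi*j/l)^2 on each edge, so the full Dirichlet spectrum is explicit.
lengths = [1.0, 0.5, math.pi / 4]
L_total = sum(lengths)

# Eigenvalues of the whole (decoupled) graph, sorted with multiplicity.
eigs = sorted((math.pi * j / l) ** 2 for l in lengths for j in range(1, 20001))
n = 10000
lam_n = eigs[n - 1]

# Formula (4.28): recover the total length from the eigenvalue asymptotics.
L_estimate = math.pi * n / math.sqrt(lam_n)
assert abs(L_estimate - L_total) < 0.005
```

The estimate improves as $n \to \infty$: by the two-sided bound (4.20) the error is of order $N\pi/\sqrt{\lambda\_n}$.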

Note that we used the minimal operator $L\_{q,a}^{\min}$ to estimate the dimension of the subspace $\mathcal{R}\left(L\_{q,a}^{\min}(\Gamma) - \lambda\right)^\perp$. Whilst the inclusion

$$\mathrm{Dom}\left(L\_{q,a}^{\mathbf{S}}(\Gamma)\right) \cap \mathrm{Dom}\left(L\_{q,a}^{D}(\Gamma)\right) \supset \mathrm{Dom}\left(L\_{q,a}^{\min}(\Gamma)\right)$$

always holds, it might happen that the intersection between the domains of $L\_{q,a}^{\mathbf{S}}$ and $L\_{q,a}^D$ is much bigger than the domain of the minimal operator. For example, for the standard and Dirichlet operators

$$\text{Dom}\left(L\_{q,a}^{\text{st}}(\Gamma)\right) \cap \text{Dom}\left(L\_{q,a}^{D}(\Gamma)\right)$$

consists of all functions satisfying both the Dirichlet and standard vertex conditions:

$$u(x\_j) = 0, \; j = 1, 2, \dots, 2N \quad \text{and} \quad \sum\_{x\_j \in V^m} \partial u(x\_j) = 0, \; m = 1, 2, \dots, M.$$

Hence the dimension of $\mathcal{R}\left(L\_{q,a}^{\min}(\Gamma) - \lambda\right)^\perp$ cannot be larger than $M$ ($\leq 2N$) in this case.

**Non-compact Graphs** We are not going to discuss spectral properties of non-compact graphs in depth in this book, but for the sake of completeness we prove that in the presence of semi-infinite edges the operator has branches of continuous spectrum. To prove this result we shall need the following theorem from [90].

<sup>4</sup> What we have done can be seen as follows: we have calculated the deficiency indices of the operator $L\_{q,a}^{\min}(\Gamma)$. Equation (4.27) can also be written in the operator form $(L\_{q,a}^{\max} - \overline{\lambda})f = 0$, since $(L\_{q,a}^{\min}(\Gamma))^\* = L\_{q,a}^{\max}(\Gamma)$.

**Proposition 4.8 (Theorem 5, p. 216 in [90])** *Under the hypotheses of Proposition 4.5* 

$$
\Sigma\_c(B) = \Sigma\_c(A).
$$

*(Here $\Sigma\_c$ denotes the continuous spectrum of an operator.)*

We have already proved that the spectrum of $L\_{q,a}^D(\Gamma)$ contains the branch $[0,\infty)$ of continuous spectrum of multiplicity $N\_i$. Modifying the proof of Theorem 4.6 we see that the difference between the resolvents $(L\_{q,a}^{\mathbf{S}} - \lambda)^{-1} - (L\_{q,a}^D - \lambda)^{-1}$ has rank at most $D = 2N\_c + N\_i$. One needs to take into account that equation (4.27) has just one square integrable solution in the case of semi-infinite intervals.

**Theorem 4.9** *The spectrum of the magnetic Schrödinger operator $L\_{q,a}^{\mathbf{S}}(\Gamma)$ on a finite metric graph with $N\_i$ semi-infinite edges contains a branch of continuous spectrum $[0,\infty)$ with multiplicity $N\_i$. The negative spectrum consists of a finite number of eigenvalues.*

Note that this theorem does not say anything about the positive discrete spectrum. Different scenarios are possible: the positive discrete spectrum may be empty, or embedded eigenvalues may occur. The eigenfunctions corresponding to embedded eigenvalues are necessarily equal to zero on all semi-infinite edges.

## **4.4 Laplacian Ground State**

Let us study the lowest eigenvalue and the corresponding eigenfunction of the standard Laplacian $L^{\mathrm{st}}(\Gamma)$. This operator is uniquely determined by the metric graph $\Gamma$, and therefore its spectrum is sometimes referred to as the spectrum of the graph $\Gamma$. We have already seen that the constant function $\psi\_1(x) \equiv 1$ is an eigenfunction corresponding to the eigenvalue $\lambda\_1 = 0$:

• $\psi\_1$ satisfies the eigenfunction equation

$$-\psi\_{1}^{\prime\prime}(x) = 0;\tag{4.29}$$

• $\psi\_1$ satisfies the standard vertex conditions (2.27).

If the graph $\Gamma$ is connected, then the multiplicity of the ground state (as the lowest eigenvalue and the corresponding eigenfunction are often called) is equal to one. Otherwise the multiplicity is equal to the number of connected components in $\Gamma$.

**Lemma 4.10** *Let $\Gamma$ be a compact finite graph. The standard Laplacian is a non-negative operator with the lowest eigenvalue $\lambda\_1 = 0$ of multiplicity equal to the number $\beta\_0$ of connected components in $\Gamma$.*

*Proof* Suppose first that the graph $\Gamma$ is connected. Let us calculate the quadratic form of $L^{\mathrm{st}}(\Gamma)$:

$$\langle L^{\mathrm{st}}u, u\rangle = \sum\_{n=1}^{N} \int\_{E\_n} |u'(x)|^2\, dx \ge 0.$$

Hence the operator is semibounded, more precisely non-negative. It follows that the spectrum of $L^{\mathrm{st}}$ contains only non-negative eigenvalues. The lowest possible such number is $\lambda\_1 = 0$, and we have seen that it is an eigenvalue with the eigenfunction $\psi\_1 \equiv 1$. It remains to prove that this eigenfunction is unique.

Any eigenfunction corresponding to $\lambda\_1 = 0$ should minimise the quadratic form, hence it is equal to a constant on every edge. The continuity condition in (2.27) implies that the eigenfunction is equal to the same constant on the whole graph.

Consider now a graph $\Gamma$ consisting of $\beta\_0$ connected components. The Laplace operator on $\Gamma$ is equal to the orthogonal sum of the Laplace operators defined on the connected components. Each of these operators has $\lambda\_1 = 0$ as an eigenvalue of multiplicity $1$. Hence the ground state of the Laplacian on $\Gamma$ has multiplicity $\beta\_0$. ⨅⨆

An alternative proof can be given by looking at the eigenfunction equation (4.29) directly. Solutions to this differential equation are linear functions on every edge. Standard vertex conditions imply that the function is continuous on the whole metric graph. Its maximum is attained at one of the vertices, say $V^1$, since the function is linear on the edges. All normal derivatives at $V^1$ are non-positive, since it is a maximum. But their sum is zero (the Kirchhoff condition in (2.27)), hence each of the normal derivatives is zero. This implies that the function is constant on the edges joined at the vertex where the maximum is attained. It follows that the maximum is also attained at any vertex directly connected to $V^1$ by an edge. The same reasoning may be applied to this vertex, implying that the function is constant on all edges emanating from it. Continuing this procedure, we prove that $\psi\_1$ is a constant function on the whole $\Gamma$, since it is connected. Hence the ground state is a constant function.
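A discrete analogue illustrates the same principle. For the combinatorial Laplacian $L = D - A$ of a discrete graph (not the metric-graph operator itself, but governed by the same mechanism), the multiplicity of the eigenvalue $0$ counts the connected components. A minimal sketch:

```python
import numpy as np

# A graph with two connected components: a triangle and a single edge.
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Combinatorial Laplacian L = D - A; its kernel is spanned by the indicator
# vectors of the connected components.
L = np.diag(A.sum(axis=1)) - A
eigs = np.linalg.eigvalsh(L)
zero_multiplicity = int(np.sum(np.abs(eigs) < 1e-10))
assert zero_multiplicity == 2  # two connected components
```

Merging the two components by adding any edge between them reduces the kernel dimension to one, mirroring the connectedness argument in the proof above.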

The obtained result can also be reformulated in a form suitable for inverse problems:

**Observation 4.11** *The multiplicity of the eigenvalue $\lambda\_1 = 0$ of the standard Laplacian $L^{\mathrm{st}}(\Gamma)$ determines the number of connected components in $\Gamma$.*

# **4.5 Bonus Section: Positivity of the Ground State for Quantum Graphs**

The celebrated Courant nodal domain theorem [148, 149, 431] implies that the ground state eigenfunction for a Schrödinger operator on an interval can be chosen positive and the corresponding eigenvalue is simple; moreover, if the boundary conditions are not Dirichlet, then the ground state has no zeroes. Our aim is to examine to what extent a similar theorem holds for operators on metric graphs.

## *4.5.1 The Case of Standard Vertex Conditions*

In this subsection we consider Schrödinger operators on metric graphs with standard vertex conditions.

**Theorem 4.12** *Let $L\_q^{\mathrm{st}} = -\frac{d^2}{dx^2} + q(x)$, $q(x) \in \mathbb{R}$, $q \in L\_1(\Gamma)$, be a standard Schrödinger operator on a finite compact connected metric graph $\Gamma$. The domain of the operator is given by all functions $u$ from the Sobolev space $W\_2^1(\Gamma \setminus \mathbf{V})$ (here $\mathbf{V}$ denotes the set of all vertices in $\Gamma$) such that*

$$-u'' + qu \in L\_2(\Gamma) \tag{4.30}$$

*and satisfying standard vertex conditions at the vertices:*

$$\begin{cases} x\_i, x\_j \in V^m \Rightarrow u(x\_i) = u(x\_j) \quad \text{(continuity)}, \\ \sum\_{x\_j \in V^m} \partial u(x\_j) = 0 \quad \text{(Kirchhoff)}. \end{cases} \tag{4.31}$$

*Then the ground state eigenfunction is unique and may be chosen strictly positive.* 

**Note** The original proof by Å. Pleijel [431] of the Courant nodal domain theorem can be transferred to standard Schrödinger operators on graphs without many modifications. That proof would imply that the ground state eigenfunction has a single nodal domain. But this property cannot exclude the possibility that the function has zeroes: not every zero leads to new nodal domains as in the case of one interval. Consider for example a non-negative function on a ring: if the function has just one zero, then there is just one nodal domain. It is necessary to show that the ground state eigenfunction cannot have zeroes without changing sign, both inside the edges and at the vertices (see the second part of the proof below).

*Proof* Let $u$ be a ground state eigenfunction; then the complex conjugate $\overline{u}$ is also a ground state, since both the differential equation

$$-u'' + q(x)u = \lambda\_1 u,\ \lambda\_1 \in \mathbb{R},\tag{4.32}$$

and the vertex conditions are invariant under complex conjugation. Hence the ground state(s) (as well as all other eigenfunctions) can always be chosen real valued.

The quadratic form of the operator $L\_q^{\mathrm{st}}$ is given by

$$Q\_{L\_q^{\mathrm{st}}}(u,u) = \langle u, L\_q^{\mathrm{st}} u \rangle = \int\_{\Gamma} |u'(x)|^2\, dx + \int\_{\Gamma} q(x)|u(x)|^2\, dx,\tag{4.33}$$

and the lowest eigenvalue can be obtained by minimising the Rayleigh quotient (see also Proposition 4.19)

$$\lambda\_1 = \min\_{\mu} \frac{\int\_{\Gamma} |u'(\mathbf{x})|^2 d\mathbf{x} + \int\_{\Gamma} q(\mathbf{x}) |u(\mathbf{x})|^2 d\mathbf{x}}{\int\_{\Gamma} |u(\mathbf{x})|^2 d\mathbf{x}}. \tag{4.34}$$

The domain of the quadratic form is given by all functions from $W\_2^1(\Gamma \setminus \mathbf{V})$ which are in addition continuous at the vertices. One may extend the domain of the quadratic form by allowing functions which are not necessarily from $W\_2^1$ on the edges, but are piece-wise $W\_2^1$ and continuous. This allows additional dummy vertices of degree two on the edges, which of course can be removed if the vertex conditions are standard. We are going to call such functions *admissible*.

Any ground state is an admissible function minimising the Rayleigh quotient. If a function *u* is real valued and admissible, then |*u*| is also admissible with the same Rayleigh quotient. Hence the minimiser of (4.34) can be chosen not only real, but even non-negative.

We shall now prove that if $\psi\_1$ is a non-negative minimiser for (4.34), then it is never equal to zero. This implies that $\psi\_1$ may be chosen strictly positive. We need to exclude that $\psi\_1$ has zeroes on the edges or at the vertices.

If $\psi\_1$ is a minimiser of the Rayleigh quotient, then it is an eigenfunction of the corresponding Schrödinger equation, *i.e.* it satisfies the differential equation on the edges as well as the vertex conditions.

The fact that the non-negative $\psi\_1$ satisfies the differential equation (4.32) and the standard vertex conditions allows us to prove that it is never equal to zero. Assume first that $\psi\_1$ is equal to zero at a certain point $x\_0$ inside an edge $E\_n$. The function $\psi\_1$ is a minimiser for (4.34) and therefore satisfies the second order differential equation (4.32) on the edge. The function $\psi\_1$ is continuously differentiable, in particular at $x = x\_0$, and its derivative there should be equal to zero, since the function is non-negative and $\psi\_1(x\_0) = 0$. It follows that at this particular point the function $\psi\_1$ satisfies zero Cauchy data

$$\begin{cases} \psi\_1(\mathbf{x}\_0) = 0\\ \psi\_1'(\mathbf{x}\_0) = 0 \end{cases}$$

and therefore is identically equal to zero on the whole edge $E\_n$, being a solution to a second order ordinary differential equation. This implies that $\psi\_1$ should be equal to zero at a vertex, and we come to the second possibility we need to consider.

Assume now that $\psi\_1$ is equal to zero at a certain vertex $V^m$. Since $\psi\_1$ is a minimiser for (4.34), it satisfies the standard vertex conditions at this vertex. The function $\psi\_1$ is non-negative and is equal to zero at the vertex. It follows that all normal derivatives are non-negative, but their sum is equal to zero; hence all normal derivatives are actually equal to zero. We see that, as before, $\psi\_1$ satisfies a second order differential equation with zero Cauchy data on every edge incident to $V^m$. It follows that $\psi\_1$ is zero not only at this particular vertex $V^m$ but at all neighbouring vertices as well. Repeating the argument we conclude that $\psi\_1$ is identically equal to zero on the whole $\Gamma$ (which is assumed to be connected) and therefore is not an eigenfunction.

It remains to prove that the lowest eigenvalue is simple. Assume the contrary: there exist two orthogonal eigenfunctions $\psi\_1$ and $\psi\_2$. One of these eigenfunctions can be chosen positive, say $\psi\_1$; then the other one necessarily has zeroes, since it is continuous and, being orthogonal to $\psi\_1$, attains both positive and negative values. Every such function is identically equal to zero, as we have already proven. Hence the lowest eigenvalue is in fact simple. ⨅⨆

In a similar way the following corollary can be proven:

**Corollary 4.13** *Let $L\_q = -\frac{d^2}{dx^2} + q(x)$, $q(x) \in \mathbb{R}$, $q \in L\_1(\Gamma)$, be a Schrödinger operator on a finite compact connected metric graph $\Gamma$. The domain of the operator is given by all functions from the Sobolev space $W\_2^1(\Gamma \setminus \mathbf{V})$ satisfying (4.30) and delta type vertex conditions at the vertices:*

$$\begin{cases} \text{the functions are continuous at } V^m, \\ \sum\_{x\_j \in V^m} \partial u(x\_j) = \alpha\_m u(V^m), \end{cases} \quad \alpha\_m \in \mathbb{R}. \tag{4.35}$$

*Then the ground state eigenfunction may be chosen strictly positive. Moreover, the corresponding eigenvalue is simple.* 

To prove the corollary one needs to take into account that the domain of the quadratic form is again invariant under taking the complex conjugate and the absolute value. Moreover if *ψ*<sup>1</sup> is equal to zero at a vertex, then it satisfies standard vertex conditions there.

## *4.5.2 A Counterexample*

Our goal here is to provide an explicit example showing that it might be impossible to choose the ground state eigenfunction positive. Consider the Laplacian on the cycle graph $\Gamma_{(2.3)}$ formed by two edges $E_1 = [x_1, x_2]$, $E_2 = [x_3, x_4]$ with signing

#### **Fig. 4.1** Graph $\Gamma_{(2.3)}$

conditions (3.43) introduced at the two vertices $V^1 = \{x_1, x_4\}$, $V^2 = \{x_2, x_3\}$ (see Fig. 4.1)

$$\begin{cases} \boldsymbol{u}(\mathbf{x}\_{1}) = -\boldsymbol{u}(\mathbf{x}\_{4}) \\ \partial \boldsymbol{u}(\mathbf{x}\_{1}) = \partial \boldsymbol{u}(\mathbf{x}\_{4}) \end{cases} \text{ and } \begin{cases} \boldsymbol{u}(\mathbf{x}\_{2}) = -\boldsymbol{u}(\mathbf{x}\_{3}) \\ \partial \boldsymbol{u}(\mathbf{x}\_{2}) = \partial \boldsymbol{u}(\mathbf{x}\_{3}) \end{cases} \tag{4.36}$$

The lowest eigenvalue $\lambda_1 = 0$ is simple, but the corresponding eigenfunction is not sign definite:

$$\psi\_1(x) = \begin{cases} 1, & x \in E\_1, \\ -1, & x \in E\_2. \end{cases}$$

The main reason the ground state fails to be positive in this example is that the domain of the quadratic form is not invariant under taking the absolute value (only functions equal to zero at the vertices satisfy $|u(x_1)| = -|u(x_4)|$, $|u(x_2)| = -|u(x_3)|$), despite the fact that the ground state eigenfunction can be chosen real valued.
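A discrete analogue of this counterexample can be checked numerically. The sketch below (not part of the original text) builds the signed Laplacian of a 4-cycle whose two negative edges mimic the signing conditions (4.36): the lowest eigenvalue is zero, yet the corresponding eigenvector changes sign.

```python
import numpy as np

# Discrete analogue of the signed cycle: vertices 0-1-2-3-0.
# Edges (0,1) and (2,3) carry sign +1, edges (1,2) and (3,0) sign -1,
# mimicking the conditions u(x_1) = -u(x_4), u(x_2) = -u(x_3).
signs = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0, (3, 0): -1.0}

L = np.zeros((4, 4))
for (i, j), s in signs.items():
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= s
    L[j, i] -= s

eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in increasing order
ground = eigvecs[:, 0]                 # ground state eigenvector

print(eigvals)   # lowest eigenvalue is 0
print(ground)    # proportional to (1, 1, -1, -1): not sign definite
```

The graph is balanced (an even number of negative edges on the cycle), so the spectrum coincides with that of the ordinary cycle, but the zero eigenvector is forced to change sign, just as $\psi_1$ above.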

Our goal in the rest of this section is to characterise vertex conditions that guarantee that the ground state is unique and can be chosen positive. We do not aim to describe all metric graphs with positive ground states.

## *4.5.3 Invariance of the Quadratic Form*

Our proof of Theorem 4.12 for standard vertex conditions was based on the fact that the eigenfunctions are solutions to the second order differential equation on the edges and on the following two properties of the quadratic form:

- the domain of the quadratic form is invariant under taking the absolute value;
- the value of the quadratic form does not increase under taking the absolute value.
One may put together these two conditions by requiring that the following inequality holds [337]

$$\mathcal{Q}(|u|,|u|) \le \mathcal{Q}(u,u),\tag{4.37}$$

where the value of the quadratic form is set to +∞ if the argument does not belong to its domain. Therefore this inequality implies in particular that the domain of the quadratic form is invariant under taking the absolute value.

We shall need a few definitions: two come from [442] (positive and strongly positive functions), and one (strictly positive) is new; we need it to formulate a stronger version of the theorem.

**Definition 4.14** A function $f$ is called **positive** if it is non-negative: $f(x) \ge 0$. A function $f \in L_2(\Gamma)$ is called **strongly positive** if $f(x) > 0$ holds almost everywhere on $\Gamma$. Finally, a function $f$ is called **strictly positive** if $f(x) > 0$ holds everywhere on $\Gamma$ except at those vertices where Dirichlet conditions are assumed.

It is clear that every strictly positive function is strongly positive, and every strongly positive one is positive.

For arbitrary vertex conditions the quadratic form of the Schrödinger operator *L***S** *<sup>q</sup> (-)* is given by (3.55)

$$\mathcal{Q}_{L_q^{\mathbf{S}}(\Gamma)}(u,u) = \int_{\Gamma} |u'(x)|^2\, dx + \int_{\Gamma} q(x) |u(x)|^2\, dx + \sum_{m=1}^M \langle A^m \vec{u}(V^m), \vec{u}(V^m) \rangle_{\mathbb{C}^{d_m}}. \tag{4.38}$$

The domain of the quadratic form coincides with $W_2^1(\Gamma \setminus \mathbf{V})$ subject to generalised Dirichlet conditions (3.56)

$$P_{-1}^m \vec{u}(V^m) = 0, \quad m = 1, 2, \dots, M. \tag{4.39}$$

We are going to show that property (4.37) holds only if the vertex conditions are generalised delta couplings (see Sect. 3.7). It is straightforward to see that generalised delta couplings imply (4.37), provided the weights are positive and the matrix $\mathbf{A}$ is a negative Minkowski M-matrix; this will be seen from the proof. A real Hermitian matrix $\mathcal{M}$ is called a **Minkowski matrix** if all its non-diagonal elements are non-negative:

$$\mathcal{M}_{ij} \ge 0, \quad i \ne j. \tag{4.40}$$

**Theorem 4.15** *Let $L_q^{\mathbf{S}}(\Gamma)$ be a Schrödinger operator on a connected finite compact metric graph $\Gamma$ with arbitrary properly connecting vertex conditions. We assume that under taking the absolute value the domain of the quadratic form does not change and the value of the quadratic form does not increase:*

$$\mathcal{Q}_{L_q^{\mathbf{S}}(\Gamma)}(|u|, |u|) \le \mathcal{Q}_{L_q^{\mathbf{S}}(\Gamma)}(u, u). \tag{4.41}$$

*Then the vertex conditions are either Dirichlet conditions or generalised delta-couplings as described in (3.44) with all weights non-negative and the matrices $-\mathbf{A}^m$ being Minkowski M-matrices.*

*Proof* We divide the proof into two steps.

**Step 1** *The domain of the quadratic form is invariant under taking the absolute value if and only if each subspace $(I - P_{-1}^m)\mathbb{C}^{d_m}$, $m = 1, 2, \dots, M$, is trivial or is generated by several vectors $\vec{a}_j^m$ with non-negative coordinates and disjoint supports.*

Recall that the projector $P_{-1}^m$ is the eigenprojector corresponding to the eigenvalue $-1$ (if any) of the unitary matrix $S^m$ parameterising the vertex conditions at the vertex $V^m$. The domain of the quadratic form is given by the requirement that each vector $\vec{u}(V^m)$ of boundary values belongs to the linear subspace $(I - P_{-1}^m)\mathbb{C}^{d_m}$. The vertices can be treated separately.

Pick any particular vertex $V$ and assume that it joins together the endpoints $x_1, x_2, \dots, x_d$. Consider any basis $\vec{a}_j$, $j = 1, 2, \dots, n$, $n \le d$, generating the subspace. Without loss of generality we may assume that the vectors satisfy

$$\vec{a}_j(x_i) = \delta_{ij}, \quad i, j = 1, 2, \dots, n. \tag{4.42}$$

This can be achieved by permuting the coordinates in C*<sup>d</sup>* and Gaussian elimination. Note that we do not require that the basis is orthogonal.

Taking the absolute value we map any vector $\vec{u} \in \mathbb{C}^d$ to a vector from $\mathbb{R}_+^d$ according to the following rule:

$$|\vec{u}(V)| = |(u(x_1), u(x_2), \dots, u(x_d))| = (|u(x_1)|, |u(x_2)|, \dots, |u(x_d)|). \tag{4.43}$$

Consider the two-dimensional subspace generated by the vectors *a*<sup>1</sup> and *a*2*.* Every vector in this subspace is uniquely determined by its first two coordinates:

$$
\vec{u} = \alpha \vec{a}_1 + \beta \vec{a}_2, \quad \alpha, \beta \in \mathbb{C} \implies \alpha = u(x_1), \ \beta = u(x_2).
$$

In particular, the coordinate number *j* is given by

$$
u(x_j) = \alpha\, \vec{a}_1(x_j) + \beta\, \vec{a}_2(x_j). \tag{4.44}
$$

Every vector in the subspace is uniquely determined by its first *n* coordinates, hence the vector |*u*| belongs to the same subspace if and only if

$$|\vec{u}| = |\alpha|\, |\vec{a}_1| + |\beta|\, |\vec{a}_2| \tag{4.45}$$

holds. Comparing the *j* -th coordinates calculated using (4.44) and (4.45), we obtain:

$$|\alpha \vec{a}_1(x_j) + \beta \vec{a}_2(x_j)| = |\alpha|\, |\vec{a}_1(x_j)| + |\beta|\, |\vec{a}_2(x_j)|. \tag{4.46}$$

The latter equality holds for any $\alpha$ and $\beta$ if and only if at least one of the coordinates $\vec{a}_1(x_j)$ and $\vec{a}_2(x_j)$ is equal to zero; in other words, only if the vectors $\vec{a}_1$ and $\vec{a}_2$ have disjoint supports, since $j \ne 1, 2$ is arbitrary.

Comparing the first two coordinates we conclude that

$$|\vec{u}| = |\alpha|\vec{a}\_1 + |\beta|\vec{a}\_2.$$

The coordinates of this vector are non-negative only if the coordinates of the basis vectors *a<sup>j</sup> , j* = 1*,* 2*,* are non-negative.

The same analysis applies to any pair of vectors from the basis, hence we conclude that all *a<sup>j</sup>* have disjoint supports and non-negative entries.
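The disjoint-support condition can be probed numerically. The sketch below (not part of the proof; the vectors and coefficients are chosen arbitrarily) verifies that taking the coordinatewise absolute value keeps a combination $\alpha \vec{a}_1 + \beta \vec{a}_2$ inside the span exactly when the supports are disjoint.

```python
import numpy as np

def abs_stays_in_span(a1, a2, alpha, beta):
    """Check |alpha*a1 + beta*a2| == |alpha|*a1 + |beta|*a2 coordinatewise."""
    u = alpha * a1 + beta * a2
    return bool(np.allclose(np.abs(u), np.abs(alpha) * a1 + np.abs(beta) * a2))

alpha, beta = 1.0 + 1.0j, 2.0 - 0.5j   # generic complex coefficients

# Disjoint supports and non-negative coordinates: |u| stays in the span.
a1 = np.array([1.0, 0.0, 0.5, 0.0])
a2 = np.array([0.0, 1.0, 0.0, 2.0])
print(abs_stays_in_span(a1, a2, alpha, beta))   # True

# Overlapping supports: the triangle inequality in coordinate 2 is strict,
# so |u| leaves the span for generic alpha, beta.
b1 = np.array([1.0, 0.0, 0.5, 0.0])
b2 = np.array([0.0, 1.0, 0.3, 0.0])
print(abs_stays_in_span(b1, b2, alpha, beta))   # False
```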

Thus we have proven that the conditions at each vertex should be the generalised delta-couplings, since without loss of generality the vectors *a<sup>j</sup>* can be normalised.

With each vertex $V^m$ we associate vectors $\vec{a}_j^m$, $j = 1, 2, \dots, n_m \le d_m$, with non-negative coordinates and disjoint supports and a Hermitian $n_m \times n_m$ matrix $\mathbf{A}^m$ connecting the boundary values at the vertex $V^m$ via the second equation in (3.44)

$$\left\langle \vec{a}_j^m, \partial \vec{u}(V^m) \right\rangle = \sum_{i=1}^{n_m} A_{ji}^m \left\langle \vec{a}_i^m, \vec{u}(V^m) \right\rangle.$$

**Step 2** *The value of the quadratic form does not increase under taking the absolute value if and only if the Hermitian matrix $-\mathbf{A}$, where $\mathbf{A} = \mathbf{A}^1 \oplus \mathbf{A}^2 \oplus \cdots \oplus \mathbf{A}^M$, is of Minkowski class (all non-diagonal entries are non-negative).*

Of course, we assume here that the domain of the quadratic form does not change while taking the absolute value. We need to satisfy the inequality (4.41) which can be written as follows using (4.38)

$$\begin{split} \int_{\Gamma} \left( \left( |u|'(x) \right)^2 + q(x) |u(x)|^2 \right) dx &+ \sum_{m=1}^M \langle |\vec{\mathbf{u}}^m|, \mathbf{A}^m |\vec{\mathbf{u}}^m| \rangle_{\mathbb{C}^{n_m}} \\ &\le \int_{\Gamma} \left( |u'(x)|^2 + q(x) |u(x)|^2 \right) dx + \sum_{m=1}^M \langle \vec{\mathbf{u}}^m, \mathbf{A}^m \vec{\mathbf{u}}^m \rangle_{\mathbb{C}^{n_m}}. \end{split} \tag{4.47}$$

The integral terms containing the potential *q* can be cancelled. Therefore sufficient conditions for the inequality to hold are that

$$\begin{aligned} \int_{\Gamma} \left( |u|'(x) \right)^2 dx &\le \int_{\Gamma} |u'(x)|^2\, dx, \\ \sum_{m=1}^M \langle |\vec{\mathbf{u}}^m|, \mathbf{A}^m |\vec{\mathbf{u}}^m| \rangle_{\mathbb{C}^{n_m}} &\le \sum_{m=1}^M \langle \vec{\mathbf{u}}^m, \mathbf{A}^m \vec{\mathbf{u}}^m \rangle_{\mathbb{C}^{n_m}}. \end{aligned} \tag{4.48}$$

The first inequality is trivially satisfied, since we have

$$\left( |u|'(x) \right)^2 \le \left| u'(x) \right|^2 \tag{4.49}$$

pointwise, but this does not directly imply that the second inequality should also be satisfied. On the other hand, point values of functions from $W_2^1$ are not controlled by the $L_2$-norms of their first derivatives.<sup>5</sup> We can see this explicitly by constructing a piece-wise $W_2^1[0, \ell]$ function with arbitrarily small $L_2$-norm of the first derivative and arbitrary values at the endpoints. Assume that we are looking for a function on $[0, \ell]$ satisfying the boundary conditions

$$u(0) = r_1 e^{i\theta_1}, \quad u(\ell) = r_2 e^{i\theta_2}, \qquad r_1, r_2 \in \mathbb{R}_+, \ \theta_1, \theta_2 \in \mathbb{R}.$$

Consider for example the following function:

$$u(x) = \begin{cases} r_1 e^{i\theta_1} \left(\dfrac{\epsilon}{r_1}\right)^{3x/\ell}, & 0 \le x \le \ell/3, \\[2mm] \epsilon\, e^{i \left( \frac{3(\theta_2 - \theta_1)}{\ell} x + (2\theta_1 - \theta_2) \right)}, & \ell/3 \le x \le 2\ell/3, \\[2mm] r_2 e^{i\theta_2} \dfrac{\epsilon^3}{r_2^3} \left(\dfrac{r_2}{\epsilon}\right)^{3x/\ell}, & 2\ell/3 \le x \le \ell, \end{cases}$$

with a certain $0 < \epsilon \ll 1$. The function is continuous, piece-wise smooth, and small in the middle of the interval:

$$|u(x)| = \epsilon, \quad \ell/3 \le x \le 2\ell/3.$$

Then the difference between the Dirichlet integrals for $u$ and $|u|$ is proportional to $\epsilon^2$, since the derivatives satisfy $|u'(x)| = |u(x)|'$ on $[0, \ell/3] \cup [2\ell/3, \ell]$:

$$\begin{split} \|u'\|_{L_2}^2 - \| |u|'\|_{L_2}^2 &= \int_0^\ell \left( |u'(x)|^2 - \left( |u(x)|' \right)^2 \right) dx = \int_{\ell/3}^{2\ell/3} |u'(x)|^2\, dx \\ &= \int_{\ell/3}^{2\ell/3} \epsilon^2 \frac{9(\theta_2 - \theta_1)^2}{\ell^2}\, dx = \frac{3(\theta_2 - \theta_1)^2}{\ell}\, \epsilon^2. \end{split}$$
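The computation above can be checked numerically. The sketch below (with arbitrarily chosen sample values of $r_1, r_2, \theta_1, \theta_2, \ell, \epsilon$, none of which appear in the original text) discretises $u$, computes the gap between the two Dirichlet integrals, and compares it with $3(\theta_2 - \theta_1)^2 \epsilon^2 / \ell$.

```python
import numpy as np

# Arbitrary sample parameters (assumptions for this sketch).
r1, r2 = 2.0, 3.0
th1, th2 = 0.3, 1.1
ell, eps = 1.0, 1e-3

x = np.linspace(0.0, ell, 200001)
u = np.empty_like(x, dtype=complex)

m1 = x <= ell / 3
m2 = (x > ell / 3) & (x <= 2 * ell / 3)
m3 = x > 2 * ell / 3
u[m1] = r1 * np.exp(1j * th1) * (eps / r1) ** (3 * x[m1] / ell)
u[m2] = eps * np.exp(1j * (3 * (th2 - th1) / ell * x[m2] + 2 * th1 - th2))
u[m3] = r2 * np.exp(1j * th2) * (eps / r2) ** 3 * (r2 / eps) ** (3 * x[m3] / ell)

du = np.gradient(u, x)            # derivative of u (central differences)
dau = np.gradient(np.abs(u), x)   # derivative of |u|

# Integrate the pointwise difference to avoid cancellation of large terms.
integrand = np.abs(du) ** 2 - dau ** 2
gap = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))

predicted = 3 * (th2 - th1) ** 2 / ell * eps ** 2
print(gap, predicted)   # both approximately 1.92e-06
```

The outer thirds contribute nothing to the integrand, since there $u$ has constant phase and $|u'| = |u|'$ pointwise; the whole gap comes from the middle third, where only the phase varies.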

It follows that inequality (4.47) holds for any piecewise $W_2^1$-function if and only if the second inequality in (4.48) holds for arbitrary limiting values (in addition to the first inequality, which holds independently of the vertex conditions).

Different vertices can be treated independently, hence the following inequality should hold for each vertex

$$
\langle |\vec{\mathbf{u}}^m|, \mathbf{A}^m |\vec{\mathbf{u}}^m| \rangle \le \langle \vec{\mathbf{u}}^m, \mathbf{A}^m \vec{\mathbf{u}}^m \rangle, \quad m = 1, 2, \dots, M. \tag{4.50}
$$

<sup>5</sup> This can be seen for example from the Sobolev inequality (11.11) below.

Our goal is to prove that the matrices $\mathbf{A}^m$ should have non-positive entries outside the diagonal. Consider first the case where $\vec{\mathbf{u}}^m$ has just two non-zero coordinates (in other words, $\vec{u}(V^m)$ belongs to the linear span of, say, $\vec{a}_1^m$ and $\vec{a}_2^m$). Calculating the quadratic forms of $\vec{\mathbf{u}}^m = \alpha \vec{a}_1^m + \beta \vec{a}_2^m$ and $|\vec{\mathbf{u}}^m|$ and substituting into (4.50), we get:

$$\begin{split} \langle |\vec{\mathbf{u}}^m|, \mathbf{A}^m |\vec{\mathbf{u}}^m| \rangle &= |\alpha|^2 (\mathbf{A}^m)_{11} + |\alpha|\, |\beta| \left( (\mathbf{A}^m)_{21} + (\mathbf{A}^m)_{12} \right) + |\beta|^2 (\mathbf{A}^m)_{22} \\ &\le |\alpha|^2 (\mathbf{A}^m)_{11} + \alpha \bar{\beta}\, (\mathbf{A}^m)_{21} + \bar{\alpha} \beta\, (\mathbf{A}^m)_{12} + |\beta|^2 (\mathbf{A}^m)_{22} = \langle \vec{\mathbf{u}}^m, \mathbf{A}^m \vec{\mathbf{u}}^m \rangle, \end{split}$$

implying that

$$|\alpha|\, |\beta|\, \operatorname{Re} (\mathbf{A}^m)_{12} \le \operatorname{Re} \left( \bar{\alpha} \beta\, (\mathbf{A}^m)_{12} \right)$$

holds for any complex *α* and *β*, which is possible if and only if *(***A***m)*<sup>12</sup> is a nonpositive real number. Of course the same condition is enough even if more than two vectors are considered. No restriction on the diagonal elements is needed, since in the inequality above the diagonal terms cancel.
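The derived sign condition can be probed numerically: for a real symmetric matrix with non-positive off-diagonal entries (so that $-\mathbf{A}$ is a Minkowski matrix) the quadratic form never increases under the coordinatewise absolute value. A random test, offered as an illustrative sketch rather than part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def form(A, u):
    """Quadratic form <u, A u>; real-valued for Hermitian A."""
    return np.real(np.conj(u) @ A @ u)

n, worst = 5, -np.inf
for _ in range(1000):
    # Real symmetric matrix with non-positive off-diagonal entries,
    # i.e. -A is a Minkowski matrix; the diagonal is unrestricted.
    off = np.tril(-rng.random((n, n)), -1)
    A = np.diag(rng.normal(size=n)) + off + off.T

    u = rng.normal(size=n) + 1j * rng.normal(size=n)
    worst = max(worst, form(A, np.abs(u)) - form(A, u))

print(worst)   # never (significantly) positive: |u| does not increase the form
```

The inequality holds because the diagonal terms cancel, while each off-diagonal pair contributes $2 A_{ij} \left( \operatorname{Re}(\bar{u}_i u_j) - |u_i||u_j| \right) \ge 0$ when $A_{ij} \le 0$.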

Summing up, the domain of the quadratic form is given by the requirement that at each vertex $V^m$ the vector of boundary values belongs to a subspace spanned by $n_m \le d_m$ vectors with disjoint supports and non-negative coordinates, $\vec{u}(V^m) \in \mathcal{L}\{\vec{a}_i^m\}_{i=1}^{n_m}$, and the matrix $\mathbf{A}^m$ is Hermitian with non-positive non-diagonal entries. Remember that we consider only properly connecting vertex conditions, implying that the supports of the vectors $\vec{a}_j^m$ cover all endpoints $x_j \in V^m$ (see Eq. (3.46)) and the matrix $\mathbf{A}^m$ is irreducible.

Thus the vertex conditions coincide with the generalised delta-couplings determined by (3.44). ⨅⨆

It is straightforward to see that generalised delta couplings guarantee that (4.41) holds. Using the Beurling–Deny criterion one may characterise the possible vertex conditions with the help of Theorem 6.85 in [393], but our characterisation appears more explicit.

# *4.5.4 Positivity of the Ground State for Generalised Delta-Couplings*

We are ready to generalise Theorem 4.12 to Schrödinger operators with generalised delta vertex conditions.

**Theorem 4.16** *Let $L_q^{\mathbf{S}}(\Gamma)$ be a Schrödinger operator on a connected finite compact metric graph $\Gamma$ with $q \in L_1(\Gamma)$. Suppose the vertex conditions are either Dirichlet or generalised delta-couplings with non-negative weights and the coupling matrix $\mathbf{A}$ being a negative Minkowski M-matrix. Then the ground state is unique and may be chosen real, in which case it is strictly positive.*

*Proof* We are going to modify the proof of Theorem 4.12. It has already been shown that generalised delta couplings with the prescribed properties guarantee that the quadratic form does not increase under taking the absolute value (Theorem 4.15), hence the ground state can always be chosen non-negative. It remains, as before, to exclude the possibility that $\psi_1$ is zero at points other than the Dirichlet vertices.

Assume the opposite: the eigenfunction $\psi_1$ is equal to zero at a certain point $x_0 \in \Gamma$ which is not a Dirichlet point. As before, two possibilities should be considered:

- the point $x_0$ lies inside one of the edges;
- the point $x_0$ belongs to one of the vertices.
If *x*<sup>0</sup> lies on an edge, then repeating the arguments used in the proof of Theorem 4.12 we conclude that *ψ*<sup>1</sup> is identically equal to zero on the whole edge. In particular the function *ψ*<sup>1</sup> is equal to zero at the two vertices that are the endpoints of the edge.

It remains to study the case where $x_0$ belongs to one of the vertices, say $V^m$. Remember that we do not necessarily assume continuity of the functions at the vertices. Consider the corresponding weight vectors $\vec{a}_1^m, \vec{a}_2^m, \dots, \vec{a}_{n_m}^m$. If $x_0$ belongs to the support of $\vec{a}_1^m$, then the first coordinate of $\vec{\mathbf{u}}^m$ is zero. We consider the second condition in (3.44):

$$
\underbrace{\langle \vec{a}_1^m, \partial \vec{u}(V^m) \rangle}_{\ge 0} = 0 + \sum_{i=2}^{n_m} \underbrace{(\mathbf{A}^m)_{1i}}_{\le 0}\, \underbrace{\langle \vec{a}_i^m, \vec{u}(V^m) \rangle}_{\ge 0}.
$$

The left hand side is non-negative, since the function is non-negative inside the edges and is equal to zero at the endpoints in the support of $\vec{a}_1^m$. The scalar products $\langle \vec{a}_i^m, \vec{u}(V^m) \rangle$ are non-negative, since the function $u$ is non-negative. This is possible only if

$$
\langle \vec{a}\_1^m, \partial \vec{u}(V^m) \rangle = 0
$$

and all

$$
\langle \vec{a}_i^m, \vec{u}(V^m) \rangle = 0,
$$

provided $(\mathbf{A}^m)_{1i} \ne 0$. At least one of the coefficients $(\mathbf{A}^m)_{1i}$ is different from zero, since otherwise the matrix $\mathbf{A}^m$ is reducible. It follows that at least one other coordinate of $\vec{\mathbf{u}}^m$ is zero. Repeating this procedure if necessary, we prove not only that the vector $\vec{\mathbf{u}}^m$ is identically zero, but that all scalar products of $\partial \vec{u}(V^m)$ with the vectors $\vec{a}_1^m, \vec{a}_2^m, \dots, \vec{a}_{n_m}^m$ are zero. It follows that all normal derivatives at $V^m$ are also zero, since $\partial \psi_1(x_\ell)$, $x_\ell \in V^m$, are all non-negative.

We conclude that on every edge incident to *V <sup>m</sup>* the function *ψ*<sup>1</sup> is a solution of the second order differential equation satisfying zero Cauchy data. Hence the function is identically equal to zero on all edges incident to *V m.*

Repeating the argument for the vertices connected to $V^m$ by an edge, we conclude that $\psi_1$ is zero on all edges incident to those vertices. Continuing this procedure we conclude that $\psi_1 \equiv 0$ on the whole graph $\Gamma$, since the graph is connected. It follows that the ground state eigenfunction can be chosen strictly positive.

The arguments used above cannot be applied to Dirichlet points: the ground state is required to vanish at such points, and positivity of the normal derivatives there does not lead to any contradiction.

The uniqueness of the ground state follows as before from the fact that the ground state can be chosen non-negative. ⨅⨆

One may combine Theorems 4.15 and 4.16 to prove the following

**Corollary 4.17** *Assume that the quadratic form of a Schrödinger operator on a connected finite compact metric graph does not increase under taking the absolute value. Then the corresponding ground state is unique and can be chosen strictly positive, i.e. the eigenfunction is equal to zero only at the points where Dirichlet conditions are prescribed.*

In fact we have not only proved that $\psi_1$ is strongly positive, but characterised explicitly all points where $\psi_1$ is equal to zero. If there are no Dirichlet points, then $\psi_1$ is separated from zero: $\psi_1(x) \ge \delta > 0$.

One may also prove the opposite statement, namely that generalised delta couplings together with Dirichlet conditions form the optimal family of vertex conditions that guarantee positivity of the ground state eigenfunction. It is not true that any vertex conditions different from the described class lead to an operator with a non sign-definite ground state. The following theorem is proven in [337]:

**Theorem 4.18** *The class of generalised delta-couplings and Dirichlet conditions is optimal to guarantee positivity of the ground state eigenfunction for Schrödinger operators on metric graphs in the following sense:* 

*Assume that Hermitian vertex conditions not from the selected class are given. Then there exists a metric graph and a Laplace operator on it with given vertex conditions at one of the vertices and generalised delta-couplings and Dirichlet conditions at the other vertices such that its ground state eigenfunction cannot be chosen non-negative.* 

The theorem can be proven by just looking at the Laplacian on a non-equilateral star graph with the vertex conditions not from the described class. To get operators with non-positive ground state eigenfunctions one considers different limits as the edge lengths go to zero. Please consult paper [337] for a complete proof.

Positivity and uniqueness of the ground state is closely related to the positivity preserving property of the corresponding semigroup. This question has been discussed in [337] and [393].

## **4.6 First Spectral Estimates**

Weyl's asymptotics (4.15) implies no estimate for individual eigenvalues, nor does it exclude that a Schrödinger operator has multiple eigenvalues, even with large indices. It turns out that the maximal multiplicity of the eigenvalues can be estimated using the structure of the underlying metric graph [279], more precisely using the corresponding discrete graph. We are going to discuss this question in full detail in Chap. 17. Here we present just a few naive estimates necessary for our analysis in the following chapters.

**Rayleigh Quotient and Max-Min Principle** Every Schrödinger operator is semi-bounded from below and has discrete spectrum (provided the metric graph is finite and compact), which allows one to use effectively the *min-max* and *max-min* principles involving Rayleigh quotients [442]:

**Proposition 4.19** *Let A be a self-adjoint, semi-bounded from below operator with discrete spectrum, then the eigenvalues λn(A) counted from below can be calculated using one of the following two formulas* 

$$\lambda\_n(A) = \min\_{\mathcal{V}\_n} \max\_{u \in \mathcal{V}\_n} \frac{\mathcal{Q}\_A(u, u)}{\|u\|^2}, \quad n = 1, 2, \dots, \tag{4.51}$$

$$\lambda_n(A) = \max_{\mathcal{V}_{n-1}} \min_{u \perp \mathcal{V}_{n-1}} \frac{\mathcal{Q}_A(u, u)}{\|u\|^2}, \quad n = 1, 2, \dots, \tag{4.52}$$

*where QA(u, u) denotes the quadratic form associated with the operator A and* V*<sup>n</sup> ranges over all n-dimensional subspaces of the domain* Dom *(QA) of the quadratic form.* 
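Formulas (4.51)–(4.52) can be illustrated in the finite-dimensional setting, where the quadratic form is $\langle u, Au \rangle$ for a symmetric matrix. The sketch below (a numerical illustration, not from the original text) checks the max-min characterisation of the two lowest eigenvalues of a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric matrix playing the role of a semi-bounded operator
# with discrete spectrum; Q_A(u, u) = <u, A u>.
B = rng.normal(size=(6, 6))
A = (B + B.T) / 2
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in increasing order

def rayleigh(u):
    return (u @ A @ u) / (u @ u)

# (4.52) with n = 1: lambda_1 is the minimum of the Rayleigh quotient.
samples = [rayleigh(rng.normal(size=6)) for _ in range(2000)]
print(min(samples) >= eigvals[0] - 1e-12)                 # True

# (4.52) with n = 2 and V_1 the span of the first eigenvector:
# minimising over u perpendicular to V_1 yields lambda_2.
v1 = eigvecs[:, 0]
def project_out(u):
    return u - (u @ v1) * v1

perp = [rayleigh(project_out(rng.normal(size=6))) for _ in range(2000)]
print(min(perp) >= eigvals[1] - 1e-12)                    # True
print(abs(rayleigh(eigvecs[:, 1]) - eigvals[1]) < 1e-12)  # True
```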

#### **Uniform Estimates**

**Theorem 4.20** *Let $\Gamma$ be a compact finite metric graph of total length $\mathcal{L}$, with $N$ edges and $M$ vertices. The eigenvalues of the standard Laplacian $L^{\mathrm{st}}(\Gamma)$ satisfy the estimate*

$$\left(\frac{\pi}{\mathcal{L}}\right)^2 (n - M)^2 \le \lambda_n \left( L^{\mathrm{st}}(\Gamma) \right) \le \left(\frac{\pi}{\mathcal{L}}\right)^2 (n + N - 1)^2, \quad n \ge M. \tag{4.53}$$

*Proof* For a self-adjoint operator $A$, bounded from below and with discrete spectrum, define the *eigenvalue counting function* $E_A(\lambda)$ by (4.17). The standard Laplacian is non-negative; therefore when calculating the eigenvalue counting function we assume that $\lambda \ge 0$. This is also the reason that the lower estimate in (4.53) is interesting only if $n > M$.


The eigenvalue counting function for the Dirichlet Laplacian *L*D*(-)* has already been calculated, see (4.18). We also obtained the following two-sided estimate (4.20)

$$\frac{\mathcal{L}}{\pi}\sqrt{\lambda} - N + 1 \le E\_{L^{\rm D}(\Gamma)}(\lambda) \le \frac{\mathcal{L}}{\pi}\sqrt{\lambda}.\tag{4.54}$$

We have already shown that the difference between the resolvents of $L^{\mathrm{st}}$ and $L^{\mathrm{D}}$ is of finite rank (less than or equal to $2N$, see the proof of Theorem 4.6). It is not hard to improve this result and show that the rank does not exceed $M$, since both the Dirichlet and standard Laplacians are defined on functions that are continuous at the vertices. Assume that

$$(L^{\mathrm{D}} - \lambda)u^{\mathrm{D}} = f, \quad (L^{\mathrm{st}} - \lambda)u^{\mathrm{st}} = f.$$

Then the difference $\psi := u^{\mathrm{D}} - u^{\mathrm{st}}$ satisfies the homogeneous differential equation on the edges

$$\left(-\frac{d^2}{dx^2} - \lambda\right)\psi = 0$$

and is continuous at the vertices, since functions in the domains of $L^{\mathrm{D}}$ and $L^{\mathrm{st}}$ are continuous. The function $\psi$ is uniquely determined by its values $\psi(V^m)$ at the vertices. Consider any edge $E_n = [x_{2n-1}, x_{2n}]$ between $V^i$ and $V^j$; then

$$\psi(\mathbf{x}) = \psi(V^i) \frac{\sin k(\mathbf{x} - \mathbf{x}\_{2n})}{\sin k(\mathbf{x}\_{2n-1} - \mathbf{x}\_{2n})} + \psi(V^j) \frac{\sin k(\mathbf{x} - \mathbf{x}\_{2n-1})}{\sin k(\mathbf{x}\_{2n} - \mathbf{x}\_{2n-1})}, \quad k^2 = \lambda. \tag{4.55}$$

Remember that $\lambda \notin \mathbb{R}$ and therefore does not belong to the spectrum of $L^{\mathrm{D}}$; hence $\sin k(x_{2n} - x_{2n-1}) = \sin k\ell_n \ne 0$. So the resolvent difference is of rank less than or equal to $M$, and we have

$$E\_{L^{\text{st}}(\Gamma)}(\lambda) \le E\_{L^{\text{D}}(\Gamma)}(\lambda) + M \le \left[\frac{\sqrt{\lambda}}{\pi}\mathcal{L}\right] + M. \tag{4.56}$$

The lower estimate (4.54) can in principle be modified in a similar way, but instead we shall take into account that $L^{\mathrm{st}}(\Gamma) \le L^{\mathrm{D}}(\Gamma)$ (in the sense of quadratic forms). These quadratic forms are given by the same expression and their domains consist of $W_2^1(\Gamma \setminus \mathbf{V})$ functions that are continuous at the vertices. The quadratic form domain of the Dirichlet operator is characterised by the additional assumption that the functions are equal to zero at the vertices (endpoints). Therefore the max-min principle (Proposition 4.19) implies that the eigenvalues of the standard Laplacian do not exceed the eigenvalues of the Dirichlet Laplacian. Hence the lower bound (4.54) on the eigenvalue counting function $E_{L^{\mathrm{D}}(\Gamma)}$ is valid for $E_{L^{\mathrm{st}}(\Gamma)}$ as well.

Putting the lower and upper estimates together we have

$$
\left[\frac{\sqrt{\lambda}}{\pi}\mathcal{L}\right] - N + 1 \le E_{L^{\mathrm{st}}(\Gamma)}(\lambda) \le \left[\frac{\sqrt{\lambda}}{\pi}\mathcal{L}\right] + M. \tag{4.57}
$$

Setting *<sup>λ</sup>* <sup>=</sup> *<sup>π</sup>*<sup>2</sup> <sup>L</sup><sup>2</sup> *<sup>n</sup>*<sup>2</sup> we obtain

$$n - N + 1 \le E_{L^{\mathrm{st}}(\Gamma)}\left(\frac{\pi^2}{\mathcal{L}^2} n^2\right) \le n + M,$$

so

$$
\lambda\_{n-N+1} \le \frac{\pi^2}{\mathcal{L}^2} n^2 \le \lambda\_{n+M}.
$$

Setting $n' = n + M$ we get $\lambda_{n'} \ge \frac{\pi^2}{\mathcal{L}^2} (n' - M)^2$, and similarly we find $\lambda_{n'} \le \frac{\pi^2}{\mathcal{L}^2} (n' + N - 1)^2$, which proves the theorem. ⨅⨆
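The two-sided estimate (4.53) can be checked on the simplest example, a single interval $[0, L]$ viewed as a metric graph with $N = 1$ edge and $M = 2$ vertices; standard conditions at the two degree-one endpoints reduce to Neumann conditions, so the eigenvalues are known explicitly. A sketch (the numerical value of $L$ is an arbitrary choice):

```python
import numpy as np

# Single interval [0, L] as a metric graph: N = 1 edge, M = 2 vertices.
# Standard conditions at degree-one vertices are Neumann conditions,
# hence lambda_n = (pi * (n - 1) / L)**2, n = 1, 2, ...
L, N, M = 2.7, 1, 2

checks = []
for n in range(M, 60):
    lam = (np.pi * (n - 1) / L) ** 2
    lower = (np.pi / L) ** 2 * (n - M) ** 2
    upper = (np.pi / L) ** 2 * (n + N - 1) ** 2
    checks.append(lower <= lam <= upper)

print(all(checks))   # True: estimate (4.53) holds for every n >= M
```

Here the estimate reads $(n-2)^2 \le (n-1)^2 \le n^2$, so both bounds are off by exactly one index, which matches the expectation that (4.53) is far from optimal for concrete graphs.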

The derived estimates can be improved if one takes into account the topology and geometry of the graph. We are going to return to this question below (Chaps. 11–13).

To illustrate how far from optimal these estimates are, let us check what can be said about the maximal possible multiplicity of the eigenvalues. Assume that a certain eigenvalue $\lambda_n$ has multiplicity $\mu$, i.e.

$$
\lambda\_n = \lambda\_{n+1} = \dots = \lambda\_{n+\mu-1},
$$

holds. We use estimate (4.53) for *λn* from above and for *λn*+*μ*−<sup>1</sup> from below to get:

$$\left(\frac{\pi}{\mathcal{L}}\right)^2 (n + \mu - 1 - M)^2 \le \lambda_{n+\mu-1} = \lambda_n \le \left(\frac{\pi}{\mathcal{L}}\right)^2 (n + N - 1)^2,$$

implying

$$n + \mu - 1 - M \le n + N - 1.$$

Therefore we have

$$
\mu \le M + N. \tag{4.58}
$$

This estimate is slightly better than the obvious estimate *μ* ≤ 2*N*, which follows from the fact that the differential equation on each interval has two linearly independent solutions.

On the other hand, the method developed in the proof allows us to do much better, provided there are no eigenfunctions equal to zero at the vertices.

**Corollary 4.21** *Let $\lambda_0$ be an eigenvalue of the standard Laplacian $L^{\mathrm{st}}(\Gamma)$ which does not belong to the spectrum of the Dirichlet Laplacian $L^{\mathrm{D}}(\Gamma)$. Then the multiplicity of this eigenvalue does not exceed the number of vertices $M$.*

*Proof* The key point is that for $\lambda_0 \ne \left( \frac{\pi m}{\ell_n} \right)^2$, $m \in \mathbb{N}$, the values of the eigenfunction at the vertices $\psi(V^m)$ determine the eigenfunction on the whole graph via formula (4.55). Hence the corresponding multiplicity cannot exceed $M$. ⨅⨆

The obtained estimate on the multiplicity of the eigenvalues holds even for Schrödinger operators with standard vertex conditions, since the proof is based on the fact that the eigenfunction on every edge is uniquely determined by its values at the endpoints.

**Problem 14** Find a counterexample showing that the assumption that $\lambda_0$ does not belong to the spectrum of the Dirichlet operator is needed in Corollary 4.21.

**Problem 15** How should one modify the assumption in order to adapt Corollary 4.21 to the case of Schrödinger operators?

**Problem 16** Consider a finite metric graph $\Gamma$ with several semi-infinite edges and the corresponding standard Laplacian. Prove that if $\psi$ is an eigenfunction corresponding to a **positive** eigenvalue $\lambda > 0$, then the restriction of $\psi$ to any semi-infinite edge is identically equal to zero. What is the reason that we require $\lambda$ to be positive? Does the statement hold true for negative eigenvalues?

**Problem 17** Construct your own example of a metric graph for which the Courant theorem is not valid.

**Problem 18** What is the maximal multiplicity of the ground state for the standard Schrödinger operator on a finite compact metric graph with *β*<sup>0</sup> connected components?

**Problem 19** Construct an example of a non-compact metric graph with a finite (non-zero) number of embedded eigenvalues.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 5 The Characteristic Equation**

This chapter is devoted to compact graphs formed by a finite number of bounded intervals. We already know that the spectrum of the corresponding magnetic Schrödinger operator is discrete, and our main goal is to obtain characteristic equations determining the spectrum (eigenvalues) precisely. We describe here three different methods leading to an explicit characteristic equation.


Each of these methods has certain advantages, which explains their applicability in different situations.

## **5.1 Characteristic Equation I: Edge Transfer Matrices**

## *5.1.1 Transfer Matrix for a Single Interval*

#### **One-Dimensional Schrödinger Equation**

With the one-dimensional Schrödinger equation

$$\left(-\frac{d^2}{dx^2} + q(x)\right)g(\lambda, x) = \lambda g(\lambda, x),\tag{5.1}$$

on the interval *E*<sup>1</sup> = [*x*1*, x*2] one associates the following **transfer matrix** 

$$T\_q(\lambda; x\_1, x\_2) : \begin{pmatrix} g(\lambda, x\_1) \\ g'(\lambda, x\_1) \end{pmatrix} \mapsto \begin{pmatrix} g(\lambda, x\_2) \\ g'(\lambda, x\_2) \end{pmatrix},\tag{5.2}$$


where *g* is any solution to (5.1). The transfer matrix maps Cauchy data for *x* = *x*<sup>1</sup> to Cauchy data for *x* = *x*2*.* To calculate the entries of the transfer matrix, consider the solutions *c(λ, x), s(λ, x)* to the differential equation (5.1) determined by the initial conditions:

$$\begin{cases} c(\lambda, x_1) = 1, \\ c'(\lambda, x_1) = 0, \end{cases} \quad \text{and} \quad \begin{cases} s(\lambda, x_1) = 0, \\ s'(\lambda, x_1) = 1. \end{cases} \tag{5.3}$$

Then the transfer matrix is simply given by

$$T_q(\lambda; x_1, x_2) \equiv \begin{pmatrix} t_{11}(k) & t_{12}(k) \\ t_{21}(k) & t_{22}(k) \end{pmatrix} = \begin{pmatrix} c(\lambda, x_2) & s(\lambda, x_2) \\ c'(\lambda, x_2) & s'(\lambda, x_2) \end{pmatrix}. \tag{5.4}$$

If *q* ≡ 0 then the entries of the transfer matrix are just the usual sine and cosine functions, which explains our notation

$$T_0(\lambda; x_1, x_2) = \begin{pmatrix} \cos k(x_2 - x_1) & \dfrac{\sin k(x_2 - x_1)}{k} \\ -k \sin k(x_2 - x_1) & \cos k(x_2 - x_1) \end{pmatrix}, \quad k = \sqrt{\lambda}. \tag{5.5}$$
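For readers who wish to experiment, the free transfer matrix (5.5) is easy to implement. The following minimal NumPy sketch (our own illustration; the value *k* = 2 and the subdivision point are arbitrary choices) checks the composition property over adjacent intervals and the determinant identity (5.7).

```python
import numpy as np

def transfer_free(k, x1, x2):
    """Free transfer matrix T_0(lambda; x1, x2) of (5.5), with k = sqrt(lambda) > 0."""
    ell = x2 - x1
    return np.array([[np.cos(k * ell), np.sin(k * ell) / k],
                     [-k * np.sin(k * ell), np.cos(k * ell)]])

# Arbitrary sample values: k = 2, subdivision point 0.7 inside [0, 1.5].
k = 2.0
T_left = transfer_free(k, 0.0, 0.7)
T_right = transfer_free(k, 0.7, 1.5)
T_whole = transfer_free(k, 0.0, 1.5)

composition_ok = np.allclose(T_right @ T_left, T_whole)  # transfer matrices compose
det_T = np.linalg.det(T_whole)                           # equals 1, cf. (5.7)
```

The composition property reflects the fact that solving the equation on [*x*<sub>1</sub>*, x*<sub>2</sub>] is the same as solving it successively on subintervals.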

For non-trivial *q* the functions *c* and *s* are solutions to certain integral equations [105, 381]

$$\begin{aligned} c(k, x) &= \cos k(x - x_1) - \int_{x_1}^{x} \frac{\sin k(x - t)}{k} q(t) c(k, t)\, dt, \\ s(k, x) &= \frac{\sin k(x - x_1)}{k} - \int_{x_1}^{x} \frac{\sin k(x - t)}{k} q(t) s(k, t)\, dt. \end{aligned} \tag{5.6}$$

It is easy to see that the solution is unique and can be obtained by iterations. It follows that the functions *c(λ, x)*, as well as *s(λ, x)*, depend analytically on *λ.* Their properties are described in detail in [381].

One can easily see that the determinant of *T*<sub>0</sub>*(λ)* is identically equal to one. This is a general fact, which holds whenever the magnetic potential is zero. To prove this, note that *c(λ, x)* and *s(λ, x)* are two linearly independent solutions to the second order differential equation (5.1), hence their Wronskian is constant:

$$\begin{split} &\frac{\partial}{\partial x}\det T\_{q}(\lambda;x\_{1},x) \\ &=\frac{\partial}{\partial x}\left(c(\lambda,x)s'(\lambda,x) - s(\lambda,x)c'(\lambda,x)\right) \\ &=c'(\lambda,x)s'(\lambda,x) + c(\lambda,x)s''(\lambda,x) - s'(\lambda,x)c'(\lambda,x) - s(\lambda,x)c''(\lambda,x) \\ &=c(\lambda,x)(q(x)-\lambda)s(\lambda,x) - s(\lambda,x)(q(x)-\lambda)c(\lambda,x) = 0. \end{split}$$

Since
$$\det T_q(\lambda; x_1, x_1) = \det \begin{pmatrix} c(\lambda, x_1) & s(\lambda, x_1) \\ c'(\lambda, x_1) & s'(\lambda, x_1) \end{pmatrix} = \det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1,$$
we conclude that

$$\det T_q(\lambda; x_1, x_2) = 1. \tag{5.7}$$
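The identity (5.7) can also be checked numerically for a nonzero potential. The sketch below (our own illustration; the potential *q(x)* = cos²*x* and the spectral parameter are arbitrary choices) integrates the two solutions *c* and *s* fixed by the initial conditions (5.3) with a hand-rolled RK4 scheme and verifies that det *T<sub>q</sub>* = 1, together with the closed form (5.5) for *q* = 0.

```python
import numpy as np

def transfer_q(lam, q, x1, x2, steps=2000):
    """Transfer matrix T_q(lambda; x1, x2): integrate u'' = (q(x) - lambda) u
    by RK4 for the solutions c, s fixed by the initial conditions (5.3)."""
    def rhs(x, y):                     # y = (u, u')
        return np.array([y[1], (q(x) - lam) * y[0]])
    h = (x2 - x1) / steps
    Y = np.eye(2)                      # columns (c, c') and (s, s') at x1
    x = x1
    for _ in range(steps):
        for j in range(2):
            y = Y[:, j].copy()
            k1 = rhs(x, y)
            k2 = rhs(x + h / 2, y + h / 2 * k1)
            k3 = rhs(x + h / 2, y + h / 2 * k2)
            k4 = rhs(x + h, y + h * k3)
            Y[:, j] = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return Y                           # values and derivatives at x2, cf. (5.4)

T = transfer_q(3.0, lambda x: np.cos(x) ** 2, 0.0, 1.0)
det_T = np.linalg.det(T)               # should equal 1 by (5.7)

# sanity check against the closed form (5.5) for q = 0, lambda = 4 (k = 2):
T0 = transfer_q(4.0, lambda x: 0.0, 0.0, 1.0)
free_matches = np.allclose(
    T0,
    [[np.cos(2.0), np.sin(2.0) / 2.0], [-2.0 * np.sin(2.0), np.cos(2.0)]],
    atol=1e-8)
```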

**Problem 20** What is the "physical" interpretation of formula (5.7)?

#### **Magnetic Schrödinger Equation**

There is a simple connection between the transfer matrices for different magnetic potentials. We begin with the following explicit relation (see (4.10) and (4.11))

$$\left(\left(i\frac{d}{dx} + a(x)\right)^2 + q(x)\right)e^{i\int_{x_0}^{x} a(y)dy}g(x) = e^{i\int_{x_0}^{x} a(y)dy}\left(-\frac{d^2}{dx^2} + q(x)\right)g(x),\tag{5.8}$$

obtained by differentiation. This identity means that the magnetic potential in the one-dimensional Schrödinger equation can be eliminated. We obtain a similar relation between the extended derivatives

$$\left(\frac{d}{dx} - ia(x)\right) e^{i\int_{x_0}^{x} a(y) dy} g(x) = e^{i\int_{x_0}^{x} a(y) dy} \frac{d}{dx} g(x).\tag{5.9}$$

This formula explains why we use extended derivatives when describing vertex conditions for magnetic operators. In principle these conditions can be expressed via the limiting values of the functions and their normal derivatives, but exploiting extended derivatives makes the transition between operators with different magnetic potentials much easier.

The transfer matrix *Tq,a* for the magnetic Schrödinger equation on the interval [*x*1*, x*2] is defined as the 2 × 2 matrix connecting the *extended* Cauchy data

$$T\_{q,a}(\lambda; \mathbf{x}\_1, \mathbf{x}\_2) : \begin{pmatrix} f(\lambda, \mathbf{x}\_1) \\ f'(\lambda, \mathbf{x}\_1) - ia(\mathbf{x}\_1)f(\lambda, \mathbf{x}\_1) \end{pmatrix} \mapsto \begin{pmatrix} f(\lambda, \mathbf{x}\_2) \\ f'(\lambda, \mathbf{x}\_2) - ia(\mathbf{x}\_2)f(\lambda, \mathbf{x}\_2) \end{pmatrix} \tag{5.10}$$

for any function *f (λ, x)* being a solution to the differential equation

$$
\left( \left( i \frac{d}{dx} + a(\mathbf{x}) \right)^2 + q(\mathbf{x}) \right) f(\lambda, \mathbf{x}) = \lambda f(\lambda, \mathbf{x}).\tag{5.11}
$$

Formula (5.8) implies that if *g* solves Eq. (5.1), then the function *f(x)* = *e*<sup>iA(x)</sup>*g(x)* with *A(x)* = ∫<sub>x<sub>0</sub></sub><sup>x</sup> *a(y)dy* is a solution to (5.11); vice versa, every solution to (5.11) can be obtained in this way. Taking into account that the connection between the Cauchy data for the function *g* is described by the transfer matrix *T<sub>q</sub>*, we see that

$$T\_{q,a} = e^{i\Phi\_1} T\_q,\tag{5.12}$$

with the phase Φ<sub>1</sub> given by

$$
\Phi_1 = \int_{x_1}^{x_2} a(x)\, dx. \tag{5.13}
$$

The derived formulas imply in particular that the transfer matrix does not depend on the particular shape of the magnetic potential, but only on its integral over the interval. Note that relation (5.7) does not hold in general for nontrivial magnetic potentials.
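The gauge relation (5.8) behind these formulas can be verified directly on a computer. The sketch below (our own illustration; the magnetic potential *a(x)* = sin *x* and *k* = 2 are arbitrary sample choices) takes *g(x)* = *e*<sup>ikx</sup> solving the free equation, forms *f* = *e*<sup>iA</sup>*g*, and checks by central differences that *f* solves the magnetic equation (5.11) with *q* = 0.

```python
import numpy as np

# Gauge check for (5.8): g(x) = e^{ikx} solves -g'' = k^2 g, so f = e^{iA} g
# should solve the magnetic equation (5.11) with q = 0.
k = 2.0
a = lambda x: np.sin(x)                  # sample magnetic potential (assumption)
A = lambda x: 1.0 - np.cos(x)            # A(x) = int_0^x a(y) dy
f = lambda x: np.exp(1j * (A(x) + k * x))

x = np.linspace(0.2, 1.8, 401)           # interior sample points
h = 1e-4                                  # step for central differences
fp = (f(x + h) - f(x - h)) / (2 * h)
fpp = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
ap = (a(x + h) - a(x - h)) / (2 * h)

# expanding (i d/dx + a)^2 f = -f'' + 2 i a f' + (i a' + a^2) f:
lhs = -fpp + 2j * a(x) * fp + (1j * ap + a(x) ** 2) * f(x)
residual = np.max(np.abs(lhs - k ** 2 * f(x)))   # vanishes up to O(h^2)
```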

*Question 5.1* Why is it necessary to consider extended derivatives instead of usual derivatives in (5.10)?

*Question 5.2* Does the determinant of the transfer matrix depend on *λ* if the magnetic flux is not zero? What is the correct formula for det *Tq,a(λ)*?

## *5.1.2 The Characteristic Equation*

Let us discuss now how to determine the spectrum of a quantum graph. Every eigenfunction is a nontrivial solution of the eigenfunction differential equations on the edges satisfying in addition vertex conditions. To obtain the spectrum one has to identify those *λ* for which a nontrivial solution occurs. The transfer matrices for the edges encode all necessary information concerning solutions of the eigenfunction equation on each edge and therefore allow one to reduce the problem of solving the system of coupled differential equations to a certain finite dimensional linear system. The price we have to pay is that the entries in the linear system are functions of the spectral parameter.

Let *ψ(λ, x)* be an eigenfunction corresponding to the eigenvalue *λ.* Every such function solves the differential equation (5.11) on each interval *E<sub>n</sub>* and satisfies vertex conditions (3.53) at every vertex. Let us denote by *T*<sup>n</sup><sub>q,a</sub> the transfer matrix corresponding to the magnetic Schrödinger equation (5.11) on the interval *E<sub>n</sub>*

$$T_{q,a}^n : \begin{pmatrix} \psi(x_{2n-1})\\ \partial \psi(x_{2n-1}) \end{pmatrix} \mapsto \begin{pmatrix} \psi(x_{2n})\\ -\partial \psi(x_{2n}) \end{pmatrix}.\tag{5.14}$$

The last equation can be written in the matrix form

$$\left(\operatorname{diag}\left\{\begin{pmatrix}t_{11}^{n} & -1\\ t_{21}^{n} & 0\end{pmatrix}\right\}_{n=1}^{N},\ \operatorname{diag}\left\{\begin{pmatrix}t_{12}^{n} & 0\\ t_{22}^{n} & 1\end{pmatrix}\right\}_{n=1}^{N}\right)\begin{pmatrix}\vec{\Psi}\\ \partial\vec{\Psi}\end{pmatrix}=0,\tag{5.15}$$

where Ψ*, ∂*Ψ are the vectors of boundary values written in the basis associated with the edges

$$
\vec{\Psi} = \begin{pmatrix} \psi(\mathbf{x}\_1) \\ \psi(\mathbf{x}\_2) \\ \vdots \\ \psi(\mathbf{x}\_{2N}) \end{pmatrix}, \quad \partial \vec{\Psi} = \begin{pmatrix} \partial \psi(\mathbf{x}\_1) \\ \partial \psi(\mathbf{x}\_2) \\ \vdots \\ \partial \psi(\mathbf{x}\_{2N}) \end{pmatrix}. \tag{5.16}$$

To be more precise, the 2*N* × 4*N* matrix describing the linear system (5.15) is

$$\begin{pmatrix} \begin{matrix} t_{11}^1 & -1 \\ t_{21}^1 & 0 \end{matrix} & & & \begin{matrix} t_{12}^1 & 0 \\ t_{22}^1 & 1 \end{matrix} & & \\ & \ddots & & & \ddots & \\ & & \begin{matrix} t_{11}^N & -1 \\ t_{21}^N & 0 \end{matrix} & & & \begin{matrix} t_{12}^N & 0 \\ t_{22}^N & 1 \end{matrix} \end{pmatrix}. \tag{5.17}$$
The vertex conditions (3.53) can also be transformed into a similar form

$$(i(\mathbf{I} - \mathbf{S}), \mathbf{I} + \mathbf{S}) \begin{pmatrix} \vec{\Psi} \\ \partial \vec{\Psi} \end{pmatrix} = 0. \tag{5.18}$$

Here we assume that the vertex scattering matrix **S** is written in the basis associated with the edges. Equations (5.15) and (5.18) together give us 4*N* linear homogeneous equations on 4*N* boundary values of the function *ψ*

$$\begin{pmatrix} \operatorname{diag}\left\{ \begin{pmatrix} t_{11}^{n} & -1\\ t_{21}^{n} & 0 \end{pmatrix} \right\}_{n=1}^{N} & \operatorname{diag}\left\{ \begin{pmatrix} t_{12}^{n} & 0\\ t_{22}^{n} & 1 \end{pmatrix} \right\}_{n=1}^{N} \\ i(\mathbf{I} - \mathbf{S}) & \mathbf{I} + \mathbf{S} \end{pmatrix} \begin{pmatrix} \vec{\Psi} \\ \partial \vec{\Psi} \end{pmatrix} = 0, \tag{5.19}$$

which has a nontrivial solution if and only if the determinant of the 4*N* ×4*N* matrix is zero.

Every nontrivial solution to the system determines a nontrivial solution to the coupled system of differential equations, moreover the multiplicities of the eigenvalues coincide. Hence the following characteristic equation

$$\det \begin{pmatrix} \operatorname{diag} \left\{ \begin{pmatrix} t_{11}^n(k) & -1 \\ t_{21}^n(k) & 0 \end{pmatrix} \right\}_{n=1}^N & \operatorname{diag} \left\{ \begin{pmatrix} t_{12}^n(k) & 0 \\ t_{22}^n(k) & 1 \end{pmatrix} \right\}_{n=1}^N \\ i(\mathbf{I} - \mathbf{S}) & \mathbf{I} + \mathbf{S} \end{pmatrix} = 0 \tag{5.20}$$

determines the spectrum of *L*<sup>**S**</sup><sub>q,a</sub>, while the number of linearly independent solutions to (5.19) coincides with the multiplicity of the corresponding eigenvalue of *L*<sup>**S**</sup><sub>q,a</sub>*.* Our analysis can be summarised as follows.

**Theorem 5.1** *Let Γ be a finite compact metric graph. Then the eigenvalues of the corresponding magnetic Schrödinger operator L*<sup>**S**</sup><sub>q,a</sub> *are solutions to the characteristic equation (5.20) and their multiplicities coincide with the number of linearly independent solutions to the linear system (5.19).*

Using perturbation theory for self-adjoint operators we have already proved that the spectrum is discrete and satisfies Weyl's law (4.25) (see Chap. 4). One may get the same result just by looking at the secular function: the function is analytic and its zeroes cannot have finite accumulation points.

The determinant in (5.20) is an analytic function whose zeroes give the eigenvalues of the Schrödinger operator on a compact graph. One might get the impression that the order of the zeroes coincides with the multiplicity of the corresponding eigenvalues. This is not always true: see Example 5.2 below.
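As a concrete illustration of how the system (5.19)-(5.20) can be assembled in practice, here is a small NumPy sketch for the free Laplacian (*q* = *a* = 0). The single-interval example with Neumann ends, for which the matrix **S** is the identity, is our own choice; the determinant then vanishes precisely at *k* = *πn*, in agreement with the well-known Neumann spectrum of an interval.

```python
import numpy as np

def char_matrix(k, lengths, S):
    """Assemble the 4N x 4N matrix of (5.20) for the Laplacian (q = a = 0),
    using the free transfer matrices (5.5); S is the 2N x 2N matrix of the
    vertex conditions written in the edge basis."""
    N = len(lengths)
    top = np.zeros((2 * N, 4 * N), dtype=complex)
    for n, ell in enumerate(lengths):
        r = 2 * n
        top[r:r + 2, r:r + 2] = [[np.cos(k * ell), -1.0],
                                 [-k * np.sin(k * ell), 0.0]]
        top[r:r + 2, 2 * N + r:2 * N + r + 2] = [[np.sin(k * ell) / k, 0.0],
                                                 [np.cos(k * ell), 1.0]]
    I = np.eye(2 * N)
    bottom = np.hstack([1j * (I - S), I + S])
    return np.vstack([top, bottom])

# One edge of length 1 with Neumann (standard degree-one) conditions at both
# ends, i.e. S = I: eigenvalues are lambda = (pi n)^2, so det vanishes at pi n.
d = lambda k: np.linalg.det(char_matrix(k, [1.0], np.eye(2)))
zeros = [abs(d(np.pi * n)) for n in (1, 2, 3)]     # ~ 0 at the eigenvalues
nonzero = abs(d(np.pi / 2))                        # clearly nonzero in between
```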

## *5.1.3 The Characteristic Equation, Second Look*

When deriving Eq. (5.20) we did not pay much attention to the number of linear equations involved. For complicated graphs it might be important to reduce their number, as we are going to discuss now.

Introduce the *N* dimensional vectors

$$\begin{array}{ll}\vec{\Psi}^{\text{odd}} = \{\psi(x_{2n-1})\}_{n=1}^{N}, & \partial\vec{\Psi}^{\text{odd}} = \{\partial\psi(x_{2n-1})\}_{n=1}^{N}, \\ \vec{\Psi}^{\text{even}} = \{\psi(x_{2n})\}_{n=1}^{N}, & \partial\vec{\Psi}^{\text{even}} = \{\partial\psi(x_{2n})\}_{n=1}^{N}. \end{array} \tag{5.21}$$

The vector Ψ<sup>o,e</sup> = *(*Ψ<sup>odd</sup>*,* Ψ<sup>even</sup>*)* is obtained from the vector Ψ by a permutation. The corresponding permutation of the matrix **S** will be denoted by **S**<sup>o,e</sup>*.*

Define the *N* × *N* matrices

$$\mathbf{T}_{ij} = \operatorname{diag}\left\{ t_{ij}^{n} \right\}_{n=1}^{N}, \quad i, j = 1, 2. \tag{5.22}$$


Then Eqs. (5.14) can be re-written as

$$
\begin{pmatrix} \mathbf{T}_{11} & \mathbf{T}_{12} \\ \mathbf{T}_{21} & \mathbf{T}_{22} \end{pmatrix} \begin{pmatrix} \vec{\Psi}^{\text{odd}} \\ \partial \vec{\Psi}^{\text{odd}} \end{pmatrix} = \begin{pmatrix} \vec{\Psi}^{\text{even}} \\ -\partial \vec{\Psi}^{\text{even}} \end{pmatrix}. \tag{5.23}
$$

This equation allows one to express the 2*N* dimensional vectors Ψ<sup>o,e</sup> = *(*Ψ<sup>odd</sup>*,* Ψ<sup>even</sup>*)* and *∂*Ψ<sup>o,e</sup> = *(∂*Ψ<sup>odd</sup>*, ∂*Ψ<sup>even</sup>*)* in terms of the *N*-dimensional vectors Ψ<sup>odd</sup> and *∂*Ψ<sup>odd</sup> only:

$$\begin{array}{l} \vec{\Psi}^{\rm o,e} = \begin{pmatrix} \mathbf{I}_{N} & \mathbf{0}_{N} \\ \mathbf{T}_{11} & \mathbf{T}_{12} \end{pmatrix} \begin{pmatrix} \vec{\Psi}^{\rm odd} \\ \partial \vec{\Psi}^{\rm odd} \end{pmatrix}, \\ \partial \vec{\Psi}^{\rm o,e} = \begin{pmatrix} \mathbf{0}_{N} & \mathbf{I}_{N} \\ -\mathbf{T}_{21} & -\mathbf{T}_{22} \end{pmatrix} \begin{pmatrix} \vec{\Psi}^{\rm odd} \\ \partial \vec{\Psi}^{\rm odd} \end{pmatrix}. \end{array} \tag{5.24}$$

Substitution into the vertex conditions (3.53) leads to the system of 2*N* linear equations

$$i\left(\mathbf{S}^{\rm o,e} - \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{I}_{N} & \mathbf{0}_{N} \\ \mathbf{T}_{11} & \mathbf{T}_{12} \end{pmatrix}\begin{pmatrix} \vec{\Psi}^{\rm odd} \\ \partial \vec{\Psi}^{\rm odd} \end{pmatrix} = \left(\mathbf{S}^{\rm o,e} + \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{0}_{N} & \mathbf{I}_{N} \\ -\mathbf{T}_{21} & -\mathbf{T}_{22} \end{pmatrix}\begin{pmatrix} \vec{\Psi}^{\rm odd} \\ \partial \vec{\Psi}^{\rm odd} \end{pmatrix},\tag{5.25}$$

where **S**<sup>o,e</sup> is the permutation of **S** described above.

The system has a nontrivial solution if and only if

$$\det\left\{i\left(\mathbf{S}^{\rm o,e} - \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{I}_N & \mathbf{0}_N\\ \mathbf{T}_{11}(\lambda) & \mathbf{T}_{12}(\lambda) \end{pmatrix} + \left(\mathbf{S}^{\rm o,e} + \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{0}_N & -\mathbf{I}_N\\ \mathbf{T}_{21} & \mathbf{T}_{22} \end{pmatrix}\right\} = 0. \tag{5.26}$$

The characteristic equation is now given by the determinant of a 2*N* × 2*N* matrix (instead of a 4*N* × 4*N* matrix). As before, the multiplicity of an eigenvalue of the operator *L*<sup>**S**</sup><sub>q,a</sub> coincides with the dimension of the kernel of the matrix

$$i\left(\mathbf{S}^{\rm o,e} - \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{I}_N & \mathbf{0}_N\\ \mathbf{T}_{11}(\lambda) & \mathbf{T}_{12}(\lambda) \end{pmatrix} + \left(\mathbf{S}^{\rm o,e} + \mathbf{I}_{2N}\right)\begin{pmatrix} \mathbf{0}_N & -\mathbf{I}_N\\ \mathbf{T}_{21} & \mathbf{T}_{22} \end{pmatrix}.$$

Let us illustrate our approach by considering one of the most explicit examples: the ring graph formed by just one edge. We will return to this example several times in this chapter.

**Fig. 5.1** The cycle graph *(*1*.*2*)*

**Example 5.2** Consider the ring graph *(*2*.*1*)* formed by one edge [*x*<sub>1</sub>*, x*<sub>2</sub>] with its endpoints joined together (see Fig. 5.1). Then the spectrum of the corresponding standard Laplacian can be determined using the free transfer matrix given by (5.5) and the vertex scattering matrix

$$\mathbf{S}_{\mathbf{V}} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{5.27}$$

corresponding to standard vertex conditions for the vertex of degree 2*.*

We are going to calculate the spectrum using both characteristic equations derived in this section (Eqs. (5.20) and (5.26)).

Recalling the notation ℓ<sub>1</sub> = *x*<sub>2</sub> − *x*<sub>1</sub>, the first equation reads as follows:

$$\det \begin{bmatrix} \cos k\ell_1 & -1 & \frac{\sin k\ell_1}{k} & 0\\ -k \sin k\ell_1 & 0 & \cos k\ell_1 & 1\\ i & -i & 1 & 1\\ -i & i & 1 & 1 \end{bmatrix} = 0$$

$$\Rightarrow 4i(1 - \cos k\ell_1) = 0. \tag{5.28}$$

The second characteristic equation gives us

$$\det\left[ i\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ \cos k\ell_1 & \frac{\sin k\ell_1}{k} \end{pmatrix} + \begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1\\ -k\sin k\ell_1 & \cos k\ell_1 \end{pmatrix} \right] = 0$$

$$\Rightarrow \det\begin{bmatrix} -k\sin k\ell_1 + i(-1 + \cos k\ell_1) & -1 + \cos k\ell_1 + i\frac{\sin k\ell_1}{k} \\ -k\sin k\ell_1 - i(-1 + \cos k\ell_1) & -1 + \cos k\ell_1 - i\frac{\sin k\ell_1}{k} \end{bmatrix} = 0$$

and of course implies the same equation (5.28). This equation has the solutions *k<sub>n</sub>* = 2*πn/*ℓ<sub>1</sub>*, n* = 0*,* 1*,* 2*,....* Note that the characteristic function has second order zeroes at these points.
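The closed form 4*i(*1 − cos *k*ℓ<sub>1</sub>*)* of (5.28), including the second order zeroes, can be verified numerically. In the sketch below (our own illustration; ℓ<sub>1</sub> = 1 and the sample points are arbitrary) the 4 × 4 determinant is compared with the closed form, and the quadratic behaviour near *k*<sub>1</sub> = 2*π* is checked.

```python
import numpy as np

ell = 1.0
def M(k):
    """The 4 x 4 matrix inside the determinant in (5.28) for the ring graph."""
    return np.array([
        [np.cos(k * ell), -1.0, np.sin(k * ell) / k, 0.0],
        [-k * np.sin(k * ell), 0.0, np.cos(k * ell), 1.0],
        [1j, -1j, 1.0, 1.0],
        [-1j, 1j, 1.0, 1.0]])

d = lambda k: np.linalg.det(M(k))

ks = np.array([0.7, 1.9, 5.3])                 # arbitrary sample points
closed = 4j * (1 - np.cos(ks * ell))           # closed form from (5.28)
sampled = np.array([d(k) for k in ks])
agrees = np.allclose(sampled, closed, atol=1e-10)

# second order zero at k_1 = 2 pi: d(k_1 + eps) ~ 2 i eps^2, not ~ eps
eps = 1e-3
near = d(2 * np.pi / ell + eps)
```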

Let us examine the ranks of the matrices, i.e. the multiplicities of the eigenvalues. For all nonzero *n* the matrices are given by

$$
\begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ i & -i & 1 & 1 \\ -i & i & 1 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
$$

Their ranks are equal to 2 and 0 respectively, which corresponds to eigenvalues having multiplicity 2*.*

For *n* = 0 the matrices take the form

$$
\begin{pmatrix} 1 & -1 & \ell_1 & 0 \\ 0 & 0 & 1 & 1 \\ i & -i & 1 & 1 \\ -i & i & 1 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & i\ell_1 \\ 0 & -i\ell_1 \end{pmatrix}
$$

and their ranks are 3 and 1. The eigenvalue zero has multiplicity 1*.*

All this is in complete agreement with the rule: the rank of the matrix in square brackets equals its size minus the multiplicity of the eigenvalue.
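The rank computations above are easy to reproduce numerically. A minimal sketch (our own illustration, with ℓ<sub>1</sub> = 1; an explicit tolerance is passed to `matrix_rank` because the entries at *k*<sub>1</sub> = 2*π* vanish only up to rounding):

```python
import numpy as np

ell = 1.0
k1 = 2 * np.pi / ell                     # first nonzero solution of (5.28)
c, s = np.cos(k1 * ell), np.sin(k1 * ell)

M4 = np.array([                          # matrix of the first method, cf. (5.28)
    [c, -1.0, s / k1, 0.0],
    [-k1 * s, 0.0, c, 1.0],
    [1j, -1j, 1.0, 1.0],
    [-1j, 1j, 1.0, 1.0]])

M2 = np.array([                          # matrix of the reduced system (5.26)
    [-k1 * s + 1j * (c - 1), c - 1 + 1j * s / k1],
    [-k1 * s - 1j * (c - 1), c - 1 - 1j * s / k1]])

rank4 = np.linalg.matrix_rank(M4, tol=1e-8)
rank2 = np.linalg.matrix_rank(M2, tol=1e-8)
# multiplicity = size - rank: 4 - 2 = 2 and 2 - 0 = 2, as in the text
```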

Summing up, we conclude that the described method is universal.


The method also has certain disadvantages.


**Problem 21** Consider the standard Schrödinger operator on a finite metric graph with *M* vertices and *N* edges. Standard vertex conditions imply that the functions are continuous at the vertices, hence the limiting values at the edge endpoints span a space of dimension *M* + 2*N* instead of 4*N*. Modify the transfer matrix approach to write the secular equation using a matrix in ℂ<sup>2N+M</sup>*.*

**Problem 22** Use the developed formalism to obtain a characteristic equation for the standard magnetic Schrödinger equation on the 8-shape graph presented in Fig. 2.7. Show that the spectrum depends on the fluxes Φ<sub>j</sub> = ∫<sub>x<sub>2j−1</sub></sub><sup>x<sub>2j</sub></sup> *a(x)dx* of the magnetic field through the cycles, but not on the particular form of the magnetic potential.

In the case of the standard Laplacian check that the spectrum you obtain coincides with the result of Problem 3.

## **5.2 Characteristic Equation II: Scattering Approach**

In this section we are going to describe an alternative approach to the characteristic equation based on ideas from scattering theory [252, 320, 321]. Note that this method works perfectly for positive eigenvalues only. It will be used in Chap. 8 to derive the trace formula for quantum graphs.

# *5.2.1 On the Scattering Matrix Associated with a Compact Interval*

Our first step is to define the scattering matrix for the Schrödinger equation on the finite interval *E*<sub>1</sub> = [*x*<sub>1</sub>*, x*<sub>2</sub>]*.* Under our assumptions on the potential the spectrum is discrete and therefore no scattering phenomena may be observed. In order to introduce the associated scattering matrix, consider the Schrödinger equation on the whole line ℝ ⊃ [*x*<sub>1</sub>*, x*<sub>2</sub>], extending both the electric and magnetic potentials outside the original interval by zero. Here we assume that the support of the magnetic potential is separated from the endpoints of the interval, to ensure that the extended potential is continuous and hence the functions from the domain of the operator are continuously differentiable. This assumption is not restrictive, since we already know that spectral properties of quantum graphs do not depend on the concrete form of the magnetic potential but just on its fluxes. The associated Schrödinger operator has absolutely continuous spectrum [0*,*∞*)* of double multiplicity. The corresponding generalised eigenfunctions *ψ* are bounded solutions to the differential equation (5.11) considered on the whole of ℝ*.* Outside the interval *E*<sub>1</sub> = [*x*<sub>1</sub>*, x*<sub>2</sub>] the function is just a combination of plane waves

$$\psi(\mathbf{x}, \lambda) = \begin{cases} a\_1 e^{ik(\mathbf{x} - \mathbf{x}\_1)} + b\_1 e^{-ik(\mathbf{x} - \mathbf{x}\_1)}, & \mathbf{x} < \mathbf{x}\_1, \\ a\_2 e^{-ik(\mathbf{x} - \mathbf{x}\_2)} + b\_2 e^{ik(\mathbf{x} - \mathbf{x}\_2)}, & \mathbf{x} > \mathbf{x}\_2. \end{cases} \tag{5.29}$$

To calculate the function *ψ* it remains to solve the differential equation on the interval [*x*1*, x*2]. The amplitudes *aj , bj* should be chosen so that the function and its first derivative are continuous at *x*<sup>1</sup> and *x*<sup>2</sup> (and therefore on the whole axis). Every such function is uniquely determined by the amplitudes *a*1*, a*<sup>2</sup> of the incoming waves, which allows one to define the corresponding 2 × 2 **edge scattering matrix**  *S***<sup>e</sup>** by the map (Fig. 5.2)

$$S\_{\mathfrak{e}}(k) : \begin{pmatrix} a\_1 \\ a\_2 \end{pmatrix} \mapsto \begin{pmatrix} b\_1 \\ b\_2 \end{pmatrix}. \tag{5.30}$$

**Fig. 5.2** The edge scattering matrix

To calculate the scattering matrix we are going to use the transfer matrix *T* = *Tq,a(λ).* Let us assume first that the magnetic potential is identically equal to zero. Then the Cauchy data for the function *ψ* at the points *x*<sup>1</sup> and *x*<sup>2</sup> are

$$
\begin{pmatrix} \psi(x_1) \\ \partial \psi(x_1) \end{pmatrix} = \begin{pmatrix} a_1 + b_1 \\ ik a_1 - ik b_1 \end{pmatrix}, \quad \begin{pmatrix} \psi(x_2) \\ \partial \psi(x_2) \end{pmatrix} = \begin{pmatrix} a_2 + b_2 \\ -ik a_2 + ik b_2 \end{pmatrix}. \tag{5.31}
$$

Taking into account (5.14) we get the system of two linear equations

$$\begin{cases} t\_{11}a\_1 + t\_{11}b\_1 + ikt\_{12}a\_1 - ikt\_{12}b\_1 = & a\_2 + b\_2\\ t\_{21}a\_1 + t\_{21}b\_1 + ikt\_{22}a\_1 - ikt\_{22}b\_1 = & -ika\_2 + ikb\_2 \end{cases}$$

which can be resolved to obtain the relation between the vectors *(a*<sub>1</sub>*, a*<sub>2</sub>*)* and *(b*<sub>1</sub>*, b*<sub>2</sub>*)*:

$$S_{\mathbf{e}}(k) = \begin{pmatrix} \dfrac{k^2 t_{12} - ik(t_{11} - t_{22}) + t_{21}}{k^2 t_{12} + ik(t_{11} + t_{22}) - t_{21}} & \dfrac{2ik}{k^2 t_{12} + ik(t_{11} + t_{22}) - t_{21}}\\[2mm] \dfrac{2ik}{k^2 t_{12} + ik(t_{11} + t_{22}) - t_{21}} & \dfrac{k^2 t_{12} + ik(t_{11} - t_{22}) + t_{21}}{k^2 t_{12} + ik(t_{11} + t_{22}) - t_{21}} \end{pmatrix}. \tag{5.32}$$

To obtain this formula we just used that the determinant of the transfer matrix *T* is equal to one (5.7), which is true since we assumed that the magnetic potential is zero.
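Formula (5.32) is easy to check numerically. The sketch below (our own illustration; the values *k* = 1.7 and ℓ<sub>1</sub> = 1 are arbitrary) builds *S*<sub>**e**</sub> from the free transfer matrix (5.5), recovering the explicit matrix (5.36) of Example 5.4 and the unitarity asked for in Problem 24.

```python
import numpy as np

def edge_scattering(k, T):
    """Edge scattering matrix (5.32) built from a transfer matrix T = (t_ij)."""
    t11, t12 = T[0, 0], T[0, 1]
    t21, t22 = T[1, 0], T[1, 1]
    d = k ** 2 * t12 + 1j * k * (t11 + t22) - t21   # denominator d(k), Lemma 5.3
    return np.array([
        [(k ** 2 * t12 - 1j * k * (t11 - t22) + t21) / d, 2j * k / d],
        [2j * k / d, (k ** 2 * t12 + 1j * k * (t11 - t22) + t21) / d]])

k, ell = 1.7, 1.0                                   # arbitrary sample values
T0 = np.array([[np.cos(k * ell), np.sin(k * ell) / k],
               [-k * np.sin(k * ell), np.cos(k * ell)]])
Se = edge_scattering(k, T0)

expected = np.array([[0.0, np.exp(1j * k * ell)],   # no reflection, cf. (5.36)
                     [np.exp(1j * k * ell), 0.0]])
no_reflection = np.allclose(Se, expected, atol=1e-12)
unitarity_defect = np.max(np.abs(Se @ Se.conj().T - np.eye(2)))
```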

It remains to prove that the denominator *k*<sup>2</sup>*t*<sub>12</sub> + *ik(t*<sub>11</sub> + *t*<sub>22</sub>*)* − *t*<sub>21</sub> does not vanish. We are going to show that it is different from zero for any *λ* outside the negative real semiaxis. The negative semiaxis has to be excluded, since the Schrödinger operator on ℝ may have negative eigenvalues there.

**Lemma 5.3** *The function* 

$$d(k) := k^2 t\_{12}(k) + ik \left(t\_{11}(k) + t\_{22}(k)\right) - t\_{21}(k)$$

*does not vanish in the closed upper half-plane outside the imaginary axis, i.e. in the region* {*k* : Im *k* ≥ 0*,* Re *k* ≠ 0}*.*

*Proof* Assume to the contrary that *d(k*<sub>0</sub>*)* = 0 for a certain *k*<sub>0</sub> with Im *k*<sub>0</sub> *>* 0 and Re *k*<sub>0</sub> ≠ 0*.* Let us show that under this assumption there exists an eigenfunction *ψ* corresponding to the nonreal *λ*<sub>0</sub> = *k*<sub>0</sub><sup>2</sup>*.* Such a function can be constructed so that outside *E*<sub>1</sub> it is given by

$$\psi(\mathbf{x}) = \begin{cases} b\_1 e^{ik\_0|\mathbf{x}-\mathbf{x}\_1|}, & \mathbf{x} < \mathbf{x}\_1, \\ b\_2 e^{ik\_0|\mathbf{x}-\mathbf{x}\_2|}, & \mathbf{x} > \mathbf{x}\_2, \end{cases}$$

i.e. it decreases exponentially as *x* → ±∞*.* Substituting the Cauchy data for *ψ*

$$
\begin{pmatrix} \psi(x_1) \\ \partial \psi(x_1) \end{pmatrix} = \begin{pmatrix} b_1 \\ -ik_0 b_1 \end{pmatrix}, \quad \begin{pmatrix} \psi(x_2) \\ \partial \psi(x_2) \end{pmatrix} = \begin{pmatrix} b_2 \\ ik_0 b_2 \end{pmatrix}.$$

into the transfer matrix equation (5.14) we get the homogeneous linear system

$$\begin{cases} \left(t\_{11} - ik\_0 t\_{12}\right) b\_1 - b\_2 &= 0, \\ \left(t\_{21} - ik\_0 t\_{22}\right) b\_1 - ik\_0 b\_2 &= 0. \end{cases}$$

The determinant of the linear system coincides with *d(k*<sub>0</sub>*)*, which we assumed to be zero; therefore there exists a nontrivial solution *(b*<sub>1</sub>*, b*<sub>2</sub>*).* The corresponding function *ψ* solves the eigenfunction differential equation and belongs to the Hilbert space *L*<sub>2</sub>*(*ℝ*)*, i.e. it is an eigenfunction corresponding to *λ*<sub>0</sub> ∉ ℝ*,* which is impossible since the Schrödinger operator is self-adjoint. This contradiction proves our assertion for nonreal *λ.*

It remains to prove that the equation

$$d(k) = k^2 t\_{12} + ik(t\_{11} + t\_{22}) - t\_{21} = 0\tag{5.33}$$

cannot be satisfied for real *<sup>k</sup>* (i.e. positive *<sup>λ</sup>* <sup>=</sup> *<sup>k</sup>*2). The functions *tij (λ), i, j* <sup>=</sup> <sup>1</sup>*,* <sup>2</sup> are real valued for real *λ* and evaluating the real and imaginary parts of (5.33) separately we get

$$\begin{cases} k^2 t\_{12} - t\_{21} = 0, \\ t\_{11} + t\_{22} = 0. \end{cases} \tag{5.34}$$

The second equation together with (5.7) implies that

$$t_{12}t_{21} = -\left(1 + (t_{11})^2\right),$$

which means that *t*<sub>12</sub> and *t*<sub>21</sub> have opposite signs. This contradicts the first equation in (5.34). ⊓⊔

Summing up, Eq. (5.32) provides an explicit formula for the edge scattering matrix in the case of zero magnetic potential. If the magnetic potential is different from zero, then the corresponding solution is related to the solution with zero magnetic potential via formula (5.8). The two solutions coincide to the left of the interval [*x*<sub>1</sub>*, x*<sub>2</sub>], provided *x*<sub>0</sub> ≤ *x*<sub>1</sub>; to the right of the interval the solution with zero magnetic potential has to be multiplied by the exponential factor *e*<sup>iΦ<sub>1</sub></sup>*,* where the flux Φ<sub>1</sub> was already defined in (5.13). The corresponding scattering matrix has to be modified by multiplying it by a diagonal matrix and its inverse as follows

$$\begin{split} S_{\mathbf{e}} &= \begin{pmatrix} 1 & 0\\ 0 & e^{i\Phi_{1}} \end{pmatrix} \begin{pmatrix} \frac{k^{2}t_{12} - ik(t_{11} - t_{22}) + t_{21}}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} & \frac{2ik}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}}\\ \frac{2ik}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} & \frac{k^{2}t_{12} + ik(t_{11} - t_{22}) + t_{21}}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & e^{-i\Phi_{1}} \end{pmatrix} \\ &= \begin{pmatrix} \frac{k^{2}t_{12} - ik(t_{11} - t_{22}) + t_{21}}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} & e^{-i\Phi_{1}} \frac{2ik}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}}\\ e^{i\Phi_{1}} \frac{2ik}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} & \frac{k^{2}t_{12} + ik(t_{11} - t_{22}) + t_{21}}{k^{2}t_{12} + ik(t_{11} + t_{22}) - t_{21}} \end{pmatrix}. \end{split} \tag{5.35}$$

An alternative way to derive the above formula is to use relation (5.12) between the transfer matrices.

**Example 5.4** In the case of zero potentials *q(x)* ≡ 0*, a(x)* ≡ 0 the transfer matrix is given by

$$T_{0,0} = \begin{pmatrix} \cos k\ell_1 & \frac{\sin k\ell_1}{k} \\ -k\sin k\ell_1 & \cos k\ell_1 \end{pmatrix},$$

where ℓ<sub>1</sub> = *x*<sub>2</sub> − *x*<sub>1</sub> is the length of the edge [*x*<sub>1</sub>*, x*<sub>2</sub>]*.* Straightforward calculations imply that the corresponding scattering matrix is

$$S\_{\mathfrak{e}} = \begin{pmatrix} 0 & e^{ik\ell\_1} \\ e^{ik\ell\_1} & 0 \end{pmatrix}. \tag{5.36}$$

The scattering matrix shows that, as expected, the plane waves *e*<sup>−ik|x−x<sub>j</sub>|</sup> penetrate through the system without any reflection, but gain an extra phase factor *e*<sup>ikℓ<sub>1</sub></sup> due to the change of the parametrisation.

*Question 5.3* Why is it not always possible to use this formalism to determine negative eigenvalues?

**Problem 23** Prove formula (5.36) directly from (5.32), i.e. perform the *straightforward calculations* mentioned above.

**Problem 24** Check that the edge scattering matrix *S***<sup>e</sup>** given by formula (5.32) is unitary for real *k.*

Hint: use the fact that for zero magnetic potential the transfer matrix is real on the real line and has unit determinant.

**Problem 25** Use relation (5.12) for the transfer matrices in order to derive a relation for the scattering matrices.

# *5.2.2 Positive Spectrum and Scattering Matrices for Finite Compact Graphs*

Let Γ be an arbitrary finite graph formed by *N* edges *E<sub>n</sub>, n* = 1*,* 2*,... ,N.* With every edge *E<sub>n</sub>* = [*x*<sub>2n−1</sub>*, x*<sub>2n</sub>] we associate the edge scattering matrix *S*<sup>n</sup><sub>**e**</sub>, which allows us to introduce the block diagonal edge scattering matrix **S**<sub>**e**</sub> as follows

$$\mathbf{S\_{e}}(k) = \text{diag}\left\{ \mathbf{S\_{e}}^{\boldsymbol{n}}(k) \right\}\_{n=1}^{N} = \begin{pmatrix} S\_{\mathbf{e}}^{1}(k) & 0\_{2} & 0\_{2} & \dots & 0\_{2} \\ 0\_{2} & S\_{\mathbf{e}}^{2}(k) & 0\_{2} & \dots & 0\_{2} \\ 0\_{2} & 0\_{2} & S\_{\mathbf{e}}^{3}(k) & \dots & 0\_{2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0\_{2} & 0\_{2} & 0\_{2} & \dots & S\_{\mathbf{e}}^{N}(k) \end{pmatrix},\tag{5.37}$$

where 02 denotes the 2 × 2 zero matrix.
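The block-diagonal structure (5.37) is straightforward to assemble numerically. The following sketch (an illustration of mine, not from the text; all function names are hypothetical) builds $\mathbf S_{\mathbf e}(k)$ for the Laplacian, each block being the edge scattering matrix (5.36):

```python
import numpy as np

def edge_block(k, length):
    """Edge scattering matrix (5.36) for the Laplacian: transmission with
    phase e^{ik*length}, no reflection."""
    phase = np.exp(1j * k * length)
    return np.array([[0, phase], [phase, 0]])

def edge_scattering_matrix(k, lengths):
    """Block-diagonal 2N x 2N matrix S_e(k) of (5.37), one 2x2 block per edge."""
    N = len(lengths)
    S = np.zeros((2 * N, 2 * N), dtype=complex)
    for n, l in enumerate(lengths):
        S[2 * n:2 * n + 2, 2 * n:2 * n + 2] = edge_block(k, l)
    return S

S = edge_scattering_matrix(1.0, [1.0, 2.0])
assert np.allclose(S.conj().T @ S, np.eye(4))  # unitary for real k (Problem 24)
```

The final assertion illustrates the unitarity claimed in Problem 24 for one sample value of $k$.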

Let $\psi$ be an eigenfunction of the Schrödinger equation corresponding to a certain positive eigenvalue $\lambda > 0$. For every edge $E_n$ we consider the Schrödinger equation extended to the whole line $\mathbb R \supset [x_{2n-1}, x_{2n}]$. The function $\psi|_{E_n}$ satisfies the eigenfunction equation on $E_n$ and possesses a unique extension outside the interval. The corresponding amplitudes of the plane waves may be denoted by $a_{2n-1}, b_{2n-1}, a_{2n}, b_{2n}$. These amplitudes are related via the corresponding edge scattering matrix

$$S\_{\mathfrak{e}}^{n}\left(\begin{array}{c}a\_{2n-1}\\a\_{2n}\end{array}\right)=\begin{pmatrix}b\_{2n-1}\\b\_{2n}\end{pmatrix}.\tag{5.38}$$

Using vector notations

$$\vec{A} = \{a\_j\}\_{j=1}^{2N}, \quad \vec{B} = \{b\_j\}\_{j=1}^{2N} \tag{5.39}$$

this identity can be written as

$$\mathbf{S}_{\mathbf{e}}(k)\vec{A} = \vec{B}.\tag{5.40}$$

The boundary values of the function $\psi$ can also be expressed using the vectors $\vec A$, $\vec B$:

$$\begin{cases} \vec{\Psi} = \vec{A} + \vec{B} \\ \partial \vec{\Psi} = ik(\vec{A} - \vec{B}) \end{cases} \tag{5.41}$$

Substituting the boundary values into the vertex conditions (3.53) we get the following relation

$$i\left(\mathbf{S} - \mathbf{I}\right)\left(\vec{A} + \vec{B}\right) = \left(\mathbf{S} + \mathbf{I}\right)ik\left(\vec{A} - \vec{B}\right),$$

which in turn leads to

$$\mathbf{S}_{\mathbf{V}}(k)\vec{B} = \vec{A},\tag{5.42}$$

with **Sv***(k)* given by (3.54). Comparing (5.40) and (5.42) we get the following remarkable relation

$$\mathbf{S}_{\mathbf{V}}(k)\mathbf{S}_{\mathbf{e}}(k)\,\vec{A} = \vec{A}.\tag{5.43}$$

Note that both the vertex and edge scattering matrices $\mathbf S_{\mathbf v}(k)$ and $\mathbf S_{\mathbf e}(k)$ possess block-diagonal representations (3.54) and (5.37), but in different bases. The vertex scattering matrix is block-diagonal in the vertex basis of boundary values, while the edge scattering matrix is block-diagonal in the edge basis of boundary values.

We see that the vector $\vec A$ is an eigenvector of the matrix $\mathbf S_{\mathbf v}(k)\mathbf S_{\mathbf e}(k)$ corresponding to the eigenvalue 1; hence the characteristic equation can be written in the form

$$\det\left(\mathbf{S}\_{\mathbf{V}}(k)\mathbf{S}\_{\mathbf{e}}(k) - \mathbf{I}\right) = 0.\tag{5.44}$$

The last equation suggests introducing the **graph scattering matrix**

$$\mathbb{S}(k) := \mathbf{S}\_{\mathbf{V}}(k)\mathbf{S}\_{\mathbf{e}}(k). \tag{5.45}$$

This matrix describes a certain discrete (scattering) process inside the graph on the edges and at the vertices. It is used to determine the spectrum of a compact graph and should not be confused with the scattering matrix which appears when the graph has semi-infinite edges. The corresponding balance equation

$$
\mathbb{S}(k)\vec{A} = \vec{A}\tag{5.46}
$$

has a solution for special values of the energy parameter $k = k_n$, such that $k_n^2$ are the eigenvalues of the Schrödinger operator. The corresponding vector $\vec A$ determines the amplitudes of the edge-incoming waves. These amplitudes are well-balanced so that the set of edge-incoming waves with these amplitudes turns into the same set after one edge and one vertex scattering.

**Theorem 5.5** *The positive spectrum of the magnetic Schrödinger operator $L_{q,a}^{\mathbf S}$ on a finite compact metric graph coincides with the set of solutions to the characteristic equation*

$$\det\left(\mathbb{S}(k) - \mathbf{I}\right) = 0.\tag{5.47}$$

*The multiplicity of each eigenvalue coincides with the dimension of the solution space of the linear system (5.43).*

*Proof* We have already proven that if $\psi(\lambda)$ is an eigenfunction of $L_{q,a}^{\mathbf S}$, then $\lambda$ satisfies Eq. (5.44). It remains to calculate the multiplicities of the eigenvalues. For positive $\lambda$ there is a one-to-one correspondence between the amplitudes $a_{2n-1}, a_{2n}$ and the solutions of the eigenfunction equation on the interval $[x_{2n-1}, x_{2n}]$. It follows that the number of linearly independent eigenfunctions of the Schrödinger operator coincides with the number of linearly independent solutions of the linear system (5.43).

Note that the characteristic equation (5.44) cannot be used to determine nonpositive eigenvalues. It will be shown that even for Laplace operators equation (5.43) does not necessarily give the correct multiplicity of the zero eigenvalue (see Example 5.6 and Chap. 8).

Equation (5.44) is simplified if the vertex scattering matrix does not depend on the energy parameter, $\mathbf S_{\mathbf v}(k) \equiv \mathbf S$, so that the energy enters the characteristic equation via the edge scattering matrix only:

$$\det\left(\mathbf{SS}\_{\mathbf{e}}(k) - \mathbf{I}\right) = 0.\tag{5.48}$$

**Example 5.6** Let us return to the ring graph $\Gamma_{(1.2)}$ already considered in Example 5.2. The edge scattering matrix is given by (5.36) and the vertex scattering matrix by (5.27). The corresponding characteristic equation is

$$\det\left[\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & e^{ik\ell_1} \\ e^{ik\ell_1} & 0 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right] = 0 \quad \Leftrightarrow \quad \left(e^{ik\ell_1} - 1\right)^2 = 0.$$

We are getting the same eigenvalues $\left(\frac{2\pi}{\ell_1}\right)^2 n^2$, $n = 0, 1, 2, \dots$.

For all values of $n$ the matrix in square brackets is just the zero matrix and has rank $0$. Hence the dimension of the space of solutions is equal to 2. We get the correct multiplicity of all non-zero eigenvalues, but the method does not work for $k = 0$, since the corresponding eigenvalue has multiplicity one despite the fact that the matrix in the square brackets has zero rank. The main reason for this discrepancy is that independent solutions for zero energy are given by linear functions instead of the exponentials used to derive (5.47).
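The conclusion of Example 5.6 is easy to check numerically. The following sketch (my own illustration under the conventions above, not part of the text) verifies that the characteristic determinant of the ring graph vanishes exactly at $k_n = 2\pi n/\ell_1$:

```python
import numpy as np

ell = 1.0
Sv = np.array([[0, 1], [1, 0]])       # vertex scattering matrix (5.27)

def char_det(k):
    """Characteristic determinant det(Sv Se(k) - I) of (5.48) for the ring."""
    phase = np.exp(1j * k * ell)
    Se = np.array([[0, phase], [phase, 0]])   # edge scattering matrix (5.36)
    return np.linalg.det(Sv @ Se - np.eye(2))

# det = (e^{ik*ell} - 1)^2 vanishes exactly at k_n = 2*pi*n/ell
for n in range(1, 4):
    assert abs(char_det(2 * np.pi * n / ell)) < 1e-10
assert abs(char_det(1.0)) > 1e-3      # a generic k is not an eigenvalue
```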

The described approach to the characteristic equation has the following advantages:


This method also has a few serious drawbacks:


## **5.3 Characteristic Equation III: M-Function Approach**

Titchmarsh-Weyl M-functions have proven to be an efficient tool for solving inverse problems for the one-dimensional Schrödinger equation [43, 381]. Their analogue in several dimensions—the Dirichlet-to-Neumann map—is also widely used nowadays. In this section we explore the possibility of using M-functions to determine the spectrum of quantum graphs. The M-functions introduced here are associated with the single intervals building the metric graph. These functions should not be confused with the graph's M-function associated with the graph's boundary (to be introduced in Chap. 17). The two definitions coincide only if the graph is formed by a single edge.

The characteristic equation we derive in this section has a remarkably simple form: for example, in the case of standard conditions the spectrum is given by the zeroes of the determinant of a certain $M \times M$ matrix. The main drawback is that only the eigenvalues different from the Dirichlet-Dirichlet eigenvalues on the edges are determined.

## *5.3.1 M-Function for a Single Interval*

For simplicity we consider the case of zero magnetic potential. Let $g(\lambda, x)$ be any solution of the differential equation (5.1) on the interval $E_1 = [x_1, x_2]$; then the corresponding $2 \times 2$ matrix function $M_{\mathbf e}(\lambda)$ is defined as follows

$$M_{\mathbf e}(\lambda): \begin{pmatrix} g(\lambda, x_1) \\ g(\lambda, x_2) \end{pmatrix} \mapsto \begin{pmatrix} \partial g(\lambda, x_1) \\ \partial g(\lambda, x_2) \end{pmatrix} \equiv \begin{pmatrix} \partial_{\mathbf n} g(\lambda, x_1) \\ \partial_{\mathbf n} g(\lambda, x_2) \end{pmatrix}. \tag{5.49}$$

It maps the values of the solution at the endpoints of the interval to the values of the normal derivatives at these points. For nonreal $\lambda$ the M-function is uniquely determined by the transfer matrix. Indeed, (5.2) implies that

$$\begin{cases} t_{11} g(x_1) + t_{12} \partial g(x_1) = g(x_2) \\[4pt] t_{21} g(x_1) + t_{22} \partial g(x_1) = \partial g(x_2) \end{cases} \Rightarrow \begin{cases} \partial g(x_1) = -\dfrac{t_{11}}{t_{12}} g(x_1) + \dfrac{1}{t_{12}} g(x_2) \\[4pt] -\partial g(x_2) = \dfrac{1}{t_{12}} g(x_1) - \dfrac{t_{22}}{t_{12}} g(x_2) \end{cases}, \tag{5.50}$$

where we used that $\det T(\lambda) = 1$. Moreover, $t_{12}(\lambda) \neq 0$ for $\operatorname{Im} \lambda \neq 0$, since otherwise the Schrödinger equation with Dirichlet boundary conditions at $x_1$ and $x_2$ would have a nonreal eigenvalue.

We get the following expression for the matrix M-function

$$M_{\mathbf e}(\lambda) = \begin{pmatrix} -\frac{t_{11}(k)}{t_{12}(k)} & \frac{1}{t_{12}(k)} \\ \frac{1}{t_{12}(k)} & -\frac{t_{22}(k)}{t_{12}(k)} \end{pmatrix},\tag{5.51}$$

and see immediately that it is symmetric, $M_{\mathbf e}^t(\lambda) = M_{\mathbf e}(\lambda)$, and analytic for $\operatorname{Im} \lambda \neq 0$. To prove that $M_{\mathbf e}(\lambda)$ has positive imaginary part in the upper half-plane $\operatorname{Im} \lambda > 0$ we just use integration by parts:

$$\begin{aligned} \lambda \, \| g(\lambda)\|^2_{L_2(E_1)} &= \langle g(\lambda), L_q g(\lambda)\rangle_{L_2(E_1)} \\ &= \overline{g}(\lambda, x_1)\,\partial g(\lambda, x_1) + \overline{g}(\lambda, x_2)\,\partial g(\lambda, x_2) + \int_{E_1} \left( |g'(\lambda, x)|^2 + q(x) |g(\lambda, x)|^2 \right) dx \\ &= \left\langle \begin{pmatrix} g(\lambda, x_1) \\ g(\lambda, x_2) \end{pmatrix}, M_{\mathbf e}(\lambda) \begin{pmatrix} g(\lambda, x_1) \\ g(\lambda, x_2) \end{pmatrix} \right\rangle_{\mathbb C^2} + \underbrace{\int_{E_1} \left( |g'(\lambda, x)|^2 + q(x) |g(\lambda, x)|^2 \right) dx}_{\in\, \mathbb R}, \end{aligned}$$

since $q(x) \in \mathbb R$. Taking the imaginary part we arrive at:

$$\operatorname{Im} \lambda \, \| g(\lambda) \|^2_{L_2(E_1)} = \left\langle \begin{pmatrix} g(\lambda, x_1) \\ g(\lambda, x_2) \end{pmatrix}, \operatorname{Im} M_{\mathbf e}(\lambda) \begin{pmatrix} g(\lambda, x_1) \\ g(\lambda, x_2) \end{pmatrix} \right\rangle_{\mathbb C^2},\tag{5.52}$$

i.e. that Im *M***e***(λ)* is nonnegative for Im *λ >* 0*.*

If $g(\lambda, x)$ is a solution of (5.1), then $\overline{g(\lambda, x)}$ is a solution of the same equation with $\overline\lambda$ instead of $\lambda$. Hence $M_{\mathbf e}$ is symmetric with respect to the real line: $M_{\mathbf e}^*(\lambda) = M_{\mathbf e}(\overline\lambda)$.

Let us summarise our knowledge about the function $M_{\mathbf e}(\lambda)$:

(1) it is analytic outside the real axis, $\operatorname{Im} \lambda \neq 0$;

(2) its imaginary part is nonnegative in the upper half-plane:

$$\operatorname{Im} M(\lambda) / \operatorname{Im} \lambda \ge 0;\tag{5.53}$$

(3) it is symmetric with respect to the real axis:

$$M^\*(\lambda) = M(\overline{\lambda}).\tag{5.54}$$

Any function, or matrix-valued function, satisfying conditions (1)–(3) is called a **Herglotz-Nevanlinna** function.<sup>1</sup> Properties of Herglotz-Nevanlinna functions are well understood [33, 239, 324]. They are often used to describe spectra of self-adjoint operators.

The M-function encodes all spectral information about the differential expression on $E_1$. In particular, its singularities coincide with the zeroes of the coefficient $t_{12}(k)$ of the transfer matrix, provided $\lambda = k^2$. The equation

$$t\_{12}(k) = 0$$

determines precisely the spectrum of the differential operator on $E_1$ given by $\tau_{q,a}$ and Dirichlet boundary conditions at the endpoints

$$u(x_1) = u(x_2) = 0.$$

We call it the Dirichlet-Dirichlet spectrum. It will be proven later that the knowledge of just one diagonal entry of $M_{\mathbf e}(\lambda)$ allows one to reconstruct the length of $E_1$ and the electric potential on it (see Chap. 19). Knowing the M-function, there is no need anymore to solve the differential equation on the edges—all we need is the relation between the boundary values given by the M-function. The role of this function is similar to the role of the scattering matrix in Sect. 5.2.

**Example 5.7** In the case of the Laplace operator (zero magnetic, $a(x) \equiv 0$, and electric, $q(x) \equiv 0$, potentials) the edge M-function is determined just by the length $\ell_1$ of the corresponding interval

$$M\_{\mathbf{e}}(\lambda) = \begin{pmatrix} -k \cot k\ell\_1 & \frac{k}{\sin k\ell\_1} \\ \frac{k}{\sin k\ell\_1} & -k \cot k\ell\_1 \end{pmatrix}. \tag{5.55}$$

<sup>1</sup> Such functions are also called Pick or *R*-functions.
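Before attacking Problem 26 below analytically, the Herglotz-Nevanlinna properties of (5.55) can be tested numerically. The following sketch (an illustration of mine under the conventions above, not the author's code) checks positivity of the imaginary part in the upper half-plane and the symmetry (5.54):

```python
import numpy as np

def M_edge(lam, ell):
    """Edge M-function (5.55) for the Laplacian on an interval of length ell.
    The branch of the square root is irrelevant: the entries are even in k."""
    k = np.sqrt(complex(lam))
    return np.array([[-k / np.tan(k * ell), k / np.sin(k * ell)],
                     [k / np.sin(k * ell), -k / np.tan(k * ell)]])

lam = 2.0 + 1.0j                      # a sample point in the upper half-plane
M = M_edge(lam, 1.0)
ImM = (M - M.conj().T) / 2j
assert np.all(np.linalg.eigvalsh(ImM) > 0)   # Im M_e > 0 for Im lam > 0
# symmetry with respect to the real axis, property (5.54)
assert np.allclose(M_edge(np.conj(lam), 1.0), M.conj().T)
```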

**Problem 26** Prove that *M***e***(λ)* is a Herglotz-Nevanlinna function in the case of zero potential directly using only representation (5.55).

**Problem 27** What is the relation between the edge M-function just introduced and the edge scattering matrix *S***<sup>e</sup>** defined by (5.32)?

## *5.3.2 The Edge M-Function*

Introducing the edge M-function we pay no attention to the way the edges are connected to each other. In some sense this matrix is analogous to the edge scattering matrix $\mathbf S_{\mathbf e}(k)$ introduced earlier—it describes the relation between the boundary values of all solutions of the eigenfunction equation on all edges without paying attention to the vertex conditions.

The **edge M-function** $\mathbf M_{\mathbf e}$ is the $2N \times 2N$ matrix function which is block-diagonal in the edge basis of boundary values:

$$\mathbf{M}\_{\mathbf{e}}(\lambda) = \text{diag}\left\{M\_{\mathbf{e}}^{n}(\lambda)\right\}\_{n=1}^{N},\tag{5.56}$$

where $M_{\mathbf e}^n(\lambda)$ are the $2 \times 2$ matrix functions associated with the edges $E_n = [x_{2n-1}, x_{2n}]$. The edge M-function is a Herglotz-Nevanlinna function, since each $M_{\mathbf e}^n(\lambda)$ is Herglotz-Nevanlinna. This M-function is determined by the differential expression $\tau_{q,a}$ on all the intervals and possesses no information about the vertex conditions or the connectivity of the metric graph.

The edge M-function, originally defined for non-real $\lambda$, can be continued to all $\lambda$ which are not Dirichlet-Dirichlet eigenvalues of the differential expression $\tau_{q,a}$ on any of the edges. These eigenvalues coincide with the zeroes of the coefficients $t_{12}^n(\lambda)$ associated with the intervals $E_n$. For a finite compact graph this set always has measure zero, implying that $\mathbf M_{\mathbf e}(\lambda)$ is well-defined for almost every real $\lambda$.

The edge M-function, as well as the edge scattering matrix, describes the dependence between the boundary values of any function *ψ* solving the eigenfunction equation on every edge

$$\mathbf{M}\_{\mathbf{e}}(\lambda) : \vec{\Psi} \mapsto \partial \vec{\Psi}. \tag{5.57}$$

Using the amplitudes $\vec A$ and the edge scattering matrix, the last relation can be written as follows

$$\mathbf M_{\mathbf e}(\lambda) \left(\mathbf I + \mathbf S_{\mathbf e}(k)\right) \vec A = ik \left(\mathbf I - \mathbf S_{\mathbf e}(k)\right) \vec A,$$

which leads to the following relation between the edge scattering matrix and the edge M-function

$$\mathbf{M\_e}(\lambda) = ik \frac{\mathbf{I} - \mathbf{S\_e}(k)}{\mathbf{I} + \mathbf{S\_e}(k)} \quad \Leftrightarrow \quad \mathbf{S\_e}(k) = \frac{ik\mathbf{I} - \mathbf{M\_e}(\lambda)}{ik\mathbf{I} + \mathbf{M\_e}(\lambda)}. \tag{5.58}$$

These relations hold for all non-real $\lambda$ and for almost all real $\lambda$.
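Relation (5.58) can be verified numerically for a single Laplace edge by combining (5.36) and (5.55). The sketch below is only an illustration with arbitrarily chosen values of $k$ and $\ell$:

```python
import numpy as np

k, ell = 1.3, 0.7     # arbitrary sample values, k*ell not a multiple of pi
I = np.eye(2)
M = np.array([[-k / np.tan(k * ell), k / np.sin(k * ell)],
              [k / np.sin(k * ell), -k / np.tan(k * ell)]])   # M-function (5.55)
Se = np.array([[0, np.exp(1j * k * ell)],
               [np.exp(1j * k * ell), 0]])                    # scattering matrix (5.36)

# right-hand identity of (5.58): S_e(k) = (ik I - M_e)(ik I + M_e)^{-1}
assert np.allclose(Se, (1j * k * I - M) @ np.linalg.inv(1j * k * I + M))
# and conversely M_e = ik (I - S_e)(I + S_e)^{-1}
assert np.allclose(M, 1j * k * (I - Se) @ np.linalg.inv(I + Se))
```

Here the quotients in (5.58) are written as matrix products with inverses; for a single edge all matrices involved commute, so the order is immaterial.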

# *5.3.3 Characteristic Equation via the M-Function: General Vertex Conditions*

Let $\psi(x)$ be any function solving Eq. (5.11) on every edge and suppose $\lambda$ does not belong to the union of the Dirichlet-Dirichlet spectra on the edges ($t_{12}^n(\lambda) \neq 0$, $n = 1, 2, \dots, N$). Then the corresponding vectors of boundary values satisfy the linear equation

$$\mathbf M_{\mathbf e}(\lambda)\vec{\Psi} = \partial_{\mathbf n}\vec{\Psi},\tag{5.59}$$

i.e. the M-function maps the vector of function values to the vector of normal derivatives $\partial_{\mathbf n}\vec\Psi$. The function $\psi$ is an eigenfunction if in addition it satisfies the vertex conditions (3.53). We write these conditions in the form

$$i\left(\mathbf{S} - \mathbf{I}\right)\vec{\Psi} = \left(\mathbf{S} + \mathbf{I}\right)\partial_{\mathbf{n}}\vec{\Psi},\tag{5.60}$$

where $\mathbf S = \mathbf S_{\mathbf v}(1)$ is the unitary matrix parametrising the vertex conditions. Note that this matrix connects the limiting values associated with each vertex separately, hence it is reducible (except when the graph has only one vertex). But here we use the basis where the boundary values are ordered with respect to the edges, not with respect to the vertices. Therefore the matrix $\mathbf S$ is not block-diagonal.

Equations (5.59) and (5.60) imply that *λ* is an eigenvalue only if

$$\left\{i(\mathbf{S} - \mathbf{I}) - (\mathbf{S} + \mathbf{I})\mathbf{M}_{\mathbf{e}}(\lambda)\right\}\vec{\Psi} = 0\tag{5.61}$$

holds. Note that not all eigenvalues of $L_q^{\mathbf S}$ correspond to nontrivial solutions of (5.61), since we excluded from our analysis the Dirichlet-Dirichlet eigenvalues.

All eigenvalues of $L_q^{\mathbf S}$ different from the Dirichlet-Dirichlet eigenvalues on the edges are given by the characteristic equation

$$\det\left\{i(\mathbf{S} - \mathbf{I}) - (\mathbf{S} + \mathbf{I})\mathbf{M}\_{\mathbf{e}}(\lambda)\right\} = 0.\tag{5.62}$$

All dependence on $\lambda$ is via the M-function $\mathbf M_{\mathbf e}(\lambda)$, even in the case of energy-dependent vertex scattering matrices. The advantage of the characteristic equation (5.62) is that it is given as a determinant of a $2N \times 2N$ matrix.

Since (3.53) is a Hermitian relation, the last equation has no nonreal solutions, as expected. One may see this by applying the spectral projectors $\mathbf P_{-1}$ and $\mathbf P_{-1}^{\perp} := \mathbf I - \mathbf P_{-1}$ associated with the unitary matrix $\mathbf S$:

$$\mathbf{P}\_{-1}^{\perp}i\frac{\mathbf{I}-\mathbf{S}}{\mathbf{I}+\mathbf{S}}\mathbf{P}\_{-1}^{\perp}\vec{\Psi}=\mathbf{P}\_{-1}^{\perp}\mathbf{M}\_{\mathbf{e}}(\lambda)\mathbf{P}\_{-1}^{\perp}\vec{\Psi}.\tag{5.63}$$

The matrix on the left-hand side is nothing else than the Hermitian matrix $\mathbf A_{\mathbf S}$ introduced in Sect. 3.4. We use bold face here in order to indicate that the matrix $\mathbf A_{\mathbf S}$ includes not one, but all vertices in the graph $\Gamma$.<sup>2</sup> We get the following equation

$$\mathbf{A}_{\mathbf{S}} \vec{\Psi} = \mathbf{P}_{-1}^{\perp} \mathbf{M}_{\mathbf{e}}(\lambda) \mathbf{P}_{-1}^{\perp} \vec{\Psi}.$$

Since $\mathbf M_{\mathbf e}(\lambda)$ has a sign-definite imaginary part for $\operatorname{Im} \lambda \neq 0$, while $\mathbf A_{\mathbf S}$ is Hermitian, no nontrivial solution to the last equation exists.

Another way to write the characteristic equation uses the inverse of the edge M-function. This approach works for all $\lambda$ which are not Neumann-Neumann eigenvalues on any separate edge. For every such $\lambda$, Eq. (5.59) can be written in the form

$$
\vec{\Psi} = (\mathbf{M}_{\mathbf{e}}(\lambda))^{-1} \partial_{\mathbf{n}} \vec{\Psi} \tag{5.64}
$$

implying that

$$\left\{ i(\mathbf{S} - \mathbf{I}) (\mathbf{M}_{\mathbf{e}}(\lambda))^{-1} - (\mathbf{S} + \mathbf{I}) \right\} \partial_{\mathbf{n}} \vec{\Psi} = 0. \tag{5.65}$$

The corresponding characteristic equation

$$\det\left\{i(\mathbf{S}-\mathbf{I})(\mathbf{M}\_{\mathbf{e}}(\lambda))^{-1}-(\mathbf{S}+\mathbf{I})\right\}=0\tag{5.66}$$

determines the eigenvalues, which are not Neumann-Neumann eigenvalues on the edges.

# *5.3.4 Reduction of the M-Function for Standard Vertex Conditions*

If we assume standard vertex conditions, then the characteristic equation can be reduced to a determinant of a certain *M* ×*M* matrix. To this end consider the *M* ×*M*

<sup>2</sup> Written in the vertex basis the matrix **AS** has a block-diagonal structure but we use the edge basis here.

matrix function **M**st*(λ)* defined by

$$\begin{aligned} \left(\mathbf M^{\mathrm{st}}\right)_{mm'} &= \sum_{\substack{E_n:\, x_{2n-1} \in V^m \\ x_{2n} \in V^{m'}}} (M_{\mathbf e}^n)_{12} + \sum_{\substack{E_n:\, x_{2n} \in V^m \\ x_{2n-1} \in V^{m'}}} (M_{\mathbf e}^n)_{21}, \qquad m \neq m'; \\ \left(\mathbf M^{\mathrm{st}}\right)_{mm} &= \sum_{E_n:\, x_{2n-1} \in V^m} (M_{\mathbf e}^n)_{11} + \sum_{E_n:\, x_{2n} \in V^m} (M_{\mathbf e}^n)_{22} + \sum_{E_n:\, x_{2n-1},\, x_{2n} \in V^m} \left( (M_{\mathbf e}^n)_{12} + (M_{\mathbf e}^n)_{21} \right). \end{aligned} \tag{5.67}$$

Note that the last sum is over all loop edges attached to $V^m$. The entries of this M-function are different from zero only if the corresponding entries of the adjacency matrix are different from zero. The total number of terms in the two sums appearing in the formula for the off-diagonal entries is equal to the number of parallel edges. For the diagonal entries the total number of terms is equal to the valency of the vertex, where the loops are counted twice.

The matrix function $\mathbf M^{\mathrm{st}}(\lambda)$ can be written as a sum of $N$ Herglotz-Nevanlinna matrices associated with the different edges. Let $E_n$ be an edge connecting the vertices $V^m$ and $V^{m'}$. Then define the $M \times M$ matrix $\mathbf M^n(\lambda)$ having just four non-zero entries

$$\begin{array}{ll} \left(\mathbf{M}^{n}\right)\_{mm} = \left(M\_{\mathbf{e}}^{n}\right)\_{11}, & \left(\mathbf{M}^{n}\right)\_{mm'} = \left(M\_{\mathbf{e}}^{n}\right)\_{12}, \\\left(\mathbf{M}^{n}\right)\_{m'm} = \left(M\_{\mathbf{e}}^{n}\right)\_{21}, & \left(\mathbf{M}^{n}\right)\_{m'm'} = \left(M\_{\mathbf{e}}^{n}\right)\_{22}. \end{array} \tag{5.68}$$

It takes the form

$$\mathbf{M}^n(\lambda) = \begin{pmatrix} 0 \dots & 0 & \dots & 0 & \dots & 0 \\ \vdots \ddots & \vdots & & \vdots & & \vdots \\ 0 & \dots & (M\_\mathbf{e}^n)\_{11} & \dots (M\_\mathbf{e}^n)\_{12} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ 0 & \dots & (M\_\mathbf{e}^n)\_{21} & \dots (M\_\mathbf{e}^n)\_{22} & \dots & 0 \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & \dots & 0 & \dots & 0 \end{pmatrix} . \tag{5.69}$$

If the edge *En* is a loop attached to the vertex *V m,* then the matrix **M***n(λ)* has just one nonzero entry

$$\left(\mathbf{M}^{n}\right)\_{mm} = (M\_{\mathbf{e}}^{n})\_{11} + (M\_{\mathbf{e}}^{n})\_{12} + (M\_{\mathbf{e}}^{n})\_{21} + (M\_{\mathbf{e}}^{n})\_{22},\tag{5.70}$$

so that

$$\mathbf{M}^{n}(\lambda) = \begin{pmatrix} 0 \dots & 0 & \dots & 0 \\ \vdots \ddots & \vdots & & \vdots \\ 0 \dots . \ (M\_{\mathbf{e}}^{n})\_{11} + (M\_{\mathbf{e}}^{n})\_{12} + (M\_{\mathbf{e}}^{n})\_{21} + (M\_{\mathbf{e}}^{n})\_{22} \dots 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 \dots & 0 & \dots & 0 \end{pmatrix} . \tag{5.71}$$

Then the following formula holds

$$\mathbf{M}^{\rm st}(\lambda) = \sum\_{n=1}^{N} \mathbf{M}^{n}(\lambda),\tag{5.72}$$

proving that **M**st*(λ)* is a Herglotz-Nevanlinna matrix function.

Let us show that the matrix function $\mathbf M^{\mathrm{st}}(\lambda)$ can be used to determine the spectrum of quantum graphs with standard vertex conditions. Consider all functions $\psi$ which are continuous on $\Gamma$ and denote by $\psi(V^m)$ their (common) values at the vertices $V^m$, $m = 1, 2, \dots, M$. The corresponding $M$-dimensional vector $\{\psi(V^m)\}_{m=1}^M$ will be denoted by $\vec\psi_{\mathbf v}$. This vector is closely related to the $2N$-dimensional vector $\vec\Psi$ introduced earlier. In the case of continuous functions all entries of $\vec\Psi$ connected with the same vertex are equal, and therefore the dimension of the vector may be reduced without losing any information on the values of the function $\psi$ at the edges' endpoints.

The sum of the normal derivatives at the vertex $V^m$ can be calculated as follows:

$$\begin{aligned} \partial\psi(V^m) &:= \sum_{x_j \in V^m} \partial_{\mathbf n}\psi(x_j) \\ &= \sum_{E_n:\, x_{2n-1} \in V^m} (M_{\mathbf e}^n)_{11}\, \psi(V^m) + \sum_{E_n:\, x_{2n} \in V^m} (M_{\mathbf e}^n)_{22}\, \psi(V^m) \\ &\quad + \sum_{m'=1}^M \left( \sum_{\substack{E_n:\, x_{2n-1} \in V^m \\ x_{2n} \in V^{m'}}} (M_{\mathbf e}^n)_{12}\, \psi(V^{m'}) + \sum_{\substack{E_n:\, x_{2n} \in V^m \\ x_{2n-1} \in V^{m'}}} (M_{\mathbf e}^n)_{21}\, \psi(V^{m'}) \right). \end{aligned} \tag{5.73}$$

Introducing the *M*-dimensional vector

$$\partial \vec{\psi}\_{\mathbf{V}} := \{ \partial \psi (V^m) \}\_{m=1}^M \tag{5.74}$$

the last equation can be written in the matrix form

$$
\partial \vec{\psi}\_{\mathbf{v}} = \mathbf{M}^{\text{st}}(\lambda) \,\,\vec{\psi}\_{\mathbf{v}} \tag{5.75}
$$

with the M-function **M**st*(λ)* given by (5.67). The function *ψ* satisfies the standard vertex conditions if and only if the vector *∂ψ***<sup>v</sup>** is identically equal to zero, i.e. if *ψ***<sup>v</sup>** belongs to the kernel of **M**st*(λ).* It follows that the corresponding characteristic equation is simply

$$\det \mathbf{M}^{\mathrm{st}}(\lambda) = 0.\tag{5.76}$$

As before, this equation does not determine the whole spectrum of the quantum graph, but only the eigenvalues different from the Dirichlet-Dirichlet eigenvalues on the edges.

**Example 5.8** We return to the ring graph $\Gamma_{(1.2)}$ already considered in Examples 5.2 and 5.6. The edge M-function is given by (5.55) and the vertex scattering matrix by (5.27). The characteristic equation takes the form

$$\det\left[i\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} - \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} -k\cot k\ell_1 & \frac{k}{\sin k\ell_1} \\ \frac{k}{\sin k\ell_1} & -k\cot k\ell_1 \end{pmatrix}\right] = 0$$

$$\Rightarrow \det\begin{bmatrix} k\cot k\ell_1 - \frac{k}{\sin k\ell_1} - i & k\cot k\ell_1 - \frac{k}{\sin k\ell_1} + i \\ k\cot k\ell_1 - \frac{k}{\sin k\ell_1} + i & k\cot k\ell_1 - \frac{k}{\sin k\ell_1} - i \end{bmatrix} = 0$$

$$\Rightarrow -4ik\,\frac{\cos k\ell_1 - 1}{\sin k\ell_1} = 0.$$

Consider nonzero values of $k$. The numerator has second order zeroes at the points $k_n = \frac{2\pi}{\ell_1} n$, $n = 1, 2, \dots$, while the denominator has first order zeroes there. Hence the quotient has just first order zeroes at $k_n$. The determinant has a second order zero at $k = 0$.

We see that not only is the multiplicity of the eigenvalue $\lambda = 0$ not reflected correctly, but that of all other eigenvalues as well. The reason is that all points $\frac{m\pi}{\ell_1}$ belong to the Dirichlet-Dirichlet spectrum of the operator on the interval $[x_1, x_2]$ of length $\ell_1$. As a result, not only do the multiplicities of the eigenvalues $\lambda_n = k_n^2$ not coincide with the order of these zeroes in the determinant, but the determinant function is singular at $\frac{(2m+1)\pi}{\ell_1}$, $m \in \mathbb N$.
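The computation of Example 5.8, including the singular behaviour at the Dirichlet-Dirichlet points, can be reproduced numerically. The following sketch is an illustration of mine, not part of the original text:

```python
import numpy as np

ell = 1.0

def char_fun(k):
    """Determinant of Example 5.8: det[i(S - I) - (S + I) M_e] for the ring."""
    S = np.array([[0, 1], [1, 0]])
    M = np.array([[-k / np.tan(k * ell), k / np.sin(k * ell)],
                  [k / np.sin(k * ell), -k / np.tan(k * ell)]])
    I = np.eye(2)
    return np.linalg.det(1j * (S - I) - (S + I) @ M)

k = 1.234                              # a generic test point
assert np.isclose(char_fun(k), -4j * k * (np.cos(k * ell) - 1) / np.sin(k * ell))
# near the Dirichlet-Dirichlet points k = (2m+1)*pi/ell the determinant blows up
assert abs(char_fun(np.pi / ell + 1e-6)) > 1e5
```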

The advantages of the M-function approach are:

• it underlines the analytic structure of the problem so that the spectrum is given by zeroes of a certain (matrix-valued) Herglotz-Nevanlinna function;

• the rank of the matrix is reduced significantly if standard vertex conditions are assumed.

On the other hand this method has the following serious drawback:

• the Dirichlet-Dirichlet eigenvalues on the edges should be excluded from consideration; such positive numbers require a separate investigation.

**Problem 28** Consider the 8-shape graph $\Gamma_{(2.4)}$ given in Fig. 2.7. Calculate the spectrum of the standard Laplacian using all three methods from the current chapter. Compare the results with the calculations carried out in Sect. 2.2. Do you get all eigenvalues with correct multiplicities?

**Problem 29** Let $\Gamma_{(2.3)}$ be the graph formed by two edges of lengths $\ell_1$ and $\ell_2$ connected at their endpoints, forming a loop of length $\ell_1 + \ell_2$. Consider the standard Laplacian $L^{\mathrm{st}}(\Gamma_{(2.3)})$ and write the characteristic equations for its spectrum using all three methods described.


**Problem 30** How does the M-function depend on the magnetic potential? Derive an explicit formula connecting the M-functions corresponding to the same electric but different magnetic potentials. How can one see from the third characteristic equation (5.62) that the spectrum of a magnetic Schrödinger operator depends only on the fluxes of the magnetic field through the cycles?

**Problem 31** Give one more explicit example of a metric graph such that the standard Laplacian has eigenvalues not determined by the characteristic equation (5.76).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 Standard Laplacians and Secular Polynomials**

In this chapter we begin a systematic study of the spectral properties of graph Laplacians—standard Laplace operators on metric graphs. Our main interest is in families of metric graphs having the same topological structure. The metric graphs from such a family correspond to the same discrete graph, but the lengths of the edges may be different. Common spectral properties of such families (and hence of all metric graphs) are best described by certain multivariate low degree polynomials.

# **6.1 Secular Polynomials**

The secular equation determining the spectrum of a Schrödinger operator on a finite compact metric graph takes an especially simple form for standard Laplacians—the Laplace operators determined by standard vertex conditions. It turns out that in this case the spectrum can be effectively described using low degree multivariate polynomials. Most of the ideas can be applied to any Laplacian with scaling-invariant vertex conditions, but we restrict our presentation to standard conditions for the sake of clarity. We shall closely follow [351].

**Theorem 6.1** *The spectrum of the standard Laplacian on a finite compact metric graph is given by the zeroes of the trigonometric polynomial* 

$$p\_\Gamma(k) = \det\left(\mathbf{S}\_\mathbf{e}(k) - \mathbf{S}\right) = \sum\_{j=1}^J p\_j e^{iw\_j k},\tag{6.1}$$

*where $J \in \mathbb N$ is a certain integer, $p_j \in \mathbb R$, and*

$$w\_j = \sum\_{n=1}^{N} \nu\_n^j \ell\_n,\tag{6.2}$$

*with $\sum_{n=1}^{N} \nu_n^j \le 2N$ and $\nu_n^j \in \{0, 1, 2\}$.*<sup>1</sup>

*Proof* Consider the secular equation (5.47)

$$\det(\mathbf{S}(k) - \mathbf{I}) = \det(\mathbf{S}\_{\mathbf{V}}(k)\mathbf{S}\_{\mathbf{e}}(k) - \mathbf{I}) = 0.$$

For standard vertex conditions the vertex scattering matrix $\mathbf{S}_{\mathbf{v}}(k)$ is Hermitian and energy-independent, $\mathbf{S}_{\mathbf{v}}(k) \equiv \mathbf{S}$; hence the secular equation can be written as

$$p\_\Gamma(k) = \det\left(\mathbf{S}\_\mathbf{e}(k) - \mathbf{S}\right) = 0,$$

since **S**<sup>2</sup> <sup>=</sup> **<sup>I</sup>** (remember that **<sup>S</sup>** is also Hermitian) and we are only interested in the zeroes of the secular function.

The entries of the edge scattering matrix for the Laplacian are just exponentials $e^{ik\ell_n}$. Therefore the entries of the matrix $\mathbf{S}_\mathbf{e}(k) - \mathbf{S}$ are just sums of such exponentials and real numbers. Taking the determinant we get products of such exponentials, leading to a trigonometric polynomial. Each exponential in the polynomial is a product of at most $2N$ exponentials $e^{ik\ell_n}$, hence (6.2) holds.

Every zero of the trigonometric polynomial $p_\Gamma(k)$ corresponds to an eigenvalue of the standard Laplacian. Moreover, for positive eigenvalues the order of the zero coincides with the multiplicity of the eigenvalue (see Theorem 8.1 below). On the other hand, Lemma 4.10 states that the multiplicity of the eigenvalue $\lambda = 0$ is always equal to 1 for connected graphs (independently of the order of the zero of the trigonometric polynomial). We return to this question in Chap. 8 while proving the trace formula.

The dependence on $k$ in the trigonometric polynomial is via the exponentials $e^{ik\ell_n}$; therefore let us introduce new complex variables

$$z\_n = e^{ik\ell\_n}, \quad n = 1, 2, \ldots, N,\tag{6.3}$$

and the multivariate **secular polynomials** *PG(***z***)*

$$P\_G(\mathbf{z}) = \det(\mathbf{E}(\mathbf{z}) - \mathbf{S}), \quad \mathbf{z} = (z\_1, z\_2, \dots, z\_N) \in \mathbb{C}^N,\tag{6.4}$$

<sup>1</sup> It follows that $w_j \in \mathcal{L}_{\mathbb{N}}\{\ell_n\}_{n=1}^N$, where $\mathcal{L}_{\mathbb{N}}$ denotes the linear span with coefficients from $\mathbb{N}$.

where $\mathbf{E}(\mathbf{z})$ is the following block-diagonal matrix with $2 \times 2$ blocks

$$\mathbf{E}(\mathbf{z}) = \operatorname{diag}\left\{ \begin{pmatrix} 0 & z_j \\ z_j & 0 \end{pmatrix} \right\}_{j=1}^{N} = \begin{pmatrix} 0 & z_1 & 0 & 0 & \dots & 0 & 0 \\ z_1 & 0 & 0 & 0 & \dots & 0 & 0 \\ 0 & 0 & 0 & z_2 & \dots & 0 & 0 \\ 0 & 0 & z_2 & 0 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \dots & 0 & z_N \\ 0 & 0 & 0 & 0 & \dots & z_N & 0 \end{pmatrix}. \tag{6.5}$$

The secular polynomial is determined entirely by the discrete graph $G$ associated with the original metric graph $\Gamma$, hence the notation $P_G(\mathbf{z})$. The discrete graph $G(\Gamma)$ has the same set of vertices and edges as $\Gamma$. The only difference is that the edges are seen as pairs of vertices, not as subintervals of $\mathbb{R}$ as in Definition 2.1.

The original secular equation (5.47) can be written using the secular polynomial as

$$P\_G(e^{ik\ell\_1}, e^{ik\ell\_2}, \dots, e^{ik\ell\_N}) \equiv p\_\Gamma(k) = 0,\tag{6.6}$$

where $\ell_n$ are the lengths of the edges. Therefore the zero sets $Z_G$ of the secular polynomials together with the edge lengths determine the spectra of standard Laplacians. The rest of this and the following chapters will be devoted to the study of secular polynomials, especially of their zero sets.

Let us put together a few obvious properties of the secular polynomials. Denoting by $Z_G$ the zero set of $P_G(\mathbf{z})$ on the unit torus


$$\mathbb{T}^{N} = \{ \mathbf{z} \in \mathbb{C}^{N} : |z\_{n}| = 1, n = 1, 2, \dots, N \} \subset \mathbb{C}^{N},$$

$$\mathbf{Z}\_{G} = \left\{ \mathbf{z} \in \mathbb{T}^{N} : P\_{G}(\mathbf{z}) = 0 \right\},\tag{6.7}$$

then the spectrum of the standard Laplacian $L^{\rm st}$ is given by the intersections of the curve $k \mapsto (e^{ik\ell_1}, \dots, e^{ik\ell_N}) \in \mathbb{T}^N$ with $Z_G$.

(5) Similarly, denoting by $\mathbf{Z}_G$ the zero set of $P_G(e^{i\boldsymbol{\varphi}})$ on the real torus

$$
\mathbf{T}^N = \left(\mathbb{R}/2\pi\mathbb{Z}\right)^N,
$$

$$
\mathbf{Z}\_G = \left\{ \boldsymbol{\mathfrak{p}} \in \mathbf{T}^N \, : \, P\_G(e^{i\boldsymbol{\mathfrak{p}}}) = 0 \right\},
\tag{6.8}
$$

the spectrum of $L^{\rm st}$ is given by the intersections of the line $k \mapsto (k\ell_1, \dots, k\ell_N) \in \mathbf{T}^N$ with $\mathbf{Z}_G$.

(6) The point $\mathbf{1} = (1, 1, \dots, 1) \in \mathbb{C}^N$ always belongs to $Z_G$ since

$$\mathbf{E}(\mathbf{1})\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} = \mathbf{S}\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}.$$

It will be shown later (Theorem 8.2) that, provided the graph is connected, the secular function $p_\Gamma(k)$ has a zero of multiplicity $1 + \beta_1(\Gamma)$ at this point instead of multiplicity $1$, as one might expect.

In what follows the complex and real coordinates related via (6.3),

$$\mathbf{z} = e^{i\boldsymbol{\varphi}}, \quad \boldsymbol{\varphi} = (\varphi_1, \varphi_2, \dots, \varphi_N),$$

as well as the complex torus $\mathbb{T}^N$ and the real torus $\mathbf{T}^N$, will be used simultaneously.

It will also be convenient for us to follow [457] and understand secular polynomials projectively, that is, two polynomials $P^1$ and $P^2$ are considered equal if and only if there exists a non-zero complex number $\lambda$ such that

$$P^1(\mathbf{z}) = \lambda P^2(\mathbf{z}).$$

In the following example we describe our strategy for studying spectral properties of graph Laplacians, considering the simplest metric graph with a non-trivial spectrum.

**Example 6.2** Consider the lasso graph $G_{(2.2)}$ depicted in Fig. 6.1.

**Fig. 6.1** Graph $G_{(2.2)}$

The corresponding secular polynomial depends on two variables

$$\begin{split} P_{(2.2)}(z_1, z_2) &= \det\left( \begin{pmatrix} 0 & z_1 & 0 & 0\\ z_1 & 0 & 0 & 0\\ 0 & 0 & 0 & z_2\\ 0 & 0 & z_2 & 0 \end{pmatrix} - \begin{pmatrix} -1/3 & 2/3 & 2/3 & 0\\ 2/3 & -1/3 & 2/3 & 0\\ 2/3 & 2/3 & -1/3 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \right) \\ &= \frac{1}{3}\left( 3 - 4z_1 + z_1^2 + z_2^2 - 4z_1 z_2^2 + 3z_1^2 z_2^2 \right) \\ &= \frac{1}{3}(z_1 - 1)\left( -3 + z_1 - z_2^2 + 3z_1 z_2^2 \right) \\ &= (z_1 - 1)\left( -3 + z_1 - z_2^2 + 3z_1 z_2^2 \right), \end{split}$$

where the last equality holds since the polynomials are treated projectively.
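The determinant computation in Example 6.2 is easy to reproduce symbolically. A minimal sympy sketch, with both matrices copied from the display above:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# Edge scattering matrix E(z) of the lasso graph (two edges, four endpoints)
E = sp.Matrix([
    [0,  z1, 0,  0],
    [z1, 0,  0,  0],
    [0,  0,  0,  z2],
    [0,  0,  z2, 0],
])

# Vertex scattering matrix S for standard conditions, copied from Example 6.2
t = sp.Rational(1, 3)
S = sp.Matrix([
    [-t,  2*t, 2*t, 0],
    [2*t, -t,  2*t, 0],
    [2*t, 2*t, -t,  0],
    [0,   0,   0,   1],
])

P = sp.expand((E - S).det())
expected = t * (3 - 4*z1 + z1**2 + z2**2 - 4*z1*z2**2 + 3*z1**2*z2**2)

print(sp.simplify(P - expected) == 0)  # True: matches the expansion above
print(sp.factor(3*P))                  # the loop factor (z1 - 1) splits off
```

The same recipe works for any graph once the matrices $\mathbf{E}(\mathbf{z})$ and $\mathbf{S}$ are assembled.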

The first degree factor *z*<sup>1</sup> − 1 appears due to the loop formed by the edge *E*1. Every metric graph with a loop has eigenfunctions given by the sine function on the loop

$$
\psi(\mathbf{x}) = \sin k(\mathbf{x} - \mathbf{x}\_{\mathrm{l}}), \quad k = \frac{2\pi}{\ell\_{\mathrm{l}}} m, \ m \in \mathbb{N}
$$

extended by zero to the rest of the graph. The energies of such functions depend only on the length of the loop and are not influenced by the lengths of the other edges.

The zero set $\mathbf{Z}_{G_{(2.2)}} \subset \mathbf{T}^2$ on the real torus is presented in Fig. 6.2. It is given by the zeroes of the function

$$L\_{(2.2)}(\varphi\_1, \varphi\_2) = \sin\frac{\varphi\_1}{2} \left( 3\sin(\frac{\varphi\_1}{2} + \varphi\_2) + \sin(\frac{\varphi\_1}{2} - \varphi\_2) \right).$$

**Fig. 6.2** The zero set **Z***G(*2*.*2*)* on the real torus

**Fig. 6.3** Graphical representation of the spectrum of *-(*2*.*2*)* for rationally dependent (left figure) and rationally independent (right figure) edge lengths

The line $\varphi_1 = 0$ (a one-dimensional subtorus) corresponds to the linear factor $z_1 - 1$. The remaining (closed) curve is the zero set of the trigonometric polynomial $3\sin(\frac{\varphi_1}{2} + \varphi_2) + \sin(\frac{\varphi_1}{2} - \varphi_2)$.

The zero set is an algebraic manifold with two singular points corresponding to the intersections between the line and the curve: $(0, 0)$ and $(0, \pi) = (0, -\pi)$.

To obtain the spectrum of the standard Laplacian one has to choose the edge lengths $(\ell_1, \ell_2)$ and plot on the same torus the line $(k\ell_1, k\ell_2) \in \mathbf{T}^2$. The intersection points with the zero set $\mathbf{Z}_{G_{(2.2)}}$ give the spectrum (see Fig. 6.3).

If the lengths are rationally dependent, $\ell_1/\ell_2 \in \mathbb{Q}$, then the line is periodic on the torus and there is just a finite number of intersection points. The spectrum is periodic in the $k$-scale. The left figure shows solutions in the case $\ell_2/\ell_1 = 1/3$, and we observe 5 intersection points.

If the lengths are rationally independent, $\ell_1/\ell_2 \notin \mathbb{Q}$, then the line densely covers the whole torus and the intersection points are dense in $\mathbf{Z}_G$. Only the part of the spectrum connected with the loop eigenfunctions is periodic (corresponding to the intersections between the line $(k\ell_1, k\ell_2)$ and the line $\varphi_1 = 0$); the rest of the spectrum has no period. The right figure shows solutions in the case $\ell_2/\ell_1 = \frac{1+\sqrt{5}}{2}$.

Depending on whether the edge lengths are rationally dependent or not, one observes either multiple eigenvalues ($\ell_1/\ell_2 \in \mathbb{Q}$) or arbitrarily close high energy eigenvalues ($\ell_1/\ell_2 \notin \mathbb{Q}$). Close eigenvalues occur when the line $k \mapsto (k\ell_1, k\ell_2)$ comes near the singular points. If one takes away the periodic part of the spectrum $k = \frac{2\pi}{\ell_1} m$, $m \in \mathbb{Z}$, then the remaining spectrum is uniformly discrete, as there is always a minimal distance between the subsequent intersections of the line with the zeroes of the reduced trigonometric polynomial

$$3\sin(\frac{\varphi\_1}{2} + \varphi\_2) + \sin(\frac{\varphi\_1}{2} - \varphi\_2).$$
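The graphical procedure just described is easy to mimic numerically. A short numpy sketch, assuming the rationally dependent lengths $\ell_1 = 1$, $\ell_2 = 1/3$ of the left figure, locates the zeros of $p_\Gamma(k)$ by sign changes:

```python
import numpy as np

l1, l2 = 1.0, 1.0 / 3.0  # assumed edge lengths, ratio l2/l1 = 1/3

def p_gamma(k):
    """Secular function of the lasso graph: L_(2.2)(k*l1, k*l2)."""
    return np.sin(k * l1 / 2) * (3 * np.sin(k * l1 / 2 + k * l2)
                                 + np.sin(k * l1 / 2 - k * l2))

# locate zeros on (0, 4*pi] via sign changes on a fine grid
ks = np.linspace(1e-6, 4 * np.pi, 400001)
vals = p_gamma(ks)
idx = np.nonzero(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
zeros = ks[idx]
print(np.round(zeros, 3))

# the loop eigenfunctions give the periodic subsequence k = 2*pi*m/l1
print(abs(p_gamma(2 * np.pi)) < 1e-12)
```

With rationally independent lengths the same sketch produces a non-periodic sequence of zeros, in line with the right figure.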

## **6.2 Secular Polynomials for Small Graphs**

In this section we shall present the secular polynomials for the graphs formed by one, two, or three edges. The calculations are elementary and essentially follow Example 6.2 above. This will not only allow us to describe the spectra of such graphs graphically but will also be used intensively later on to study reducibility of secular polynomials for arbitrary graphs. Therefore, when calculating secular polynomials, we shall always factorise them into irreducible factors.

All metric graphs on at most three edges are listed below with the enumeration of edges indicated (Fig. 6.4). Some of the graphs have degree two vertices and are formally excluded from our consideration, but we shall need them in the proofs. The secular polynomials for such graphs can be obtained from the secular polynomials for genuine (i.e. without degree two vertices) graphs. Let $G$ be a graph with the edges $E_j$ and $E_l$ connected at a degree two vertex. Consider the graph $G'$ obtained by substituting these two edges with a single edge $E_i$. Then the secular polynomial for $G$ can be obtained by replacing the variable $z_i$ with the product $z_j z_l$ in the polynomial for $G'$:

$$P_G(z_1, \ldots, z_j, \ldots, z_l, \ldots) = P_{G'}(z_1, \ldots, \underbrace{z_j z_l}_{z_i}, \ldots). \tag{6.9}$$
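Formula (6.9) can be sanity-checked with sympy: subdividing the loop $G_{(1.2)}$ by a degree two vertex yields $G_{(2.3)}$, and the two polynomials listed in (6.10) below are indeed related by the substitution $z \mapsto z_1 z_2$:

```python
import sympy as sp

z, z1, z2 = sp.symbols('z z1 z2')

P_loop = (z - 1)**2            # P_(1.2): loop on a single edge
P_subdivided = (z1*z2 - 1)**2  # P_(2.3): the same loop with a degree two vertex

# formula (6.9): replace the variable of the coarse graph by the product z1*z2
print(sp.expand(P_subdivided - P_loop.subs(z, z1*z2)) == 0)  # True
```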

All secular polynomials for graphs on at most three edges are listed below:

$$\begin{aligned} P_{(1.1)} &= z_1^2 - 1 \equiv (z_1 - 1)(z_1 + 1); \\ P_{(1.2)} &= (z_1 - 1)^2; \\ P_{(2.1)} &= z_1^2 z_2^2 - 1 \equiv (z_1 z_2 - 1)(z_1 z_2 + 1); \\ P_{(2.2)} &= (z_1 - 1)(3z_1 z_2^2 - z_2^2 + z_1 - 3); \\ P_{(2.3)} &= (z_1 z_2 - 1)^2; \\ P_{(2.4)} &= (z_1 - 1)(z_2 - 1)(z_1 z_2 - 1); \\ P_{(3.1)} &= (z_1 z_2 z_3 - 1)(z_1 z_2 z_3 + 1); \\ P_{(3.2)} &= 3z_1^2 z_2^2 z_3^2 + \left(z_1^2 z_2^2 + z_1^2 z_3^2 + z_2^2 z_3^2\right) - \left(z_1^2 + z_2^2 + z_3^2\right) - 3; \\ P_{(3.3)} &= (z_3 - 1)\left(3z_1^2 z_2^2 z_3 - z_1^2 z_2^2 + z_3 - 3\right); \\ P_{(3.4)} &= (z_3 - 1)\left(-2z_1^2 z_2^2 z_3 - z_1^2 z_3 - z_2^2 z_3 + z_1^2 + z_2^2 + 2\right); \\ P_{(3.5)} &= (z_1 z_2 - 1)(3z_1 z_2 z_3^2 - z_3^2 + z_1 z_2 - 3); \\ P_{(3.6)} &= (z_1 z_2 z_3 - 1)^2; \\ P_{(3.7)} &= (z_2 - 1)(z_3 - 1) \end{aligned}$$

**Fig. 6.4** Graphs on one, two and three edges

$$\qquad\times \left( 9z_1^2 z_2 z_3 - 3\left(z_1^2 z_2 + z_1^2 z_3\right) - z_2 z_3 + z_1^2 + 3\left(z_2 + z_3\right) - 9 \right);$$

$$P_{(3.8)} = (z_2 - 1)(z_3 - 1)$$

$$\qquad\times \left( 5z_1^2 z_2 z_3 + \left(z_1^2 z_2 + z_1^2 z_3\right) + 3z_2 z_3 - 3z_1^2 - \left(z_2 + z_3\right) - 5 \right);$$

$$P_{(3.9)} = \left( 3z_1 z_2 z_3 + \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) - 3 \right)$$

$$\qquad\times \left( 3z_1 z_2 z_3 - \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) + 3 \right);$$

$$P_{(3.10)} = (z_1 z_2 - 1)(z_3 - 1)(z_1 z_2 z_3 - 1);$$

$$P_{(3.11)} = (z_1 - 1)(z_2 - 1)(z_3 - 1)$$

$$\qquad\times \left( 3z_1 z_2 z_3 + \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) - 3 \right). \tag{6.10}$$

The corresponding trigonometric (Laurent) polynomials $L_G(\boldsymbol{\varphi}) := P_G(e^{i\boldsymbol{\varphi}})$ are:

$$\begin{aligned}
L_{(1.1)} &= \sin\varphi_1 \equiv \sin\frac{\varphi_1}{2}\cos\frac{\varphi_1}{2}; \\
L_{(1.2)} &= \cos\varphi_1 - 1 \equiv \left(\sin\frac{\varphi_1}{2}\right)^2; \\
L_{(2.1)} &= \sin(\varphi_1 + \varphi_2) \equiv \sin\frac{\varphi_1 + \varphi_2}{2}\cos\frac{\varphi_1 + \varphi_2}{2}; \\
L_{(2.2)} &= 3\cos(\varphi_1 + \varphi_2) - 4\cos\varphi_2 + \cos(\varphi_1 - \varphi_2) \\
&\equiv \sin\frac{\varphi_1}{2}\left(3\sin(\tfrac{\varphi_1}{2} + \varphi_2) + \sin(\tfrac{\varphi_1}{2} - \varphi_2)\right); \\
L_{(2.3)} &= \cos(\varphi_1 + \varphi_2) - 1 \equiv \left(\sin\frac{\varphi_1 + \varphi_2}{2}\right)^2; \\
L_{(2.4)} &= \sin(\varphi_1 + \varphi_2) - \sin\varphi_1 - \sin\varphi_2 \equiv \sin\frac{\varphi_1}{2}\sin\frac{\varphi_2}{2}\sin\frac{\varphi_1 + \varphi_2}{2}; \\
L_{(3.1)} &= \sin(\varphi_1 + \varphi_2 + \varphi_3) \equiv \sin\frac{\varphi_1 + \varphi_2 + \varphi_3}{2}\cos\frac{\varphi_1 + \varphi_2 + \varphi_3}{2}; \\
L_{(3.2)} &= 3\sin(\varphi_1 + \varphi_2 + \varphi_3) + \sin(\varphi_1 + \varphi_2 - \varphi_3) + \sin(\varphi_1 - \varphi_2 + \varphi_3) \\
&\quad + \sin(-\varphi_1 + \varphi_2 + \varphi_3); \\
L_{(3.3)} &= \sin\frac{\varphi_3}{2}\left(3\sin(\varphi_1 + \varphi_2 + \tfrac{\varphi_3}{2}) - \sin(\varphi_1 + \varphi_2 - \tfrac{\varphi_3}{2})\right); \\
L_{(3.4)} &= \sin\frac{\varphi_3}{2}\left(2\sin(\varphi_1 + \varphi_2 + \tfrac{\varphi_3}{2}) + \sin(\varphi_1 - \varphi_2 + \tfrac{\varphi_3}{2}) + \sin(-\varphi_1 + \varphi_2 + \tfrac{\varphi_3}{2})\right); \\
L_{(3.5)} &= \sin\frac{\varphi_1 + \varphi_2}{2}\left(3\sin(\tfrac{\varphi_1 + \varphi_2}{2} + \varphi_3) + \sin(\tfrac{\varphi_1 + \varphi_2}{2} - \varphi_3)\right); \\
L_{(3.6)} &= \cos(\varphi_1 + \varphi_2 + \varphi_3) - 1 \equiv \left(\sin\frac{\varphi_1 + \varphi_2 + \varphi_3}{2}\right)^2; \\
L_{(3.7)} &= \sin\frac{\varphi_2}{2}\sin\frac{\varphi_3}{2}\Big(9\sin(\varphi_1 + \tfrac{\varphi_2}{2} + \tfrac{\varphi_3}{2}) - 3\sin(\varphi_1 + \tfrac{\varphi_2}{2} - \tfrac{\varphi_3}{2}) \\
&\quad - 3\sin(\varphi_1 - \tfrac{\varphi_2}{2} + \tfrac{\varphi_3}{2}) + \sin(\varphi_1 - \tfrac{\varphi_2}{2} - \tfrac{\varphi_3}{2})\Big); \\
L_{(3.8)} &= \sin\frac{\varphi_2}{2}\sin\frac{\varphi_3}{2}\Big(5\sin(\varphi_1 + \tfrac{\varphi_2}{2} + \tfrac{\varphi_3}{2}) + \sin(\varphi_1 + \tfrac{\varphi_2}{2} - \tfrac{\varphi_3}{2}) \\
&\quad + \sin(\varphi_1 - \tfrac{\varphi_2}{2} + \tfrac{\varphi_3}{2}) - 3\sin(\varphi_1 - \tfrac{\varphi_2}{2} - \tfrac{\varphi_3}{2})\Big); \\
L_{(3.9)} &= \Big(3\sin\frac{\varphi_1 + \varphi_2 + \varphi_3}{2} + \sin\frac{\varphi_1 + \varphi_2 - \varphi_3}{2} + \sin\frac{\varphi_1 - \varphi_2 + \varphi_3}{2} + \sin\frac{-\varphi_1 + \varphi_2 + \varphi_3}{2}\Big) \\
&\quad \times \Big(3\cos\frac{\varphi_1 + \varphi_2 + \varphi_3}{2} - \cos\frac{\varphi_1 + \varphi_2 - \varphi_3}{2} - \cos\frac{\varphi_1 - \varphi_2 + \varphi_3}{2} - \cos\frac{-\varphi_1 + \varphi_2 + \varphi_3}{2}\Big); \\
L_{(3.10)} &= \sin\frac{\varphi_1 + \varphi_2}{2}\sin\frac{\varphi_3}{2}\sin\frac{\varphi_1 + \varphi_2 + \varphi_3}{2}; \\
L_{(3.11)} &= \sin\frac{\varphi_1}{2}\sin\frac{\varphi_2}{2}\sin\frac{\varphi_3}{2}\Big(3\sin\frac{\varphi_1 + \varphi_2 + \varphi_3}{2} + \sin\frac{\varphi_1 + \varphi_2 - \varphi_3}{2} \\
&\quad + \sin\frac{\varphi_1 - \varphi_2 + \varphi_3}{2} + \sin\frac{-\varphi_1 + \varphi_2 + \varphi_3}{2}\Big).
\end{aligned} \tag{6.11}$$

The following graphs

$$G\_{(2.1)},\,G\_{(2.3)},\,G\_{(3.1)},\,G\_{(3.3)},\,G\_{(3.5)},\,G\_{(3.6)},\,G\_{(3.10)}$$

have degree two vertices and their polynomials can be obtained from the polynomials for genuine graphs via formula (6.9).

The remaining 10 genuine metric graphs on at most three edges are

$$G_{(1.1)}, G_{(1.2)}, G_{(2.2)}, G_{(2.4)}, G_{(3.2)}, G_{(3.4)}, G_{(3.7)}, G_{(3.8)}, G_{(3.9)}, G_{(3.11)}. \tag{6.12}$$

Note that only one of the secular polynomials ($P_{(3.2)}$) is irreducible. Reducibility of the other polynomials has two reasons:

(1) The graphs

$$G\_{(1.2)}, G\_{(2.2)}, G\_{(2.4)}, G\_{(3.4)}, G\_{(3.7)}, G\_{(3.8)}, G\_{(3.11)},$$

contain loops. The secular polynomials have linear factors $z_n - 1$ corresponding to the loops formed by $E_n$; the number of such factors is equal to the number of loops.

(2) The watermelon graphs $\mathbf{W}_N$, formed by $N$ parallel edges connecting two vertices:

	- on one edge $\mathbf{W}_1 = G_{(1.1)}$;
	- on two edges $\mathbf{W}_2 = G_{(2.3)}$;
	- on three edges $\mathbf{W}_3 = G_{(3.9)}$.

The secular polynomial of a watermelon graph is a product of the polynomials $P^{\rm s}_W$ and $P^{\rm a}_W$, corresponding to eigenfunctions symmetric and antisymmetric with respect to the simultaneous inversion of all edges. Both polynomials have degree 1 in each variable.

In what follows we shall divide the secular polynomial by the linear factors corresponding to the loops; the polynomial obtained, $P^*_G$, will be called the **reduced secular polynomial**:

$$P_G^*(\mathbf{z}) = P_G(\mathbf{z}) \prod_{E_n \text{ is a loop in } G} (z_n - 1)^{-1}. \tag{6.13}$$

In the case of watermelon graphs $G = \mathbf{W}_N$ we shall have two reduced secular polynomials $P^{\rm s}_W$ and $P^{\rm a}_W$ corresponding to symmetric and antisymmetric eigenfunctions:

$$P\_W(\mathbf{z}) = P\_W^s P\_W^\mathbf{a}.$$

All reduced secular polynomials for genuine metric graphs on at most three edges are:

$$\begin{aligned}
P^{\rm s}_{(1.1)} &= z_1 - 1; \qquad P^{\rm a}_{(1.1)} = z_1 + 1; \\
P^*_{(1.2)} &= z_1 - 1; \\
P^*_{(2.2)} &= 3z_1 z_2^2 - z_2^2 + z_1 - 3; \\
P^*_{(2.4)} &= z_1 z_2 - 1; \\
P^*_{(3.2)} &= 3z_1^2 z_2^2 z_3^2 + \left(z_1^2 z_2^2 + z_1^2 z_3^2 + z_2^2 z_3^2\right) - \left(z_1^2 + z_2^2 + z_3^2\right) - 3; \\
P^*_{(3.4)} &= -2z_1^2 z_2^2 z_3 - z_1^2 z_3 - z_2^2 z_3 + z_1^2 + z_2^2 + 2; \\
P^*_{(3.7)} &= 9z_1^2 z_2 z_3 - 3\left(z_1^2 z_2 + z_1^2 z_3\right) - z_2 z_3 + z_1^2 + 3\left(z_2 + z_3\right) - 9; \\
P^*_{(3.8)} &= 5z_1^2 z_2 z_3 + \left(z_1^2 z_2 + z_1^2 z_3\right) + 3z_2 z_3 - 3z_1^2 - \left(z_2 + z_3\right) - 5; \\
P^{\rm s}_{(3.9)} &= 3z_1 z_2 z_3 + \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) - 3; \\
P^{\rm a}_{(3.9)} &= 3z_1 z_2 z_3 - \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) + 3; \\
P^*_{(3.11)} &= 3z_1 z_2 z_3 + \left(z_1 z_2 + z_1 z_3 + z_2 z_3\right) - \left(z_1 + z_2 + z_3\right) - 3.
\end{aligned} \tag{6.14}$$
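Each of the starred reduced polynomials vanishes at the point $\mathbf{1} = (1, 1, 1)$, in line with the enhanced multiplicity of the zero at $\mathbf{1}$ mentioned in property (6) (Theorem 8.2). A quick sympy check:

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

reduced = {
    '(1.2)':  z1 - 1,
    '(2.2)':  3*z1*z2**2 - z2**2 + z1 - 3,
    '(2.4)':  z1*z2 - 1,
    '(3.2)':  3*z1**2*z2**2*z3**2 + (z1**2*z2**2 + z1**2*z3**2 + z2**2*z3**2)
              - (z1**2 + z2**2 + z3**2) - 3,
    '(3.4)': -2*z1**2*z2**2*z3 - z1**2*z3 - z2**2*z3 + z1**2 + z2**2 + 2,
    '(3.7)':  9*z1**2*z2*z3 - 3*(z1**2*z2 + z1**2*z3) - z2*z3 + z1**2
              + 3*(z2 + z3) - 9,
    '(3.8)':  5*z1**2*z2*z3 + (z1**2*z2 + z1**2*z3) + 3*z2*z3 - 3*z1**2
              - (z2 + z3) - 5,
    '(3.11)': 3*z1*z2*z3 + (z1*z2 + z1*z3 + z2*z3) - (z1 + z2 + z3) - 3,
}

ones = {z1: 1, z2: 1, z3: 1}
values = {name: P.subs(ones) for name, P in reduced.items()}
print(values)  # every value is 0
```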

It will be convenient to introduce **reduced Laurent polynomials** $L^*_{(i.j)}$ associated with the reduced secular polynomials $P^*_{(i.j)}$. These polynomials are convenient for plotting zero sets on the real torus $\mathbf{T}^N$. For example we have

$$\begin{aligned} L\_{(3,7)}^{\*} &= 9\sin(\varphi\_1 + \frac{\varphi\_2}{2} + \frac{\varphi\_3}{2}) - 3\sin(\varphi\_1 + \frac{\varphi\_2}{2} - \frac{\varphi\_3}{2}) - 3\sin(\varphi\_1 - \frac{\varphi\_2}{2} + \frac{\varphi\_3}{2}) \\ &+ \sin(\varphi\_1 - \frac{\varphi\_2}{2} - \frac{\varphi\_3}{2}); \\ L\_{(3,8)}^{\*} &= 5\sin(\varphi\_1 + \frac{\varphi\_2 + \varphi\_3}{2}) + \sin(\varphi\_1 + \frac{\varphi\_2}{2} - \frac{\varphi\_3}{2}) + \sin(\varphi\_1 - \frac{\varphi\_2}{2} + \frac{\varphi\_3}{2}) \\ &- 3\sin(\varphi\_1 - \frac{\varphi\_2}{2} - \frac{\varphi\_3}{2}). \end{aligned} \tag{6.15}$$

## **6.3 Zero Sets for Small Graphs**

In addition to the zero sets introduced in (6.7) and (6.8) we shall be interested in the zero sets of the reduced secular polynomials:

$$Z_G^* = \left\{ \mathbf{z} \in \mathbb{T}^N \, : \, P_G^*(\mathbf{z}) = 0 \right\}, \quad \mathbf{Z}_G^* = \left\{ \boldsymbol{\varphi} \in \mathbf{T}^N \, : \, P_G^*(e^{i\boldsymbol{\varphi}}) = 0 \right\}. \tag{6.16}$$

The reduced sets $\mathbf{Z}^{\rm s}_W$ and $\mathbf{Z}^{\rm a}_W$ for watermelon graphs are defined similarly.

Let us describe these zero sets for graphs on one, two and three edges.

**Graphs on One Edge** The zero sets are given by one or two points on the unit circle:

$$\begin{aligned} Z_{G_{(1.1)}} &= \{-1, 1\}, \quad Z^{\rm s}_{G_{(1.1)}} = \{1\}, \quad Z^{\rm a}_{G_{(1.1)}} = \{-1\}, \\ Z_{G_{(1.2)}} &= Z^*_{G_{(1.2)}} = \{1\}, \end{aligned} \tag{6.17}$$

or using real coordinates

$$\begin{aligned} \mathbf{Z}_{G_{(1.1)}} &= \{0, \pi\}, \quad \mathbf{Z}^{\rm s}_{G_{(1.1)}} = \{0\}, \quad \mathbf{Z}^{\rm a}_{G_{(1.1)}} = \{\pi\}, \\ \mathbf{Z}_{G_{(1.2)}} &= \mathbf{Z}^*_{G_{(1.2)}} = \{0\}. \end{aligned} \tag{6.18}$$

**Graphs on Two Edges** The zero sets are curves on the two-dimensional torus. We plot below both the total zero set $\mathbf{Z}_G$ and its subset $\mathbf{Z}^*_G$ determined by the reduced secular polynomial (Figs. 6.5, 6.6, 6.7, and 6.8).

**Graphs on Three Edges** The zero sets are (singular) two-dimensional surfaces on the three-dimensional torus. The following plots show the zero sets **Z** and **Z**∗ using real representation (Figs. 6.9, 6.10, 6.11, 6.12, 6.13, 6.14, 6.15, 6.16, 6.17, 6.18, and 6.19).

**Fig. 6.5** Zero sets $\mathbf{Z}$, $\mathbf{Z}^* = \mathbf{Z}^{\rm s}$ and $\mathbf{Z}^{\rm a}$ for $G_{(2.1)}$

**Fig. 6.6** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(2.2)}$

**Fig. 6.7** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ coincide for $G_{(2.3)}$

Analysing the figures we identify the singular subset $\mathbf{Z}^{\rm sing}_G$, that is, the set of all points on the zero manifolds $\mathbf{Z}^*_G$ corresponding to the reduced factors at which the manifold is not smooth.


**Fig. 6.8** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(2.4)}$

**Fig. 6.9** Zero sets $\mathbf{Z}$, $\mathbf{Z}^{\rm s}$ and $\mathbf{Z}^{\rm a}$ for $G_{(3.1)}$


Points from $\mathbf{Z}^{\rm sing}_G$ always lead to multiple eigenvalues if the line $k \mapsto (k\ell_1, k\ell_2, k\ell_3)$ passes through them. Note that multiple eigenvalues may also occur where the manifold is smooth but doubly degenerate, as happens for $G_{(3.6)}$, or where the zero set for one of the linear factors crosses the zero set for the reduced secular polynomial.

**Fig. 6.10** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ coincide for $G_{(3.2)}$

**Fig. 6.11** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.3)}$

**Fig. 6.12** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.4)}$

**Fig. 6.13** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.5)}$

**Fig. 6.14** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ coincide for $G_{(3.6)}$

**Fig. 6.15** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.7)}$

**Fig. 6.16** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.8)}$

**Fig. 6.17** Zero sets $\mathbf{Z} = \mathbf{Z}^*$, $\mathbf{Z}^{\rm s}$, and $\mathbf{Z}^{\rm a}$ for $G_{(3.9)}$

**Fig. 6.18** Zero sets $\mathbf{Z}$ and $\mathbf{Z}^*$ for $G_{(3.10)}$

**Fig. 6.19** Zero sets $\mathbf{Z}$, $\mathbf{Z}^*$ and $\mathbf{Z}^*$ once more for $G_{(3.11)}$

We summarise our observations as:

**Lemma 6.3** *Consider all graphs on three edges. Then the (singular) two-dimensional zero manifolds $Z^*_G$ determined by the reduced polynomials $P^*_G$ (given by* (6.14)*) have up to* 8 *singular points (if any):*

$$\begin{aligned}
Z^{\text{sing}}_{G_{(3.1)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.2)}} &= \{(\pm i, \pm i, \pm i)\} \ \text{(8 points)}, \\
Z^{\text{sing}}_{G_{(3.3)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.4)}} &= \{(\pm i, \pm i, -1)\} \ \text{(4 points)}, \\
Z^{\text{sing}}_{G_{(3.5)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.6)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.7)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.8)}} &= \{(\pm i, -1, -1)\} \ \text{(2 points)}, \\
Z^{\text{sing}}_{G_{(3.9)}} &= \{(1, 1, 1), (-1, -1, -1)\} \ \text{(2 points)}, \\
Z^{\text{sing}}_{G_{(3.10)}} &= \emptyset, \\
Z^{\text{sing}}_{G_{(3.11)}} &= \{(-1, -1, -1)\} \ \text{(1 point)}.
\end{aligned} \tag{6.19}$$

*The singular subset has thus co-dimension at least* 2 *with respect to Z*∗ *G.*
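The entries of (6.19) can be cross-checked symbolically: a point is singular when the reduced polynomial and all its partial derivatives vanish there. A sympy sketch for the eight points of $G_{(3.2)}$:

```python
import sympy as sp
from itertools import product

z1, z2, z3 = sp.symbols('z1 z2 z3')
P = (3*z1**2*z2**2*z3**2 + (z1**2*z2**2 + z1**2*z3**2 + z2**2*z3**2)
     - (z1**2 + z2**2 + z3**2) - 3)

# all sign combinations (±i, ±i, ±i)
for signs in product([1, -1], repeat=3):
    point = dict(zip((z1, z2, z3), [s*sp.I for s in signs]))
    vals = [P.subs(point)] + [sp.diff(P, v).subs(point) for v in (z1, z2, z3)]
    assert all(sp.simplify(v) == 0 for v in vals)
print('all 8 points are singular')
```

The same check applied to the points listed for $G_{(3.4)}$, $G_{(3.8)}$, $G_{(3.9)}$ and $G_{(3.11)}$ confirms the remaining entries.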

The plotted figures indicate how the zero sets of secular polynomials may look. In order to make our analysis mathematically rigorous it is necessary to prove that the zero manifolds are smooth outside the singular points indicated above. This analysis can be found in the appendix below.

Lemma 6.3 proves one of the conjectures formulated in [160, p. 350] for graphs on three edges: the co-dimension of the singular set on the zero manifold is at least two.

We are interested in the singular points of $Z^*_G$, i.e. in the intersection between the singular set and the torus $\mathbb{T}^N$, as described by the above lemma. Checking the proof it is clear that the listed singular points determine not only the singularities of the zero manifold restricted to the unit torus, but also the singularities $\mathcal{Z}^{\rm sing}_G$ of the zero set in $\mathbb{C}^N$,

$$\mathcal{Z}_G^* = \left\{ \mathbf{z} \in \mathbb{C}^N \, : \, P_G^*(\mathbf{z}) = 0 \right\}.$$

In other words, all singular points for the polynomial as a function on C*<sup>N</sup>* lie on the unit torus

$$Z\_G^{\text{sing}} = \mathcal{Z}\_G^{\text{sing}} \cap \mathbb{T}^N = \mathcal{Z}\_G^{\text{sing}}.$$

# **Appendix 1: Singular Sets on Secular Manifolds, Proof of Lemma 6.3**

This appendix contains the proof of Lemma 6.3. Each of the graphs has to be considered separately, but the analysis is similar. It appears easier to work with the trigonometric polynomials $L(\boldsymbol{\varphi})$ listed in (6.11), since this allows us to use trigonometric identities. The singular set is characterised by four scalar equations: the original secular equation and the three additional scalar equations $\nabla L(\boldsymbol{\varphi}) = \mathbf{0}$.

We need to go through all 11 graphs on three edges. It will be convenient to classify the graphs according to the number of terms in the reduced secular trigonometric polynomial appearing in (6.11).

I **Single frequency**: *G(*3*.*1*), G(*3*.*6*), G(*3*.*10*)*.

It is clear that the zero manifold is given by hyperplanes *ϕ*<sup>1</sup> + *ϕ*<sup>2</sup> + *ϕ*<sup>3</sup> = const*.* Intersections are not possible as the hyperplanes are parallel.

II **Two frequencies**: $G_{(3.3)}$, $G_{(3.5)}$.

• **Graph** $G_{(3.3)}$.

The reduced secular equation depends on just two variables: *x* := *ϕ*<sup>1</sup> +*ϕ*<sup>2</sup> and *y* := *ϕ*3*/*2 :

$$3\sin(x+y) - \sin(x-y) = 0.$$

The singular points are determined by

$$\begin{cases} 3\cos(x+y) - \cos(x-y) = 0, \\ 3\cos(x+y) + \cos(x-y) = 0, \end{cases}$$

leading in particular to

$$\cos(x \pm y) = 0 \Rightarrow \begin{cases} \sin(x+y) = \pm 1, \\ \sin(x-y) = \pm 1, \end{cases}$$

contradicting the secular equation. No singular points occur.

• **Graph** *G(*3*.*5*)*.

The analysis is completely analogous to the case of graph *G(*3*.*3*).*

III **Three frequencies**: *G(*3*.*4*)*. Introducing variables

$$x := \varphi_1 - \varphi_2, \quad y := \varphi_3/2, \quad z := \varphi_1 + \varphi_2 + \varphi_3/2,$$

the equation can be transformed as

$$2\sin z + \sin(x+y) - \sin(x-y) = 0.$$

The singular points are determined by

$$\begin{cases} \cos(x+y) - \cos(x-y) = 0, \\ \cos(x+y) + \cos(x-y) = 0, \\ \cos z = 0, \end{cases}$$

leading to

$$\cos(x+y) = \cos(x-y) = \cos z = 0 \Rightarrow \begin{cases} \sin(x+y) = \pm 1, \\ \sin(x-y) = \pm 1, \\ \sin z = \pm 1. \end{cases}$$

Checking all possible combinations we arrive at the 4 singular points listed above.
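These four points can also be confirmed numerically: in real coordinates they are $\boldsymbol{\varphi} = (\pm\pi/2, \pm\pi/2, \pi)$, i.e. $(z_1, z_2, z_3) = (\pm i, \pm i, -1)$, and both the reduced Laurent polynomial of $G_{(3.4)}$ and its gradient vanish there. A small numpy sketch with a finite-difference gradient:

```python
import numpy as np

def L34(phi):
    # reduced Laurent polynomial of G_(3.4): 2 sin z + sin(x+y) - sin(x-y),
    # rewritten in the original coordinates phi1, phi2, phi3
    p1, p2, p3 = phi
    return (2*np.sin(p1 + p2 + p3/2)
            + np.sin(p1 - p2 + p3/2)
            + np.sin(-p1 + p2 + p3/2))

def grad(f, phi, h=1e-6):
    """Central finite-difference gradient of f at phi."""
    phi = np.asarray(phi, dtype=float)
    g = np.zeros(len(phi))
    for i in range(len(phi)):
        e = np.zeros(len(phi)); e[i] = h
        g[i] = (f(phi + e) - f(phi - e)) / (2*h)
    return g

points = [(s1*np.pi/2, s2*np.pi/2, np.pi) for s1 in (1, -1) for s2 in (1, -1)]
for phi in points:
    print(phi, round(L34(phi), 12), np.round(grad(L34, phi), 6))
```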


IV **Four frequencies**: $G_{(3.2)}$, $G_{(3.7)}$, $G_{(3.8)}$.

• **Graph** $G_{(3.2)}$.

The secular polynomial almost coincides with the symmetric factor for the watermelon graph $G_{(3.9)}$: the change of variables $z_j \mapsto z_j^2$ transforms the two polynomials into each other. The zero set for the watermelon is discussed below.
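This coincidence can be checked with sympy, using the polynomials from (6.14):

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

# symmetric factor of the watermelon W3 = G_(3.9)
Ps = 3*z1*z2*z3 + (z1*z2 + z1*z3 + z2*z3) - (z1 + z2 + z3) - 3
# reduced secular polynomial of G_(3.2)
P32 = (3*z1**2*z2**2*z3**2 + (z1**2*z2**2 + z1**2*z3**2 + z2**2*z3**2)
       - (z1**2 + z2**2 + z3**2) - 3)

sub = Ps.subs({z1: z1**2, z2: z2**2, z3: z3**2}, simultaneous=True)
print(sp.expand(P32 - sub) == 0)  # True: z_j -> z_j^2 maps one onto the other
```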

• **Graph** *G(*3*.*7*).* Introducing new variables

$$x := \varphi_1 + \frac{\varphi_2 - \varphi_3}{2}, \quad y := \varphi_1 - \frac{\varphi_2 - \varphi_3}{2}, \quad z := \varphi_1 + \frac{\varphi_2 + \varphi_3}{2},$$

the equation transforms as

$$9\sin z - 3\sin x - 3\sin y - \sin(z - x - y) = 0. \tag{6.20}$$

The singular points satisfy in addition

$$\begin{cases} -3\cos x + \cos(z - x - y) = 0, \\ -3\cos y + \cos(z - x - y) = 0, \\ 9\cos z - \cos(z - x - y) = 0, \end{cases} \Rightarrow \cos x = \cos y = 3\cos z. \tag{6.21}$$

We examine the two possibilities:

– *x* = *y* leads to the system

$$\begin{cases} 9\sin z - 6\sin x - \sin(z - 2x) = 0, \\ \cos(z - 2x) - 3\cos x = 0, \\ 3\cos z - \cos x = 0. \end{cases} \tag{6.22}$$

To prove that the system has no solutions we first use trigonometric identities to eliminate sin *z* from the first two equations, and then eliminate cos *z* using the last equation in (6.22):

$$
\cos x (10 - 2\cos^2 x)^2 = 4\cos x (1 - \cos^2 x)(9 - \cos^2 x).
$$

The equation may have a solution only if cos *x* = 0. From the last equation in (6.22) we then get cos *z* = 0, implying

$$
\sin z = \pm 1, \quad \sin x = \pm 1, \quad \sin(z - 2x) = \pm 1,
$$

which contradicts the secular equation.

– *x* = −*y*, then (6.20) implies sin *z* = 0, while the last equation in (6.21) implies cos *z* = 0. We arrive at a contradiction.
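The elimination behind the case *x* = *y* can be checked directly: writing $t = \cos^2 x$, the difference of the two sides of the displayed equation equals the constant $64\cos x$, so cos *x* = 0 is indeed forced. A quick numerical sketch (illustrative, not part of the text; the function name is ad hoc):

```python
import random

def elimination_gap(t: float) -> float:
    """Difference of the two sides of the displayed equation,
    divided by cos x, written in the variable t = cos^2 x."""
    return (10 - 2 * t) ** 2 - 4 * (1 - t) * (9 - t)

# The gap is the nonzero constant 64, so the equation forces cos x = 0.
for _ in range(1000):
    t = random.random()  # t = cos^2 x lies in [0, 1]
    assert abs(elimination_gap(t) - 64) < 1e-9
```

Expanding by hand gives the same constant: $(10-2t)^2 - 4(1-t)(9-t) = 100 - 40t + 4t^2 - 36 + 40t - 4t^2 = 64$.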

• **Graph** *G(*3*.*8*).*

We introduce new coordinates

$$x := \varphi\_1 + \frac{\varphi\_2 - \varphi\_3}{2}, \quad y := \varphi\_1 - \frac{\varphi\_2 - \varphi\_3}{2}, \quad z := \varphi\_1 + \frac{\varphi\_2 + \varphi\_3}{2},$$

changing the equation to

$$
5\sin z + \sin x + \sin y + 3\sin(z - x - y) = 0. \tag{6.23}
$$

The singular points are determined by additional equations

$$\begin{cases} \cos x - 3\cos(z - x - y) = 0, \\ \cos y - 3\cos(z - x - y) = 0, \\ 5\cos z + 3\cos(z - x - y) = 0 \end{cases} \Rightarrow \cos x = \cos y = -5\cos z. \tag{6.24}$$

We examine two possibilities again:

– *x* = *y* leads to the system

$$\begin{cases} 5\sin z + 2\sin x + 3\sin(z - 2x) = 0, \\ \cos x - 3\cos(z - 2x) = 0, \\ \cos x + 5\cos z = 0. \end{cases} \tag{6.25}$$

Using trigonometric identities, first eliminating sin *z* from the first two equations and then eliminating cos *z* using the third equation in (6.25), we arrive at

$$
\cos x \,(1 + 3\cos^2 x)^2 = -3\cos x \,(1 - \cos^2 x)(5 + 3\cos^2 x),
$$

which has no solutions unless cos *x* = 0.

The equation cos *x* = 0 together with (6.23) implies

$$\begin{cases} x = y = \frac{\pi}{2} + \pi n, \\ z = -\frac{\pi}{2} + \pi n \end{cases} \Rightarrow \begin{cases} \varphi\_1 = \pm\frac{\pi}{2}, \\ \varphi\_2 = \varphi\_3 = \pi. \end{cases}$$

– *x* = −*y* leads to two contradictory equations coming from (6.23) and the third equation in (6.24)

$$
\sin z = 0, \quad \cos z = 0. \tag{6.26}
$$

We conclude that the singular points are

$$(\varphi\_1, \varphi\_2, \varphi\_3) = (\pm \pi/2, \pi, \pi) \in \mathbf{T}^3.$$
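The singular points can be confirmed numerically: with the coefficients 5 and 3 read off from the systems (6.24)–(6.25), both the secular equation and its gradient vanish at the point corresponding to $(\varphi\_1, \varphi\_2, \varphi\_3) = (\pi/2, \pi, \pi)$. A short sketch (illustrative, not from the text; function names are ad hoc):

```python
import math

# Left-hand side of the secular equation in the coordinates x, y, z
# (coefficients 5 and 3 as suggested by the systems (6.24)-(6.25)).
def F(x, y, z):
    return 5*math.sin(z) + math.sin(x) + math.sin(y) + 3*math.sin(z - x - y)

def grad_F(x, y, z):
    c = math.cos(z - x - y)
    return (math.cos(x) - 3*c, math.cos(y) - 3*c, 5*math.cos(z) + 3*c)

# The claimed singular point (phi_1, phi_2, phi_3) = (pi/2, pi, pi):
p1, p2, p3 = math.pi/2, math.pi, math.pi
x = p1 + (p2 - p3)/2
y = p1 - (p2 - p3)/2
z = p1 + (p2 + p3)/2
assert abs(F(x, y, z)) < 1e-12          # the point lies on the zero set
assert all(abs(g) < 1e-12 for g in grad_F(x, y, z))  # and the gradient vanishes
```

Here $x = y = \pi/2$ and $z = 3\pi/2$, so all three cosines vanish and $5\sin z + 2\sin x + 3\sin(z-2x) = -5 + 2 + 3 = 0$.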

• **Graph** *G(*3*.*9*)*—the watermelon.

The secular polynomial is a product of two irreducible factors. Let us study the zero sets for the factors separately.

**Symmetric eigenfunctions.** We denote by **Z**<sup>s</sup> *G(*3*.*9*)* the zero set corresponding to symmetric eigenfunctions, i.e. the zero set of

$$3\sin\frac{\varphi\_1+\varphi\_2+\varphi\_3}{2} + \sin\frac{\varphi\_1+\varphi\_2-\varphi\_3}{2} + \sin\frac{\varphi\_1-\varphi\_2+\varphi\_3}{2} + \sin\frac{-\varphi\_1+\varphi\_2+\varphi\_3}{2}.$$

Introducing the coordinates

$$x := \frac{\varphi\_1 + \varphi\_2 - \varphi\_3}{2}, \quad y := \frac{\varphi\_1 - \varphi\_2 + \varphi\_3}{2}, \quad z := \frac{\varphi\_1 + \varphi\_2 + \varphi\_3}{2}$$

the equation is written as

$$
3\sin z + \sin x + \sin y + \sin(z - (x + y)) = 0. \tag{6.27}
$$

Taking the gradient and setting it equal to zero we get three scalar equations:

$$\begin{cases} \cos x - \cos(z - (x + y)) = 0, \\ \cos y - \cos(z - (x + y)) = 0, \\ 3\cos z + \cos(z - (x + y)) = 0 \end{cases} \Rightarrow \cos x = \cos y = -3\cos z. \tag{6.28}$$

Two possibilities should be considered:

– *x* = *y* leads to

$$\begin{cases} \cos x - \cos(z - 2x) = 0, \\ 3\sin z + 2\sin x + \sin(z - 2x) = 0, \\ \cos x = -3\cos z. \end{cases} \tag{6.29}$$

Expanding the trigonometric functions, first eliminating sin *z* from the first two equations and then cos *z* using the last equation, we arrive at

$$
\cos x (1 + \cos^2 x)^2 = -\sin^2 x \cos x (3 + \cos^2 x),
$$

which is solvable only if cos *x* = 0. Then the last equation in (6.28) implies that cos *z* = 0 and therefore sin *z* = ±1. It follows from the secular equation (6.27) that

$$x = \mp \frac{\pi}{2} + 2\pi n\_1, \quad y = \mp \frac{\pi}{2} + 2\pi n\_2, \quad z = \pm \frac{\pi}{2} + 2\pi n\_3.$$

These equations determine the unique singular point

$$(\varphi\_1, \varphi\_2, \varphi\_3) = (\pi, \pi, \pi) \in \mathbf{T}^3.$$

– *x* = −*y* substituted into the secular equation (6.27) and the third equation in (6.28) leads to the contradiction (6.26).

**Antisymmetric eigenfunctions.** We denote by **Z**<sup>a</sup> *G(*3*.*9*)* the zero set corresponding to antisymmetric eigenfunctions, that is the zero set of

$$\begin{aligned} 3\cos\frac{\varphi\_1+\varphi\_2+\varphi\_3}{2} - \cos\frac{\varphi\_1+\varphi\_2-\varphi\_3}{2} - \cos\frac{\varphi\_1-\varphi\_2+\varphi\_3}{2} \\ -\cos\frac{-\varphi\_1+\varphi\_2+\varphi\_3}{2} .\end{aligned}$$

Using the same coordinates the equation takes the form

$$3\cos z - \cos x - \cos y - \cos(z - (x + y)) = 0. \tag{6.30}$$

The surface is singular if the gradient is zero:

$$\begin{cases} \sin x - \sin(z - (x + y)) = 0 \\ \sin y - \sin(z - (x + y)) = 0 \\ -3\sin z + \sin(z - (x + y)) = 0 \end{cases} \Rightarrow \sin x = \sin y = 3\sin z. \tag{6.31}$$

We have two possibilities to consider:

– *x* = *y* leads to the system

$$\begin{cases} 3\cos z - 2\cos x - \cos(z - 2x) = 0, \\ 3\sin z - \sin(z - 2x) = 0, \\ \sin x = 3\sin z \end{cases} \tag{6.32}$$

Expanding using trigonometric identities, this time excluding cos *z* from the first two equations and sin *z* using the third equation in (6.31), leads to

$$
-\sin x \,(1 + \sin^2 x)^2 = \sin x \cos^2 x \,(3 + \sin^2 x).
$$

The equation has a solution only if sin *x* = 0 and hence sin *z* = 0. Taking into account the secular equation (6.30) we conclude that there are just two solutions

$$x = y = z = 0 \quad \text{and} \quad x = y = z = \pi,$$

determining just one singular point on the torus

$$(\varphi\_1, \varphi\_2, \varphi\_3) = (0, 0, 0) \in \mathbf{T}^3.$$

This is just the Dirac point clearly seen in Fig. 6.17.
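The change of coordinates and the singular points found for both factors are easy to verify numerically; the following sketch (illustrative, not part of the text; function names are ad hoc) checks that the symmetric factor agrees with its form in the coordinates *x, y, z* and that the two factors vanish at $(\pi,\pi,\pi)$ and $(0,0,0)$ respectively:

```python
import math, random

# Symmetric factor of the watermelon secular polynomial, written both in
# the angles phi_j and in the coordinates x, y, z.
def sym_phi(p1, p2, p3):
    return (3*math.sin((p1 + p2 + p3)/2) + math.sin((p1 + p2 - p3)/2)
            + math.sin((p1 - p2 + p3)/2) + math.sin((-p1 + p2 + p3)/2))

def sym_xyz(p1, p2, p3):
    x, y, z = (p1 + p2 - p3)/2, (p1 - p2 + p3)/2, (p1 + p2 + p3)/2
    return 3*math.sin(z) + math.sin(x) + math.sin(y) + math.sin(z - (x + y))

# The two expressions agree identically.
for _ in range(200):
    p = [random.uniform(-math.pi, math.pi) for _ in range(3)]
    assert abs(sym_phi(*p) - sym_xyz(*p)) < 1e-9

# Antisymmetric factor in the angles phi_j.
def antisym_phi(p1, p2, p3):
    return (3*math.cos((p1 + p2 + p3)/2) - math.cos((p1 + p2 - p3)/2)
            - math.cos((p1 - p2 + p3)/2) - math.cos((-p1 + p2 + p3)/2))

# The singular points found above lie on the respective zero sets.
assert abs(sym_phi(math.pi, math.pi, math.pi)) < 1e-12
assert abs(antisym_phi(0.0, 0.0, 0.0)) < 1e-12
```

At $(\pi,\pi,\pi)$ one has $x = y = \pi/2$, $z = 3\pi/2$, so the symmetric factor equals $-3 + 1 + 1 + 1 = 0$; at the origin the antisymmetric factor equals $3 - 1 - 1 - 1 = 0$.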


The polynomial coincides with the polynomial describing the symmetric eigenfunctions for *G(*3*.*9*)* and has already been studied in full detail.

**Problem 32** Show that the singular set **Z**sing *G(*3*.*5*)* is empty.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 7 Reducibility of Secular Polynomials**

Analysing graphs on up to three edges we noted that the secular polynomials are reducible if and only if the graphs either contain loops or are watermelon graphs. It is our goal in this chapter to prove this statement for arbitrary finite graphs (Theorem 7.19). We are going to use contractions and extensions of graphs allowing us to consider just a few specially chosen families of graphs together with graphs on up to three edges. These families are studied as a preparation for the proof of the factorisation theorem.

# **7.1 Contraction of Graphs**

In this section we discuss what happens to secular polynomials as edges in the discrete graph are contracted so that the edge endpoints are merged together. It turns out that there exists an explicit formula connecting the secular polynomials for a graph and its contraction. This formula explains how reducibility of secular polynomials for large graphs can be traced to contracted graphs.

**Definition 7.1** Let $\Gamma$ be a metric graph on *N* edges with *M* vertices and pick any edge $E\_{n\_0} = [x\_{2n\_0-1}, x\_{2n\_0}]$ in $\Gamma$. Then the graph $\Gamma'$, the **contraction** of $\Gamma$ obtained by deleting the edge $E\_{n\_0}$, is the graph on *N* − 1 edges defined as follows:

(1) If $E\_{n\_0}$ connects two vertices $V^{m\_0'}$ and $V^{m\_0''}$, then

$$\begin{array}{l} V^{m}(\Gamma') = V^{m}(\Gamma), \quad m \neq m\_0', m\_0'';\\ V^{m\_0'}(\Gamma') = \left( V^{m\_0'}(\Gamma) \cup V^{m\_0''}(\Gamma) \right) \setminus \{x\_{2n\_0-1}, x\_{2n\_0}\}. \end{array} \tag{7.1}$$

$\Gamma'$ is a graph with *M* − 1 vertices (there is no vertex $V^{m\_0''}(\Gamma')$).

(2) If $E\_{n\_0}$ is a loop attached to $V^{m\_0}(\Gamma)$, then

$$\begin{aligned} V^m(\Gamma') &= V^m(\Gamma), \quad m \neq m\_0;\\ V^{m\_0}(\Gamma') &= V^{m\_0}(\Gamma) \setminus \{x\_{2n\_0-1}, x\_{2n\_0}\}. \end{aligned} \tag{7.2}$$

$\Gamma'$ has *M* vertices.

The contracted graph will be denoted by

$$
\Gamma' = \Gamma \backslash\_{E\_{n\_0}}. \tag{7.3}
$$

The definition can be extended to the case where several edges or even a part of the original graph are deleted simultaneously. When contracting edges it will be convenient to indicate which edges are preserved in the graph. We shall denote by $G\_{i\_1,\dots,i\_{N\_1}}$ the graph obtained from $G$ by contracting all edges $E\_j$ with $j \neq i\_1, i\_2, \dots, i\_{N\_1}$.

It was more convenient to define contraction for metric graphs, but the definition can be used for discrete graphs as well. We are going to use the same notation in the discrete case.

Examining examples of secular polynomials presented above (6.10) we observe an explicit formula connecting the secular polynomial of a discrete graph and its contraction. For example we have:

$$\begin{aligned} P\_{(3,2)} &= 3z\_1^2 z\_2^2 z\_3^2 + \left(z\_1^2 z\_2^2 + z\_1^2 z\_3^2 + z\_2^2 z\_3^2\right) - \left(z\_1^2 + z\_2^2 + z\_3^2\right) - 3\\ \rightarrow\_{z\_3 \to 1} P\_{(2,1)}(z\_1, z\_2) &= z\_1^2 z\_2^2 - 1\\ \rightarrow\_{z\_2 \to 1} P\_{(1,1)}(z\_1) &= z\_1^2 - 1. \end{aligned} \tag{7.4}$$

Remember that the secular polynomials are treated projectively since we are interested in their zero sets. If the contracted edge forms a loop, the original polynomial contains the factor $(z\_{n\_0} - 1)$; nevertheless the limit holds in the projective sense, *e.g.*

$$\lim\_{z\_1 \to 1} P\_{(2,2)}(z\_1, z\_2) = \lim\_{z\_1 \to 1} \left( (z\_1 - 1)(3z\_1 z\_2^2 - z\_2^2 + z\_1 - 3) \right)$$

$$= 2(z\_2^2 - 1) = P\_{(1,1)}(z\_2). \tag{7.5}$$

These two examples are just illustrations of the general formula proven below.
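The limits (7.4) and (7.5) are easy to check numerically; the following sketch (illustrative, not part of the text; the function names mirror the graph labels) confirms that setting the contracted variable to 1 reproduces the contracted polynomial up to a nonzero constant factor:

```python
# Secular polynomials transcribed from (6.10)/(7.4)-(7.5).
def P32(z1, z2, z3):
    return (3 * (z1 * z2 * z3) ** 2
            + (z1**2 * z2**2 + z1**2 * z3**2 + z2**2 * z3**2)
            - (z1**2 + z2**2 + z3**2) - 3)

def P21(z1, z2):
    return z1**2 * z2**2 - 1

def P22_star(z1, z2):
    # P_{(2,2)} = (z1 - 1) * P22_star: the loop edge E_1 contributes
    # the linear factor (z1 - 1)
    return 3 * z1 * z2**2 - z2**2 + z1 - 3

def P11(z):
    return z**2 - 1

for z1, z2 in [(0.3, -1.7), (2.0, 0.5), (-1.1, 1.4)]:
    # (7.4): setting z3 = 1 yields P_{(2,1)} up to the constant factor 4
    assert abs(P32(z1, z2, 1.0) - 4 * P21(z1, z2)) < 1e-9
    # (7.5): after the factor (z1 - 1) is divided out, z1 = 1 yields 2 P_{(1,1)}
    assert abs(P22_star(1.0, z2) - 2 * P11(z2)) < 1e-9
```

The constant factors 4 and 2 are exactly the ones that disappear in the projective formalism.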

**Lemma 7.2** *Let G be a finite connected discrete graph, and let G*\*Ej be its contraction by deleting the edge Ej . Then the secular polynomials for the graphs are connected via* 

$$P\_{G\backslash\_{E\_j}}(z\_1,\ldots,\hat{z}\_j,\ldots,z\_N) = \lim\_{z\_j \to 1} P\_G(z\_1,\ldots,z\_N),\tag{7.6}$$

*where z*ˆ*<sup>j</sup> means that zj is not present in the list and the limit is taken projectively (see Corollary 7.5).* 

*Proof* We assume that the graph has several edges and that the edge *EN* is deleted. The two cases where *EN* forms a loop or not should be considered separately.

**Case A: Contracting Non-Loop** Assume that the edge *EN* is not a loop. Without loss of generality we denote by *d*<sup>1</sup> and *d*<sup>2</sup> the degrees of the vertices connected by *EN .* Then following (6.4) the secular polynomial is given by the determinant

$$P\_G(\mathbf{z}) = \det\begin{bmatrix} \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \cdots & \frac{d\_2 - 2}{d\_2} & \* - \frac{2}{d\_2} & 0 & 0 & \dots & -\frac{2}{d\_2} & 0\\ \cdots & \* - \frac{2}{d\_2} & \frac{d\_2 - 2}{d\_2} & \* & 0 & \dots & -\frac{2}{d\_2} & 0\\ \cdots & 0 & \* & \frac{d\_1 - 2}{d\_1} & \* - \frac{2}{d\_1} & \dots & 0 & -\frac{2}{d\_1}\\ \cdots & 0 & 0 & \* - \frac{2}{d\_1} & \frac{d\_1 - 2}{d\_1} & \dots & 0 & -\frac{2}{d\_1}\\ \cdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots\\ \cdots & -\frac{2}{d\_2} & -\frac{2}{d\_2} & 0 & 0 & \dots & \frac{d\_2 - 2}{d\_2} & z\_N\\ \cdots & 0 & 0 & -\frac{2}{d\_1} & -\frac{2}{d\_1} & \dots & z\_N & \frac{d\_1 - 2}{d\_1} \end{bmatrix},$$

where the symbols ∗ indicate possible entries containing *z*1*,...,zN*−1*.* To calculate *PG(z*1*,...,zN*−1*,* 1*)* explicitly we apply a variant of Gauß elimination to diagonalise the lowest principal 2 × 2 block, allowing us then to eliminate all other entries in the last two columns:

$$P\_G(z\_1, z\_2, \dots, z\_{N-1}, 1)$$

$$=\det\begin{bmatrix}\cdot & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \cdot & \frac{d\_2-2}{d\_2} & \*-\frac{2}{d\_2} & 0 & 0 & \dots & -\frac{2}{d\_2} & 0\\ \cdot & \dots & -\frac{2}{d\_2} & \frac{d\_2-2}{d\_2} & \* & 0 & \dots & -\frac{2}{d\_2} & 0\\ \cdot & \dots & 0 & \* & \frac{d\_1-2}{d\_1} & \*-\frac{2}{d\_1} & \dots & 0 & -\frac{2}{d\_1}\\ \cdot & \dots & 0 & 0 & \* -\frac{2}{d\_1} & \frac{d\_1-2}{d\_1} & \dots & 0 & -\frac{2}{d\_1}\\ \cdot & \dots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ \cdot & \dots & -\frac{2}{d\_2} & -\frac{2}{d\_2} & 0 & 0 & \dots & \frac{d\_2-2}{d\_2} & 1\\ \cdot & 0 & 0 & -\frac{2}{d\_1} & -\frac{2}{d\_1} & \dots & 1 & \frac{d\_1-2}{d\_1} \end{bmatrix}$$

Suitable linear combinations of the last two rows bring the lowest principal $2 \times 2$ block to the form $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ at the cost of the common factor $\frac{2(d\_1+d\_2-2)}{d\_1 d\_2}$; the resulting rows are then used to eliminate all other entries in the last two columns. The entries associated with the endpoints joined by $E\_N$ combine into the entries of the secular matrix of the contracted graph, whose merged vertex has degree $d\_1 + d\_2 - 2$: the diagonal entries become $1 - \frac{2}{d\_1 + d\_2 - 2}$ and the off-diagonal entries $-\frac{2}{d\_1 + d\_2 - 2}$. Expanding the determinant along the last two rows produces an additional factor $-1$.

We see that

$$P\_G(z\_1, \ldots, z\_{N-1}, 1) = -\frac{2(d\_1 + d\_2 - 2)}{d\_1 d\_2} P\_{G'}(z\_1, \ldots, z\_{N-1}),\tag{7.7}$$

since the secular polynomial for the contracted graph is given by the determinant of the main $(2N-2) \times (2N-2)$ block of the transformed matrix. The constant factor is neglected in the projective formalism. Note that the degree of the new vertex is positive, $d = d\_1 + d\_2 - 2 > 0$, unless $G$ is formed by a single edge, which is excluded.

**Case B: Contracting a Loop** Assume that the edge $E\_N$ forms a loop attached to a vertex of degree $d$. Note that $d \geq 3$, since otherwise the graph $G$ is not connected. We again calculate the secular polynomial explicitly, first subtracting the $(2N-1)$-st column from the last column:

$$P\_G(\mathbf{z}) = \det\begin{bmatrix} \ddots & \vdots & \vdots & & \vdots & \vdots \\ \cdots & \frac{d-2}{d} & *-\frac{2}{d} & \cdots & -\frac{2}{d} & -\frac{2}{d} \\ \cdots & *-\frac{2}{d} & \frac{d-2}{d} & \cdots & -\frac{2}{d} & -\frac{2}{d} \\ & \vdots & \vdots & \ddots & \vdots & \vdots \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & \frac{d-2}{d} & z\_N-\frac{2}{d} \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & z\_N-\frac{2}{d} & \frac{d-2}{d} \end{bmatrix} = \det\begin{bmatrix} \ddots & \vdots & \vdots & & \vdots & \vdots \\ \cdots & \frac{d-2}{d} & *-\frac{2}{d} & \cdots & -\frac{2}{d} & 0 \\ \cdots & *-\frac{2}{d} & \frac{d-2}{d} & \cdots & -\frac{2}{d} & 0 \\ & \vdots & \vdots & \ddots & \vdots & \vdots \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & \frac{d-2}{d} & z\_N-1 \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & z\_N-\frac{2}{d} & 1-z\_N \end{bmatrix}$$

$$= (z\_N-1)\det\begin{bmatrix} \ddots & \vdots & \vdots & & \vdots & \vdots \\ \cdots & \frac{d-2}{d} & *-\frac{2}{d} & \cdots & -\frac{2}{d} & 0 \\ \cdots & *-\frac{2}{d} & \frac{d-2}{d} & \cdots & -\frac{2}{d} & 0 \\ & \vdots & \vdots & \ddots & \vdots & \vdots \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & \frac{d-2}{d} & 1 \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & z\_N-\frac{2}{d} & -1 \end{bmatrix}.$$

When the factor *zN* − 1 is extracted calculating the remaining determinant one may put *zN* = 1 and as before use a variant of Gauß elimination for the lowest 2 × 2 principal block:

$$\det\begin{bmatrix} \ddots & \vdots & \vdots & & \vdots & \vdots \\ \cdots & \frac{d-2}{d} & *-\frac{2}{d} & \cdots & -\frac{2}{d} & 0 \\ \cdots & *-\frac{2}{d} & \frac{d-2}{d} & \cdots & -\frac{2}{d} & 0 \\ & \vdots & \vdots & \ddots & \vdots & \vdots \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & \frac{d-2}{d} & 1 \\ \cdots & -\frac{2}{d} & -\frac{2}{d} & \cdots & \frac{d-2}{d} & -1 \end{bmatrix}$$

Adding the $(2N-1)$-st row to the last row annihilates its last entry and doubles its remaining entries, so a factor $2$ can be extracted; the entry $1$, now the only nonzero entry in the last column, is used to eliminate the $(2N-1)$-st row and column. In the remaining determinant the rows and columns corresponding to the loop disappear, the degree of its vertex drops to $d-2$, and the associated diagonal entries become $\frac{d-4}{d-2}$ with off-diagonal entries $-\frac{2}{d-2}$, at the cost of an additional factor $-\frac{d-2}{d}$. Altogether

$$= -2\frac{d-2}{d}P\_{G'}(z\_1, \dots, z\_{N-1}),$$

since again the secular polynomial for the contracted graph coincides with the *(*2*N*− 2*)* × *(*2*N* − 2*)* main minor. We conclude that the limit (7.6) holds in the projective sense.

**Corollary 7.3** *The secular polynomials for the graphs G and G′, where G′ is obtained from G by contracting several edges, are related via the following formula*

$$P\_{G'}(z\_1, \ldots, \hat{z}\_{j\_1}, \ldots, \hat{z}\_{j\_2}, \ldots, \hat{z}\_{j\_K}, \ldots, z\_N) = \lim\_{\substack{z\_{j\_1} \to \ 1 \\ \ldots \\ z\_{j\_K} \to 1}} P\_G(z\_1, \ldots, z\_N), \quad (7.8)$$

*where $z\_{j\_1}, \dots, z\_{j\_K}$ correspond to the edges $E\_{j\_1}, \dots, E\_{j\_K}$ that are contracted. The limit is again taken in the projective sense.*

**Corollary 7.4** *From the proof of the last lemma we see that the secular polynomial never contains $(z\_j - 1)^2$ as a factor, unless the graph is a simple loop.*

**Corollary 7.5** *It is convenient to modify the formula for the secular polynomial of the contracted graph so as to avoid the limits:*

• *If the edge $E\_j$ does not form a loop in $G$, i.e. $P\_G(z\_1, \dots, \underbrace{1}\_{j}, \dots, z\_N) \not\equiv 0$, then*

$$P\_{G'}(z\_1, \ldots, \hat{z\_j}, \ldots, z\_N) = P\_G(z\_1, \ldots, \underbrace{1}\_{j}, \ldots, z\_N). \tag{7.9}$$

• *If the edge $E\_j$ forms a loop in $G$, i.e. $P\_G(z\_1, \dots, z\_N) = (z\_j - 1) P\_G^*(z\_1, \dots, z\_N)$, then*

$$P\_{G'}(z\_1, \ldots, \hat{z}\_j, \ldots, z\_N) = P\_G^\*(z\_1, \ldots, \underbrace{1}\_{j}, \ldots, z\_N). \tag{7.10}$$

**Corollary 7.6** *Let G be a finite connected discrete graph, then the secular polynomial PG contains factor zj* − 1 *if and only if the edge Ej forms a loop.* 

*Proof* Let us examine the proof of Lemma 7.2: if the original graph contains a loop formed by $E\_j$, then the secular polynomial contains the factor $z\_j - 1$. In all other cases the secular polynomial is not identically zero for $z\_j = 1$ and hence contains no linear factor $z\_j - 1$.

Lemma 7.2 answers the following very important question:

*What happens to the secular polynomial under the graph's contraction?* 

The Lemma implies in particular that reducibility of the secular polynomial is preserved, unless one of the factors depends entirely on the contracted variables. This principle will allow us, as far as reducibility is concerned, to pass from large graphs down to their contractions with a small number of edges, thus reducing the problem to a relatively small set of graphs on three edges.

The following lemma will play an important role, allowing us to reduce further the number of graphs that have to be considered. It turns out that it is enough to work with genuine graphs even though degree two vertices may appear under contraction.<sup>1</sup>

**Lemma 7.7** *Any genuine graph on at least four edges can be contracted to a genuine graph on three edges.* 

*Proof* Assume that *G* is a genuine graph on at least four edges. To prove the lemma it is enough to show that one of the edges in *G* can be contracted leading to a smaller genuine graph. Then contracting edges one-by-one we arrive at a genuine graph on three edges.

<sup>1</sup> Secular polynomials for graphs with degree two vertices are given by formula (6.9).

Contraction of an edge leads to a non-genuine graph only if the contracted edge is either


In all other cases the contracted graph is again genuine, provided the original graph was genuine.

Hence a genuine graph cannot be contracted to another genuine graph only if all its edges are either pendants attached to degree two vertices or loops attached to degree four vertices. All possible such graphs are:


All graphs from the list have at least three edges. 

## **7.2 Extensions of Graphs**

Let us now discuss what happens when one goes in the opposite direction, from small to large graphs. We need to formalise which graphs should be discussed starting from a particular small graph.

**Definition 7.8** Let $\Gamma$ be a metric graph on *N* edges having *M* vertices. We say that the graph $\Gamma'$ is an **extension** of $\Gamma$ obtained by adding the edge $E\_{N+1}$ if and only if $\Gamma'$ is a graph on *N* + 1 edges such that

$$
\Gamma = \Gamma' \backslash\_{E\_{N+1}}.\tag{7.11}
$$

There are two mechanisms by which the extension can be achieved:

(1) **inserting an edge by splitting a vertex**: the new edge $E\_{N+1} = [x\_{2N+1}, x\_{2N+2}]$ breaks a certain vertex $V^{m\_0}$ into two vertices $V^{m'}$ and $V^{m''}$, so that the corresponding equivalence class is first divided

$$V^{m\_0}(\Gamma) = (V^{m\_0})' \cup (V^{m\_0})'', \quad (V^{m\_0})' \cap (V^{m\_0})'' = \emptyset$$

and then extended by adding the endpoints *x*2*N*+<sup>1</sup> and *x*2*N*+<sup>2</sup> separately

$$V^{m'}(\Gamma') = (V^{m\_0})' \cup \{x\_{2N+1}\}, \quad V^{m''}(\Gamma') = (V^{m\_0})'' \cup \{x\_{2N+2}\};$$

(2) **adding a loop**: the new edge $E\_{N+1}$ forms a loop attached to a certain existing vertex $V^{m\_0}$ in $\Gamma$, so that the endpoints $x\_{2N+1}$ and $x\_{2N+2}$ are added to the

corresponding equivalence class

$$V^{m\_0}(\Gamma') = V^{m\_0}(\Gamma) \cup \{x\_{2N+1}, x\_{2N+2}\}.$$

Note that **adding a pendant edge** to a vertex $V^{m\_0}$ can be achieved by formally dividing the equivalence class as $V^{m\_0}(\Gamma) = V^{m\_0}(\Gamma) \cup \emptyset$ and then adding the endpoints $x\_{2N+1}$ and $x\_{2N+2}$ to $V^{m\_0}$ and to the empty set respectively.

**Appending an edge** between two existing vertices of $\Gamma$ is not an extension, since contraction of this edge leads to a graph where the two vertices are glued together.

When extending a graph one has either to specify which vertex is going to be split and how, or to which vertex a loop will be attached. Extension of graphs thus gives much more freedom than contraction, which is uniquely determined by specifying the edge to be contracted. Let us illustrate this freedom by considering all possible extensions of the loop graph G(1.2) (see Fig. 7.1).

The graph G(1.2) has a single vertex; hence its extensions are obtained either by adding a loop (graph G(2.4)), by adding a pendant edge (graph G(2.2)), or by splitting the vertex into two (graph G(2.3)).

Consider now the graph G(2.2). It has two vertices, hence extensions are obtained by adding a loop or a pendant edge to each of these vertices (the graphs G(3.7), G(3.8), G(3.3), and G(3.4)). Splitting the central vertex leads to the graph G(3.5) and once more to the graph G(3.3). Extensions of the other two graphs G(2.4) and G(2.3) are marked by different colours.

Note that the graphs G(3.1) and G(3.2) never appear as extensions of G(1.2); the reason is simple: these two graphs are trees, while the starting graph contains one cycle.

Looking at the diagram presented in Fig. 7.1 we see that some of the graphs on three edges may be obtained from different graphs on two edges. The number of cycles is always preserved unless the extension is obtained by adding a loop. What is more interesting is that if the resulting graph is a genuine metric graph on three edges (contains no degree two vertices) and not a watermelon graph *G(*3*.*9*)*, then the secular polynomial contains only factors corresponding to loops. In other words, potential factors which can be observed for non-genuine graphs on two edges disappear under extension.

Graph contractions will be used to prove reducibility of secular polynomials, which is preserved under contractions in the sense of Lemma 7.2. Graph extensions, on the contrary, will be used to prove irreducibility of secular polynomials for large graphs.

Several of our proofs will be based on the following simple argument. Assume that we want to prove that the secular polynomial $p\_G$ for a large graph $G$ on $N$ edges possesses a certain property connected with reducibility of polynomials. Consider then the contracted graph $G\_{i\_1,i\_2,\dots,i\_n}$, $n \leq N$, where $\{i\_j\}\_{j=1}^{n} \subset \{1, 2, \dots, N\}$ is the set of preserved edges. Depending on whether the secular polynomial for the contracted graph possesses the same reducibility property or not, one may draw conclusions concerning reducibility of $p\_G$. If all $G\_{i\_1,i\_2,\dots,i\_n}$ possess the required property, then we are done. Otherwise one needs to go up and check extensions of the contracted graph; of course, only extensions compatible with the original graph $G$ should be considered, *i.e.* extensions of $G\_{i\_1,i\_2,\dots,i\_n}$ which are at the same time contractions of $G$. Quite often the contracted graph contains degree two vertices, which are excluded from our studies, while the original graph $G$ cannot contain such vertices. This allows us to conclude that among all contractions of $G$ there is an extension of $G\_{i\_1,i\_2,\dots,i\_n}$ in which a pendant edge or a loop is attached to each degree two vertex. Here we need to look just at the extensions for which the degree of the vertex increases. We shall always be dealing with graphs on two or three edges, which makes the total number of graphs that have to be analysed finite.

We present two lemmas illustrating our approach.

**Lemma 7.9** *The secular polynomial for any graph G on two or more edges and without degree two vertices can be written as a product of two factors depending (non-trivially) on the same two variables (and possibly on other variables) only if the corresponding edges belong to a simple cycle, i.e. a cycle passing through each of its vertices just once, or both of them form loops.*

**Remark 7.10** Note that two parallel edges belong to the simple cycle formed by these two edges.

*Proof* Let us start by examining all graphs on two edges. We need to check genuine graphs only: the lasso graph *G(*2*.*2*)* and the figure eight graph *G(*2*.*4*)*. The second graph possesses the desired property: the secular polynomial can be factorised into factors depending on both variables *z*<sup>1</sup> and *z*2. The first graph does not possess the desired property since the secular polynomial is a product of the linear factor *(z*<sup>1</sup> − 1*)* and an irreducible factor. We conclude that the lemma not only holds for all (genuine) graphs on two edges, but the statement can be formulated using *if and only if*.

We proceed to arbitrary graphs. Without loss of generality assume that the variables in question are $z_1$ and $z_2$. Contract the original graph $G$ to $G_{1,2}$ by collapsing all the edges except $E_1$ and $E_2$. Let us consider all possible graphs on two edges with their secular polynomials listed in (6.10). We shall look at their extensions.

(1) $G_{1,2} = G(2.1)$, the segment.

Using the argument formulated above we conclude that among compatible extensions of $G_{1,2}$ one finds either the graph $G_{1,2,3} = G(3.2)$ or $G_{1,2,3} = G(3.4)$. The secular polynomials for both graphs cannot be written as a product of two factors both depending on $z_1$ and $z_2$. Thus no extension of the graph $G(2.1)$ possesses the desired property.


**Fig. 7.2** Graph $G(2.4)$

The two loops are not preserved if and only if the extensions of $G(2.4)$ are obtained by splitting the central vertex $V = \{x_1, x_2, x_3, x_4\}$ into two (see Fig. 7.2). We are interested in the splittings that destroy at least one of the loops; in fact there are just two possibilities (up to the exchange of the edges and their orientations):

(a) $V \to \{x_1, x_4\} \cup \{x_2, x_3\}$; (b) $V \to \{x_1, x_2, x_4\} \cup \{x_3\}$.

In case (a) the edges are parallel in the extension; then these edges are either parallel in $G$, or there exists another extension of $G_{1,2}$ with the splitting of $V$ as in case (b). In other words, it is sufficient to study case (b).

Assume that the vertex $V$ is split into $V^1 = \{x_1, x_2, x_4\}$ and $V^2 = \{x_3\}$. Then among the extensions of $G_{1,2}$ compatible with $G$ there is a graph where these two vertices are connected either by a single edge or by a watermelon graph on $d \geq 2$ edges. In the first case, in accordance with the principle formulated above, among the extensions of $G_{1,2}$ compatible with $G$ there is a graph with either a pendant edge or a loop attached to the degree two vertex containing $x_3$. In the second case one of the extensions is the watermelon with a loop graph described below. Let us check all three possibilities:

• Among contractions of $G$ there is a graph on four edges given by the loop formed by $E_2$ and $E_3$ with the pendant edge $E_4$ and the loop $E_1$ attached at the two vertices (see Fig. 7.3). Here and below (see Fig. 7.4) we use a certain particular enumeration of all graphs on four edges but refrain from listing all such graphs. The secular polynomial is

$$
\begin{aligned}
P_{(4.15)} = (z_1 - 1) \Big( & 6 z_1 z_2^2 z_3^2 z_4^2 - 4 z_1 z_2 z_3 z_4^2 - z_1 (z_2^2 + z_3^2) z_4^2 + 2 z_1 z_2^2 z_3^2 \\
& + (z_2^2 + z_3^2) z_4^2 - 4 z_2 z_3 (1 + z_1 + z_4^2) + z_1 (z_2^2 + z_3^2) \\
& - (z_2^2 + z_3^2) + 2 z_4^2 + 6 \Big),
\end{aligned}
$$

where the second factor is irreducible. The secular polynomial cannot be written as a product of two factors both depending on $z_1$ and $z_2$.

**Fig. 7.4** Graph $G(4.22)$

• Among contractions of $G$ there is a graph on four edges given by the loop formed by $E_2$ and $E_3$ with the loops $E_1$ and $E_4$ attached at the two vertices (see Fig. 7.4).

The secular polynomial is

$$
\begin{aligned}
P_{(4.22)} = (z_1 - 1)(z_4 - 1) \Big( & 4 z_1 z_2^2 z_3^2 z_4 - 2 z_1 z_2 z_3 z_4 - z_1 (z_2^2 + z_3^2) z_4 \\
& + (z_1 z_2^2 + z_1 z_3^2 + z_2^2 z_4 + z_3^2 z_4) - 2 (z_1 z_2 z_3 + z_2 z_3 z_4) \\
& - (z_2^2 + z_3^2) - 2 z_2 z_3 + 4 \Big),
\end{aligned}
$$

where the last factor is irreducible; the secular polynomial cannot be written as a product of two polynomials both depending on $z_1$ and $z_2$.

• Among contractions of $G$ there is a graph on $d + 2$ edges formed by the loop $E_1$ and $d + 1$ parallel edges. This graph is a watermelon with a loop graph $\mathbf{W}_d\mathbf{L}$, and we return to it in Example 7.13 in the following section. It will be proven that the secular polynomial is a product of $(z_1 - 1)$ and a certain irreducible polynomial.

It follows that the secular polynomial for $G$ can be written as a product of two factors both depending on $z_1$ and $z_2$ only if the edges $E_1$ and $E_2$ belong to a cycle or both edges form loops.


The lemma does not imply that the secular polynomial for graphs where two edges belong to the same cycle can be written as a product of two factors both depending on both variables. On the other hand, if the two edges form loops, then we already know that the secular polynomial contains the factors $(z_1 - 1)$ and $(z_2 - 1)$, hence it can be written as a product of two factors both depending on $z_1$ and $z_2$.

The result for two variables can be extended to the case of three variables as follows.

**Lemma 7.11** *If the secular polynomial for a graph $G$ on three or more edges and without degree two vertices can be written as a product of two factors, both depending on the same three variables, say $z_1, z_2, z_3$, then the contracted graph $G_{1,2,3}$ is either the flower graph $\mathbf{F}_3 = G(3.11)$ or the watermelon graph $\mathbf{W}_3 = G(3.9)$.*

*Proof* Using Lemma 7.9 we conclude that the edges $E_1, E_2,$ and $E_3$ are arranged in one of the following ways (up to permutation of the edges):

(1) three-flower graph $\mathbf{F}_3 = G(3.11)$;

(2) watermelon graph $\mathbf{W}_3 = G(3.9)$;

(3) the graph $G(3.10)$;

(4) the loop graph $G(3.6)$.

The first two graphs are genuine graphs on three edges, while the last two are not. We need to show that among the extensions of $G(3.10)$ and $G(3.6)$ compatible with $G$ there are no graphs with the desired factorisation property.

The graph $G(3.10)$ contains a degree two vertex, hence among its extensions compatible with $G$ we have either the graph $G(4.15)$ or $G(4.22)$, the graphs obtained by attaching a pendant edge or a loop to the degree two vertex in $G(3.10)$ (see Figs. 7.3 and 7.4). Both graphs have already been considered in the proof of Lemma 7.9, where it was shown that the secular polynomial is not factorisable into polynomials both depending on $z_1$ and $z_2$. Hence the polynomial for $G(3.10)$ is also not factorisable into factors depending on the three variables.

Let us turn to the loop graph $G(3.6)$. Its vertices have degree two and therefore cannot be preserved in $G$. Using the argument formulated above, we conclude that for each vertex $V^j$ there exists an edge, say $E_{3+j}$, forming a loop or a pendant edge in $G_{1,2,3,3+j}$. Altogether there are 4 possibilities (up to permutation of edges) that have to be considered:

(1) pendant edges are attached at all three vertices (graph $G(6.1)$);

(2) pendant edges are attached at two vertices and a loop at the third (graph $G(6.2)$);

(3) a pendant edge is attached at one vertex and loops at the other two (graph $G(6.3)$);

(4) loops are attached at all three vertices (graph $G(6.4)$).

These graphs are plotted in Fig. 7.5.

Find below the corresponding secular polynomials maximally factorised:

$$P_{(6.1)} = \big(\ \text{a long polynomial in } z_1, \dots, z_6 \text{ with no linear factors}\ \big),$$

$$P_{(6.2)} = (-1 + z_6) \big(\ \text{a long polynomial in } z_1, \dots, z_6\ \big),$$

$$P_{(6.3)} = (-1 + z_5)(-1 + z_6) \big(\ \text{a long polynomial in } z_1, \dots, z_6\ \big).$$

$$
\begin{aligned}
P_{(6.4)} = (-1 + z_4)&(-1 + z_5)(-1 + z_6) \\
\big( & 4 - z_1^2 - z_2^2 - z_1 z_2 z_3 - z_3^2 + z_1^2 z_4 + z_2^2 z_4 - z_1^2 z_2^2 z_4 - z_1 z_2 z_3 z_4 + z_2^2 z_5 \\
& - z_1 z_2 z_3 z_5 + z_3^2 z_5 - z_2^2 z_3^2 z_5 - z_2^2 z_4 z_5 + z_1^2 z_2^2 z_4 z_5 - z_1 z_2 z_3 z_4 z_5 \\
& + z_2^2 z_3^2 z_4 z_5 + z_1^2 z_6 - z_1 z_2 z_3 z_6 + z_3^2 z_6 - z_1^2 z_3^2 z_6 - z_1^2 z_4 z_6 + z_1^2 z_2^2 z_4 z_6 \\
& - z_1 z_2 z_3 z_4 z_6 + z_1^2 z_3^2 z_4 z_6 - z_1 z_2 z_3 z_5 z_6 - z_3^2 z_5 z_6 + z_1^2 z_3^2 z_5 z_6 + z_2^2 z_3^2 z_5 z_6 \\
& - z_1^2 z_2^2 z_4 z_5 z_6 - z_1 z_2 z_3 z_4 z_5 z_6 - z_1^2 z_3^2 z_4 z_5 z_6 - z_2^2 z_3^2 z_4 z_5 z_6 \\
& + 4 z_1^2 z_2^2 z_3^2 z_4 z_5 z_6 \big).
\end{aligned}
$$

The last factors in all four polynomials are irreducible. Let us illustrate this by considering the first polynomial $P_{(6.1)}$. The reduction $G_{4,5,6}$ of this graph is the star graph $G(3.2)$ with an irreducible secular polynomial. Hence one of the factors in the hypothetical factorisation of $P_{(6.1)}$ should be independent of $z_4, z_5, z_6$:

$$P_{(6.1)}(\mathbf{z}) = \underbrace{P_1(z_1, z_2, z_3)}_{=(z_1 z_2 z_3 - 1)} P_2(z_1, z_2, z_3, z_4, z_5, z_6).$$

The first polynomial must be equal to $(z_1 z_2 z_3 - 1)$ due to the unique factorisation of $P_{(3.6)} = (z_1 z_2 z_3 - 1)^2$. To see that $(z_1 z_2 z_3 - 1)$ is not a factor of $P_{(6.1)}$ it is enough to calculate

$$P_{(6.1)}(-2, -1/2, 1, z_4, z_5, z_6) = -9 + 9 z_4^2 + 9 z_5^2 z_6^2 - 9 z_4^2 z_5^2 z_6^2 \neq 0,$$

despite $z_1 z_2 z_3 = (-2) \times (-1/2) \times 1 = 1$. Irreducibility of the remaining three polynomials is proven in the same way.
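The same substitution test can be run symbolically. The sketch below (assuming sympy is available) applies it to $P_{(6.4)}$, whose explicit form is printed above: evaluating at $z_1 = -2$, $z_2 = -1/2$, $z_3 = 1$, where $z_1 z_2 z_3 = 1$, yields a polynomial in $z_4, z_5, z_6$ that does not vanish identically, so $(z_1 z_2 z_3 - 1)$ is not a factor of $P_{(6.4)}$ either.

```python
import sympy as sp

z1, z2, z3, z4, z5, z6 = sp.symbols('z1:7')

# the bracket of P_(6.4), copied term by term from the displayed formula
bracket = (4 - z1**2 - z2**2 - z1*z2*z3 - z3**2
           + z1**2*z4 + z2**2*z4 - z1**2*z2**2*z4 - z1*z2*z3*z4 + z2**2*z5
           - z1*z2*z3*z5 + z3**2*z5 - z2**2*z3**2*z5 - z2**2*z4*z5
           + z1**2*z2**2*z4*z5 - z1*z2*z3*z4*z5 + z2**2*z3**2*z4*z5
           + z1**2*z6 - z1*z2*z3*z6 + z3**2*z6 - z1**2*z3**2*z6
           - z1**2*z4*z6 + z1**2*z2**2*z4*z6 - z1*z2*z3*z4*z6 + z1**2*z3**2*z4*z6
           - z1*z2*z3*z5*z6 - z3**2*z5*z6 + z1**2*z3**2*z5*z6 + z2**2*z3**2*z5*z6
           - z1**2*z2**2*z4*z5*z6 - z1*z2*z3*z4*z5*z6 - z1**2*z3**2*z4*z5*z6
           - z2**2*z3**2*z4*z5*z6 + 4*z1**2*z2**2*z3**2*z4*z5*z6)

P64 = (z4 - 1)*(z5 - 1)*(z6 - 1)*bracket

# substitute a point with z1*z2*z3 = 1; a genuine factor (z1*z2*z3 - 1)
# would force the result to vanish identically, which it does not
check = sp.expand(P64.subs({z1: -2, z2: sp.Rational(-1, 2), z3: 1}))
```

The nonzero remainder rules out the candidate factor, exactly as the displayed evaluation of $P_{(6.1)}$ does.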

We see that the factorisation into factors depending on all three variables $z_1, z_2, z_3$ is always destroyed. The secular polynomials contain only factors caused by single-edge loops present in the corresponding graphs.

We conclude that $G_{1,2,3}$ is either the flower graph $\mathbf{F}_3 = G(3.11)$ or the watermelon graph $\mathbf{W}_3 = G(3.9)$.

# **7.3 Secular Polynomials for the Watermelon Graph and Its Closest Relatives**

This section contains three further examples of graphs to be used in the proof of Theorem 7.19 below.

**Example 7.12** Watermelon graph $\mathbf{W}_d$ formed by $d$ parallel edges connecting two vertices $V^1$ and $V^2$ (Fig. 7.6).

**Fig. 7.6** Watermelon graph $\mathbf{W}_d$

To determine the secular equation we observe that for an arbitrary choice of edge lengths the graph possesses a symmetry: it is invariant under the simultaneous inversion of all edges. Therefore the eigenfunctions can be divided into two classes: symmetric (invariant under the symmetry transformation) and anti-symmetric (multiplied by $-1$ when the symmetry transformation is applied). To describe the spectra corresponding to these two classes of eigenfunctions let us introduce the matrix

$$Z\_d(\mathbf{z}) = \text{diag}\ (z\_1, z\_2, \dots, z\_d). \tag{7.12}$$

Then the zero sets are given by the equations

• for symmetric eigenfunctions

$$P\_{\mathbf{W}\_d}^s(\mathbf{z}) := \det \left( Z\_d(\mathbf{z}) - S\_d^{\mathrm{st}} \right) = 0; \tag{7.13}$$

• for anti-symmetric eigenfunctions

$$P\_{\mathbf{W}\_d}^a(\mathbf{z}) := \det \left( Z\_d(\mathbf{z}) + S\_d^{\mathrm{st}} \right) = 0. \tag{7.14}$$

Here $S_d^{\mathrm{st}}$ is the vertex scattering matrix for degree $d$ vertices already introduced in (3.41). It follows that the secular polynomial $P_{\mathbf{W}_d}$ can be factorised as

$$P\_{\mathbf{W}\_d}(\mathbf{z}) = P\_{\mathbf{W}\_d}^s(\mathbf{z}) P\_{\mathbf{W}\_d}^a(\mathbf{z}),$$

where $P^a_{\mathbf{W}_d}$ and $P^s_{\mathbf{W}_d}$ are polynomials of first degree in each variable, invariant under permutations of the variables. Such polynomials are reducible only if they are products of identical one-variable factors, which is not the case. It follows that both $P^s_{\mathbf{W}_d}$ and $P^a_{\mathbf{W}_d}$ are irreducible.
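The factorisation can be checked symbolically. A minimal sketch (assuming sympy is available and using the standard-conditions scattering matrix $S_d^{\mathrm{st}} = \frac{2}{d} J - I$ from (3.41), with $J$ the all-ones matrix):

```python
import sympy as sp

def watermelon_factors(d):
    """Return the symmetric and anti-symmetric factors (7.13)-(7.14) for W_d."""
    z = sp.symbols(f'z1:{d + 1}')
    Z = sp.diag(*z)                                    # Z_d(z), cf. (7.12)
    S = sp.Rational(2, d) * sp.ones(d, d) - sp.eye(d)  # S^st at a degree-d vertex
    Ps = sp.expand((Z - S).det())   # zero set of symmetric eigenfunctions
    Pa = sp.expand((Z + S).det())   # zero set of anti-symmetric eigenfunctions
    return Ps, Pa

Ps2, Pa2 = watermelon_factors(2)
```

For $d = 2$ both factors equal $z_1 z_2 - 1$, so $P_{\mathbf{W}_2} = (z_1 z_2 - 1)^2$, the squared factor familiar from cycles of two edges.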

**Problem 33** Justify formulas (7.13) and (7.14) using the fact that for symmetric and antisymmetric eigenfunctions the corresponding amplitudes satisfy

$$a\_{2j-1} = \pm a\_{2j}, \quad b\_{2j-1} = \pm b\_{2j}.$$

**Example 7.13** Watermelon with a loop graph $\mathbf{W}_d\mathbf{L}$ formed by $d \geq 3$ parallel edges between the vertices $V^1$ and $V^2$ and one loop $E_{d+1}$ attached to $V^2$ (Fig. 7.7).

**Fig. 7.7** Watermelon with a loop graph **W***d***L**

Existence of the loop implies that the secular polynomial can be written as

$$P\_{\mathbf{W}\_d\mathbf{L}}(\mathbf{z}) = (z\_{d+1} - 1)P\_{\mathbf{W}\_d\mathbf{L}}^\*(\mathbf{z}) .$$

Our goal is to show that $P^*_{\mathbf{W}_d\mathbf{L}}$ is irreducible. Assume on the contrary that it is reducible and can be written as a product of two factors:

$$P\_{\mathbf{W}\_d\mathbf{L}}^\*(\mathbf{z}) = \mathcal{Q}(\mathbf{z})\ R(\mathbf{z})\,.$$

Note that $P^*_{\mathbf{W}_d\mathbf{L}}$ is at most linear in $z_{d+1}$, since the original polynomial is quadratic in each variable; hence only one of the two polynomials in the factorisation depends on $z_{d+1}$, say

$$\mathcal{Q} = \mathcal{Q}(z\_1, \dots, z\_d, z\_{d+1}), \quad \mathcal{R} = \mathcal{R}(z\_1, \dots, z\_d).$$

When the loop edge $E_{d+1}$ is contracted the graph $\mathbf{W}_d\mathbf{L}$ turns into $\mathbf{W}_d$ formed by $d$ parallel edges, hence we have

$$\mathcal{Q}(z_1, \ldots, z_d, 1)\, R(z_1, \ldots, z_d) = P_{\mathbf{W}_d}^s(z_1, \ldots, z_d)\, P_{\mathbf{W}_d}^a(z_1, \ldots, z_d).$$

Since the polynomials associated with $\mathbf{W}_d$ are irreducible, the polynomial $R$ coincides with either $P^s_{\mathbf{W}_d}$ or $P^a_{\mathbf{W}_d}$.

Assume first that $R(\mathbf{z}) = P^s_{\mathbf{W}_d}(\mathbf{z})$, which in particular implies that the eigenvalues corresponding to symmetric eigenfunctions of $\mathbf{W}_d$ are always present in the spectrum of $\mathbf{W}_d\mathbf{L}$, independently of the length $\ell_{d+1}$. Consider the eigenfunctions on $\mathbf{W}_d\mathbf{L}$ corresponding to these eigenvalues. Some of these eigenfunctions must be different from zero at $V^2$. If all such eigenfunctions were equal to zero at $V^2$, then they would be identically equal to zero on $E_{d+1}$, since the length of this edge is arbitrary. That would imply that their restrictions to the $d$ parallel edges are eigenfunctions on $\mathbf{W}_d$. For any choice of $\ell_n$, $n = 1, 2, \dots, d$, the corresponding eigenvalues must coincide with the eigenvalues of the symmetric eigenfunctions on $\mathbf{W}_d$, which would imply that the restrictions of the eigenfunctions on $\mathbf{W}_d\mathbf{L}$ coincide with the symmetric eigenfunctions on $\mathbf{W}_d$. That would imply that all symmetric eigenfunctions on $\mathbf{W}_d$ are equal to zero at $V^2$, and hence at $V^1$ by symmetry, which in turn would imply that all functions from the domain of the Laplacian on $\mathbf{W}_d$ attain opposite values at the vertices, $u(V^1) = -u(V^2)$, which is not the case. (Remember that every function from the operator domain can be written as a sum of symmetric and antisymmetric functions.)


We conclude that the Laplacian on $\mathbf{W}_d\mathbf{L}$ has an eigenvalue which is independent of $\ell_{d+1}$ and whose eigenfunction is not equal to zero at $V^2$. To prove that this is also impossible we calculate the secular equation based on the Titchmarsh-Weyl M-functions. Let us assume that the parallel edges are parametrised in the direction from $V^2$ to $V^1$ as intervals $[0, \ell_n]$ and the loop edge as $[0, \ell_{d+1}]$. Every eigenfunction $\psi$ is a solution to the differential equation $-\psi''(x) = k^2 \psi(x)$ and is easily determined by the function values at the vertices $\psi_m = \psi(V^m)$, $m = 1, 2$:

$$\psi(x) = \begin{cases} \dfrac{\sin kx}{\sin k\ell_n}\, \psi_1 - \dfrac{\sin k(x - \ell_n)}{\sin k\ell_n}\, \psi_2, & x \in E_n, \ n = 1, 2, \dots, d; \\[3mm] \dfrac{\sin kx - \sin k(x - \ell_{d+1})}{\sin k\ell_{d+1}}\, \psi_2, & x \in E_{d+1}. \end{cases} \tag{7.15}$$

For any values of $\psi_1$ and $\psi_2$ the above formula determines a continuous function on the graph. It remains to check that the Kirchhoff conditions are satisfied:

$$\begin{aligned} \partial \psi(V^1) &= (-k) \left( \sum\_{n=1}^d \cot k \ell\_n \psi\_1 - \sum\_{n=1}^d \frac{1}{\sin k \ell\_n} \psi\_2 \right) &= 0; \\ \partial \psi(V^2) &= k \left( \sum\_{n=1}^d \frac{1}{\sin k \ell\_n} \psi\_1 - \sum\_{n=1}^d \cot k \ell\_n \psi\_2 \right) + 2k \tan k \frac{\ell\_{d+1}}{2} \ \psi\_2 = 0. \end{aligned} \tag{7.16}$$

Eliminating $\psi_1$ and $\psi_2$ from this system of two linear equations we arrive at the secular equation

$$2\tan k \frac{\ell\_{d+1}}{2} = \frac{\left(\sum\_{n=1}^{d} \cot k\ell\_n\right)^2 - \left(\sum\_{n=1}^{d} \frac{1}{\sin k\ell\_n}\right)^2}{\sum\_{n=1}^{d} \cot k\ell\_n}.\tag{7.17}$$

The left-hand side is an explicit function of $\ell_{d+1}$, while the right-hand side contains no $\ell_{d+1}$, implying that the solutions $k_j = k_j(\ell_1, \dots, \ell_d, \ell_{d+1})$ to the above equation always depend on $\ell_{d+1}$.
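The elimination step leading from (7.16) to (7.17) can be reproduced with a short computer algebra sketch (sympy assumed available); here $C$, $S$ and $t$ abbreviate $\sum_{n=1}^d \cot k\ell_n$, $\sum_{n=1}^d 1/\sin k\ell_n$ and $\tan(k\ell_{d+1}/2)$, and the overall factors $\pm k$ in (7.16) are dropped:

```python
import sympy as sp

C, S, t = sp.symbols('C S t')

# the linear system (7.16) in (psi_1, psi_2), written as M @ (psi_1, psi_2)^T = 0
M = sp.Matrix([[-C,  S],
               [ S, -C + 2*t]])

# a nontrivial pair (psi_1, psi_2) exists iff det M = 0
secular = sp.expand(M.det())            # C**2 - 2*C*t - S**2

# solving det M = 0 for 2*t reproduces the right-hand side of (7.17)
rhs = 2 * sp.solve(sp.Eq(secular, 0), t)[0]
```

Indeed, `rhs` simplifies to $(C^2 - S^2)/C$, which is exactly the right-hand side of (7.17).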

The same method can be applied to the case $R(\mathbf{z}) = P^a_{\mathbf{W}_d}(\mathbf{z})$ with the only difference that the symmetric eigenfunctions on $\mathbf{W}_d$ should be substituted with the anti-symmetric ones.

We have arrived at a contradiction, proving that the polynomial $P^*_{\mathbf{W}_d\mathbf{L}}$ is irreducible.

Note that for $d = 2$ the graph $\mathbf{W}_2\mathbf{L}$ coincides with the figure-eight graph and the secular polynomial is reducible.

**Example 7.14** Watermelon on a stick graph $\mathbf{W}_d\mathbf{I}$ on three vertices formed by $d$ parallel edges between the vertices $V^1$ and $V^2$ and one edge $E_{d+1}$ between $V^2$ and $V^3$ (Fig. 7.8).

**Fig. 7.8** Watermelon on a stick graph **W***<sup>d</sup>* **I**

We shall prove that the secular polynomial is irreducible. Assume on the contrary that

$$P_{\mathbf{W}_d\mathbf{I}}(\mathbf{z}) = Q_{\mathbf{W}_d\mathbf{I}}(\mathbf{z})\, R_{\mathbf{W}_d\mathbf{I}}(\mathbf{z}).$$

The polynomials depend on the variables $z_1, \dots, z_d$ since the factors for $\mathbf{W}_d$ depend on all variables.

If one of the polynomials $Q_{\mathbf{W}_d\mathbf{I}}(\mathbf{z})$, $R_{\mathbf{W}_d\mathbf{I}}(\mathbf{z})$ is independent of $z_{d+1}$, then it coincides with either $P^s_{\mathbf{W}_d}$ or $P^a_{\mathbf{W}_d}$ and we can repeat the analysis carried out for $\mathbf{W}_d\mathbf{L}$ with the only difference that the secular equation (7.17) should be substituted with

$$\tan k\ell\_{d+1} = \frac{\left(\sum\_{n=1}^{d} \cot k\ell\_n\right)^2 - \left(\sum\_{n=1}^{d} \frac{1}{\sin k\ell\_n}\right)^2}{\sum\_{n=1}^{d} \cot k\ell\_n}.\tag{7.18}$$

If both polynomials $Q_{\mathbf{W}_d\mathbf{I}}$ and $R_{\mathbf{W}_d\mathbf{I}}$ depend on $z_{d+1}$, consider the graph obtained from $\mathbf{W}_d\mathbf{I}$ by contracting all the edges except $E_1$, $E_2$, and $E_{d+1}$. This graph coincides with $G(3.8)$ considered above; its secular polynomial is not reducible into factors both containing $z_{d+1}$, hence we get a contradiction proving the statement.

The considered examples lead us to the following statement.

**Lemma 7.15** *Let $G$ be any extension of the watermelon graph $\mathbf{W}_d$ on $d$ edges $E_1, \dots, E_d$. Assume that $G$ is genuine, i.e. contains no degree two vertices. Then the secular polynomial $P_G$ never admits a factorisation of the form*

$$P_G(\mathbf{z}) = P^1(\mathbf{z}_1)\, P^2(\mathbf{z}), \quad \mathbf{z}_1 = (z_1, z_2, \dots, z_d), \tag{7.19}$$

*where $P^1$ depends on all variables $z_1, z_2, \dots, z_d$, unless the extension is trivial and $G = \mathbf{W}_d$.*

*Proof* If the original watermelon graph $\mathbf{W}_d$ is preserved during the extension, i.e. $\mathbf{W}_d$ is a subgraph of $G$, then the contracted graph $G_{1,\dots,d,d+1}$ coincides either with $\mathbf{W}_d\mathbf{L}$ considered in Example 7.13 or with the graph $\mathbf{W}_d\mathbf{I}$ considered in Example 7.14. In the first case the secular polynomial is a product of the linear factor $(z_{d+1} - 1)$ and an irreducible polynomial depending on all variables $z_1, \dots, z_{d+1}$. In the second case the secular polynomial is irreducible and depends on all variables including $z_{d+1}$. This contradicts factorisation (7.19).

**Fig. 7.9** Extended watermelon graph

If $\mathbf{W}_d$ is not preserved during the extension, then among its compatible extensions there exists a graph, say $G_{1,\dots,d,d+1}$, obtained from $\mathbf{W}_d$ by adding the edge $E_{d+1}$ to one of the parallel branches, say to the branch formed by $E_1$ (see Fig. 7.9).

This graph contains a degree two vertex connecting $E_1$ and $E_{d+1}$. In accordance with the argument above, among the compatible extensions of $G_{1,\dots,d+1}$ there exists a graph obtained by attaching a pendant edge or a loop to this vertex. Let us denote this edge by $E_{d+2}$. Then the graph $G_{1,d+1,d+2}$ on three edges coincides either with $G(3.5)$ or with $G(3.10)$. The corresponding secular polynomials do not contain factors depending entirely on the variable $z_1$, contradicting factorisation (7.19).

# **7.4 Secular Polynomials for Flower Graphs and Their Extensions**

Let us prove that the secular polynomials for flower graphs possess only trivial factorisation via linear factors corresponding to the loops.

**Lemma 7.16** *The secular polynomial for the flower graph $\mathbf{F}_d$ on $d$ edges joined together at a single vertex is given by*

$$P\_{\mathbf{F}\_d} = (z\_1 - 1) \dots (z\_d - 1) P\_{\mathbf{F}\_d}^\*(\mathbf{z}),\tag{7.20}$$

*where $P^*_{\mathbf{F}_d}$ is an irreducible polynomial of first degree in each variable.*

*Proof* Each loop given by $E_j$ determines the linear factor $(z_j - 1)$ in the secular polynomial, hence factorisation (7.20) is already proven. It remains to show that $P^*$ is irreducible. Assume on the contrary that $P^*$, which is of first degree in each variable, is reducible into two factors, one containing $z_1$ and the other containing $z_2$. Consider the reduction $G_{1,2}$, which is nothing else than the figure eight graph $G(2.4)$ with

$$P\_{(2.4)} = (z\_1 - 1)(z\_2 - 1)(z\_1 z\_2 - 1).$$

The polynomial $P^*_{G(2.4)} = z_1 z_2 - 1$ is irreducible, leading to a contradiction.
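The maximal factorisation of $P_{(2.4)}$ used above is easy to confirm symbolically; a small sketch assuming sympy is available:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# figure-eight polynomial P_(2.4), expanded to hide its factored form
P24 = sp.expand((z1 - 1)*(z2 - 1)*(z1*z2 - 1))

# maximal factorisation over the integers
coeff, factors = sp.factor_list(P24)
```

`factor_list` recovers exactly the three factors $(z_1 - 1)$, $(z_2 - 1)$ and $(z_1 z_2 - 1)$; in particular the factor $z_1 z_2 - 1$ does not split further.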

**Lemma 7.17** *Let $G$ be any extension of the flower graph $\mathbf{F}_d$ on $d$ edges $E_1, \dots, E_d$. Assume that $G$ is genuine, i.e. contains no degree two vertices. Then the secular polynomial $P_G$ admits a factorisation of the form*

$$P_G(\mathbf{z}) = P^1(\mathbf{z}_1)\, P^2(\mathbf{z}), \quad \mathbf{z}_1 = (z_1, z_2, \dots, z_d), \tag{7.21}$$

*with $P^1$ depending on all variables $z_1, z_2, \dots, z_d$, if and only if the edges $E_j$, $j = 1, 2, \dots, d$, form loops in $G$. In that case the polynomial $P^1(\mathbf{z}_1)$ is a product of first degree factors:*

$$P^{1}(\mathbf{z}_{1}) = (z_1 - 1)(z_2 - 1)\cdots(z_d - 1).$$

*Proof* It is clear that if the loops formed by *E*<sub>*j*</sub>*, j* = 1*,* 2*,...,d,* are preserved in the extension, then representation (7.21) holds and the polynomial *P*<sup>1</sup> is given by the product of first degree polynomials *(z*<sub>*j*</sub> − 1*)* associated with the loops. It remains to study the case where the loops are destroyed under the extension. During an extension every cycle in a graph is either preserved or gains additional edges, hence it is always possible to trace the original cycles to any extension of the graph; such an extension can in turn always be obtained by adding edges one by one.

In the graph *G* consider the cycles originating from the loops in **F**<sub>*d*</sub>. Every such cycle contains one edge *E*<sub>*j*</sub> and perhaps a few additional edges. If no cycle contains additional edges, then all the edges *E*<sub>*j*</sub>*, j* = 1*,...,d,* form loops. We have already considered this case.

Assume without loss of generality that the cycle containing *E*<sub>1</sub> also contains an additional edge *E*<sub>*d*+1</sub>. We need to study two alternatives:


## **7.5 Reducibility of Secular Polynomials for General Graphs**

Our goal in this section is to prove that secular polynomials can always be reduced into a product of linear factors *(z*<sub>*n*</sub> − 1*)* corresponding to the loops and an irreducible polynomial, the only exception being the watermelon graphs **W**<sub>*d*</sub>.

**Lemma 7.18** *If the secular polynomial can be presented as a product of two factors* 

$$P\_G(\mathbf{z}) = \mathcal{Q}(\mathbf{z})\ R(\mathbf{z}),\tag{7.22}$$

*then at least one of the factors depends on all variables zn.*

*Proof* Assume the opposite: there exist two variables, say *z*<sub>1</sub> and *z*<sub>2</sub>, such that *Q* depends on *z*<sub>1</sub> but is independent of *z*<sub>2</sub>, while *R* depends on *z*<sub>2</sub> but is independent of *z*<sub>1</sub>. Consider the contracted graph *G*<sub>1,2</sub> defined following (7.3) and the corresponding secular polynomial *P*<sub>*G*<sub>1,2</sub></sub>, which could then be presented as a product of two single variable polynomials. Checking all graphs on two edges we see that this is impossible. 

We are prepared to prove Colin de Verdière's conjecture [160]—the main result of this section.

**Theorem 7.19** *Let G be a connected finite graph without degree two vertices. The secular polynomial for G is reducible if and only if the corresponding metric graph admits a non-trivial symmetry group for any choice of the edge lengths, in other words if and only if G contains loops or G is a watermelon graph* **W**<sub>*N*</sub> *formed by N parallel edges.* 

*Moreover, if the secular polynomial is reducible, then the following formulas hold* 

• *if G contains loops, then* 

$$P_G(\mathbf{z}) = \left(\prod_{\substack{E_n \text{ is a loop in } G}} (z_n - 1)\right) P_G^*(\mathbf{z}),\tag{7.23}$$

*where the product is over the loop edges and the polynomial P*∗ *<sup>G</sup> is irreducible and is of order one in the variables corresponding to loop edges;* 

• *if G is a watermelon graph* **W**<sub>*N*</sub>*, then the secular polynomial P*<sub>*G*</sub> *is a product of the irreducible polynomials P*<sup>*s*</sup><sub>**W**<sub>*N*</sub></sub> *and P*<sup>*a*</sup><sub>**W**<sub>*N*</sub></sub> *given by* (7.13) *and* (7.14) *and having order one in each variable* 

$$P_{\mathbf{W}_N}(\mathbf{z}) = P_{\mathbf{W}_N}^s(\mathbf{z})\, P_{\mathbf{W}_N}^a(\mathbf{z}). \tag{7.24}$$

*Proof* We start by proving that the metric graphs Γ corresponding to a given discrete graph *G* admit a non-trivial symmetry group for any choice of the edge lengths if and only if the discrete graph either contains loops or is a watermelon graph. Note that we assume that the graphs have no vertices of degree two. Since the edge lengths are arbitrary we may take them rationally independent. Then the metric graph Γ admits a symmetry group only if the elements of the group map every edge to itself. The only nontrivial such transformation is the one reversing the edge orientation. Consider any nontrivial symmetry of Γ; then there exists an edge, say *E*<sub>1</sub>, whose orientation is reversed. There are two possibilities:


It is easy to see that the metric graphs corresponding to the watermelon graph **W**<sub>*N*</sub> and to graphs with loops always possess a non-trivial symmetry transformation (given by the reversal of all edges and the reversal of the loops, respectively). We have seen that the corresponding secular polynomials are reducible, implying that every graph with a non-trivial symmetry group for any choice of edge lengths leads to a reducible secular polynomial.

Assume now that the secular polynomial is reducible. Taking into account Lemma 7.18, any possible factorisation can be written as

$$P\_G(\mathbf{z}) = \mathcal{Q}(z\_1, \dots, z\_{N\_1}) \ R(z\_1, \dots, z\_N), \tag{7.25}$$

with a certain *N*<sup>1</sup> ≤ *N*. Both polynomials *Q* and *R* are linear in the first *N*<sup>1</sup> variables. Using the unique factorisation in the ring of polynomials one may always assume that the polynomial *Q* is chosen maximal, *i.e.* depending on the maximal possible number of variables.

The polynomial *R* has to be irreducible.<sup>2</sup> Assume the opposite: the polynomial *R* is reducible as

$$R(\mathbf{z}) = R^{1}(\mathbf{z})\,R^{2}(\mathbf{z}).\tag{7.26}$$

The polynomial *R* is of first degree in the variables **z**<sub>1</sub> := *(z*<sub>1</sub>*,...,z*<sub>*N*<sub>1</sub></sub>*)* and of second degree in **z**<sub>2</sub> := *(z*<sub>*N*<sub>1</sub>+1</sub>*,...,z*<sub>*N*</sub>*)*.

The new factors *R*<sup>1</sup> and *R*<sup>2</sup> cannot be of second degree in any of the variables. Assume on the contrary that *R*<sup>1</sup> is quadratic in *z*<sub>*N*<sub>2</sub></sub>*, z*<sub>*N*<sub>2</sub>+1</sub>*,...,z*<sub>*N*</sub>, where *N*<sub>1</sub> <

<sup>2</sup> This property does not automatically follow from the maximality of *Q*. Consider for example the polynomial *P* = *(z*<sub>1</sub> − *z*<sub>2</sub>*)(z*<sub>2</sub> − *z*<sub>3</sub>*)(z*<sub>3</sub> − *z*<sub>1</sub>*).* In any factorisation of this polynomial of the form (7.25) the factor *R* is reducible. Fortunately for our analysis this polynomial does not appear as the secular polynomial of any graph.

*N*<sub>2</sub> ≤ *N*; then *R*<sup>2</sup> is independent of *z*<sub>*N*<sub>2</sub></sub>*,...,z*<sub>*N*</sub> and we have

$$P\_G(\mathbf{z}) = \underbrace{\left(\mathcal{Q}(\mathbf{z}\_1)\,\boldsymbol{R}^2(\mathbf{z})\right)}\_{\text{independent of } z\_{N\_2}, \dots, z\_N} \times \boldsymbol{R}^1(\mathbf{z})\,.$$

Lemma 7.18 then implies that *R*<sup>1</sup> depends on all the variables **z**, so that *R*<sup>2</sup> is independent of **z**<sub>1</sub> = *(z*<sub>1</sub>*, z*<sub>2</sub>*,...,z*<sub>*N*<sub>1</sub></sub>*)*. It follows that *R*<sup>2</sup> is a first degree polynomial in the variables *z*<sub>*N*<sub>1</sub>+1</sub>*,...,z*<sub>*N*<sub>2</sub>−1</sub>, hence the polynomial *Q* in the factorisation (7.25) is not maximal.

We conclude that the polynomials *R*<sup>1</sup> and *R*<sup>2</sup> in the hypothetical factorisation (7.26) are linear in each of the variables *z*<sub>*N*<sub>1</sub>+1</sub>*,...,z*<sub>*N*</sub>. If one of these polynomials were independent of the first *N*<sub>1</sub> variables **z**<sub>1</sub> = *(z*<sub>1</sub>*,...,z*<sub>*N*<sub>1</sub></sub>*)*, then the factorisation (7.25) would not be maximal. Therefore we conclude that *R*<sup>1</sup> and *R*<sup>2</sup> depend on variables from both sets **z**<sub>1</sub> and **z**<sub>2</sub>. It follows that there exist three variables

$$z\_j, z\_k, z\_l, \quad j, k \le N\_1 < l,$$

such that *R*<sup>1</sup> depends on *z*<sub>*j*</sub> and *z*<sub>*l*</sub> but is independent of *z*<sub>*k*</sub>, while *R*<sup>2</sup> depends on *z*<sub>*k*</sub> and *z*<sub>*l*</sub> but is independent of *z*<sub>*j*</sub>. Consider the contraction of the discrete graph *G* to the graph *G*<sub>*j,k,l*</sub> on the three edges *E*<sub>*j*</sub>*, E*<sub>*k*</sub>*, E*<sub>*l*</sub>. The factorisation (7.25) together with the hypothetical factorisation (7.26) implies that the secular polynomial possesses the factorisation

$$P_{G_{j,k,l}} = \mathcal{Q}_{G_{j,k,l}}(z_j, z_k)\, R_{G_{j,k,l}}^{1}(z_j, z_l)\, R_{G_{j,k,l}}^{2}(z_k, z_l),$$

where *Q*<sub>*G*<sub>*j,k,l*</sub></sub> and *R*<sup>*i*</sup><sub>*G*<sub>*j,k,l*</sub></sub>, *i* = 1, 2, are the reductions of the polynomials *Q* and *R*<sup>*i*</sup>. Among all secular polynomials for graphs on three edges none admits a factorisation compatible with the one above.

We conclude that in the factorisation (7.25) the polynomial *R(***z***)* depending on all variables is irreducible.
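The cautionary example from the footnote above, *P* = *(z*<sub>1</sub> − *z*<sub>2</sub>*)(z*<sub>2</sub> − *z*<sub>3</sub>*)(z*<sub>3</sub> − *z*<sub>1</sub>*)*, can be inspected symbolically; a short sympy sketch (our own illustration) confirming that every irreducible factor misses exactly one variable, so no irreducible factor depending on all three variables can be extracted:

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')

# The footnote's example: a polynomial whose irreducible factors
# each miss exactly one of the three variables
P = sp.expand((z1 - z2)*(z2 - z3)*(z3 - z1))
_, factors = sp.factor_list(P)

# For each irreducible factor, list the variables it does NOT contain
missing = [{z1, z2, z3} - f.free_symbols for f, _ in factors]
print(missing)
```

This is why the factor *R* in any factorisation (7.25) of this particular *P* is forced to be reducible.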

Consider the contracted graph *G*<sub>1,2,...,*N*<sub>1</sub></sub>; its secular polynomial is reducible as

$$P_{G_{1,2,\dots,N_1}}(z_1,\dots,z_{N_1}) = \mathcal{Q}(z_1,\dots,z_{N_1})\; R_{G_{1,\dots,N_1}}(z_1,\dots,z_{N_1}),$$

where *Q* and *R*<sub>*G*<sub>1,...,*N*<sub>1</sub></sub></sub> depend on each of the variables *z*<sub>1</sub>*,...,z*<sub>*N*<sub>1</sub></sub>, since *Q* is linear in each variable and *P*<sub>*G*<sub>1,...,*N*<sub>1</sub></sub></sub> is quadratic.

Let us prove that *G*<sub>1,2,...,*N*<sub>1</sub></sub> is either a watermelon or a flower graph. To achieve this we first show that the graph contains no cycles of discrete length greater than or equal to three passing through each vertex on the cycle just once. We call such cycles simple. Assume on the contrary that this is not the case and there exists such a cycle formed by the edges *E*<sub>1</sub>*, E*<sub>2</sub>*,...,E*<sub>*N*<sub>4</sub></sub>, *N*<sub>4</sub> ≤ *N*<sub>1</sub>. Contracting the edges *E*<sub>4</sub>*,...,E*<sub>*N*<sub>4</sub></sub> leads us to a graph whose secular polynomial is given by a product of polynomials of first degree in all variables. We have thereby reduced our analysis to simple cycles of discrete length 3. We aim to prove that among all contractions of the graph there exists one having just three vertices, say *V*<sup>1</sup>, *V*<sup>2</sup>, *V*<sup>3</sup>, connected pairwise by one or several parallel edges. Consider all simple paths connecting *V*<sup>1</sup> and *V*<sup>2</sup> not passing through *V*<sup>3</sup>. Choose any such path, keep one edge on it, say *E*<sub>*N*<sub>4</sub>+1</sub>, and contract all other edges on the path. The resulting graph has the same property as before, but now there are two parallel edges connecting *V*<sup>1</sup> and *V*<sup>2</sup>. Repeat this procedure until no new simple paths connecting *V*<sup>1</sup> and *V*<sup>2</sup> can be found. There will be several parallel edges connecting *V*<sup>1</sup> and *V*<sup>2</sup>, but the graph's secular polynomial is still reducible into a product of two polynomials of first degree in each variable. Continue this procedure by identifying all paths connecting *V*<sup>2</sup> to *V*<sup>3</sup>, each time choosing one edge and contracting all other edges on the path. We obtain a graph with several parallel edges connecting *V*<sup>2</sup> to *V*<sup>3</sup>. Treating the paths between *V*<sup>3</sup> and *V*<sup>1</sup> in the same way gives us a graph with several parallel edges between these vertices as well. 
Finally we contract all edges not directly connecting *V*<sup>1</sup>, *V*<sup>2</sup> and *V*<sup>3</sup>. The resulting graph on three vertices consists of several edges connecting the vertices pairwise. It might of course happen that no parallel edges occur and we obtain the graph *G(*3*.*6*)*, but this is forbidden by Lemma 7.11—the secular polynomial for the original graph cannot be factorised into two factors both depending on the variables associated with the three edges forming the cycle.

It remains to consider the case where several edges connect two of the vertices. Contracting all but one of these edges leads to the watermelon with a loop graph **W**<sub>*d*</sub>**L** already considered in Example 7.13. The secular polynomial for this graph does not possess the desired reducibility.

We have proven that *G*<sub>1,2,...,*N*<sub>1</sub></sub> is either the watermelon graph **W**<sub>*N*<sub>1</sub></sub> or the flower graph **F**<sub>*N*<sub>1</sub></sub>. Lemmas 7.15 and 7.17 imply that factorisation (7.25) of the secular polynomial for the original graph is possible only if *G* is either a watermelon graph, or *G* contains *N*<sub>1</sub> loops given by *E*<sub>1</sub>*,...,E*<sub>*N*<sub>1</sub></sub>. This completes the proof, since irreducibility of the polynomial *P*<sup>∗</sup> depending on all the variables has already been proven. 

This theorem establishes a remarkable connection between the topology of a graph and the reducibility of its secular polynomial. Knowing whether the secular polynomial is reducible or not allows us to better understand the structure of the spectrum. It is conjectured that the singular sets for irreducible factors have codimension at least three [160], while the intersection between the zero hypersurfaces corresponding to different irreducible factors in general has codimension two.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 8 The Trace Formula**

This chapter is devoted to the trace formula connecting the spectrum of a finite compact metric graph with the set of closed paths on it. In other words, this formula establishes a relation between spectral and geometric/topological properties of metric graphs.

Such a formula was first proved for the Laplacian Δ defined on a Riemannian manifold *X* [134, 159, 175, 249] and is now known as the Chazarain–Duistermaat–Guillemin–Melrose trace formula

$$\sum_{\lambda_j \in \operatorname{Spec} \Delta} \cos \lambda_j^{1/2} t = \sum_{\gamma} \frac{\ell(\operatorname{prim}(\gamma))}{|I - P_\gamma|^{1/2}}\, \delta(t - \ell(\gamma)) + R, \quad t > 0. \tag{8.1}$$

The sum on the left hand side is taken over all eigenvalues of the Laplacian Δ, the sum on the right hand side—over all closed geodesics on the manifold *X*. Here *ℓ(γ)* denotes the length of the geodesic *γ* and prim*(γ)* its primitive geodesic; *P*<sub>*γ*</sub> is the Poincaré map around *γ*. The remainder term *R* is a certain (non-specified) function in *L*<sub>1,loc</sub>, which means that the formula holds modulo an *L*<sub>1,loc</sub>-function. Formula (8.1) can be seen as a generalisation of the classical Poisson summation formula in Fourier analysis (see (10.13) below) as well as of Selberg's trace formula.

We are going to prove a direct analogue of formula (8.1) for the case of metric graphs. For simplicity we first consider the standard Laplacian, which is uniquely determined by the metric graph Γ. In contrast to (8.1) the formula we are going to prove is exact and does not contain any remainder term. This formula first appeared in a paper by J.-P. Roth [451, 452], but we follow the scattering matrix approach suggested in [252, 320, 321] and developed further in [346]. The formula will be used to prove that the spectrum of a quantum graph determines its Euler characteristic. Although the original formula is proven for standard Laplacians, it can be generalised to the case of standard Schrödinger operators.

# **8.1 The Characteristic Equation: Multiplicity of Positive Eigenvalues**

Consider the standard Laplace operator on a finite compact metric graph Γ. One can easily see that this operator is nonnegative, since its quadratic form is given by

$$\langle \mu, L^{\text{st}} \mu \rangle\_{L\_2(\Gamma)} = \sum\_{n=1}^{N} \int\_{E\_n} |\mu'(\mathbf{x})|^2 d\mathbf{x},\tag{8.2}$$

(see (3.55)). (The operators *A*<sub>*S*<sup>*m*</sup></sub> appearing in (3.55) are all equal to zero.)

To determine positive eigenvalues we are going to use the characteristic equation on the spectrum derived using the scattering approach in Sect. 5.2. The eigenvalue *λ* = 0 needs special attention and will be discussed later on. Let us repeat the derivation of formula (5.47), adjusting the formulas to the case of the Laplace operator with standard vertex conditions. Let *ψ* be an eigenfunction corresponding to the eigenvalue *λ* = *k*<sup>2</sup>. On every edge [*x*<sub>2*n*−1</sub>*, x*<sub>2*n*</sub>] it is a solution to the equation −*ψ*″ = *λψ* and can therefore be written using either of the following two representations:

$$\begin{split} \psi(\mathbf{x}) &= a\_{2n-1} e^{ik(\mathbf{x} - \mathbf{x}\_{2n-1})} + a\_{2n} e^{-ik(\mathbf{x} - \mathbf{x}\_{2n})} \\ &= b\_{2n-1} e^{-ik(\mathbf{x} - \mathbf{x}\_{2n-1})} + b\_{2n} e^{ik(\mathbf{x} - \mathbf{x}\_{2n})}. \end{split} \tag{8.3}$$

The amplitudes *a*<sub>*j*</sub> of edge-incoming waves are related to the amplitudes *b*<sub>*j*</sub> of edge-outgoing waves via the edge scattering matrix *S*<sup>*n*</sup><sub>**e**</sub>*(k)*

$$
\begin{pmatrix} b\_{2n-1} \\ b\_{2n} \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & e^{ik\ell\_n} \\ e^{ik\ell\_n} & 0 \end{pmatrix}}\_{=:S\_\mathbf{e}^n(k)} \begin{pmatrix} a\_{2n-1} \\ a\_{2n} \end{pmatrix} . \tag{8.4}
$$
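Relation (8.4) can be recovered symbolically by matching the two representations (8.3) of the same eigenfunction on a single edge; a small sympy sketch (variable names are ours):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)      # endpoints x_{2n-1}, x_{2n}
a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')

# Equating the coefficients of e^{ikx} and e^{-ikx} in the two
# representations (8.3) of the same eigenfunction psi:
eqs = [sp.Eq(a1*sp.exp(-sp.I*k*x1), b2*sp.exp(-sp.I*k*x2)),   # e^{ikx} terms
       sp.Eq(a2*sp.exp(sp.I*k*x2), b1*sp.exp(sp.I*k*x1))]     # e^{-ikx} terms
sol = sp.solve(eqs, [b1, b2], dict=True)[0]

ell = x2 - x1                                 # edge length l_n
# This is exactly the edge scattering matrix (8.4): b = S_e^n(k) a
print(sp.simplify(sol[b1] - a2*sp.exp(sp.I*k*ell)))   # 0
print(sp.simplify(sol[b2] - a1*sp.exp(sp.I*k*ell)))   # 0
```

Both differences simplify to zero, confirming the off-diagonal phase factors *e*<sup>*ikℓ*<sub>*n*</sub></sup> in (8.4).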

Putting together all amplitudes of incoming and outgoing waves for all edges the last relation can be written as

$$
\bar{B} = \mathbf{S}\_{\mathbf{e}}(k)\bar{A}, \tag{8.5}
$$

where the matrix **S**<sub>**e**</sub>*(k)* is formed by the 2 × 2 diagonal blocks *S*<sup>*n*</sup><sub>**e**</sub>*(k)*, provided the amplitudes forming *A⃗* and *B⃗* are indexed according to the endpoints *x*<sub>*j*</sub>.

The second relation between *A* and *B* comes from the vertex conditions

$$
\bar{A} = \mathbf{S}\bar{B},
\tag{8.6}
$$

where the matrix **S** has block-diagonal form if the amplitudes *A* and *B* are ordered following the vertices. The matrices on the diagonal are vertex scattering matrices *S*st *<sup>d</sup>* given by (3.41) with *d* equal to *dm*—the degree of the corresponding vertex. It is important for our derivations that the vertex scattering matrices corresponding to standard vertex conditions do not depend on the spectral parameter *k.*

Putting together (8.5) and (8.6) we arrive at

$$\mathbb{S}(k)\vec{A} = \vec{A}, \quad \mathbb{S}(k) := \mathbf{S}\mathbf{S}_{\mathbf{e}}(k). \tag{8.7}$$

Taking the determinant we get Eq. (5.47)

$$\det(\mathbb{S}(k) - \mathbf{I}) = 0,$$

determining the positive eigenvalues of *L*<sup>st</sup>*(Γ)*. In other words, Eq. (5.47) describes all eigenvalues of *L*<sup>st</sup>*(Γ)* with one possible exception, *λ* = 0, since the operator is nonnegative.

The 2*<sup>N</sup>* <sup>×</sup> <sup>2</sup>*<sup>N</sup>* matrix S*(k)* introduced above describes how the waves are penetrating through the collection of edges and vertices forming the graph. We called it *the graph scattering matrix*, although it is more correct to understand it as the evolution map in a discrete dynamical system associated with the metric graph. We shall use this point of view proving the trace formula.
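The characteristic function det(S*(k)* − **I**) is easy to evaluate numerically for small graphs. A minimal numpy sketch for two simple examples (a single interval and a single loop); the function name `p` and the setup are our own, with the standard vertex scattering matrix taken as 2/*d* **J** − **I** per (3.41):

```python
import numpy as np

def p(k, S, lengths):
    """det(S S_e(k) - I) for vertex scattering matrix S and a list of
    edge lengths; amplitudes are ordered by endpoints x_1, x_2, ..."""
    n = 2*len(lengths)
    Se = np.zeros((n, n), dtype=complex)
    for j, l in enumerate(lengths):
        # 2x2 block (0 e^{ikl}; e^{ikl} 0) for edge j, cf. (8.4)
        Se[2*j, 2*j + 1] = Se[2*j + 1, 2*j] = np.exp(1j*k*l)
    return np.linalg.det(S @ Se - np.eye(n))

l = 1.0
S_interval = np.eye(2)                    # two degree-one vertices
S_loop = np.array([[0., 1.], [1., 0.]])   # one degree-two vertex: 2/d J - I

print(abs(p(np.pi/l, S_interval, [l])))   # ~0: interval eigenvalue (pi/l)^2
print(abs(p(2*np.pi/l, S_loop, [l])))     # ~0: loop eigenvalue (2 pi/l)^2
print(abs(p(1.0, S_interval, [l])))       # nonzero: k = 1 is not an eigenvalue
```

For the loop one finds p*(k)* = *(e*<sup>*ikℓ*</sup> − 1*)*<sup>2</sup>, a double zero matching the double eigenvalues of the loop, in line with Theorem 8.1 below.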

Let us introduce the function

$$p(k) := \det(\mathbb{S}(k) - \mathbf{I}) \equiv \det\left(\mathbf{S}\mathbf{S}\_{\mathfrak{e}}(k) - \mathbf{I}\right),\tag{8.8}$$

coinciding with the secular trigonometric polynomial *p*<sub>Γ</sub>*(k)* introduced in Sect. 6.1, as we agreed to treat these functions projectively:

$$p(k) = \det \mathbf{S} \ p\_\Gamma(k) \Rightarrow p(k) = p\_\Gamma(k).$$

Putting together the vertex and edge scattering matrices will help us in the proof.

The zeroes *k*<sub>*j*</sub> of the trigonometric polynomial *p* correspond to the eigenvalues *k*<sup>2</sup><sub>*j*</sub> of the standard Laplacian on Γ. The zeroes are situated symmetrically with respect to the origin:

$$p(k\_j) = 0 \Rightarrow p(-k\_j) = 0.$$

We are now interested in the orders of the zeroes. One should expect that the orders of the zeroes coincide with the multiplicities of the corresponding eigenvalues of *L*<sup>st</sup>*(Γ)*, but this fact needs to be proven. Indeed, let *k*<sub>*j*</sub> be any zero of *p*; then the function *(k*<sup>2</sup> − *k*<sup>2</sup><sub>*j*</sub>*)p(k)* is also a characteristic function for the spectrum, but obviously the order of its zero at *k*<sub>*j*</sub> is different. The orders of the zeroes and the multiplicities of the eigenvalues coincide only due to the special form of *p* constructed using the suggested recipe, and this holds true for nonzero *k* only. As will be proven later, the order of the zero at *k* = 0 may differ from the multiplicity of *λ* = 0.

**Theorem 8.1** *Let p be the characteristic function for L*<sup>st</sup>*(Γ) determined by (8.8) and let k*<sub>*j*</sub> ≠ 0 *be one of its zeroes. Then the order of the zero of p(k) at k*<sub>*j*</sub> *coincides with the multiplicity of the eigenvalue λ*<sub>*j*</sub> = *k*<sup>2</sup><sub>*j*</sub> *of L*<sup>st</sup>*(Γ).*

*Proof* Let us denote by *e*<sup>*iθ*<sub>*n*</sub>(*k*)</sup>, *n* = 1, 2,..., 2*N*, and *A⃗*<sub>*n*</sub>*(k)* the eigenvalues and the eigenvectors of the unitary matrix S*(k)* = **SS**<sub>**e**</sub>*(k)*

$$\mathbb{S}(k)\,\tilde{A}\_n(k) = e^{i\theta\_n(k)}\,\tilde{A}\_n(k). \tag{8.9}$$

The determinant (8.8) can be easily calculated in terms of the phases *θn(k)*

$$p(k) = \prod\_{n=1}^{2N} \left( e^{i\theta\_n(k)} - 1 \right). \tag{8.10}$$

For *k*<sub>*j*</sub> ≠ 0 there is a one-to-one correspondence (8.3) connecting the amplitudes *A⃗* and the eigenfunctions *ψ* on Γ. Therefore a real number *λ*<sub>*j*</sub> = *k*<sup>2</sup><sub>*j*</sub> is an eigenvalue of a certain multiplicity *m(λ*<sub>*j*</sub>*)* if and only if the dimension of the kernel Ker*(*S*(k*<sub>*j*</sub>*)* − *I)* is equal to *m(λ*<sub>*j*</sub>*)*, in other words, if and only if among the 2*N* phases *θ*<sub>*n*</sub>*(k*<sub>*j*</sub>*)* there are precisely *m(λ*<sub>*j*</sub>*)* phases equal to 0 *(*mod 2*π)*. Hence the function *p* has a zero at *k*<sub>*j*</sub> of order at least *m(λ*<sub>*j*</sub>*)*, since precisely *m(λ*<sub>*j*</sub>*)* factors in (8.10) vanish. On the other hand it may happen that some of the factors have zeroes of higher order.

To prove that the order is precisely equal to *m(λ*<sub>*j*</sub>*)* it is enough to show that *θ*′<sub>*n*</sub>*(k*<sub>*j*</sub>*)* is different from zero for all *n* such that *θ*<sub>*n*</sub>*(k*<sub>*j*</sub>*)* = 0 *(*mod 2*π)*. For such *n* we have

$$
\mathbb{S}(k\_j)\tilde{A}\_n(k\_j) = \tilde{A}\_n(k\_j). \tag{8.11}
$$

The matrix <sup>S</sup>*(k)* <sup>=</sup> **SSe***(k)* possesses the following analytic expansion

$$\mathbb{S}(k) = \mathbb{S}(k\_j) + \mathbb{S}(k\_j)i\mathbf{D}(k - k\_j) + \dots,$$

where we used the fact that the vertex scattering matrix **S** is independent of the energy and the edge scattering matrix is given by 2×2 blocks in the basis associated with the edges. The matrix **D** used here is defined as

$$\mathbf{D} = \text{diag}\left\{\ell\_1, \ell\_1, \ell\_2, \ell\_2, \dots, \ell\_N, \ell\_N\right\},\tag{8.12}$$

in the edge basis. Since the entries of S*(k)* are analytic functions in *k*, the eigenvalue branches *eiθn(k)* and the corresponding eigenvectors *An(k)* can be chosen analytic

$$\begin{array}{ll}e^{i\theta\_n(k)} = 1 + i\theta\_n'(k\_j)(k - k\_j) + \dots, \\ \vec{A}\_n(k) = \vec{A}\_n(k\_j) + \vec{A}\_n'(k\_j)(k - k\_j) + \dots, \end{array} \text{ as } k \to k\_j,$$

where we used the fact that *θn(kj )* = 0*.* Substituting analytic expansions for S*(k), An(k),* and *θn(k)* into the eigenfunction Eq. (8.9) we get

$$\begin{aligned} & \left( \mathbb{S}(k\_j) + \mathbb{S}(k\_j)i\mathbf{D}(k - k\_j) + \dots \right) \left( \vec{A}\_n(k\_j) + \vec{A}\_n'(k\_j)(k - k\_j) + \dots \right) \\ &= \left( 1 + i\theta\_n'(k\_j)(k - k\_j) + \dots \right) \left( \vec{A}\_n(k\_j) + \vec{A}\_n'(k\_j)(k - k\_j) + \dots \right) .\end{aligned}$$

Comparing coefficients to first order in *k* − *kj* we obtain

$$
\mathbb{S}(k\_j)i\mathbf{D}\vec{A}\_n(k\_j) + \mathbb{S}(k\_j)\vec{A}\_n'(k\_j) = i\theta\_n'(k\_j)\vec{A}\_n(k\_j) + \vec{A}\_n'(k\_j)
$$

$$
\Leftrightarrow \left(\mathbb{S}(k\_j) - I\right)\vec{A}\_n'(k\_j) = -\mathbb{S}(k\_j)i\mathbf{D}\vec{A}\_n(k\_j) + i\theta\_n'(k\_j)\vec{A}\_n(k\_j).
$$

It remains to take into account that (8.11) implies in particular that *A⃗*<sub>*n*</sub>*(k*<sub>*j*</sub>*)* is an eigenvector of the adjoint matrix as well: S<sup>∗</sup>*(k*<sub>*j*</sub>*)A⃗*<sub>*n*</sub>*(k*<sub>*j*</sub>*)* = *A⃗*<sub>*n*</sub>*(k*<sub>*j*</sub>*)*. This can be seen by acting with S<sup>∗</sup>*(k*<sub>*j*</sub>*)* on both sides of (8.11) and using that S*(k*<sub>*j*</sub>*)* is unitary. Hence the left hand side of the last displayed formula is orthogonal to *A⃗*<sub>*n*</sub>*(k*<sub>*j*</sub>*)*. Likewise, on the right hand side, assuming *θ*′<sub>*n*</sub>*(k*<sub>*j*</sub>*)* = 0 we would have

$$\begin{aligned} 0 &= \langle \vec{A}_n(k_j), \mathbb{S}(k_j) i \mathbf{D} \vec{A}_n(k_j)\rangle = \langle \mathbb{S}(k_j)^* \vec{A}_n(k_j), i \mathbf{D} \vec{A}_n(k_j)\rangle \\ &= i \langle \vec{A}_n(k_j), \mathbf{D} \vec{A}_n(k_j)\rangle, \end{aligned}$$

but the matrix **D** is positive definite. Hence *θ*′<sub>*n*</sub>*(k*<sub>*j*</sub>*)* is different from zero.

An alternative proof can be found in [463] and [81].

It follows that the analytic function *p* can be used to determine the spectrum of *L*<sup>st</sup>*(Γ)*, including the multiplicities of the eigenvalues on *(*0*,*∞*)*, by calculating all its zeroes and the corresponding orders. The key point in the proof is formula (8.3) describing the one-to-one correspondence between the eigenvectors of S*(k*<sub>*j*</sub>*)* associated with the eigenvalue 1 and the eigenfunctions of the Laplacian on Γ. The eigenvalue *λ* = 0 requires more attention, as will be seen below.

# **8.2 Algebraic and Spectral Multiplicities of the Eigenvalue Zero**

We have shown that the equation *p(k)* = 0 determines the spectrum of *L*<sup>st</sup>*(Γ)* with correct multiplicities for all nonzero values of *k*, but the multiplicity of the zero eigenvalue indicated by (5.47), *i.e.* the order of the zero, may differ from the correct one. The proof of Theorem 8.1 implies that the order of a zero of *p* coincides with the dimension of the kernel Ker*(***SS**<sub>**e**</sub>*(k)* − **I***)*. For all *k* ≠ 0 the dimension of the kernel coincides with the number of linearly independent eigenfunctions of the Laplacian due to the one-to-one correspondence between *A⃗* and *ψ* on Γ (see (8.3)). This correspondence is no longer valid if *k* = 0, since the exponentials *e*<sup>±*ik(x*−*x*<sub>*j*</sub>*)*</sup>|<sub>*k*=0</sub> = 1 coincide.

Therefore, let us introduce two (possibly different) characteristics:<sup>1</sup>


It turns out that these multiplicities may be different and the difference depends on the topology of the graph, more precisely on the number of independent cycles. The following theorem connects these multiplicities with the Euler characteristic of Γ given by (2.7).

**Theorem 8.2** *Let Γ be a finite compact metric graph with β*<sub>0</sub> *connected components and Euler characteristic χ* = *β*<sub>0</sub> − *β*<sub>1</sub>*, and let L*<sup>st</sup>*(Γ) be the corresponding standard Laplace operator. Then λ* = 0 *is an eigenvalue with spectral multiplicity m*<sub>*s*</sub>*(*0*)* = *β*<sub>0</sub> *and algebraic multiplicity m*<sub>*a*</sub>*(*0*)* = 2*β*<sub>0</sub> − *χ* = *β*<sub>0</sub> + *β*<sub>1</sub>*.*

#### *Proof*

*Spectral Multiplicity* (An easy, quick repetition of Lemma 4.10.) Every eigenfunction corresponding to *λ* = 0 minimises the quadratic form (8.2) and is therefore a constant function on every edge. Continuity of the function at all vertices implies that the function is constant on every connected component of Γ. Hence the spectral multiplicity of *λ* = 0 coincides with the number *β*<sub>0</sub> of connected components of Γ.

*Algebraic Multiplicity* To derive Eq. (5.47) we used the representation (8.3) for the eigenfunction. If *k* ≠ 0, then the coefficients *a*<sub>*j*</sub> or *b*<sub>*j*</sub> are uniquely determined by *ψ(x, λ)*, but this is not the case if *k* = 0: the function *ψ* determines only the sums *a*<sub>2*n*−1</sub> + *a*<sub>2*n*</sub> = *b*<sub>2*n*</sub> + *b*<sub>2*n*−1</sub>. Therefore there is no one-to-one correspondence between *ψ* and the vectors *A⃗*, *B⃗*.<sup>2</sup>

Assume first that the graph is connected. To determine the algebraic multiplicity we have to calculate the dimension of the space of solutions to the following linear system

$$\mathbf{SS}\_{\mathfrak{e}}(0)\bar{A} = \bar{A}.$$
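For a concrete graph the dimension of this solution space is easy to compute numerically. A sketch for the figure-eight graph (one vertex of degree four, two loops, so *β*<sub>0</sub> = 1 and *β*<sub>1</sub> = 2, with predicted algebraic multiplicity 3); the setup is our own illustration:

```python
import numpy as np

# Figure-eight graph: two loops attached to one vertex of degree d = 4;
# endpoints x1, x2 belong to edge E1 and x3, x4 to edge E2
d = 4
S = 2.0/d*np.ones((d, d)) - np.eye(d)   # standard vertex scattering matrix (3.41)
Se0 = np.zeros((d, d))
Se0[0, 1] = Se0[1, 0] = 1.0             # S_e^1(0): swap x1 <-> x2
Se0[2, 3] = Se0[3, 2] = 1.0             # S_e^2(0): swap x3 <-> x4

# Algebraic multiplicity = dim Ker(S S_e(0) - I), counted here as the
# number of numerically vanishing singular values
M = S @ Se0 - np.eye(d)
null_dim = int(np.sum(np.linalg.svd(M, compute_uv=False) < 1e-10))
print(null_dim)   # 3 = beta_0 + beta_1 for the figure-eight
```

The three-dimensional kernel splits into the one constant eigenfunction and two independent fluxes, one per loop, as the proof below explains.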

<sup>1</sup> Note that the algebraic and spectral multiplicities introduced below have nothing to do with the algebraic and spectral multiplicities for non-Hermitian matrices, but we use the same terms, since the analogy is straightforward.

<sup>2</sup> For *k* = 0 every solution of the equation −*ψ*″ = *k*<sup>2</sup>*ψ* = 0 is a linear function, therefore representing it as a sum of two exponentials is unreasonable; but the representation by exponentials lies at the basis of the definition of *p* and therefore is discussed here.

One may use standard methods of linear algebra as was done in [346, 408]. We shall instead use the original equations in order to illuminate the relation between the algebraic multiplicity and the fundamental group of Γ. The vectors *A⃗* and *B⃗* are related by

$$\begin{cases} \vec{B} = \mathbf{S}\_{\mathbf{e}}(0)\vec{A}, \\\\ \vec{A} = \mathbf{S}\vec{B}. \end{cases}$$

Taking into account that the matrices *S*<sup>*n*</sup><sub>**e**</sub>*(*0*)* are all equal to $\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$, the first relation can be written as

$$a\_{2n-1} = b\_{2n} \quad \text{and} \quad a\_{2n} = b\_{2n-1}, \quad n = 1, 2, \ldots, N. \tag{8.13}$$

The second relation can equivalently be written as (3.37)

$$\begin{cases} a\_l + b\_l = a\_j + b\_j, & \mathbf{x}\_l, \mathbf{x}\_j \in V^m, \\ \sum\_{\mathbf{x}\_j \in V^m} (a\_j - b\_j) = 0, & m = 1, 2, \dots, M. \end{cases} \tag{8.14}$$

Here we in some sense go back and use the standard vertex conditions as they were originally written (2.27), instead of using the vertex scattering matrix.

Excluding the coefficients *b*<sub>*j*</sub> we get the following linear system with 2*N* unknowns *a*<sub>*j*</sub>

$$\begin{cases} a_{2l-1} + a_{2l} = a_{2j-1} + a_{2j}, & l, j = 1, 2, \dots, N; \\ \sum_{l:\, \mathbf{x}_l \in V^m} \left(a_l - a_{l-(-1)^l}\right) = 0, & m = 1, 2, \dots, M. \end{cases} \tag{8.15}$$

The first set of equations shows that the function *ψ* corresponding to *λ* = 0 is equal to a constant (as one expects taking into account the spectral multiplicity). The reason why the spectral and algebraic multiplicities may be different is that this constant function *ψ(x,* 0*)* = *c* may be represented by different vectors *A.*

With every edge *E*<sub>*n*</sub> we associate the flux *f*<sub>*n*</sub><sup>3</sup> defined as follows

$$f\_n = a\_{2n-1} - a\_{2n}.\tag{8.16}$$

Note that the flux so defined depends on the orientation of the edge *En*, *i.e.* it changes sign if one changes the orientation of the edge. The second set of Eq. (8.15) implies that the total flux through every vertex is zero

$$\sum\_{\substack{E\_n \text{ starts at } V^m}} f\_n = \sum\_{\substack{E\_n \text{ ends at } V^m}} f\_n, \ m = 1, 2, \dots, M. \tag{8.17}$$

Let us prove that the dimension of the space of solutions to this system of equations is equal to the number *g* = *β*<sub>1</sub> of generators of the fundamental group.<sup>4</sup>

Assume that Γ is a tree (*N* = *M* − 1 ⇔ *β*<sub>1</sub> = 0); then the only possible flux is zero. First we note that the flux on all pendant edges is zero: it is enough to look at relation (8.17) at a vertex of degree one—only one sum is present and it contains just one term. It is then clear that the flux is zero on all edges connected by one of their endpoints to pendant edges. Continuing in this way we conclude that the flux is zero on the whole tree.
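The dimension count can be illustrated by a small incidence-matrix computation: the conservation law (8.17) says that a flux vector lies in the kernel of the signed vertex–edge incidence matrix. A numpy sketch with toy graphs (the function name and the examples are our own; note that a loop contributes a zero column and hence a free flux):

```python
import numpy as np

def flux_space_dim(n_vertices, edges):
    """Dimension of the space of fluxes satisfying the conservation
    law (8.17); `edges` is a list of (start, end) vertex indices."""
    B = np.zeros((n_vertices, len(edges)))
    for n, (s, e) in enumerate(edges):
        B[s, n] += 1.0    # E_n starts at V^s
        B[e, n] -= 1.0    # E_n ends at V^e (a loop gives a zero column)
    return len(edges) - np.linalg.matrix_rank(B)

# A path tree on 3 vertices: beta_1 = 0, only the zero flux
print(flux_space_dim(3, [(0, 1), (1, 2)]))          # 0
# A triangle: beta_1 = 1, one independent circulating flux
print(flux_space_dim(3, [(0, 1), (1, 2), (2, 0)]))  # 1
# A loop attached to a pendant edge: the loop flux is free
print(flux_space_dim(2, [(0, 1), (1, 1)]))          # 1
```

For a connected graph the rank of the incidence matrix is *M* − 1, so the kernel has dimension *N* − *(M* − 1*)* = *β*<sub>1</sub>, matching the claim in the text.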

Assume now that Γ is an arbitrary connected graph. Then by removing certain *β*<sub>1</sub> = *N* − *(M* − 1*)* edges it may be transformed into a tree **T** connecting all the vertices. Let us denote the removed edges by *E*<sub>1</sub>*, E*<sub>2</sub>*,...,E*<sub>*N*−*M*+1</sub>, so that

$$\mathbf{T} = \Gamma \setminus \bigcup_{n=1}^{N-M+1} E_n.$$

<sup>3</sup> The interpretation of *f*<sub>*n*</sub> as a flux can be justified by the following reasoning. The probability flux into the interval *E*<sub>*n*</sub> = [*x*<sub>2*n*−1</sub>*, x*<sub>2*n*</sub>] from the left and right endpoints is given by |*a*<sub>2*n*−1</sub>|<sup>2</sup> − |*b*<sub>2*n*−1</sub>|<sup>2</sup> and |*a*<sub>2*n*</sub>|<sup>2</sup> − |*b*<sub>2*n*</sub>|<sup>2</sup> respectively. Then the unitarity of the edge scattering matrix *S*<sub>*E*</sub> expresses the fact that the total probability flux for each edge is zero. In the case *λ* = 0 the coefficients *a*<sub>*n*</sub>*, b*<sub>*n*</sub> may be chosen real and the probability flux through the edge from the left to the right endpoint is given by

$$|a_{2n-1}|^2 - |b_{2n-1}|^2 = a_{2n-1}^2 - a_{2n}^2 = (a_{2n-1} - a_{2n})(a_{2n-1} + a_{2n}) = cf_n.$$

Hence *f*<sub>*n*</sub> coincides with the flux up to multiplication by the constant *c*.

<sup>4</sup> The following analogy may be helpful to understand our proof. Consider a system of pipes connected together and filled with a moving incompressible and frictionless liquid. If the pipe system has a cycle, then one may observe a liquid flow along the cycle. Our immediate goal is to calculate how many independent flows can be observed. I still remember that this proof occurred to me during a concert at the *Wiener Musikverein*, and how I tried to explain it on the way home.

Every removed edge *E*<sub>*n*</sub> determines one nontrivial class of closed paths on **T** ∪ *E*<sub>*n*</sub>. Consider the shortest paths from this class. There exist precisely two such paths, having opposite orientations. To each path we associate the basic flux F<sup>*n*</sup> supported by it

$$\mathcal{F}^n(E\_k) = \begin{cases} \pm 1, & \text{if } E\_k \text{ belongs to the path,} \\ 0, & \text{if } E\_k \text{ does not belong to the path,} \end{cases}$$

where the sign in the last formula depends on whether the path runs along *Ek* in the positive (+) or negative (−) direction. Without loss of generality we assume that F*<sup>n</sup>(En)* = 1*.* This condition fixes the orientation of the shortest path. In what follows we consider just one shortest path associated with *En.* Every constructed flux satisfies the system of Eq. (8.17).

Consider any flux F on Γ satisfying the conservation law (8.17). We claim that it can be written as a linear combination of the basic fluxes F*n.* Indeed, the flux

$$\mathcal{F} - \sum\_{n=1}^{\beta\_1 = N - M + 1} \mathcal{F}(E\_n) \mathcal{F}^n$$

is supported by the spanning tree **T**, it satisfies (8.17) on **T** and therefore it is equal to zero.
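The construction of the basic fluxes F*<sup>n</sup>* from a spanning tree can be sketched as follows (a hypothetical encoding: vertices are integers, edges are oriented pairs; `basic_fluxes` is my own illustration, not the author's code):

```python
import numpy as np
from collections import deque

def basic_fluxes(num_vertices, edges):
    """Grow a spanning tree T by BFS (graph assumed connected); every
    removed edge E_n yields the basic flux F^n equal to 1 on E_n and
    +/- 1 along the unique tree path closing the cycle."""
    adj = {v: [] for v in range(num_vertices)}
    for n, (a, b) in enumerate(edges):
        adj[a].append((b, n, +1))    # traverse edge n along its orientation
        adj[b].append((a, n, -1))    # traverse edge n against its orientation
    parent, tree = {0: None}, set()
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w, n, s in adj[v]:
            if w not in parent:
                parent[w] = (v, n, s)
                tree.add(n)
                queue.append(w)

    def path_to_root(v):             # tree edges from v up to the root
        steps = []
        while parent[v] is not None:
            v, n, s = parent[v]
            steps.append((n, s))
        return steps

    fluxes = []
    for n, (a, b) in enumerate(edges):
        if n in tree:
            continue
        f = np.zeros(len(edges))
        f[n] = 1.0                   # normalisation F^n(E_n) = 1
        for m, s in path_to_root(a):  # root -> a, along the cycle direction
            f[m] += s
        for m, s in path_to_root(b):  # b -> root, against the stored sign
            f[m] -= s
        fluxes.append(f)
    return fluxes

# triangle: one independent cycle, the basic flux circulates 0 -> 1 -> 2 -> 0
print(basic_fluxes(3, [(0, 1), (1, 2), (2, 0)]))
```

Edges shared by the two root paths cancel, so only the cycle through the removed edge survives.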

Summing up we conclude that for connected graphs the algebraic multiplicity of the zero eigenvalue is given by

$$m\_a(0) = 1 + \beta\_1 = 1 + N - (M - 1) = 2 - \chi.$$

Since the Euler characteristic *χ* is additive over connected components, it is straightforward to see that the formula *ma(*0*)* = 2*β*<sup>0</sup> − *χ* holds in the general case.
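The count can be checked numerically: the fluxes satisfying (8.17) form the kernel of a signed vertex-edge incidence matrix, whose dimension should equal *β*<sup>1</sup> for a connected graph (a sketch under an assumed encoding):

```python
import numpy as np

def flux_space_dim(num_vertices, edges):
    """Dimension of the space of fluxes satisfying the conservation law
    (8.17): flux out of each vertex equals flux into it."""
    B = np.zeros((num_vertices, len(edges)))
    for n, (start, end) in enumerate(edges):
        B[start, n] += 1.0   # edge n starts at `start`
        B[end, n] -= 1.0     # edge n ends at `end`; a loop cancels to 0
    return len(edges) - np.linalg.matrix_rank(B)

# tree (beta_1 = 0), and a connected graph with N = 5, M = 4, beta_1 = 2
print(flux_space_dim(4, [(0, 1), (1, 2), (2, 3)]))                  # -> 0
print(flux_space_dim(4, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 3)]))  # -> 2
```

A loop edge contributes a zero column, matching the fact that a loop supports an arbitrary circulating flux.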

This theorem implies that two graph Laplacians can be isospectral only if the underlying graphs have the same number of connected components. It can clearly be seen from the proof that the spectral and algebraic multiplicities for connected graphs are equal only if the fundamental group is trivial, *β*<sup>1</sup> = 0, *i.e.* if the graph is a tree.

## **8.3 The Trace Formula for Standard Laplacians**

We prove now the trace formula relating the spectrum of the standard Laplacian to the set of oriented closed paths on the graph. We consider only those paths *γ* which do not turn back in the interior of any edge, but which may turn back at the vertices.

**Definition 8.3** Let {*yj* }, *j* = 1*,* 2*,...,* 2*d*, be a finite sequence of edge endpoints on a finite compact metric graph Γ

$$\{\mathbf{y}\_1, \mathbf{y}\_2, \mathbf{y}\_3, \dots, \mathbf{y}\_{2d}\}, \quad \mathbf{y}\_j \in \mathbf{V} = \{\mathbf{x}\_l\}\_{l=1}^{2N},$$

such that

$$\{\mathbf{y}\_{2j-1}, \mathbf{y}\_{2j}\} \text{ coincides with one of the edges } E\_n, \quad \mathbf{y}\_{2j} \text{ and } \mathbf{y}\_{2j+1} \text{ belong to the same vertex}, \quad j = 1, 2, \dots, d,$$

where we used the natural cyclic identification *y*2*d*+<sup>1</sup> = *y*1. Then the **oriented closed path** *γ* = *(y*1*, y*2*, y*3*,...,y*2*<sup>d</sup> )* is a union of edges

$$\gamma = \{\mathbf{y}\_1, \mathbf{y}\_2\} \cup \{\mathbf{y}\_3, \mathbf{y}\_4\} \cup \dots \cup \{\mathbf{y}\_{2d-1}, \mathbf{y}\_{2d}\}$$

with endpoints *y*2*<sup>j</sup>* and *y*2*j*+<sup>1</sup> identified and inherited orientation. The paths obtained from each other by cyclically permuting the endpoints *yj (γ )* are identified.

Each pair *(y*2*j*−1*, y*2*<sup>j</sup> )* determines not only the edge the path traverses but also the direction of the path on it. The pairs *(y*2*<sup>j</sup> , y*2*j*+1*)* determine the vertices and their order on the path. Every closed path can be equivalently defined by the sequence of edges indicating path's direction on each edge.

Topologically every closed path *γ* is a cycle which can be continuously embedded in Γ locally preserving the distances. Certain edges may appear in *γ* multiple times. Consider the graph Γ*<sup>γ</sup>* obtained from Γ by substituting each edge *En* with as many parallel copies of *En* as the number of times it appears in *γ* . If *γ* does not pass along a certain edge, then this edge is missing in Γ*<sup>γ</sup>* . Therefore the path *γ* can be obtained by cutting Γ*<sup>γ</sup>* through the vertices. In other words *γ* can be seen as an Eulerian path on Γ*<sup>γ</sup>* , *i.e.* a closed path visiting each edge precisely once.

If the graph has no loops and parallel edges, then every oriented closed path is uniquely determined by the sequence of edges this path goes along. In this case the order of the edges determines the direction in which the path crosses every edge. Alternatively every oriented path is determined by the sequence of vertices in this case.

The **discrete length** *d* = *d(γ )* counts how many times the path *γ* comes across an edge, so that the contribution from every edge in Γ is equal to its multiplicity in *γ* (independently of the direction). The discrete length should not be confused with the **geometric length** ℓ = ℓ*(γ )* obtained by summing the lengths of the edges respecting their multiplicities in *γ*

$$\ell(\boldsymbol{\gamma}) = \sum\_{j=1}^{d(\boldsymbol{\gamma})} (\mathbf{y}\_{2j} - \mathbf{y}\_{2j-1}). \tag{8.18}$$

Paths having opposite orientations are distinguished; the path going along the same edges as *γ* but in the opposite direction and order can be seen as its inverse.

For any edge, the path *γ* going back and forth along the edge coincides with *γ* <sup>−1</sup>. Moreover, for any even *d* = 2*j*, *j* ∈ N, there exists a unique oriented path supported only by the edge and having discrete length *d*: it is a multiple of the primitive path going once back and forth.

For a loop the two paths going in opposite directions are distinguished. For example, among the paths supported by the loop there are 3 paths of discrete length *d* = 2: the path going around the loop twice in the positive direction, the path going around it twice in the negative direction, and the path going around the loop once in each direction. Note that the latter path coincides with its inverse and is primitive.

By the **primitive path** of *γ* , prim *(γ )*, we denote the shortest closed path such that *γ* can be obtained by repeating prim *(γ )* several times. For example, every path supported by a single edge *E*<sup>0</sup> is a multiple of the primitive path going back and forth along *E*<sup>0</sup> just once.
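Finding prim *(γ )* amounts to finding the smallest period of the sequence of traversed endpoints; a small sketch (ignoring, for simplicity, the cyclic identification of starting points):

```python
def primitive(path):
    """Shortest closed path whose repetition reproduces `path`
    (a closed path encoded as a tuple of traversed endpoints)."""
    n = len(path)
    for p in range(1, n + 1):
        if n % p == 0 and path[:p] * (n // p) == path:
            return path[:p]

# the path going back and forth along E_0 = [x1, x2] twice:
gamma = ('x1', 'x2', 'x2', 'x1') * 2
print(primitive(gamma))  # -> ('x1', 'x2', 'x2', 'x1')
```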

The set P of all closed paths is infinite, but countable.

If the set of edges is fixed, then the flower graph has the largest set of closed paths, since any sequence of edges with arbitrary directions is allowed. Otherwise the topology of the graph imposes certain restrictions on the admissible sequences.
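One rough way to see these restrictions is to count closed bond sequences with a 2*N* × 2*N* directed-bond transition matrix; a sketch with an assumed encoding:

```python
import numpy as np

def bond_matrix(edges):
    """A[b1, b2] = 1 iff the directed bond b2 starts at the vertex where
    b1 ends; turning back at vertices is allowed, as in the text."""
    bonds = [(a, b) for a, b in edges] + [(b, a) for a, b in edges]
    return np.array([[1 if b1[1] == b2[0] else 0 for b2 in bonds]
                     for b1 in bonds])

# flower with one petal (a single loop): either bond may follow either bond,
# so A is the all-ones 2 x 2 matrix and Tr A^2 = 4 closed bond sequences of
# discrete length 2 with a marked starting bond; they collapse to the 3
# distinct closed paths above (the back-and-forth path carries two marks).
A = bond_matrix([(0, 0)])
print(np.trace(np.linalg.matrix_power(A, 2)))  # -> 4
```

For a single non-loop edge the same count gives 2, the two markings of the unique back-and-forth path.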

We are ready to formulate the main result of this chapter.

**Theorem 8.4 (Trace Formula)** *Let* Γ *be a finite compact metric graph with Euler characteristic χ and total length* L*, and let L*st*(*Γ*) be the corresponding standard Laplacian. Then the spectral measure*

$$\mu := 2m\_s(0)\delta + \sum\_{k\_n \neq 0} \left(\delta\_{k\_n} + \delta\_{-k\_n}\right) \tag{8.19}$$

*is a tempered positive distribution, such that not only the Fourier transform μ*ˆ *but also* | ˆ*μ*| *is tempered.* 

*The following two exact trace formulae establish the relation between the spectrum* {*k*<sup>2</sup>*<sup>n</sup>*} *of L*st*(*Γ*) and the set* P *of closed paths on the metric graph*

$$\begin{split} \mu(k) &= 2m\_s(0)\delta(k) + \sum\_{k\_n \neq 0} \left( \delta\_{k\_n}(k) + \delta\_{-k\_n}(k) \right) \\ &= \chi \delta(k) + \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma)) \mathbf{S}\_{\mathbf{v}}(\gamma) \cos k\ell(\gamma), \end{split} \tag{8.20}$$

*and* 

$$\begin{split} \hat{\mu}(l) &\equiv 2m\_{s}(0) + \sum\_{k\_{n}\neq 0} 2\cos k\_{n}l \\ &= \chi + 2\mathcal{L}\delta(l) + \sum\_{\gamma\in\mathcal{P}} \ell(\text{prim}\,(\gamma)) \mathbf{S}\_{\mathbf{v}}(\gamma) \Big(\delta\_{\ell(\gamma)}(l) + \delta\_{-\ell(\gamma)}(l)\Big), \end{split} \tag{8.21}$$

*where* **Sv***(γ )* *is the product of the vertex scattering coefficients picked up along the path γ , and ms(*0*)*<sup>5</sup> *is the multiplicity of λ* = 0 *in the spectrum.*
*Proof* We divide the proof of the theorem into two parts. The first part concerns general properties of the spectral measure, establishing that not only *μ* but also | ˆ*μ*| is a tempered distribution. This will be important in Sect. 10.2, where we show that spectral measures for metric graphs provide explicit examples of crystalline measures and Fourier quasicrystals. In the second part we prove the trace formula connecting the spectral measure associated with the standard Laplacian to the set of periodic orbits on the metric graph.

#### **Part I. General Properties of the Spectral Measure**

**Step** 1. **Measure** *μ* **as a tempered distribution.** Consider the spectral measure given by (8.19), where the sum is taken over all non-zero eigenvalues *λn >* 0, *k*<sup>2</sup>*<sup>n</sup>* = *λn*, *kn >* 0, respecting multiplicities. The formula determines a tempered distribution since the eigenvalues accumulate towards ∞ satisfying Weyl's asymptotics (4.25). All non-zero points (including correct multiplicities) are given by zeros of the analytic secular function *p* determined by (8.8). The distribution is positive as a sum of delta distributions with non-negative integer amplitudes. The Fourier transform *μ*ˆ is also a tempered distribution.

**Step** 2. **Spectral measure and logarithmic derivative of the secular function.** The distribution *μ* can be obtained by integrating the logarithmic derivative of *p(k)* (introduced in (8.8)) around the zeroes and using the Sokhotski-Plemelj formula (see *e.g.* formula (3.2.11) in [271])<sup>6</sup>

$$\delta\_0 = \frac{1}{2\pi i} \left( \frac{1}{x - i0} - \frac{1}{x + i0} \right).$$
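The formula can be tested numerically: for small *ε* the kernel is a Lorentzian of width *ε* that reproduces *ϕ(*0*)* when integrated against a test function (a sketch, not part of the proof):

```python
import numpy as np

# Numerical check of the Sokhotski-Plemelj jump: the kernel
# (1/2*pi*i)(1/(x - i*eps) - 1/(x + i*eps)) = eps / (pi*(x**2 + eps**2))
# is a Lorentzian tending to the delta distribution as eps -> 0.
eps = 1e-3
x = np.linspace(-50.0, 50.0, 2_000_001)
dx = x[1] - x[0]
kernel = ((1 / (2j * np.pi)) * (1 / (x - 1j * eps) - 1 / (x + 1j * eps))).real
phi = np.exp(-x**2)                # a Schwartz-class test function
print(np.sum(kernel * phi) * dx)   # close to phi(0) = 1
```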

<sup>5</sup> It is equal to the number *β*<sup>0</sup> of connected components in accordance to Theorem 8.2.

<sup>6</sup> One may use the following reasoning to justify our calculations. Let *ϕ* be a *C*<sup>∞</sup><sub>0</sub>*(*R*)* function with support in [*a, b*] containing just one of the zeroes of the function *p*, say a simple zero *kj .* In this case we have:

$$\begin{split} &\lim\_{\epsilon \searrow 0} \frac{1}{2\pi i} \int\_{-\infty}^{+\infty} \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) \varphi(k) dk \\ &= \lim\_{\epsilon \searrow 0} \frac{1}{2\pi i} \left( \underbrace{\int\_{a}^{k\_j - \eta}}\_{\to 0} + \int\_{k\_j - \eta}^{k\_j + \eta} + \underbrace{\int\_{k\_j + \eta}^{b}}\_{\to 0} \right) \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) \varphi(k) dk \\ &= \lim\_{\epsilon \searrow 0} \left( \frac{1}{2\pi i} \int\_{k\_j - \eta}^{k\_j + \eta} \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) \varphi(k\_j) dk \right. \\ &\qquad \left. + \frac{1}{2\pi i} \int\_{k\_j - \eta}^{k\_j + \eta} \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) (\varphi(k) - \varphi(k\_j)) dk \right), \end{split}$$

where we used that *p'(k* − *i*0*)/p(k* − *i*0*)* = *p'(k* + *i*0*)/p(k* + *i*0*)* for *k* ≠ *kj .* The first integral can be transformed into an integral along a small contour *γ (kj )* around *kj* and then calculated using residue calculus

$$\frac{1}{2\pi i} \int\_{k\_j - \eta}^{k\_j + \eta} \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) \varphi(k\_j) dk = \frac{\varphi(k\_j)}{2\pi i} \int\_{\gamma(k\_j)} \frac{p'(k)}{p(k)} dk = \varphi(k\_j).$$

To calculate the second integral we note that *p'(k)/p(k) (ϕ(k)* − *ϕ(kj ))* is uniformly bounded, since *p'(k)/p(k)* has a first order pole at *kj* and *ϕ(k)* − *ϕ(kj )* a first order zero, and therefore the integral is zero in the limit. Thus we have

$$\lim\_{\epsilon \searrow 0} \frac{1}{2\pi i} \int\_{-\infty}^{+\infty} \left( \frac{p'(k - i\epsilon)}{p(k - i\epsilon)} - \frac{p'(k + i\epsilon)}{p(k + i\epsilon)} \right) \varphi(k) dk = \varphi(k\_j) \equiv \delta\_{k\_j}[\varphi].$$

Generalisation to the case of several and multiple zeroes is straightforward.

Since the zeroes are situated on the real axis, the sum of delta functions with supports at the zeroes is equal to the jump, understood in the sense of distributions,

$$\frac{1}{2\pi i} \left( \frac{d}{dk} \log p(k - i0) - \frac{d}{dk} \log p(k + i0) \right).$$

More precisely we have

$$\begin{split} \mu(k) &= (2m\_s(0) - m\_a(0))\delta(k) \\ &+ \frac{1}{2\pi i} \lim\_{\epsilon \searrow 0} \left( \frac{d}{dk} \log p(k - i\epsilon) - \frac{d}{dk} \log p(k + i\epsilon) \right). \end{split} \tag{8.22}$$

We used here the fact that *p(k)* has a zero of order *ma(*0*)* at the origin.

**Step** 3. **Trigonometric series for the spectral measure** *μ*. Following [350] we shall use that *p(k)* is a trigonometric polynomial

$$p(k) = P(e^{ik\ell}), \quad e^{ik\ell} = (e^{ik\ell\_1}, e^{ik\ell\_2}, \dots, e^{ik\ell\_N}),$$

coming from the secular polynomial *P (***z***)*. The secular polynomial, which is nonzero inside the polydisk

$$\mathbb{D}^N = \mathbb{D} \times \mathbb{D} \times \cdots \times \mathbb{D}, \quad \mathbb{D} = \left\{ z \in \mathbb{C} : |z| < 1 \right\},$$

can be chosen to satisfy the normalisation condition<sup>7</sup>

$$P(\mathbf{0}) = 1.$$

Then log *P (***z***)* is uniquely defined by putting log *P (***0***)* = 0 and using continuous variation along the line *s***z** *,* <sup>0</sup> <sup>≤</sup> *<sup>s</sup>* <sup>≤</sup> 1. It is an analytic function inside <sup>D</sup>*<sup>N</sup>*

$$\log P(\mathbf{z}) = \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} c\_\mathbf{n} \mathbf{z}^\mathbf{n}, \quad \mathbf{z}^\mathbf{n} = z\_1^{n\_1} z\_2^{n\_2} \dots z\_N^{n\_N}. \tag{8.23}$$

Our goal is to prove that the Taylor coefficients *c***<sup>n</sup>** are uniformly bounded, implying that the above series is convergent in the distributional sense. This follows from the fact that the logarithm of any analytic function is locally integrable over any totally real submanifold in C*<sup>N</sup>*, a general fact from the theory of functions of several complex variables. Polynomials are analytic functions and the unit torus is a totally real submanifold. For multivariate polynomials the integral we aim to estimate is related to Mahler's measures connected with the heights of the polynomials (see for example Section 3.2 of [183]).<sup>8</sup> Instead of looking at the geometric properties of the intersections between algebraic varieties and the unit torus T*<sup>N</sup>* we shall, following [350], use explicit formulas for the Taylor coefficients together with the normalisation condition *P (***0***)* = 1. The Taylor coefficients can be calculated taking the spherical means

$$c\_{\mathbf{n}} = \frac{1}{(2\pi)^{N} r^{|\mathbf{n}|}} \int\_{\mathbf{T}^N} \log P(re^{i\theta}) e^{-i\mathbf{n}\theta} d\theta,\tag{8.24}$$

where we used the notation |**n**| = *n*<sup>1</sup> + *n*<sup>2</sup> + ··· + *nN* and integration is over **T***<sup>N</sup>*, the distinguished boundary of the polydisk, **T***<sup>N</sup>* = T × T × ··· × T. Hence to prove uniform boundedness it is enough to show that log *P (re<sup>iθ</sup> )* is absolutely integrable over **T***<sup>N</sup>* uniformly for all 0 ≤ *r* ≤ 1. Note that absolute integrability of log *P (re<sup>iθ</sup> )* for each fixed *r <* 1 (not uniform) is not enough, as *r*<sup>|**n**|</sup> in the denominator tends to zero as |**n**| → ∞.
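Formula (8.24) can be tried out in the simplest case *N* = 1; a toy check with *P (z)* = 1 − *z/*2 (my own example, where log *P* has the explicit Taylor coefficients −1*/(m* 2*<sup>m</sup>)*, uniformly bounded as the text asserts):

```python
import numpy as np

# Recover Taylor coefficients of log P from means over the circle |z| = r,
# as in (8.24), for P(z) = 1 - z/2: log P(z) = -sum_m (z/2)**m / m.
K, r = 4096, 0.9
theta = 2 * np.pi * np.arange(K) / K
samples = np.log(1 - 0.5 * r * np.exp(1j * theta))   # branch with log P(0) = 0
c = np.fft.fft(samples) / K                          # c[m] ~ c_m * r**m
m = np.arange(1, 6)
recovered = (c[1:6] / r**m).real
print(np.max(np.abs(recovered - (-1.0 / (m * 2.0**m)))))  # tiny
```

The FFT here plays the role of the spherical mean; aliasing is negligible since the coefficients decay geometrically.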

<sup>7</sup> Remember that the secular polynomials are treated projectively and *P (***0***)* = det*(*−**S***)* (see (6.4)).

<sup>8</sup> The author would like to thank Jan Boman for pointing this connection out, providing an explicit proof and helping to discover the relation to Mahler's measures.

The real and imaginary parts of the logarithm

$$\log P(\mathbf{z}) = \ln|P(\mathbf{z})| + i \arg P(\mathbf{z})$$

can be estimated separately.

To estimate the **imaginary part**, *i.e.* arg *P (***z***)*, we consider the function *P (s***z***)*, which is a polynomial in *s* of degree at most 2*N*. Its value at *s* = 0 is *P (***0***)* = 1 with zero argument, and each of its at most 2*N* linear factors contributes at most *π* to the variation of the argument along 0 ≤ *s* ≤ 1, hence we have

$$|\arg P(\mathbf{z})| \le 2N\pi, \quad \mathbf{z} \in \mathbb{D}^N,\tag{8.25}$$

implying

$$\int\_{\mathbf{T}^N} |\arg P(re^{i\theta})| d\theta \le r^N (2\pi)^{N+1} N \le (2\pi)^{N+1} N. \tag{8.26}$$

The **real part**, *i.e.* ln <sup>|</sup>*P (***z***)*|, is singular on the distinguished boundary <sup>T</sup>*<sup>N</sup>* at the zeroes of *P (***z***)*, but its mean value is zero

$$\int\_{\mathbb{T}^N} \ln|P(re^{i\theta})|d\theta = \text{Re}\left(\int\_{\mathbb{T}^N} \log P(re^{i\theta})d\theta\right) = \text{Re}\left((2\pi)^N \log(P(\mathbf{0}))\right) = 0$$

and it is uniformly bounded from above

$$\ln|P(\mathbf{z})| \le \ln K,\quad\text{where }K = \sup\_{\mathbf{z} \in \mathbb{D}^N} |P(\mathbf{z})|.\tag{8.27}$$

Hence it is absolutely integrable

$$\begin{split} \int\_{\mathbb{T}^N} \left| \ln|P(re^{i\theta})| \right| d\theta &= \int\_{\mathbb{T}^N} \left| \ln|P(re^{i\theta})| - \ln K + \ln K \right| d\theta \\ &\leq - \int\_{\mathbb{T}^N} \left( \ln|P(re^{i\theta})| - \ln K\right) d\theta + (2\pi)^N \ln K \\ &= 2(2\pi)^N \ln K. \end{split} \tag{8.28}$$

Summing (8.26) and (8.28) we obtain the *r*-independent estimate

$$\int\_{\mathbf{T}^N} \left| \log P(re^{i\theta}) \right| d\theta \le 2(2\pi)^N \left( \pi N + \ln K \right), \tag{8.29}$$

implying that Taylor coefficients in (8.24) are uniformly bounded:

$$\left|c\_{\mathbf{n}}\right| \le 2\left(\pi N + \ln K\right) =: C\_{1}.\tag{8.30}$$

Here and in what follows *Cj* denote different positive constants. We use Taylor's expansion (8.23) to get

$$\begin{split} \log p(k+i0) = \log P(e^{i(k+i0)\ell}) &= \sum\_{\mathbf{n}\in\mathbb{Z}\_+^N} c\_\mathbf{n} e^{i(k+i0)(\mathbf{n}\cdot\ell)} \\ &= \sum\_{m=0}^\infty \sum\_{\substack{\mathbf{n}\in\mathbb{Z}\_+^N \\ |\mathbf{n}|=m}} c\_\mathbf{n} e^{i(k+i0)(\mathbf{n}\cdot\ell)}. \end{split}\tag{8.31}$$

Every test function *ϕ* from the Schwartz class S satisfies the estimate

$$\left| \int\_{\mathbb{R}} e^{ikd} \varphi(k) dk \right| \le \frac{C\_2}{d^{N+1}}, \quad C\_2 = C\_2(\varphi), \tag{8.32}$$

therefore we have

$$\begin{split} \left| \int\_{\mathbb{R}} \sum\_{\substack{\mathbf{n} \in \mathbb{Z}\_{+}^{N} \\ |\mathbf{n}| = m}} c\_{\mathbf{n}} e^{ik(\mathbf{n} \cdot \boldsymbol{\ell})} \varphi(k) dk \right| &\leq \underbrace{m^{N-1}}\_{\substack{\text{the number of} \\ \text{degree } m \text{ terms}}} \underbrace{C\_{1}}\_{=\sup |c\_{\mathbf{n}}|} \sup\_{|\mathbf{n}| = m} \left| \int\_{\mathbb{R}} e^{ik(\mathbf{n} \cdot \boldsymbol{\ell})} \varphi(k) dk \right| \\ &\leq m^{N-1} C\_{1} \frac{C\_{2}}{(m \inf\{\ell\_{n}\})^{N+1}} \\ &\leq \frac{C\_{3}}{m^{2}}, \end{split}$$

implying that the series for log *p(k)* is absolutely convergent. The series can also be differentiated termwise.

Taking into account (8.22) we conclude that the spectral measure is given by the series

$$\begin{split} \mu(k) &= (2m\_s(0) - m\_a(0))\delta(k) \\ &- \frac{1}{2\pi} \left( \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \, c\_\mathbf{n} e^{-ik\mathbf{n} \cdot \boldsymbol{\ell}} + \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \, c\_\mathbf{n} e^{ik\mathbf{n} \cdot \boldsymbol{\ell}} \right) \\ &= (2m\_s(0) - m\_a(0))\delta(k) - \frac{1}{\pi} \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \, c\_\mathbf{n} \cos \left( k\mathbf{n} \cdot \boldsymbol{\ell} \right), \end{split} \tag{8.33}$$

understood in the sense of distributions. The series converge in the distributional sense, as each of the two infinite series is a distributional derivative of the series for log *p(k)*. One can see this directly by refining the estimates used above, as will be done below for | ˆ*μ*|.

**Step** 4**.** | ˆ*μ*| **is tempered.** The Fourier transform of the spectral measure is

$$
\hat{\mu}(k) = (2m\_s(0) - m\_a(0)) - \left( \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \ c\_\mathbf{n} \delta\_{-\mathbf{n} \cdot \boldsymbol{\ell}} + \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \ c\_\mathbf{n} \delta\_{\mathbf{n} \cdot \boldsymbol{\ell}} \right), \tag{8.34}
$$

and we already know that it is a tempered distribution. Then | ˆ*μ*| is given by

$$
|\hat{\mu}|(k) = |2m\_s(0) - m\_a(0)| + \left( \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \, |c\_\mathbf{n}| \delta\_{-\mathbf{n} \cdot \boldsymbol{\ell}} + \sum\_{\mathbf{n} \in \mathbb{Z}\_+^N} (\mathbf{n} \cdot \boldsymbol{\ell}) \, |c\_\mathbf{n}| \delta\_{\mathbf{n} \cdot \boldsymbol{\ell}} \right). \tag{8.35}
$$

We use the following estimate (similar to (8.32)) valid for any test function from the Schwartz class

$$|\varphi(d)| \le \frac{C\_4}{d^{N+2}}.$$

Then we have

$$\begin{split} \left| \int\_{\mathbb{R}} \sum\_{\substack{\mathbf{n}\in\mathbb{Z}\_+^N \\ |\mathbf{n}|=m}} (\mathbf{n} \cdot \boldsymbol{\ell})|c\_\mathbf{n}| \delta\_{\mathbf{n}\cdot\boldsymbol{\ell}}(l)\varphi(l)dl \right| &\leq \sum\_{\substack{\mathbf{n}\in\mathbb{Z}\_+^N \\ |\mathbf{n}|=m}} (\mathbf{n} \cdot \boldsymbol{\ell})|c\_\mathbf{n}| \, |\varphi(\mathbf{n} \cdot \boldsymbol{\ell})| \\ &\leq \underbrace{m^{N-1}}\_{\substack{\text{the number of} \\ \text{degree } m \text{ terms}}} \underbrace{m \sup\{\ell\_n\}}\_{\substack{\text{upper estimate} \\ \text{for } \mathbf{n}\cdot\boldsymbol{\ell}}} \underbrace{C\_1}\_{=\sup|c\_\mathbf{n}|} \sup |\varphi(\mathbf{n} \cdot \boldsymbol{\ell})| \\ &\leq m^{N-1}\, m \sup\{\ell\_n\}\, C\_1 \frac{C\_4}{(m \inf\{\ell\_n\})^{N+2}} \leq \frac{C\_5}{m^2}, \end{split}$$

hence the series for | ˆ*μ*|[*ϕ*] is absolutely convergent for any test function from S. It follows that | ˆ*μ*| is a tempered distribution.

#### **Part II. Trace Formula**

**Step** 5**. Spectral measure via the trace of the scattering matrices.** Note that we do not have an explicit formula for the coefficients *c***n**, hence our next goal will be to obtain such a formula using periodic orbits on Γ. We shall repeat essentially the same calculations using, instead of the secular polynomials, formula (8.8) expressing the secular function via edge and vertex scattering matrices. Recall that S*(k)* is a product of the vertex and edge scattering matrices, S*(k)* = **SSe***(k)*, where **S** is energy independent. Moreover we use the fact that

$$\|\mathbb{S}^{\pm 1}(k \pm i\epsilon)\| < 1, \quad \epsilon > 0,\tag{8.36}$$

since **S** is unitary and **Se** satisfies the same inequality. The spectral measure is given by

$$\begin{split} \mu(k) &= (2m\_{s}(0) - m\_{a}(0))\delta(k) \\ &\quad + \frac{1}{2\pi i} \lim\_{\epsilon \searrow 0} \left( \frac{d}{dk} \log \det \left( \mathbb{S}(k - i\epsilon) - I \right) - \frac{d}{dk} \log \det \left( \mathbb{S}(k + i\epsilon) - I \right) \right) \\ &= \chi \delta(k) + \frac{1}{2\pi i} \lim\_{\epsilon \searrow 0} \left( \text{Tr} \frac{d}{dk} \log \left( \mathbb{S}(k - i\epsilon) - I \right) - \text{Tr} \frac{d}{dk} \log \left( \mathbb{S}(k + i\epsilon) - I \right) \right) \\ &= \chi \delta(k) + \frac{1}{2\pi i} \lim\_{\epsilon \searrow 0} \text{Tr} \frac{d}{dk} \left( - \sum\_{m=1}^{\infty} \frac{1}{m} \mathbb{S}^{-m}(k - i\epsilon) + \log \mathbb{S}(k - i\epsilon) + \sum\_{m=1}^{\infty} \frac{1}{m} \mathbb{S}^{m}(k + i\epsilon) \right) \\ &= \chi \delta(k) + \frac{1}{2\pi i} \, \text{Tr} \frac{d}{dk} \left( \log \mathbb{S}(k) + \sum\_{m=1}^{\infty} \frac{1}{m} \left( \mathbb{S}^{m}(k) - \mathbb{S}^{-m}(k) \right) \right), \end{split}$$

where we used that 2*ms(*0*)* − *ma(*0*)* = 2*β*<sup>0</sup> − *(*2*β*<sup>0</sup> − *χ)* = *χ* (Theorem 8.2) and the expansion

$$\log(I - A) = -\sum\_{m=1}^{\infty} \frac{1}{m} A^m, \quad \|A\| < 1,$$

applied with *A* = S<sup>±1</sup>*(k* ± *iε)*, which is justified by (8.36).
Taking into account that S′*(k)* = **SS**′**<sup>e</sup>***(k)* = **SSe***(k)i***D** = S*(k)i***D***,* where **D** is the diagonal matrix given by (8.12), we see that the distribution *μ* is given by the sum of the series

$$
\mu(k) = \chi \delta(k) + \frac{1}{2\pi} \left( \text{Tr} \sum\_{m = -\infty}^{+\infty} \mathbb{S}^m(k) \mathbf{D} \right).
$$

Our next goal is to calculate the trace having a geometric picture in mind. We are going to calculate the traces corresponding to each power *m* separately in direct correspondence with formula (8.31), where the Taylor series was summed putting together terms having the same degree *m*. It is reasonable to start with small powers.


*m* = 0 We have

$$\operatorname{Tr} \mathbb{S}^0(k) \mathbf{D} = \operatorname{Tr} \mathbf{D} = 2 \mathcal{L}.$$

*m* = 1 We calculate the contribution ⟨*e*1*,* S*(k)***D***e*1⟩*.* Let us assume *w.l.o.g.* that the edge [*x*1*, x*2] connects the vertices *V* <sup>1</sup> and *V* 2. We get S*(k)***D***e*<sup>1</sup> = **SSe***(k)***D***e*<sup>1</sup> by applying the three matrices one after the other:

$$
\vec{e}\_1 \xrightarrow{\mathbf{D}} \ell\_1 \vec{e}\_1 \xrightarrow{\mathbf{S}\_\mathbf{e}} \ell\_1 e^{ik\ell\_1} \vec{e}\_2 \xrightarrow{\mathbf{S}} \ell\_1 e^{ik\ell\_1} \sum\_{x\_j \in V^2} \mathbf{S}\_{j2} \vec{e}\_j,\tag{8.37}$$

(see the first three pictures in Fig. 8.1). We denote here by **S***ij* the entry of the matrix **S** corresponding to the transition from the endpoint *xj* to the endpoint *xi*. The result ⟨*e*1*,* S*(k)***D***e*1⟩ is non-zero only if both endpoints *x*<sup>1</sup> and *x*<sup>2</sup> belong to the same vertex, in other words if the edge [*x*1*, x*2] forms a loop (see Fig. 8.2). The contribution is then equal to

$$
\langle \vec{e}\_1, \mathbb{S}(k)\mathbf{D}\vec{e}\_1 \rangle = \ell\_1 e^{ik\ell\_1} \mathbf{S}\_{12}.
$$

The remaining first order contributions ⟨*ej ,* S*(k)***D***e<sup>j</sup>* ⟩*, j* = 1*,* 2*,...,* 2*N,* are calculated in the same way. Using the discrete length *d(γ )* we have

$$\text{Tr}\,\mathbb{S}(k)\mathbf{D} = \sum\_{\substack{\gamma\in\mathcal{P} \\ d(\gamma)=1}} \ell(\text{prim}\,(\gamma))\mathbf{S}\_{\mathbf{v}}(\gamma)e^{ik\ell(\gamma)},\tag{8.38}$$

where the product of scattering coefficients **Sv***(γ )* reduces to the single scattering coefficient picked up on the path, and every path coincides with its primitive: ℓ*(γ )* = ℓ*(*prim *(γ ))*. The summation is over all loops in Γ. If the graph has no loops, then the total contribution from the first order term is zero.

**Fig. 8.1** Edge [*x*1*, x*2] does not form a loop. The numbers indicate positions of the endpoints *xj*

**Fig. 8.2** Edge [*x*1*, x*2] forms a loop. Emergence of non-zero contributions

Note that each loop contributes twice since we distinguish paths going in the opposite direction. For example contributions from the loop presented in Fig. 8.2 are

$$\ell\_1 e^{ik\ell\_1}\mathbf{S}\_{12} \quad \text{and} \quad \ell\_1 e^{ik\ell\_1}\mathbf{S}\_{21}.$$
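For the single-loop graph these lowest contributions can be verified directly; a numerical sketch (the degree-two standard vertex scattering matrix *(*2*/d)***J** − *I* is taken for granted, the rest of the encoding is assumed):

```python
import numpy as np

# One edge of length l forming a loop at a single vertex of degree 2.
l, k = 1.3, 0.7
S = np.array([[0.0, 1.0], [1.0, 0.0]])                  # (2/2) J - I
Se = np.exp(1j * k * l) * np.array([[0.0, 1.0], [1.0, 0.0]])
Sk = S @ Se                                             # S(k) = S S_e(k)
D = np.diag([l, l])                                     # both endpoints of E_1

# m = 0:  Tr D = 2 L
assert np.isclose(np.trace(D), 2 * l)
# m = 1: the two orientations of the loop, l e^{ikl} S_12 + l e^{ikl} S_21
assert np.isclose(np.trace(Sk @ D), 2 * l * np.exp(1j * k * l))
print("trace contributions match")
```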

*m* = 2 As before we start by calculating the contribution ⟨*e*1*,* S<sup>2</sup>*(k)***D***e*1⟩. Assume first that the edge *E*<sup>1</sup> does not form a loop; then to calculate the contribution we may continue the procedure presented in Fig. 8.1. We need to determine

$$\mathbf{S\_e}(\ell\_1 e^{ik\ell\_1} \sum\_{x\_j \in V^2} \mathbf{S}\_{j2}\vec{e}\_j) = \ell\_1 e^{ik\ell\_1} \sum\_{x\_j \in V^2} \mathbf{S}\_{j2}\mathbf{S\_e}\vec{e}\_j. \tag{8.39}$$

Each vector **Se***e<sup>j</sup>* is just the vector associated with the opposite endpoint of the edge *xj* belongs to, multiplied by the exponential. For example if *<sup>x</sup>*<sup>3</sup> <sup>∈</sup> *<sup>V</sup>* <sup>2</sup> (as in Fig. 8.1), then

$$\mathbf{S}\_{\mathbf{e}}\vec{e}\_3 = e^{ik\ell\_2}\vec{e}\_4.$$

Denoting by *V* <sup>3</sup> the vertex *x*<sup>4</sup> belongs to we obtain

$$\mathbf{SS}\_{\mathbf{e}}\vec{e}\_3 = \sum\_{x\_l \in V^3} \mathbf{S}\_{l4} e^{ik\ell\_2} \vec{e}\_l.$$

The scalar product with *e*<sup>1</sup> is non-zero just in two cases:

• If *e<sup>j</sup>* in (8.39) coincides with *e*<sup>2</sup> corresponding to the reflection at *<sup>V</sup>* 2. Then we have

$$\mathbb{S}(k)\vec{e}\_2 = \sum\_{x\_l\in V^{1}} \mathbf{S}\_{l1} e^{ik\ell\_1} \vec{e}\_{l} \Rightarrow \langle \vec{e}\_1, \ell\_1 e^{ik\ell\_1} \mathbf{S}\_{22}\, \mathbb{S}(k) \vec{e}\_2 \rangle = \ell\_1 e^{2ik\ell\_1} \mathbf{S}\_{11} \mathbf{S}\_{22}.$$

This term is always present.

• If *e<sup>j</sup>* in (8.39) is different from *e*<sup>2</sup> but the corresponding edge is parallel to [*x*1*, x*2], like the edge [*x*5*, x*6] in Fig. 8.1. Using notations from the figure we get

$$\mathbb{S}(k)\vec{e}\_{5} = \sum\_{x\_l\in V^{1}} \mathbf{S}\_{l6}e^{ik\ell\_3}\vec{e}\_{l} \Rightarrow \langle \vec{e}\_{1}, \ell\_{1}e^{ik\ell\_{1}} \mathbf{S}\_{52}\, \mathbb{S}(k)\vec{e}\_{5}\rangle = \ell\_{1}e^{ik(\ell\_{1}+\ell\_{3})}\mathbf{S}\_{16}\mathbf{S}\_{52}.$$

This term is present only if there are two parallel edges, *i.e.* there is a closed path of discrete length 2 supported by two edges.

Assume now that *E*<sup>1</sup> forms a loop; then modifying formula (8.37) we get

$$\mathbf{SS\_{e}}\mathbf{D}\vec{e}\_{1} = \ell\_{1}e^{ik\ell\_{1}}\sum\_{x\_j\in V^{1}}\mathbf{S}\_{j2}\vec{e}\_{j}.$$

The contribution

$$\langle \vec{e}\_{\mathsf{l}}, \mathbf{SS}\_{\mathsf{e}} \mathbf{SS}\_{\mathsf{e}} \mathbf{D} \vec{e}\_{\mathsf{l}} \rangle$$

is non-zero in just three cases:

• the endpoint *xj* coincides with *x*<sup>2</sup> with the contribution

$$\ell\_1 e^{2ik\ell\_1} \mathbf{S}\_{11} \mathbf{S}\_{22};$$

• the endpoint *xj* coincides with *x*<sup>1</sup> with the contribution

$$\ell\_1 e^{2ik\ell\_1} \mathbf{S}\_{12} \mathbf{S}\_{12};$$

• the endpoint *xj* belongs to an edge different from *E*1, say *E*2, forming a loop with the contribution

$$\ell\_1 e^{ik(\ell\_1 + \ell\_2)} \mathbf{S}\_{13} \mathbf{S}\_{32}.$$

Summing over all *j* = 1*,* 2*,...,* 2*N* we see that every oriented path of discrete length 2 contributes to the trace of S<sup>2</sup>*(k)***D**. The paths going through two vertices or two different edges contribute twice.

Every edge *En* not forming a loop determines the unique path *γ*<sup>1</sup> going back and forth along it (see Fig. 8.3a). The contribution from this path

**Fig. 8.3** Different paths of discrete length 2

comes from the following two scalar products ⟨*e*2*n*−1*,* S<sup>2</sup>*(k)***D***e*2*n*−1⟩ and ⟨*e*2*n,* S<sup>2</sup>*(k)***D***e*2*n*⟩ and is equal to

$$\begin{aligned} &\ell\_n e^{ik2\ell\_n} \mathbf{S}\_{2n-1\ 2n-1} \mathbf{S}\_{2n\ 2n} + \ell\_n e^{ik2\ell\_n} \mathbf{S}\_{2n\ 2n} \mathbf{S}\_{2n-1\ 2n-1} \\ &= \ell(\text{prim}\,(\gamma\_1)) e^{ik\ell(\gamma\_1)} \mathbf{S}\_{\mathbf{v}}(\gamma\_1) .\end{aligned}$$

The corresponding primitive path coincides with *γ*1, hence ℓ*(*prim *(γ*1*))* = ℓ*(γ*1*)* = 2ℓ*n.* The product of scattering coefficients is **Sv***(γ*1*)* = **S**2*n*−1 2*n*−1**S**2*n* 2*n.*

Consider now the case, where there is an edge parallel to *En*. We denote the edge by *En*+<sup>1</sup> and assume *En* and *En*+<sup>1</sup> are oriented in the opposite directions (see Fig. 8.3b). Then there are two more discrete length 2 paths

$$\gamma\_2 = (\mathbf{x}\_{2n-1}, \mathbf{x}\_{2n}, \mathbf{x}\_{2n+1}, \mathbf{x}\_{2n+2}) \quad \text{and} \quad \gamma\_3 = (\mathbf{x}\_{2n+2}, \mathbf{x}\_{2n+1}, \mathbf{x}\_{2n}, \mathbf{x}\_{2n-1})$$

(marked by cyan and magenta colours respectively) contributing via the scalar products

$$\langle \vec{e}\_{2n-1}, \mathbb{S}^2(k) \mathbf{D} \vec{e}\_{2n-1} \rangle, \quad \langle \vec{e}\_{2n+1}, \mathbb{S}^2(k) \mathbf{D} \vec{e}\_{2n+1} \rangle$$

and

$$\langle \vec{e}_{2n}, \mathbb{S}^2(k) \mathbf{D} \vec{e}_{2n} \rangle, \quad \langle \vec{e}_{2n+2}, \mathbb{S}^2(k) \mathbf{D} \vec{e}_{2n+2} \rangle,$$

respectively. The corresponding contributions are

$$\begin{aligned} &\ell_n e^{ik(\ell_n + \ell_{n+1})} \mathbf{S}_{2n-1\,2n+2} \mathbf{S}_{2n+1\,2n} + \ell_{n+1} e^{ik(\ell_n + \ell_{n+1})} \mathbf{S}_{2n+1\,2n} \mathbf{S}_{2n-1\,2n+2} \\ &\quad = \ell(\text{prim}\,(\gamma_2)) e^{ik\ell(\gamma_2)} \mathbf{S}_{\mathbf{v}}(\gamma_2), \end{aligned}$$

and

$$\begin{aligned} &\ell_n e^{ik(\ell_n + \ell_{n+1})} \mathbf{S}_{2n\,2n+1} \mathbf{S}_{2n+2\,2n-1} + \ell_{n+1} e^{ik(\ell_n + \ell_{n+1})} \mathbf{S}_{2n+2\,2n-1} \mathbf{S}_{2n\,2n+1} \\ &\quad = \ell(\text{prim}\,(\gamma_3)) e^{ik\ell(\gamma_3)} \mathbf{S}_{\mathbf{v}}(\gamma_3), \end{aligned}$$

where $\ell(\text{prim}\,(\gamma_2)) = \ell(\text{prim}\,(\gamma_3)) = \ell(\gamma_2) = \ell(\gamma_3) = \ell_n + \ell_{n+1}$ and

$$\mathbf{S}_{\mathbf{v}}(\gamma_2) = \mathbf{S}_{2n+1\,2n} \mathbf{S}_{2n-1\,2n+2} \quad \text{and} \quad \mathbf{S}_{\mathbf{v}}(\gamma_3) = \mathbf{S}_{2n+2\,2n-1} \mathbf{S}_{2n\,2n+1}.$$

Assume now that the edge $E_n$ forms a loop (see Fig. 8.3c); then we have the path $\gamma_1$ going once back and forth with the same contribution as above,

$$\ell(\text{prim}\,(\gamma_1)) e^{ik\ell(\gamma_1)} \mathbf{S}_{\mathbf{v}}(\gamma_1).$$

In addition we have two more oriented periodic paths

$$\gamma_4 = (x_{2n-1}, x_{2n}, x_{2n-1}, x_{2n}) \quad \text{and} \quad \gamma_5 = (x_{2n}, x_{2n-1}, x_{2n}, x_{2n-1})$$

going around the first loop twice in different directions marked by blue and orange colors respectively (see Fig. 8.3c). Each of the paths contributes to just one of the two scalar products with

$$\begin{aligned} \gamma_4: \quad &\langle \vec{e}_{2n-1}, \mathbb{S}^2(k)\mathbf{D}\vec{e}_{2n-1} \rangle = \ell_n e^{2ik\ell_n} \mathbf{S}_{2n-1\,2n} \mathbf{S}_{2n-1\,2n} = \ell(\text{prim}\,(\gamma_4)) e^{ik\ell(\gamma_4)} \mathbf{S}_{\mathbf{v}}(\gamma_4), \\ \gamma_5: \quad &\langle \vec{e}_{2n}, \mathbb{S}^2(k)\mathbf{D}\vec{e}_{2n} \rangle = \ell_n e^{2ik\ell_n} \mathbf{S}_{2n\,2n-1} \mathbf{S}_{2n\,2n-1} = \ell(\text{prim}\,(\gamma_5)) e^{ik\ell(\gamma_5)} \mathbf{S}_{\mathbf{v}}(\gamma_5), \end{aligned}$$

with $\ell(\text{prim}\,(\gamma_4)) = \ell(\text{prim}\,(\gamma_5)) = \ell_n$, $\ell(\gamma_4) = \ell(\gamma_5) = 2\ell_n$ and

$$\mathbf{S}_{\mathbf{v}}(\gamma_4) = \mathbf{S}_{2n-1\,2n} \mathbf{S}_{2n-1\,2n}, \quad \mathbf{S}_{\mathbf{v}}(\gamma_5) = \mathbf{S}_{2n\,2n-1} \mathbf{S}_{2n\,2n-1}.$$

We also have analogues of the paths *γ*<sup>2</sup> and *γ*<sup>3</sup> going first along one of the loops and returning back along the other one. There are four such paths since the loops can be passed in different directions (see Fig. 8.3d). We denote these paths by *γ*6*, γ*7*, γ*8*,* and *γ*9.

The result can be written as a sum over all periodic orbits with discrete length 2

$$\sum_{\substack{\gamma \in \mathcal{P} \\ d(\gamma) = 2}} \ell(\text{prim}\,(\gamma))\, \mathbf{S}_{\mathbf{v}}(\gamma)\, e^{ik\ell(\gamma)}. \tag{8.40}$$

**Step 7. Arbitrary oriented closed paths.** We are now ready to look at the contributions from higher powers of $\mathbb{S}(k)$. Our analysis shows that a term $\langle \vec{e}_i, \mathbb{S}^m(k)\mathbf{D}\vec{e}_i \rangle$ gives a nonzero contribution only if there is a closed path $\gamma$ on $\Gamma$ with discrete length $d(\gamma) = m$ passing through the endpoint $x_i$.

Let us calculate the total contribution from any path *γ* of discrete length *d(γ )* = *m*. Assume that the path is a multiple of the primitive path

$$\text{prim}\,(\gamma) = (x_{i_1}, x_{i_2}, x_{i_3}, \dots, x_{i_{2Q}}), \quad \gamma = R\, \text{prim}\,(\gamma),$$

where the primitive path is formed by $Q$ edges and has to be repeated $R$ times to obtain $\gamma$, so that $m = QR$. This path contributes to the scalar products

$$\langle \vec{e}_{i_{2q-1}}, \mathbb{S}^m(k) \mathbf{D} \vec{e}_{i_{2q-1}} \rangle, \quad q = 1, 2, \dots, Q.$$

If every endpoint $x_{i_{2q-1}}$ appears just once in $\text{prim}\,(\gamma)$, then the contribution from each such point is equal to the length of the edge to which $x_{i_{2q-1}}$ belongs, multiplied by $e^{ik\ell(\gamma)}$ and the product $\mathbf{S}_{\mathbf{v}}(\gamma)$ of all vertex scattering coefficients along $\gamma$. If a certain endpoint appears several times, then the above contribution is multiplied by the number of times $x_{i_{2q-1}}$ appears in $\text{prim}\,(\gamma)$. Summing up the contributions from the different vertices on the primitive path amounts to multiplying $\mathbf{S}_{\mathbf{v}}(\gamma) e^{ik\ell(\gamma)}$ by the length of the primitive path:

$$\ell(\text{prim}\,(\gamma))\, \mathbf{S}_{\mathbf{v}}(\gamma)\, e^{ik\ell(\gamma)}.$$

Summation over all paths of discrete length *m* leads to

$$\text{Tr}\, \mathbb{S}^m(k)\mathbf{D} = \sum_{\substack{\gamma \in \mathcal{P} \\ d(\gamma) = m}} \ell(\text{prim}\,(\gamma))\, \mathbf{S}_{\mathbf{v}}(\gamma)\, e^{ik\ell(\gamma)}. \tag{8.41}$$

Formula (8.20) is obtained by summing over all *m* and taking into account the fact that the contributions from *m* and −*m* are complex conjugates of each other. The second trace formula (8.21) is obtained via Fourier transform.

Formula (8.20) can be modified using summation over primitive orbits, provided the graph has more than one edge (is different from *(*1*.*1*)* and *(*1*.*2*)*).

$$\begin{split} \mu(k) &= 2m_s(0)\delta(k) + \sum_{k_n \neq 0} \left( \delta_{k_n}(k) + \delta_{-k_n}(k) \right) \\ &= \chi \delta(k) + \frac{\mathcal{L}}{\pi} + \frac{1}{2\pi} \sum_{\gamma \in \mathcal{P}_{\text{prim}}} \ell(\gamma)\, \frac{2\, \mathbf{S}_{\mathbf{v}}(\gamma) \left( \cos k\ell(\gamma) - \mathbf{S}_{\mathbf{v}}(\gamma) \right)}{1 - 2 \cos k\ell(\gamma)\, \mathbf{S}_{\mathbf{v}}(\gamma) + \mathbf{S}_{\mathbf{v}}^2(\gamma)}, \end{split} \tag{8.42}$$

where $\mathcal{P}_{\text{prim}}$ denotes the set of primitive oriented paths, *i.e.* those oriented paths that coincide with their primitives. To prove the formula we note that every primitive path $\gamma$ determines the sequence of non-primitive paths $2\gamma, 3\gamma, \dots, n\gamma, \dots$ Taking into account that

$$e^{ik\ell(n\gamma)} = e^{ink\ell(\gamma)} = \left( e^{ik\ell(\gamma)} \right)^n, \quad \mathbf{S}_{\mathbf{v}}(n\gamma) = \left( \mathbf{S}_{\mathbf{v}}(\gamma) \right)^n, \tag{8.43}$$

we see that the contributions from the multiples of $\gamma$ form a convergent geometric progression, since $|\mathbf{S}_{\mathbf{v}}(\gamma)| < 1$:<sup>9</sup>

$$\begin{split} \sum_{n=1}^{\infty} \ell(\text{prim}\,(n\gamma)) e^{ik\ell(n\gamma)} \mathbf{S}_{\mathbf{v}}(n\gamma) &= \sum_{n=1}^{\infty} \ell(\gamma) \left( e^{ik\ell(\gamma)} \mathbf{S}_{\mathbf{v}}(\gamma) \right)^n \\ &= \ell(\gamma)\, \frac{e^{ik\ell(\gamma)} \mathbf{S}_{\mathbf{v}}(\gamma)}{1 - e^{ik\ell(\gamma)} \mathbf{S}_{\mathbf{v}}(\gamma)}. \end{split}$$

Adding the conjugated contribution we get

$$\begin{split} &\sum_{n=1}^{\infty} \ell(\text{prim}\,(n\gamma)) e^{ik\ell(n\gamma)} \mathbf{S}_{\mathbf{v}}(n\gamma) + \sum_{n=1}^{\infty} \ell(\text{prim}\,(n\gamma)) e^{-ik\ell(n\gamma)} \mathbf{S}_{\mathbf{v}}(n\gamma) \\ &\quad = \ell(\gamma)\, \frac{2\, \mathbf{S}_{\mathbf{v}}(\gamma) (\cos k\ell(\gamma) - \mathbf{S}_{\mathbf{v}}(\gamma))}{1 - 2 \cos k\ell(\gamma)\, \mathbf{S}_{\mathbf{v}}(\gamma) + \mathbf{S}_{\mathbf{v}}^2(\gamma)}. \end{split}$$
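The resummation above can be verified numerically. The following minimal sketch, assuming sample values for $\mathbf{S}_{\mathbf{v}}(\gamma)$, $\ell(\gamma)$ and $k$ (the names `S`, `ell`, `k` are illustrative, not from the text), compares the truncated geometric series plus its conjugate with the closed-form expression:

```python
import cmath
import math

S, ell, k = 0.6, 2.3, 1.7     # assumed sample values with |S_v(gamma)| < 1

# truncated geometric series over the repetitions n*gamma, plus its conjugate
z = S * cmath.exp(1j * k * ell)
series = sum(ell * z ** n for n in range(1, 500))
lhs = 2 * series.real

# closed form obtained after adding the conjugated contribution
c = math.cos(k * ell)
rhs = ell * 2 * S * (c - S) / (1 - 2 * c * S + S * S)

assert abs(lhs - rhs) < 1e-10
```

Since $|z| = 0.6$, the truncation after 500 terms is far below the tolerance.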

The two exceptional graphs (1.1) and (1.2) lead to the classical Poisson summation formula, as shown in the examples below.

**Example 8.5** Consider the segment graph (1.1) of length $\ell$. The spectrum of the standard Laplacian is $\lambda_n = \frac{\pi^2}{\ell^2} n^2$, $n = 0, 1, 2, \dots$, hence $m_s(0) = 1$. The set of periodic orbits is very simple: every orbit is obtained by crossing the interval back and forth $n$ times, so that $\ell(\gamma) = 2\ell n$ and $\ell(\text{prim}\,(\gamma)) = 2\ell$.

Substitution into the trace formula (8.20) gives:

$$2\delta(k) + \sum_{n=1}^{\infty} \left( \delta_{\frac{\pi}{\ell}n}(k) + \delta_{-\frac{\pi}{\ell}n}(k) \right) = \delta(k) + \frac{\ell}{\pi} + \frac{1}{\pi} \sum_{n=1}^{\infty} 2\ell \cos 2k\ell n$$

$$\Longrightarrow \quad \sum_{n \in \mathbb{Z}} \delta_{\frac{\pi}{\ell}n}(k) = \frac{\ell}{\pi} \sum_{n \in \mathbb{Z}} e^{2ik\ell n}.$$

The formula takes its simplest form if one chooses $\ell = \pi$:

$$\sum\_{n\in\mathbb{Z}}\delta\_n(k) = \sum\_{n\in\mathbb{Z}} e^{i2\pi nk},\tag{8.44}$$

which is nothing else than the classical Poisson summation formula. This formula is going to play a very important role in Chap. 10 where crystalline measures are constructed (see in particular formula (10.13)).
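The classical Poisson summation formula can be checked numerically by applying it to a Gaussian, for which both sides converge rapidly. The sketch below verifies the equivalent theta-function identity $\sum_n e^{-\pi n^2 t} = t^{-1/2} \sum_n e^{-\pi n^2 / t}$ for an arbitrary sample value of $t$ (the variable names are illustrative):

```python
import math

# Poisson summation applied to the Gaussian f(x) = exp(-pi * t * x^2):
# the sum over integers equals t^(-1/2) times the sum with parameter 1/t.
t = 0.7
lhs = sum(math.exp(-math.pi * n * n * t) for n in range(-60, 61))
rhs = t ** -0.5 * sum(math.exp(-math.pi * n * n / t) for n in range(-60, 61))
assert abs(lhs - rhs) < 1e-12
```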

<sup>9</sup> Here we use that the graph is different from (1.1) and (1.2), and therefore every path crosses at least one vertex of degree different from 1 and 2; the scattering coefficients may have unit modulus only at vertices of degree 1 and 2.

**Example 8.6** Consider now the cycle graph (1.2) of length $\ell$. The spectrum of the standard Laplacian is $\lambda_n = \frac{4\pi^2}{\ell^2} n^2$, $n = 0, 1, 1, 2, 2, 3, \dots$, with $m_s(0) = 1$ and all other eigenvalues doubly degenerate. The set of periodic orbits is much richer: each time one may go along the cycle either clockwise or counterclockwise, but the reflection coefficients are zero, so that $\mathbf{S}_{\mathbf{v}}(\gamma) = 0$ for all orbits containing reflections at the vertex. Hence it is enough to consider just the orbits going $n$ times clockwise or counterclockwise, so that $\ell(\gamma) = \ell n$ and $\ell(\text{prim}\,(\gamma)) = \ell$.

Substitution into the trace formula (8.20) gives:

$$2\delta(k) + 2\sum_{n=1}^{\infty} \left( \delta_{\frac{2\pi}{\ell}n}(k) + \delta_{-\frac{2\pi}{\ell}n}(k) \right) = 0 \cdot \delta(k) + \frac{\ell}{\pi} + \frac{2}{\pi} \sum_{n=1}^{\infty} \ell \cos k\ell n,$$

where the sum over the orbits is taken twice because a single sum counts only the periodic orbits going in one of the two directions. It follows that

$$2\sum_{n \in \mathbb{Z}} \delta_{\frac{2\pi}{\ell}n}(k) = \frac{\ell}{\pi} \sum_{n \in \mathbb{Z}} e^{ik\ell n}.$$

One gets formula (8.44) by choosing $\ell = 2\pi$.

The derived trace formula will be important when applied to inverse spectral problems for metric graphs. Let us mention that this formula is also interesting from a purely mathematical point of view, since the distribution $\mu(k) - \chi\delta(k)$ possesses a remarkable property: both the distribution itself and its Fourier transform are supported by discrete sets, being given by $\delta$-functions supported by $\{\pm k_n\}$ and $\pm\ell(\mathcal{P})$ respectively. One may think of this formula as a generalisation of the classical Poisson summation formula (8.44). We have seen that Poisson's formula is the special case of (8.20) where $\Gamma = (1.1)$ or $(1.2)$. If $\Gamma$ is formed by edges with integer lengths, then formulas (8.20) and (8.21) can be obtained by combining a finite number of classical Poisson formulas, since the spectrum is (more or less) periodic in this case (see Sect. 24.3). For graphs with incommensurate edge lengths the spectrum is not periodic and the derived formula cannot be obtained as a finite combination of Poisson formulas. One may think of such quantum graphs as quasicrystals (see Sect. 10.2).

**Problem 34** Consider the two isospectral equilateral graphs presented in Fig. 2.11 (assuming standard vertex conditions). Describe the corresponding sets of periodic orbits and verify trace formula (8.20) by showing directly that the series on the right hand side of the formula are identical.

# **8.4 Trace Formula for Laplacians with Scaling-Invariant Vertex Conditions**

The trace formula can easily be generalised to non-standard conditions at the vertices as well as non-zero potential on the edges [93, 94, 455]. The proof goes almost without modifications for Laplacians with scaling-invariant vertex conditions.

Two points should be taken into account:

• the spectral and algebraic multiplicities of the eigenvalue zero have to be recalculated;

• the contributions from the negative powers of $\mathbb{S}(k)$ involve conjugated scattering coefficients.

To clarify the first point consider the following elementary example. Let $\Gamma$ be a compact connected graph with some pendant edges. Let $L^{\text{st},D}$ be the Laplace operator corresponding to Dirichlet conditions at the pendant vertices and standard vertex conditions at all other vertices. Then the operator $L^{\text{st},D}$ does not have zero as an eigenvalue, but the algebraic multiplicity of the eigenvalue zero may nevertheless be different from zero. The algebraic multiplicity can be calculated using essentially the same arguments as before, leading to $m_a(0) = M_D - \chi$, where $M_D$ denotes the number of Dirichlet vertices.

When calculating the contribution from the negative powers of $\mathbb{S}(k)$ one should take into account that the vertex scattering matrices are unitary and Hermitian. It follows that if $\mathbf{S}_{\mathbf{v}}(\gamma)$ is the product of scattering coefficients along a path $\gamma$, then the product of the inverse scattering coefficients along the same path is given by $\mathbf{S}^*_{\mathbf{v}}(\gamma) = \overline{\mathbf{S}_{\mathbf{v}}(\gamma)}$.

**Theorem 8.7** *Let $\Gamma$ be a finite compact metric graph with total length $\mathcal{L}$ and let $L^S(\Gamma)$ be the Laplace operator in $L_2(\Gamma)$ determined by properly connecting scaling-invariant vertex conditions described by unitary Hermitian matrices $S^m$, $m = 1, 2, \dots, M$. Then the spectral measure* (8.19) *is a tempered positive distribution, such that not only the Fourier transform $\hat{\mu}$ is tempered but also $|\hat{\mu}|$ is tempered.*

*The following two exact trace formulae establish the relation between the spectrum $\{k_n^2\}$ of $L^S(\Gamma)$ and the set $\mathcal{P}$ of closed paths on the metric graph $\Gamma$:*

$$\begin{split} \mu(k) &= 2m_s(0)\delta(k) + \sum_{k_n \neq 0} \left( \delta_{k_n}(k) + \delta_{-k_n}(k) \right) \\ &= (2m_s(0) - m_a(0))\delta(k) + \frac{\mathcal{L}}{\pi} \\ &\quad + \frac{1}{2\pi} \sum_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma)) \left( \mathbf{S}_{\mathbf{v}}(\gamma) e^{ik\ell(\gamma)} + \mathbf{S}_{\mathbf{v}}^*(\gamma) e^{-ik\ell(\gamma)} \right), \end{split} \tag{8.45}$$

**Fig. 8.4** Composed graph $\Gamma_2$


*and* 

$$\begin{split} \hat{\mu}(l) &= 2m_s(0) + \sum_{k_n \neq 0} 2 \cos k_n l \\ &= 2m_s(0) - m_a(0) + 2\mathcal{L}\delta(l) \\ &\quad + \sum_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma)) \left( \mathbf{S}_{\mathbf{v}}(\gamma) \delta_{\ell(\gamma)}(l) + \mathbf{S}_{\mathbf{v}}^*(\gamma) \delta_{-\ell(\gamma)}(l) \right), \end{split} \tag{8.46}$$

*where* 


One may generalise the derived trace formula further by including non-scaling-invariant vertex conditions [93, 94] or potentials on the edges [455].

**Isospectral Laplacians on Two Edges** Consider the metric graph $\Gamma_2$ formed by two edges of length $\pi/2$ connected at one common vertex $V^2$ (see Fig. 8.4). The remaining two vertices are $V^1$ and $V^3$.

The vertex $V^2$ has degree two and we assume there the most general scaling-invariant vertex conditions, given by an arbitrary Hermitian unitary $2 \times 2$ matrix $S_2$. Every such matrix has the form:

$$S_2(a,\theta) = \begin{pmatrix} a & \sqrt{1-a^2}e^{i\theta} \\ \sqrt{1-a^2}e^{-i\theta} & -a \end{pmatrix}, \quad a \in [-1,1],\ \theta \in [0,2\pi).$$
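The claimed properties of $S_2(a,\theta)$ can be checked numerically. The sketch below, using a few arbitrary sample parameter pairs, verifies that the matrix is Hermitian and squares to the identity (hence, being Hermitian, is also unitary):

```python
import cmath

def S2(a, theta):
    # the general Hermitian unitary 2x2 matrix from the text
    b = (1 - a * a) ** 0.5 * cmath.exp(1j * theta)
    return [[complex(a), b], [b.conjugate(), complex(-a)]]

for a, theta in [(-1.0, 0.0), (0.3, 1.1), (0.9, 5.0)]:   # sample parameters
    M = S2(a, theta)
    # Hermitian: off-diagonal entries are complex conjugates of each other
    assert abs(M[0][1] - M[1][0].conjugate()) < 1e-12
    # M^2 = I, so M is an involution and therefore unitary
    for i in range(2):
        for j in range(2):
            entry = sum(M[i][l] * M[l][j] for l in range(2))
            assert abs(entry - (1 if i == j else 0)) < 1e-12
```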

We assume Neumann conditions at *V* <sup>1</sup> and *V* <sup>3</sup> and denote the corresponding Laplacian by *L(a, θ )*.

It turns out that the operators *L(a, θ )* are isospectral, in particular all operators from the family are isospectral to the Neumann Laplacian on the single interval of length *π*—the operator *L(*0*,* 0*)*. Trace formula (8.45) will help us to understand the reason for isospectrality.

In the proof of Theorem 8.7 it is shown that the spectral measure *μ(k)* may be calculated via the formula

$$
\mu(k) = \chi \delta(k) + \frac{1}{2\pi} \left( \text{Tr} \sum_{n=-\infty}^{+\infty} \mathbb{S}^n(k) \mathbf{D} \right), \tag{8.47}
$$

where $\mathbb{S}(k) = \mathbf{S}\, \mathbf{S}_{\mathbf{e}}(k)$. The matrices $\mathbf{S}$, $\mathbf{S}_{\mathbf{e}}(k)$, $\mathbb{S}(k)$ and $\mathbf{D}$ are

$$\mathbf{S} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & a & \sqrt{1-a^2}e^{i\theta} & 0 \\ 0 & \sqrt{1-a^2}e^{-i\theta} & -a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \mathbf{S}_{\mathbf{e}}(k) = \begin{pmatrix} 0 & e^{i\frac{\pi}{2}k} & 0 & 0 \\ e^{i\frac{\pi}{2}k} & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{i\frac{\pi}{2}k} \\ 0 & 0 & e^{i\frac{\pi}{2}k} & 0 \end{pmatrix},$$

$$\mathbb{S}(k) = \begin{pmatrix} 0 & e^{i\frac{\pi}{2}k} & 0 & 0 \\ a e^{i\frac{\pi}{2}k} & 0 & 0 & \sqrt{1-a^2}e^{i\theta} e^{i\frac{\pi}{2}k} \\ \sqrt{1-a^2}e^{-i\theta} e^{i\frac{\pi}{2}k} & 0 & 0 & -a e^{i\frac{\pi}{2}k} \\ 0 & 0 & e^{i\frac{\pi}{2}k} & 0 \end{pmatrix}, \quad \mathbf{D} = \frac{\pi}{2} \mathbf{I},$$

in particular implying $\text{Tr}\,(\mathbb{S}(k)\mathbf{D}) = 0$. Elementary calculations give

$$\mathbb{S}^2(k) = e^{i\pi k} \begin{pmatrix} a & 0 & 0 & \sqrt{1-a^2}e^{i\theta} \\ 0 & a & \sqrt{1-a^2}e^{i\theta} & 0 \\ 0 & \sqrt{1-a^2}e^{-i\theta} & -a & 0 \\ \sqrt{1-a^2}e^{-i\theta} & 0 & 0 & -a \end{pmatrix},$$

and $\text{Tr}\,(\mathbb{S}^2(k)\mathbf{D}) = 0$, whereas

$$\mathbb{S}^4(k) = e^{2i\pi k} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

and $\text{Tr}\,(\mathbb{S}^4(k)\mathbf{D}) = 2\pi e^{2i\pi k}$.

We are ready to calculate the sum (8.47) over all closed paths. Only paths of discrete length $4n$, $n \in \mathbb{N}$, give a non-zero contribution $2\pi e^{2i\pi kn}$, which is independent of $a$ and $\theta$. Hence the operators $L(a, \theta)$ are isospectral.
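The whole computation can be reproduced numerically. The sketch below builds $\mathbb{S}(k) = \mathbf{S}\,\mathbf{S}_{\mathbf{e}}(k)$ for an assumed sample value of $k$ and several choices of $(a, \theta)$, and checks that $\text{Tr}\,(\mathbb{S}^m(k)\mathbf{D})$ vanishes for $m = 1, 2$ while $\text{Tr}\,(\mathbb{S}^4(k)\mathbf{D}) = 2\pi e^{2i\pi k}$ independently of $a$ and $\theta$ (all helper names are illustrative):

```python
import cmath
import math

def matmul(A, B):
    # product of two 4x4 complex matrices given as nested lists
    return [[sum(A[i][l] * B[l][j] for l in range(4)) for j in range(4)]
            for i in range(4)]

def bond_matrix(k, a, theta):
    # S(k) = S * S_e(k) for the graph Gamma_2, both edges of length pi/2
    b = math.sqrt(1 - a * a) * cmath.exp(1j * theta)
    S = [[1, 0, 0, 0],
         [0, a, b, 0],
         [0, b.conjugate(), -a, 0],
         [0, 0, 0, 1]]
    e = cmath.exp(1j * math.pi * k / 2)
    Se = [[0, e, 0, 0],
          [e, 0, 0, 0],
          [0, 0, 0, e],
          [0, 0, e, 0]]
    return matmul(S, Se)

def trace(A):
    return sum(A[i][i] for i in range(4))

k = 0.83                                   # arbitrary sample value of k
for a, theta in [(0.0, 0.0), (0.5, 1.2), (-0.9, 2.5)]:
    M = bond_matrix(k, a, theta)
    M2 = matmul(M, M)
    M4 = matmul(M2, M2)
    # D = (pi/2) I, hence Tr(S^m(k) D) = (pi/2) Tr S^m(k)
    assert abs(trace(M)) < 1e-12 and abs(trace(M2)) < 1e-12
    assert abs(math.pi / 2 * trace(M4)
               - 2 * math.pi * cmath.exp(2j * math.pi * k)) < 1e-10
```

The same traces are obtained for every $(a, \theta)$, which is the numerical face of the isospectrality.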

We shall return to this graph in Example 14.14.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 9 Trace Formula and Inverse Problems**

## **9.1 Euler Characteristic for Standard Laplacians**

Our aim in this section is to establish an explicit formula allowing one to calculate the Euler characteristic of the metric graph directly from the spectrum of the standard Laplace operator. Formula (8.21) shows that the Euler characteristic is determined by the spectrum alone: the left hand side is defined by *kn*, whereas the right hand side is a sum of *χ*, a delta function at the origin and a series containing delta functions with the supports at the lengths of periodic paths. It is clear that the lengths of periodic paths cannot be small—each path contains at least one edge, and therefore the constant term *χ* is uniquely determined.

The main idea behind the explicit formula is to apply (8.21) to a test function supported on the interval $[0, \min_{n=1,\dots,N} \{\ell_n\}]$. Since we do not know the length of the shortest edge *a priori*, we are going to consider a sequence of test functions having smaller and smaller support.

**Theorem 9.1** *Let $\Gamma$ be a compact metric graph and $L^{\text{st}}(\Gamma)$ the standard Laplace operator. Then the Euler characteristic $\chi(\Gamma)$ is uniquely determined by the spectrum $\{\lambda_n\}$ of the Laplacian $L^{\text{st}}(\Gamma)$:*

$$\begin{split} \chi &= 2m_s(0) + 2 \lim_{t \to \infty} \sum_{k_n \neq 0} \cos \frac{k_n}{t} \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 \\ &= 2m_s(0) - 2 \lim_{t \to \infty} \sum_{k_n \neq 0} \frac{1 - 2\cos k_n/t + \cos 2k_n/t}{(k_n/t)^2}, \end{split} \tag{9.1}$$

*where kn* <sup>=</sup> <sup>√</sup>*λn <sup>&</sup>gt;* <sup>0</sup>*.*


**Fig. 9.1** The test function $\varphi(\ell)$

*Proof* Our proof is based on the trace formula (8.20). The idea is very simple: find a test function $\varphi$ such that $\hat{\mu}[\varphi] = \chi$. Then $\mu[\hat{\varphi}]$ is also equal to $\chi$ and provides the desired formula connecting $\chi$ and the spectrum.

Consider the function *ϕ* defined by (Fig. 9.1)

$$\varphi(\ell) = \begin{cases} \ell, & 0 \le \ell \le 1; \\ 2 - \ell, & 1 \le \ell \le 2; \\ 0, & \text{otherwise}. \end{cases} \tag{9.2}$$

This function and any scaled function $\varphi_t(x) = t\varphi(tx)$ are normalised:

$$\int_{-\infty}^{+\infty} \varphi_t(x)\, dx = 1.$$

To get the Euler characteristic we need to scale the test function so that

$$\min_j \{\ell_j\} > 2/t$$

holds. When applying *μ*ˆ to the test function *ϕt(x)* the contribution from all delta functions is zero and we have

$$\chi(\Gamma) = \lim_{t \to \infty} \hat{\mu}[\varphi_t], \tag{9.3}$$

where we used the limit as the length of the shortest edge may be unknown.

To calculate *μ*[ ˆ*ϕ*] we need the Fourier transform of the test function

$$\hat{\varphi}_t(k) = e^{ik/t} \left( \frac{\sin k/2t}{k/2t} \right)^2.$$
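This expression for $\hat{\varphi}_t$ can be checked against a direct numerical integration of $\varphi_t(x) e^{ikx}$; a minimal sketch with assumed sample values of $t$ and $k$:

```python
import cmath
import math

def phi(x):
    # the triangle test function (9.2)
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

t, k = 3.0, 1.7                      # assumed sample scaling and frequency
N = 20000
h = (2.0 / t) / N                    # phi_t is supported on [0, 2/t]
# Riemann sum for the Fourier transform of phi_t(x) = t * phi(t*x)
num = sum(t * phi(t * j * h) * cmath.exp(1j * k * j * h)
          for j in range(N + 1)) * h
closed = cmath.exp(1j * k / t) * (math.sin(k / (2 * t)) / (k / (2 * t))) ** 2
assert abs(num - closed) < 1e-6
```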

**Fig. 9.2** Single interval graph *(*1*.*1*)* of length *π*

Applying the spectral measure *μ* to the Schwartz test function *ϕ*ˆ*<sup>t</sup>* we obtain

$$\begin{aligned} \chi(\Gamma) &= \lim_{t \to +\infty} \hat{\mu}[\varphi_t] = \lim_{t \to +\infty} \mu[\hat{\varphi}_t] \\ &= 2m_s(0) + 2 \lim_{t \to +\infty} \sum_{k_n \neq 0} \cos \frac{k_n}{t} \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2. \end{aligned}$$

The second formula (9.1) follows from elementary trigonometry.

Weyl's asymptotic law (4.15) implies that the $k_n$ grow linearly in $n$, and therefore the series in (9.1) is absolutely convergent; but the limit and summation signs cannot be exchanged. In fact there is no need to take the limit in formula (9.1): it is enough to consider sufficiently large values of $t$, since the series is equal to a constant for all $t > 2/\min_j \{\ell_j\}$. This is another interesting feature of the derived formula.

In Chap. 24 we are going to provide an alternative proof of the formula for Euler characteristic for equilateral graphs [333]. That proof does not use the trace formula and is based on the fact that the spectrum of the standard Laplacian on a metric graph with integer lengths is periodic in the *k*-scale.

It might be interesting to understand relations of the derived formula to indices of differential operators following [230].

We would like to present a few explicit examples illustrating formulas (9.1).

#### **(1) Single Interval**

Let the graph coincide with the interval $[0, \pi]$ (with separated endpoints) (Fig. 9.2). The Euler characteristic is $\chi = 1$. The spectrum of $L^{\text{st}}((1.1))$ is $\sigma(L) = \{n^2,\ n = 0, 1, 2, \dots\}$. Substituting $k_n = n$, $n = 0, 1, 2, \dots$, into formula (9.1) we get

$$\chi = 2 - 2 \lim_{t \to \infty} \sum_{n=1}^{\infty} \frac{1 - 2\cos n/t + \cos 2n/t}{(n/t)^2} = 2 - 1 = 1, \tag{9.4}$$

where we used

$$\sum_{n=1}^{\infty} \frac{1 - 2\cos n/t + \cos 2n/t}{(n/t)^2} = \frac{1}{2}. \tag{9.5}$$

This formula can be proven using the sum ((1.443.3) from [245])

$$\sum_{m=1}^{\infty} \frac{\cos mx}{m^2} = \frac{\pi^2}{6} - \frac{\pi x}{2} + \frac{x^2}{4}, \quad x \in [0, 2\pi],$$

**Fig. 9.3** Simple circle graph *(*1*.*2*)* of length *π*

leading to

$$\begin{aligned} \sum_{n=1}^{\infty} \frac{1 - 2\cos n/t + \cos 2n/t}{(n/t)^2} &= t^2 \left( \sum_{n=1}^{\infty} \frac{1}{n^2} - 2\sum_{n=1}^{\infty} \frac{\cos \frac{n}{t}}{n^2} + \sum_{n=1}^{\infty} \frac{\cos \frac{2n}{t}}{n^2} \right) \\ &= t^2 \left\{ \frac{\pi^2}{6} - 2\left( \frac{\pi^2}{6} - \frac{\pi}{2}\frac{1}{t} + \frac{1}{4}\frac{1}{t^2} \right) + \frac{\pi^2}{6} - \frac{\pi}{2}\frac{2}{t} + \frac{1}{4}\frac{4}{t^2} \right\} = \frac{1}{2}. \end{aligned}$$
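The value $1/2$ of the series can also be confirmed numerically for a sample value of $t$ (any $t > 2/\pi$ works here; the truncation length is chosen so that the tail is far below the tolerance):

```python
import math

t = 5.0                        # assumed sample value of the scaling parameter
s = sum((1 - 2 * math.cos(n / t) + math.cos(2 * n / t)) / (n / t) ** 2
        for n in range(1, 200000))
assert abs(s - 0.5) < 1e-2
```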

#### **(2) Simple Circle**

Let the graph be the circle (1.2) of length $\pi$, i.e. the interval $[0, \pi]$ with the endpoints identified (Fig. 9.3). The Euler characteristic is $\chi = 0$. The spectrum of $L^{\text{st}}((1.2))$ is $\sigma(L) = \{(2n)^2,\ n = 0, 1, 1, 2, 2, \dots\}$. Substituting $k_n = 2n$ into formula (9.1) gives

$$\chi = 2 - 4 \lim_{t \to \infty} \sum_{n=1}^{\infty} \frac{1 - 2\cos 2n/t + \cos 4n/t}{(2n/t)^2} = 2 - 2 = 0, \tag{9.6}$$

where we again used formula (9.5).

#### **(3) Equilateral Star Graph**

Let $\Gamma$ be the star graph formed by $m$ equal edges of length $\pi$ joined at one common endpoint (Fig. 9.4). The Euler characteristic is $\chi = 1$. The spectrum consists of the simple eigenvalues $n^2$, $n = 0, 1, 2, \dots$, and the eigenvalues $(1/2 + n)^2$, $n = 0, 1, 2, \dots$, of multiplicity $m - 1$. Formula (9.1) then gives

$$\begin{split} \chi &= 2 - 2 \lim_{t \to \infty} \sum_{n=1}^{\infty} \frac{1 - 2\cos n/t + \cos 2n/t}{(n/t)^2} \\ &\quad - 2(m-1) \lim_{t \to \infty} \sum_{n=0}^{\infty} \frac{1 - 2\cos((n+1/2)/t) + \cos(2(n+1/2)/t)}{((n+1/2)/t)^2} \\ &= 2 - 1 - 0 = 1, \end{split} \tag{9.7}$$

where we used formulas (9.5) and (24.41) (see Chap. 24).
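Both series entering the computation can be evaluated numerically. The sketch below, with assumed sample values of $t$ and of the number of edges $m$, recovers $\chi = 1$ for the star graph:

```python
import math

def series(shift, t, N=200000):
    # sum of (1 - 2cos(k_n/t) + cos(2k_n/t)) / (k_n/t)^2 over k_n = n + shift
    total = 0.0
    for n in range(N):
        kn = n + shift
        if kn == 0:
            continue                     # k_n = 0 is excluded from the sum
        a = kn / t
        total += (1 - 2 * math.cos(a) + math.cos(2 * a)) / (a * a)
    return total

t, m = 5.0, 4                  # assumed scaling parameter and number of edges
# the integer series tends to 1/2 and the half-integer series to 0
chi = 2 - 2 * series(0, t) - 2 * (m - 1) * series(0.5, t)
assert abs(chi - 1) < 2e-2
```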

**Fig. 9.4** Equilateral star graph

Formula (9.1) requires knowledge of all eigenvalues of the standard Laplacian, which is impossible in practice. In order to reconstruct the Euler characteristic from a finite number of eigenvalues one may use the following observations:


Then for a sufficiently large $t$ choose any $K$ so that the remainder of the series is less than $1/2$. This idea has already been implemented to determine experimentally the Euler characteristic of microwave networks without inspecting the network visually [364, 365]. Our joint work seems to be a good example of collaboration between mathematicians and applied scientists, since it turned out that the original formula (9.1) requires knowledge of too many eigenvalues, not detectable in practice. Formula (9.1) was modified by using a certain continuously differentiable test function instead of the $\varphi$ given by (9.2). The corresponding formula has better convergence and therefore requires a smaller number of eigenvalues: it is enough to know about 30 of the lowest eigenvalues to determine the Euler characteristic of simple graphs. A mathematical description of the method can be found in [367]. It does not happen often that a mathematical formula can be checked through an experiment.

**Problem 35** Check calculations leading to formulas (9.4,9.6,9.7). What is the smallest value of *t* that can be taken to get precise value of *χ*?

**Problem 36** Let be a connected graph without loops. How can one determine the length of the shortest edge from the spectrum of the standard Laplacian?

## **9.2 Euler Characteristic for Graphs with Dirichlet Vertices**

Assume that the Laplacian on a graph is determined by standard and Dirichlet conditions at the vertices. It is enough to assume that Dirichlet conditions are introduced at degree one vertices only, hence let us denote by *MD* the number of Dirichlet vertices. We are interested in recovering the Euler characteristic of the graph from its spectrum generalising formula (9.1). It is clear that the number of Dirichlet vertices *MD* should be involved since we have examples of isospectral graphs having different Euler characteristic (see Fig. 2.10), hence formula (9.1) has to be modified. The main reason is that formulas for the spectral and algebraic multiplicities need to be revised. Let us prove a counterpart of Theorem 8.2 assuming for simplicity that the graph is connected.

**Theorem 9.2** *Let be a finite compact connected metric graph with Euler characteristic χ , and let L*st*,*D*() be the Laplace operator defined by MD* <sup>≥</sup> <sup>1</sup> *Dirichlet conditions at some degree one vertices and standard conditions at all other vertices. Then the spectral and algebraic multiplicities of λ* = 0 *are* 

$$m\_s(0) = 0,\tag{9.8}$$

*i.e. λ* = 0 *is not an eigenvalue;* 

$$m\_a(0) = -\chi + M\_D.\tag{9.9}$$

*Proof* To prove that *λ* = 0 is not an eigenvalue for *MD* ≥ 1 assume that *ψ* is the corresponding eigenfunction, then it holds

$$0 = \langle \psi, L^{\text{st},D} \psi \rangle = \int_{\Gamma} \left| \psi'(x) \right|^2 dx,$$

implying that $\psi$ is a constant function on each edge. Taking into account the standard conditions<sup>1</sup> and the connectivity of the graph we conclude that $\psi$ is constant on the whole of $\Gamma$. If $M_D \geq 1$, then this function is identically zero.

Let us turn to the algebraic multiplicity. We are going to modify the proof of Theorem 8.2. We again introduce the vectors of amplitudes *A, B*. The relation given by *S<sup>n</sup>* <sup>e</sup> *(*0*)* is the same and formula (8.13) is preserved

$$a_{2n-1} = b_{2n} \quad \text{and} \quad a_{2n} = b_{2n-1}, \quad n = 1, 2, \dots, N,$$


<sup>1</sup> It suffices to take into account continuity only.

while formula (8.14)

$$\begin{cases} a_l + b_l = a_j + b_j, & x_l, x_j \in V^m, \\ \sum_{x_j \in V^m} (a_j - b_j) = 0, \end{cases}$$

holds for standard vertices only and has to be modified for Dirichlet vertices as

$$a\_l + b\_l = 0.$$

Eliminating coefficients *bi* using the first relation we get the new system of linear equations

$$\begin{cases} a_{2l-1} + a_{2l} = 0, & l = 1, 2, \dots, N, \\ \sum_{l:\, x_l \in V^m} (a_l - a_{l-(-1)^l}) = 0, & V^m \text{ is a standard vertex,} \end{cases} \tag{9.10}$$

where we have taken into account that $M_D \geq 1$ implies that at least one, and hence all, of the sums $a_{2l-1} + a_{2l}$ are zero.

With every edge $E_n$ we associate the flux $f_n = a_{2n-1} - a_{2n}$ as before. Then the conditions at standard vertices can be interpreted as requiring that the sum of the fluxes is zero there. Dirichlet vertices impose no conditions on the fluxes. Let us construct basic fluxes:


Let $\mathcal{F}$ be any flux supported by $\Gamma$ and satisfying the conservation conditions (9.10) above. We denote by $E_n$, $n = 1, 2, \dots, \beta_1$, the edges on the independent cycles whose deletion turns $\Gamma$ into a tree $\mathbf{T}$. For each Dirichlet vertex $V^m$, $m = 1, 2, \dots, M_D$, let us denote by $E_{\beta_1+m}$ the corresponding degree one edge. Then the flux

$$\mathcal{F} - \sum_{n=1}^{\beta_1} \mathcal{F}(E_n)\, \mathcal{F}^n - \sum_{n=\beta_1+1}^{\beta_1+M_D-1} \mathcal{F}(E_n)\, \mathcal{F}^{n-\beta_1,\, M_D}$$

is supported by the tree $\mathbf{T} \setminus \{E_n\}_{n=\beta_1+1}^{\beta_1+M_D-1}$ with one Dirichlet vertex. Note that in the last sum we use fluxes between the Dirichlet vertices $V^1, \dots, V^{M_D-1}$ and the Dirichlet vertex $V^{M_D}$. As before, any such flux is identically zero. We have proven that the number of independent fluxes is $\beta_1 + M_D - 1$, which completes the proof. It is now straightforward to generalise Theorem 9.1 allowing Dirichlet vertices:

**Theorem 9.3** *Let $\Gamma$ be a compact connected metric graph and $L^{\text{st},D}(\Gamma)$ the Laplace operator defined by $M_D \geq 1$ Dirichlet conditions at certain degree one vertices and standard conditions at all other vertices. Then the Euler characteristic $\chi(\Gamma)$ is uniquely determined by the spectrum $\{\lambda_n\}$ of the Laplace operator $L^{\text{st},D}(\Gamma)$:*

$$\chi = M\_D + 2 \lim\_{t \to \infty} \sum\_{k\_n} \cos k\_n / t \left( \frac{\sin k\_n / 2t}{k\_n / 2t} \right)^2,\tag{9.11}$$

*where $k_n = \sqrt{\lambda_n} > 0$.*

*Proof* Repeating the proof of Theorem 9.1 but using trace formula (8.45) for scaling-invariant vertex conditions we obtain

$$\underbrace{2\,\mathrm{m}_{s}(0)}_{=0} - \underbrace{\mathrm{m}_{a}(0)}_{=M_D-\chi} = 2\lim_{t\to\infty} \sum_{k_{n}} \cos k_n/t \left(\frac{\sin k_n/2t}{k_n/2t}\right)^{2},$$

leading to (9.11).

The above formula shows that two isospectral graphs with Dirichlet and standard vertices have a common value of

$$\chi - M_D.$$

We check this in the case of the isospectral graphs presented in Fig. 2.10. Their Euler characteristics and numbers of Dirichlet vertices $(\chi, M_D)$ are $(1, 2)$ and $(0, 1)$ respectively:

$$1 - 2 = 0 - 1.$$

**Problem 37** Is it possible to find examples of isospectral graphs with $(\chi, M_D)$ equal to $(1, 1)$ and $(0, 0)$?

**Problem 38** Formulate and prove analogues of Theorems 9.2 and 9.3 for not necessarily connected graphs.

**Problem 39** Consider the case of arbitrary scaling-invariant conditions at the vertices. Study possible values of the spectral and algebraic multiplicities. (Paper [347] might help.)

Formula (9.11) can be proven directly using symmetry arguments. Let us double the graph by adding to $\Gamma$ another copy of the same graph and gluing the two copies by joining pairwise the former Dirichlet vertices $V^i$, $i = 1, 2, \dots, M_D$, introducing there standard conditions. Let us denote the metric graph obtained in this way by $\Gamma_2$. This graph is symmetric with respect to the exchange of the respective points on the two copies of $\Gamma$. Hence all eigenfunctions and the spectrum can be divided into two classes:

- symmetric eigenfunctions, whose restrictions to $\Gamma$ satisfy standard conditions at the vertices $V^i$ and give the spectrum of $L^{\mathrm{st}}(\Gamma)$;
- antisymmetric eigenfunctions, which vanish at the vertices $V^i$ and give the spectrum of $L^{\mathrm{st,D}}(\Gamma)$.
Let $\chi$ be the Euler characteristic of $\Gamma$; then $\Gamma_2$ has $2\beta_1 + M_D - 1$ independent cycles and its Euler characteristic is

$$2\chi - M\_D.$$

Applying formula (9.1) to the standard Laplacians on $\Gamma$ and $\Gamma_2$ we get

$$\begin{split} 2\chi - M_{D} &= 2 \lim_{t \to \infty} \sum_{k_n^2 \in \Sigma(L^{\mathrm{st}}(\Gamma_2))} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 \\ &= 2 \lim_{t \to \infty} \sum_{k_n^2 \in \Sigma(L^{\mathrm{st}}(\Gamma))} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 \\ &\quad + 2 \lim_{t \to \infty} \sum_{k_n^2 \in \Sigma(L^{\mathrm{st,D}}(\Gamma))} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2, \\ \chi &= 2 \lim_{t \to \infty} \sum_{k_n^2 \in \Sigma(L^{\mathrm{st}}(\Gamma))} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2, \end{split}$$

where $\Sigma(L)$ denotes the spectrum of $L$. Elementary calculations imply (9.11).

## **9.3 Spectral Asymptotics and Schrödinger Operators**

## *9.3.1 Euler Characteristic and Spectral Asymptotics*

In this section we are going to show that the Euler characteristic is determined entirely by the asymptotics of the spectrum. The limit of each term in the series (9.1) does not depend on $k_n$:

$$\lim_{t \to \infty} \frac{1 - 2\cos k_n/t + \cos 2k_n/t}{(k_n/t)^2} = -1.$$

Taking this into account it is clear that changing any finite number of eigenvalues does not affect the limit (9.1). We are going to prove that the same is true even if the number of perturbed eigenvalues is infinite, but the perturbation is relatively small.

Let us denote by $(k_n^0)^2$ the eigenvalues of the Laplacian and by $k_n^2$ their perturbations. Then the following lemma shows that if the perturbation is small in the sense that $k_n^0$ and $k_n$ possess the same asymptotics, then substituting the perturbed sequence into formula (9.1) one obtains the correct value of $\chi$. This result can be used in numerical computations, but it has another important implication: as the perturbed sequence one may take the spectrum of the Schrödinger operator on the same metric graph, provided the potential is sufficiently regular. This implication will be discussed in the following subsection.

**Lemma 9.4** *Let $k_n$ and $k_n^0$ be two real sequences satisfying the following conditions*

$$k\_n - k\_n^0 = \mathcal{O}\left(\frac{1}{n}\right),\tag{9.12}$$

$$k\_n = \frac{\pi}{\mathcal{L}} n + \mathcal{O}(1),\tag{9.13}$$

*(Weyl's asymptotics), then the following two limits coincide* 

$$\lim_{t \to \infty} \sum_{n=1}^{\infty} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 = \lim_{t \to \infty} \sum_{n=1}^{\infty} \cos k_n^0/t \left( \frac{\sin k_n^0/2t}{k_n^0/2t} \right)^2. \tag{9.14}$$

*Proof* Without loss of generality we assume that $\mathcal{L} = \pi$. It will be convenient to write the estimates (9.12) and (9.13) in the form

$$|k\_n - k\_n^0| \le A \frac{1}{n}, \quad |k\_n - n| \le B, \quad |k\_n^0 - n| \le B, \ n = 1, 2, \dots,\tag{9.15}$$

with certain positive constants *A* and *B.* In addition we shall use the following notations

$$a\_n(t) := \cos k\_n / t \left(\frac{\sin k\_n / 2t}{k\_n / 2t}\right)^2, \ a\_n^0(t) := \cos k\_n^0 / t \left(\frac{\sin k\_n^0 / 2t}{k\_n^0 / 2t}\right)^2.$$

To prove the Lemma we are going to establish two estimates which will be suitable for terms with small and large indices respectively:

#### **Estimate 1 (Suitable for Small Values of** *n***)**

$$|a\_n(t) - a\_n^0(t)| \le C \frac{\left(n + B\right)^2}{t^2},\tag{9.16}$$

*where $C$ is a certain positive constant.*

Consider the function

$$f(\alpha) = \begin{cases} \cos 2\alpha \left(\frac{\sin \alpha}{\alpha}\right)^2, & \alpha \neq 0, \\ 1, & \alpha = 0. \end{cases}$$

The derivatives of *f* are

$$\begin{aligned} f'(\alpha) &= -2\sin 2\alpha \left(\frac{\sin \alpha}{\alpha}\right)^2 + 2\cos 2\alpha\, \frac{\sin \alpha}{\alpha}\, \frac{\alpha \cos \alpha - \sin \alpha}{\alpha^2}, \\ f''(\alpha) &= -4\cos 2\alpha \left(\frac{\sin \alpha}{\alpha}\right)^2 - 8\sin 2\alpha\, \frac{\sin \alpha}{\alpha}\, \frac{\alpha \cos \alpha - \sin \alpha}{\alpha^2} \\ &\quad + 2\cos 2\alpha \left(\frac{\alpha \cos \alpha - \sin \alpha}{\alpha^2}\right)^2 \\ &\quad + 2\cos 2\alpha\, \frac{\sin \alpha}{\alpha}\, \frac{-\alpha^2 \sin \alpha - 2\alpha \cos \alpha + 2\sin \alpha}{\alpha^3}, \end{aligned}$$

and we see that $f'(0) = 0$ and $f''(\alpha)$ is uniformly bounded. Hence Taylor's formula gives

$$f(\alpha) - f(0) - f'(0)\alpha = f''(\xi)\frac{\alpha^2}{2}$$

and therefore

$$|f(\alpha) - 1| \le \frac{1}{2} \max |f''(\alpha)| \alpha^2.$$

This implies that

$$|a\_n(t) - 1| \le \frac{1}{2} \max |f''(\alpha)| \frac{(n+B)^2}{4t^2}$$

and the similar estimate (9.16) for the difference $|a_n(t) - a_n^0(t)|$ with $C = \frac{1}{4} \max |f''(\alpha)|$.

#### **Estimate 2 (Suitable for Large Values of** *n***)**

$$\left| a\_n(t) - a\_n^0(t) \right| \le D \frac{t}{(n-B)^3}, \ n > B,\tag{9.17}$$

*where $D$ is a certain positive constant.*

To prove the estimate we use that the function $\alpha^2 f'(\alpha)$ is uniformly bounded. Using the first mean value theorem we get

$$a\_n(t) - a\_n^0(t) = f(k\_n/2t) - f(k\_n^0/2t) = f'(\xi\_n)(k\_n/2t - k\_n^0/2t),$$

where $\xi_n$ satisfies the same estimate as $k_n$ and $k_n^0$ (see the second and third estimates in (9.15)):

$$|\xi\_n - n| \le B.$$

For *n > B,* it follows that

$$|a\_n(t) - a\_n^0(t)| \le \max |\alpha^2 f'(\alpha)| \frac{1}{(\frac{n-B}{2t})^2} A \frac{1}{2nt} \le D \frac{t}{(n-B)^3}$$

with $D = 2A \max |\alpha^2 f'(\alpha)|$, which is exactly estimate (9.17).

To prove (9.14) we need to show that the following limit equals zero

$$\lim_{t \to \infty} \sum_{n=1}^{\infty} \left| \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 - \cos k_n^0/t \left( \frac{\sin k_n^0/2t}{k_n^0/2t} \right)^2 \right|$$

$$\equiv \lim\_{t \to \infty} \sum\_{n=1}^{\infty} |a\_n(t) - a\_n^0(t)|. \tag{9.18}$$

Let us split the infinite series into the finite sum of the first *K* elements and the remaining infinite series as

$$\sum_{n=1}^{\infty} = \sum_{n=1}^{K} + \sum_{n=K+1}^{\infty}.$$

To prove that the limit is zero it is enough to show that for any $\epsilon > 0$ there exists $t_0 = t_0(\epsilon)$, such that for any $t > t_0(\epsilon)$ the number $K = K(\epsilon, t)$ can be chosen in such a way that both the finite sum and the series are less than $\epsilon/2$.

We estimate the summands using (9.16) and (9.17) as

$$\begin{aligned} \sum\_{n=1}^{K} |a\_n(t) - a\_n^0(t)| &\leq \sum\_{n=1}^{K} C \frac{(K+B)^2}{t^2} \leq C \frac{(K+B)^3}{t^2},\\ \sum\_{n=K+1}^{\infty} |a\_n(t) - a\_n^0(t)| &\leq \sum\_{n=K+1}^{\infty} D \frac{t}{(n-B)^3} \leq \frac{D}{2} \frac{t}{(K-B)^2}.\end{aligned}$$

Each of the sums is less than $\epsilon/2$ if the following two inequalities are satisfied:

$$K(\epsilon, t) \le \left(\frac{\epsilon t^2}{2C}\right)^{1/3} - B \quad \text{and} \quad K(\epsilon, t) \ge \sqrt{\frac{Dt}{\epsilon}} + B.$$

Hence the series in (9.18) is less than $\epsilon$ if

$$
\sqrt{\frac{Dt}{\epsilon}} + B \le \left(\frac{\epsilon t^2}{2C}\right)^{1/3} - B.
$$

For any $\epsilon > 0$ there exists $t_0$, such that for any $t > t_0$ the last inequality is satisfied and it is possible to choose an integer $K(\epsilon, t)$, such that both the finite and infinite sums are less than $\epsilon/2$. For such $t$ the infinite series in (9.18) is less than $\epsilon$. It follows that the limit in (9.18) is zero.

# *9.3.2 Schrödinger Operators and Euler Characteristic of Graphs*

Let $q$ be any essentially bounded real potential and let $L_q^{\mathrm{st}}(\Gamma)$ be the corresponding standard Schrödinger operator; then the difference between the eigenvalues is uniformly bounded:

$$k_n^2 - (k_n^0)^2 = \mathcal{O}(1), \tag{9.19}$$

as the Schrödinger operator is a bounded perturbation of the Laplacian. The same estimate will be proven in Chap. 11 assuming that the potential is just absolutely integrable (see (11.32)); the estimate (9.19) for potentials that are not essentially bounded will be justified there. Therefore in the following theorem we assume only that the potential is from $L_1(\Gamma)$.

We are able to prove now that the formula for Euler characteristic (9.1) gives the correct result, provided the spectrum of the Laplacian is substituted with the spectrum of the Schrödinger operator.

**Theorem 9.5** *Let $\Gamma$ be a finite compact metric graph and let $q$ be a real-valued absolutely integrable function on $\Gamma$. Let $L^{\mathrm{st}}(\Gamma)$ and $L_q^{\mathrm{st}}(\Gamma)$ be the standard Laplace and Schrödinger operators. Then the Euler characteristic $\chi(\Gamma)$ of the graph $\Gamma$ is uniquely determined by the spectrum $\{\lambda_n(L_q^{\mathrm{st}})\}$ of the operator $L_q^{\mathrm{st}}$ and can be calculated using the limit*

$$\chi(\Gamma) = 2 \lim\_{t \to \infty} \sum\_{n=0}^{\infty} \cos \sqrt{\lambda\_n (L\_q^{\text{st}})} / t \left( \frac{\sin \sqrt{\lambda\_n (L\_q^{\text{st}})} / 2t}{\sqrt{\lambda\_n (L\_q^{\text{st}})} / 2t} \right)^2,\tag{9.20}$$

*where we use the following natural convention* 

$$
\lambda_m = 0 \Rightarrow \frac{\sin \sqrt{\lambda_m (L_q^{\rm st})}/2t}{\sqrt{\lambda_m (L_q^{\rm st})}/2t} = 1. \tag{9.21}
$$

*Proof* The estimate (9.19) together with the Weyl asymptotics (9.13) imply that

$$k\_n - k\_n^0 = \mathcal{O}\left(\frac{1}{n}\right).$$

and Lemma 9.4 can be applied. It follows that

$$\lim_{t \to \infty} \sum_{n=1}^{\infty} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 = \lim_{t \to \infty} \sum_{n=1}^{\infty} \cos k_n^0/t \left( \frac{\sin k_n^0/2t}{k_n^0/2t} \right)^2 = \chi,$$

where we used (9.1) on the last step. The introduced convention allowed us to remove *ms(*0*)* from the formula for Euler characteristic of the Laplacian.

Note that the limit cannot be substituted by evaluation at a fixed sufficiently large $t$ (say $t \geq 2/\min_j \ell_j$), as can be done for Laplacians.

Theorem 9.5 together with Weyl's asymptotics (4.25) imply that two Schrödinger operators on graphs may have the same spectrum only if the underlying graphs have the same total length and Euler characteristic, in other words, if the graphs have the same size and complexity.

**Uniqueness Theorem 9.6** *Let the metric graphs $\Gamma_1$ and $\Gamma_2$ be finite and compact and let the corresponding real potentials $q_1$ and $q_2$ be absolutely integrable. Then the corresponding standard Schrödinger operators $L_{q_j}(\Gamma_j)$, $j = 1, 2$, have close spectra*

$$
\lambda\_n \left( L\_{q\_1}(\Gamma\_1) \right) - \lambda\_n \left( L\_{q\_2}(\Gamma\_2) \right) = \mathcal{O}(1) \tag{9.22}
$$

*only if the metric graphs have the same total length and Euler characteristic.*


*Proof* Condition (9.22) together with the Weyl asymptotics (4.25) imply that the metric graphs $\Gamma_1$ and $\Gamma_2$ have the same total length.

To show that the graphs have the same Euler characteristic one repeats the arguments used in the proof of Theorem 9.5.

## *9.3.3 General Vertex Conditions: A Counterexample*

The obtained results can be extended to the case of the most general vertex conditions. This problem appears to be more sophisticated than might be expected. The main reason is that the vertex scattering matrix in general is not energy independent but tends to a certain limiting matrix $\mathbf{S}_{\mathbf{v}}(\infty)$. The limiting matrix in its turn corresponds to certain symmetric vertex conditions, but these conditions may be incompatible with the connectivity of the original graph $\Gamma$. In other words, these new vertex conditions may not connect **all** edges joined at a vertex, despite the fact that the original conditions (corresponding to the energy-dependent scattering matrix) do connect all these edges together.

Let us study the following elementary example. Consider the interval $[-\pi, \pi]$ turned into a circle by joining together the endpoints $-\pi$ and $\pi$ with the help of the following vertex conditions

$$\begin{cases} \psi(-\pi) = -\partial\_n \psi(+\pi), \\ \psi(\pi) = -\partial\_n \psi(-\pi); \end{cases} \tag{9.23}$$

which are obviously properly connecting, i.e. connect together the boundary values of the functions from both endpoints. The corresponding vertex scattering matrix

$$\mathbf{S}\_{\mathbf{V}}(k) = -\frac{I - ikB}{I + ikB}$$

with $B = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$ is irreducible, but it tends to the unit matrix $\mathbf{S}_{\mathbf{v}}(\infty) = \mathbf{I}$ as $k \to \infty$. The vertex conditions corresponding to $\mathbf{S}_{\mathbf{v}}(k) = \mathbf{I}$ are just Neumann boundary conditions $\partial_n \psi(+\pi) = 0 = \partial_n \psi(-\pi)$, which are obviously reducible: the two endpoints are not connected to each other. Therefore it is natural to call the vertex conditions (9.23) *not asymptotically properly connecting*. If the vertex conditions are not asymptotically properly connecting, then the asymptotics of the spectrum is determined by the Laplacian not on the original graph $\Gamma$, but on a certain new graph $\Gamma_\infty$ obtained from $\Gamma$ by chopping some of the vertices. In other words, the spectral asymptotics is determined by a different topology.

We illustrate this idea by calculating the spectra of the operators appearing in the example under consideration. Let us denote by $\tilde{L}$ the second derivative operator $-\frac{d^2}{dx^2}$ defined on the functions from $W_2^2[-\pi, \pi]$ satisfying vertex conditions (9.23). These vertex conditions can be written as follows using derivatives with respect to the variable $x \in [-\pi, \pi]$:

$$\begin{cases} \psi(-\pi) = \psi'(+\pi), \\ \psi'(-\pi) = -\psi(\pi). \end{cases} \tag{9.24}$$

It is easy to see that the vertex conditions are invariant under the change of coordinate $x \to -x$, and hence the operator $\tilde{L}$ commutes with the symmetry operator $\mathcal{P}\psi(x) = \psi(-x)$. Therefore all eigenfunctions of $\tilde{L}$ are either even or odd. The dispersion equations for even and odd eigenfunctions can be obtained by substituting the *Ansätze* $\psi_s(x) = \cos kx$ and $\psi_a(x) = \sin kx$ into the vertex conditions:

$$\tan k^s \pi = -\frac{1}{k^s};\tag{9.25}$$

$$
\cot k^a \pi = -\frac{1}{k^a}.\tag{9.26}
$$

The eigenvalues satisfy the following asymptotic conditions

$$k\_n^s = n + \mathcal{O}(\frac{1}{n}), \quad k\_n^a = \frac{2n+1}{2} + \mathcal{O}(\frac{1}{n}), \quad n = 0, 1, 2, \dots,\tag{9.27}$$

where $(k_n^s)^2$ and $(k_n^a)^2$ denote the eigenvalues corresponding to even and odd eigenfunctions respectively. These eigenvalues are asymptotically close to the eigenvalues $(k_n^{s0})^2 = n^2$ and $(k_n^{a0})^2 = \left(\frac{2n+1}{2}\right)^2$ of the Laplace operator on the interval $[-\pi, \pi]$ (with Neumann boundary conditions $\psi'(-\pi) = 0 = \psi'(\pi)$ at the endpoints).
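The dispersion equations (9.25) and (9.26) are easy to solve numerically. The sketch below is illustrative (the bracketing intervals are read off from the sign changes of $\tan$ and $\cot$); it confirms the asymptotics (9.27): the even roots approach the integers from below, the odd roots approach the half-integers from above.

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# even eigenfunctions: tan(k*pi) = -1/k has one root in each interval (n - 1/2, n)
ks = [bisect(lambda k: math.tan(k * math.pi) + 1.0 / k, n - 0.5 + 1e-9, float(n))
      for n in range(1, 7)]
# odd eigenfunctions: cot(k*pi) = -1/k has one root in each interval (n + 1/2, n + 1)
ka = [bisect(lambda k: math.cos(k * math.pi) / math.sin(k * math.pi) + 1.0 / k,
             n + 0.5 + 1e-9, n + 1.0 - 1e-9)
      for n in range(1, 7)]

print([round(k, 4) for k in ks])  # slightly below the integers n
print([round(k, 4) for k in ka])  # slightly above the half-integers n + 1/2
```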

Substituting the spectrum of the operator $\tilde{L}$ into formula (9.20) one obtains the Euler characteristic of the interval, not of the circle:

$$2\lim_{t \to \infty} \sum_{k_n = k_n^s,\, k_n^a} \cos k_n/t \left( \frac{\sin k_n/2t}{k_n/2t} \right)^2 = 2\lim_{t \to \infty} \sum_{k_n^0 = k_n^{s0},\, k_n^{a0}} \cos k_n^0/t \left( \frac{\sin k_n^0/2t}{k_n^0/2t} \right)^2 = 1, \tag{9.28}$$

where we used Lemma 9.4. It follows that formula (9.1) in general is not valid for Schrödinger operators on graphs with general (Hermitian) vertex conditions.

# **9.4 Reconstruction of Graphs with Rationally Independent Lengths**

Formula (8.21) can be applied to solve the inverse spectral problem in the very special case of graphs with edges having rationally independent lengths.<sup>2</sup> Our studies will again be restricted to the case of standard Laplacians. Such operators are uniquely determined by the underlying metric graphs and therefore the corresponding inverse problem is equivalent to the problem of recovering the metric graph

<sup>2</sup> See precise definition below.

from the spectrum of the Laplacian $L^{\mathrm{st}}(\Gamma)$. In this section we follow our paper [346], inspired by Gutkin and Smilansky [252].

Note that this reconstruction is not possible for graphs having vertices of degree two. The two edges connected at such a vertex can be replaced by a single edge whose length equals the sum of the two lengths. This is because standard conditions at a degree two vertex imply that the function and its first derivative are continuous at the vertex.

The set $\mathbb{L}$ of lengths of all periodic paths of a metric graph is usually called the *length spectrum*. This is a set of positive real numbers, all being linear combinations of the lengths $\ell_n$ of the edges with natural-number coefficients. But not all such linear combinations are present in $\mathbb{L}$, since not all edges are connected to each other directly.

We are going to assume that the lengths of the edges are *rationally independent*, i.e. if the equality

$$\sum\_{n=1}^{N} \alpha\_n \ell\_n = 0$$

holds with certain rational $\alpha_n \in \mathbb{Q}$, then all $\alpha_n$ are necessarily equal to zero. This assumption is very important, since we already know that even trees cannot be reconstructed from the spectra of their Laplacians unless extra restrictions are imposed (see Sect. 2.2 (Problem 6)). If the lengths are rationally independent, then knowing the length $\ell(p)$ of a periodic path we know which edges this path comes across and how many times, provided of course we know the $\ell_n$. Hence our first task should be to recover the lengths of the edges.
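The last observation can be illustrated by a short computation (the edge lengths $\sqrt{2}, \sqrt{3}, \sqrt{5}$ are hypothetical; this is a sketch, not the book's algorithm): rational independence makes the natural-number combination reproducing a given path length unique, so even a brute-force search recovers how many times the path crosses each edge.

```python
import itertools
import math

# hypothetical rationally independent edge lengths
lengths = [math.sqrt(2), math.sqrt(3), math.sqrt(5)]

# length of some periodic path crossing E_1 twice and E_2 once
path_len = 2 * lengths[0] + 1 * lengths[1]

# by rational independence there is exactly one combination of
# small multiplicities reproducing path_len (up to float tolerance)
hits = [combo for combo in itertools.product(range(6), repeat=3)
        if abs(sum(c * l for c, l in zip(combo, lengths)) - path_len) < 1e-9]
print(hits)  # -> [(2, 1, 0)]
```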

Looking at formula (8.21) one may get the impression that the lengths of all periodic paths can be recovered directly as the (positive) points supporting the delta functions $\delta_{\ell(p)}$. But one should pay attention to the fact that complicated graphs may have several periodic paths with precisely the same length. Then the contributions from all such paths may cancel each other out.

**Example 9.7 ([410])** Consider the graph presented in Fig. 9.5 with the lengths of edges indicated. There exist precisely six periodic paths of length $2\ell_1 + \ell_2 + \ell_3 + \ell_4 + \ell_5$. These paths are indicated in the lower part of the figure. For each closed curve there exist precisely two paths that run along it in opposite directions. We assume that the vertices $V^2$ and $V^4$ have arbitrary degrees $d_2$ and $d_4$, and the degrees of the vertices $V^1$ and $V^3$ are equal to $3$. Then the product of scattering coefficients for the left path is

$$T(V^4) \cdot T(V^3) \cdot R(V^1) \cdot T(V^3) \cdot T(V^2) \cdot T(V^1) = \frac{2}{d\_4} \cdot \frac{2}{3} \cdot (-\frac{1}{3}) \cdot \frac{2}{3} \cdot \frac{2}{d\_2} \cdot \frac{2}{3} = -\frac{32}{81} \frac{1}{d\_2 d\_4}.$$

Here the scattering coefficients are taken from formula (3.40) determining the vertex scattering matrix for standard vertex conditions. The central path gives the same

**Fig. 9.5** Periodic paths of length $2\ell_1 + \ell_2 + \ell_3 + \ell_4 + \ell_5$. © Marlena Nowaczyk. Reproduced with permission

contribution, whereas contribution from the right path is

$$T(V^4) \cdot T(V^3) \cdot T(V^1) \cdot T(V^2) \cdot T(V^3) \cdot T(V^1) = \frac{2}{d\_4} \cdot \frac{2}{3} \cdot \frac{2}{3} \cdot \frac{2}{d\_2} \cdot \frac{2}{3} \cdot \frac{2}{3} = \frac{64}{81} \frac{1}{d\_2 d\_4}.$$

Then the total contribution from all six paths is given by

$$\sum_{\substack{\gamma \in \mathcal{P} \\ \ell(\gamma) = 2\ell_1+\ell_2+\ell_3+\ell_4+\ell_5}} \ell(\mathrm{prim}\,(\gamma))\, S_{\mathbf{v}}(\gamma) = 2(2\ell_1+\ell_2+\ell_3+\ell_4+\ell_5) \left( -\frac{32}{81\, d_2 d_4} - \frac{32}{81\, d_2 d_4} + \frac{64}{81\, d_2 d_4} \right) = 0.$$

It follows that formula (8.21) contains no delta function supported at $\ell = 2\ell_1 + \ell_2 + \ell_3 + \ell_4 + \ell_5$.

We believe that the constructed example is the simplest one, since any such example should contain paths with reflections—paths without reflections always lead to positive scattering coefficients. But it might be interesting to find an even simpler example.
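The cancellation in Example 9.7 can be checked with exact rational arithmetic. In this sketch the degrees $d_2 = 4$ and $d_4 = 5$ are arbitrary sample choices; the transmission and reflection coefficients $T = 2/d$ and $R = 2/d - 1$ are those of formula (3.40) for standard vertex conditions.

```python
from fractions import Fraction

def T(d):
    """Transmission coefficient at a standard vertex of degree d."""
    return Fraction(2, d)

def R(d):
    """Back-scattering coefficient at a standard vertex of degree d."""
    return Fraction(2, d) - 1

d1 = d3 = 3    # degrees fixed in Example 9.7
d2, d4 = 4, 5  # arbitrary degrees of V^2 and V^4

left = T(d4) * T(d3) * R(d1) * T(d3) * T(d2) * T(d1)
center = left  # the central path yields the same product
right = T(d4) * T(d3) * T(d1) * T(d2) * T(d3) * T(d1)

# each closed curve is traversed in two directions, hence the factor 2
total = 2 * (left + center + right)
print(left, right, total)  # -> -8/405 16/405 0
```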

**Problem 40** Construct your own example of a metric graph with periodic paths giving zero contribution to the right hand side of trace formula (8.21).

Having the presented example in mind, let us introduce the notion of the *reduced length spectrum* $\mathbb{L}' \subset \mathbb{L}$ defined as

$$\mathbb{L}' = \Bigl\{ \ell : \sum_{\substack{\gamma \in \mathcal{P} \\ \ell(\gamma) = \ell}} \ell(\mathrm{prim}\,(\gamma))\, S_{\mathbf{v}}(\gamma) \neq 0 \Bigr\}. \tag{9.29}$$

The following lemma proves that the reduced length spectrum always contains the lengths of the shortest periodic paths associated with each edge or pair of neighbouring edges.

**Lemma 9.8** *Let $\Gamma$ be a connected finite metric graph without degree two vertices and with rationally independent lengths of edges. The reduced length spectrum $\mathbb{L}'$ contains at least the following lengths:*

- *$\ell_j$ if the edge $E_j$ forms a loop, and $2\ell_j$ otherwise;*
- *for every pair of neighbouring edges $E_j$ and $E_k$, at least one of the lengths $\ell_j + \ell_k$, $2\ell_j + \ell_k$, $\ell_j + 2\ell_k$, or $2(\ell_j + \ell_k)$.*
*Proof* The two assertions will be proven separately by considering all possible cases. Let us first note that if a periodic path of a given length $\ell$ is unique, then the coefficient in front of the delta function $\delta_\ell$ is always different from zero, since the product of scattering coefficients is always different from zero (the graph is assumed to contain no degree two vertices). The same holds true if there are several paths of the same length but the corresponding products of scattering coefficients are equal.

Consider first the case of a single edge *Ej* . Possible cases are:

• *Ej forms a loop.* 

There are two periodic paths of length $\ell_j$ running along the loop in opposite directions. The corresponding products of scattering coefficients are equal and therefore $\ell_j$ is in $\mathbb{L}'$.

• *$E_j$ connects two different vertices.* There is a unique path<sup>3</sup> of length $2\ell_j$ and it is present in the reduced length spectrum as explained above.

Let $E_j$ and $E_k$ be two neighbouring edges and consider all possible ways they can be connected to each other:


<sup>3</sup> The path running in the opposite direction coincides with the original one.

There are two shortest paths of length $\ell_j + 2\ell_k$. The corresponding products of scattering coefficients are equal. Hence $\ell_j + 2\ell_k$ belongs to $\mathbb{L}'$.


We are now going to show that knowledge of the reduced length spectrum together with the total length of the graph is enough to reconstruct the graph. The first step in this direction is to recover the lengths of the edges from the total length of the graph and the set $\mathbb{L}'$. The following result can be proven by refining the method of Gutkin and Smilansky [252].

**Lemma 9.9** *Let the lengths of the edges of a finite connected metric graph without degree two vertices be rationally independent. Then the total length $\mathcal{L}$ of the graph and the reduced length spectrum $\mathbb{L}'$ (defined by (9.29)) determine the lengths of all edges, independently of whether these edges form loops or not.*

*Proof* The set $\mathbb{L}'$ is infinite, but we are interested in reconstructing $N$ rationally independent lengths $\ell_n$. Therefore it is wise to restrict our consideration to a smaller, even finite, set that certainly contains the lengths of all shortest paths described in the previous lemma. For example, if we take all periodic paths with lengths less than double the total length $\mathcal{L}$, then all $\ell_n$ or $2\ell_n$ certainly belong to the set.

Consider the finite subset $\mathbb{L}''$ of $\mathbb{L}' \subset \mathbb{L}$ consisting of all lengths less than or equal to $2\mathcal{L}$:

$$
\mathbb{L}'' = \{ \ell \in \mathbb{L}' : \ell \le 2\mathcal{L} \}.
$$

This finite set contains at least the numbers $2\ell_1, 2\ell_2, \dots, 2\ell_N$. Therefore there exists a basis $s_1, s_2, \dots, s_N$, such that every length $\ell \in \mathbb{L}''$ (as well as every length from $\mathbb{L}'$) can be written as a half-integer combination of the $s_j$:

$$\ell = \frac{1}{2} \sum\_{j=1}^{N} n\_j s\_j, \quad n\_j \in \mathbb{N}.$$

Such a basis is not unique, especially if the graph has loops. Any two bases $\{s_j\}$ and $\{s'_j\}$ are related as follows: $s_j = n_j s'_{i_j}$, $n_j = \frac{1}{2}, 1, 2$, where $i_1, i_2, \dots, i_N$ is a permutation of $1, 2, \dots, N$. Then among all possible bases consider the one with the shortest total length $\sum_{j=1}^N s_j$. This basis is unique up to a permutation.

The total length $\mathcal{L}$ of the graph can then be written as a sum of the $s_j$ with coefficients equal to $1$ or $1/2$:

$$\mathcal{L} = \sum\_{j=1}^{N} \alpha\_j s\_j, \quad \alpha\_j = 1, 1/2. \tag{9.30}$$

The coefficients in this sum are equal to $1$ if $s_j$ is equal to the length of a certain edge $E_j$, i.e. when the edge forms a loop. The coefficient $1/2$ appears if $s_j$ is equal to double the length of an edge; in this case the edge does not form a loop. Therefore the lengths of the edges, up to a permutation, can be recovered from (9.30) using the formula $\ell_j = \alpha_j s_j$, $j = 1, 2, \dots, N$. To check whether an edge $E_j$ forms a loop or not it is enough to check whether $\ell_j$ belongs to $\mathbb{L}'$ or not.
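This recovery step can be sketched numerically. The basis and total length below are hypothetical: one loop of length $\sqrt{2}$ and two non-loop edges of lengths $\sqrt{3}$ and $\sqrt{5}$. Searching over coefficients $\alpha_j \in \{1, 1/2\}$ in (9.30) determines both the edge lengths and which edges are loops.

```python
import math
from itertools import product

# hypothetical basis produced by Lemma 9.9: s_1 is an edge length (a loop),
# s_2 and s_3 are doubled edge lengths (non-loops)
s = [math.sqrt(2), 2 * math.sqrt(3), 2 * math.sqrt(5)]
total = math.sqrt(2) + math.sqrt(3) + math.sqrt(5)  # total length L of the graph

solutions = []
for alphas in product((1.0, 0.5), repeat=3):
    if abs(sum(a * x for a, x in zip(alphas, s)) - total) < 1e-9:
        solutions.append(alphas)

(alphas,) = solutions                      # the decomposition (9.30) is unique
edge_lengths = [a * x for a, x in zip(alphas, s)]
loops = [a == 1.0 for a in alphas]         # alpha_j = 1 means E_j is a loop
print(edge_lengths, loops)
```

Rational independence of the edge lengths is what makes the unpacking of a single solution safe here.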

Once the lengths of all edges are known, the graph can be reconstructed from the reduced length spectrum. Lemma 9.8 implies that looking at the reduced length spectrum $\mathbb{L}'$ one can determine whether any two edges $E_j$ and $E_k$ are neighbours or not (have at least one common endpoint): the edges $E_j$ and $E_k$ are neighbours if and only if $\mathbb{L}'$ contains at least one of the lengths $\ell_j + \ell_k$, $2\ell_j + \ell_k$, $\ell_j + 2\ell_k$, or $2(\ell_j + \ell_k)$.
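The neighbour test can be sketched as follows. The path graph $E_1 - E_2 - E_3$ and its rationally independent lengths are hypothetical, and only the guaranteed shortest lengths of Lemma 9.8 are placed into the reduced length spectrum.

```python
import math
from itertools import combinations

# hypothetical path graph E1 -- E2 -- E3 with rationally independent lengths
lengths = {1: math.sqrt(2), 2: math.sqrt(3), 3: math.sqrt(5)}
true_neighbours = {(1, 2), (2, 3)}

# lengths guaranteed to lie in L' by Lemma 9.8: twice each edge length
# (no loops here) and, say, 2*(l_j + l_k) for each neighbouring pair
L_red = {2 * l for l in lengths.values()}
L_red |= {2 * (lengths[j] + lengths[k]) for j, k in true_neighbours}

def are_neighbours(j, k, spectrum, tol=1e-9):
    lj, lk = lengths[j], lengths[k]
    candidates = (lj + lk, 2 * lj + lk, lj + 2 * lk, 2 * (lj + lk))
    return any(abs(c - s) < tol for c in candidates for s in spectrum)

found = {(j, k) for j, k in combinations(sorted(lengths), 2)
         if are_neighbours(j, k, L_red)}
print(found == true_neighbours)  # the adjacency is recovered
```

Rational independence rules out accidental coincidences among the candidate lengths, which is what makes the membership test reliable.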

**Lemma 9.10** *Every finite connected metric graph without degree two vertices can be reconstructed from the set $\{\ell_n\}_{n=1}^N$ of the lengths of all edges and the reduced length spectrum $\mathbb{L}'$ defined by (9.29), provided the $\ell_n$ are rationally independent.*

*Proof* Let us introduce the set of edges $\mathcal{E} = \{E_n\}_{n=1}^N$ uniquely determined by the lengths $\ell_j$. We shall prove the lemma for simple graphs first. A graph is called *simple* if it contains no loops and no multiple edges. From an arbitrary graph one can obtain a simple graph by cancelling all loops and choosing only one edge from every multiple one:


The new subsets $\mathcal{E}^* \subset \mathcal{E}$ containing $N^* \leq N$ elements and $\mathbb{L}'^* \subset \mathbb{L}'$ obtained in this way correspond to a simple subgraph $\Gamma^* \subset \Gamma$ which can be obtained from $\Gamma$ by removing all loops and reducing all multiple edges (Fig. 9.6). One obtains different $\Gamma^*$ by choosing different edges to be kept during the reduction, but all $\Gamma^*$ have the same topology.

The graph $\Gamma^*$ has the same vertex set as $\Gamma$. Note that the reduced graph may have degree two vertices, but such vertices are not dangerous, since the lengths of the edges connected at such a vertex are present in the reduced length spectrum $\mathbb{L}'$.

The reconstruction will be done iteratively: we construct an increasing finite sequence of subgraphs $\Gamma_1 \subset \Gamma_2 \subset \dots \subset \Gamma_{N^*} = \Gamma^*$. The corresponding subsets of edges will be denoted by $\mathcal{E}_k$.

For $k = 1$ take the graph $\Gamma_1$ consisting of one edge, say $E_1$. Its endpoints are not connected to each other, as we have removed all loops.

Suppose that a connected subgraph $\Gamma_k$ consisting of $k$ edges is reconstructed. Pick any edge $E_{k+1}$ which is a neighbour of at least one of the edges in $\Gamma_k$. Let us denote by $\mathcal{E}_k^{\mathrm{nbh}}$ the subset of $\mathcal{E}_k$ of all edges which are neighbours of $E_{k+1}$. We have to identify one or two vertices in $\Gamma_k$ to which the new edge $E_{k+1}$ is attached. Every such vertex is uniquely determined by listing the edges joined at it, since the subgraph $\Gamma_k$ is simple. Therefore we have to separate $\mathcal{E}_k^{\mathrm{nbh}}$ into two classes of edges attached to each of the endpoints of $E_{k+1}$. (One of the two sets can be empty, which corresponds to the case when the edge $E_{k+1}$ is attached to $\Gamma_k$ at one vertex only.)

Take any two edges from $\mathcal{E}_k^{\mathrm{nbh}}$, say $E'$ and $E''$. The edges $E'$ and $E''$ belong to the same class if and only if:


In this way we either separate $\mathcal{E}_k^{\mathrm{nbh}}$ into two classes of edges, or $\mathcal{E}_k^{\mathrm{nbh}}$ consists of edges joined at one vertex. In the first case the new edge $E_{k+1}$ connects the two unique vertices determined by the subclasses. In the second case $E_{k+1}$ is attached by one endpoint to $\Gamma_k$ at the vertex uniquely determined by $\mathcal{E}_k^{\mathrm{nbh}}$. It does not play any role which of the two endpoints of $E_{k+1}$ is attached to the chosen vertex of $\Gamma_k$, since the two possible graphs are equivalent.

Denote the graph obtained in this way by $\Gamma_{k+1}$.

Since the graph $\Gamma^*$ is connected and finite, after $N^*$ steps one arrives at $\Gamma_{N^*} = \Gamma^*$.

It remains to add all loops and multiple edges to reconstruct the initial graph $\Gamma$. Suppose that the reconstructed subgraph $\Gamma^*$ is not trivial, i.e. consists of more than one edge. Then every vertex is uniquely determined by listing all edges joined at it. Check first to which vertex a loop $E_n$ is connected by checking whether periodic paths of length $\ell_n + 2\ell_j$ belong to $\mathbb{L}'$ or not. All such edges $E_j$ determine the unique vertex to which $E_n$ should be attached. To reconstruct multiple edges check whether $\ell_m + \ell_j$ is in $\mathbb{L}'$, where $E_j \in \mathcal{E}^*$. Substitute all such edges $E_j$ with the corresponding multiple edges.

In the case where $\Gamma^*$ is trivial, the proof is an easy exercise.

Our main result can be obtained as a straightforward implication of Lemmas 9.9 and 9.10.

**Theorem 9.11** *The spectrum of a Laplace operator on a metric graph determines the graph uniquely, provided that:*

- *the graph is finite, connected, and contains no vertices of degree two;*
- *the edge lengths are rationally independent.*
*Proof* The spectrum of the operator determines the left-hand side of the trace formula (8.20). Formula (8.21) shows that the spectrum of the graph determines the total length of the graph and the reduced length spectrum. The total length can also be determined using Weyl's asymptotics (4.25) (or (9.13))

$$\mathcal{L} = \pi \lim\_{n \to \infty} \frac{n}{k\_n}. \tag{9.31}$$

Having reconstructed the total length one may use Lemma 9.9 to conclude that the lengths of all edges can be extracted from the reduced length spectrum. Lemma 9.10 then implies that the whole graph can be reconstructed provided that edge lengths are rationally independent.
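The limit (9.31) is easy to probe numerically. The following toy sketch (not from the book) uses a single interval with Neumann conditions, whose eigenvalues k<sub>n</sub> = πn/ℓ are known explicitly, so the quotient recovers the total length; for a general graph the convergence is only asymptotic.

```python
import numpy as np

# Toy check of the Weyl-type formula L = pi * lim_{n->infty} n / k_n (9.31).
# For a single interval of length ell with Neumann conditions the
# eigenvalues are k_n = pi * n / ell, so pi * n / k_n equals ell exactly;
# for a general metric graph this holds only in the limit n -> infinity.
ell = 2.7                              # illustrative edge length
n = np.arange(1, 10001)
k_n = np.pi * n / ell                  # k_n = sqrt(lambda_n)
total_length = np.pi * n[-1] / k_n[-1]
print(total_length)                    # -> 2.7
```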

One can easily remove the condition that the graph is connected. The result can be generalised to include more general differential operators on the edges and vertex conditions. Moreover, it is enough to require that only edges situated close to each other have rationally independent lengths [408, 409].

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 10 Arithmetic Structure of the Spectrum and Crystalline Measures**

We consider applications of the trace formula and spectral theory of metric graphs in Fourier analysis. It turns out that spectral measures associated with metric graphs give explicit examples of crystalline measures.

## **10.1 Arithmetic Structure of the Spectrum**

Let us discuss the arithmetic structure of the spectra of standard Laplacians on metric graphs. It depends both on the topology of the underlying discrete graph *G* and on the relations between the edge lengths in the metric graph *Γ*.

The trace formula (8.20) is going to play a crucial role in our studies, but we shall write it in a slightly modified way by moving the term *χδ* from the right-hand side to the left-hand side:

$$\underbrace{(1+\beta\_1)}\_{2-\chi}\delta + \sum\_{k\_n \neq 0} \left(\delta\_{k\_n} + \delta\_{-k\_n}\right) = \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma))\, S\_{\mathrm{v}}(\gamma) \cos k\ell(\gamma), \tag{10.1}$$

where we assumed that the graph is connected and therefore *m*<sub>*s*</sub>(0) = 1. In the original formula (8.20) the left-hand side contains all spectral information while the right-hand side collects geometric and topological characteristics of the metric graph. The modified formula (10.1) reflects the crystalline<sup>1</sup> structure of the corresponding measures: the left-hand side as well as the Fourier transform of the right-hand side are given by infinite sums of delta functions.

<sup>1</sup> See the following Sect. 10.2, where crystalline measures are introduced.


Taking into account the modified variant (10.1) of the trace formula, it is natural to look, instead of the discrete eigenvalues *λ*<sub>*j*</sub>, at their square roots *k*<sub>*j*</sub> = √*λ*<sub>*j*</sub> ≥ 0 and in addition to adjust the spectrum at *k* = 0, so that the **modified spectrum** Spec(*Γ*) of *L*<sup>st</sup>(*Γ*) is

$$\text{Spec}\left(\Gamma\right) = \left\{ \underbrace{0, 0, \ldots, 0}\_{1 + \beta\_1 \text{ times}} \; , \pm \sqrt{\lambda\_j} : \lambda\_j \neq 0 \right\}. \tag{10.2}$$

Note that we use this convention for the spectrum only in this chapter, where its arithmetic structure is discussed, and in Chap. 24 devoted to discrete graphs.

We shall discuss two extreme cases:


In the first case the length of every edge is an integer multiple of a certain basic length ℓ > 0. Therefore introducing degree two vertices at distance ℓ apart on every edge makes the metric graph equilateral—all edges have the same length ℓ. The spectrum of such graphs is directly connected to the spectrum of the corresponding normalised Laplacian matrix *L*<sup>*N*</sup>(*G*) (see Sect. 24.1). As a result the spectrum Spec(*Γ*) is periodic and hence is given by a finite number of arithmetic sequences.

Let us focus on the second case where the set of edge lengths is rationally independent (see Sect. 9.4). It is clear that each loop in *Γ* leads to the arithmetic sequence {2*π n*/ℓ<sub>*j*</sub>, *n* ∈ ℤ} in the spectrum (here ℓ<sub>*j*</sub> is the length of the loop). Each such eigenvalue has multiplicity 1. We are going to prove that no other arithmetic sequences occur.
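That every loop produces such a sequence can be checked directly: the eigenfunction supported by a loop of length ℓ must vanish at the attachment vertex, together with the balance of the two derivatives there, and this holds exactly for k = 2πn/ℓ. A numerical sketch (loop length below is an arbitrary illustration):

```python
import numpy as np

# Check that psi(x) = sin(k x) with k = 2*pi*n/ell on a loop of length ell,
# extended by zero to the rest of the graph, satisfies standard (Kirchhoff)
# conditions at the attachment vertex: psi vanishes at both endpoints of
# the loop and the derivatives at the two endpoints balance.
ell = np.sqrt(7.0)                     # illustrative loop length
ok = True
for n in range(1, 6):
    k = 2 * np.pi * n / ell
    val_start, val_end = np.sin(0.0), np.sin(k * ell)          # both ~ 0
    der_start, der_end = k * np.cos(0.0), k * np.cos(k * ell)
    ok &= abs(val_start) < 1e-9 and abs(val_end) < 1e-9        # continuity
    ok &= abs(der_start - der_end) < 1e-9                      # derivative balance
print(ok)
```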

**Theorem 10.1** *Let Γ be a compact metric graph on N edges with the loops given by the edges E<sub>1</sub>, E<sub>2</sub>, ..., E<sub>ν</sub>, ν ≤ N. Assume that the edge lengths ℓ<sub>j</sub>, j = 1, 2, ..., N, are rationally independent and Γ is neither the segment graph Γ<sub>(1.1)</sub>, nor the cycle graph Γ<sub>(1.2)</sub>, nor the figure eight graph Γ<sub>(2.4)</sub>. Then the spectrum of the standard Laplacian on Γ can be presented as a union of multisets*

$$\operatorname{Spec}(\Gamma) = L\_1(\Gamma) \cup L\_2(\Gamma) \cup \dots \cup L\_{\nu}(\Gamma) \cup \operatorname{Spec}^\*(\Gamma), \tag{10.3}$$

*where L<sub>j</sub>(Γ) = {2π n/ℓ<sub>j</sub>, n ∈ ℤ} are full size arithmetic sequences determined by the loop lengths ℓ<sub>j</sub>, j = 1, 2, ..., ν, and* Spec<sup>∗</sup>(*Γ*) *is a discrete set containing no full size arithmetic progression and satisfying:*

$$\#\left(\operatorname{Spec}^\*(\Gamma)\cap[-T,T]\right) = aT + \mathcal{O}(1), \quad \text{as } T \to \infty,\tag{10.4}$$

*with* 

$$a = \frac{1}{\pi} \left( 2\mathcal{L} - \sum\_{j=1}^{\nu} \ell\_j \right) = \frac{1}{\pi} \left( \ell\_1 + \dots + \ell\_{\nu} + 2(\ell\_{\nu+1} + \dots + \ell\_N) \right). \tag{10.5}$$

*Proof* If the graph *Γ* has a loop of length ℓ<sub>*j*</sub>, then the spectrum Spec(*Γ*) contains the full size arithmetic sequence {2*π n*/ℓ<sub>*j*</sub>, *n* ∈ ℤ}; hence representation (10.3) is a direct consequence of the fact that each loop gives rise to the eigenfunctions

$$\psi(\mathbf{x}) = \begin{cases} \sin\frac{2\pi}{\ell\_j}n(\mathbf{x} - \mathbf{x}\_{2j-1}), & \mathbf{x} \in E\_j, \\ 0, & \text{otherwise}, \end{cases} \quad n \in \mathbb{Z}.$$

Formula (10.4) together with (10.5) then follow directly from the Weyl asymptotics (4.25).

It remains to prove that Spec<sup>∗</sup>(*Γ*) contains no full size arithmetic sequence. It will be convenient, as in Chap. 6, to use simultaneously the complex torus

$$\mathbb{T}^N = \{ \mathbf{z} \in \mathbb{C}^N : |z\_j| = 1, j = 1, 2, \dots, N \}$$

and the real torus

$$\mathbf{T}^N = \mathbb{R}^N / 2\pi \mathbb{Z}^N.$$

Consider first the case where *Γ* is not a watermelon graph; then the set Spec<sup>∗</sup>(*Γ*) is given as the intersection between the curve

$$(e^{ik\ell\_1}, \dots, e^{ik\ell\_N}) \in \mathbb{T}^N \tag{10.6}$$

and the zero set **Z**<sup>∗</sup><sub>*G*</sub> of the reduced secular polynomial *P*<sup>∗</sup><sub>*G*</sub>(**z**).

Assume that the reduced spectrum Spec<sup>∗</sup>(*Γ*) contains an arithmetic sequence

$$a + nb, \quad n \in \mathbb{Z},$$

where *a, b* <sup>∈</sup> <sup>R</sup>. Consider the corresponding points

$$\vec{\psi}(n) = (a+nb)\vec{\ell} = a\vec{\ell} + nb\vec{\ell} \in \mathbf{T}^N$$

on the real torus **T**<sup>*N*</sup>, where we use the vector ℓ⃗ = (ℓ<sub>1</sub>, ℓ<sub>2</sub>, ..., ℓ<sub>*N*</sub>) of edge lengths. Note that {*b*ℓ<sub>*j*</sub>}<sup>*N*</sup><sub>*j*=1</sub> are rationally independent as {ℓ<sub>*j*</sub>}<sup>*N*</sup><sub>*j*=1</sub> are.

Then there are two possibilities:

• if {*b*ℓ<sub>*j*</sub>}<sup>*N*</sup><sub>*j*=1</sub> are linearly independent modulo 2*π* with respect to ℚ, then the points *ψ*⃗(*n*) densely cover the torus **T**<sup>*N*</sup>;

• if {*b*ℓ<sub>*j*</sub>}<sup>*N*</sup><sub>*j*=1</sub> are linearly dependent modulo 2*π* with respect to ℚ, *i.e.* there exist integers *m*<sub>*j*</sub> and *m*, not all zero, such that

$$\sum\_{j=1}^{N} m\_j b \ell\_j = m 2\pi,$$

then the points *ψ(n)* densely cover the hypertorus

$$\sum\_{j=1}^{N} m\_j \varphi\_j = m 2\pi.$$
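The density claim in the first possibility is Kronecker's theorem on irrational winding. A small numerical sketch (the edge lengths and the target point below are arbitrary choices, not from the book):

```python
import numpy as np

# Equidistribution sketch: if b*ell_1, b*ell_2 together with 2*pi are
# rationally independent, the points psi(n) = n*b*ell (mod 2*pi) become
# dense on the real torus T^2.  We track how close the orbit comes to an
# arbitrary target point as n grows.
ell = np.array([1.0, np.sqrt(2.0)])    # illustrative edge lengths
b = 1.0
target = np.array([1.0, 2.0])          # arbitrary point on the torus
n = np.arange(1, 200001)[:, None]
pts = np.mod(n * b * ell, 2 * np.pi)
diff = np.abs(pts - target)
diff = np.minimum(diff, 2 * np.pi - diff)     # distance measured on the torus
min_dist = np.sqrt((diff ** 2).sum(axis=1)).min()
print(min_dist)                        # small: the orbit fills the torus
```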

In the first case the zero set of the polynomial *P*<sup>∗</sup><sub>*G*</sub>(**z**) contains the whole torus 𝕋<sup>*N*</sup>, but this is impossible since *P*<sup>∗</sup><sub>*G*</sub> is not identically zero.

In the second case the polynomial *P*<sup>∗</sup><sub>*G*</sub> vanishes on the hypertorus, which is given on **T**<sup>*N*</sup> by the hyperplanes:

$$\frac{m\_1}{g}\varphi\_1 + \frac{m\_2}{g}\varphi\_2 + \dots + \frac{m\_N}{g}\varphi\_N = \frac{m}{g}2\pi + \frac{k}{g}2\pi, \quad k \in \mathbb{Z},$$

where *g* ∈ ℕ is the greatest common divisor (GCD) of {*m*<sub>*j*</sub>}<sup>*N*</sup><sub>*j*=1</sub>. We also assume without loss of generality that the GCD of {*m*<sub>1</sub>, ..., *m*<sub>*N*</sub>, *m*} is equal to 1.

Consider the polynomial *T (***z***)* vanishing on one of the hyperplanes

$$T(\mathbf{z}) = z\_1^{m\_1/g} z\_2^{m\_2/g} \dots z\_N^{m\_N/g} - e^{i2\pi m/g} \,. \tag{10.7}$$

Let **I** be the polynomial ideal generated by *T (***z***)* with the zero set

$$V(\mathbf{I}) = \{ \mathbf{z} \in \mathbb{C}^N \, : \, F(\mathbf{z}) = 0, \,\forall F \in \mathbf{I} \}.$$

The secular polynomial *P*<sup>∗</sup><sub>*G*</sub> vanishes on *V*(**I**); then Hilbert's Nullstellensatz implies that (*P*<sup>∗</sup><sub>*G*</sub>(**z**))<sup>*r*</sup> for a certain *r* ∈ ℕ belongs to the ideal, *i.e.*

$$\left(P\_G^\*(\mathbf{z})\right)^r = R\left(\mathbf{z}\right)T\left(\mathbf{z}\right),\tag{10.8}$$

for a certain polynomial *R*(**z**). Since *P*<sup>∗</sup><sub>*G*</sub> is irreducible (Theorem 7.19), the latter equality may hold only if *T*(**z**) coincides with a certain power of *P*<sup>∗</sup><sub>*G*</sub>(**z**), but *T*(**z**) given by (10.7) is not a power of any other polynomial. The only possibility that remains is that

$$P\_G^\*(\mathbf{z}) = T(\mathbf{z}).$$

We already know that *P*<sup>∗</sup><sub>*G*</sub> is a first or second degree polynomial in each of the variables; then (10.7) implies that

$$P\_G^\*(\mathbf{z}) = T(\mathbf{z}) = \left(\prod\_{j=1}^{\nu} z\_j\right) \left(\prod\_{j=\nu+1}^{N} z\_j\right)^2 - e^{i2\pi m/g},\tag{10.9}$$

where the first product is over the edges forming loops in *Γ* and the second one over all other edges. In particular, we have *m*<sub>*j*</sub>/*g* ∈ {1, 2}.

To prove that representation (10.9) leads to a contradiction we shall consider contractions of graphs as in Chap. 7 and use the explicit formula (7.6) describing change of the secular polynomials under contraction.

The secular polynomials for all graphs on at most three edges have been listed in Sect. 6.2. Only genuine graphs (i.e. excluding the graphs

$$G\_{(2.1)}, G\_{(2.3)}, G\_{(3.1)}, G\_{(3.3)}, G\_{(3.5)}, G\_{(3.6)}, G\_{(3.10)}$$

having degree two vertices) need to be examined. We see that only the graphs *G*<sub>(1.1)</sub> (segment), *G*<sub>(1.2)</sub> (cycle), and *G*<sub>(2.4)</sub> (figure eight graph) have secular polynomials compatible with (10.9), but these graphs are excluded by the assumptions of the theorem.

Assume now that *Γ* is a genuine graph on at least 4 edges. Then Lemma 7.7 implies that it can be contracted to a genuine graph on three edges, i.e. to one of the following graphs:

$$G\_{(3.2)}, G\_{(3.4)}, G\_{(3.7)}, G\_{(3.8)}, G\_{(3.9)}, G\_{(3.11)}.$$

The secular polynomials for the graphs *G*<sub>(3.2)</sub>, *G*<sub>(3.4)</sub>, *G*<sub>(3.7)</sub>, *G*<sub>(3.8)</sub>, and *G*<sub>(3.11)</sub> (excluding the watermelon graph *G*<sub>(3.9)</sub>) are not compatible with (10.9).

It remains to study the case where *Γ* is a watermelon graph. Repeating our argument we arrive at the equation generalising (10.8):

$$\left(P\_{\mathbf{W}\_N}^s(\mathbf{z})P\_{\mathbf{W}\_N}^a(\mathbf{z})\right)^r = R(\mathbf{z})T(\mathbf{z}).$$

Irreducibility of *P*<sup>*s*</sup><sub>**W**<sub>*N*</sub></sub>(**z**) and *P*<sup>*a*</sup><sub>**W**<sub>*N*</sub></sub>(**z**) and the impossibility to write *T*(**z**) as a power of any other polynomial lead to the conclusion that either

$$P\_{\mathbf{W}\_N}^s(\mathbf{z}) = T(\mathbf{z}) \quad \text{or} \quad P\_{\mathbf{W}\_N}^a(\mathbf{z}) = T(\mathbf{z}). \tag{10.10}$$

It follows in particular that *T*(**z**) is of first order in all variables. Equality (10.10) does not hold for *G*<sub>(3.9)</sub>. For watermelon graphs on more than 3 edges, contraction to any three edges leads to *G*<sub>(3.11)</sub>, and the factorisation of the corresponding polynomial is not compatible with (10.10).

The proof of the above theorem shows that the following notations are useful:


The reduced spectrum Spec<sup>∗</sup>(*Γ*) is obtained by intersecting the line *k*ℓ⃗ with the reduced zero set **Z**<sup>∗</sup>.

Our original proof of the above theorem [351] was based on Lang's conjecture from diophantine analysis (proven in [359, 376] and refined in [184, 185]):

#### **Theorem 10.2 (Lang's Conjecture)** *Assume that:*


$$\overline{\mathbf{G}} = \left\{ z \in T \, : \, z^m \in \mathbf{G} \text{ for some } m \ge 1 \right\} \subset (\mathbb{C}^\*)^N.$$

*Then there exist finitely many translates of (possibly low dimensional) subtori T*1*, T*2*,...,Tμ contained in V such that* 

$$
\overline{\mathbf{G}} \cap V = \overline{\mathbf{G}} \cap (T\_1 \cup T\_2 \cup \dots \cup T\_{\mu}),
$$

*with* 

$$
\mu \le C(V),
$$

*where C(V ) is an effectively computable constant independent of the group.* 

Full usage of diophantine analysis allows one to prove much more sophisticated properties of the spectrum. In particular the following two statements hold under the assumptions of Theorem 10.1:

• The dimension of the reduced spectrum with respect to the rationals (more precisely, the rational dimension of the rational linear span of the reduced spectrum) is infinite, despite the fact that the dimension of the length spectrum, that is of {ℓ(*γ*)}<sub>*γ*∈P</sub>, is always finite [350]:

$$\dim\_{\mathbb{Q}} \mathcal{L}\_{\mathbb{Q}} \left\{ k\_n \right\}\_{k\_n \in \text{Spec}^\*(\Gamma)} = \infty, \quad \dim\_{\mathbb{Q}} \mathcal{L}\_{\mathbb{Q}} \left\{ \ell(\gamma) \right\}\_{\gamma \in \mathcal{P}} = N. \tag{10.11}$$

<sup>2</sup> Here <sup>C</sup><sup>∗</sup> denotes the punctured complex plane <sup>C</sup> \ {0} with the multiplicative group structure.

• The length of possible finite arithmetic progressions—the number of elements in any such progression—can be estimated using the effectively computable constant *C(V)* [351].

## **10.2 Crystalline Measures**

The spectra of standard Laplacians on metric graphs lead to explicit examples of so-called crystalline measures, which have been in the focus of recent studies in Fourier analysis. Before our examples were published [350], it was even conjectured that such positive measures might not exist.

**Definition 10.3** A set *S* is called **discrete** if every point from the set has a (small) neighbourhood containing no other points from the set.

Following [386, 388], crystalline measures *μ* are defined as follows:

**Definition 10.4** A tempered distribution *μ* is a **crystalline measure** if *μ* and its Fourier transform *μ*ˆ are of the form

$$
\mu(k) = \sum\_{k\_n \in K} a\_n \delta\_{k\_n}, \quad \hat{\mu}(s) = \sum\_{s\_n \in S} b\_n \delta\_{s\_n}, \tag{10.12}
$$

with *K* and *S* discrete subsets of R.

The set *S* is referred to in the literature on crystalline measures as the spectrum of the measure. In the case of metric graphs, on the contrary, the set *K* is determined by the spectrum Spec(*Γ*) (or rather Spec<sup>∗</sup>(*Γ*)) of the Laplacian. In order to avoid possible misunderstanding we shall use the word spectrum only in connection with the spectrum of the operator.

The simplest example of a crystalline measure is given by the Poisson summation formula

$$\sum\_{n \in \mathbb{Z}} f(n) = \sum\_{m \in \mathbb{Z}} \hat{f}(2\pi m),\tag{10.13}$$

which also can be written as:

$$
\mu(\mathbf{x}) = \sum\_{n \in \mathbb{Z}} \delta\_n \quad \Rightarrow \quad \hat{\mu}(\xi) = \sum\_{m \in \mathbb{Z}} \delta\_{2\pi m}.\tag{10.14}
$$
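Formula (10.13) is easy to verify numerically. The sketch below (not from the book) uses the Gaussian f(x) = e<sup>−x²/2</sup>, whose Fourier transform in the convention f̂(ξ) = ∫ f(x) e<sup>−iξx</sup> dx equals √(2π) e<sup>−ξ²/2</sup>:

```python
import numpy as np

# Numerical check of Poisson summation (10.13) for f(x) = exp(-x^2/2),
# with f_hat(xi) = sqrt(2*pi) * exp(-xi^2/2).  Truncating both sums at
# |n|, |m| <= 50 leaves an error far below double precision.
n = np.arange(-50, 51)
lhs = np.exp(-n**2 / 2.0).sum()                                   # sum_n f(n)
rhs = (np.sqrt(2 * np.pi) * np.exp(-(2 * np.pi * n)**2 / 2.0)).sum()
print(lhs, rhs)     # both approximately 2.5066
```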

The measure Σ<sub>*n*∈ℤ</sub> *δ*<sub>*tn*+*c*</sub>, *t* > 0, *c* ∈ ℝ, is usually called a **Dirac comb** of period *t*. Any finite linear combination of such measures is again a crystalline measure, called a **generalised Dirac comb**. Such measures are considered trivial crystalline measures, and one is interested in constructing non-trivial crystalline measures, *i.e.* measures not given by a finite combination of Dirac combs. **Periodic generalised Dirac combs** are formed by a finite number of elementary Dirac combs having rationally dependent periods. One may always assume that the periods are equal.

The support of a crystalline measure is by definition a discrete set, but we shall need the slightly more subtle notion of uniformly discrete sets. Discreteness of the set (given by Definition 10.3) does not imply that the distance between any two points in the set is bounded from below by a certain strictly positive number.

**Definition 10.5** A discrete set *S* is called **uniformly discrete** if there is a positive number *d >* 0 such that

$$|x\_n - x\_m| \ge d$$

holds for any *x*<sub>*n*</sub>, *x*<sub>*m*</sub> ∈ *S*, *n* ≠ *m*.

Note that in the case of multiple eigenvalues the support of the spectral measure (see (10.17) below) may be uniformly discrete, even if the spectrum is not uniformly discrete.

The union of two periodic lattices with rationally independent periods provides an example of a set which is discrete but not uniformly discrete. In what follows, uniform discreteness will help us to prove that the measures we construct are not generalised Dirac combs.
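A numerical sketch of this example (with periods 1 and √2; the window sizes are arbitrary):

```python
import numpy as np

# The union of the lattices {n} and {m*sqrt(2)} is discrete but not
# uniformly discrete: within larger and larger windows [0, T] the minimal
# gap between points of the union shrinks towards zero, following the good
# rational approximations of sqrt(2), yet every individual gap is positive.
def min_gap(T):
    pts = np.concatenate([np.arange(0.0, T),
                          np.sqrt(2.0) * np.arange(0.0, T / np.sqrt(2.0))])
    return np.diff(np.unique(pts)).min()

gaps = [min_gap(T) for T in (10, 100, 1000, 10000)]
print(gaps)          # positive numbers shrinking with the window size
```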

**Lemma 10.6** *A measure on* R *given by a finite linear combination of Dirac combs is uniformly discrete if and only if it is periodic.* 

*Proof* Let *μ* be a generalised Dirac comb that is given by a finite sum of Dirac combs:

$$\mu(\mathbf{x}) = \sum\_{n=1}^{N} a\_n \left( \sum\_{m \in \mathbb{Z}} \delta\_{t\_n m + c\_n} \right). \tag{10.15}$$

If all periods *tn* are pairwise rationally dependent, *i.e. tn* <sup>=</sup> *qnt*1*, qn* <sup>∈</sup> <sup>Q</sup>, then the measure is periodic.

An arbitrary generalised Dirac comb can be written as a finite sum of periodic measures with rationally independent periods. To this end let us divide the set *T* = {*t*<sub>*n*</sub>}<sup>*N*</sup><sub>*n*=1</sub> of all periods into *N*<sub>1</sub> ≤ *N* equivalence classes of pairwise rationally dependent periods:

$$T = \cup\_{j=1}^{N\_1} T\_j; \qquad T\_i \cap T\_j = \emptyset, i \neq j,$$

$$t\_n, t\_m \in T\_j \Rightarrow t\_n/t\_m \in \mathbb{Q}.$$

In this way the measure *μ* is presented as a sum of periodic measures, each being a generalised Dirac comb:

$$
\mu\_j(\mathbf{x}) := \sum\_{t\_n \in T\_j} a\_n \left( \sum\_{m \in \mathbb{Z}} \delta\_{t\_n m + c\_n} \right), \quad j = 1, 2, \dots, N\_1
$$

$$
\Rightarrow \mu(\mathbf{x}) = \sum\_{j=1}^{N\_1} \mu\_j(\mathbf{x}). \tag{10.16}
$$

The supports of the periodic measures intersect in at most

$$\frac{N\_1(N\_1-1)}{2}$$

points. This follows from the fact that two sequences *t*<sub>*i*</sub>*m*<sub>*i*</sub> + *c*<sub>*i*</sub>, *m*<sub>*i*</sub> ∈ ℤ, and *t*<sub>*j*</sub>*m*<sub>*j*</sub> + *c*<sub>*j*</sub>, *m*<sub>*j*</sub> ∈ ℤ, have at most one common point, provided *t*<sub>*i*</sub>/*t*<sub>*j*</sub> ∉ ℚ. Assume the opposite:

$$\begin{cases} t\_i m\_i + c\_i = t\_j m\_j + c\_j\\ t\_i m\_i' + c\_i = t\_j m\_j' + c\_j \end{cases} \Rightarrow t\_i (m\_i' - m\_i) = t\_j (m\_j' - m\_j),$$

i.e. *t*<sub>*i*</sub> and *t*<sub>*j*</sub> are rationally dependent.

Assume that *μ* is not periodic, hence the number of periodic measures in the representation (10.16) is at least two. The measures *μ*<sup>1</sup> and *μ*<sup>2</sup> are two periodic measures with rationally independent periods, hence in their supports there exists an infinite sequence of arbitrarily close points:

$$\begin{cases} x\_j^1 \in \operatorname{supp} \mu\_1, \\ x\_j^2 \in \operatorname{supp} \mu\_2, \end{cases} \qquad |x\_j^1 - x\_j^2| \xrightarrow[j \to \infty]{} 0,$$

implying that the measure *μ*<sup>1</sup> + *μ*<sup>2</sup> is not uniformly discrete. Only a finite number of points from the sequence are not present in the support of *μ* as the supports of the periodic measures intersect at a finite number of points. Hence even the measure *μ* is not uniformly discrete in this case.

If the measure *μ* is periodic, then it is uniformly discrete: to determine the minimal distance between the atoms it is enough to look at a single period.

This lemma implies that every trivial (given by a generalised Dirac comb) uniformly discrete crystalline measure is periodic. This observation will be important in what follows.

Consider the standard Laplacian on any finite compact metric graph. Writing the corresponding trace formula (8.20) in the form (10.1) suggests introducing the following spectral measure:

$$\mu(k) := (1 + \beta\_1)\delta + \sum\_{\substack{k\_n \in \operatorname{Spec}(\Gamma) \\ k\_n \neq 0}} \delta\_{k\_n}. \tag{10.17}$$

Remember that the spectrum Spec(*Γ*) includes both positive and negative values of *k*<sub>*n*</sub> = ±√*λ*<sub>*n*</sub>, *λ*<sub>*n*</sub> ≥ 0. The support of the measure is discrete, since the spectrum of *L*<sup>st</sup>(*Γ*) is discrete. Moreover, the measure is positive as the coefficients in front of the delta functions are positive integers (*β*<sub>1</sub> ≥ 0, multiple eigenvalues are allowed). Taking the Fourier transform of the right-hand side of the trace formula

$$\mu = \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma))\, S\_{\mathrm{v}}(\gamma) \cos k\ell(\gamma)$$

leads to the following expression for the Fourier transform of the measure:

$$\hat{\mu}(l) = 2\mathcal{L}\delta + \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma))\, S\_{\mathrm{v}}(\gamma) \left(\delta\_{\ell(\gamma)} + \delta\_{-\ell(\gamma)}\right). \tag{10.18}$$

The support of this measure coincides with the set of lengths of periodic orbits, which are linear combinations of the edge lengths with non-negative integer coefficients:

$$\ell(\boldsymbol{\gamma}) = \sum\_{n=1}^{N} \alpha\_n(\boldsymbol{\gamma}) \ell\_n, \quad \alpha\_n(\boldsymbol{\gamma}) \in \mathbb{N}, \tag{10.19}$$

where *α*<sub>*n*</sub>(*γ*) = 0, 1, 2, ... counts how many times the orbit *γ* passes through the edge *E*<sub>*n*</sub>. Different periodic orbits may have equal lengths, but the number of orbits having a certain length is always finite. On the other hand, for any length ℓ(*γ*) there is always a non-zero distance to the nearest ℓ(*γ*′). This is due to the fact that the coefficients in the representation above are non-negative integers.<sup>3</sup> In other words, the set of lengths is discrete; hence to prove that *μ* is a crystalline measure it remains to show that *μ* is a tempered distribution, but this follows from the fact that the spectrum of *L*<sup>st</sup>(*Γ*) satisfies Weyl's asymptotics (4.25) and all non-zero eigenvalues

<sup>3</sup> If at least two lengths ℓ<sub>*n*</sub> are rationally independent, then the linear combinations Σ<sup>*N*</sup><sub>*n*=1</sub> *α*<sub>*n*</sub>(*γ*)ℓ<sub>*n*</sub> with integer coefficients *α*<sub>*n*</sub> ∈ ℤ densely cover the real line ℝ and the corresponding set is not discrete.

in *μ*(*k*) are counted in accordance with their multiplicities *m*<sub>*s*</sub>(*λ*<sub>*n*</sub>):

$$\mu(k) = (1 + \beta\_1)\delta + \sum\_{\substack{k\_n \in \operatorname{Spec}(\Gamma),\\k\_n \neq 0}} m\_s(k\_n^2)\delta\_{k\_n}.\tag{10.20}$$

Summing up, the spectral measure *μ(k)* for any metric graph is crystalline. In order to get an interesting result we need to show that this measure is not a trivial crystalline measure.
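The contrast between non-negative and arbitrary integer coefficients drawn in the footnote above can be probed numerically; the two lengths below are an arbitrary illustration:

```python
import numpy as np

# With non-negative integer coefficients only finitely many combinations
# a*l1 + b*l2 fall below a threshold T, and their minimal gap is positive;
# allowing coefficients of both signs (the footnote's Z-combinations)
# produces much smaller gaps, reflecting that the Z-set is dense in R.
l1, l2, T = 1.0, np.sqrt(2.0), 30.0
nn = np.unique([a * l1 + b * l2 for a in range(31) for b in range(22)
                if a * l1 + b * l2 <= T])
zz = np.unique([a * l1 + b * l2 for a in range(-30, 31) for b in range(-30, 31)
                if 0.0 <= a * l1 + b * l2 <= T])
gap_nn = np.diff(nn).min()
gap_zz = np.diff(zz).min()
print(len(nn), gap_nn, gap_zz)   # gap_zz is much smaller than gap_nn
```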

If all edge lengths are pairwise rationally dependent, then the spectrum as defined above is periodic in *k*. We discuss this case in more detail in Sect. 24.3; in particular Theorem 24.6 tells us more about the structure of the spectrum. Periodicity of the spectrum Spec(*Γ*) means that the symmetrised spectral measure *μ* can be written as a finite sum of Dirac combs with positive integer coefficients and equal periods. One gets a trivial crystalline measure in this case.

In the rest of this section we discuss the opposite case where the edge lengths are rationally independent. We shall also exclude the watermelon graphs from our discussion and return to them at the end. Theorem 10.1 implies that the spectrum contains arithmetic progressions corresponding to the loops in *Γ*. Every such progression can be seen as a Dirac comb, and we shall subtract their contributions from the trace formula, focusing on the part of the spectrum determined by the set Spec<sup>∗</sup>(*Γ*) containing no full size arithmetic progression (see (10.3)). To this end let us introduce the reduced spectral measure

$$\mu^\*(k) = (1 + \beta\_1 - \nu)\delta + \sum\_{\substack{k\_n \in \operatorname{Spec}^\*(\Gamma) \\ k\_n \neq 0}} \delta\_{k\_n}.\tag{10.21}$$

Remember that *ν* denotes the number of loops in *Γ*, and the set Spec<sup>∗</sup>(*Γ*) is determined by the intersections of the curve (*e*<sup>*ik*ℓ<sub>1</sub></sup>, *e*<sup>*ik*ℓ<sub>2</sub></sup>, ..., *e*<sup>*ik*ℓ<sub>*N*</sub></sup>) with the zero set of the reduced secular polynomial *P*<sup>∗</sup><sub>*G*</sub>(**z**); it holds that

$$\operatorname{Spec}^\*(\Gamma) = \operatorname{Spec}(\Gamma) \backslash \left(\bigcup\_{j=1}^{\nu} L\_j(\Gamma)\right).$$

as multisets. We remind that *L*<sub>*j*</sub>(*Γ*) denote the arithmetic sequences in the spectrum corresponding to the eigenfunctions supported by the loops. To get an explicit formula for the Fourier transform of *μ*<sup>∗</sup> we need to subtract from formula (10.18) the terms corresponding to the Dirac combs associated with the loops:

$$\begin{split} \hat{\mu}^\*(l) &= 2\mathcal{L}\delta + \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma))\, S\_{\mathrm{v}}(\gamma) \Big(\delta\_{\ell(\gamma)} + \delta\_{-\ell(\gamma)}\Big) - \sum\_{j=1}^{\nu} \ell\_j \sum\_{n \in \mathbb{Z}} \delta\_{\ell\_j n} \\ &= \Big(2\mathcal{L} - \sum\_{j=1}^{\nu} \ell\_j\Big)\delta + \sum\_{\gamma \in \mathcal{P}} \ell(\text{prim}\,(\gamma))\, S\_{\mathrm{v}}(\gamma) \Big(\delta\_{\ell(\gamma)} + \delta\_{-\ell(\gamma)}\Big) - \sum\_{j=1}^{\nu} \ell\_j \sum\_{n=1}^{\infty} \left(\delta\_{\ell\_j n} + \delta\_{-\ell\_j n}\right). \end{split} \tag{10.22}$$

Note that all subtracted terms are present in the sum over all periodic orbits and correspond to the orbits obtained by going along one particular loop several times in one particular direction. Only one direction for each loop is present because the eigenvalues determined by the loops have multiplicity one, not two as for the single cycle graph *Γ*<sub>(1.2)</sub>. The periodic orbits involving any two loops are not subtracted.

Our analysis implies that *μ*∗ is again a crystalline measure. It remains to show that this measure is not trivial. Before considering the general case, let us study the spectrum of the lasso graph.

## **10.3 The Lasso Graph and Crystalline Measures**

In this section we study the measures associated with the lasso graph *Γ*<sub>(2.2)</sub>. The zero set of the reduced secular polynomial

$$P\_{(2.2)}^{\*} = 3z\_1 z\_2^2 - z\_2^2 + z\_1 - 3$$

on the real torus **T**<sup>2</sup> is presented once more in Fig. 10.1. The polynomial *P*<sup>∗</sup><sub>(2.2)</sub> is D-stable, that is, it does not have any zeroes inside D<sup>2</sup>, where D is the open unit disk D = {*z* : |*z*| < 1}. To see this we write the equation *P*<sup>∗</sup><sub>(2.2)</sub>(**z**) = 0 as

$$z\_2^2 = \frac{z\_1 - 3}{1 - 3z\_1}.$$

The Möbius transformation *z*<sub>1</sub> ↦ (*z*<sub>1</sub> − 3)/(1 − 3*z*<sub>1</sub>) maps the unit disk onto the complement of the closed unit disk; hence the equation has no solutions inside D<sup>2</sup>. Moreover we have

$$P\_{(2.2)}^\*\left(\frac{1}{z\_1}, \frac{1}{z\_2}\right) = \frac{3}{z\_1 z\_2^2} - \frac{1}{z\_2^2} + \frac{1}{z\_1} - 3 = -z\_1^{-1} z\_2^{-2} \left(3z\_1 z\_2^2 - z\_2^2 + z\_1 - 3\right) = -z\_1^{-1} z\_2^{-2}\, P\_{(2.2)}^\*(z\_1, z\_2). \tag{10.23}$$

This relation implies in particular that, if (*z*<sub>1</sub>, *z*<sub>2</sub>), *z*<sub>*j*</sub> ≠ 0, is a zero of *P*<sup>∗</sup><sub>(2.2)</sub>, then (1/*z*<sub>1</sub>, 1/*z*<sub>2</sub>) is also a zero. Therefore the secular equation *P*<sup>∗</sup><sub>(2.2)</sub>(*e*<sup>*ik*ℓ<sub>1</sub></sup>, *e*<sup>*ik*ℓ<sub>2</sub></sup>) = 0 cannot have non-real solutions *k* ∉ ℝ. Assume on the contrary that such a solution *k* exists. If Im *k* > 0, then *z*<sub>*j*</sub> = *e*<sup>*ik*ℓ<sub>*j*</sub></sup>, *j* = 1, 2, are inside the unit disk, |*z*<sub>*j*</sub>| < 1, and *P*<sup>∗</sup><sub>(2.2)</sub>(*z*<sub>1</sub>, *z*<sub>2</sub>) = 0; this contradicts that *P*<sup>∗</sup><sub>(2.2)</sub> is stable. If Im *k* < 0, then 1/*z*<sub>*j*</sub> = *e*<sup>−*ik*ℓ<sub>*j*</sub></sup>, *j* = 1, 2, are inside the unit disk. The relation (10.23) then implies that

$$P\_{(2.2)}^{\*}(1/z\_1, 1/z\_2) = -z\_1^{-1} z\_2^{-2} P\_{(2.2)}^{\*}(z\_1, z\_2) = 0,$$

which again contradicts the stability of *P*<sup>∗</sup><sub>(2.2)</sub>.
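The D-stability argument can be sanity-checked numerically (a sketch, not from the book; the sampled points are arbitrary):

```python
import numpy as np

# Sanity check of D-stability: the Moebius transformation
# w(z) = (z - 3)/(1 - 3z) maps the open unit disk into the region |w| > 1,
# so z_2^2 = w(z_1) cannot hold with both |z_1| < 1 and |z_2| < 1.
rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 4000) + 1j * rng.uniform(-1, 1, 4000)
z = z[np.abs(z) < 0.999]               # random points inside the unit disk
w = (z - 3) / (1 - 3 * z)
print(np.abs(w).min())                 # strictly larger than 1
```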

The reduced spectrum of the metric graph is obtained by intersecting the line (*k*ℓ<sub>1</sub>, *k*ℓ<sub>2</sub>) with the zero set of the function

$$L\_{(2.2)}^{\*}\left(\varphi\_{1},\varphi\_{2}\right) = 3\sin\left(\frac{\varphi\_{1}}{2}+\varphi\_{2}\right) + \sin\left(\frac{\varphi\_{1}}{2}-\varphi\_{2}\right).$$

The function has a non-zero gradient

$$
\nabla L\_{(2.2)}^{\*} = \left(\frac{3}{2}\cos(\frac{\varphi\_1}{2} + \varphi\_2) + \frac{1}{2}\cos(\frac{\varphi\_1}{2} - \varphi\_2), \; 3\cos(\frac{\varphi\_1}{2} + \varphi\_2) - \cos(\frac{\varphi\_1}{2} - \varphi\_2)\right),
$$

implying that the zero set of *L*<sup>∗</sup><sub>(2.2)</sub> is a smooth curve on the torus **T**<sup>2</sup>. Moreover, the normal to the curve lies in the first quadrant since the components of the gradient have the same sign:

$$\begin{split} \frac{\partial L^{\*}\_{(2.2)}}{\partial \varphi\_{1}} \frac{\partial L^{\*}\_{(2.2)}}{\partial \varphi\_{2}} &= \frac{1}{2} \Big( 9 \cos^{2}(\frac{\varphi\_{1}}{2} + \varphi\_{2}) - \cos^{2}(\frac{\varphi\_{1}}{2} - \varphi\_{2}) \Big) \\ &= \frac{1}{2} \Big( 9 - 9 \sin^{2}(\frac{\varphi\_{1}}{2} + \varphi\_{2}) - 1 + \sin^{2}(\frac{\varphi\_{1}}{2} - \varphi\_{2}) \Big) \\ &= 4, \end{split} \tag{10.24}$$

where in cancelling the two sine terms on the middle line we used that *L*<sup>∗</sup><sub>(2.2)</sub>(*ϕ*<sub>1</sub>, *ϕ*<sub>2</sub>) = 0. The direction vector of the line (*k*ℓ<sub>1</sub>, *k*ℓ<sub>2</sub>) belongs to the first quadrant as well; hence the intersection between the line and the zero set is never tangential. One obtains an infinite sequence of simple eigenvalues {*k*<sub>*n*</sub>} solving the trigonometric equation

$$L\_{(2.2)}^{\*}(k\ell\_1, k\ell\_2) = 3\sin k(\frac{\ell\_1}{2} + \ell\_2) + \sin k(\frac{\ell\_1}{2} - \ell\_2) = 0.\tag{10.25}$$

Plotting the graph of the function one may get an impression of how the eigenvalues are placed (see Fig. 10.2).

To prove that the number of zeroes is infinite, it is enough to take into account that *L*<sup>∗</sup><sub>(2.2)</sub>(*k*ℓ<sub>1</sub>, *k*ℓ<sub>2</sub>) for any ℓ<sub>*j*</sub> > 0 is a continuous function satisfying the two-sided inequality

$$3\sin k\left(\frac{\ell\_1}{2} + \ell\_2\right) - 1 \le L^\*\_{(2.2)}(k\ell\_1, k\ell\_2) \le 3\sin k\left(\frac{\ell\_1}{2} + \ell\_2\right) + 1.$$

**Fig. 10.2** Graph of the function *L*∗ *(*2*.*2*)(k* 1*, k* 2*)* for <sup>1</sup> <sup>=</sup> <sup>1</sup>*,* <sup>2</sup> <sup>=</sup> <sup>√</sup><sup>7</sup>

**Fig. 10.3** The curved strip associated with *L*∗ *(*2*.*2*)* for <sup>1</sup> <sup>=</sup> <sup>1</sup>*,* <sup>2</sup> <sup>=</sup> <sup>√</sup><sup>7</sup>

**Fig. 10.4** Graphical representation of the reduced spectrum of $\Gamma_{(2.2)}$ for rationally dependent (left figure) and rationally independent (right figure) edge lengths

The curved strip

$$3\sin k\left(\frac{\ell\_1}{2} + \ell\_2\right) - 1 \le y \le 3\sin k\left(\frac{\ell\_1}{2} + \ell\_2\right) + 1$$

plotted in Fig. 10.3 crosses the line $y = 0$ infinitely many times because the function $3\sin k\left(\frac{\ell_1}{2} + \ell_2\right)$ does so. The character of the spectrum depends on whether $\ell_1/\ell_2$ is a rational number or not.


In the figures above we used the same values of $\ell_1/\ell_2$ as in Fig. 6.3, namely $\ell_1/\ell_2 = \frac{1}{3}$ and $\ell_1/\ell_2 = \frac{\sqrt{5}-1}{2}$.

For arbitrary positive edge lengths there is always a non-zero distance between subsequent intersections, hence the distance between subsequent eigenvalues is separated from zero, i.e. the reduced spectrum of $\Gamma_{(2.2)}$ is always uniformly discrete, independently of the actual edge lengths.

It is almost clear that in the case of rationally independent edge lengths the spectrum is not periodic, since $L^*_{(2.2)}(k\ell_1, k\ell_2)$ is not a periodic function of $k$. To prove this rigorously we may use Theorem 10.1, which states that $\mathrm{Spec}^*(\Gamma)$ contains no arithmetic sequences provided the edge lengths are rationally independent; hence the spectrum cannot be periodic.

Let us summarise our findings.

**Theorem 10.7** *Let $\mu^* = \delta + \sum_{k_n \neq 0} \delta_{k_n}$, where $k_n$ are the solutions to the trigonometric equation* (10.25)

$$L\_{(2.2)}^{\*}(k\ell\_1, k\ell\_2) = 3\sin k \left(\frac{\ell\_1}{2} + \ell\_2\right) + \sin k \left(\frac{\ell\_1}{2} - \ell\_2\right) = 0$$

*be the reduced spectral measure for the lasso graph $\Gamma_{(2.2)}$ with edge lengths $\ell_1$ and $\ell_2$. Then the following holds.*

*(1) If $\ell_1/\ell_2 \in \mathbb{Q}$, then:*

- *(a) The measure $\mu^*$ is a trivial positive idempotent*<sup>4</sup> *crystalline measure.*
- *(b) The support of the measure $\mu^*$ is given by the union of finitely many arithmetic progressions.*
- *(c) The Fourier transform $\hat{\mu}^*$ of the measure is again a periodic generalised Dirac comb and is supported by a uniformly discrete set.*
- *(d) $|\hat{\mu}^*|$ is translation bounded.*

*(2) If $\ell_1/\ell_2 \notin \mathbb{Q}$, then:*

- *(a) The measure $\mu^*$ is a non-trivial positive idempotent crystalline measure.*
- *(b) The support of the measure $\mu^*$ meets any arithmetic progression in at most a finite number of points.*
- *(c) The support of the Fourier transform $\hat{\mu}^*$ of the measure is not a uniformly discrete set.*
- *(d) $|\hat{\mu}^*|$ is not translation bounded.*

*Proof* The first part of the theorem is elementary and is presented here just for the record, to be compared with the second part. Only statement 1*(a)* needs clarification. All zeroes of the function $L^*_{(2.2)}(k\ell_1, k\ell_2)$ are simple, since (10.24) implies that both partial derivatives are non-zero and have the same sign. We did not use in the argument that $\ell_1/\ell_2 \in \mathbb{Q}$, hence the same proof yields 2*(a)* as well.

Let us focus on the case $\ell_1/\ell_2 \notin \mathbb{Q}$. The measure $\mu^*$ is not a generalised Dirac comb for the following reasons:

(i) the spectrum of $L^{\mathrm{st}}(\Gamma_{(2.2)})$ is a uniformly discrete set;

(ii) the support of $\mu^*$ meets any arithmetic progression in at most a finite number of points, since by Theorem 10.1 the reduced spectrum contains no arithmetic sequences when the edge lengths are rationally independent.

<sup>4</sup> Idempotent means that the measure has unit coefficients in front of all delta functions.


The latter statement coincides with 2*(b)*.

To prove 2*(c)* let us recall that the Fourier transform of $\mu^*$ is given by (10.22). The support of the delta functions includes the lengths $\ell_0$ of all periodic orbits with non-zero sum of the corresponding scattering coefficients

$$\sum_{\ell(\gamma) = \ell_0} \ell(\mathrm{prim}\,(\gamma))\, S_{\mathrm{v}}(\gamma) \neq 0.$$

Since the two edge lengths are rationally independent, at least the orbits supported by each of the two edges alone are present. These periodic orbits have lengths $n_1\ell_1$, $n_1 \in \mathbb{N}$, and $2n_2\ell_2$, $n_2 \in \mathbb{N}$. The union of these sets is not uniformly discrete: there are always arbitrarily close lengths for sufficiently large $n_j$. Hence the spectrum is not periodic, since otherwise the reduced spectral measure would be given by a finite sum of Dirac combs with a common period. The Fourier transform of such a measure is again given by Dirac combs with equal periods, and therefore its support is a uniformly discrete set.
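The failure of uniform discreteness is easy to observe numerically. A small sketch (assuming, for illustration only, the sample lengths $\ell_1 = 1$, $\ell_2 = \sqrt{2}$) shows how the minimal gap in the union $\{n_1\ell_1\} \cup \{2n_2\ell_2\}$ shrinks as more orbit lengths are included:

```python
import math

l1, l2 = 1.0, math.sqrt(2)   # rationally independent sample edge lengths

def min_gap(N):
    """Smallest gap in the union {n*l1} U {2*n*l2}, 1 <= n <= N."""
    lengths = sorted([n * l1 for n in range(1, N + 1)] +
                     [2 * n * l2 for n in range(1, N + 1)])
    return min(b - a for a, b in zip(lengths, lengths[1:]))

# the minimal gap decreases as the orbit lengths grow
gaps = [min_gap(N) for N in (10, 100, 1000)]
print(gaps)
```

By the equidistribution of $\{2n\ell_2/\ell_1\}$ modulo one, the minimal gap tends to zero, which is exactly the mechanism ruling out a uniformly discrete support of $\hat{\mu}^*$.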

One can also prove 2*(c)* using the result of Lev-Olevskii [372], which states that in one dimension every crystalline measure such that both its support and the support of its Fourier transform are uniformly discrete is given by a periodic generalised Dirac comb.

We do not have an explicit proof for 2*(d)*. It is clear that the number of periodic orbits having length approximately equal to $\ell(\gamma)$ grows as $\ell(\gamma) \to \infty$ and the corresponding scattering coefficients $S_{\mathrm{v}}(\gamma)$ decrease, but it is difficult to compare these quantities even for such simple graphs as $\Gamma_{(2.2)}$. On the other hand, translation boundedness of $|\hat{\mu}|$ would contradict Meyer's theorem, stating that every crystalline measure with coefficients $a_\lambda$ from a finite set ($a_\lambda = 1$ in our case) and $|\hat{\mu}|$ translation bounded is a generalised Dirac comb:

*Theorem (Meyer [385]) If $a_\lambda$ takes values in a finite set and $|\hat{\mu}|$ is translation bounded, that is,* $\sup_{x \in \mathbb{R}} |\hat{\mu}|(x + [0,1]) < \infty$*, then $\mu$ is a generalised Dirac comb.*

Historically, the measure $\mu^*$ associated with $\Gamma_{(2.2)}$ was the first explicitly constructed positive uniformly discrete crystalline measure. All non-trivial examples of crystalline measures known before were less explicit:


After those examples were presented it was not clear whether positive uniformly discrete crystalline measures exist, especially in view of the Lev-Olevskii theorems stating that every uniformly discrete crystalline measure whose Fourier transform also has uniformly discrete support is a generalised Dirac comb in $\mathbb{R}^1$ [372]; assuming positivity, the same result holds in $\mathbb{R}^d$. The above examples of crystalline measures are not explicit, and it is hard to control positivity of the measures or arithmetic properties of the support.

On the other hand, those papers contained clearly formulated questions to which the measure $\mu^*$ provides an affirmative answer:


After the measure $\mu^*$ was discovered, several alternative explicit constructions of crystalline measures were suggested, in particular using the following mathematical notions: multivariate stable polynomials [350], trigonometric polynomials with real zeroes [414], inner functions in $\mathbb{C}^N$ [389], and linear recurrence relations on lattices and curved model sets [390]. The paper [414] contains in addition a characterisation of all idempotent crystalline measures on $\mathbb{R}^1$ via trigonometric polynomials with real zeroes.

## **10.4 Graph's Spectrum as a Delone Set**

Laplacians on metric graphs always lead to positive crystalline measures, but measures having uniformly discrete support are of particular interest. In this section we shall focus on how examples of such measures can be obtained. In view of the Weyl asymptotics (4.15) the support of every such measure is also relatively dense. Discrete sets that are both uniformly discrete and relatively dense are called **Delone sets**.<sup>5</sup>

The structure of the spectrum depends on whether the edge lengths are rationally dependent or not. If the edge lengths are **pairwise** rationally dependent, then the support of the reduced spectrum is periodic and therefore is always a Delone set. The corresponding summation formula is a generalised Dirac comb and is not interesting for us.

<sup>5</sup> These sets are named after the Russian mathematician Борис Николаевич Делоне (Boris Nikolaevich Delone), who used the French transliteration Delaunay for his family name.

If the edge lengths are rationally independent, then, as we have seen, the reduced spectrum is uniformly discrete only if the zero set of the reduced secular polynomial is not singular: the curve $(e^{ik\ell_1}, \dots, e^{ik\ell_N})$ densely covers the unit torus $\mathbb{T}^N$, leading to close eigenvalues when the curve almost hits the singularities of the reduced zero set (without actually hitting them). Hence, looking for measures supported by Delone sets, one should either find graphs for which the zero sets of the reduced secular polynomials are not singular, or impose a linear relation on the edge lengths ensuring that the curve $(e^{ik\ell_1}, \dots, e^{ik\ell_N})$ does not come close to the singularities of the reduced zero set.

It is straightforward to describe the reduced spectrum in the case where the graph has at most two edges: the reduced spectrum is given by single arithmetic progressions, with the only exception of $\Gamma_{(2.2)}$; for example


$$\mathrm{Spec}^*(\Gamma_{(2.4)}) = \Big\{\frac{2\pi}{\ell_1 + \ell_2}n,\; n \in \mathbb{Z}\Big\}.$$

The reduced spectrum of $L^{\mathrm{st}}(\Gamma_{(2.2)})$ has already been discussed, therefore let us turn to graphs on three and more edges. The singular subset of the reduced zero set $\mathbf{Z}^*_G$ may be non-empty, therefore the non-trivial spectrum cannot always be a Delone set. Given a genuine graph on at least three edges, other than the dumbbell graph $G_{(3.7)}$, the spectrum is not a Delone set if the edge lengths are rationally independent. Moreover, it cannot be made Delone by subtracting a finite number of arithmetic progressions. This follows from Lemma 6.3, stating that all genuine graphs on three edges have singular points in the reduced zero set, provided $G_{(3.7)}$ is excluded.

Having this fact in mind we are going to adopt the opposite strategy and discuss how to choose the edge lengths to get a Delone set, provided the discrete graph *G* is fixed.

**Theorem 10.8** *Let $G$ be a discrete graph on at least three edges, not the watermelon graph $G_{(3.9)}$. Then the edge lengths can be chosen in such a way that the spectrum is a non-periodic Delone set after possibly subtracting a finite number of arithmetic sequences corresponding to the loops.*

*Proof* To prove the theorem we shall contract $N - 3$ edges in the original graph to get a graph on three edges. Not every graph on three edges is suitable: non-genuine graphs should be excluded, since these graphs are equivalent to graphs on one or two edges and their spectra are often given by arithmetic sequences. The non-genuine graphs on three edges are:

$$G\_{(3.1)}, \, G\_{(3.3)}, \, G\_{(3.5)}, \, G\_{(3.6)}, \,\text{and}\,\, G\_{(3.10)}.\tag{10.26}$$

**Fig. 10.5** Possible extensions of the watermelon graph **W**<sup>3</sup> = *G(*3*.*9*)*

Let us exclude all these graphs. Lemma 7.7 implies that any genuine graph on three or more edges can be contracted to one of the following genuine graphs on three edges:

$$G_{(3.2)}, \, G_{(3.4)}, \, G_{(3.7)}, \, G_{(3.8)}, \, G_{(3.9)}, \;\text{or}\; G_{(3.11)}.$$

One may strengthen Lemma 7.7 by proving that, excluding the watermelon *G(*3*.*9*)*, any genuine graph on at least three edges can be contracted to any of the following genuine graphs on three edges:

$$G\_{(3.2)}, G\_{(3.4)}, G\_{(3.7)}, G\_{(3.8)}, \text{ or } G\_{(3.11)}.\tag{10.27}$$

Lemma 7.7 is proven by constructing a sequence of genuine graphs with a decreasing number of edges. Every graph in the sequence is a contraction of the previous graph. This sequence ends up with the watermelon graph *G(*3*.*9*)* only if the previous graph in the sequence is either the watermelon with a loop **W**3**L**, or the watermelon on a stick **W**3**I** presented in Figs. 7.7 and 7.8. Instead of *G(*3*.*9*)* these graphs can be contracted to *G(*3*.*11*)* and *G(*3*.*8*)* respectively (see Fig. 10.5).

In what follows we shall assume that all but three edges are contracted, so that the graph $G$ is equivalent to one of the graphs from the list (10.27). Note that this contraction is not unique and the same original graph can be contracted to different graphs. To get a Delone spectrum we shall subtract arithmetic sequences corresponding to the loops; remember that contraction may lead to new loops not present in the original graph $G$.

To accomplish the proof of the theorem it is enough to show that the edge lengths in the graphs (10.27) can be chosen so that the reduced spectrum is a Delone set. Examining the reduced zero surfaces $\mathbf{Z}^*$ for $G_{(3.2)}$, $G_{(3.4)}$, $G_{(3.7)}$, $G_{(3.8)}$, and $G_{(3.11)}$ (plotted in Figs. 6.10, 6.12, 6.15, 6.16, and 6.19) we see that, excluding $\mathbf{Z}^*_{(3.7)}$, all other surfaces are singular.

Let us discuss the regular case of the dumbbell graph $G_{(3.7)}$ first. The reduced spectrum is determined by the zeroes of the reduced Laurent polynomial $L^*_{(3.7)}$ (see (6.15)). The corresponding reduced zero set $\mathbf{Z}^*_{(3.7)}$, expanded periodically to $\mathbb{R}^3$, is formed by smooth two-dimensional sheets separated from each other by a certain non-zero distance.

The normal to the surface always points into the first octant, in other words all coordinates of the gradient have the same sign. This is a general fact following from the monotonicity of the eigenvalues with respect to stretching of the edges: the eigenvalues are non-increasing functions of the edge lengths. This follows from the min-max principle and the scaling properties of the Dirichlet integral (giving the quadratic form of the standard Laplacian).<sup>6</sup> We prove this fact by differentiating $L^*_{(3.7)}(\varphi_1, \varphi_2, \varphi_3)$ directly. The calculations resemble formula (10.24). We have for example

$$\begin{split} \frac{\partial L^*_{(3.7)}}{\partial\varphi_2} \frac{\partial L^*_{(3.7)}}{\partial\varphi_1} &= \Big(9\cos(\varphi_1+\varphi_2+\varphi_3) - 3\cos(\varphi_1+\varphi_2-\varphi_3) \\ &\qquad + 3\cos(\varphi_1-\varphi_2+\varphi_3) - \cos(\varphi_1-\varphi_2-\varphi_3)\Big) \\ &\quad\times \Big(9\cos(\varphi_1+\varphi_2+\varphi_3) - 3\cos(\varphi_1+\varphi_2-\varphi_3) \\ &\qquad - 3\cos(\varphi_1-\varphi_2+\varphi_3) + \cos(\varphi_1-\varphi_2-\varphi_3)\Big) \\ &= \Big(9\cos(\varphi_1+\varphi_2+\varphi_3) - 3\cos(\varphi_1+\varphi_2-\varphi_3)\Big)^2 \\ &\quad - \Big(3\cos(\varphi_1-\varphi_2+\varphi_3) - \cos(\varphi_1-\varphi_2-\varphi_3)\Big)^2 \\ &= \Big(9\cos(\varphi_1+\varphi_2+\varphi_3) - 3\cos(\varphi_1+\varphi_2-\varphi_3)\Big)^2 \\ &\quad + \Big(9\sin(\varphi_1+\varphi_2+\varphi_3) - 3\sin(\varphi_1+\varphi_2-\varphi_3)\Big)^2 \\ &\quad - \Big(3\cos(\varphi_1-\varphi_2+\varphi_3) - \cos(\varphi_1-\varphi_2-\varphi_3)\Big)^2 \\ &\quad - \Big(3\sin(\varphi_1-\varphi_2+\varphi_3) - \sin(\varphi_1-\varphi_2-\varphi_3)\Big)^2 \\ &= 80 - 48\cos 2\varphi_3 \;\ge\; 32 \;>\; 0, \end{split}$$

where we of course used that $L^*_{(3.7)}(\varphi_1, \varphi_2, \varphi_3) = 0$ in the third equality. Almost identical calculations lead to the inequality

$$\frac{\partial L^\*\_{(3.7)}}{\partial \varphi\_3} \frac{\partial L^\*\_{(3.7)}}{\partial \varphi\_1} > 0.$$
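The final step of the computation above rests on a trigonometric identity that holds for all angles, not only on the zero set: the cosine-square difference, corrected by the corresponding sine-square difference (which cancels on the zero set), equals $80 - 48\cos 2\varphi_3$. A quick numerical sketch confirms this:

```python
import math, random

def lhs(p1, p2, p3):
    # cosine-square difference plus the sine-square correction terms
    a, b = p1 + p2 + p3, p1 + p2 - p3
    c, d = p1 - p2 + p3, p1 - p2 - p3
    return ((9*math.cos(a) - 3*math.cos(b))**2 + (9*math.sin(a) - 3*math.sin(b))**2
            - (3*math.cos(c) - math.cos(d))**2 - (3*math.sin(c) - math.sin(d))**2)

def rhs(p3):
    return 80 - 48 * math.cos(2 * p3)

random.seed(0)
samples = [tuple(random.uniform(0, 2*math.pi) for _ in range(3)) for _ in range(1000)]
err = max(abs(lhs(*p) - rhs(p[2])) for p in samples)
vals = [rhs(p[2]) for p in samples]
print(err)        # tiny floating-point error: the identity holds for all angles
print(min(vals))  # never below 32
```

On the zero set the added sine-square terms cancel, so the value above is exactly the product of the partial derivatives, which therefore never drops below $32$.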

<sup>6</sup> One may even prove a Hadamard-type formula quantifying the derivative in the case of simple eigenvalues (see [226], [160, Appendix A], [52]):

$$\frac{\partial \lambda}{\partial \ell_j} = - \left( \psi_\lambda'(x)^2 + \lambda \psi_\lambda(x)^2 \right)\Big|_{x \in \mathfrak{e}_j},\tag{10.28}$$

where $\psi_\lambda$ is the eigenfunction corresponding to a non-degenerate eigenvalue $\lambda$ and $\psi_\lambda'(x)^2 + \lambda \psi_\lambda(x)^2$ is the Prüfer amplitude for the eigenfunction on the edge.

The spectrum is obtained by crossing the reduced zero set with the line $k(\ell_1, \ell_2, \ell_3)$, whose direction vector lies in the first octant (as does the normal to the surface). Hence the distance between any two subsequent eigenvalues is uniformly separated from zero, and the reduced spectrum is a Delone set. In order to avoid the reduced spectrum being given by a generalised Dirac comb, it is enough to assume that the $\ell_j$ are rationally independent; no further restriction on the edge lengths is necessary.

Let us turn to the graphs $G_{(3.2)}$, $G_{(3.4)}$, $G_{(3.8)}$, $G_{(3.11)}$. If all edge lengths are rationally independent, then it is unavoidable that the reduced spectrum has arbitrarily close points: these points appear when the line $k\vec{\ell}$ comes closer and closer to the singular points. On the other hand, Lemma 6.3 lists all singular points $\varphi^j$ for these graphs. Let us choose a hypertorus $\mathcal{T}$ avoiding all these singular points:

$$\operatorname{dist}\left\{\mathcal{T}, \varphi^j\right\} \ge d > 0.$$

The hypertorus for the graphs listed in (10.27) can be written as an integer linear relation between the coordinates:<sup>7</sup>

$$\mathcal{T} = \{n_1\varphi_1 + n_2\varphi_2 + n_3\varphi_3 = 0\}, \quad n_j \in \mathbb{Z}.\tag{10.29}$$

The torus can be fixed so that not all integers $n_j$ have the same sign, i.e. the normal does not lie in the first octant. We may always assume that

$$n\_1, n\_2 > 0, \quad n\_3 < 0. \tag{10.30}$$

Consider the hyperplane $\Pi \subset \mathbb{R}^3$ given by the same linear equation (10.29); it avoids all singular points of the reduced zero set on $\mathbb{R}^3$. The intersection between the zero set of the reduced polynomial and the hyperplane is given by a set of smooth non-singular curves $\mathbf{Z}^*_G \cap \Pi = \{\gamma_j\}$. There is a minimal distance between the curves, since the picture is periodic and the curves may intersect only at the singular points of $\mathbf{Z}^*_G$, but these points are avoided.

Let us choose any edge length vector $\vec{\ell}$ satisfying the same relation (10.29) with positive coordinates that are not pairwise rationally dependent; this is always possible for $\vec{n}$ satisfying (10.30): choose any two rationally independent $\ell_1$ and $\ell_2$ and get positive $\ell_3$ from the relation (10.29). The set $k\vec{\ell}$ densely covers the hypertorus $\mathcal{T}$, but it is a line on the hyperplane $\Pi$. The intersection points between the line $k\vec{\ell}$ and the zero set $\mathbf{Z}^*_G$ on the hyperplane belong to the algebraic curves $\gamma_j \subset \Pi \subset \mathbb{R}^3$. The normal to $\mathbf{Z}^*_G$ lies in the first octant, as does the line's direction vector $\vec{\ell}$, hence two consecutive intersection points belong to different curves $\gamma_j \subset \Pi \subset \mathbb{R}^3$ and are always separated by a finite distance; hence the reduced spectrum is a uniformly discrete set.

We illustrate the proof of the above theorem with a few explicit examples.

<sup>7</sup> The point $\varphi = \mathbf{0}$ belongs to any torus of the form (10.29) and is a singular point for the watermelon graph $G_{(3.9)}$, therefore we had to exclude this graph.

**Fig. 10.6** Zero set for graph *G(*3*.*2*)* together with the torus *ϕ*<sup>3</sup> = 2*ϕ*<sup>1</sup>

**Example 10.9** This example comes from [63], where the three-star graph $G_{(3.2)}$ is considered. The authors impose the requirement $\ell_3 = 2\ell_1$, leading to the equation for the spectrum:

$$L_{(3.2)}(k\ell_1, k\ell_2, 2k\ell_1) = 0$$

$$\Leftrightarrow 3\sin(3\ell\_1 + \ell\_2)k + \sin(-\ell\_1 + \ell\_2)k + \sin(3\ell\_1 - \ell\_2)k + \sin(\ell\_1 + \ell\_2)k = 0.$$

Let us introduce the hypertorus $\mathcal{T} = \{\varphi : \varphi_3 = 2\varphi_1\}$. Figure 10.6 shows the zero set and the hypertorus, which avoids the singular points $(\pm\pi/2, \pm\pi/2, \pm\pi/2)$. The intersection between the zero set and the hypertorus is given by smooth curves. The projection of these curves to the $(\varphi_1, \varphi_2)$-plane is presented in Fig. 10.7. The curves are clearly separated by a non-zero distance and the spectrum is a Delone set.
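As a numerical illustration of this example (a sketch only, assuming the sample lengths $\ell_1 = 1$, $\ell_2 = \sqrt{2}$, so that $\ell_3 = 2$), one can locate the zeroes of the spectral equation above and verify that consecutive zeroes stay uniformly separated on the computed range:

```python
import math

l1, l2 = 1.0, math.sqrt(2)   # sample lengths with l1/l2 irrational (l3 = 2*l1)

def g(k):
    # spectral equation of Example 10.9 for the three-star G(3.2) with l3 = 2*l1
    return (3*math.sin((3*l1 + l2)*k) + math.sin((-l1 + l2)*k)
            + math.sin((3*l1 - l2)*k) + math.sin((l1 + l2)*k))

def find_zeros(fun, upper, step=1e-3):
    """Zeros of fun in (0, upper] via sign scan plus bisection."""
    ks, a, fa = [], step, fun(step)
    k = 2 * step
    while k <= upper:
        fk = fun(k)
        if fa * fk < 0:
            lo, hi = a, k
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if fun(lo) * fun(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            ks.append(0.5 * (lo + hi))
        a, fa = k, fk
        k += step
    return ks

ks = find_zeros(g, 30.0)
gaps = [b - a for a, b in zip(ks, ks[1:])]
print(len(ks), min(gaps))
```

The minimal computed gap stays away from zero, consistent with the Delone property guaranteed by the hypertorus construction.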

**Example 10.10** Consider the graph *G(*3*.*8*)* with the reduced Laurent polynomial given by (6.15) (Fig. 10.8). The singular points are *(*±*π/*2*, π, π ).* We choose the hypertorus T

$$
\varphi\_1 + 2\varphi\_2 - \varphi\_3 = 0
$$

avoiding the singular points. The intersection curves projected to the $(\varphi_1, \varphi_2)$-plane are plotted in Fig. 10.9. Different curves are clearly separated by a non-zero

**Fig. 10.8** Zero set for graph $G_{(3.8)}$ together with the torus $\mathcal{T}$

distance. The reduced spectrum is given by the equation

$$L\_{(3.8)}^{\*}\left(k\ell\_1, k\ell\_2, k(\ell\_1 + 2\ell\_2)\right) = 0$$

$$\Leftrightarrow 3\sin\left(\frac{3}{2}\ell\_1 + \frac{3}{2}\ell\_2\right)k + \sin\left(\frac{1}{2}\ell\_1 - \frac{1}{2}\ell\_2\right)k$$

$$+ \sin\left(\frac{3}{2}\ell\_1 + \frac{1}{2}\ell\_2\right)k - 3\sin\left(\frac{1}{2}\ell\_1 - \frac{3}{2}\ell\_2\right)k = 0,$$

where we used that the edge lengths satisfy the relation $\ell_3 = \ell_1 + 2\ell_2$. The reduced eigenvalues form a Delone set, all eigenvalues having multiplicity one. Choosing $\ell_1$

**Fig. 10.7** Intersection between **Z***G(*3*.*2*)* and the torus *ϕ*<sup>3</sup> = 2*ϕ*<sup>1</sup> projected to the *(ϕ*1*, ϕ*2*)*-plane

**Fig. 10.9** Projection of the intersection curves to the *(ϕ*1*, ϕ*2*)*-plane

and $\ell_2$ rationally independent, the line $(k\ell_1, k\ell_2, k\ell_3)$ densely covers the torus $\mathcal{T}$. The reduced spectrum contains no arithmetic sequence.

In both examples the obtained Delone sets are simple: all eigenvalues, including the point $k = 0$, have multiplicity one. Hence the corresponding crystalline measures are not only uniformly discrete but also idempotent. In general, to get idempotent measures one has to ensure that the reduced spectrum is simple and that the delta measure at the origin has unit weight. The above examples illustrate two mechanisms by which unit weight at the origin can be achieved:


**Problem 41** Construct your own examples of idempotent uniformly discrete positive crystalline measures using one of the graphs on three edges.

**Problem 42** Consider any graph on four edges. Construct an example of an idempotent uniformly discrete positive crystalline measure by properly choosing the edge lengths (of course not allowing zero edge lengths, as such graphs should be seen as graphs on three edges).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 11 Quadratic Forms and Spectral Estimates**

The goal of this chapter is to provide a systematic study of quadratic forms associated with Schrödinger operators on metric graphs. These forms will be used to prove spectral estimates in terms of a certain reference Laplace operator. The spectrum of a Laplacian is easier to calculate, but this is not the only reason to obtain spectral estimates. It turns out that the reference Laplacian does not necessarily correspond to the same metric graph $\Gamma$ as the original operator. The corresponding reference metric graph may have a different topological structure from $\Gamma$. Hence the spectral estimates obtained here will imply that in certain cases the topological structure of the graph $\Gamma$ cannot always be deduced from the spectrum of the corresponding quantum graph, despite the vertex conditions being properly connecting (as always). Moreover, we are going to use spectral estimates to prove several generalisations of the celebrated Ambartsumian theorem.

The main mathematical tool we are going to use is the one-to-one correspondence between semi-bounded self-adjoint operators and closed semi-bounded quadratic forms [90, 442]. Working with the quadratic forms directly allows one to obtain effective spectral estimates much faster and use the full power of perturbation theory.

We are going to consider the case of zero magnetic potential in order to simplify formulas, but this is not a restriction: arbitrary vertex conditions will be treated, and a nontrivial magnetic potential is equivalent to introducing certain phases in the vertex conditions (see Chap. 16).

## **11.1 Quadratic Forms (Integrable Potentials)**

Quadratic forms associated with quantum graphs have already been considered in Sect. 3.4, where parametrisation of vertex conditions via Hermitian matrices was considered. Our focus here will be on determining the quadratic form domain in the case of absolutely integrable potentials. The assumption that the potential is just summable and not necessarily uniformly bounded forces us to be more careful.

## *11.1.1 Explicit Expression*

Let us denote by $Q_{L^{\mathbf{S}}_q}(u, v)$ the quadratic (more precisely, sesquilinear) form associated with the operator $L^{\mathbf{S}}_q$ (see Definition 4.1 with $a(x) \equiv 0$). The quadratic form is first defined on the domain of the operator, $u, v \in \mathrm{Dom}\,(L^{\mathbf{S}}_q)$:

$$\mathcal{Q}_{L^{\mathbf{S}}_q}(u, v) = \left\langle u, L^{\mathbf{S}}_q v \right\rangle_{L_2(\Gamma)}.$$

Hence the functions satisfy on every edge

$$u, v \in W\_2^1(E\_n), \ -u'' + qu, -v'' + qv \in L\_2(E\_n), \ \ n = 1, 2, \dots, N.$$

As we have shown in Sect. 4.1, these conditions imply that the functions are not only continuous, but have continuous first derivatives on every edge and one may impose vertex conditions. Moreover this implies that one may integrate by parts in the expression for the quadratic form:

$$\begin{aligned} \mathcal{Q}_{L_q^{\mathbf{S}}}(u, v) &= \left\langle u, L_q^{\mathbf{S}} v \right\rangle \\ &= \sum_{n=1}^N \int_{E_n} \overline{u(x)} \left( -v''(x) + q(x)v(x) \right) dx \\ &= \sum_{n=1}^N \int_{E_n} \left( \overline{u'(x)} v'(x) + q(x)\overline{u(x)}\, v(x) \right) dx + \sum_{m=1}^M \langle \vec{u}(V^m), \partial\vec{v}(V^m) \rangle_{\mathbb{C}^{d_m}}, \end{aligned} \tag{11.1}$$

where the vectors $\vec{u}(V^m), \partial\vec{u}(V^m)$ of boundary values at the vertex $V^m$ were introduced in formula (3.2).

The expression for the quadratic form may be simplified further if one takes into account that the vertex values of the functions and their derivatives are not independent, but satisfy the vertex conditions (4.8):<sup>1</sup>

$$i(S^m - I)\vec{u}(V^m) = (S^m + I)\partial\vec{u}(V^m),\tag{11.2}$$

<sup>1</sup> Note that this formula holds only for functions from the domain of the operator.

where $S^m$ is an irreducible unitary $d_m \times d_m$ matrix. Consider the eigensubspace of $S^m$ associated with the eigenvalue $-1$ and its orthogonal complement in $\mathbb{C}^{d_m}$. Let us denote the corresponding orthogonal projectors by $P^m_{-1}$ and $P^{m\perp}_{-1}$, respectively.

Applying $P^m_{-1}$ to both sides of Eq. (11.2) we get

$$P^{m}_{-1}\, i(S^{m}-I)\vec{u}(V^{m}) = P^{m}_{-1}(S^{m}+I)\,\partial\vec{u}(V^{m})$$

and therefore

$$P^{m}_{-1}\vec{u}(V^{m}) = 0,\tag{11.3}$$

where we used that $P^m_{-1}$ is an eigenprojector for $S^m$ and therefore commutes with it. It follows that the function $u$ satisfies certain **generalised Dirichlet conditions** at $V^m$: not all boundary values of $u$ at $V^m$ are equal to zero, but a certain combination of them is zero (more precisely, the projection onto the eigensubspace $P^m_{-1}\mathbb{C}^{d_m}$ is zero).

Let us now apply the projector $P^{m\perp}_{-1} = I - P^m_{-1}$ to both sides of (11.2) and use again that the projector and the matrix $S^m$ commute:

$$i\,P^{m\perp}_{-1}(S^m - I)P^{m\perp}_{-1}\vec{u}(V^m) = P^{m\perp}_{-1}(S^m + I)P^{m\perp}_{-1}\,\partial\vec{u}(V^m).\tag{11.4}$$

Note that the matrix $P^{m\perp}_{-1}(S^m + I)P^{m\perp}_{-1}$ is invertible in the subspace $P^{m\perp}_{-1}\mathbb{C}^{d_m}$, and therefore the last condition can be written in Robin form as follows:

$$P\_{-1}^{m\perp} \partial \vec{u}(V^m) = P\_{-1}^{m\perp} i \frac{S^m - I}{S^m + I} P\_{-1}^{m\perp} \vec{u}(V^m). \tag{11.5}$$

The matrix $P^{m\perp}_{-1}\, i\,\frac{S^m - I}{S^m + I}\, P^{m\perp}_{-1}$ is Hermitian in $P^{m\perp}_{-1}\mathbb{C}^{d_m}$ and was denoted by $A^m$ in (3.28):

$$A^m := P\_{-1}^{m \perp} i \frac{S^m - I}{S^m + I} P\_{-1}^{m \perp}. \tag{11.6}$$

Hence every function from the domain of the operator satisfies the **generalised Robin condition**:

$$P\_{-1}^{m\perp} \partial \vec{u}(V^m) = A^m P\_{-1}^{m\perp} \vec{u}(V^m). \tag{11.7}$$
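To make the projector formalism concrete, here is a minimal sketch assuming the simplest non-trivial example, the $2\times 2$ vertex scattering matrix $S = \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ (which corresponds to standard conditions at a vertex of degree two): the eigenvalue $-1$ projector enforces continuity via (11.3), while $A^m = 0$ turns (11.7) into the Kirchhoff balance of derivatives.

```python
# A sketch, not the book's computation: S = [[0,1],[1,0]] has eigenvalues +1
# (eigenvector (1,1)) and -1 (eigenvector (1,-1)).

def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_vec(X, v):
    return [sum(X[i][k]*v[k] for k in range(2)) for i in range(2)]

S = [[0.0, 1.0], [1.0, 0.0]]

# eigenprojector onto the eigenvalue -1 subspace and its complement
P_minus = [[0.5, -0.5], [-0.5, 0.5]]
P_perp  = [[0.5,  0.5], [ 0.5, 0.5]]

# P_minus commutes with S, and S acts on its range as -1: S P_minus = -P_minus
SP = mat_mul(S, P_minus)
assert all(abs(SP[i][j] + P_minus[i][j]) < 1e-12 for i in range(2) for j in range(2))

# On ran(P_perp) the matrix S acts as +1, so A = P_perp i(S-I)/(S+I) P_perp = 0.
# Conditions (11.3) and (11.7) then read:
#   P_minus u(V) = 0   <=>  u_1 = u_2           (continuity)
#   P_perp du(V) = 0   <=>  du_1 + du_2 = 0     (Kirchhoff balance)
u  = [3.7, 3.7]        # continuous boundary values
du = [1.25, -1.25]     # balanced derivatives
print(mat_vec(P_minus, u))   # zero vector: generalised Dirichlet holds
print(mat_vec(P_perp, du))   # zero vector: generalised Robin (A = 0) holds
```

The same two projections reproduce the standard vertex conditions at vertices of any degree; the $2\times 2$ case is chosen only so the projectors can be written down by hand.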

Using this notation the expression for the sesquilinear form can be written as follows:

$$\begin{split} \mathcal{Q}\_{L^{S}\_{q}}(\boldsymbol{u},\boldsymbol{v}) &= \sum\_{n=1}^{N} \int\_{E\_{n}} \overline{\boldsymbol{u}^{\prime}(\boldsymbol{x})} \boldsymbol{v}^{\prime}(\boldsymbol{x}) d\boldsymbol{x} + \sum\_{n=1}^{N} \int\_{E\_{n}} \boldsymbol{q}(\boldsymbol{x}) \overline{\boldsymbol{u}(\boldsymbol{x})} \boldsymbol{v}(\boldsymbol{x}) d\boldsymbol{x} \\ &+ \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{\boldsymbol{u}}(\boldsymbol{V}^{m}), \boldsymbol{A}^{m} P\_{-1}^{m\perp} \vec{\boldsymbol{v}}(\boldsymbol{V}^{m}) \rangle\_{\mathbb{C}^{d\_{\rm{m}}}}. \end{split} \tag{11.8}$$

Here we split the integral term, since we know that the functions *u, v* are continuous and *q* ∈ *L*1*.* The corresponding quadratic form is

$$\begin{split} \mathcal{Q}\_{L\_q^S}(\boldsymbol{u}, \boldsymbol{u}) &= \sum\_{n=1}^N \int\_{E\_n} |\boldsymbol{u}'(\boldsymbol{x})|^2 d\boldsymbol{x} + \sum\_{n=1}^N \int\_{E\_n} q(\boldsymbol{x}) |\boldsymbol{u}(\boldsymbol{x})|^2 d\boldsymbol{x} \\ &+ \sum\_{m=1}^M \langle P\_{-1}^{m\perp} \vec{\boldsymbol{u}}(\boldsymbol{V}^m), \boldsymbol{A}^m P\_{-1}^{m\perp} \vec{\boldsymbol{u}}(\boldsymbol{V}^m) \rangle\_{\mathbb{C}^{dm}} . \end{split} \tag{11.9}$$

It is common for unbounded operators that the quadratic form is defined on a domain which is larger than the domain of the operator. If the operator is strictly positive, then the domain of the quadratic form is obtained by closing the operator domain with respect to the norm given by the quadratic form.

The quadratic form we obtained is not necessarily positive, since there is no reason to assume that the Hermitian matrices $A^m$ are positive, and the potential $q$ may also be negative. In order to proceed we need to show that the quadratic form is semi-bounded, i.e. that there exists a constant $K$ such that

$$\|u\|\_{\mathcal{Q}\_{L\_q^{\mathbf{S}}}}^{2} := \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, u) + K \|u\|\_{L\_2(\Gamma)}^{2},\tag{11.10}$$

is positive definite.

## *11.1.2 An Elementary Sobolev Estimate*

In order to proceed we need the following elementary Sobolev estimate (a special case of the Gagliardo–Nirenberg estimate), showing that every function from *W*<sub>2</sub><sup>1</sup> on a compact interval is essentially bounded.

**Lemma 11.1** *Assume that u* ∈ *W*<sub>2</sub><sup>1</sup>[0*, ℓ*]*, then it holds*

$$\left\|\boldsymbol{u}\right\|\_{L\_{\infty}[0,\ell]}^2 \le \epsilon \left\|\boldsymbol{u}'\right\|\_{L\_2[0,\ell]}^2 + \frac{2}{\epsilon} \left\|\boldsymbol{u}\right\|\_{L\_2[0,\ell]}^2,\tag{11.11}$$

*where* ε *>* 0 *can be chosen arbitrarily, provided it is sufficiently small:* ε ≤ ℓ*.*

*Proof* We first prove the estimate for continuous, piecewise continuously differentiable functions. Let us denote by *x*<sub>min</sub> one of the points at which |*u*| attains its minimum; then it holds

$$|u(x)|^2 \le \underbrace{|u(x\_{\min})|^2}\_{\le \|u\|^2/\ell} + 2\,\mathrm{Re}\int\_{x\_{\min}}^{x} \overline{u(y)}\, u'(y)\, dy$$

and hence

$$\begin{split} |u(x)|^2 &\le \|u\|^2/\ell + 2\int\_{x\_{\min}}^{x} |u(y)|\,|u'(y)|\, dy \\ &\le \|u\|^2/\ell + \epsilon \int\_0^\ell |u'(y)|^2 dy + \frac{1}{\epsilon}\int\_0^\ell |u(y)|^2 dy \\ &\le \epsilon \|u'\|\_{L\_2[0,\ell]}^2 + \left(\frac{1}{\epsilon} + \frac{1}{\ell}\right) \|u\|\_{L\_2[0,\ell]}^2. \end{split}$$

Taking into account that ε is positive and less than ℓ, so that 1*/ℓ* ≤ 1*/ε*, we obtain (11.11).

It remains to note that continuous piecewise continuously differentiable functions form a dense subset in *W*<sub>2</sub><sup>1</sup>[0*, ℓ*] and hence estimate (11.11) holds for any function from *W*<sub>2</sub><sup>1</sup>[0*, ℓ*].

The obtained estimate will be used for sufficiently small values of ε; more precisely, estimates with ε tending to zero will be interesting for us. Therefore the restriction ε ≤ ℓ is not essential. It is important to remember that, taking smaller and smaller values of ε, the coefficient in front of ‖*u*‖<sup>2</sup> increases.
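For readers who wish to experiment, the estimate (11.11) is easy to check numerically; the following Python sketch uses an arbitrary illustrative function and interval (not taken from the text).

```python
import numpy as np

# Numerical spot-check of the Sobolev estimate (11.11):
#   ||u||_inf^2 <= eps * ||u'||_2^2 + (2/eps) * ||u||_2^2,  0 < eps <= ell.
# The interval and the test function are arbitrary illustrative choices.
ell = 1.0
x = np.linspace(0.0, ell, 100001)
dx = x[1] - x[0]
u = np.exp(x) * np.cos(3 * x)            # any W^1_2 function would do
du = np.gradient(u, x)                   # numerical first derivative

sup2 = np.max(np.abs(u)) ** 2            # ||u||_inf^2
norm2_u = np.sum(np.abs(u) ** 2) * dx    # ||u||_2^2 (Riemann sum)
norm2_du = np.sum(np.abs(du) ** 2) * dx  # ||u'||_2^2

for eps in [0.01, 0.1, 0.5, 1.0]:        # all satisfy eps <= ell
    assert sup2 <= eps * norm2_du + (2.0 / eps) * norm2_u
print("estimate (11.11) holds for all tested eps")
```

Shrinking ε makes the coefficient 2/ε in front of ‖*u*‖² blow up, exactly as remarked above.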

The estimate (11.11) can be generalised for the case of metric graphs as follows

$$\|u\|\_{L\_{\infty}(\Gamma)}^2 \le \epsilon \|u'\|\_{L\_2(\Gamma)}^2 + \frac{2}{\epsilon} \|u\|\_{L\_2(\Gamma)}^2,\tag{11.12}$$

provided *u* ∈ *W*<sub>2</sub><sup>1</sup>(Γ) without any required vertex conditions and ε ≤ ℓ<sub>min</sub>, where ℓ<sub>min</sub> denotes the length of the shortest edge

$$\ell\_{\min} = \min\_{n=1,2,\ldots,N} \ell\_n. \tag{11.13}$$

The obtained estimate may be improved by taking into account topological properties of the metric graph and the vertex conditions. For example, in the case of standard vertex conditions the estimate holds for ε less than the diameter of the metric graph. The diameter is the longest distance between any two points on the metric graph.

**Problem 43** Prove the Sobolev estimate (11.12) for ε less than the diameter of the metric graph Γ, provided the function *u* ∈ *W*<sub>2</sub><sup>1</sup>(Γ \ **V**) is in addition continuous at the vertices.

## *11.1.3 The Perturbation Term Is Form-Bounded*

Our immediate goal is to estimate the second and third terms in the quadratic form expression (11.9) in order to show that the form is semibounded.

The **second term** can be estimated as follows. Inequality (11.11) implies that

$$\begin{aligned} \left| \int\_{E\_n} q(\mathbf{x}) |u(\mathbf{x})|^2 d\mathbf{x} \right| &\leq \|q\|\_{L\_1(E\_n)} \max\_{\mathbf{x}\in E\_n} |u(\mathbf{x})|^2 \\ &\leq \|q\|\_{L\_1(E\_n)} \left( \epsilon \|u'\|\_{L\_2(E\_n)}^2 + \frac{2}{\epsilon} \|u\|\_{L\_2(E\_n)}^2 \right). \end{aligned}$$

Hence we have

$$\left| \sum\_{n=1}^{N} \int\_{E\_n} q(\mathbf{x}) |u(\mathbf{x})|^2 d\mathbf{x} \right| \le \|q\|\_{L\_1(\Gamma)} \left( \epsilon \|u'\|\_{L\_2(\Gamma)}^2 + \frac{2}{\epsilon} \|u\|\_{L\_2(\Gamma)}^2 \right),\tag{11.14}$$

of course provided ε < ℓ<sub>min</sub>.

Under the same assumption on ε, the **third term** satisfies

$$\begin{aligned} & \left| \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}(V^m), A^m P\_{-1}^{m\perp} \vec{u}(V^m) \rangle\_{\mathbb{C}^{d\_m}} \right| \\ & \le \sum\_{m=1}^{M} d\_m \|A^m\| \, \|u\|\_{L\_\infty}^2 \\ & \le \left( \sum\_{m=1}^{M} d\_m \|A^m\| \right) \left( \epsilon \|u'\|\_{L\_2(\Gamma)}^2 + \frac{2}{\epsilon} \|u\|\_{L\_2(\Gamma)}^2 \right), \end{aligned} \tag{11.15}$$

where *d<sub>m</sub>* is the degree of the vertex *V<sup>m</sup>*. Note that the obtained estimates (11.14) and (11.15) are far from being optimal.

Let us consider the quadratic form *Q*<sub>*L*<sub>*q*</sub><sup>**S**</sup></sub>(*u*, *u*) as a perturbation of the Dirichlet form ‖*u*′‖<sup>2</sup><sub>*L*<sub>2</sub>(Γ)</sub>:

$$\mathcal{Q}\_{L\_q^{\mathbf{S}}}(u,u) = \|u'\|\_{L\_2(\Gamma)}^2 + B\_{L\_q^{\mathbf{S}}}(u,u). \tag{11.16}$$

We have proven that the perturbation term

$$B\_{L\_q^\mathcal{S}}(u,u) := \int\_{\Gamma} q(\mathbf{x}) |u(\mathbf{x})|^2 d\mathbf{x} + \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}(V^m), A^m P\_{-1}^{m\perp} \vec{u}(V^m) \rangle\_{\mathbb{C}^{d\_m}} \tag{11.17}$$

possesses the estimate

$$|B\_{L\_q^S}(\boldsymbol{u}, \boldsymbol{u})| \le \left(\sum\_{m=1}^M d\_m \|A^m\| + \|q\|\_{L\_1(\Gamma)}\right) \left(\epsilon \|\boldsymbol{u}'\|\_{L\_2(\Gamma)}^2 + \frac{2}{\epsilon} \|\boldsymbol{u}\|\_{L\_2(\Gamma)}^2\right). \tag{11.18}$$

This inequality has two important implications: the quadratic form is semibounded, and the quadratic form norm (11.10) is equivalent to the Sobolev norm. To see this, choose

$$K = \left(\sum\_{m=1}^{M} d\_m \|A^m\| + \|q\|\_{L\_1(\Gamma)}\right) \frac{2}{\epsilon} + 1,\tag{11.19}$$

where ε ≤ ℓ<sub>min</sub> satisfies in addition

$$\epsilon \le \frac{1}{2} \left( \sum\_{m=1}^{M} d\_m \|A^m\| + \|q\|\_{L\_1(\Gamma)} \right)^{-1}.$$

With such ε and *K* we have the two-sided estimate:

$$\frac{1}{2} \|u'\|\_{L\_2(\Gamma)}^2 + \|u\|\_{L\_2(\Gamma)}^2 \le \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, u) + K \|u\|\_{L\_2(\Gamma)}^2 \le \frac{3}{2} \|u'\|\_{L\_2(\Gamma)}^2 + 2K \|u\|\_{L\_2(\Gamma)}^2. \tag{11.20}$$

In other words, expression (11.10) determines a norm which is equivalent to the Sobolev *W*<sub>2</sub><sup>1</sup>-norm:

$$\|u\|\_{W\_2^1(\Gamma)}^2 = \|u'\|\_{L\_2(\Gamma)}^2 + \|u\|\_{L\_2(\Gamma)}^2.$$

Our next goal is to determine the domain of the quadratic form—the closure of Dom (*L*<sub>*q*</sub><sup>**S**</sup>) with respect to the quadratic form norm introduced above. It is natural to study first the closure of the Dirichlet form.

## *11.1.4 The Reference Laplacian*

Consider the Dirichlet form

$$\left\|u'\right\|\_{L\_2(\Gamma)}^2 = \int\_{\Gamma} |u'(\mathbf{x})|^2 d\mathbf{x} \tag{11.21}$$

defined on the functions *u* ∈ Dom (*L*<sub>*q*</sub><sup>**S**</sup>(Γ)). The quadratic form is not closed on this domain and our aim is to determine its closure. We may restrict the quadratic form further by considering just smooth functions *u* ∈ *C*<sup>∞</sup>(*E<sub>n</sub>*) satisfying the given vertex conditions, which we write as a combination of generalised Dirichlet and Robin conditions (see formulas (11.3) and (11.7)).

Consider any Cauchy sequence *u<sub>j</sub>* ∈ *C*<sup>∞</sup>(Γ \ **V**) such that ‖*u<sub>j</sub>* − *u<sub>i</sub>*‖<sub>*W*<sub>2</sub><sup>1</sup>(Γ\**V**)</sub> → 0. It follows that ‖*u<sub>j</sub>* − *u<sub>i</sub>*‖<sub>*L*<sub>2</sub>(Γ)</sub> → 0 and ‖*u*′<sub>*j*</sub> − *u*′<sub>*i*</sub>‖<sub>*L*<sub>2</sub>(Γ)</sub> → 0, and therefore the limiting function as well as its first derivative are square integrable. In other words the limit function belongs to *W*<sub>2</sub><sup>1</sup>(Γ \ **V**). Every such function is continuous on each edge, therefore the generalised Dirichlet conditions (11.3) are preserved. The generalised Robin conditions (11.5) disappear, since the functions from *W*<sub>2</sub><sup>1</sup> are not necessarily continuously differentiable. Summing up, the closure of the positive Dirichlet form is defined by the same expression (11.21) on the domain of functions from *W*<sub>2</sub><sup>1</sup>(Γ \ **V**) satisfying just the generalised Dirichlet conditions (11.3).

In the next step we calculate the self-adjoint operator associated with the closure of the Dirichlet form. Consider the sesquilinear form

$$\langle u', v' \rangle$$

assuming that *u* and *v* are in the domain of the quadratic form. The domain of the corresponding operator is given by all *v* such that the sesquilinear form determines a bounded linear functional with respect to *u* in the Hilbert space norm:

$$
\left| \langle u', v' \rangle \right| \le C\_v \|u\|\_{L\_2(\Gamma)}.\tag{11.22}
$$

Taking first *u* ∈ *C*<sub>0</sub><sup>∞</sup>(Γ \ **V**) we see that (11.22) holds only if the second generalised derivative of *v* lies in *L*<sub>2</sub>(Γ), i.e. if *v* ∈ *W*<sub>2</sub><sup>2</sup>(Γ \ **V**). Consider now any *u* ∈ *W*<sub>2</sub><sup>1</sup>(Γ \ **V**) satisfying the generalised Dirichlet condition (11.3); one may integrate by parts in the sesquilinear form to get

$$\begin{aligned} \langle u', v' \rangle &= \int\_{\Gamma} \overline{u(x)} (-v''(x)) dx - \sum\_{m=1}^{M} \langle \vec{u}^{m}, \partial \vec{v}^{m} \rangle\_{\mathbb{C}^{d\_m}} \\ &= \int\_{\Gamma} \overline{u(x)} (-v''(x)) dx - \sum\_{m=1}^{M} \underbrace{\langle P\_{-1}^{m} \vec{u}^{m}, P\_{-1}^{m} \partial \vec{v}^{m} \rangle\_{\mathbb{C}^{d\_m}}}\_{=0} \\ &\quad - \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}^{m}, P\_{-1}^{m\perp} \partial \vec{v}^{m} \rangle\_{\mathbb{C}^{d\_m}}. \end{aligned}$$

The integral term gives a bounded functional with respect to *u*. Hence the sesquilinear form determines a bounded functional if and only if

$$\langle P\_{-1}^{m \perp} \vec{u}^{m}, P\_{-1}^{m \perp} \partial \vec{v}^{m} \rangle\_{\mathbb{C}^{d\_m}}$$

are bounded functionals with respect to the *L*<sub>2</sub>-norm. The functionals *u* → *u⃗<sup>m</sup>* are not bounded, since square integrable functions are not necessarily bounded (see estimate (11.11), where ε can be taken arbitrarily small but not equal to zero). Therefore we get a bounded functional only if *v* satisfies the **generalised Neumann condition**

$$P\_{-1}^{m \perp} \partial \vec{v}^m = 0.\tag{11.23}$$

Summing up, the self-adjoint operator corresponding to the closure of the Dirichlet form is the Laplace operator defined on the functions from *W*<sub>2</sub><sup>2</sup>(Γ \ **V**) satisfying the generalised Dirichlet (11.3) and generalised Neumann (11.23) conditions at the vertices. These vertex conditions are scaling-invariant and are sometimes called non-Robin. This operator will play the role of a **reference** operator when spectral estimates for quantum graphs are derived. Using our notation, the reference operator can be written as *L*<sup>**S<sub>v</sub>**(∞)</sup>, where following (3.31) we introduce the high energy limit of the vertex scattering matrix

$$\mathbf{S\_{V}}(\infty) = \lim\_{k \to \infty} \mathbf{S\_{V}}(k) = I - 2P\_{-1},$$

where *P*<sub>−1</sub> = ⊕<sub>*m*=1</sub><sup>*M*</sup> *P*<sup>*m*</sup><sub>−1</sub>. The matrices *S*<sup>*m*</sup><sub>**v**</sub>(∞) have eigenvalues −1 and 1 and the corresponding eigensubspaces are *P*<sup>*m*</sup><sub>−1</sub>ℂ<sup>*d<sub>m</sub>*</sup> and *P*<sup>*m*⊥</sup><sub>−1</sub>ℂ<sup>*d<sub>m</sub>*</sup>. Hence the functions from the domain of *L*<sup>**S<sub>v</sub>**(∞)</sup> satisfy precisely the Dirichlet and Neumann conditions (see (11.3) and (11.23)).

Note that the operator *L*<sup>**S<sub>v</sub>**(∞)</sup> coincides with *L*<sup>**S**</sup> only if **S** is Hermitian. In general, substituting **S** with **S<sub>v</sub>**(∞) may change the topology of the described system—the metric graph corresponding to *L*<sup>**S<sub>v</sub>**(∞)</sup> may be slightly different from the original graph Γ. The original matrix **S** has block structure, each block associated with a vertex in the graph Γ. Therefore the limiting matrix **S<sub>v</sub>**(∞) also has block structure, but the blocks in it could be finer than those in **S**. We have already met this phenomenon in Sect. 9.3.3.

**Example 11.2** Consider the 2 × 2 unitary matrix

$$\mathbf{S} = \frac{1}{2} \begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix}.$$

The eigenvalues are 1 and *i*:

$$\mathbf{S} = 1 \cdot P\_{(1,1)} + i \cdot P\_{(1,-1)},$$

where *P*<sub>(1,1)</sub> and *P*<sub>(1,−1)</sub> are the orthogonal projectors on the corresponding eigenvectors (1, 1) and (1, −1) respectively. It follows that **S<sub>v</sub>**(∞) is the unitary matrix with 1 as a double eigenvalue. Indeed, formula (3.20), or more explicitly (3.30), implies that

$$\mathbf{S}\_{\mathbf{v}}(k) = 1 \cdot P\_{(1,1)} + \frac{k(i+1) + (i-1)}{k(i+1) - (i-1)}\, P\_{(1,-1)} \xrightarrow[k\to\infty]{} 1 \cdot P\_{(1,1)} + 1 \cdot P\_{(1,-1)} = \mathbb{I}.$$

The limit matrix **S<sub>v</sub>**(∞) describes not a single vertex of degree two, but two independent degree one vertices with Neumann conditions.
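The computation in Example 11.2 can be checked numerically; the following sketch implements the projectors and the Möbius factor exactly as they appear in the formulas above.

```python
import numpy as np

# Example 11.2 numerically: S = 1*P_plus + i*P_minus with eigenvectors
# (1,1) and (1,-1); the vertex scattering matrix S_v(k) keeps the
# eigenvalue-1 part and its second eigenvalue tends to 1 as k -> infinity.
S = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])
P_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # projector on (1, 1)
P_minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # projector on (1, -1)

assert np.allclose(S, P_plus + 1j * P_minus)   # spectral decomposition of S

def S_v(k):
    """k-dependent vertex scattering matrix from Example 11.2."""
    mobius = (k * (1j + 1) + (1j - 1)) / (k * (1j + 1) - (1j - 1))
    return P_plus + mobius * P_minus

# High-energy limit: S_v(k) -> I, i.e. two decoupled Neumann endpoints.
assert np.allclose(S_v(1e8), np.eye(2), atol=1e-7)
print("S_v(k) -> I as k -> infinity")
```

The identity matrix has no off-diagonal coupling, which is precisely why the degree two vertex falls apart into two Neumann endpoints.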

Let us denote by Γ<sub>∞</sub> the metric graph corresponding to the new vertex scattering matrix **S<sub>v</sub>**(∞). The graph Γ<sub>∞</sub> is obtained from the original metric graph Γ by chopping all vertices for which *S*<sup>*m*</sup><sub>**v**</sub>(∞) = lim<sub>*k*→∞</sub> *S*<sup>*m*</sup><sub>**v**</sub>(*k*) has block structure. For future use we formulate the following definition, which involves Schrödinger operators with possibly non-zero potentials.

**Definition 11.3** Let *L*<sub>*q*</sub><sup>**S**</sup>(Γ) be a Schrödinger operator on a metric graph Γ with vertex conditions determined by the matrix **S**. Let **S<sub>v</sub>**(∞) be the high-energy limit of the vertex scattering matrix and Γ<sub>∞</sub> the corresponding metric graph obtained by chopping (if necessary) certain vertices in Γ so that the vertex conditions determined by **S<sub>v</sub>**(∞) are properly connecting for Γ<sub>∞</sub>. Then the non-Robin Laplace operator *L*<sup>**S<sub>v</sub>**(∞)</sup>(Γ<sub>∞</sub>) = *L*<sub>0</sub><sup>**S<sub>v</sub>**(∞)</sup>(Γ<sub>∞</sub>) is called the **reference Laplacian** for the Schrödinger operator *L*<sub>*q*</sub><sup>**S**</sup>(Γ).

Note that the Hilbert spaces *L*<sub>2</sub>(Γ) and *L*<sub>2</sub>(Γ<sub>∞</sub>) can always be identified, since these metric graphs have the same set of edges. The reference operator will play a very important role in spectral estimates, where the spectrum of the Schrödinger operator will be compared to the spectrum of the reference Laplacian.

It will be convenient to distinguish vertex conditions that do not lead to a different reference graph:

**Definition 11.4** Vertex conditions are called **asymptotically properly connecting** if the high energy limits of all vertex scattering matrices *S*<sup>*m*</sup><sub>**v**</sub>(∞) are irreducible, i.e. if Γ<sub>∞</sub> = Γ.

**Definition 11.5** Vertex conditions are called **asymptotically standard** if and only if **S<sub>v</sub>**(∞) = **S**<sup>st</sup>(Γ<sub>∞</sub>), where **S**<sup>st</sup>(Γ<sub>∞</sub>) is the unitary matrix determining standard vertex conditions on Γ<sub>∞</sub>.

Note that we do not require that **S<sub>v</sub>**(∞) = **S**<sup>st</sup>(Γ) for vertex conditions to be asymptotically standard. If this is the case, then the vertex conditions are called asymptotically properly connecting and standard.

**Problem 44** Consider a degree three vertex. Provide examples of properly connecting vertex conditions such that in Γ<sub>∞</sub> the vertex splits into:


Give an example of asymptotically properly connecting vertex conditions.

**Problem 45** How can one describe all asymptotically standard vertex conditions?

## *11.1.5 Closure of the Perturbed Quadratic Form*

We now return to the operator *L*<sub>*q*</sub><sup>**S**</sup>(Γ) and the corresponding quadratic form norm given by (11.10), where the constant *K* is chosen as in (11.19). For simplicity, let us consider this norm not on the domain of *L*<sub>*q*</sub><sup>**S**</sup>(Γ), but on the space of smooth functions *C*<sup>∞</sup>(Γ \ **V**) satisfying vertex conditions with the parameter matrix **S**. These conditions can be written as a combination of generalised Dirichlet and Robin conditions: (11.3) and (11.5).

The estimate (11.20) implies that the closures of this domain with respect to the quadratic form norm of *L*<sub>*q*</sub><sup>**S**</sup> and with respect to the Dirichlet form coincide. Hence the quadratic form (11.9) is closed on the set of *W*<sub>2</sub><sup>1</sup>(Γ \ **V**)-functions satisfying just the generalised Dirichlet conditions (11.3). As before, the Robin part of the vertex conditions disappears. The difference is that the Robin part can be reconstructed taking into account the terms involving the matrices *A<sup>m</sup>*.

Let us summarise our studies:

**Theorem 11.6** *Let* Γ *be a finite compact metric graph and let q* ∈ *L*<sub>1</sub>(Γ)*. Then the quadratic form of the operator L*<sub>*q*</sub><sup>**S**</sup> *is defined on the domain* Dom (*Q*<sub>*L*<sub>*q*</sub><sup>**S**</sup></sub>) *of functions from W*<sub>2</sub><sup>1</sup>(Γ \ **V**) *satisfying the generalised Dirichlet conditions* (11.3) *at the vertices and is given by the following expression*

$$\begin{split} \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, u) &= \int\_{\Gamma} \left( |u'(x)|^2 + q(x) |u(x)|^2 \right) dx \\ &\quad + \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}(V^m), A^m P\_{-1}^{m\perp} \vec{u}(V^m) \rangle\_{\mathbb{C}^{d\_m}}. \end{split}\tag{11.24}$$

We have already mentioned that there is a one-to-one correspondence between semibounded quadratic forms and self-adjoint operators. Let us see how the unique self-adjoint operator *L*<sub>*q*</sub><sup>**S**</sup> can be reconstructed from its quadratic form *Q*<sub>*L*<sub>*q*</sub><sup>**S**</sup></sub>. Assume that the quadratic form is known and therefore is given by the expression (11.24) on the domain of functions from *W*<sub>2</sub><sup>1</sup>(Γ \ **V**) satisfying the generalised Dirichlet conditions (11.3). Assume that *u, v* ∈ Dom (*Q*<sub>*L*<sub>*q*</sub><sup>**S**</sup></sub>); then the corresponding sesquilinear form is

$$\begin{split} \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, v) &= \int\_{\Gamma} \left( \overline{u'(x)} v'(x) + q(x) \overline{u(x)} v(x) \right) dx \\ &\quad + \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}(V^m), A^m P\_{-1}^{m\perp} \vec{v}(V^m) \rangle\_{\mathbb{C}^{d\_m}}. \end{split}\tag{11.25}$$

The domain of the operator consists of all functions *v* such that (11.25) defines a bounded linear functional with respect to *u*, i.e. an estimate similar to (11.22) holds.

Consider first *u* ∈ *C*<sub>0</sub><sup>∞</sup>(Γ \ **V**); then the form is equal to

$$\mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, v) = \int\_{\Gamma} \overline{u'(x)} v'(x) dx + \int\_{\Gamma} q(x) \overline{u(x)} v(x) dx.$$

Using generalised derivatives, one may write that this sesquilinear form determines a bounded functional only if

$$-v'' + qv \in L\_2(\Gamma) \tag{11.26}$$

holds.

Assume now that *u* ∈ *C*<sup>∞</sup>(Γ \ **V**), i.e. *u* is not necessarily equal to zero in a neighbourhood of the vertices. We already know that (11.26) holds and hence *v* and its first derivative are continuous on each edge; this allows us to integrate by parts:

$$\begin{split} \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, v) &= \int\_{\Gamma} \overline{u(x)} \left( -v''(x) + q(x) v(x) \right) dx - \sum\_{m=1}^{M} \langle \vec{u}(V^m), \partial \vec{v}(V^m) \rangle\_{\mathbb{C}^{d\_m}} \\ &\quad + \sum\_{m=1}^{M} \langle P\_{-1}^{m\perp} \vec{u}(V^m), A^m P\_{-1}^{m\perp} \vec{v}(V^m) \rangle\_{\mathbb{C}^{d\_m}}. \end{split}$$

We already know that the integral term is a bounded functional. Taking into account that *u* satisfies the generalised Dirichlet conditions (11.3), the vertex terms can be written as

$$\sum\_{m=1}^{M} \langle P\_{-1}^{m \perp} \vec{u}(V^{m}), A^{m} P\_{-1}^{m \perp} \vec{v}(V^{m}) - P\_{-1}^{m \perp} \partial \vec{v}(V^{m}) \rangle\_{\mathbb{C}^{d\_m}}.\tag{11.27}$$

Since the functionals *u* → *u*(*x<sub>j</sub>*) are not bounded with respect to the *L*<sub>2</sub>-norm of *u* and the vectors *P*<sup>*m*⊥</sup><sub>−1</sub>*u⃗*(*V<sup>m</sup>*) are arbitrary, (11.27) determines bounded functionals if and only if

$$A^m P\_{-1}^{m \perp} \vec{v}(V^m) - P\_{-1}^{m \perp} \partial \vec{v}(V^m) = 0,$$

which is equivalent to the generalised Robin condition (11.7). Hence the domain of the operator is given by the set of functions from *W*<sub>2</sub><sup>1</sup>(Γ \ **V**) satisfying (11.26) and the vertex conditions (11.3) and (11.7), as expected. Provided *v* belongs to this domain, the sesquilinear form is given by

$$\mathcal{Q}\_{L\_q^{\mathbf{S}}}(u, v) = \int\_{\Gamma} \overline{u(x)} \underbrace{\left( -v''(x) + q(x) v(x) \right)}\_{= (\tau\_q v)(x)} dx,$$

which shows that the action of the operator corresponding to the quadratic form is given by the differential expression *τq* as indicated above.

## **11.2 Spectral Estimates (Standard Vertex Conditions)**

The spectrum of the Schrödinger operator with *L*<sub>1</sub>-potential on a finite compact metric graph is purely discrete, since every such operator can be considered as a finite rank perturbation (in the resolvent sense) of the Schrödinger operator on the separate edges with Dirichlet conditions at all endpoints (the so-called Dirichlet operator, see Definition 4.3). The aim of this section is to prove spectral estimates comparing the spectra of Schrödinger operators and the reference Laplacians, two differential operators acting essentially on the same metric graph.<sup>2</sup> The main reason for such studies is that the spectrum of a non-Robin Laplacian is much easier to calculate. Moreover, as a by-product of our studies we shall prove a generalisation of the celebrated Ambartsumian theorem (see Chap. 14). It turns out that, as in the case of a single interval, the difference between Laplace and Schrödinger eigenvalues is uniformly bounded. The classical proof of this fact in the case of a single interval relies heavily on the explicit formula for the resolvent kernel of the Laplacian [105, 381]. Since the corresponding formula in the case of metric graphs is not so explicit, we are going to use general perturbation theory. In particular the *min-max* and *max-min* principles giving explicit formulas for the eigenvalues will be exploited (see Proposition 4.19).

Our immediate goal is to use Proposition 4.19 to estimate the eigenvalues of the Schrödinger operator through the corresponding Laplacian eigenvalues. This is an easy exercise if we assume *q* ∈ *L*<sub>∞</sub>(Γ), but we want to cover the most general case of *q* ∈ *L*<sub>1</sub>(Γ).

**Problem 46** Assume that *q* ∈ *L*<sub>∞</sub>(Γ); show that the eigenvalues of the Laplace and Schrödinger operators satisfy the estimate, uniform in *n*,

$$|\lambda\_n(L\_q^{\rm st}) - \lambda\_n(L^{\rm st})| \le C,\tag{11.28}$$

where *C* is a certain constant independent of *n.* Can you give an explicit formula for *C*?
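To experiment with Problem 46, one can look at a finite-difference model; the graph below is the simplest one imaginable (a single interval with Dirichlet endpoints, not a choice made in the text), and the discrete analogue of the estimate follows from Weyl's perturbation inequality for Hermitian matrices.

```python
import numpy as np

# Discrete model for Problem 46 on the simplest metric graph, a single
# interval with Dirichlet endpoints: the Schroedinger matrix is the
# finite-difference Laplacian plus diag(q).  Weyl's perturbation
# inequality gives |lambda_n(L_q) - lambda_n(L)| <= ||q||_inf for all n.
N, ell = 400, np.pi
h = ell / (N + 1)
x = np.linspace(h, ell - h, N)

L = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
q = 3.0 * np.cos(5 * x) - 1.0          # an arbitrary bounded test potential
Lq = L + np.diag(q)

lam = np.linalg.eigvalsh(L)            # eigenvalues in increasing order
lam_q = np.linalg.eigvalsh(Lq)

C = np.max(np.abs(q))                  # explicit candidate constant for (11.28)
assert np.max(np.abs(lam_q - lam)) <= C + 1e-8
print("sup_n |lambda_n(L_q) - lambda_n(L)| <=", C)
```

This suggests *C* = ‖*q*‖<sub>*L*<sub>∞</sub>(Γ)</sub> as the explicit constant, which indeed follows from the min-max principle when *q* ∈ *L*<sub>∞</sub>(Γ).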

If one takes into account that the eigenvalues satisfy Weyl's law, one may prove that the square roots of the eigenvalues are asymptotically close:

$$k\_n(L\_q^{\mathbf{S}}) - k\_n(L^{\mathbf{S}}) = \frac{\lambda\_n(L\_q^{\mathbf{S}}) - \lambda\_n(L^{\mathbf{S}})}{k\_n(L\_q^{\mathbf{S}}) + k\_n(L^{\mathbf{S}})} \xrightarrow[n \to \infty]{} 0. \tag{11.29}$$

This observation motivates the following definition.

<sup>2</sup> These operators act on Γ and Γ<sub>∞</sub> respectively.

**Definition 11.7** Two unbounded, semi-bounded self-adjoint operators *A* and *B* with discrete spectra {*λ<sub>n</sub>*(*A*)}<sub>*n*=1</sub><sup>∞</sup> and {*λ<sub>n</sub>*(*B*)}<sub>*n*=1</sub><sup>∞</sup>, respectively, are called **asymptotically isospectral** if

$$\lim\_{n \to \infty} \left( \sqrt{\lambda\_n(A)} - \sqrt{\lambda\_n(B)} \right) = 0. \tag{11.30}$$

This definition makes sense only if the operators are unbounded; otherwise the requirement in the definition is too weak. To guarantee asymptotic isospectrality of quantum graphs it is enough if the eigenvalues satisfy the estimate

$$|\lambda\_n(L\_q^{\mathbf{S}}) - \lambda\_n(L^{\mathbf{S}})| \le Cn^{1-\epsilon}, \quad \epsilon > 0. \tag{11.31}$$

Our spectral estimates are stronger, but we prefer to use the notion of isospectrality, since precisely condition (11.30) will be used to prove that two generalised trigonometric polynomials have close zeroes if and only if the zeroes coincide.
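A minimal numerical illustration of Definition 11.7 uses model eigenvalue sequences (the choice λ<sub>*n*</sub> = *n*², mimicking the Dirichlet Laplacian on [0, *π*], and the shift by 5 are illustrative, not from the text):

```python
import numpy as np

# Model sequences for Definition 11.7: lambda_n(A) = n^2 (the Dirichlet
# Laplacian on [0, pi]) and lambda_n(B) = n^2 + 5.  The uniformly bounded
# shift forces sqrt(lambda_n(B)) - sqrt(lambda_n(A)) -> 0, so these two
# hypothetical operators are asymptotically isospectral.
n = np.arange(1, 10001, dtype=float)
lam_A = n ** 2
lam_B = n ** 2 + 5.0

diff = np.sqrt(lam_B) - np.sqrt(lam_A)   # = 5 / (sqrt(lam_B) + sqrt(lam_A))
assert np.all(np.diff(diff) < 0)         # strictly decreasing in n
assert diff[-1] < 1e-3                   # already tiny at n = 10^4
print("sqrt-eigenvalue differences tend to zero")
```

This is exactly the mechanism of (11.29): a uniformly bounded eigenvalue difference divided by *k<sub>n</sub>* sums growing like *n*.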

In order to make the presentation more transparent, let us discuss the case of standard conditions first and return to the case of general vertex conditions in the next section.

**Theorem 11.8** *Let L*<sup>st</sup> *and L*<sub>*q*</sub><sup>st</sup> *be the standard Laplace and standard Schrödinger operators on a compact finite metric graph* Γ *and let the potential q in the Schrödinger operator be absolutely integrable, q* ∈ *L*<sub>1</sub>(Γ)*. Then the Laplace and Schrödinger operators, L*<sup>st</sup> *and L*<sub>*q*</sub><sup>st</sup>*, are asymptotically isospectral; moreover the difference between their eigenvalues is uniformly bounded:*

$$|\lambda\_n(L^{\rm st}) - \lambda\_n(L\_q^{\rm st})| \le C(\Gamma, q),\tag{11.32}$$

*where the constant C*(Γ*, q*) *depends on the graph and the potential q, but is independent of n.*

We postpone the proof of the theorem and first try to derive an upper estimate for *λ<sub>n</sub>*(*L*<sub>*q*</sub><sup>st</sup>) using a naive approach. In the case of standard conditions the quadratic form is

$$\mathcal{Q}\_{L\_q^{\rm st}}(u, u) = \int\_{\Gamma} |u'|^2 dx + \int\_{\Gamma} q(x) |u(x)|^2 dx,$$

where *u* is an arbitrary *W*<sub>2</sub><sup>1</sup>(Γ) function continuous at the vertices. The form can be estimated from above by

$$\mathcal{Q}\_{L\_q^{\rm st}}(u, u) \le \mathcal{Q}\_{L\_{q\_+}^{\rm st}}(u, u) = \int\_{\Gamma} |u'|^2 dx + \int\_{\Gamma} q\_+(x) |u(x)|^2 dx,\tag{11.33}$$

where *q*<sup>+</sup> is the positive part of the potential *q*:

$$q(\mathbf{x}) = q\_+(\mathbf{x}) - q\_-(\mathbf{x}),\ q\_\pm(\mathbf{x}) \ge 0. \tag{11.34}$$

This step cannot be improved much, since the new estimate coincides with the original one in the case where *q* is nonnegative.

The idea is to choose a concrete *n*-dimensional subspace *V*<sub>*n*</sub><sup>0</sup>; then the Rayleigh quotient will give not the exact value of *λ<sub>n</sub>*(*L*<sub>*q*</sub><sup>st</sup>), but an upper estimate, if formula (4.51) is used:

$$
\lambda\_n(L\_q^{\mathrm{st}}) = \min\_{\mathcal{V}\_n} \max\_{u \in \mathcal{V}\_n} \frac{\mathcal{Q}\_{L\_q^{\mathrm{st}}}(u, u)}{\|u\|^2} \le \max\_{u \in \mathcal{V}\_n^0} \frac{\mathcal{Q}\_{L\_q^{\mathrm{st}}}(u, u)}{\|u\|^2}.
$$

The only candidate for *V*<sub>*n*</sub><sup>0</sup> we have is the linear span of the Laplacian eigenfunctions *ψ*<sub>*j*</sub><sup>*L*<sup>st</sup></sup> corresponding to the *n* lowest eigenvalues

$$\mathcal{V}\_n^0 = \mathcal{L}\left\{ \psi\_1^{L^{\rm st}}, \psi\_2^{L^{\rm st}}, \dots, \psi\_n^{L^{\rm st}} \right\}. \tag{11.35}$$

If *q* ≡ 0 then this estimate gives the exact value of *λ<sub>n</sub>*. Therefore it is natural to split the quadratic form as follows:

$$
\lambda\_n(L\_q^{\mathrm{st}}) \le \max\_{u \in \mathcal{V}\_n^0} \frac{\mathcal{Q}\_{L\_q^{\mathrm{st}}}(u, u)}{\|u\|^2} \le \max\_{u \in \mathcal{V}\_n^0} \frac{\int\_{\Gamma} |u'|^2 dx}{\|u\|^2} + \max\_{u \in \mathcal{V}\_n^0} \frac{\int\_{\Gamma} q\_+(x) |u|^2 dx}{\|u\|^2}.
$$

Then the first quotient is equal to *λ<sub>n</sub>*(*L*<sup>st</sup>) and the maximum is attained on

$$u = \psi\_n^{L^{\rm st}}.$$

If nothing about *q* is known, then to estimate the second quotient one may use

$$\int\_{\Gamma} q\_{+}(\mathbf{x}) |u|^{2} dx \le \|q\_{+}\|\_{L\_{1}(\Gamma)} \left( \max\_{\mathbf{x} \in \Gamma} |u(\mathbf{x})| \right)^{2} . \tag{11.36}$$

We need to estimate |*u*(*x*)|<sup>2</sup>, provided *u* = Σ<sub>*j*=1</sub><sup>*n*</sup> *α<sub>j</sub>* *ψ*<sub>*j*</sub><sup>*L*<sup>st</sup></sup>. The Laplacian eigenfunctions *ψ*<sub>*j*</sub><sup>*L*<sup>st</sup></sup> possess the uniform upper bound

$$|\psi\_j^{L^{\rm st}}(x)| \le c \|\psi\_j^{L^{\rm st}}\|\_{L\_2(\Gamma)},\tag{11.37}$$

where the constant *c* depends on the graph Γ only. To prove this one may use the fact that every Laplacian eigenfunction *ψ* is a sine function on each edge and therefore satisfies

$$\|\psi\|\_{L\_2(\Gamma)}^2 \ge \int\_{E\_l} |\psi(x)|^2 dx \ge \left(\max\_{x\in E\_l} |\psi(x)|\right)^2 \frac{1}{2} \left[\frac{\ell\_l k}{2\pi}\right] \frac{2\pi}{k}$$

and take into account that the eigenvalues satisfy Weyl asymptotics, implying that [ℓ<sub>*l*</sub>*k/*2*π*] may be equal to zero just for a finite number of eigenfunctions.
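The lower bound for the *L*<sub>2</sub>-norm of a sine function can be tested numerically; the amplitudes, frequencies and edge lengths below are random illustrative samples.

```python
import numpy as np

# Check of the lower bound behind (11.37): for psi(x) = A*sin(k*x + phi)
# on an edge of length ell,
#   ||psi||_2^2 >= (max |psi|)^2 * (1/2) * floor(ell*k/(2*pi)) * (2*pi/k),
# i.e. half of (max |psi|)^2 per complete period contained in the edge.
rng = np.random.default_rng(0)
for _ in range(50):
    A = rng.uniform(0.5, 2.0)
    k = rng.uniform(1.0, 30.0)
    phi = rng.uniform(0.0, 2 * np.pi)
    ell = rng.uniform(0.5, 3.0)

    x = np.linspace(0.0, ell, 40001)
    psi = A * np.sin(k * x + phi)
    dx = x[1] - x[0]
    norm2 = np.sum((psi[:-1] ** 2 + psi[1:] ** 2) / 2) * dx  # trapezoidal rule
    bound = (np.max(np.abs(psi)) ** 2 * 0.5
             * np.floor(ell * k / (2 * np.pi)) * (2 * np.pi / k))
    assert norm2 >= bound - 1e-3
print("L2 lower bound for sine functions holds in all samples")
```

Each complete period of sin² integrates to half the period times the squared amplitude, which is exactly the factor (1/2)·[ℓ<sub>*l*</sub>*k/*2*π*]·(2*π/k*) in the bound above.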

We get the following explicit estimate for the second quotient

$$\max\_{u \in \mathcal{V}\_n^0} \frac{\int\_{\Gamma} q\_+(x) |u|^2 dx}{\|u\|^2} \le \|q\_+\|\_{L\_1(\Gamma)}\, c^2 \max\_{\alpha\_j} \frac{|\alpha\_1 + \alpha\_2 + \dots + \alpha\_n|^2}{|\alpha\_1|^2 + |\alpha\_2|^2 + \dots + |\alpha\_n|^2}.$$

The maximum is attained when all *α<sub>j</sub>* are equal, for example *α*<sub>1</sub> = *α*<sub>2</sub> = ··· = *α<sub>n</sub>* = 1, leading to

$$\max\_{u \in \mathcal{V}\_n^0} \frac{\int\_{\Gamma} q\_+(x) |u|^2 dx}{\|u\|^2} \le \|q\_+\|\_{L\_1(\Gamma)}\, c^2 n. \tag{11.38}$$

It follows that

$$
\lambda\_n(L\_q^{\rm st}) - \lambda\_n(L^{\rm st}) \le \|q\_+\|\_{L\_1(\Gamma)} c^2 n,\tag{11.39}
$$

i.e. we do not get an estimate uniform in *n*—the estimate grows linearly with *n*. The reason is the splitting of the quadratic form of *L*<sub>*q*</sub><sup>st</sup> into two parts. The maxima of the two parts are attained on intrinsically different vectors: the first term is maximised if *u* = *ψ*<sub>*n*</sub><sup>*L*<sup>st</sup></sup>, while, since nothing is known about the potential, all eigenfunctions may play the same role in the second estimate. This is the reason why the obtained estimate is not optimal. Let us prove the theorem, getting an estimate uniform in *n*.
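The Cauchy–Schwarz step behind (11.38) is easy to verify numerically: the quotient |*α*<sub>1</sub> + ··· + *α<sub>n</sub>*|²/Σ|*α<sub>j</sub>*|² never exceeds *n* and equals *n* for equal coefficients (the dimension and the random samples below are illustrative).

```python
import numpy as np

# The maximisation preceding (11.38): by Cauchy-Schwarz,
#   |a_1 + ... + a_n|^2 / (|a_1|^2 + ... + |a_n|^2) <= n,
# with equality exactly when all coefficients are equal.
rng = np.random.default_rng(1)
n = 7
for _ in range(1000):
    a = rng.normal(size=n) + 1j * rng.normal(size=n)
    ratio = abs(a.sum()) ** 2 / np.sum(np.abs(a) ** 2)
    assert ratio <= n + 1e-12

a_equal = np.ones(n, dtype=complex)
ratio_equal = abs(a_equal.sum()) ** 2 / np.sum(np.abs(a_equal) ** 2)
assert np.isclose(ratio_equal, n)
print("Cauchy-Schwarz bound n is attained at equal coefficients")
```

The factor *n* appearing here is precisely what makes the naive estimate (11.39) grow linearly instead of being uniform.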

*Proof of Theorem 11.8* It is enough to prove the theorem for any fixed total length 𝓛; therefore we assume that 𝓛 = *π* in order to simplify formulas. The proof is divided into two parts, proving the upper and lower estimates separately.

**Upper Estimate** As before we use the estimate

$$\lambda\_n(L\_q^{\rm st}) \le \max\_{u \in \mathcal{V}\_n^0} \frac{\int\_{\Gamma} |u'|^2 dx + \int\_{\Gamma} q\_+ |u|^2 dx}{\|u\|^2},\tag{11.40}$$

where *V*<sub>*n*</sub><sup>0</sup> is defined by (11.35). Every function *u* = Σ<sub>*j*=1</sub><sup>*n*</sup> *α<sub>j</sub>* *ψ*<sub>*j*</sub><sup>*L*<sup>st</sup></sup> from *V*<sub>*n*</sub><sup>0</sup> can be written as a sum *u* = *u*<sub>1</sub> + *u*<sub>2</sub>, where

$$\begin{array}{l} u\_1 := \alpha\_1 \psi\_1^{L^{\text{st}}} + \alpha\_2 \psi\_2^{L^{\text{st}}} + \dots + \alpha\_{n-p} \psi\_{n-p}^{L^{\text{st}}}, \\ u\_2 := \alpha\_{n-p+1} \psi\_{n-p+1}^{L^{\text{st}}} + \alpha\_{n-p+2} \psi\_{n-p+2}^{L^{\text{st}}} + \dots + \alpha\_n \psi\_n^{L^{\text{st}}}. \end{array} \tag{11.41}$$

Here *p* is a certain natural number to be fixed later (depending on *Γ* and *q*); therefore, as *n* increases, the first function *u*<sub>1</sub> will contain an increasing number of terms, while the second function will always be given by a sum of *p* terms.

Using the fact that *q*<sub>+</sub> is nonnegative we have

$$\int\_{\Gamma} q\_{+} |u\_{1} + u\_{2}|^{2} dx \le 2 \int\_{\Gamma} q\_{+} |u\_{1}|^{2} dx + 2 \int\_{\Gamma} q\_{+} |u\_{2}|^{2} dx. \tag{11.42}$$

Then taking into account that *u*<sub>1</sub> and *u*<sub>2</sub>, as well as *u*′<sub>1</sub> and *u*′<sub>2</sub>, are mutually orthogonal we arrive at

$$\mathcal{Q}\_{L\_q^{\text{st}}}(u,u) \leq \underbrace{\int\_{\Gamma} |u\_1'|^2 dx + 2 \int\_{\Gamma} q\_+ |u\_1|^2 dx}\_{=\mathcal{Q}\_{L\_{2q\_+}^{\text{st}}}(u\_1,u\_1)} + \underbrace{\int\_{\Gamma} |u\_2'|^2 dx + 2 \int\_{\Gamma} q\_+ |u\_2|^2 dx}\_{=\mathcal{Q}\_{L\_{2q\_+}^{\text{st}}}(u\_2,u\_2)}.$$

To estimate the first form we use the Sobolev estimate (11.12)

$$\begin{aligned} \mathcal{Q}\_{L\_{2q\_+}^{\text{st}}}(u\_1, u\_1) &= \int\_{\Gamma} |u\_1'|^2 dx + 2 \int\_{\Gamma} q\_+ |u\_1|^2 dx \\ &\le (1 + 2\epsilon \|q\_+\|\_{L\_1(\Gamma)}) \int\_{\Gamma} |u\_1'|^2 dx + \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)} \int\_{\Gamma} |u\_1|^2 dx \\ &\le \left( (1 + 2\epsilon \|q\_+\|\_{L\_1(\Gamma)}) \lambda\_{n-p}(L^{\text{st}}) + \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)} \right) \|u\_1\|\_{L\_2(\Gamma)}^2. \end{aligned}$$

The key point is that *ε* can be chosen in such a way that

$$(1 + 2\epsilon \|q\_{+}\|\_{L\_{1}(\Gamma)})\lambda\_{n-p} + \frac{4}{\epsilon} \|q\_{+}\|\_{L\_{1}(\Gamma)} < \lambda\_{n}$$

holds. This will be shown later.

On the other hand, our naive approach (11.38) can be applied to the second form, with the only difference that the number of eigenfunctions involved is *p*, not *n*:

$$\mathcal{Q}\_{L\_{2q\_+}^{\text{st}}}(u\_2,u\_2) \le \left(\lambda\_n(L^{\text{st}}) + 2c^2 \|q\_+\|\_{L\_1(\Gamma)}\, p\right) \|u\_2\|\_{L\_2(\Gamma)}^2.$$

Putting together the obtained estimates we get

$$\begin{split} \mathcal{Q}\_{L\_q^{\text{st}}}(u,u) &\leq \left( (1+2\epsilon \|q\_+\|\_{L\_1(\Gamma)})\lambda\_{n-p}(L^{\text{st}}) + \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)} \right) \|u\_1\|\_{L\_2(\Gamma)}^2 \\ &\quad + \left(\lambda\_n(L^{\text{st}}) + 2c^2 \|q\_+\|\_{L\_1(\Gamma)}\, p\right) \|u\_2\|\_{L\_2(\Gamma)}^2 \\ &\leq \lambda\_n(L^{\text{st}}) \|u\|\_{L\_2(\Gamma)}^2 + 2c^2 \|q\_+\|\_{L\_1(\Gamma)}\, p\, \|u\|\_{L\_2(\Gamma)}^2 \\ &\quad - \left(\lambda\_n(L^{\text{st}}) - (1+2\epsilon\|q\_+\|\_{L\_1(\Gamma)})\lambda\_{n-p}(L^{\text{st}}) - \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)}\right) \|u\_1\|\_{L\_2(\Gamma)}^2 . \end{split}$$

#### 276 11 Quadratic Forms and Spectral Estimates

We would get the desired estimate

$$\lambda\_n(L\_q^{\text{st}}) \le \max\_{u \in \mathcal{V}\_n^0} \frac{\mathcal{Q}\_{L\_q^{\text{st}}}(u, u)}{\|u\|\_{L\_2(\Gamma)}^2} \le \lambda\_n(L^{\text{st}}) + C \tag{11.43}$$

with *C* = 2*c*<sup>2</sup>‖*q*<sub>+</sub>‖<sub>*L*<sub>1</sub>(*Γ*)</sub> *p*, if we manage to prove that

$$\lambda\_n(L^{\rm st}) - (1 + 2\epsilon \|q\_+\|\_{L\_1(\Gamma)})\lambda\_{n-p}(L^{\rm st}) - \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)} > 0\tag{11.44}$$

for a certain *ε* that may depend on *n* and a certain *p* independent of *n*. We use the following two-sided estimate for the Laplacian eigenvalues proven in Sect. 4.6

$$\left(n - M\right)^2 \le \lambda\_n \left(L^{\text{st}}\right) \le \left(n + N\right)^2. \tag{11.45}$$

Let us choose *ε* = 1/*n*; this gives us

$$\begin{aligned} &\lambda\_n(L^{\rm st}) - (1 + 2\epsilon \|q\_+\|\_{L\_1(\Gamma)})\lambda\_{n-p}(L^{\rm st}) - \frac{4}{\epsilon} \|q\_+\|\_{L\_1(\Gamma)} \\ &= \lambda\_n(L^{\rm st}) - (1 + 2/n \|q\_+\|\_{L\_1(\Gamma)})\lambda\_{n-p}(L^{\rm st}) - 4n \|q\_+\|\_{L\_1(\Gamma)} \\ &\ge \left(n - M\right)^2 - (1 + 2/n \|q\_+\|\_{L\_1(\Gamma)})(n - p + N)^2 - 4n \|q\_+\|\_{L\_1(\Gamma)} \\ &= 2n \left(p - M - N - 3\|q\_+\|\_{L\_1(\Gamma)}\right) + \mathcal{O}(1). \end{aligned}$$

We see that for any fixed integer *p* > *M* + *N* + 3‖*q*<sub>+</sub>‖<sub>*L*<sub>1</sub>(*Γ*)</sub> the expression is positive for sufficiently large *n*, and the difference between the eigenvalues possesses the uniform upper estimate

$$
\lambda\_n(L\_q^{\rm st}) - \lambda\_n(L^{\rm st}) \le C. \tag{11.46}
$$
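The positivity argument above is easy to test numerically. In the sketch below the values of *M*, *N* and ‖*q*<sub>+</sub>‖<sub>*L*<sub>1</sub></sub> are illustrative assumptions (not taken from the text); it checks that the left-hand side of (11.44) with *ε* = 1/*n* is eventually positive and grows at the predicted linear rate:

```python
M, N, q = 2, 3, 1.5            # illustrative values of M, N and ||q_+||_{L_1}
p = int(M + N + 3 * q) + 1     # a fixed integer p > M + N + 3 ||q_+||_{L_1}

def lhs(n):
    # left-hand side of (11.44) with eps = 1/n and the two-sided
    # bounds (11.45) substituted for the Laplacian eigenvalues
    return (n - M)**2 - (1 + 2 * q / n) * (n - p + N)**2 - 4 * n * q

# positive for all sufficiently large n ...
assert all(lhs(n) > 0 for n in range(100, 2000))
# ... growing linearly with slope 2 (p - M - N - 3 ||q_+||)
slope = (lhs(2000) - lhs(1000)) / 1000
assert abs(slope - 2 * (p - M - N - 3 * q)) < 0.1
```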

If one is interested in the difference between the eigenvalues for large *n* only, then the constant *C* can be taken equal to

$$C = 2c^2 \|q\|\_{L\_1(\Gamma)} \left(M + N + 3\|q\|\_{L\_1(\Gamma)} + 1\right),\tag{11.47}$$

but this value of *C* may be too small to ensure that (11.46) holds for all *n*, since in proving (11.44) we assumed that *n* is sufficiently large. The latter assumption does not affect the final result, since for a finite number of eigenvalues estimate (11.46) is always satisfied, but the exact value of the constant *C* may be affected.

**Lower Estimate** To get a lower estimate we are going to use the *max-min* principle (4.52). The first step is to notice that

$$\mathcal{Q}\_{L\_q^{\text{st}}}(u,u) \geq \int\_{\Gamma} |u'(x)|^2 dx - \int\_{\Gamma} q\_{-}(x) |u(x)|^2 dx.\tag{11.48}$$

Using the same subspace V<sup>0</sup><sub>*n*−1</sub> we get

$$
\lambda\_n(L\_q^{\text{st}}) \ge \min\_{u \perp \mathcal{V}\_{n-1}^0} \frac{\mathcal{Q}\_{L\_q^{\text{st}}}(u, u)}{\|u\|\_{L\_2(\Gamma)}^2}.
$$

Since *u* is orthogonal to V<sup>0</sup><sub>*n*−1</sub> it possesses the representation

$$u = \sum\_{j=n}^{\infty} \alpha\_j \psi\_j^{L^{\text{st}}}.$$

As before let us split the function *u* = *u*<sub>1</sub> + *u*<sub>2</sub>:

$$\begin{aligned} u\_1 &:= \alpha\_n \psi\_n^{L^{\text{st}}} + \alpha\_{n+1} \psi\_{n+1}^{L^{\text{st}}} + \dots + \alpha\_{n+p-1} \psi\_{n+p-1}^{L^{\text{st}}}, \\ u\_2 &:= \alpha\_{n+p} \psi\_{n+p}^{L^{\text{st}}} + \alpha\_{n+p+1} \psi\_{n+p+1}^{L^{\text{st}}} + \dots \end{aligned} \tag{11.49}$$

Note two important differences compared to the splitting (11.41): the representation now starts from the index *n*, and it is the first function *u*<sub>1</sub> that contains precisely *p* terms, while the second function *u*<sub>2</sub> is given by an infinite series.


Using the fact that *q*<sub>−</sub> is nonnegative we may split the quadratic form

$$\mathcal{Q}\_{L\_q^{\text{st}}}(u,u) \ge \underbrace{\int\_{\Gamma} |u\_1'|^2 dx - 2 \int\_{\Gamma} q\_- |u\_1|^2 dx}\_{=\mathcal{Q}\_{L\_{-2q\_-}^{\text{st}}}(u\_1,u\_1)} + \underbrace{\int\_{\Gamma} |u\_2'|^2 dx - 2 \int\_{\Gamma} q\_- |u\_2|^2 dx}\_{=\mathcal{Q}\_{L\_{-2q\_-}^{\text{st}}}(u\_2,u\_2)}.$$

Now the first function *u*<sup>1</sup> is given by a finite number of terms and the following estimate can be used

$$\mathcal{Q}\_{L\_{-2q\_-}^{\text{st}}}(u\_1,u\_1) \ge \left(\lambda\_n(L\_0^{\text{st}}) - 2c^2 p \|q\_-\|\_{L\_1(\Gamma)}\right) \|u\_1\|\_{L\_2(\Gamma)}^2.\tag{11.50}$$

To estimate the second form we use (11.36) and the Sobolev estimate (11.12) for max |*u*<sub>2</sub>(*x*)|<sup>2</sup>. We get

$$\begin{split} \mathcal{Q}\_{L\_{-2q\_-}^{\text{st}}}(u\_2,u\_2) &\geq \|u\_2'\|\_{L\_2}^2 - 2\|q\_-\|\_{L\_1} \max |u\_2(x)|^2 \\ &\geq \|u\_2'\|\_{L\_2}^2 - 2\|q\_-\|\_{L\_1} \left(\epsilon \|u\_2'\|\_{L\_2}^2 + \frac{2}{\epsilon} \|u\_2\|\_{L\_2}^2\right) \\ &= (1 - 2\epsilon \|q\_-\|\_{L\_1}) \|u\_2'\|\_{L\_2}^2 - \frac{4\|q\_-\|\_{L\_1}}{\epsilon} \|u\_2\|\_{L\_2}^2. \end{split}$$

Taking into account

$$\|u\_2'\|\_{L\_2}^2 \ge \lambda\_{n+p}(L\_0^{\text{st}}) \|u\_2\|\_{L\_2}^2,\tag{11.51}$$

we arrive at

$$\mathcal{Q}\_{L\_{-2q\_-}^{\text{st}}}(u\_2, u\_2) \ge \left( (1 - 2\epsilon \|q\_-\|\_{L\_1(\Gamma)}) \lambda\_{n+p}(L\_0^{\text{st}}) - \frac{4 \|q\_-\|\_{L\_1(\Gamma)}}{\epsilon} \right) \|u\_2\|\_{L\_2}^2. \tag{11.52}$$

Summing the estimates (11.50) and (11.52) and taking into account that ‖*u*<sub>2</sub>‖<sup>2</sup><sub>*L*<sub>2</sub></sub> = ‖*u*‖<sup>2</sup><sub>*L*<sub>2</sub></sub> − ‖*u*<sub>1</sub>‖<sup>2</sup><sub>*L*<sub>2</sub></sub> we get

$$\begin{split} \mathcal{Q}\_{L\_q^{\text{st}}}(u,u) &\geq \left(\lambda\_n(L\_0^{\text{st}}) - 2c^2 p\|q\_-\|\_{L\_1}\right) \|u\_1\|\_{L\_2}^2 \\ &\quad + \left((1 - 2\epsilon\|q\_-\|\_{L\_1})\lambda\_{n+p}(L\_0^{\text{st}}) - \frac{4\|q\_-\|\_{L\_1}}{\epsilon}\right) \|u\_2\|\_{L\_2}^2 \\ &\geq \lambda\_n(L\_0^{\text{st}}) \|u\|\_{L\_2}^2 - 2c^2 p\|q\_-\|\_{L\_1}\|u\|\_{L\_2}^2 \\ &\quad + \left((1 - 2\epsilon\|q\_-\|\_{L\_1})\lambda\_{n+p}(L\_0^{\text{st}}) - \frac{4\|q\_-\|\_{L\_1}}{\epsilon} - \lambda\_n(L\_0^{\text{st}})\right) \|u\_2\|\_{L\_2}^2. \end{split}$$

As before, to prove the desired uniform estimate it is sufficient to show that for large enough *n* the following expression can be made positive by choosing an appropriate *ε*:

$$\left(1 - 2\epsilon \|q\_{-}\|\_{L\_{1}}\right)\lambda\_{n+p}(L\_{0}^{\text{st}}) - \frac{4\|q\_{-}\|\_{L\_{1}}}{\epsilon} - \lambda\_{n}(L\_{0}^{\text{st}}) > 0. \tag{11.53}$$

Again we use (11.45): we substitute *λ*<sub>*n*+*p*</sub>(*L*<sub>0</sub><sup>st</sup>) with its lower bound and *λ*<sub>*n*</sub>(*L*<sub>0</sub><sup>st</sup>) with its upper bound. As before we choose *ε* = 1/*n*, so the left-hand side of (11.53) becomes

$$\begin{aligned} &(1 - 2\epsilon \|q\_{-}\|\_{L\_1})\lambda\_{n+p}(L\_0^{\text{st}}) - \frac{4}{\epsilon} \|q\_{-}\|\_{L\_1} - \lambda\_n(L\_0^{\text{st}}) \\ &\geq (1 - 2\|q\_{-}\|\_{L\_1}/n)(n+p-M)^2 - 4n\|q\_{-}\|\_{L\_1} - (n+N)^2 \\ &= 2n\left(p - M - N - 3\|q\_{-}\|\_{L\_1}\right) + \mathcal{O}(1). \end{aligned}$$

If *p* > *M* + *N* + 3‖*q*<sub>−</sub>‖<sub>*L*<sub>1</sub></sub>, then for sufficiently large *n* the expression is positive, hence the following lower estimate holds

$$
\lambda\_n(L\_q^{\rm st}) - \lambda\_n(L\_0^{\rm st}) \ge -C,\tag{11.54}
$$

where the exact value of *C* is determined by the difference between the first few eigenvalues as described above. Here we use (11.31) with *ε* = 1.

If we are interested in large values of *n* only, then we have an explicit formula for the constant *C*(*Γ*, *q*) given by (11.47).

**Problem 47** The constant *C* appearing in (11.32) for sufficiently large *n* can be taken in the form (11.47). On the other hand, if *q* ∈ *L*<sub>∞</sub>(*Γ*), then *C* can be taken equal to ‖*q*‖<sub>*L*<sub>∞</sub>(*Γ*)</sub>. Which expression provides the best value and why?

**Problem 48** Show that Theorem 11.8 holds for non-Robin vertex conditions.

## **11.3 Spectral Estimates for General Vertex Conditions**

As the title of this section indicates, we are going to prove spectral estimates in the case where the vertex conditions are not assumed to be standard. The most general class of vertex conditions described in Chap. 3 will be covered.

Assume that a finite compact metric graph *Γ*, a real *L*<sub>1</sub>-potential *q* ∈ *L*<sub>1</sub>(*Γ*) and unitary irreducible vertex matrices *S*<sup>*m*</sup> are given. Then the corresponding self-adjoint Schrödinger operator *L*<sup>**S**</sup><sub>*q*</sub>(*Γ*) is defined using Definition 4.1. The spectral estimate we are going to prove resembles very much the estimate (11.32), with the only difference that the constant *C* should obviously depend not only on the graph *Γ* and the potential *q*, but on the vertex conditions as well. We did not consider the most general vertex conditions in the previous section just in order to simplify our presentation. What we need now is just to go through the proof and amend all necessary formulas.

For the estimates we used quadratic forms, hence instead of the matrices *S*<sup>*m*</sup> we need to consider the corresponding Hermitian matrices *A*<sup>*m*</sup> appearing in the generalised Robin conditions (11.7). Every Hermitian matrix can be written as a difference of two nonnegative matrices

$$A^m = A^m\_+ - A^m\_-,\tag{11.55}$$

where *A*<sup>*m*</sup><sub>±</sub> are defined using the spectral representation of *A*<sup>*m*</sup>

$$A^m = \sum\_{\lambda\_n(A^m)\neq 0} \lambda\_n(A^m) \langle \vec{e}\_n, \cdot \rangle \vec{e}\_n,$$

as follows

$$\begin{cases} A\_+^m = \sum\_{\lambda\_n(A^m) > 0} \lambda\_n(A^m) \langle \vec{e}\_n, \cdot \rangle \vec{e}\_n, \\ A\_-^m = -\sum\_{\lambda\_n(A^m) < 0} \lambda\_n(A^m) \langle \vec{e}\_n, \cdot \rangle \vec{e}\_n. \end{cases} \tag{11.56}$$
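The decomposition (11.55)-(11.56) is straightforward to realise numerically; a minimal sketch using NumPy's Hermitian eigendecomposition (the sample matrix is an arbitrary illustration):

```python
import numpy as np

def hermitian_parts(A):
    # Split a Hermitian matrix as A = A_plus - A_minus, cf. (11.55)-(11.56),
    # using its spectral representation
    vals, vecs = np.linalg.eigh(A)
    A_plus = (vecs * np.clip(vals, 0, None)) @ vecs.conj().T
    A_minus = (vecs * np.clip(-vals, 0, None)) @ vecs.conj().T
    return A_plus, A_minus

A = np.array([[1.0, 2.0], [2.0, -3.0]])          # an arbitrary Hermitian matrix
Ap, Am = hermitian_parts(A)
assert np.allclose(A, Ap - Am)                   # A = A_+ - A_-
assert np.all(np.linalg.eigvalsh(Ap) >= -1e-12)  # A_+ is nonnegative
assert np.all(np.linalg.eigvalsh(Am) >= -1e-12)  # A_- is nonnegative
```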

Let us analyse which ingredients were crucial for the proof of Theorem 11.8. The proof can be carried out without major modifications, provided the following estimates hold; one might only need to amend the constants accordingly.

1. The lower and upper bounds (11.48) and (11.33) for the perturbation term can be modified as follows:

$$\begin{split} &\int\_{\Gamma} |u'(x)|^2 dx \underbrace{- \int\_{\Gamma} q\_{-}(x) |u(x)|^2 dx - \sum\_{m=1}^{M} \langle P\_{-1}^{m} \vec{u}(V^{m}), A\_{-}^{m} P\_{-1}^{m} \vec{u}(V^{m}) \rangle\_{\mathbb{C}^{d\_m}}}\_{=:\, -\mathcal{B}\_{L\_q^{\mathbf{S}}}^{-}(u,u)} \\ &\quad \leq \underbrace{\int\_{\Gamma} |u'(x)|^2 dx + \int\_{\Gamma} q(x) |u(x)|^2 dx + \sum\_{m=1}^{M} \langle P\_{-1}^{m} \vec{u}(V^{m}), A^{m} P\_{-1}^{m} \vec{u}(V^{m}) \rangle\_{\mathbb{C}^{d\_m}}}\_{=\, \mathcal{Q}\_{L\_q^{\mathbf{S}}}(u,u)} \\ &\quad \leq \int\_{\Gamma} |u'(x)|^2 dx + \underbrace{\int\_{\Gamma} q\_{+}(x) |u(x)|^2 dx + \sum\_{m=1}^{M} \langle P\_{-1}^{m} \vec{u}(V^{m}), A\_{+}^{m} P\_{-1}^{m} \vec{u}(V^{m}) \rangle\_{\mathbb{C}^{d\_m}}}\_{=:\, \mathcal{B}\_{L\_q^{\mathbf{S}}}^{+}(u,u)}. \end{split} \tag{11.57}$$

Note that the quadratic form *B*<sup>+</sup> does not depend on the negative part *q*<sub>−</sub> of the potential *q* and on the negative part *A*<sup>*m*</sup><sub>−</sub> of the matrix *A*<sup>*m*</sup>. Similarly, *B*<sup>−</sup> is independent of *q*<sub>+</sub> and *A*<sup>*m*</sup><sub>+</sub>.

2. Taking into account that all functions from the space *W*<sup>1</sup><sub>2</sub>(*Γ* \ **V**) are continuous on every edge, we may modify (11.36) as

$$\begin{split} \mathcal{B}\_{L\_q^{\mathbf{S}}}^{\pm}(u, u) &\leq \left( \|q\_\pm\|\_{L\_1(\Gamma)} + \sum\_{m=1}^M d\_m \|A\_\pm^m\| \right) \|u\|\_{L\_\infty(\Gamma)}^2 \\ &\leq \left( \|q\|\_{L\_1(\Gamma)} + \sum\_{m=1}^M d\_m \|A^m\| \right) \|u\|\_{L\_\infty(\Gamma)}^2. \end{split} \tag{11.58}$$

3. Taking into account the nonnegativity of *q*<sub>±</sub> and *A*<sup>*m*</sup><sub>±</sub> we generalise estimate (11.42) as

$$\mathcal{B}\_{L\_q^{\mathbf{S}}}^{\pm}(u\_1+u\_2,\,u\_1+u\_2) \le 2\mathcal{B}\_{L\_q^{\mathbf{S}}}^{\pm}(u\_1,u\_1) + 2\mathcal{B}\_{L\_q^{\mathbf{S}}}^{\pm}(u\_2,u\_2). \tag{11.59}$$


Therefore we may just repeat the proof of Theorem 11.8 to get:

**Theorem 11.9** *Let L*<sup>**S**</sup><sub>*q*</sub>(*Γ*) *be an arbitrary Schrödinger operator on a compact finite metric graph with absolutely integrable potential q* ∈ *L*<sub>1</sub>(*Γ*) *and unitary irreducible vertex matrices S*<sup>*m*</sup>*, m* = 1, 2, ..., *M. Consider the high energy limit of the vertex scattering matrices S*<sup>*m*</sup><sub>**v**</sub>(∞) *and the corresponding reference Laplace operator L*<sup>**S**<sub>**v**</sub>(∞)</sup> *defined on the graph Γ*<sup>∞</sup>*. Then the Schrödinger and the reference Laplace operators are asymptotically isospectral; moreover, the difference between their eigenvalues is uniformly bounded*

$$|\lambda\_n(L^{\mathbf{S}\_\mathbf{v}(\infty)}(\Gamma^{\infty})) - \lambda\_n(L^{\mathbf{S}}\_q(\Gamma))| \le C(\Gamma, q, \mathbf{S}),\tag{11.60}$$

*where the constant C*(*Γ*, *q*, **S**) *depends on the graph Γ, the potential q and the vertex matrix* **S***, but is independent of n.*

*Proof* As we already mentioned, the proof of Theorem 11.8 can be repeated without major modifications. As a result one arrives at the conclusion that the spectrum of *L*<sup>**S**</sup><sub>*q*</sub> is close to the spectrum of the self-adjoint operator corresponding to the quadratic form

$$\int\_{\Gamma} |u'(x)|^2 dx$$

with the domain of functions from *W*<sup>1</sup><sub>2</sub>(*Γ* \ **V**) satisfying the generalised Dirichlet conditions (11.3) at the vertices. The self-adjoint operator corresponding to this quadratic form is the Laplace operator defined on the functions from *W*<sup>2</sup><sub>2</sub>(*Γ* \ **V**) satisfying both generalised Dirichlet (11.3) and generalised Neumann conditions (11.23) at the vertices. To write such vertex conditions in the form (3.21) one has to substitute the unitary matrix **S** with the high energy limit of the vertex scattering matrix **S**<sub>**v**</sub>(∞).

The main difference between Theorems 11.8 and 11.9 is that the reference Laplacian is not the standard Laplacian on the same metric graph *Γ*, but the Laplace operator on the graph *Γ*<sup>∞</sup> with the vertex conditions given by (11.3) and (11.23). Hence the reference Laplacian is not necessarily a standard Laplacian on *Γ*<sup>∞</sup>. Of course, if the vertex conditions in *L*<sup>**S**</sup><sub>*q*</sub> are asymptotically properly connecting and standard, then the reference operator is just the standard Laplacian on *Γ*.

**Problem 49** Consider the cycle of length *π* with a single degree two vertex. Assume that the vertex conditions are given by the matrix
$$\mathbf{S} = \frac{1}{2}\begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix}$$
from Example 11.2. Show that the spectrum tends to the spectrum of the Neumann Laplacian on the interval [0, *π*].

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 12 Spectral Gap and Dirichlet Ground State**

This chapter is entirely devoted to the study of the lowest non-trivial eigenvalue of operators on graphs. For standard Laplacians on connected graphs the lowest eigenvalue is *λ*<sub>1</sub> = 0 and we shall be interested in *λ*<sub>2</sub>, which coincides with the spectral gap *λ*<sub>2</sub> − *λ*<sub>1</sub>. For Laplacians with Dirichlet vertices it is already non-trivial to calculate the ground state *λ*<sub>1</sub> > 0. To study these quantities similar methods can be used: Eulerian path and symmetrisation techniques, Cheeger's approach, and surgery principles. Most of these methods work for Schrödinger operators, but in order to illuminate connections between spectrum and topology/geometry we shall focus on standard and Dirichlet Laplacians. The methods developed will be extended to higher eigenvalues in the following chapter.

## **12.1 Fundamental Estimates**

Our first step is to obtain a fundamental estimate for the spectral gap in terms of the total graph length L(*Γ*). For standard Laplacians the eigenvalues decrease as *α*<sup>−2</sup> when the lengths of the edges are scaled by a factor *α* > 1, hence it is clear that the estimate should contain the factor L<sup>−2</sup>. Alternatively one could study graphs having fixed total length, say L = *π*. There are two alternative methods to prove the estimate: via the symmetrisation technique or using Eulerian cycles. Both approaches will be presented.

**Theorem 12.1** *Let Γ be a connected finite metric graph with the total length* L(*Γ*)*. The spectral gap for the standard Laplacian on Γ can be estimated as follows*

$$
\lambda\_2(\Gamma) \ge \left(\frac{\pi}{\mathcal{L}(\Gamma)}\right)^2. \tag{12.1}
$$


**Remark 12.2** It will be clear from the proof that the obtained estimate is sharp, since the equality is attained if the graph is given just by one interval.
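Estimate (12.1) can be tested numerically. Below is a minimal finite-difference sketch; the 3-star graph, its edge lengths and the mesh parameters are illustrative assumptions, not data from the text:

```python
import numpy as np

def star_gap(lengths, n=200):
    # spectral gap of the standard (Kirchhoff) Laplacian on a star graph,
    # approximated by a mass-lumped finite-difference scheme
    edges, idx = [], 1
    for ell in lengths:
        edges.append((ell / n, list(range(idx, idx + n))))
        idx += n
    K = np.zeros((idx, idx))      # stiffness matrix
    m = np.zeros(idx)             # lumped mass
    for h, nodes in edges:
        chain = [0] + nodes       # node 0 is the common central vertex
        for a, b in zip(chain, chain[1:]):
            K[a, a] += 1 / h; K[b, b] += 1 / h
            K[a, b] -= 1 / h; K[b, a] -= 1 / h
            m[a] += h / 2; m[b] += h / 2
    S = K / np.sqrt(m) / np.sqrt(m)[:, None]   # symmetrised M^(-1/2) K M^(-1/2)
    return np.linalg.eigvalsh(S)[1]            # eigenvalue 0 is the constant

lengths = [1.0, 0.7, 0.5]                      # an illustrative 3-star
gap = star_gap(lengths)
assert gap >= (np.pi / sum(lengths))**2        # fundamental estimate (12.1)
```

For this star the gap is well above the fundamental bound, in line with the remark that equality requires a single interval.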

This theorem has been proven independently by different authors: S. Nicaise [399, 400], L. Friedlander [225], P. Kurasov and S. Naboko [343]. We are going to present two proofs of the theorem based on two different techniques: Eulerian paths and symmetrisation. The first technique treats a metric graph as a geometric object and is rather illustrative. The second technique is based on the coarea formula and therefore establishes a bridge between quantum graphs and partial differential equations.

## *12.1.1 Eulerian Path Technique*

In this section we follow closely the article [343], where the Eulerian path technique was first presented. Essentially the same method was described by S. Nicaise without exploiting Eulerian paths.

*Proof of Theorem 12.1 Using Eulerian Path Technique* The first nontrivial eigenvalue of *L*<sup>st</sup>(*Γ*) can be calculated by minimising the Rayleigh quotient (see Proposition 4.19)

$$\lambda\_2(\Gamma) = \min\_{u \perp 1} \frac{\int\_{\Gamma} |u'(x)|^2 dx}{\int\_{\Gamma} |u(x)|^2 dx},\tag{12.2}$$

where the minimum is taken over all admissible functions *u*: belonging to the Sobolev space *W*<sup>1</sup><sub>2</sub> on every edge and continuous on the whole of *Γ*. Note that one may extend the set of admissible functions by allowing continuous piecewise *W*<sup>1</sup><sub>2</sub> functions; this will not change the minimiser.

The first nontrivial eigenfunction *ψ*<sup>2</sup> is a minimiser of (12.2) and therefore satisfies

$$
\lambda\_2(\Gamma) = \frac{\int\_\Gamma |\psi\_2'(\mathbf{x})|^2 d\mathbf{x}}{\int\_\Gamma |\psi\_2(\mathbf{x})|^2 d\mathbf{x}}. \tag{12.3}
$$

Consider the graph *Γ*<sup>2</sup>, a certain "double cover" of *Γ*, obtained from *Γ* by doubling every edge (see Fig. 12.1). The new edges have the same lengths and connect the same vertices, so that the set of vertices is preserved. The corresponding vertex degrees are doubled; this will become important soon.

Let us lift up the function *ψ*<sub>2</sub> from *L*<sub>2</sub>(*Γ*) to the function *ψ̂*<sub>2</sub> ∈ *L*<sub>2</sub>(*Γ*<sup>2</sup>) in a symmetric way by assigning it the same values on any new pair of edges as on the original edge in *Γ*. More precisely, consider any edge *E*<sub>*n*</sub> ∈ *Γ* and let us denote by *E*′<sub>*n*</sub> and *E*″<sub>*n*</sub> the corresponding edge pair in *Γ*<sup>2</sup>. It is natural to use the same

**Fig. 12.1** Doubling the edges

parametrisation of the intervals *E*<sub>*n*</sub>, *E*′<sub>*n*</sub>, and *E*″<sub>*n*</sub>. Then we have

$$
\hat{\psi}\_2|\_{E\_n'} = \hat{\psi}\_2|\_{E\_n''} = \psi\_2|\_{E\_n}.
$$

The function *ψ̂*<sub>2</sub> obtained from *ψ*<sub>2</sub> in this way obviously satisfies

$$
\lambda\_2(\Gamma) = \frac{\int\_{\Gamma^2} |\hat{\psi}\_2'(\mathbf{x})|^2 d\mathbf{x}}{\int\_{\Gamma^2} |\hat{\psi}\_2(\mathbf{x})|^2 d\mathbf{x}},
$$

where both the numerator and denominator gain factor 2 compared to (12.3).

Every vertex in *Γ*<sup>2</sup> has even degree and therefore there exists a closed (Eulerian) path P on *Γ*<sup>2</sup> coming along every edge in *Γ*<sup>2</sup> precisely one time [181, 267].<sup>1</sup> The path may go through certain vertices several times. The path can be identified with the loop *S*<sub>2L(*Γ*)</sub> of length 2L. The loop itself is a metric graph and we consider the corresponding standard Laplacian *L*<sup>st</sup>(*S*<sub>2L(*Γ*)</sub>). As before, the ground state for *L*<sup>st</sup>(*S*<sub>2L(*Γ*)</sub>) is *λ*<sub>1</sub> = 0 and to calculate the spectral gap the Rayleigh quotient can be used. The set of admissible trial functions consists of *W*<sup>1</sup><sub>2</sub>(*S*<sub>2L(*Γ*)</sub>) functions having mean value zero.

The function *ψ̂*<sub>2</sub> defined originally on the graph *Γ*<sup>2</sup> can be considered as a function on the loop *S*<sub>2L(*Γ*)</sub>. It is a continuous and piecewise *W*<sup>1</sup><sub>2</sub> function with zero mean value and therefore gives an upper estimate for the Laplacian eigenvalue on the loop

$$
\lambda\_2(S\_{2\mathcal{L}}) \le \frac{\int\_{S\_{2\mathcal{L}}} |\hat{\psi}'\_2(\mathbf{x})|^2 d\mathbf{x}}{\int\_{S\_{2\mathcal{L}}} |\hat{\psi}\_2(\mathbf{x})|^2 d\mathbf{x}} = \lambda\_2(\Gamma).
$$

We obtain the result by noticing that *λ*<sub>2</sub>(*S*<sub>2L</sub>) = (*π*/L(*Γ*))<sup>2</sup>.

In fact we have proven that the minimum of the spectral gap (among all graphs of the same total length) is realised by the single interval *I*<sub>L(*Γ*)</sub> of length L. This is due to the fact that

$$
\lambda\_2(\mathcal{S}\_{2\mathcal{L}(\Gamma)}) = \lambda\_2(I\_{\mathcal{L}(\Gamma)}).\tag{12.4}
$$
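Equality (12.4) and the value *λ*<sub>2</sub> = (*π*/L)<sup>2</sup> are easy to confirm numerically; a finite-difference sketch (mesh sizes are illustrative):

```python
import numpy as np

def interval_gap(length, n=800):
    # Neumann Laplacian on [0, length], second-order finite differences
    h = length / n
    K = 2 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)
    K[0, 0] = K[-1, -1] = 1          # free (Neumann) endpoints
    return np.linalg.eigvalsh(K / h**2)[1]

def loop_gap(length, n=800):
    # Laplacian on a loop of the given length: periodic finite differences
    h = length / n
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    K[0, -1] = K[-1, 0] = -1         # periodic closure
    return np.linalg.eigvalsh(K / h**2)[1]

L = np.pi                             # total length of the graph
# lambda_2 of the loop of double length equals lambda_2 of the interval,
# both approaching (pi / L)^2
assert abs(loop_gap(2 * L) - interval_gap(L)) < 1e-2
assert abs(interval_gap(L) - (np.pi / L)**2) < 1e-2
```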

<sup>1</sup> This result can be traced back to the famous **Seven bridges of Königsberg** problem solved by L. Euler in 1736.

Moreover, the graph *I*<sub>L</sub> is the unique minimiser; this will be proven in Sect. 14.1.2. Remember that the eigenvalue *λ*<sub>2</sub> for the loop is doubly degenerate, while it is simple for the interval.

The cycle *S*<sub>2L</sub> can be obtained from *Γ*<sup>2</sup> by chopping its vertices into degree two vertices; this way to obtain spectral inequalities is known under the name **surgery of graphs** and will be described in detail in Sect. 12.5.

## *12.1.2 Symmetrisation Technique*

In this section we follow closely the article [225], where the symmetrisation technique [286] was applied to obtain estimates for the spectral gap of the standard Laplacian.

*Proof of Theorem 12.1 Using Symmetrisation* The main idea of the method is to introduce a special transformation mapping functions from *L*<sub>2</sub>(*Γ*) to functions from *L*<sub>2</sub>[0, L] and to use it to compare the eigenvalues of the standard Laplacians *L*<sup>st</sup>(*Γ*) and *L*<sup>st</sup>([0, L]).

The spectral gap for both operators coincides with the energy *λ*<sub>2</sub> of the first excited state. Let *ψ*<sub>2</sub> be such an excited state for the operator *L*<sup>st</sup>(*Γ*). This is a continuous function on the compact graph *Γ*; let us denote the points of minimum and maximum of *ψ*<sub>2</sub>(*x*) by *x*<sub>min</sub> and *x*<sub>max</sub>.

The symmetrised function *ψ*<sup>∗</sup> on the interval [0, L] is the unique nondecreasing continuous function such that

$$
\psi^\*(0) = \psi\_2(\mathfrak{x}\_{\min}), \quad \psi^\*(\mathcal{L}) = \psi\_2(\mathfrak{x}\_{\max}).
$$

and

$$m(t) := \text{measure}\left\{ \mathbf{x} \in \Gamma : \psi\_2(\mathbf{x}) < t \right\} = \text{measure}\left\{ \mathbf{s} \in [0, \mathcal{L}] : \psi^\*(\mathbf{s}) < t \right\}.$$

The function *ψ*∗ constructed in this way satisfies

$$\int\_{\Gamma} |\psi\_2(\mathbf{x})|^2 d\mathbf{x} = \int\_0^{\mathcal{L}} |\psi^\*(\mathbf{x})|^2 d\mathbf{x} \tag{12.5}$$

and

$$0 = \int\_{\Gamma} \psi\_2(\mathbf{x})d\mathbf{x} = \int\_0^{\mathcal{L}} \psi^\*(\mathbf{x})d\mathbf{x},\tag{12.6}$$

where the left equality comes from the fact that *ψ*<sub>2</sub> is orthogonal to the constant function, which is the ground state. The measure satisfies

$$m'(t) = \sum\_{\mathbf{x}:\psi\_2(\mathbf{x}) = t} \frac{1}{|\psi'\_2(\mathbf{x})|}. \tag{12.7}$$

This formula holds for all *t* which do not coincide with the local minima and maxima of *ψ*<sub>2</sub> and with the values of *ψ*<sub>2</sub> at the vertices. The formula is obtained by summing up the contributions from different preimages of *t* under *ψ*<sub>2</sub>(*x*). The number of preimages is finite, since *ψ*<sub>2</sub> satisfies the eigenfunction equation on each interval. Let us denote the number of preimages by *n*(*t*). Obviously

$$n(t) \ge 1, \quad \psi\_2(x\_{\rm min}) < t < \psi\_2(x\_{\rm max}), \tag{12.8}$$

since the function *ψ*<sub>2</sub> is continuous.
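Formula (12.7) can be illustrated numerically. The sketch below uses *ψ*(*x*) = sin *x* on a loop of length 2*π* as an assumed toy example and compares a difference quotient of *m*(*t*) with the preimage sum:

```python
import numpy as np

# psi(x) = sin x on a loop of length 2*pi (an illustrative choice)
x = np.linspace(0.0, 2 * np.pi, 2_000_001)
psi = np.sin(x)

def m(t):
    # Lebesgue measure of the sublevel set {x : psi(x) < t}
    return np.mean(psi < t) * 2 * np.pi

# (12.7): m'(t) equals the sum of 1/|psi'| over the preimages of t;
# for sin there are two preimages, each with |psi'| = sqrt(1 - t^2)
for t in [-0.5, 0.0, 0.3, 0.8]:
    dt = 1e-3
    numeric = (m(t + dt) - m(t - dt)) / (2 * dt)
    exact = 2 / np.sqrt(1 - t**2)
    assert abs(numeric - exact) < 1e-2 * exact
```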

The co-area formula (see for example [76]) implies

$$\int\_{\Gamma} |\psi\_2'(x)|^2 dx = \int\_{\psi\_2(x\_{\min})}^{\psi\_2(x\_{\max})} \sum\_{x:\psi\_2(x)=t} |\psi\_2'(x)|\, dt.$$

Using the Cauchy-Schwarz inequality

$$\left(\sum\_{j=1}^{n} \frac{1}{a\_j}\right) \cdot \sum\_{j=1}^{n} a\_j \ge n^2$$

we get the estimate

$$\sum\_{x:\psi\_2(x)=t} |\psi\_2'(x)| \ge n(t)^2 \left( \sum\_{x:\psi\_2(x)=t} \frac{1}{|\psi\_2'(x)|} \right)^{-1} \ge \left( \sum\_{x:\psi\_2(x)=t} \frac{1}{|\psi\_2'(x)|} \right)^{-1} = \frac{1}{m'(t)},\tag{12.9}$$

where we first used (12.8) and then (12.7). Therefore we have

$$\int\_{\Gamma} |\psi\_2'(\mathbf{x})|^2 d\mathbf{x} \ge \int\_{\psi\_2(\chi\_{\min})}^{\psi\_2(\chi\_{\max})} \frac{dt}{m'(t)}.\tag{12.10}$$
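The Cauchy-Schwarz step used in (12.9) is elementary to verify numerically; a quick sketch over random positive samples:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    n = int(rng.integers(1, 20))
    a = rng.uniform(0.1, 10.0, size=n)        # arbitrary positive numbers a_j
    assert (1 / a).sum() * a.sum() >= n**2 - 1e-9
# equality holds precisely when all a_j coincide
a = np.full(7, 3.0)
assert abs((1 / a).sum() * a.sum() - 7**2) < 1e-9
```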

Precisely the same argument can be applied to the function *ψ*∗ with the only difference that all inequalities turn into equalities and there is no need to use (12.8).

Finally we get:

$$\int\_{\Gamma} |\psi\_2'(\mathbf{x})|^2 d\mathbf{x} \ge \int\_0^{\mathcal{L}} |\psi^{\*\prime}(\mathbf{s})|^2 d\mathbf{s}.\tag{12.11}$$

Since the norms of the functions coincide, the Rayleigh quotients satisfy the estimates

$$\underbrace{\frac{\int\_{\Gamma} |\psi\_{2}'(x)|^{2} dx}{\int\_{\Gamma} |\psi\_{2}(x)|^{2} dx}}\_{=\lambda\_{2}(L^{\text{st}}(\Gamma))} \ge \underbrace{\frac{\int\_{0}^{\mathcal{L}} |\psi^{\*\prime}(s)|^{2} ds}{\int\_{0}^{\mathcal{L}} |\psi^{\*}(s)|^{2} ds}}\_{\ge \lambda\_{2}(L^{\text{st}}([0,\mathcal{L}]))}. \tag{12.12}$$

The left quotient gives us precisely *λ*<sub>2</sub>(*L*<sup>st</sup>(*Γ*)), while the right quotient is an upper estimate for *λ*<sub>2</sub>(*L*<sup>st</sup>([0, L])), since *ψ*<sup>∗</sup> is an admissible trial function for the quadratic form of *L*<sup>st</sup>([0, L]) and is orthogonal to the ground state, the constant function (see (12.6)). The precise value of *λ*<sub>2</sub>(*L*<sup>st</sup>([0, L])), in accordance with Proposition 4.19, is given by
$$\lambda\_2(L^{\text{st}}([0,\mathcal{L}])) = \min\_{u \perp 1} \frac{\int\_0^{\mathcal{L}} |u'(x)|^2 dx}{\int\_0^{\mathcal{L}} |u(x)|^2 dx} = \left(\frac{\pi}{\mathcal{L}}\right)^2.$$
We get estimate (12.1).

The proven estimate is fundamental in the sense that it is valid for arbitrary compact finite metric graphs independently of their topological and geometrical properties—everything is determined by the total length L*.* Both methods show that the estimate is sharp and equality is attained for the graph formed by just one interval of length L*.*

## **12.2 Balanced and Doubly Connected Graphs**

The obtained estimate can be improved if the original graph possesses special properties. For example, if we assume that the graph is *balanced*, *i.e.* all vertices have even degrees, then there is no need to consider the "double covering" and Euler's theorem can be applied to the graph directly.<sup>2</sup> We say that a metric graph is *doubly (edge) connected* if to make it disconnected one has to remove at least two edges. Such graphs are also called *bridgeless*, since a bridge is an edge whose removal makes the graph disconnected. Remember that only connected graphs are considered

<sup>2</sup> Balanced quantum graphs were considered recently in connection with the momentum operator and asymptotics for resonances:

<sup>•</sup> a momentum operator on a metric graph can be defined if and only if the graph is balanced [191],

<sup>•</sup> asymptotics of resonances for a graph with leads is of Weyl-type if and only if every external vertex (i.e. a vertex to which external leads are attached) is not balanced [156, 157, 197].

now. Every balanced graph is bridgeless, but the opposite implication is not always true.

**Problem 50** Construct an explicit example of a bridgeless non-balanced graph.

**Theorem 12.3** *Let all assumptions of Theorem 12.1 be satisfied. Assume in addition that the graph $\Gamma$ is bridgeless. Then the spectral gap for the standard Laplacian on $\Gamma$ can be estimated as follows*

$$
\lambda\_2(\Gamma) \ge 4 \left( \frac{\pi}{\mathcal{L}(\Gamma)} \right)^2. \tag{12.13}
$$

#### *Proof*

**Eulerian Path Technique** We first prove a slightly weaker statement: every balanced graph satisfies estimate (12.13).

Consider the proof of Theorem 12.1 via the Eulerian path technique. If the original graph is balanced, then there is no need to create the double cover graph $\Gamma^2$—the original graph contains an Eulerian cycle and the length of this cycle is of course $\mathcal{L}$ (instead of $2\mathcal{L}$ as for the double cover graph). Repeating the argument we obtain:

$$
\lambda\_2(S\_{\mathcal{L}}) \le \frac{\int\_{S\_{\mathcal{L}}} |\psi\_2'(\mathbf{x})|^2 d\mathbf{x}}{\int\_{S\_{\mathcal{L}}} |\psi\_2(\mathbf{x})|^2 d\mathbf{x}} = \lambda\_2(\Gamma). \tag{12.14}
$$

Taking into account that $\lambda_2(S_{\mathcal{L}}) = \left(\frac{2\pi}{\mathcal{L}}\right)^2$ we get the estimate (12.13).

**Symmetrisation Technique** Consider the proof of Theorem 12.1 using the symmetrisation technique. The proof was given in [282] for balanced graphs, but the same proof holds for bridgeless graphs [52].

If the graph is bridgeless, then any continuous function $\psi$ attains almost every value at least twice. The exceptional values include the minimal and maximal values $\psi(x_{\min})$, $\psi(x_{\max})$ as well as the values of $\psi$ at some vertices. Hence the function $n(t)$ satisfies the following inequality almost everywhere

$$n(t) \ge 2.$$

The inequality (12.9) can be written as

$$\sum_{x:\,\psi_2(x)=t} \left| \psi'_2(x) \right| \ge n(t)^2\, \frac{1}{m'(t)}.$$

Integration gives us

$$\int_{\Gamma} |\psi_2'(x)|^2\, dx \ge 4 \int_{\psi_2(x_{\min})}^{\psi_2(x_{\max})} \frac{dt}{m'(t)}$$

**Fig. 12.2** Graph *(*3*.*4*)*: loop with two intervals attached

instead of (12.10). We gain an extra factor $4$, while no such factor appears for the function $\psi^*$, hence

$$\int_{\Gamma} \left| \psi_2'(x) \right|^2 dx \ge 4 \int_0^{\mathcal{L}} \left| \psi^{*\prime}(s) \right|^2 ds$$

holds instead of (12.11). Considering the Rayleigh quotient and repeating the argument we obtain (12.13) for bridgeless graphs.

The obtained estimate is again sharp, since we have equality in (12.13) if the graph is a loop. Among all balanced graphs of the same total length the loop has the smallest spectral gap.

The proof via the Eulerian path provides a clear, purely topological explanation of why the lower estimate is multiplied by the factor $4$, while such an explanation is more hidden if the symmetrisation technique is used. On the other hand, the symmetrisation technique allows one to easily prove the statement for bridgeless, not necessarily balanced, graphs.

**Problem 51** Calculate the spectrum of the standard Laplacian on the graph (3.4) presented in Fig. 12.2 with the lengths of the loop and the two attached intervals equal to $2\mathcal{L}/3$ and $\mathcal{L}/6$ respectively.

**Problem 52** Give a non-trivial example of a metric graph such that the *n*-th eigenvalue coincides with the *n*-th eigenvalue for the interval of the same length.

## **12.3 Graphs with Dirichlet Vertices**

Another application of the obtained estimate concerns graphs with Dirichlet vertices. Consider a metric graph $\Gamma$ with the Laplacian $L^{\text{st,D}}$ defined on functions satisfying Dirichlet conditions at one or several degree-one vertices and standard vertex conditions at all other vertices. Considering Dirichlet conditions only at degree-one vertices is not a restrictive assumption, since introducing Dirichlet conditions at a vertex of higher degree $d > 1$ decomposes the vertex into $d$ degree-one vertices (remember that we agreed to consider only properly connecting vertex conditions). Introducing Dirichlet conditions one has to be careful so that the graph remains connected (introducing Dirichlet conditions at a degree-two vertex on a bridge disconnects the graph). The point $\lambda = 0$ is no longer an eigenvalue, since any

**Fig. 12.3** Gluing two copies of a graph

constant function satisfying a Dirichlet condition somewhere is identically zero. Therefore instead of the spectral gap $\lambda_2 - \lambda_1$ we are going to discuss possible estimates for the lowest eigenvalue $\lambda_1(L^{\text{st,D}}(\Gamma))$.

**Lemma 12.4** *Let $\Gamma$ be a connected finite compact metric graph and let $L^{\text{st,D}}$ be the Laplace operator defined on functions satisfying Dirichlet conditions at at least one degree-one vertex and standard vertex conditions at all other vertices. Then the ground state $\lambda_1(L^{\text{st,D}}(\Gamma))$ satisfies the estimate*

$$
\lambda\_1(L^{\rm st,D}(\Gamma)) \ge \left(\frac{\pi}{2\mathcal{L}}\right)^2,\tag{12.15}
$$

*where $\mathcal{L}$ is the total length of $\Gamma$.*

*Proof* It is enough to prove the lemma in the case where there is just one Dirichlet vertex, since adding Dirichlet vertices increases the eigenvalues.

Let us denote the Dirichlet vertex by $V^0$. Consider two copies of the graph $\Gamma$ and glue them together into $\Gamma^{V^0}$ by identifying the two distinct vertices $V^0$ belonging to different copies of $\Gamma$ and introducing standard vertex conditions at the new joined vertex (Fig. 12.3).

Let $\psi_1$ be the eigenfunction of the original Laplacian on $\Gamma$ corresponding to $\lambda_1(L^{\text{st,D}}(\Gamma))$. This function is zero at the vertex $V^0$ due to the Dirichlet condition there. Let us extend the function as $-\psi_1(x)$ to the second copy of $\Gamma$. The new extended function on $\Gamma^{V^0}$ is an eigenfunction of the standard Laplacian $L^{\text{st}}(\Gamma^{V^0})$ with the same eigenvalue $\lambda_1(L^{\text{st,D}}(\Gamma))$:

	- the function is equal to zero at the joined vertex and therefore is continuous,
	- the sum of normal derivatives is zero, since the contributions from the two copies of $\Gamma$ compensate each other.

Hence $\lambda_1(L^{\text{st,D}}(\Gamma))$ can be estimated from below by $\lambda_2(L^{\text{st}}(\Gamma^{V^0}))$:

$$
\lambda\_1(L^{\text{st,D}}(\Gamma)) \ge \left(\frac{\pi}{2\mathcal{L}}\right)^2.
$$

where we used that the total length of $\Gamma^{V^0}$ is $2\mathcal{L}$. Note that the constructed function is indeed not a ground state of $L^{\text{st}}(\Gamma^{V^0})$: it is antisymmetric with respect to the exchange of the two copies of $\Gamma$, hence it changes sign and is orthogonal to the constant ground state.
The obtained estimate is sharp, since for the interval of length $\mathcal{L}$ with Dirichlet and Neumann conditions at the endpoints we have $\lambda_1 = \left(\frac{\pi}{2\mathcal{L}}\right)^2$. Moreover, equality in (12.15) holds only if the graph is a Dirichlet-Neumann segment of length $\mathcal{L}$. The reason is rather simple: we have equality in (12.15) only if we have equalities in all estimates used in the proof. In particular, we need that the graph $\Gamma^{V^0}$ has the smallest spectral gap, hence it coincides with the interval of length $2\mathcal{L}$. Moreover, the constructed eigenfunction should coincide with the second eigenfunction on the interval of length $2\mathcal{L}$, hence the graph $\Gamma$ is the Dirichlet-Neumann segment of length $\mathcal{L}$.
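The gluing argument of the proof can be checked numerically; in the following sketch (the length `Lval` is an arbitrary choice made here for illustration) the Dirichlet-Neumann ground state on $[0, L]$ is extended antisymmetrically through the Dirichlet endpoint and compared with the first excited Neumann eigenfunction on the doubled interval $[-L, L]$:

```python
import math

# Illustrative check of the doubling trick in Lemma 12.4 (interval chosen
# here, not in the book).
Lval = 1.3
lam1 = (math.pi / (2 * Lval)) ** 2      # ground state of L^{st,D} on [0, L]

def psi1(x):                            # eigenfunction: psi(0) = 0, psi'(L) = 0
    return math.sin(math.pi * x / (2 * Lval))

def extended(x):                        # odd extension to the doubled graph
    return psi1(x) if x >= 0 else -psi1(-x)

def neumann2(x):                        # 2nd Neumann eigenfunction on [-L, L]
    return -math.cos(math.pi * (x + Lval) / (2 * Lval))

# The extension coincides with the Neumann eigenfunction on a grid ...
grid = [-Lval + 2 * Lval * i / 200 for i in range(201)]
assert all(abs(extended(x) - neumann2(x)) < 1e-12 for x in grid)

# ... and satisfies -u'' = lam1 * u (finite-difference residual check):
h = 1e-4
for x in [-0.9, -0.3, 0.4, 1.1]:
    dd = (extended(x + h) - 2 * extended(x) + extended(x - h)) / h ** 2
    assert abs(dd + lam1 * extended(x)) < 1e-6
```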

## **12.4 Cheeger's Approach**

An effective method to estimate the spectral gap for a Laplacian on a Riemannian manifold was suggested by J. Cheeger [135]. This method has already become a standard tool in differential geometry. Our goal in this section is to apply Cheeger's ideas to quantum graphs.

The basic idea of Cheeger is that the first nontrivial eigenfunction $\psi_2$ should attain both positive and negative values, since it is orthogonal to $\psi_1 \equiv 1$. In other words, $\psi_2$ has at least two nodal domains. The problem is that we do not know much about these domains, but one may obtain estimates without knowing where these domains are situated, or how large they are. The border $S$ between the domains divides the manifold into two or more separate manifolds with boundary. It follows that the border $S$ can be considered as a cut of the original manifold $M$ into two submanifolds $M_1$ and $M_2$: $M = M_1 \cup M_2$. One may introduce Cheeger's quotient
$$h_S = \frac{L(S)}{\min \{A(M_1), A(M_2)\}},$$
where $L(S)$ is the length of the cut $S$ and $A(M_j)$ are the areas of the submanifolds $M_j$. Of course, in the case of an $n$-dimensional manifold $M$ one should speak about the $(n-1)$-dimensional area $L(S)$. Since we do not know which particular cut $S$ corresponds to the eigenfunction $\psi_2$, an estimate can be obtained via the cut which minimises the quotient. The corresponding infimum is called Cheeger's constant

$$h(M) := \inf\_{\mathcal{S}} \frac{L(\mathcal{S})}{\min\left\{A(M\_1), A(M\_2)\right\}},\tag{12.16}$$

where the infimum is taken over all possible cuts *S* dividing *M* into two parts *M*<sup>1</sup> and *M*2*.* The infimum is realised on short cuts dividing *M* into two parts of approximately equal areas.

Using Cheeger's constant the following lower estimate for the first nontrivial Laplace eigenvalue *λ*2*(M)* may be proven [135]

$$\frac{1}{4}(h(M))^2 \le \lambda\_2(M). \tag{12.17}$$

Upper estimates in general are easier to obtain, since any admissible function provides such an estimate via the Rayleigh quotient. The lower estimates are in general much harder to prove. The advantage of Cheeger's approach is that such estimates are obtained in geometrical terms only. Let us see how this approach can be generalised to quantum graphs. We consider standard Laplacians only and start with the lower estimate.

**Lower Estimate** We use Cheeger's original argument [135] developed for quantum graphs in [266, 400, 436]. It was also the subject of our first common project with R. Suhr, which we follow here. For Cheeger's approach it is essential that the eigenfunctions are continuous and the ground state eigenfunction can be chosen strictly positive. Therefore we develop our analysis for standard Laplacians. Let $\Gamma$ be a metric graph and consider a set of points $P$ on the edges dividing $\Gamma$ into two subgraphs $M_1$ and $M_2$. Then we define Cheeger's constant for the metric graph as

$$h(\Gamma) = \inf\_{P} \frac{|P|}{\min\left\{\mathcal{L}(M\_1), \mathcal{L}(M\_2)\right\}},\tag{12.18}$$

where $|P|$ denotes the number of dividing points and the infimum is taken over the set of possible cuts of $\Gamma$. We formally excluded the possibility of dividing the graph along its vertices, but such divisions appear in the limit as the dividing points approach the ends of the edges. Taking the infimum incorporates such divisions into our approach.

Let $\lambda_2$ be the spectral gap of the standard Laplacian on $\Gamma$ and consider any corresponding eigenfunction, which we denote by $\psi$ $(= \psi_2)$. The eigenfunction can be chosen real-valued. Consider the domains $\Gamma^+$ and $\Gamma^-$ where the function is positive and negative respectively. Without loss of generality we assume that $\Gamma^+$ has the smaller total length: if this is not the case the eigenfunction should be multiplied by $-1$.

The restriction of $\psi$ to $\Gamma^+$ is the ground state eigenfunction for the Laplacian on $\Gamma^+$ determined by standard vertex conditions at all vertices except the dividing points from $P$, where Dirichlet conditions are assumed, since $\psi$ is an eigenfunction satisfying the prescribed vertex conditions and is sign definite on $\Gamma^+$.

Using the fact that *ψ* is an eigenfunction we obtain:

$$\begin{split} \lambda_2(L^{\text{st}}(\Gamma)) &= \frac{\int_{\Gamma^+} -\psi''(x)\psi(x)\, dx}{\int_{\Gamma^+} \psi^2(x)\, dx} = \frac{\int_{\Gamma^+} (\psi'(x))^2\, dx}{\int_{\Gamma^+} \psi^2(x)\, dx} \\ &= \frac{\int_{\Gamma^+} (\psi'(x))^2\, dx\ \int_{\Gamma^+} \psi^2(x)\, dx}{\left(\int_{\Gamma^+} \psi^2(x)\, dx\right)^2} \\ &\ge \frac{\left(\int_{\Gamma^+} |\psi'(x)|\, |\psi(x)|\, dx\right)^2}{\left(\int_{\Gamma^+} \psi^2(x)\, dx\right)^2} \\ &= \frac{1}{4} \left(\frac{\int_{\Gamma^+} \left|\frac{d}{dx} \left(\psi^2(x)\right)\right| dx}{\int_{\Gamma^+} \psi^2(x)\, dx}\right)^2, \end{split} \tag{12.19}$$

where we used the Cauchy-Schwarz inequality. Let us introduce the following notations in order to exploit the co-area formula

$$\begin{aligned} V(\mathbf{y}) &= \text{measure}\left(\left\{\mathbf{x} \in \Gamma^+ : \boldsymbol{\psi}^2(\mathbf{x}) \ge \mathbf{y} \right\} \right), \\ N(\mathbf{y}) &= |(\boldsymbol{\psi}^2)^{-1}(\mathbf{y}) \cap \Gamma^+|. \end{aligned}$$

Then by definition of *h* we always have

$$\frac{N(\mathbf{y})}{V(\mathbf{y})} \ge h(\Gamma),$$

for any $y > 0$, as any such $y$ yields a division of $\Gamma^+$ and as a result a division of $\Gamma$.

Let us note the following relation

$$V(\mathbf{y}) = \mathcal{L}(\Gamma^+) - \int\_0^\mathbf{y} N(t)dt$$

implying

$$\frac{dV(\mathbf{y})}{d\mathbf{y}} = -N(\mathbf{y}).\tag{12.20}$$

The last equality holds for almost every $y$, more precisely for all $y$ not equal to the value of $\psi^2$ at the vertices or at the local minima and maxima on the edges. Hence $V(y)$ is a piecewise continuously differentiable function.

Subdividing $\Gamma^+$ into intervals $\Delta_j$ where $\psi$ is monotone we get

$$\begin{split} \int_{\Gamma^{+}} \left| \frac{d}{dx} \psi^{2}(x) \right| dx &= \sum_{\Delta_{j}} \int_{\Delta_{j}} \left| \frac{d}{dx} \psi^{2}(x) \right| dx = \sum_{\Delta_{j}} \int_{\psi^2(\Delta_{j})} 1\, dy \\ &= \int_{0}^{\max_{\Gamma^+} \psi^{2}} N(y)\, dy \\ &\ge h \int_{0}^{\max_{\Gamma^+} \psi^{2}} V(y)\, dy = -h \int_{0}^{\max_{\Gamma^+} \psi^{2}} y\, \frac{dV(y)}{dy}\, dy \\ &= h \int_{0}^{\max_{\Gamma^+} \psi^{2}} y\, N(y)\, dy = h \int_{\Gamma^{+}} \psi^{2}(x)\, dx, \end{split} \tag{12.21}$$

where, integrating by parts on the third line, we used that $V$ is a piecewise $C^1$-function together with relation (12.20). Finally, combining the estimates (12.19) and (12.21) we get the desired lower estimate for standard Laplacians on metric graphs.

**Theorem 12.5** *Let $\Gamma$ be a finite compact connected metric graph and let $h$ be the corresponding Cheeger constant defined by (12.18). Then the spectral gap for the standard Laplacian possesses the lower estimate*

$$
\lambda\_2 \left( L^{\rm st} (\Gamma) \right) \ge \frac{1}{4} h^2. \tag{12.22}
$$

One may prove that the inequality in (12.22) is in fact strict. Equality occurs only if all inequalities in the proof turn into equalities; in particular one needs equality in (12.19), which holds only if $\psi'$ is proportional to $\psi$. It follows that $\psi$ is an exponential function and therefore cannot be a real-valued solution to the eigenfunction equation. This result is also described in [436, Theorem 6.1].
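For the two simplest graphs appearing in this chapter, the interval and the loop, the Cheeger bound (12.22) can be verified directly (a small illustrative computation; the length is an arbitrary choice made here):

```python
import math

# Sanity check of the lower estimate (12.22) on two explicit graphs.
Lval = 2.0

# Interval [0, L]: a single cut point at position t splits it into parts of
# lengths t and L - t, so h = inf_t 1 / min(t, L - t) = 2 / L.
h_interval = min(1 / min(t, Lval - t)
                 for t in [Lval * i / 1000 for i in range(1, 1000)])
lam2_interval = (math.pi / Lval) ** 2          # spectral gap of the interval
assert lam2_interval > h_interval ** 2 / 4     # strict inequality, as it must be

# Loop of length L: any division needs two cut points; the best choice splits
# the loop into two halves, so h = 2 / (L / 2) = 4 / L.
h_loop = 4 / Lval
lam2_loop = (2 * math.pi / Lval) ** 2          # spectral gap of the loop
assert lam2_loop > h_loop ** 2 / 4
```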

**Problem 53** Check to which extent Cheeger's approach may be generalised for the case of Laplacians with delta vertex conditions.

**Upper Estimates** For a metric graph $\Gamma$ let us delete some of its edges, $S = \cup_{j=1}^s E_{n_j}$. If the resulting graph $\Gamma \setminus S$ is not connected, then we say that $S$ is an **edge cut** of $\Gamma$. The set $\Gamma \setminus S$ may consist of several connected components. Let us denote by $\Gamma_1$ and $\Gamma_2$ any separation of $\Gamma \setminus S$ into two nonintersecting sets

$$
\Gamma\_1 \cup \Gamma\_2 = \Gamma \backslash \mathcal{S}, \quad \Gamma\_1 \cap \Gamma\_2 = \emptyset.
$$

We assume in this section that $\Gamma$ contains no loops, *i.e.* edges whose endpoints coincide. This is not an important restriction. Indeed, consider any graph with a loop, mark any point on the loop and put a new vertex at this point. The new metric

graph obtained in this way contains no loops but the corresponding Laplace operator is unitarily equivalent to the Laplace operator on the original graph.

With any set *S* as described above let us associate the following Cheeger-type quotient

$$\min_{\substack{\Gamma_{1},\,\Gamma_{2}:\ \Gamma_{1}\cup\Gamma_{2}=\Gamma\backslash S;\\ \Gamma_{1}\cap\Gamma_{2}=\emptyset}}\frac{\mathcal{L}(\Gamma)\sum_{E_{n}\in S}\ell_{n}^{-1}}{\mathcal{L}(\Gamma_{1})\mathcal{L}(\Gamma_{2})}.\tag{12.23}$$

We are going to prove that this quotient provides an upper estimate for the spectral gap. It resembles Cheeger's constant (12.16) described above, but is different.

**Theorem 12.6** *Let be a connected metric graph without loops, then the spectral gap for the standard Laplacian is estimated from above by the Cheeger-type quotient*  (12.23) *as follows* 

$$\lambda_2(\Gamma) \le C(\Gamma) := \inf_{S}\ \min_{\substack{\Gamma_1,\,\Gamma_2:\ \Gamma_1 \cup \Gamma_2 = \Gamma\backslash S;\\ \Gamma_1 \cap \Gamma_2 = \emptyset}} \frac{\mathcal{L}(\Gamma) \sum_{E_n \in S} \ell_n^{-1}}{\mathcal{L}(\Gamma_1) \mathcal{L}(\Gamma_2)},\qquad(12.24)$$

*where the infimum is taken over all edge cuts S of .*

*Proof* Consider the function *g* defined as follows

$$g(x) = \begin{cases} 1, & x \in \Gamma_1; \\ -1, & x \in \Gamma_2; \\ \ell_n^{-1} \left( -\operatorname{dist} \left( x, \Gamma_1 \right) + \operatorname{dist} \left( x, \Gamma_2 \right) \right), & x \in E_n \subset S, \end{cases} \tag{12.25}$$

where the distances $\operatorname{dist}(x, \Gamma_j)$, $j = 1, 2$, are calculated along the corresponding interval $E_n$. The continuous function $g$ is constructed in such a way that it is equal to $\pm 1$ on $\Gamma_1$ and $\Gamma_2$ and is linear on the edges connecting $\Gamma_1$ and $\Gamma_2$. The mean value of the function might be different from zero; in that case $g$ has to be modified so that it is orthogonal to the ground state. Consider then the function $f$ which is not only continuous, but also orthogonal to the ground state:

$$f(x) = g(x) - \mathcal{L}(\Gamma)^{-1} \langle g, 1 \rangle_{L_2(\Gamma)} = g(x) - \frac{\mathcal{L}(\Gamma_1) - \mathcal{L}(\Gamma_2)}{\mathcal{L}(\Gamma)}.$$

The Rayleigh quotient for the function *f* gives an upper estimate for the spectral gap.

To determine the Rayleigh quotient we calculate the Dirichlet integral and the norm of *f* :

$$\begin{split} \|f'\|_{L_2(\Gamma)}^2 &= \|g'\|_{L_2(\Gamma)}^2 = \sum_{E_n \in S} \int_{E_n} (-2\ell_n^{-1})^2\, dx = 4\sum_{E_n \in S} \ell_n^{-1}; \\ \|f\|_{L_2(\Gamma)}^2 &= \|g\|_{L_2(\Gamma)}^2 - \mathcal{L}(\Gamma)^{-1}\langle g, 1\rangle^2 \\ &\ge \mathcal{L}(\Gamma_1) + \mathcal{L}(\Gamma_2) - \mathcal{L}(\Gamma)^{-1}\left(\mathcal{L}(\Gamma_1) - \mathcal{L}(\Gamma_2)\right)^2 \\ &\ge 4\frac{\mathcal{L}(\Gamma_1)\mathcal{L}(\Gamma_2)}{\mathcal{L}(\Gamma)}. \end{split} \tag{12.26}$$

This gives the following upper estimate for *λ*2*()*

$$
\lambda\_2(\Gamma) \le C(\Gamma), \tag{12.27}
$$

where we used (12.24) and the fact that the edge cut $S$ dividing $\Gamma$ into disconnected components is arbitrary.

The derived estimate shows that the spectral gap is small if the metric graph can be cut into two approximately equal parts by deleting a few long edges. Of course, choosing long edges to delete makes the rest of the graph smaller, but to get the best estimate one has to find the best balance between these two tendencies.
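A quick sanity check of Theorem 12.6 (the graph below is a toy example chosen here, not one from the book): for a cycle assembled from four edges, cutting two opposite edges leaves two components of positive length, and the quotient (12.24) indeed dominates the true spectral gap.

```python
import math

# A cycle built of four edges of length l; total length L = 4l.
l = 0.6
Lval = 4 * l

# Edge cut S = two opposite edges; Gamma_1, Gamma_2 are the two remaining
# edges, each of length l.  Cheeger-type quotient (12.24):
bound = Lval * (1 / l + 1 / l) / (l * l)

# The cycle of length L is a loop, so its spectral gap is (2 pi / L)^2.
lam2 = (2 * math.pi / Lval) ** 2
assert lam2 <= bound
```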

The estimate we have just proven is rather explicit, but not exact in the sense that we do not know any graph for which the equality holds. We shall now present a modified estimate, which has a more complicated form, but with the advantage that there are graphs for which the estimate is exact.

The graphs we consider are finite and compact, hence the number of possible cuts along whole edges is always finite. Therefore it is tempting to substitute the infimum in (12.24) by a minimum. On the other hand, every point on an edge can be seen as a dummy degree-two vertex; then the number of possible cuts becomes infinite, making the use of the infimum unavoidable.

*An Improved Upper Estimate* The function *g* used in the proof of Theorem 12.6 can be chosen equal to

$$g(x) = \begin{cases} 1, & x \in \Gamma_1; \\ \cos \dfrac{\operatorname{dist}\left(x, \Gamma_1\right)}{\ell_n} \pi = -\cos \dfrac{\operatorname{dist}\left(x, \Gamma_2\right)}{\ell_n} \pi, & x \in E_n \subset S; \\ -1, & x \in \Gamma_2. \end{cases} \tag{12.28}$$

We again shift the function by a constant to satisfy the orthogonality condition

$$f(\mathbf{x}) = g(\mathbf{x}) - \frac{\mathcal{L}(\Gamma\_1) - \mathcal{L}(\Gamma\_2)}{\mathcal{L}(\Gamma)}.\tag{12.29}$$

Calculating the Dirichlet integral and the norm

$$\begin{aligned} \|f'\|_{L_2(\Gamma)}^2 &= \sum_{E_n \subset S} \left(\frac{\pi}{\ell_n}\right)^2 \int_{E_n} \sin^2 \frac{\operatorname{dist}\left(x, \Gamma_1\right)}{\ell_n} \pi \, dx = \frac{\pi^2}{2} \sum_{E_n \subset S} \ell_n^{-1}; \\ \|f\|_{L_2(\Gamma)}^2 &= \|g\|_{L_2(\Gamma)}^2 - \frac{\left(\mathcal{L}(\Gamma_1) - \mathcal{L}(\Gamma_2)\right)^2}{\mathcal{L}(\Gamma)}; \\ \|g\|_{L_2(\Gamma)}^2 &= \mathcal{L}(\Gamma_1) + \mathcal{L}(\Gamma_2) + \frac{1}{2}\mathcal{L}(S) \end{aligned}$$

and substituting into the Rayleigh quotient, we get the following estimate for the first nontrivial eigenvalue

$$\begin{split} \lambda_{2}(\Gamma) &\leq \frac{\|f'\|_{L_{2}(\Gamma)}^{2}}{\|f\|_{L_{2}(\Gamma)}^{2}} \\ &= \frac{\frac{\pi^{2}}{2}\sum_{E_{n}\subset S}\ell_{n}^{-1}}{\mathcal{L}(\Gamma_{1}) + \mathcal{L}(\Gamma_{2}) + \frac{1}{2}\mathcal{L}(S) - \frac{(\mathcal{L}(\Gamma_{1}) - \mathcal{L}(\Gamma_{2}))^{2}}{\mathcal{L}(\Gamma)}} \\ &= \frac{\pi^{2}\mathcal{L}(\Gamma)\sum_{E_{n}\subset S}\ell_{n}^{-1}}{8\mathcal{L}(\Gamma_{1})\mathcal{L}(\Gamma_{2}) + 3\left(\mathcal{L}(\Gamma_{1}) + \mathcal{L}(\Gamma_{2})\right)\mathcal{L}(S) + \mathcal{L}^{2}(S)}. \end{split} \tag{12.30}$$

Here $\mathcal{L}(S)$ denotes the total length of all deleted edges forming the set $S$. The obtained estimate can be used even in the case where the graphs $\Gamma_1$ and $\Gamma_2$ have zero length—the denominator is still different from zero in that case. This will be used in the proof of the following theorem.

**Theorem 12.7** *The spectral gap for the Laplace operator on a metric graph satisfies the following lower and upper estimates* 

$$\frac{\pi^2}{\mathcal{L}^2(\Gamma)} \le \lambda_2(\Gamma) \le \frac{\pi^2}{\mathcal{L}^2(\Gamma)}\ 4\, \mathcal{L}(\Gamma) \sum_{E_n \in \Gamma} \ell_n^{-1}. \tag{12.31}$$

*If the metric graph $\Gamma$ is bipartite,*<sup>3</sup> *then the upper estimate can be improved by a factor of* 4 *as follows*

$$
\lambda_2(\Gamma) \le \frac{\pi^2}{\mathcal{L}^2(\Gamma)}\ \mathcal{L}(\Gamma) \sum_{E_n \in \Gamma} \ell_n^{-1}. \tag{12.32}
$$

<sup>3</sup> A graph *G* is called bipartite if the vertices can be divided into two classes, so that the edges connect only vertices from different classes. Such graphs are also called two-coloured, which means that one may colour all its vertices using just two colours, so that no two neighbours have the same colour.

*Proof* The lower estimate has already been proven in Theorem 12.1; it remains to show the upper one. We start with the second formula. Assume that the graph is bipartite. Then the sets $\Gamma_1$ and $\Gamma_2$ appearing in Cheeger's estimate (12.30) can be chosen equal to the two disjoint sets of vertices $\mathbb{V}^1$ and $\mathbb{V}^2$ appearing in the definition of a bipartite graph. These sets consist of vertices only, hence $\mathcal{L}(\Gamma_1) = 0 = \mathcal{L}(\Gamma_2)$. Every edge in $\Gamma$ connects vertices from the two sets, therefore the proper cut $S$ contains all edges. In other words, we cut all edges in the graph and we have $\mathcal{L}(S) = \mathcal{L}(\Gamma)$, leading to the following estimate

$$
\lambda\_2(\Gamma) \le \frac{\pi^2}{\mathcal{L}(\Gamma)} \sum\_{E\_n \subset \Gamma} \ell\_n^{-1}.
$$

The upper estimate for arbitrary graphs can be proven by the following trick: any metric graph can be turned into a bipartite graph by introducing new vertices in the middle of every edge. The sets $\mathbb{V}^1$ and $\mathbb{V}^2$ can then be chosen equal to the sets of old and new vertices respectively. Then the previous estimate gives

$$
\lambda\_2(\Gamma) \le \frac{\pi^2}{\mathcal{L}(\Gamma)} \cdot 2 \sum\_{E\_n \subset \Gamma} (\ell\_n/2)^{-1}.
$$

The factor $2$ in front of the sum appears due to the fact that every edge in $\Gamma$ is divided into two smaller edges of length $\ell_n/2$.
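Both bounds can be checked on an equilateral cycle, which is bipartite when the number of vertices is even (a toy example chosen here for illustration):

```python
import math

# Checking (12.31) and (12.32) on an equilateral cycle with N edges.
N = 6
Lval = 3.0
ln = Lval / N                                   # individual edge lengths
lam2 = (2 * math.pi / Lval) ** 2                # exact spectral gap of a loop

# General two-sided estimate (12.31); note sum of 1/l_n equals N / ln.
general = (math.pi / Lval) ** 2 * 4 * Lval * (N / ln)
assert (math.pi / Lval) ** 2 <= lam2 <= general

# A cycle with an even number of vertices is bipartite, so (12.32) applies:
bipartite = (math.pi / Lval) ** 2 * Lval * (N / ln)
assert lam2 <= bipartite
```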

**Example 12.8** Consider the equilateral complete graph $K_M$ of total length $\mathcal{L}$, so that each of the $M(M-1)/2$ edges has length $\ell = \frac{2\mathcal{L}}{M(M-1)}$. To get upper estimates for the spectral gap we are going to cut the graph in two different ways:

- **Cut A**: divide the vertices into two classes of $M/2$ vertices each (assuming $M$ even) and delete all $M^2/4$ edges connecting the two classes;
- **Cut B**: choose one vertex $V$ and cut each of the $M-1$ edges incident to $V$ at distance $a$ from its other endpoint, so that $S$ consists of $M-1$ segments of length $a$.

Intuitively the first cut seems to be better, since the second cut appears to be very asymmetric.

**Cut A** The parts $\Gamma_1$ and $\Gamma_2$ are equilateral complete graphs on $M/2$ vertices each, hence

$$\mathcal{L}(\Gamma_1) = \mathcal{L}(\Gamma_2) = \frac{M(M-2)}{8}\, \frac{2\mathcal{L}}{M(M-1)}, \qquad \sum_{E_n \in S} \ell_n^{-1} = \frac{M^2}{4}\, \frac{M(M-1)}{2\mathcal{L}}.$$

Applying (12.24) we get the following upper estimate:

$$\lambda_2 \le \frac{\mathcal{L}\, \frac{M^2}{4}\, \frac{M(M-1)}{2\mathcal{L}}}{\left(\frac{M(M-2)}{8}\, \frac{2\mathcal{L}}{M(M-1)}\right)^2} = 2\frac{M^4}{\mathcal{L}^2}\, \frac{(1 - 1/M)^3}{(1 - 2/M)^2} \sim_{M \to \infty} 2\frac{M^4}{\mathcal{L}^2}.\tag{12.33}$$

**Cut B** The lengths needed for the estimate are:

$$\mathcal{L}(\Gamma_1) = (M-1)(\ell - a), \qquad \mathcal{L}(\Gamma_2) = \frac{M-2}{M}\,\mathcal{L}, \qquad \sum_{E_n \in S} \ell_n^{-1} = (M-1)\, a^{-1},$$

where $\Gamma_1$ is the star around $V$ and $\Gamma_2$ is the complete graph on the remaining $M-1$ vertices.

The estimate will be

$$
\lambda\_2 \le \frac{\mathcal{L}(M-1)a^{-1}}{(\ell-a)(M-1)\frac{M-2}{M}\mathcal{L}} = \frac{1}{a(\ell-a)}\frac{M}{M-2}.\tag{12.34}
$$

Here $a \in (0, \ell)$ is arbitrary and we get the best estimate choosing $a = \ell/2$. This gives us:

$$
\lambda\_2 \le \frac{M^3 (M-1)^2}{M-2} \frac{1}{\mathcal{L}^2} \sim\_{M \to \infty} \frac{M^4}{\mathcal{L}^2}. \tag{12.35}
$$

The precise value of the spectral gap for complete graphs has already been calculated:

$$
\lambda\_2(K\_M) = \frac{M^2 (M-1)^2}{4\mathcal{L}^2} \left( \arccos(-\frac{1}{M-1}) \right)^2 \sim\_{M \to \infty} \frac{\pi^2}{16} \frac{M^4}{\mathcal{L}^2}.
$$

We see that Cut B provides an estimate with an almost correct asymptotic behaviour (factor $1$ instead of $\pi^2/16$). To get a reasonable estimate we were forced to introduce dummy degree-two vertices and use new edges in the cut.
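The comparison in the example can be reproduced numerically; the sketch below evaluates the exact spectral gap of $K_M$ and the two cut bounds, in the simplified forms derived in (12.33) and (12.35), for several even values of $M$:

```python
import math

# Exact spectral gap of the equilateral complete graph K_M (total length L)
# versus the upper bounds of Cut A and Cut B, as in Example 12.8.
Lval = 1.0
for M in range(4, 12, 2):                       # even M, so Cut A applies
    exact = (M ** 2 * (M - 1) ** 2 / (4 * Lval ** 2)
             * math.acos(-1 / (M - 1)) ** 2)
    cut_a = 2 * M ** 3 * (M - 1) ** 3 / ((M - 2) ** 2 * Lval ** 2)  # (12.33)
    cut_b = M ** 3 * (M - 1) ** 2 / ((M - 2) * Lval ** 2)           # (12.35)
    # Both cuts give valid upper bounds, and Cut B is always the better one:
    assert exact <= cut_b <= cut_a
```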

Another interesting upper estimate [292] may be obtained using the fact that flower graphs maximise the second eigenvalue [294, Theorem 4.2]

$$
\lambda\_2 \le \frac{\pi^2 N^2}{\mathcal{L}^2} \le \pi^2 N^2 \frac{h^2(\Gamma)}{4},\tag{12.36}
$$

where $N$ is the number of edges and one uses the trivial estimate $h(\Gamma) \ge \frac{2}{\mathcal{L}}$.

# **12.5 Topological Perturbations in the Case of Standard Conditions**

Quantum graphs for us are primarily geometric objects, therefore it is important to understand how their spectral properties depend on topological perturbations of the underlying metric graph. We are going to discuss here just two such perturbations: when two vertices are glued together, or when an edge or an interval is cut. We call these perturbations *gluing* and *cutting* respectively. Starting from standard Laplacians we proceed to the most general Schrödinger operators on graphs.

The method of topological perturbations will help us to understand how spectral properties depend on the topology and obtain new explicit spectral estimates. The method was first introduced in [343] and soon became a standard tool in spectral analysis of quantum graphs. For a comprehensive survey of this method one may consult [88]. In fact we have already used this approach when Eulerian path technique was applied. Our goal here will be to provide a more systematic study presenting explicit examples.

As already mentioned, the spectral gap has been extensively investigated for discrete graphs, where it is referred to as *algebraic connectivity* [221]. Therefore our goal here will be to understand the behaviour of the spectral gap under topological and geometrical perturbations of the underlying metric graph. Such perturbations include:

- gluing two vertices together;
- adding a pendant edge;
- adding an edge between two existing vertices.

In the last two cases the total length of the graph increases, therefore it is not surprising that the spectral gap has a tendency to decrease (Theorems 12.11 and 12.12). Hence let us start our studies with the first case.

In this section we are going to compare our results with the corresponding statements for discrete graphs described in detail in Chap. 24. To understand most of the comments it is not necessary to read that chapter in advance, but one may consult it if necessary. The main message we want to deliver is that metric and discrete graphs are similar as far as the properties of their lowest eigenvalues are concerned. The difference lies in the fact that the roles of vertices and edges are interchanged. Thus the total length of a metric graph plays a role similar to the number of vertices for discrete graphs; adding an edge to a discrete graph is similar to joining two vertices in metric graphs, and so on.

## *12.5.1 Gluing Vertices Together*

Our first step is to formalise the observation of what happens to the spectrum when two vertices with standard conditions are joined together into one common vertex. We have already used this observation in Sect. 12.1.1, where the Eulerian path technique was introduced.

**Theorem 12.9** *Let $\Gamma$ be a connected metric graph and let $\Gamma'$ be another metric graph obtained from $\Gamma$ by joining together two of its vertices, say $V^1$ and $V^2$. Then the following holds:*

*(1) The spectral gap for the standard Laplacian satisfies the inequality* 

$$
\lambda\_2(\Gamma) \le \lambda\_2(\Gamma'). \tag{12.37}
$$

*(2) The equality $\lambda_2(\Gamma) = \lambda_2(\Gamma')$ holds if and only if the eigenfunction $\psi_2$ corresponding to the first excited state can be chosen such that it attains the same value at the vertices to be joined:*

$$
\psi\_2(V^1) = \psi\_2(V^2). \tag{12.38}
$$

*Proof* Consider the standard Laplacians $L^{\text{st}}(\Gamma)$ and $L^{\text{st}}(\Gamma')$. Their quadratic forms are given by the same expression—the Dirichlet integral $\int |u'(x)|^2\, dx$, where the integration is over the corresponding metric graph. The integration is over the edges, and therefore it is irrelevant whether the vertices $V^1$ and $V^2$ are joined together or not. The domain of the quadratic form is given by all functions from $W_2^1$ on the edges satisfying continuity conditions at the vertices. The functions from $L_2(\Gamma)$ and $L_2(\Gamma')$ can be identified, therefore one may say that functions from the domain of the quadratic form on $\Gamma'$ satisfy the additional continuity condition

$$
u(V^1) = u(V^2),
$$

(compared to functions from the domain of the quadratic form on $\Gamma$).

The first excited state is calculated by minimising the Rayleigh quotient $\frac{\int |u'(x)|^2\, dx}{\int |u(x)|^2\, dx}$ over the set of functions from the domain of the quadratic form which in addition are orthogonal to the ground state eigenfunction $\psi_1(x) \equiv 1$. The set of admissible functions for $\lambda_2(\Gamma)$ is larger than that for $\lambda_2(\Gamma')$, hence inequality (12.37) for the corresponding minima follows.

To prove the second statement we first note that if the minimising function $\psi_2$ for $\Gamma$ satisfies in addition (12.38), then the same function is a minimiser for $\Gamma'$ and the corresponding eigenvalues coincide. Conversely, if $\lambda_2(\Gamma) = \lambda_2(\Gamma')$, then the eigenfunction for $L^{\text{st}}(\Gamma')$ is also a minimiser for the Rayleigh quotient on $\Gamma$ and therefore is an eigenfunction for $L^{\text{st}}(\Gamma)$ (satisfying in addition (12.38)).

It is interesting to compare the spectral behaviour of quantum and discrete graphs; as already mentioned, the roles of vertices and edges are exchanged, hence it is natural to compare Theorem 12.9 with Proposition 24.10, which describes what happens to the spectral gap as an edge is added to a discrete graph. These two statements may appear rather similar at first glance, but the reasons for the spectral gap to increase are different. In the case of discrete graphs the difference between the Laplace operators is a nonnegative matrix, hence we have an explicit inequality for the quadratic forms, which have identical domains. For quantum graphs the quadratic forms are given by identical expressions, but inequality (12.37) is valid due to the fact that the opposite inclusion holds for the domains of the quadratic forms.
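The simplest illustration of Theorem 12.9 (a standard example, computed here only as a sanity check) is gluing the two endpoints of a Neumann interval, which produces a loop of the same total length:

```python
import math

# Gluing the two degree-one vertices of an interval of length L yields a
# loop of length L; both spectral gaps are known explicitly.
Lval = 1.7
lam2_interval = (math.pi / Lval) ** 2       # Neumann interval: gap (pi/L)^2
lam2_loop = (2 * math.pi / Lval) ** 2       # loop: gap (2 pi/L)^2

# Gluing two vertices cannot decrease the spectral gap, cf. (12.37):
assert lam2_interval <= lam2_loop
```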

**Corollary 12.10** *Theorem 12.9 implies in particular that the flower graph formed by loops, all attached to one vertex, has the largest spectral gap among all graphs formed by a given set of edges (Fig. 12.4).*

**Fig. 12.4** A flower graph
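As a quick sanity check of the corollary, one can compare the flower built from *m* equal edges with the cycle built from the same edges. The gap formula (*πm*/𝓛)<sup>2</sup> for the equilateral flower used below is our own computation (for *m* ≥ 2 an antisymmetric mode sin(*πmx*/𝓛) supported on a pair of loops satisfies the standard conditions), not a formula from the text.

```python
import math

def gap_flower(m, L):
    # Assumed spectral gap of a flower of m >= 2 equal loops, total length L:
    # lowest nonzero mode sin(pi*x/l) on a pair of loops of length l = L/m
    # gives k = pi*m/L, hence lambda_2 = (pi*m/L)^2.
    return (math.pi * m / L) ** 2

def gap_cycle(L):
    # Cycle of length L: first excited mode has k = 2*pi/L.
    return (2 * math.pi / L) ** 2

L = 6.0
assert abs(gap_flower(2, L) - gap_cycle(L)) < 1e-12  # figure-eight matches cycle
for m in range(3, 8):
    assert gap_flower(m, L) > gap_cycle(L)           # flower gap dominates
```

For *m* = 2 the figure-eight and the cycle of the same total length happen to share the value (2*π*/𝓛)<sup>2</sup>; for *m* ≥ 3 the flower's gap is strictly larger, consistent with the flower being extremal.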

## *12.5.2 Adding an Edge*

Our goal in this section is to study the behaviour of the spectral gap as an extra edge is added to the metric graph. There are two possibilities:

- a pendant edge is attached at a vertex (one new vertex and one new edge are added);
- a new edge is added between two existing vertices.
In both cases the total length of the graph increases, therefore it is not surprising that the spectral gap has a tendency to decrease (Theorem 12.11); it is particularly interesting to find cases when the spectral gap grows. Nevertheless we start by investigating what happens when pendant edges are added—the spectral gap cannot increase in this case. The main reason is to compare our findings with the corresponding result for discrete graphs (Proposition 24.11).

**Theorem 12.11** *Let Γ be a connected metric graph with a vertex V*<sup>1</sup> *and let Γ′ be another graph obtained from Γ by adding a pendant edge, i.e. one new vertex and one edge connecting the new vertex with the vertex V*<sup>1</sup>*.*

*(1) The spectral gap for the standard Laplacians satisfies the following inequality:* 

$$
\lambda\_2(\Gamma) \ge \lambda\_2(\Gamma').
$$

*(2) The equality λ*<sub>2</sub>(*Γ*) = *λ*<sub>2</sub>(*Γ′*) *holds only if every eigenfunction ψ*<sub>2</sub> *corresponding to λ*<sub>2</sub>(*Γ*) *is equal to zero at V*<sup>1</sup>*:*

$$
\psi\_2(V^1) = 0.
$$

*Proof* The graph *Γ* is naturally considered as a subset of *Γ′*. Let us introduce the following function on *Γ′*:

$$f(x) = \begin{cases} \psi\_2(x), & x \in \Gamma, \\ \psi\_2(V^1), & x \in \Gamma' \setminus \Gamma. \end{cases}$$

This function coincides with *ψ*<sub>2</sub> on the original graph and is extended to the new edge by a constant, preserving continuity at *V*<sup>1</sup>. The function is not necessarily orthogonal to the ground state on *Γ′*. Therefore consider the nonzero function *g* differing from *f* by a constant

$$\mathbf{g}(\mathbf{x}) := f(\mathbf{x}) + c,$$

where *c* is chosen so that the orthogonality condition in *L*<sub>2</sub>(*Γ′*) holds

$$0 = \langle \mathbf{g}(\mathbf{x}), 1 \rangle\_{L\_2(\Gamma')} = \underbrace{\langle \psi\_2, 1 \rangle\_{L\_2(\Gamma)}}\_{=0} + \psi\_2(V^1)\ell + c\mathcal{L}',$$

where ℓ and 𝓛′ are the length of the added edge and the total length of *Γ′* respectively. This implies *c* = −*ψ*<sub>2</sub>(*V*<sup>1</sup>)ℓ/𝓛′. Using the new function the following estimate for the spectral gap is obtained:

$$\lambda\_2(\Gamma') \le \frac{\|g'\|\_{L\_2(\Gamma')}^2}{\|g\|\_{L\_2(\Gamma')}^2} = \frac{\|\psi\_2'\|\_{L\_2(\Gamma)}^2}{\|\psi\_2\|\_{L\_2(\Gamma)}^2 + \underbrace{c^2\mathcal{L} + \left|\psi\_2(V^1) + c\right|^2\ell}\_{\ge 0}} \le \lambda\_2(\Gamma).$$

Here 𝓛 denotes the total length of the metric graph *Γ*. The last inequality follows from the fact that

$$\|\psi\_2'\|\_{L\_2(\Gamma)}^2 = \lambda\_2(\Gamma) \|\psi\_2\|^2.$$

Note that in the last expression equality holds if and only if *c* = 0 and |*ψ*<sub>2</sub>(*V*<sup>1</sup>) + *c*|<sup>2</sup> = 0, implying *ψ*<sub>2</sub>(*V*<sup>1</sup>) = 0. This proves the second assertion.

The proven statement has a direct analogue in the theory of discrete Laplacians—Proposition 24.11 below. The analogy is complete, since the transformation we analyse consists of adding one new edge and one new vertex simultaneously; it has a similar effect on discrete and metric graphs.
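The simplest illustration of Theorem 12.11 is the interval: attaching a pendant edge at an endpoint of *Γ* = [0, *a*] produces, after removing the resulting degree-two vertex (which is invisible to standard conditions), the interval [0, *a* + ℓ]. A minimal numeric sketch:

```python
import math

def gap_interval(length):
    # Standard (Neumann) Laplacian on an interval: eigenfunctions
    # cos(pi*n*x/length), hence lambda_2 = (pi/length)^2.
    return (math.pi / length) ** 2

a, ell = 1.0, 0.3
gap_before = gap_interval(a)        # Gamma = [0, a], V^1 the right endpoint
gap_after = gap_interval(a + ell)   # Gamma': pendant edge of length ell at V^1

# Theorem 12.11 (1), and strict inequality here: psi_2(x) = cos(pi*x/a)
# gives psi_2(V^1) = cos(pi) = -1 != 0, so by part (2) equality is impossible.
assert gap_before > gap_after
```

The strictness of the inequality matches part (2): the eigenfunction does not vanish at the attachment vertex, so the gap must genuinely drop.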

In the proof of the theorem we did not use that *Γ′* \ *Γ* is an edge. It is straightforward to generalise the theorem to the case where *Γ′* \ *Γ* is an arbitrary finite connected graph joined to *Γ* at a single vertex *V*<sup>1</sup>. One may even show that joining together any two graphs *Γ*<sub>1</sub> and *Γ*<sub>2</sub> at one vertex leads to a new graph *Γ′* with the spectral gap satisfying the estimate:

$$\min\left\{\lambda\_2(\Gamma\_1), \lambda\_2(\Gamma\_2)\right\} \ge \lambda\_2(\Gamma').\tag{12.39}$$

One may also prove this statement using general perturbation theory, taking into account that the difference between the resolvents of the standard Laplacians on *Γ*<sub>1</sub> ∪ *Γ*<sub>2</sub> and *Γ′* has rank one. The operator *L*<sup>st</sup>(*Γ*<sub>1</sub> ∪ *Γ*<sub>2</sub>) has at least three eigenvalues in the interval [0, min{*λ*<sub>2</sub>(*Γ*<sub>1</sub>), *λ*<sub>2</sub>(*Γ*<sub>2</sub>)}], hence the operator *L*<sup>st</sup>(*Γ′*) has at least two eigenvalues in the same interval.

The approach using general perturbation theory illuminates one important point: the statement holds only if two graphs are joined at one vertex: in that case the difference between the resolvents has rank one.

We return now to our original goal and investigate the behaviour of the spectral gap when an edge between two vertices is added to a metric graph.

**Theorem 12.12** *Let Γ be a connected metric graph and L*<sup>st</sup>(*Γ*) *the corresponding standard Laplace operator. Let Γ′ be a metric graph obtained from Γ by adding an edge between the vertices V*<sup>1</sup> *and V*<sup>2</sup>*. Assume that the eigenfunction ψ*<sub>2</sub> *corresponding to the first excited eigenvalue can be chosen such that*

$$
\psi\_2(V^1) = \psi\_2(V^2). \tag{12.40}
$$

*Then the following inequality for the spectral gap holds:* 

$$
\lambda\_2(\Gamma) \ge \lambda\_2(\Gamma'). \tag{12.41}
$$

*Proof* Consider the eigenfunction *ψ*<sub>2</sub> for *L*<sup>st</sup>(*Γ*) and extend it to the new edge by a constant, which is possible due to (12.40):

$$f(x) = \begin{cases} \psi\_2(x), & x \in \Gamma, \\ \psi\_2(V^1) \ (= \psi\_2(V^2)), & x \in \Gamma' \setminus \Gamma. \end{cases}$$

This function is not necessarily orthogonal to the constant function. Let us, as before, adjust the constant *c* so that the function *g(x)* = *f(x)* + *c* is orthogonal to 1 in *L*<sub>2</sub>(*Γ′*):<sup>4</sup>

$$0 = \langle g(x), 1 \rangle\_{L\_2(\Gamma')} = \underbrace{\langle \psi\_2(x), 1 \rangle\_{L\_2(\Gamma)}}\_{=0} + \psi\_2(V^1)\ell + c\mathcal{L}',$$

where we keep the notation from the proof of the previous theorem. We have used that the eigenfunction *ψ*<sub>2</sub> has mean value zero, *i.e.* is orthogonal to the ground state on *Γ*. This implies *c* = −*ψ*<sub>2</sub>(*V*<sup>1</sup>)ℓ/𝓛′. Now we are ready to get an estimate for *λ*<sub>2</sub>(*Γ′*) using the Rayleigh quotient

$$
\lambda\_2(\Gamma') \le \frac{\|\mathbf{g}'\|\_{L\_2(\Gamma')}^2}{\|\mathbf{g}\|\_{L\_2(\Gamma')}^2}.
$$

The numerator and denominator can be evaluated as follows

$$\begin{aligned} \|g'\|\_{L\_2(\Gamma')}^2 &= \|\psi\_2'\|\_{L\_2(\Gamma)}^2 = \lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2, \\ \|g\|\_{L\_2(\Gamma')}^2 &= \|\psi\_2 + c\|\_{L\_2(\Gamma)}^2 + |\psi\_2(V^1) + c|^2 \ell \end{aligned}$$

<sup>4</sup> In what follows we are going to use the same notation 1 for the functions identically equal to one on both metric graphs *Γ* and *Γ′*.

$$\begin{aligned} &= \|\psi\_2\|\_{L\_2(\Gamma)}^2 + c^2 \mathcal{L} + |\psi\_2(V^1) + c|^2 \ell \\ &\ge \|\psi\_2\|\_{L\_2(\Gamma)}^2 \end{aligned}$$

leading to (12.41).

One may think that the above theorem is rather artificial due to the presence of condition (12.40). To see that this condition is natural, let us consider a couple of examples:

**Example 12.13** Let *Γ* be the graph formed by one edge of length *a*. The spectrum of *L*<sup>st</sup>(*Γ*) is σ(*L*<sup>st</sup>(*Γ*)) = {(*πn*/*a*)<sup>2</sup>}<sub>*n*=0</sub><sup>∞</sup>. All eigenvalues have multiplicity one.

Consider the graph *Γ′* obtained from *Γ* by adding an edge of length *b*, so that *Γ′* is formed by two intervals of lengths *a* and *b* connected in parallel (see Fig. 12.5). The graph *Γ′* is equivalent to the circle of length *a* + *b*. The spectrum is σ(*L*<sup>st</sup>(*Γ′*)) = {(2*πn*/(*a*+*b*))<sup>2</sup>}<sub>*n*=0</sub><sup>∞</sup>, where all the eigenvalues except for the ground state have double multiplicity.

Let us study the relation between the spectral gaps:

$$
\lambda\_2(\Gamma) = \frac{\pi^2}{a^2}, \quad \lambda\_2(\Gamma') = \frac{4\pi^2}{(a+b)^2}.
$$

Any relation between these values is possible:

$$b > a \implies \lambda\_2(\Gamma) > \lambda\_2(\Gamma'),$$

$$b < a \implies \lambda\_2(\Gamma) < \lambda\_2(\Gamma').$$

Therefore the first excited eigenvalue is not in general a monotonically decreasing function of the set of edges: the spectral gap decreases only if certain additional conditions are satisfied.
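The dichotomy of Example 12.13 is easy to verify with the explicit formulas above:

```python
import math

def gap_edge(a):
    # Single edge of length a: lambda_2 = (pi/a)^2.
    return (math.pi / a) ** 2

def gap_circle(c):
    # Circle of length c: lambda_2 = (2*pi/c)^2.
    return (2 * math.pi / c) ** 2

a = 1.0
for b, expect_smaller in [(2.0, True), (0.5, False)]:
    # Gamma' is the circle of length a + b obtained by adding the edge b.
    decreased = gap_circle(a + b) < gap_edge(a)
    assert decreased == expect_smaller   # b > a: gap drops; b < a: gap grows
```

The crossover is exactly at *b* = *a*, where π<sup>2</sup>/*a*<sup>2</sup> = 4π<sup>2</sup>/(*a*+*b*)<sup>2</sup>.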

**Example 12.14** Consider, in addition to the graphs discussed in Example 12.13, the graph *Γ″* obtained from *Γ′* by adding another edge of length *c* between the same two vertices (see Fig. 12.5). Hence *Γ″* is formed by three parallel edges of lengths *a*, *b* and *c*. The first excited eigenfunction for *L*<sup>st</sup>(*Γ′*) can always be chosen so that its values at the two vertices are equal. Then, in accordance with Theorem 12.12, the first excited eigenvalue for *Γ″* is less than or equal to the first excited eigenvalue for *Γ′*:

$$
\lambda\_2(\Gamma'') \le \lambda\_2(\Gamma').
$$

This fact can easily be supported by explicit calculations.
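This claim can be tested numerically. The sketch below is ours, not the book's: for a "pumpkin" graph (two vertices joined by parallel edges of lengths ℓ<sub>*j*</sub>), writing the eigenfunction as *p* cos *kx* + *B<sub>j</sub>* sin *kx* on each edge and imposing continuity and Kirchhoff conditions leads, after elimination, to the two secular branches Σ<sub>*j*</sub> tan(*k*ℓ<sub>*j*</sub>/2) = 0 and Σ<sub>*j*</sub> cot(*k*ℓ<sub>*j*</sub>/2) = 0; the assumed solver scans both for the smallest positive root.

```python
import math

def _first_root(F, k_max, n=20000):
    """Smallest zero of F in (0, k_max): scan a grid for sign changes,
    refine by bisection and reject sign changes caused by poles of F."""
    prev_k = k_max / n
    prev_v = F(prev_k)
    for i in range(2, n + 1):
        k = k_max * i / n
        v = F(k)
        if prev_v * v < 0:
            a, b, fa = prev_k, k, prev_v
            for _ in range(80):
                m = 0.5 * (a + b)
                fm = F(m)
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            root = 0.5 * (a + b)
            if abs(F(root)) < 1e-6:    # genuine zero, not a tan/cot pole
                return root
        prev_k, prev_v = k, v
    return None

def gap_pumpkin(lengths):
    """lambda_2 = k^2 for two vertices joined by parallel edges (assumed
    secular equations; exceptional eigenvalues with sin(k*l) = 0 are not
    scanned, which is harmless for the examples below)."""
    def f_tan(k):
        return sum(math.tan(k * l / 2) for l in lengths)
    def f_cot(k):
        return sum(math.cos(k * l / 2) / math.sin(k * l / 2) for l in lengths)
    k_max = 2 * math.pi / min(lengths)   # large enough for these examples
    roots = [r for r in (_first_root(f_tan, k_max), _first_root(f_cot, k_max))
             if r is not None]
    return min(roots) ** 2

gap2 = gap_pumpkin([1.0, 2.0])        # Gamma': the circle of length 3
gap3 = gap_pumpkin([1.0, 2.0, 1.0])   # Gamma'': a third edge added
assert abs(gap2 - 4 * math.pi ** 2 / 9) < 1e-6   # circle value (2 pi/(a+b))^2
assert gap3 < gap2                                # the gap drops
```

The solver reproduces the circle value 4π<sup>2</sup>/9 for edges (1, 2), and adding a third edge of length 1 lowers the gap, in agreement with Theorem 12.12.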

The above examples and theorems show that the spectral gap has a tendency to decrease when a new, sufficiently long edge is added. This is not surprising, since the addition of an edge increases the total length of the graph, but the eigenvalues satisfy

**Fig. 12.5** The graphs *Γ*, *Γ′* and *Γ″* from Examples 12.13 and 12.14

Weyl's law and are therefore asymptotically close to (*πn*)<sup>2</sup>/𝓛<sup>2</sup>. This is in contrast to discrete graphs, for which the addition of an edge never decreases the spectral gap.
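The Weyl asymptotics invoked here can be made concrete on the circle, where the whole spectrum is explicit:

```python
import math

def circle_eigenvalues(L, count):
    # Circle of length L: eigenvalue 0 once, then (2*pi*m/L)^2 twice each.
    evs = [0.0]
    m = 1
    while len(evs) < count:
        evs += [(2 * math.pi * m / L) ** 2] * 2
        m += 1
    return evs[:count]

L = 3.0
evs = circle_eigenvalues(L, 200)
# Weyl's law: lambda_n is asymptotically (pi*n/L)^2 (indexing from n = 1).
for n in (100, 199):
    ratio = evs[n - 1] / (math.pi * n / L) ** 2
    assert abs(ratio - 1) < 0.03
```

Even-index eigenvalues hit (*πn*/𝓛)<sup>2</sup> exactly, while odd ones lag by a factor (*n*−1)<sup>2</sup>/*n*<sup>2</sup> → 1.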

Condition (12.40) in Theorem 12.12 is not easy to check for non-trivial graphs and therefore it might be interesting to obtain other explicit sufficient conditions. In what follows we discuss one such geometric condition ensuring that the spectral gap drops as a new edge is added to a graph. The main idea is to compare the length ℓ of the new edge with the total length 𝓛(*Γ*) of the original graph. It turns out that if ℓ > 𝓛(*Γ*), then the spectral gap always decreases. We have already observed this phenomenon when discussing Example 12.13, where the behaviour of *λ*<sub>2</sub> depended on the ratio between the lengths *a* and *b*. If *b* = ℓ > *a* = 𝓛(*Γ*), then the gap decreases. It is surprising that the same explicit condition holds for arbitrary connected graphs *Γ*.

**Theorem 12.15** *Let Γ be a connected finite compact metric graph of length* 𝓛(*Γ*) *and let Γ′ be a graph constructed from Γ by adding an edge of length* ℓ *between certain two vertices. If*

$$
\ell > \mathcal{L}(\Gamma),
\tag{12.42}
$$

*then the spectral gaps of the corresponding standard Laplacians satisfy the estimate* 

$$
\lambda\_2(\Gamma) \ge \lambda\_2(\Gamma'). \tag{12.43}
$$

*Proof* Let *ψ*<sub>2</sub> be any eigenfunction corresponding to the first excited eigenvalue *λ*<sub>2</sub>(*Γ*) of *L*<sup>st</sup>(*Γ*). The minimum of the Rayleigh quotient is attained at *ψ*<sub>2</sub>:

$$\lambda\_2(\Gamma) = \min\_{u \in \overset{c}{W}{}\_2^1(\Gamma):\, u \perp 1} \frac{\|u'\|\_{L\_2(\Gamma)}^2}{\|u\|\_{L\_2(\Gamma)}^2} = \frac{\|\psi\_2'\|\_{L\_2(\Gamma)}^2}{\|\psi\_2\|\_{L\_2(\Gamma)}^2},$$

where the set of continuous *W*<sup>1</sup><sub>2</sub>-functions on the graph *Γ* is denoted by

$$
\overset{c}{W}{}\_2^1(\Gamma) = W\_2^1(\Gamma \backslash \mathbf{V}) \cap C(\Gamma).
$$

Let us denote by *V*<sup>1</sup> and *V*<sup>2</sup> the vertices in *Γ* where the new edge *E* of length ℓ is attached.

The eigenvalue *λ*2*( )* can again be estimated using the Rayleigh quotient

$$\lambda\_2(\Gamma') = \min\_{u \in \overset{c}{W}{}\_2^1(\Gamma'):\, u \perp 1} \frac{\|u'\|\_{L\_2(\Gamma')}^2}{\|u\|\_{L\_2(\Gamma')}^2} \le \frac{\|g'\|\_{L\_2(\Gamma')}^2}{\|g\|\_{L\_2(\Gamma')}^2},\tag{12.44}$$

where *g(x)* is any continuous *W*<sup>1</sup><sub>2</sub>(*Γ′*)-function orthogonal to the constant functions in *L*<sub>2</sub>(*Γ′*). Let us choose a trial function *g* of the form *g(x)* = *f(x)* + *c*, where

$$f(x) := \begin{cases} \psi\_2(x), & x \in \Gamma, \\ \gamma\_1 + \gamma\_2 \sin\left(\frac{\pi x}{\ell}\right), & x \in \Gamma' \setminus \Gamma = E = [-\ell/2, \ell/2], \end{cases} \tag{12.45}$$

with *γ*<sub>1</sub> = (*ψ*<sub>2</sub>(*V*<sup>1</sup>) + *ψ*<sub>2</sub>(*V*<sup>2</sup>))/2 and *γ*<sub>2</sub> = (*ψ*<sub>2</sub>(*V*<sup>2</sup>) − *ψ*<sub>2</sub>(*V*<sup>1</sup>))/2. Here we assumed that the left endpoint of the interval is connected to *V*<sup>1</sup> and the right endpoint to *V*<sup>2</sup>.

The function *f* obviously belongs to the continuous Sobolev space on *Γ′*, since it is continuous at *V*<sup>1</sup> and *V*<sup>2</sup>, but it is not necessarily orthogonal to the ground state eigenfunction 1. The constant *c* is adjusted in order to ensure that the orthogonality condition is satisfied:

$$0 = \langle g, 1 \rangle\_{L\_2(\Gamma')} = c\mathcal{L}' + \underbrace{\langle \psi\_2, 1 \rangle\_{L\_2(\Gamma)}}\_{=0} + \int\_{-\ell/2}^{\ell/2} \left( \gamma\_1 + \gamma\_2 \sin\left(\frac{\pi x}{\ell}\right) \right) dx = c\mathcal{L}' + \gamma\_1 \ell$$

$$\Rightarrow c = -\frac{\gamma\_1 \ell}{\mathcal{L}'}.\tag{12.46}$$

The function *g* can be used as a trial function in (12.44) to estimate the spectral gap. Let us begin by computing the denominator, using that *g* is orthogonal to 1 so that the Pythagorean theorem applies:

$$\begin{split} \|g\|\_{L\_2(\Gamma')}^2 &= \|f+c\|\_{L\_2(\Gamma')}^2 = \|f\|\_{L\_2(\Gamma')}^2 - c^2 \mathcal{L}' \\ &= \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \int\_{-\ell/2}^{\ell/2} \left(\gamma\_1 + \gamma\_2 \sin\left(\frac{\pi x}{\ell}\right)\right)^2 dx - c^2 \mathcal{L}' \\ &= \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \ell \gamma\_1^2 + \frac{\ell}{2} \gamma\_2^2 - c^2 \mathcal{L}'. \end{split} \tag{12.47}$$

The numerator yields

$$\|g'\|\_{L\_2(\Gamma')}^2 = \|f'\|\_{L\_2(\Gamma')}^2 = \|\psi\_2'\|\_{L\_2(\Gamma)}^2 + \int\_{-\ell/2}^{\ell/2} \gamma\_2^2 \frac{\pi^2}{\ell^2} \cos^2\left(\frac{\pi x}{\ell}\right) dx = \lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \gamma\_2^2 \frac{\pi^2}{2\ell}. \tag{12.48}$$

After plugging (12.47) and (12.48) into the Rayleigh quotient (12.44) we obtain

$$
\lambda\_2(\Gamma') \le \frac{\lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \gamma\_2^2 \frac{\pi^2}{2\ell}}{\|\psi\_2\|\_{L\_2(\Gamma)}^2 + \ell \gamma\_1^2 + \frac{\ell}{2} \gamma\_2^2 - c^2 \mathcal{L}'}.
$$

Using (12.46) the last estimate implies

$$\lambda\_2(\Gamma') \le \frac{\lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \gamma\_2^2 \frac{\pi^2}{2\ell}}{\|\psi\_2\|\_{L\_2(\Gamma)}^2 + \ell \gamma\_1^2 \left(1 - \frac{\ell}{\mathcal{L}'}\right) + \frac{\ell}{2} \gamma\_2^2} \le \frac{\lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2 + \gamma\_2^2 \frac{\pi^2}{2\ell}}{\|\psi\_2\|\_{L\_2(\Gamma)}^2 + \frac{\ell}{2} \gamma\_2^2},\tag{12.49}$$

where we used that ℓ < 𝓛′ = 𝓛 + ℓ. It remains to take into account the fundamental estimate (12.1):

$$
\lambda\_2(\Gamma) \ge \left(\frac{\pi}{\mathcal{L}}\right)^2.
$$

Then, taking into account that ℓ > 𝓛(*Γ*) implies π<sup>2</sup>/ℓ<sup>2</sup> < π<sup>2</sup>/𝓛<sup>2</sup> ≤ *λ*<sub>2</sub>(*Γ*), the estimate (12.49) can be written as

$$
\lambda\_2(\Gamma') \le \frac{\lambda\_2(\Gamma) \left\| \psi\_2 \right\|\_{L\_2(\Gamma)}^2 + \lambda\_2(\Gamma) \gamma\_2^2 \ell/2}{\left\| \psi\_2 \right\|\_{L\_2(\Gamma)}^2 + \gamma\_2^2 \ell/2} = \lambda\_2(\Gamma). \tag{12.50}
$$

The theorem is proven.

The fundamental estimate (12.1) was crucial for the proof. It relates the spectral gap and the total length of the metric graph, *i.e.* geometric and spectral properties of metric graphs. It might be interesting to prove an analogue of the theorem for discrete graphs. Proposition 24.10 states that the spectral gap increases if one edge is added to a discrete graph. Adding a long edge should correspond to adding a chain to a discrete graph.

The above theorem can again be proven using perturbation theory methods. The standard Laplacian *L*<sup>st</sup>(*Γ* ∪ [0, ℓ]) has at least four eigenvalues in the interval [0, *λ*<sub>2</sub>(*Γ*)]: the double eigenvalue 0 = *λ*<sub>1</sub>(*Γ*) = *λ*<sub>1</sub>([0, ℓ]), *λ*<sub>2</sub>(*Γ*) and *λ*<sub>2</sub>([0, ℓ]) = π<sup>2</sup>/ℓ<sup>2</sup> < π<sup>2</sup>/𝓛<sup>2</sup> ≤ *λ*<sub>2</sub>(*Γ*). The difference between the resolvents of *L*<sup>st</sup>(*Γ* ∪ [0, ℓ]) and *L*<sup>st</sup>(*Γ′*) has rank two, hence (12.50) holds.

The previous theorem gives us a sufficient geometric condition for the spectral gap to decrease. Let us now study the case where the spectral gap increases. Just as we proved that adding an edge that is *long enough* always makes the spectral gap smaller (Theorem 12.15), we claim that an edge that is *short enough* does not let it decrease. We have already seen in Theorem 12.9 that adding an edge of zero length (joining two vertices into one) may lead to an increase of the spectral gap. It turns out that the criterion for the gap to decrease can be formulated explicitly in terms of the eigenfunction on the larger graph. Therefore let us change our point of view and study the behaviour of the spectral gap as an edge is deleted.

## **12.6 Bonus Section: Further Topological Perturbations**

## *12.6.1 Cutting Edges*

In this subsection we study the behaviour of the spectral gap when one of the edges is deleted. The result of such a procedure is not obvious: deleting an edge decreases the total length of the metric graph, so one expects the first excited eigenvalue to increase; on the other hand, deleting an edge decreases the graph's edge connectivity and therefore the spectral gap is expected to decrease. It is easy to construct examples where either of these two tendencies prevails: Example 12.13 shows that the spectral gap may both decrease and increase when an edge is deleted.

Let us discuss first what happens when one of the edges is cut at a certain internal point. Let *Γ*<sup>∗</sup> be the metric graph obtained from a connected metric graph *Γ* by cutting one of the edges, say *E*<sub>1</sub> = [*x*<sub>1</sub>, *x*<sub>2</sub>], at a point *x*<sup>∗</sup> ∈ (*x*<sub>1</sub>, *x*<sub>2</sub>). It will be convenient to denote by *x*<sup>∗</sup><sub>1</sub> and *x*<sup>∗</sup><sub>2</sub> the points on the two sides of the cut. In other words, the graph *Γ*<sup>∗</sup> has precisely the same set of edges and vertices as *Γ*, except that the edge [*x*<sub>1</sub>, *x*<sub>2</sub>] is substituted by the two edges [*x*<sub>1</sub>, *x*<sup>∗</sup><sub>1</sub>] and [*x*<sup>∗</sup><sub>2</sub>, *x*<sub>2</sub>], and two new vertices *V*<sup>1∗</sup> = {*x*<sup>∗</sup><sub>1</sub>} and *V*<sup>2∗</sup> = {*x*<sup>∗</sup><sub>2</sub>} are added to the set of vertices. It is irrelevant whether the new graph is still connected or not.

One may change the point of view and consider the point *x*<sup>∗</sup> as a degree two vertex; then Theorem 12.9 can be reformulated as:

**Theorem 12.16** *Let Γ be a connected metric graph and let Γ*<sup>∗</sup> *be another graph obtained from Γ by cutting one of the edges at an internal point x*<sup>∗</sup>*, producing two new vertices V*<sup>1∗</sup> *and V*<sup>2∗</sup>*.*

*1. Then the first excited eigenvalues satisfy the following inequality* 

$$
\lambda\_2(\Gamma) \ge \lambda\_2(\Gamma^\*). \tag{12.51}
$$

*2. If λ*<sub>2</sub>(*Γ*<sup>∗</sup>) = *λ*<sub>2</sub>(*Γ*)*, then every eigenfunction of L*<sup>st</sup>(*Γ*) *corresponding to λ*<sub>2</sub>(*Γ*) *satisfies the Neumann condition at the cut point x*<sup>∗</sup>*: ψ′*<sub>2</sub>(*x*<sup>∗</sup>) = 0*. If at least one of the eigenfunctions ψ*<sup>∗</sup><sub>2</sub> *on Γ*<sup>∗</sup> *satisfies ψ*<sup>∗</sup><sub>2</sub>(*V*<sup>1∗</sup>) = *ψ*<sup>∗</sup><sub>2</sub>(*V*<sup>2∗</sup>)*, then λ*<sub>2</sub>(*Γ*<sup>∗</sup>) = *λ*<sub>2</sub>(*Γ*)*.*

This theorem implies that the spectral gap has a tendency to decrease as an edge is cut at an internal point. Note that the total length of the graph is preserved.

## *12.6.2 Deleting Edges*

Let us study now what happens if an edge is deleted, or if an interval of non-zero length is cut away from an edge (without gluing the remaining sides together). Every point inside an edge can be seen as a degree two vertex, hence it is enough to study what happens if an edge is deleted.

The following theorem provides a sufficient condition guaranteeing that the spectral gap decreases as one of the edges is deleted.

**Theorem 12.17** *Let Γ be a connected finite compact metric graph of total length* 𝓛 *and let Γ*<sup>∗</sup> *be another connected metric graph obtained from Γ by deleting one edge of length* ℓ *between certain vertices V*<sup>1</sup> *and V*<sup>2</sup>*. Assume in addition that*

$$\max\_{\psi\_2:\, L^{\mathrm{st}}(\Gamma)\psi\_2 = \lambda\_2 \psi\_2} \left( \frac{(\psi\_2(V^1) - \psi\_2(V^2))^2}{(\psi\_2(V^1) + \psi\_2(V^2))^2} \cot^2 \frac{k\_2 \ell}{2} - 1 \right) \frac{k\_2}{2} \cot \frac{k\_2 \ell}{2} \ge (\mathcal{L} - \ell)^{-1},\tag{12.52}$$

*where λ*<sub>2</sub>(*Γ*) = *k*<sub>2</sub><sup>2</sup>, *k*<sub>2</sub> > 0*, is the first excited eigenvalue of L*<sup>st</sup>(*Γ*)*, then*

$$
\lambda\_2(\Gamma) \ge \lambda\_2(\Gamma^\*). \tag{12.53}
$$

*The inequality holds also in the special case where there exists an eigenfunction ψ*<sub>2</sub> *with ψ*<sub>2</sub>(*V*<sup>1</sup>) = −*ψ*<sub>2</sub>(*V*<sup>2</sup>)*, provided* ℓ < *π*/*k*<sub>2</sub>*.*

*Proof* It will be convenient to denote the edge to be deleted by *E* = *Γ* \ *Γ*<sup>∗</sup> and to introduce the notation 𝓛<sup>∗</sup> = 𝓛 − ℓ for the total length of *Γ*<sup>∗</sup>.

Let us consider any eigenfunction *ψ*<sub>2</sub> on *Γ* corresponding to the eigenvalue *λ*<sub>2</sub>(*Γ*). We then define the continuous function *g* ∈ *W*<sup>1</sup><sub>2</sub>(*Γ*<sup>∗</sup>) by

$$\mathbf{g} = \psi\_2|\_{\Gamma^\*} + c,$$

where the constant *c* is to be adjusted so that *g* has mean value zero on *Γ*<sup>∗</sup>:

$$\langle \mathbf{g}, 1 \rangle\_{L\_2(\Gamma^\*)} = 0. \tag{12.54}$$

Straightforward calculations lead to

$$0 = \langle \psi\_2, 1 \rangle\_{L\_2(\Gamma^\*)} + c\mathcal{L}^\* = -\langle \psi\_2, 1 \rangle\_{L\_2(E)} + c\mathcal{L}^\* \Rightarrow c = \frac{\int\_E \psi\_2(\mathbf{x})d\mathbf{x}}{\mathcal{L}^\*}.\tag{12.55}$$

The function *g* can be used to estimate the first excited eigenvalue *λ*2*(*∗*)*:

$$\lambda\_2(\Gamma^\*) = \min\_{u \in \overset{c}{W}{}\_2^1(\Gamma^\*):\, u \perp 1} \frac{\|u'\|\_{L\_2(\Gamma^\*)}^2}{\|u\|\_{L\_2(\Gamma^\*)}^2} \le \frac{\|g'\|\_{L\_2(\Gamma^\*)}^2}{\|g\|\_{L\_2(\Gamma^\*)}^2}. \tag{12.56}$$

Bearing in mind that ⟨*ψ*<sub>2</sub>, 1⟩<sub>*L*<sub>2</sub>(*Γ*)</sub> = 0 and using (12.55), we evaluate the denominator in (12.56) first:

$$\|g\|\_{L\_2(\Gamma^\*)}^2 = \|\psi\_2 + c\|\_{L\_2(\Gamma^\*)}^2 = \int\_{\Gamma} (\psi\_2 + c)^2 \, dx - \int\_E (\psi\_2 + c)^2 \, dx$$

$$= \|\psi\_2\|\_{L\_2(\Gamma)}^2 - \int\_E \psi\_2^2 \, dx - \frac{1}{\mathcal{L}^\*} \left(\int\_E \psi\_2 \, dx\right)^2. \tag{12.57}$$

The numerator similarly yields

$$\|g'\|\_{L\_2(\Gamma^\*)}^2 = \int\_{\Gamma} \left(\psi\_2'\right)^2 dx - \int\_E \left(\psi\_2'\right)^2 dx = \lambda\_2(\Gamma) \|\psi\_2\|\_{L\_2(\Gamma)}^2 - \int\_E \left(\psi\_2'\right)^2 dx. \tag{12.58}$$

Plugging (12.57) and (12.58) into (12.56) we arrive at

$$
\lambda\_2(\Gamma^\*) \le \frac{\lambda\_2(\Gamma) \left\| \psi\_2 \right\|\_{L\_2(\Gamma)}^2 - \int\_E (\psi\_2')^2 \, dx}{\left\| \psi\_2 \right\|\_{L\_2(\Gamma)}^2 - \int\_E \psi\_2^2 \, dx - \frac{1}{\mathcal{L}^\*} \left( \int\_E \psi\_2 \, dx \right)^2}. \tag{12.59}$$

Let us evaluate the integrals appearing in (12.59), taking into account that *ψ*<sub>2</sub> is a solution to Eq. (2.30) on the edge *E*, which can be parameterised as *E* = [−ℓ/2, ℓ/2] so that *x* = −ℓ/2 belongs to *V*<sup>1</sup> and *x* = ℓ/2 to *V*<sup>2</sup>:

$$
\psi\_2|\_E(\mathbf{x}) = \alpha \sin \left(k\_2 \mathbf{x}\right) + \beta \cos \left(k\_2 \mathbf{x}\right),
\tag{12.60}
$$

where

$$\alpha = -\frac{\psi\_2(V^1) - \psi\_2(V^2)}{2\sin(k\_2\ell/2)}, \quad \beta = \frac{\psi\_2(V^1) + \psi\_2(V^2)}{2\cos\left(k\_2\ell/2\right)}.\tag{12.61}$$

Direct calculations imply

$$\begin{aligned} \int\_{E} \psi\_{2}(x)\,dx &= \frac{2\beta}{k\_{2}}\sin\left(\frac{k\_{2}\ell}{2}\right);\\ \int\_{E} (\psi\_{2}(x))^{2}\,dx &= \frac{\alpha^{2}+\beta^{2}}{2}\ell - \frac{\alpha^{2}-\beta^{2}}{2}\frac{\sin(k\_{2}\ell)}{k\_{2}};\\ \int\_{E} (\psi\_{2}'(x))^{2}\,dx &= k\_{2}^{2}\left(\frac{\alpha^{2}+\beta^{2}}{2}\ell + \frac{\alpha^{2}-\beta^{2}}{2}\frac{\sin(k\_{2}\ell)}{k\_{2}}\right). \end{aligned}$$

Inserting calculated values into (12.59) we get

$$\lambda\_{2}(\Gamma^{\*}) \le \lambda\_{2}(\Gamma)\, \frac{\|\psi\_{2}\|\_{L\_{2}(\Gamma)}^{2} - \frac{\alpha^{2} + \beta^{2}}{2}\ell - \frac{\alpha^{2} - \beta^{2}}{2} \frac{\sin(k\_{2}\ell)}{k\_{2}}}{\|\psi\_{2}\|\_{L\_{2}(\Gamma)}^{2} - \frac{\alpha^{2} + \beta^{2}}{2}\ell + \frac{\alpha^{2} - \beta^{2}}{2} \frac{\sin(k\_{2}\ell)}{k\_{2}} - \frac{1}{\mathcal{L}^{\*}} \frac{4\beta^{2}}{\lambda\_{2}(\Gamma)} \sin^{2}\left(\frac{k\_{2}\ell}{2}\right)}. \tag{12.62}$$

To guarantee that the quotient is not greater than 1, and therefore *λ*<sub>2</sub>(*Γ*<sup>∗</sup>) ≤ *λ*<sub>2</sub>(*Γ*), it is enough that

$$\frac{\alpha^2 - \beta^2}{2} \frac{\sin(k\_2 \ell)}{k\_2} \ge -\frac{\alpha^2 - \beta^2}{2} \frac{\sin(k\_2 \ell)}{k\_2} + \frac{1}{\mathcal{L}^\*} \frac{4\beta^2}{\lambda\_2(\Gamma)} \sin^2\left(\frac{k\_2 \ell}{2}\right)$$

$$\iff \frac{k\_2}{2} \left(\frac{\alpha^2}{\beta^2} - 1\right) \cot\left(\frac{k\_2 \ell}{2}\right) \ge (\mathcal{L}^\*)^{-1}. \tag{12.63}$$

Using (12.61) the last inequality can be written as

$$\left(\frac{(\psi\_2(V^1) - \psi\_2(V^2))^2}{(\psi\_2(V^1) + \psi\_2(V^2))^2} \cot^2\left(\frac{k\_2 \ell}{2}\right) - 1\right) \frac{k\_2}{2} \cot\left(\frac{k\_2 \ell}{2}\right) \ge (\mathcal{L}^\*)^{-1}.$$

Remembering that the eigenfunction *ψ*<sub>2</sub> could be chosen arbitrarily, we arrive at (12.52).

It remains to study the special case where *ψ*2*(V* <sup>1</sup>*)* = −*ψ*2*(V* <sup>2</sup>*).* It follows that *β* = 0 and

$$\alpha = -\frac{\psi\_2(V^1)}{\sin (k\_2 \ell / 2)}.$$

Instead of (12.62) we arrive at the following inequality

$$
\lambda\_2(\Gamma^\*) \le \lambda\_2(\Gamma) \frac{\|\:\psi\_2\|\_{L\_2(\Gamma)}^2 - \frac{\alpha^2}{2}\ell - \frac{\alpha^2}{2}\frac{\sin(k\_2\ell)}{k\_2}}{\|\:\psi\_2\|\_{L\_2(\Gamma)}^2 - \frac{\alpha^2}{2}\ell + \frac{\alpha^2}{2}\frac{\sin(k\_2\ell)}{k\_2}}.\tag{12.64}$$

The quotient is always less than 1 provided sin(*k*<sub>2</sub>ℓ) > 0, which is true if ℓ < *π*/*k*<sub>2</sub>.

Roughly speaking, condition (12.52) means that the length ℓ is sufficiently small, provided of course *ψ*<sub>2</sub>(*V*<sup>1</sup>) ≠ *ψ*<sub>2</sub>(*V*<sup>2</sup>). Indeed, for small ℓ the cotangent term is of order 1/ℓ. Therefore the left-hand side of (12.52) is of order 1/ℓ<sup>3</sup> and thus grows to infinity as ℓ decreases, while the right-hand side remains bounded.
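This order-of-magnitude statement can be checked numerically; the helper `lhs` below is hypothetical shorthand for the left-hand side of (12.52) evaluated at a fixed eigenfunction ratio *r* = ((*ψ*<sub>2</sub>(*V*<sup>1</sup>) − *ψ*<sub>2</sub>(*V*<sup>2</sup>))/(*ψ*<sub>2</sub>(*V*<sup>1</sup>) + *ψ*<sub>2</sub>(*V*<sup>2</sup>)))<sup>2</sup>:

```python
import math

def lhs(ell, r, k):
    # Left-hand side of (12.52) for fixed ratio r and k = k_2.
    c = math.cos(k * ell / 2) / math.sin(k * ell / 2)   # cot(k*ell/2)
    return (r * c ** 2 - 1) * (k / 2) * c

r, k = 0.5, 2.0
# cot(k*ell/2) ~ 2/(k*ell) for small ell, so lhs(ell) ~ 4*r/(k^2 * ell^3):
for ell in (1e-2, 1e-3, 1e-4):
    assert abs(lhs(ell, r, k) * ell ** 3 / (4 * r / k ** 2) - 1) < 0.01
```

So the left-hand side indeed blows up like 1/ℓ<sup>3</sup>, confirming that (12.52) holds for all sufficiently small ℓ.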

Let us apply the above theorem to obtain an estimate for the length of the piece that can be cut out of an edge so that the spectral gap still decreases. Consider any edge in *Γ*, say *E*<sub>1</sub> = [*x*<sub>1</sub>, *x*<sub>2</sub>], and choose an arbitrary internal point *x*<sup>∗</sup> ∈ (*x*<sub>1</sub>, *x*<sub>2</sub>). Assume that we cut away an interval of length ℓ centred at *x*<sup>∗</sup>. Of course the length ℓ should satisfy the obvious geometric condition: *x*<sub>1</sub> ≤ *x*<sup>∗</sup> − ℓ/2 and *x*<sup>∗</sup> + ℓ/2 ≤ *x*<sub>2</sub>. We assume in addition that

$$
\ell < \frac{\pi}{2k\_2} \tag{12.65}
$$

guaranteeing in particular that the cotangent function in (12.52) is positive.

The function *ψ*<sup>2</sup> on the edge *E*<sup>1</sup> can be written in a form similar to (12.60)

$$
\psi\_2(\mathbf{x}) = \alpha \sin k\_2(\mathbf{x} - \mathbf{x}^\*) + \beta \cos k\_2(\mathbf{x} - \mathbf{x}^\*).
$$

Then formula (12.63) implies that the spectral gap decreases as the interval is cut away if

$$|\alpha| > |\beta|.\tag{12.66}$$

and the following estimate is satisfied

$$\cot\left(\frac{k\_2\ell}{2}\right) \ge \frac{2}{k\_2\mathcal{L}^\*\left(\frac{\alpha^2}{\beta^2} - 1\right)}.\tag{12.67}$$

Condition (12.66) means that the eigenfunction does not satisfy the Neumann condition at *x*<sup>∗</sup>. This condition was expected, since if *ψ*<sub>2</sub> is symmetric with respect to *x*<sup>∗</sup>, then the spectral gap may increase for any ℓ. Indeed, one may imagine that deleting the interval is performed in two steps. One first cuts the edge *E*<sub>1</sub> at the point *x*<sup>∗</sup>. Then one deletes the intervals [*x*<sup>∗</sup> − ℓ/2, *x*<sup>∗</sup><sub>1</sub>] and [*x*<sup>∗</sup><sub>2</sub>, *x*<sup>∗</sup> + ℓ/2]. If *α* = 0 (symmetric function), then the spectral gap may be preserved in accordance with Theorem 12.16. Deleting the pendant edges (the intervals [*x*<sup>∗</sup> − ℓ/2, *x*<sup>∗</sup><sub>1</sub>] and [*x*<sup>∗</sup><sub>2</sub>, *x*<sup>∗</sup> + ℓ/2]) does not decrease the spectral gap due to Theorem 12.11.

Using the fact that under condition (12.65) we have cot(*k*<sub>2</sub>ℓ/2) ≥ 2/(*k*<sub>2</sub>ℓ), the following explicit estimate on ℓ can be obtained

$$\ell \le (\mathcal{L} - \ell) \left( \frac{\alpha^2}{\beta^2} - 1 \right) \Rightarrow \ell \le \frac{\left( \frac{\alpha^2}{\beta^2} - 1 \right) \mathcal{L}}{1 + \left( \frac{\alpha^2}{\beta^2} - 1 \right)},\tag{12.68}$$

of course under condition (12.66). For the spectral gap not to increase it is enough that estimate (12.68) is satisfied for at least one eigenfunction *ψ*2:

$$\ell \le \min \left\{ \frac{\pi}{2k\_2},\ \max\_{\psi\_2:\, L^{\mathrm{st}}(\Gamma) \psi\_2 = \lambda\_2 \psi\_2} \frac{\left(\frac{\alpha^2}{\beta^2} - 1\right) \mathcal{L}}{1 + \left(\frac{\alpha^2}{\beta^2} - 1\right)} \right\},\tag{12.69}$$

where we have taken into account (12.65).
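The passage from the condition ℓ ≤ (𝓛 − ℓ)*t* to the closed-form bound in (12.68)–(12.69) is elementary algebra; a sampling check (with hypothetical sample values, `t` standing for *α*<sup>2</sup>/*β*<sup>2</sup> − 1):

```python
import random

# Sampling check of the algebra behind (12.68): for t > 0,
# ell <= (L - ell)*t  is equivalent to  ell <= t*L/(1 + t).
random.seed(1)
for _ in range(10000):
    L = random.uniform(0.1, 10.0)
    t = random.uniform(0.01, 10.0)
    ell = random.uniform(0.0, L)
    bound = t * L / (1 + t)
    if abs(ell - bound) > 1e-9:   # skip numerically borderline samples
        assert (ell <= (L - ell) * t) == (ell <= bound)
```

The equivalence follows from ℓ(1 + *t*) ≤ *t*𝓛, which is legitimate since 1 + *t* > 0.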

We see that if the eigenfunction *ψ*<sub>2</sub> is asymmetric with respect to the point *x*<sup>∗</sup> (*i.e.* (12.66) is satisfied), then a certain sufficiently small interval can be cut out of the edge ensuring that the spectral gap decreases despite the decrease of the total length.

We have shown that deleting not too long edges or cutting away short intervals from the edges may lead to a decrease of the spectral gap despite the fact that the total length of the graph decreases. One can see an analogy between these results and the phenomenon observed in [193], where the behaviour of the spectral gap under extension of edges was discussed for graphs with delta couplings at the vertices. It was shown that the lowest eigenvalue may increase when the edge lengths increase, provided the ground state has certain special properties.

**Problem 54** Consider the complete graph *K<sub>M</sub>* with *M* vertices connected by *M*(*M* − 1)/2 edges of equal length. What happens to the spectral gap if


**Problem 55** Consider the flower graph depicted in Fig. 12.4. Describe the behaviour of the spectral gap if


Consider both the cases where the edges have equal and different lengths.

Four different approaches to obtaining spectral estimates have been described in the current chapter:

- the Eulerian path technique;
- symmetrisation of functions;
- the Cheeger approach;
- topological perturbations.
The main ingredient of all the proofs is estimates on the quadratic forms, hence it is straightforward to take care of higher eigenvalues; in many cases the same proofs may be applied. Taking into account vertex conditions other than standard may lead to certain difficulties. For example, the symmetrisation technique and the Cheeger approach require that the functions from the quadratic form domain are continuous, which is the case for standard and delta vertex conditions. Only the approaches based on topological perturbations (and the Eulerian path technique, which uses topological perturbations) can be extended to arbitrary vertex conditions. We discuss this direction of research in the following chapter: starting from special classes of scaling-invariant conditions, we conclude our studies with most general vertex conditions.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 13 Higher Eigenvalues and Topological Perturbations**

Some fundamental estimates for higher eigenvalues of standard Laplacians have already been derived in Sect. 4.6. The goal of this chapter is twofold: on the one hand, considering the standard Laplacian, we derive explicit fundamental estimates for higher eigenvalues and describe the behaviour of such eigenvalues under topological perturbations, using the techniques developed in the previous chapter. On the other hand, considering Schrödinger operators with the most general vertex conditions, we analyse the behaviour of the spectrum under topological perturbations and show that the intuition gained during our studies of standard Laplacians cannot always be applied: the eigenvalues may depend on topological perturbations in a completely opposite way.

# **13.1 Fundamental Estimates for Higher Eigenvalues**

## *13.1.1 Lower Estimates*

Our aim here will be to derive explicit estimates for all higher eigenvalues of the standard Laplacian. Such estimates were first obtained by L. Friedlander [225] and we use his main ideas here.

Let us try to guess which metric graph minimises the eigenvalue *λj* . It is clear that *λj* is always greater than or equal to *λj*−1, which in turn is not less than *λj*−<sup>2</sup> and so on. Pressing down the eigenvalue *λj* we make it degenerate so that *λj* = *λj*−1*.* Pressing it further we make it triply degenerate, until we reach *λj* = *λj*−<sup>1</sup> = ··· = *λ*2, which is strictly larger than *λ*<sup>1</sup> = 0 since the ground state is non-degenerate (see Theorem 4.12). Hence our guess is that the *j*-th eigenvalue is minimised by the graph where *λj* has multiplicity *j* − 1 and is the first non-trivial eigenvalue.

Consider the equilateral star graph on *j* edges, each of length L*/j.* The second eigenvalue is degenerate with multiplicity *j* − 1 and coincides with the ground state eigenvalue of the Dirichlet-Neumann interval of length L*/j*:

$$
\lambda\_2 = \dots = \lambda\_j = \left(\frac{j\pi}{2\mathcal{L}}\right)^2. \tag{13.1}
$$

We therefore suspect that *(jπ/*2L*)*<sup>2</sup> provides the best lower estimate for *λj* on a graph of total length L*.* Our guess can of course not be considered a rigorous proof, but surprisingly it provides the correct answer to the problem at hand.
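Before turning to the rigorous proof, the guess can be probed numerically. The sketch below is ours, not from the text: it approximates the standard Laplacian on the equilateral star of total length L = 1 by the combinatorial Laplacian of a fine subdivision (a standard finite-difference scheme; the helper name `star_fd_laplacian` is illustrative) and checks the degenerate eigenvalue (13.1).

```python
import numpy as np

def star_fd_laplacian(j, m):
    """Finite-difference approximation of the standard Laplacian on an
    equilateral star: j edges of length 1/j (total length L = 1),
    m subintervals per edge; node 0 is the central vertex."""
    h = 1.0 / (j * m)                    # mesh size
    n = 1 + j * m                        # centre + m nodes per edge
    A = np.zeros((n, n))
    for e in range(j):
        prev = 0                         # every chain starts at the centre
        for k in range(m):
            cur = 1 + e * m + k
            A[prev, cur] = A[cur, prev] = 1.0
            prev = cur
    # combinatorial Laplacian / h^2: Kirchhoff coupling at the centre,
    # Neumann condition at the free (pendant) ends
    return (np.diag(A.sum(axis=1)) - A) / h**2

j = 3
lam = np.sort(np.linalg.eigvalsh(star_fd_laplacian(j, m=200)))
target = (j * np.pi / 2) ** 2            # (j*pi/(2*L))^2 with L = 1
print(lam[:4], target)                   # lam[1], lam[2] approximate target
```

With *j* = 3 the second eigenvalue is double and agrees with (3*π/*2)<sup>2</sup> ≈ 22.2 to well under one percent, while *λ*<sub>4</sub> is markedly larger, in accordance with the guess above.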

**Theorem 13.1** *Let Γ be a connected metric graph of total length* L*. Then the j -th eigenvalue of the standard Laplacian can be estimated as*

$$
\lambda\_j(L^{\rm st}(\Gamma)) \ge \left(\frac{j\pi}{2\mathcal{L}}\right)^2. \tag{13.2}
$$

*Equality occurs if and only if the graph is an equilateral star with j edges (a segment in the case j* = 2*).* 

*Proof* To prove the estimate it is enough to consider trees, since any graph can be turned into a tree by chopping a few of its vertices. The edges are preserved during this operation, but the domain of the quadratic form is enlarged, since the functions may attain different values on different pieces of the chopped vertices. The quadratic forms on the original graph and on the tree are given by exactly the same expression, hence the eigenvalues of the tree do not exceed the corresponding eigenvalues of the original graph.

We first prove that *given j any tree can be divided by at most j* − 1 *points into subgraphs of length at most* L*/j.* Every point *x*<sup>0</sup> inside an edge naturally divides the tree into two parts each containing points that can be joined by paths not passing through *x*0*.* In a similar way if *x*<sup>0</sup> is a vertex, then the tree is divided into *d* components, where *d* is the degree of the vertex.

Let *T* be a tree. Consider all pendant edges in *T* - the edges attached to vertices of degree one - and pick any star subgraph of degree *d* containing at least *d* − 1 pendant edges. We have three possibilities:


We repeat the described procedure cutting at most *j*−1 times until the tree is divided into at least *j* components of lengths at most L*/j.* Let us denote the points dividing *T* by *x*1*, x*2*,...,xm*, *m* ≤ *j* − 1*.* We restore all star graphs that were substituted by edges. This does not affect the sizes of the components which we denote by *Ti.*

Consider now the first *j* eigenfunctions *ψi, i* = 1*,* 2*,...,j* , which are linearly independent. In the linear span of *ψi, i* = 1*,...,j* there exists a non-zero function *φ(x)* vanishing at all points *xi, i* = 1*,* 2*,...,m* ≤ *j* − 1*.* On every component *Ti* of the tree where *φ* is not identically zero, it satisfies a Dirichlet condition at at least one point, hence the corresponding Rayleigh quotient is greater than or equal to the first eigenvalue of the Dirichlet-Neumann interval of the same length as the component:

$$\frac{\int\_{T\_i} |\phi'(x)|^2 dx}{\int\_{T\_i} |\phi(x)|^2 dx} \ge \left(\frac{j\pi}{2\mathcal{L}}\right)^2,\tag{13.3}$$

where we have taken into account (12.15) and the fact that the length of *Ti* does not exceed L*/j.* Summing up the contributions from all components where *φ* is not identically zero we get the same estimate for the Rayleigh quotient on the whole tree:

$$\frac{\int\_{T} |\phi'(x)|^2 dx}{\int\_{T} |\phi(x)|^2 dx} \ge \left(\frac{j\pi}{2\mathcal{L}}\right)^2. \tag{13.4}$$

The function *φ* belongs to the linear span of the first *j* eigenfunctions and therefore satisfies

$$\frac{\int\_{T} |\phi'(x)|^2 dx}{\int\_{T} |\phi(x)|^2 dx} \le \lambda\_j(L^{\text{st}}(T)).\tag{13.5}$$

Comparing the last two inequalities we get

$$
\lambda\_j(L^{\text{st}}(\Gamma)) \ge \lambda\_j(L^{\text{st}}(T)) \ge \left(\frac{j\pi}{2\mathcal{L}}\right)^2.
$$

To show uniqueness one may use mathematical induction with respect to *j* . Details can be found in [225].

We have already shown that the estimate is sharp and turns into equality for the equilateral star on *j* edges. An interesting observation is that to cut this star into pieces of length at most L*/j* one needs just one cutting point - the central vertex.

Equality in the estimate (13.2) is attained precisely when the graph is an equilateral star with *j* edges. Such a graph has *j* Neumann vertices of degree one and its first Betti number is zero: *β*<sup>1</sup> = 0 (*χ* = 1). Hence it is natural to expect that, for a fixed graph, estimate (13.2) is not sharp for sufficiently large eigenvalues. For example, if the graph has *N* pendant vertices and *β*<sup>1</sup> *>* 0 independent cycles, then for large enough *j* the estimate (13.2) can be improved as follows:

**Theorem 13.2 (Following Theorem 4.7 from [87])** *Let Γ be a metric graph with* |*N*| ≥ 0 *vertices of degree one endowed with the Neumann condition. Assume that Γ is not a cycle. Then for all j* ≥ 2

$$\lambda\_j(\Gamma) \ge \begin{cases} \left( j - \frac{|N| + \beta\_1}{2} \right)^2 \frac{\pi^2}{\mathcal{L}^2} & \text{if } j \ge |N| + \beta\_1, \\\\ \frac{j^2 \pi^2}{4 \mathcal{L}^2} & \text{otherwise}, \end{cases} \tag{13.6}$$

*where β*<sup>1</sup> *is the first Betti number of the graph.* 

*Proof* If Γ is not a tree, we find an edge whose removal does not disconnect the graph. Let *V* <sup>0</sup> be a vertex to which this edge is incident; since Γ is not a cycle, without loss of generality we may assume its degree is 3 or larger (otherwise this vertex can be absorbed into the edge). We disconnect the edge from this vertex, reducing *β*<sup>1</sup> by one and creating an extra vertex of degree one where we impose the Neumann condition, see Fig. 13.1. We keep standard conditions at *V* <sup>0</sup>. The new graph is not a cycle, as a new vertex of degree 1 has been created. We may therefore repeat the process inductively until we obtain a tree **T** with |*N*| + *β*<sup>1</sup> Neumann vertices.

Since the eigenvalues are reduced at every step, *λk(Γ)* ≥ *λk(***T***)*. It is therefore enough to verify the inequality for trees.

Given a tree **T** we can find an arbitrarily small perturbation under which the *k*-th eigenvalue is simple and its eigenfunction is nonzero at the vertices [82]. In these circumstances the *k*-th eigenfunction has exactly *k* nodal domains [50, 433, 465] (see also [80, Thm. 6.4] for a short proof). Each nodal domain is a subtree **T***<sup>j</sup>* , and with vertex conditions inherited from **T** (plus Dirichlet conditions on the nodal domain boundaries), *λk(***T***)* is the first eigenvalue of the subtree.

There are at most |*N*| subtrees with some Neumann conditions on their pendant vertices. Since these are nodal subtrees (*k >* 1), there are also some pendant vertices with Dirichlet conditions and we can use estimate (12.15) in the form L*j*√*λk* ≥ *π/*2*.* The same conclusion is true if *k* = 1 and the tree has at least one Dirichlet vertex.

**Fig. 13.1** Disconnecting the edge *E*0 from the vertex *V* 0 in the proof of Theorem 13.2. This operation reduces *β*1 by 1 at the expense of increasing the number of Neumann vertices by 1

If *k* ≥ |*N*|, we also have at least *k* − |*N*| subtrees with *only* Dirichlet conditions at the pendant vertices. For such trees the ground state energy satisfies the estimate L*j*√*λ*<sup>1</sup> ≥ *π.* To see this it is enough to realise that the corresponding eigenfunction has a maximum, and the point where it is attained divides the subtree into at least two pieces. Each of the pieces is a tree with at least one Dirichlet vertex, and estimate (12.15) can be used.

Summing up we have

$$\mathcal{L}\sqrt{\lambda\_k(\Gamma)} \ge \sum\_{j=1}^k \mathcal{L}\_j \sqrt{\lambda\_1(\mathbf{T}\_j)} \ge |N|\frac{\pi}{2} + (k - |N|)\pi = \left(k - \frac{|N|}{2}\right)\pi.$$

When *k <* |*N*|, we use estimate (12.15) for each of the *k* nodal subtrees, obtaining Friedlander's bound.
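The bound (13.6) can be tested numerically on the simplest graph with a cycle: a lasso, i.e. a loop of length 1*/*2 with a pendant Neumann edge of length 1*/*2, so that |*N*| = 1, *β*<sup>1</sup> = 1 and L = 1. The following finite-difference sketch is illustrative only (the discretisation and all names are our own, not part of the proof):

```python
import numpy as np

m = 300                                  # subintervals per half; h = 1/(2m)
h = 0.5 / m
n = 2 * m                                # loop: nodes 0..m-1, tail: m..2m-1
A = np.zeros((n, n))
for k in range(m):                       # loop of length 1/2 through node 0
    A[k, (k + 1) % m] = A[(k + 1) % m, k] = 1.0
prev = 0                                 # pendant Neumann edge of length 1/2
for k in range(m, 2 * m):
    A[prev, k] = A[k, prev] = 1.0
    prev = k
Lap = (np.diag(A.sum(axis=1)) - A) / h**2
lam = np.sort(np.linalg.eigvalsh(Lap))

# (13.6) with |N| = 1, beta_1 = 1, total length 1: lambda_j >= (j-1)^2 pi^2
for j in range(2, 6):
    print(j, lam[j - 1], (j - 1) ** 2 * np.pi ** 2)
```

All four bounds hold, the one for *j* = 5 being nearly saturated for this lasso, so a small discretisation tolerance is appropriate when comparing.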

## *13.1.2 Upper Bounds*

Let us look at the upper estimates for the eigenvalues.

**Theorem 13.3 (Following Theorem 4.9 from [87], Inspired by Ariturk [32])** *Let Γ be a connected metric graph with Dirichlet or Neumann conditions at the vertices of degree one and standard conditions elsewhere. If Γ is not a cycle, then for all <sup>k</sup>* <sup>∈</sup> <sup>N</sup>

$$
\lambda\_k(\Gamma) \le \left( k - 2 + \beta\_1 + |D| + \frac{|N| + \beta\_1}{2} \right)^2 \frac{\pi^2}{\mathcal{L}^2}, \tag{13.7}
$$

*where the set of Dirichlet vertices is denoted by D and the set of Neumann vertices of degree one is denoted by N.* 

*Proof* If Γ is not a tree (i.e. if *β*<sup>1</sup> *>* 0) and not a cycle, we repeat the process described at the beginning of the proof of Theorem 13.2, disconnecting *β*<sup>1</sup> edges at vertices and creating a tree **T** with *β*<sup>1</sup> additional Neumann vertices of degree one. At every step the eigenvalue goes down, but not further than the next eigenvalue. The reason is very simple: from any two eigenfunctions of the Laplacian on the graph with the chopped vertex one may always glue together a trial function for the original graph whose Rayleigh quotient does not exceed the maximum of the Rayleigh quotients of the two functions; therefore *λk(Γ)* ≤ *λk*+*β*<sup>1</sup> *(***T***)* and the bound for general graphs follows from the bound for trees, *β*<sup>1</sup> = 0 (see also [80, Thm 3.1.10]).

It is enough to prove the theorem for trees **T** and we shall use induction on the number of edges. The inequality turns into equality for a single edge with either Dirichlet or Neumann or mixed conditions.

Choose an arbitrary vertex *V* <sup>0</sup> of degree *d*<sup>0</sup> and divide the corresponding equivalence class into two classes, denoted by *V*′ and *V*″ and having *d*<sup>0</sup> − 1 and 1 elements respectively. We introduce standard conditions at *V*′ and Dirichlet conditions at *V*″. This process corresponds to chopping off from **T** the branch growing from *V*″ and introducing Dirichlet conditions at its root *V*″. We denote the resulting trees by **T**′ and **T**″ respectively. It is clear that

$$
\lambda\_k(\mathbf{T}) \le \lambda\_{k+1}(\mathbf{T}' \cup \mathbf{T''}),
$$

where the Dirichlet condition is assumed at the root of **T**″. Here **T**′ ∪ **T**″ denotes the union of the two metric trees. This inequality follows from the fact that from any *k* + 1 eigenfunctions on **T**′ ∪ **T**″ one may always build *k* continuous trial functions on the original tree **T**, whose Rayleigh quotients do not exceed *λk*+1*(***T**′ ∪ **T**″*)*.

We denote by L′ and L″ the total lengths of the subtrees **T**′ and **T**″ respectively. The numbers of Dirichlet and Neumann vertices satisfy

$$|D| = |D'| + |D''| - 1, \quad |N| = |N'| + |N''|,$$

as a new Dirichlet condition is introduced at *V*″.

The *(k* + 1*)*-st eigenvalue of **T**′ ∪ **T**″ coincides with an eigenvalue either on **T**′ or on **T**″. Assume without loss of generality that *λk*+1*(***T**′ ∪ **T**″*)* = *λj (***T**″*)* for a certain *j* ; then we get

$$\begin{split} \mathcal{L}\sqrt{\lambda\_{k}(\mathbf{T})} &\leq \mathcal{L}\sqrt{\lambda\_{k+1}(\mathbf{T}'\cup\mathbf{T}'')} \\ &\leq \mathcal{L}'\sqrt{\lambda\_{k-j+1}(\mathbf{T}')} + \mathcal{L}''\sqrt{\lambda\_{j}(\mathbf{T}'')} \\ &\leq \pi \left(k - j + 1 - 2 + |D'| + \frac{|N'|}{2}\right) + \pi \left(j - 2 + |D''| + \frac{|N''|}{2}\right) \\ &= \pi \left(k - 2 + |D| + \frac{|N|}{2}\right). \end{split}$$

This completes the proof.

## *13.1.3 Graphs Realising Extremal Eigenvalues*

For large indices the lower (13.6) and upper (13.7) bounds give the following two-sided estimate:

$$\frac{\pi^2}{\mathcal{L}^2} \left( j - \frac{|N|}{2} - \frac{\beta\_1}{2} \right)^2 \le \lambda\_j(\Gamma) \le \frac{\pi^2}{\mathcal{L}^2} \left( j - 2 + |D| + \frac{|N|}{2} + \frac{3}{2}\beta\_1 \right)^2. \tag{13.8}$$

**Fig. 13.2** Looptree with 4 Dirichlet and 2 Neumann vertices and 2 loops

We already know that the estimates are sharp since there exist graphs realising both the lower and the upper bounds [353]. What is more remarkable is that there exist graphs where both estimates are realised simultaneously. Since the upper bound never coincides with the lower one, highly degenerate eigenvalues are needed to realise both estimates at once. This is possible if one considers looptrees - tree graphs with loops attached to some of the degree one vertices. One assumes that Dirichlet or Neumann conditions are introduced at the other degree one vertices. Carefully adjusting the lengths of the Dirichlet and Neumann pendant edges and of the loops allows one to create graphs with degenerate eigenvalues *λj*min = *λj*min+<sup>1</sup> = ··· = *λj*max , where

$$\begin{split} \lambda\_{j\_{\text{max}}} &= \frac{\pi^2}{\mathcal{L}^2} \left( j\_{\text{min}} - \frac{|N|}{2} - \frac{\beta\_1}{2} \right)^2, \\ \lambda\_{j\_{\text{min}}} &= \frac{\pi^2}{\mathcal{L}^2} \left( j\_{\text{max}} - 2 + |D| + \frac{|N|}{2} + \frac{3}{2} \beta\_1 \right)^2 . \end{split}$$

The construction of such graphs is described in [469]. One of the simplest looptrees is presented in Fig. 13.2.

**Problem 56** Determine the lengths of the edges and loops, so that the eigenvalue estimates (13.8) are sharp for the graph depicted in Fig. 13.2.

## **13.2 Gluing and Cutting Vertices with Standard Conditions**

The results presented here are completely analogous to those proven in Sect. 12.5. Therefore we skip the proofs leaving them for interested readers to work on.

In Theorem 12.9 we assumed that the graph is connected, since the estimate is trivial for non-connected graphs: for graphs with at least two connected components we have formally *λ*1*(Γ)* = *λ*2*(Γ)* = 0. It is natural to drop the connectivity requirement if one is interested in higher eigenvalues:

**Theorem 13.4** *Let Γ be a metric graph and let Γ′ be another metric graph obtained from Γ by joining together two of its vertices, say V* <sup>1</sup> *and V* <sup>2</sup>*. Then the following inequality for the eigenvalues of the standard Laplacian holds:*

$$
\lambda\_n(\Gamma) \le \lambda\_n(\Gamma'). \tag{13.9}
$$

**Problem 57** Use the minmax principle (Proposition 4.19) for general *n* to prove Theorem 13.4.
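A concrete illustration of Theorem 13.4: joining the two endpoint vertices of an interval of total length 1 turns it into a loop, and the eigenvalues can only go up. The following sketch (our own finite-difference discretisation, not from the text) compares the two spectra:

```python
import numpy as np

N = 200                                   # nodes on the interval
h = 1.0 / (N - 1)                         # both graphs have total length 1

# Neumann interval: path graph on N nodes
Ap = np.diag(np.ones(N - 1), 1)
Ap = Ap + Ap.T
Lp = (np.diag(Ap.sum(axis=1)) - Ap) / h**2

# loop obtained by joining the two endpoint vertices: cycle on N - 1 nodes
M = N - 1
Ac = np.zeros((M, M))
for k in range(M):
    Ac[k, (k + 1) % M] = Ac[(k + 1) % M, k] = 1.0
Lc = (np.diag(Ac.sum(axis=1)) - Ac) / h**2

lam_interval = np.sort(np.linalg.eigvalsh(Lp))
lam_loop = np.sort(np.linalg.eigvalsh(Lc))
print(lam_interval[:4])                   # approx 0, pi^2, (2 pi)^2, (3 pi)^2
print(lam_loop[:4])                       # approx 0, (2 pi)^2, (2 pi)^2, (4 pi)^2
```

Every *λn* of the interval is bounded by the corresponding *λn* of the loop, in accordance with (13.9); equality occurs along every second eigenvalue in the continuum limit.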

The above theorem can be reformulated in terms of cutting vertices instead of gluing them. In particular, Theorem 12.16 can be generalised as follows:

**Theorem 13.5** *Let Γ be a connected metric graph and let Γ*∗ *be another graph obtained from Γ by cutting one of its edges at an internal point x*∗*, producing two new vertices V* <sup>1</sup><sup>∗</sup> *and V* <sup>2</sup>∗*. Then the eigenvalues of the standard Laplacian satisfy the following inequality*

$$
\lambda\_n(L^{\rm st}(\Gamma)) \ge \lambda\_n(L^{\rm st}(\Gamma^\*)).\tag{13.10}
$$

**Problem 58** Prove Theorem 13.5. How to modify the second statement in Theorem 12.16 in order to cover higher eigenvalues?

One may conclude that the intuition acquired while investigating the spectral gap can be applied to higher eigenvalues of the standard Laplacian. It is straightforward to include Schrödinger operators with standard conditions, but one has to be careful when the vertex conditions are not standard. To understand what should be modified we look first at scaling-invariant vertex conditions, leaving the most general conditions for the last section.

## **13.3 Gluing Vertices with Scaling-Invariant Conditions**

## *13.3.1 Scaling-Invariant Conditions Revisited*

The ideas previously developed can be applied to vertices with arbitrary vertex conditions. Let us start our analysis by discussing gluing of vertices with scaling-invariant conditions. Recall that scaling-invariant vertex conditions at a vertex *V* of degree *d* correspond to a parameter *S* in (3.21) which is not only unitary but also Hermitian. Every such *d* × *d* matrix *S* has only the eigenvalues −1 and 1, so that the corresponding eigensubspaces together span the space C*<sup>d</sup>* :

$$P\_{-1} + P\_{1} = \mathbb{I}\_{\mathbb{C}^d}.$$

The vertex condition can be written as two projectors (3.33). One may say that scaling-invariant conditions are a combination of Dirichlet and Neumann conditions on two mutually orthogonal subspaces. Denoting the eigensubspace of *S* associated with 1 by D, the same vertex conditions (3.33) can be written as

$$\begin{cases} \begin{aligned} P\_{\mathcal{D}}^{\perp} \vec{u} &= 0, \\ P\_{\mathcal{D}} \partial \vec{u} &= 0. \end{aligned} \end{cases} $$

The Dirichlet data at the vertex lie in the subspace D, while the Neumann data lie in the orthogonal complement D⊥:

$$\begin{cases} \vec{u} \in \mathcal{D}, \\\\ \partial \vec{u} \in \mathcal{D}^{\perp}. \end{cases} \tag{13.11}$$

Note that the vertex conditions are properly connecting if and only if neither the Neumann subspace D nor the Dirichlet subspace D<sup>⊥</sup> contains a vector from the standard basis in C*<sup>d</sup>* .

Assume for example that <sup>D</sup> contains the vector *<sup>e</sup> <sup>j</sup>* from the standard basis in <sup>C</sup>*<sup>d</sup> .* Then every vector from the orthogonal complement D<sup>⊥</sup> has zero *j* -th component and therefore condition (13.11) implies that

$$
\partial u(x\_j) = 0.
$$

At the same time no restriction on *u(xj )* is imposed by the requirement *u* ∈ D*.* In other words, the limit values *u(xj )* and *∂u(xj )* are not related to the other limit values. The vertex can be split into two vertices, implying that such vertex conditions are not properly connecting.

The case where D<sup>⊥</sup> contains a vector from the standard basis is completely similar - the roles of function values and normal derivatives are interchanged.

Two examples of properly connecting scaling-invariant conditions are the conditions of *one-dimensional type*, where the Neumann subspace D is spanned by a single vector *a* with non-zero coordinates, and the *hyperplanar* conditions, where D is the hyperplane orthogonal to a vector *b* with non-zero coordinates.


If at least one of the coordinates in *a* or *b* is zero, then the orthogonal subspace contains one of the vectors from the standard basis in C*<sup>d</sup>* and therefore the corresponding vertex condition is not properly connecting.
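The criterion above is easy to check mechanically. The following sketch (the helpers `projector` and `properly_connecting` are our own, illustrative names) tests whether the Neumann subspace D = span(*B*) or its orthogonal complement contains a standard basis vector:

```python
import numpy as np

def projector(B):
    """Orthogonal projector onto the column span of B
    (columns assumed linearly independent)."""
    Q, _ = np.linalg.qr(B)
    return Q @ Q.conj().T

def properly_connecting(B):
    """True iff neither span(B) (the Neumann subspace D) nor its
    orthogonal complement contains a standard basis vector."""
    d = B.shape[0]
    P = projector(B)
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        p = P @ e
        if np.allclose(p, e) or np.allclose(p, 0):
            return False                  # e_j in D, resp. in D-perp
    return True

# one-dimensional condition with a = (1, 1, 1): properly connecting
print(properly_connecting(np.array([[1.0], [1.0], [1.0]])))   # True
# a = (1, 1, 0): e_3 lies in D-perp, so the vertex splits
print(properly_connecting(np.array([[1.0], [1.0], [0.0]])))   # False
```

The second example reproduces the splitting phenomenon just described: the third endpoint decouples from the other two.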

## *13.3.2 Gluing Vertices*

Consider two vertices *V* <sup>1</sup> and *V* <sup>2</sup> with scaling-invariant conditions. Let us denote by D<sup>1</sup> <sup>⊂</sup> <sup>C</sup>*d*<sup>1</sup> and D<sup>2</sup> <sup>⊂</sup> <sup>C</sup>*d*<sup>2</sup> the corresponding Neumann subspaces and assume that they are canonically embedded into C*<sup>d</sup>* <sup>=</sup> <sup>C</sup>*d*1+*d*<sup>2</sup> by assigning zero to all coordinates not related to the corresponding subvertex.

We wish to connect these vertices into one common vertex *V* by assigning scaling-invariant conditions. In other words, we need to select a new Neumann subspace D′ <sup>⊂</sup> <sup>C</sup>*<sup>d</sup>* <sup>=</sup> <sup>C</sup>*d*1+*d*<sup>2</sup> *.* The subspace D<sup>1</sup> <sup>+</sup> <sup>D</sup><sup>2</sup> cannot be used directly without modification - the corresponding vertex conditions are by construction not properly connecting. In principle, the subspace D′ can be chosen arbitrarily, but it is natural to look for a procedure giving D′ as a certain modification of D<sup>1</sup> + D2*.*

There are two obvious modifications:

#### 1. **Gluing by restriction**

Select a subspace E ⊂ D<sup>1</sup> + D<sup>2</sup> and define the new Neumann subspace by requiring that all its elements are orthogonal to E:

$$\mathcal{D}' = \left\{ \vec{u} \in \mathcal{D}\_1 + \mathcal{D}\_2 \, : \, \vec{u} \perp \mathcal{E} \right\}. \tag{13.12}$$

#### 2. **Gluing by extension**

Select a subspace F ⊂ *(*D<sup>1</sup> + D2*)*<sup>⊥</sup> and define the new Neumann subspace by adding F to D<sup>1</sup> + D2:

$$\mathcal{D}' = \mathcal{D}\_1 + \mathcal{D}\_2 + \mathcal{F}.\tag{13.13}$$

One has to be careful selecting the subspaces E and F and check that the new vertex conditions are properly connecting.

It is straightforward to generalise the developed methods for the case where more than two vertices are joined together.

**Problem 59** How can gluing of vertices with scaling-invariant conditions be described when several vertices are joined together?

**Problem 60** Is it possible to glue vertices by combining extension and restriction procedures? Provide explicit examples to support your answer.

Let us discuss how these modifications work when applied to one-dimensional and hyperplanar conditions.

#### **Gluing Vertices with One-Dimensional Vertex Conditions**

We plan to preserve the character of vertex conditions, so that new vertex conditions are also of one-dimensional type.

Assume that D*<sup>j</sup>* is spanned by *a <sup>j</sup> .* Then D<sup>1</sup> + D<sup>2</sup> is spanned by *a* <sup>1</sup> and *a*2. Obviously dim*(*D<sup>1</sup> + D2*)* = 2 and the **first** gluing method should be used. We get vertex conditions of one-dimensional type if we select a vector *a* ∈ D<sup>1</sup> + D2*.* Every such vector is a linear combination of *a* <sup>1</sup> and *a* <sup>2</sup> but should be different from the basis vectors:

$$
\vec{a} = h\_1 \vec{a}\_1 + h\_2 \vec{a}\_2, \quad h\_1 h\_2 \neq 0.
$$

We get vertex conditions of one-dimensional type with

$$\mathcal{D}' \text{ spanned by } \vec{a} = h\_1 \vec{a}\_1 + h\_2 \vec{a}\_2.$$

Standard conditions are a special case of one-dimensional scaling-invariant conditions with *a* = *(*1*,* 1*,...,* 1*).* Therefore when gluing standard vertices it is natural to choose *h*<sup>1</sup> = *h*<sup>2</sup> = 1 so that

$$\begin{aligned} \vec{a}\_1 &= (\underbrace{1, \dots, 1}\_{d\_1}, 0, \dots, 0) \in \mathbb{C}^{d}, \\ \vec{a}\_2 &= (0, \dots, 0, \underbrace{1, \dots, 1}\_{d\_2}) \in \mathbb{C}^{d}, \\ \vec{a} &= (1, 1, \dots, 1) \in \mathbb{C}^d. \end{aligned}$$

#### **Gluing Vertices with Hyperplanar Vertex Conditions**

Gluing hyperplanar vertices we want to preserve their hyperplanarity. Assume D*<sup>j</sup>* = {*<sup>u</sup>* <sup>∈</sup> <sup>C</sup>*dj* : *<sup>u</sup>* <sup>⊥</sup> *<sup>b</sup> <sup>j</sup>* }*,* dim <sup>D</sup>*<sup>j</sup>* <sup>=</sup> *dj* <sup>−</sup> <sup>1</sup>*.* Then dim *(*D<sup>1</sup> <sup>+</sup> <sup>D</sup>2*)* <sup>=</sup> *<sup>d</sup>* <sup>−</sup> <sup>2</sup> :

$$\mathcal{D}\_1 + \mathcal{D}\_2 = \left\{ \vec{u} \in \mathbb{C}^d \, : \, \langle \vec{u}, \vec{b}\_1 \rangle = 0 = \langle \vec{u}, \vec{b}\_2 \rangle \right\}. \tag{13.14}$$

We need to use the **second** gluing method to increase the dimension of the subspace. One has to select a single vector *b* from the linear span of {*b* <sup>1</sup>*, b* <sup>2</sup>} without zero coordinates. Every such vector is of the form

$$
\vec{b} = h\_1 \vec{b}\_1 + h\_2 \vec{b}\_2, \quad h\_1 h\_2 \neq 0.
$$

The corresponding conditions are given by

$$\mathcal{D}' = \left\{ \vec{u} \in \mathbb{C}^d : \vec{u} \perp \vec{b} = h\_1 \vec{b}\_1 + h\_2 \vec{b}\_2 \right\}. \tag{13.15}$$

An important class of hyperplanar conditions is given by the vectors *b <sup>j</sup>* = *(*1*,* <sup>1</sup>*,...,* <sup>1</sup>*)* <sup>∈</sup> <sup>C</sup>*dj .* The situation is similar to standard conditions, with only one natural choice *<sup>b</sup>* <sup>=</sup> *(*1*,* <sup>1</sup>*,...,* <sup>1</sup>*)* <sup>∈</sup> <sup>C</sup>*<sup>d</sup>* (see [450]).
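The dimension count behind this gluing can be verified directly. In the following sketch (our own illustration, with *d*<sup>1</sup> = 2, *d*<sup>2</sup> = 3 and *h*<sup>1</sup> = *h*<sup>2</sup> = 1) the subspace D<sup>1</sup> + D<sup>2</sup> has dimension *d* − 2 = 3, while the glued subspace D′ = {*u* ⊥ *b*} has dimension *d* − 1 = 4 and contains D<sup>1</sup> + D2:

```python
import numpy as np

def null_space(A):
    """Orthonormal basis of the kernel of A (columns of the result)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > 1e-12).sum())
    return Vt[rank:].T

d = 5                                     # d1 = 2, d2 = 3 glued into degree 5
b1 = np.array([1., 1., 0., 0., 0.])       # hyperplanar vector of V^1, embedded
b2 = np.array([0., 0., 1., 1., 1.])       # hyperplanar vector of V^2, embedded

D12 = null_space(np.vstack([b1, b2]))     # D_1 + D_2, dimension d - 2
b = b1 + b2                               # h1 = h2 = 1; no zero coordinates
Dp = null_space(b[None, :])               # D',  dimension d - 1
print(D12.shape[1], Dp.shape[1])          # 3 4
print(np.allclose(D12.T @ b, 0))          # True: D_1 + D_2 lies inside D'
```

Since *b* has no zero coordinates, the glued hyperplanar condition is properly connecting, in line with the discussion above.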

## *13.3.3 Spectral Gap and Gluing Vertices with Scaling-Invariant Conditions*

Let us generalise Theorem 13.4 by allowing scaling-invariant conditions and nonzero potentials (considering Schrödinger operators instead of Laplacians):

**Theorem 13.6** *Let Γ be a metric graph with selected vertices V* <sup>1</sup> *and V* <sup>2</sup>*, and let Γ′ be the metric graph obtained from Γ by joining the vertices V* <sup>1</sup> *and V* <sup>2</sup> *into one vertex V. Let the vertex conditions (determined by the matrices* **S***,* **S**′ *) on Γ and Γ′ be scaling-invariant and obtained from each other either by restriction (13.12) or by extension (13.13). Assume that the potential q is absolutely integrable, q* ∈ *L*1*(Γ). Then the eigenvalues of the Schrödinger operators on Γ and Γ′ satisfy the following inequalities:*

*1. If the gluing is given by restriction as described in (13.12), then* 

$$
\lambda\_j(L\_q^{\mathbf{S}}(\Gamma)) \le \lambda\_j(L\_q^{\mathbf{S'}}(\Gamma')).\tag{13.16}
$$

*2. If the gluing is given by extension as described in (13.13), then* 

$$
\lambda\_j(L\_q^{\mathbf{S}}(\Gamma)) \ge \lambda\_j(L\_q^{\mathbf{S'}}(\Gamma')).\tag{13.17}
$$

*Proof* The presence of the potential does not affect the proof much, since the quadratic forms for the Schrödinger operators on Γ and Γ′ are given by the same expression (see (11.9)):

$$\int\_{\Gamma} |u'(x)|^2 dx + \int\_{\Gamma} q(x) |u(x)|^2 dx,$$

provided the functions on the graphs Γ and Γ′ are identified. For both forms the function *u* should belong to the Sobolev space *W*<sup>1</sup> <sup>2</sup> *(En)* on every edge. The difference between the form domains lies in the conditions the function *u* satisfies at the vertices affected by the gluing. Let *u* denote the vector of function values at the vertices *V* <sup>1</sup> and *V* <sup>2</sup> or at the vertex *V* . Then the vertex conditions for Γ and Γ′ are

$$
\vec{u} \in \mathcal{D}\_1 + \mathcal{D}\_2, \quad \vec{u} \in \mathcal{D}',
$$

respectively.

If the vertices are glued by restriction then obviously

$$
\vec{u} \in \mathcal{D}' \Rightarrow \vec{u} \in \mathcal{D}\_1 + \mathcal{D}\_2,
$$

*i.e.* the form domain corresponding to Γ′ is smaller, hence all eigenvalues are larger due to the minmax principle (Proposition 4.19).

In the case of gluing by extension, the quadratic form associated with Γ′ has a larger domain, hence the eigenvalues are smaller, leading to (13.17).

In particular, it follows that when gluing vertices with standard conditions the eigenvalues may only grow, while when gluing vertices with hyperplanar conditions the eigenvalues may only decrease [450].

## **13.4 Gluing Vertices with General Vertex Conditions**

Let us briefly discuss the most general case of gluing vertices. Consider a quantum graph with selected vertices *V <sup>j</sup> , j* = 1*,* 2*,* of degree *dj .* Assume, following Sect. 3.8.2, that the vertex conditions are determined by selecting subspaces D*<sup>j</sup>* ⊂ C*dj* and Hermitian matrices *Aj* acting in D*<sup>j</sup> .* Assume that after gluing the vertices into one vertex *V* the vertex conditions are given by a subspace D′ ⊂ C*<sup>d</sup>* = C*d*1+*d*<sup>2</sup> and a Hermitian matrix *A*′*.* We are interested in selecting conditions that guarantee that the eigenvalues behave monotonically upon gluing. The answer can be given in terms of the quadratic forms ⟨*u, Au*⟩<sup>D</sup> associated with the vertex conditions.

We are going to say that a quadratic form *a(u, u)* with the domain *Da subordinates* a quadratic form *b(u, u)* with the domain *Db* if and only if:


Under the same conditions we are going to say that *b* is subordinated by *a.* Both conditions above are important, since the two quadratic forms can be compared only if one of the forms is defined on the intersection of the form domains.

**Theorem 13.7** *Let L***S***(A) <sup>q</sup> (Γ) be a Schrödinger operator on a metric graph Γ with vertex conditions at the vertices V* <sup>1</sup> *and V* <sup>2</sup> *determined by the subspaces* D<sup>1</sup> *and* D<sup>2</sup> *and Hermitian matrices A*<sup>1</sup> *and A*<sup>2</sup> *respectively. Let Γ′ be the metric graph obtained from Γ by joining together the two vertices, introducing vertex conditions determined by the subspace* D′ *and the Hermitian matrix A*′*. Then the eigenvalues of the Schrödinger operators on Γ and Γ′ satisfy the following inequalities:*

*1. If the quadratic form* ⟨*u, A*′*u*⟩D′ *for the joined vertex is subordinated to the sum of the quadratic forms* ⟨*u, A*1*u*⟩D<sup>1</sup> + ⟨*u, A*2*u*⟩D<sup>2</sup> *corresponding to the vertices to be joined, then*

$$
\lambda\_j(L\_q^{\mathbf{S}(A)}(\Gamma)) \le \lambda\_j(L\_q^{\mathbf{S}(A')}(\Gamma')).\tag{13.18}
$$

*2. If the sum of the quadratic forms* ⟨*u, A*1*u*⟩D<sup>1</sup> + ⟨*u, A*2*u*⟩D<sup>2</sup> *associated with the vertices V* <sup>1</sup> *and V* <sup>2</sup> *is subordinated by the quadratic form* ⟨*u, A*′*u*⟩D′ *for the joined vertex, then*

$$
\lambda\_j(L\_q^{\mathbf{S}(A)}(\Gamma)) \ge \lambda\_j(L\_q^{\mathbf{S}(A')}(\Gamma')).\tag{13.19}
$$

Defining sums of quadratic forms above we assume that each of the terms is canonically extended to the common domain D<sup>1</sup> + D<sup>2</sup> as

$$\langle \vec{u}, A\_j \vec{u} \rangle\_{\mathcal{D}\_1 + \mathcal{D}\_2} = \langle P\_{\mathcal{D}\_j} \vec{u}, A\_j P\_{\mathcal{D}\_j} \vec{u} \rangle\_{\mathcal{D}\_j}, \quad j = 1, 2.$$

The theorem can be proven by repeating the arguments used to prove Theorem 13.6. The only difference is that one should not only take into account the conditions determining the set of admissible functions, but also compare the corresponding quadratic forms, which satisfy suitable inequalities. Observe that the theorem gives only sufficient conditions for the eigenvalues not to increase/not to decrease; moreover, we do not study under which conditions the eigenvalues are preserved.

**Problem 61** Prove Theorem 13.7 using the minmax principle.

**Problem 62** Reformulate Theorem 13.7 for the case of delta conditions at the vertices.

**Problem 63** Generalise Theorem 13.7 for the case where more than two vertices are glued together. Is it always possible to consider such gluing as a sequence of pair-wise gluings?
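
The effect of gluing on the eigenvalues can be observed in a finite-dimensional model. The sketch below is only a discrete analogue (a weighted discrete Laplacian, not the metric-graph operator itself): gluing two vertices with standard-type conditions corresponds to restricting the quadratic form to the subspace of functions taking equal values at those vertices, and the eigenvalues then cannot decrease and interlace with the original ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weighted discrete Laplacian: a finite-dimensional stand-in for the
# quadratic form of a Laplacian with standard-type vertex conditions.
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W          # positive semi-definite

# Gluing vertices 0 and 1: restrict the form to {u : u[0] = u[1]}.
B = np.zeros((n, n - 1))
B[0, 0] = B[1, 0] = 1.0                 # common value at the glued vertex
B[2:, 1:] = np.eye(n - 2)
Q, _ = np.linalg.qr(B)                  # orthonormal basis of the subspace

lam = np.sort(np.linalg.eigvalsh(L))            # eigenvalues before gluing
mu = np.sort(np.linalg.eigvalsh(Q.T @ L @ Q))   # eigenvalues after gluing

# Restricting the form domain cannot lower any eigenvalue,
# and the usual interlacing holds as well.
assert np.all(mu >= lam[:-1] - 1e-10)
assert np.all(mu <= lam[1:] + 1e-10)
```

This is exactly the min-max mechanism behind the theorem: the glued operator minimises the same form over a smaller set of admissible functions.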

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 14 Ambartsumian Type Theorems**

The spectral estimates obtained so far will be applied in this chapter to prove a direct analogue (for differential operators on graphs) of the celebrated Ambartsumian theorem. The original theorem from 1929 [29] states that the spectra of the Schrödinger and Laplace operators on a compact interval coincide if and only if the potential in the Schrödinger equation is identically equal to zero, provided Neumann boundary conditions are assumed at the endpoints. This theorem laid the ground for inverse spectral theory in dimension one. Borg-Marchenko theory [105, 106, 379, 381] and later Faddeev-Gelfand-Levitan-Marchenko inverse spectral theory [219, 233, 234, 380] in dimension one grew from this wonderful theorem. The reason I call it *wonderful* is that it is rather unique: if the potential in the Schrödinger equation is not identically equal to zero, then to reconstruct it one needs two spectra, for example the spectra of the Schrödinger equation with Dirichlet-Dirichlet and Dirichlet-Neumann conditions at the endpoints. In Ambartsumian's case the potential is determined by just a single spectrum. Of course the zero potential which is "determined" is exceptional.

Consider a Schrödinger equation on a finite metric graph. There is little sense in including the case of a non-zero magnetic potential, since it is spectrally equivalent to a special change of vertex conditions. Hence the Schrödinger operator (as any quantum graph) is determined by three parameters:

- the metric graph $\Gamma$;
- the potential $q$;
- the vertex conditions.

Among these parameters the following particular values are going to play a very special role in our studies:

- the metric graph $\Gamma$ is a single interval;
- the potential $q$ is identically equal to zero;
- the vertex conditions are standard.

These particular parameters will be called **exceptional**. The reason for this name is not only that precisely these parameters appear in the Ambartsumian theorem and its generalisations; these values of the parameters are also the most natural to assume if very little is known about the quantum graph. For example, let us fix the total length of the metric graph; then the interval $[0, \mathcal L]$ is the graph with the simplest topology. Moreover, a single interval minimises the spectral gap for the standard Laplacian. Zero potential is essentially the only potential that can be prescribed without any knowledge of the metric graph. The same argument applies to standard vertex conditions: if nothing is known about the geometry and topology of the graph, then it is natural to require that the functions from the domain are continuous at the vertices. Assuming this and taking into account that the quadratic form is given by the Dirichlet integral $\int_\Gamma |u'(x)|^2\, dx$, we get standard vertex conditions for the functions from the domain of the operator. Our studies will show once more that these parameters are not only natural, but possess exceptional properties in relation to the inverse problem.
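
The claim that a single interval minimises the spectral gap can be illustrated by comparing the interval with an equilateral 3-star of the same total length. The star's gap below comes from the standard secular analysis sketched in the comments (writing eigenfunctions edgewise and imposing the vertex conditions); the numbers are only an illustration of the general estimate.

```python
import math

# Spectral gap of the standard (Neumann) Laplacian on an interval of
# total length L: the first nontrivial eigenvalue is (pi/L)^2.
L = 1.0
gap_interval = (math.pi / L) ** 2

# Equilateral 3-star of the same total length: three edges of length
# a = L/3, Neumann at the loose ends, standard conditions at the centre.
# Writing the eigenfunctions as c_e cos(kx), measured from each loose
# end, the vertex conditions force sin(ka) = 0 or cos(ka) = 0, so the
# smallest positive k is pi/(2a) = 3*pi/(2L).
a = L / 3
gap_star = (math.pi / (2 * a)) ** 2

assert gap_interval < gap_star          # the interval has the smaller gap
```

The ratio of the two gaps is $(3/2)^2 = 2.25$, independently of the chosen total length.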

The inverse problem for quantum graphs consists of reconstructing all three parameters from certain spectral data. In this chapter we consider the smallest possible set of spectral data: just the spectrum of the Schrödinger operator. We restrict our studies to finite compact quantum graphs, hence the spectrum is a set of eigenvalues satisfying Weyl's asymptotics. This set is usually not enough to recover all three members of the triple, but we shall systematically, step by step, consider particular problems where some of these parameters are fixed.

It is natural to start from the case where two of the parameters are fixed and only one parameter varies, and to continue with the case where just one of the parameters is fixed. Each time a parameter is fixed, we assume that it coincides with the corresponding exceptional one. Several uniqueness theorems will be proven, while counterexamples will be presented in the cases where no uniqueness is observed.

We are going to follow the approach developed in the series of papers [97, 356, 357]. We shall continue our studies in the next chapter without assuming that any of the members of the triple is fixed to coincide with the exceptional one.

## **14.1 Two Parameters Fixed, One Parameter Varies**

In this section we describe uniqueness theorems for the Schrödinger operator $L_q^{\mathbf S}(\Gamma)$ in the case where two out of three parameters are not just fixed, but coincide with the exceptional ones described above.

### *14.1.1 Zero Potential Is Exceptional: Classical Ambartsumian Theorem*

Our aim in this section is to prove the classical Ambartsumian theorem, which can be formulated (in our notation) as follows.

**Theorem 14.1** *Assume that the potential $q$ is absolutely integrable: $q \in L_1(I)$. The spectrum of the standard Schrödinger operator on the interval $I = [0, \ell]$ coincides with the spectrum of the standard Laplace operator*

$$
\lambda_n(L_q^{\mathrm{st}}(I)) = \lambda_n(L^{\mathrm{st}}(I)) \quad \left(= \frac{\pi^2}{\ell^2} (n-1)^2\right), \quad n = 1, 2, 3, \dots,
$$

*if and only if the potential q is equal to zero almost everywhere* 

$$q(x) \equiv 0.$$

To prove this theorem we shall not follow the original article by V. Ambartsumian [29], but will use an approach based on the integral transformation operator instead [234, 375]. The key ingredient is the existence of the integral kernel *K(*·*,* ·*)* connecting solutions to the Schrödinger and Laplace equations.

**Theorem 14.2** *Suppose that $\varphi(x, \lambda)$ is the solution to the equation*

$$-\varphi'' + q(x)\varphi = k^2 \varphi,\tag{14.1}$$

*satisfying the initial condition* 

$$
\varphi(0,\lambda) = 1,\ \varphi\_x'(0,\lambda) = 0.\tag{14.2}
$$

*Then there exists a unique function K(*·*,* ·*) having locally integrable first derivatives with respect to each of the variables, such that*<sup>1</sup>

$$\varphi(x, \lambda) = \cos kx + \int_0^x K(x, t) \cos kt \, dt,\tag{14.3}$$

$$K(x, x) = \frac{1}{2} \int_0^x q(t)\,dt. \tag{14.4}$$

<sup>1</sup> Here $\cos kx$ is the unique solution to the differential equation with zero potential satisfying the initial conditions (14.2), so that Eq. (14.3) connects this function and $\varphi(x, \lambda)$.

*Proof* We do not give here a detailed proof of this theorem, but just mention that the function *K(*·*,* ·*)* can be found as a solution to the Goursat problem for the wave equation

$$-\frac{\partial^2 K(x,t)}{\partial x^2} + q(x)K(x,t) = -\frac{\partial^2 K(x,t)}{\partial t^2} \tag{14.5}$$

in the region 0 ≤ *t* ≤ *x* satisfying the boundary conditions

$$\begin{cases} K(x, x) = \dfrac{1}{2} \displaystyle\int_0^x q(t)\, dt, \\ \dfrac{\partial K(x, 0)}{\partial t} = 0, \end{cases} \tag{14.6}$$

Do not proceed further without solving the following problem, which will be important for our future considerations:

**Problem 64** Show that if *K(*·*,* ·*)* is a solution to the wave equation (14.5) satisfying the boundary conditions (14.6), then formula (14.3) provides a solution to the differential equation (14.1) satisfying the initial conditions (14.2).

A point $\lambda$ belongs to the spectrum of the Schrödinger operator on $[0, \ell]$ with Neumann boundary conditions if and only if $\varphi'_x(\ell, \lambda) = 0$,<sup>2</sup> which can be written using (14.3) as

$$-k\sin k\ell + K(\ell, \ell)\cos k\ell + \int_0^\ell K_x(\ell, t)\cos kt\,dt = 0. \tag{14.7}$$

We already know that the eigenvalues $\lambda_n$ satisfy Weyl's asymptotics $k_n - \frac{\pi}{\ell} n = \mathcal O\big(\frac 1n\big)$ (see (4.25)). Let us calculate the first correction term putting

$$
k_{n+1} = \frac{\pi}{\ell}n + \frac{a_0}{n} + \frac{\gamma_n}{n},
$$

with $\gamma_n \xrightarrow[n \to \infty]{} 0$. Substituting this representation into (14.7) we get

$$\begin{aligned} -\Big(\frac{\pi}{\ell}n + \frac{a_0}{n} + \frac{\gamma_n}{n}\Big)(-1)^n \ell \left[\frac{a_0}{n} + \frac{\gamma_n}{n} + \mathcal{O}(1/n^2)\right] + K(\ell, \ell)(-1)^n \left[1 + \mathcal{O}(1/n^2)\right] \\ + \int_0^\ell K_x(\ell, t) \cos k_{n+1} t \, dt = 0. \end{aligned}$$

<sup>2</sup> The condition $\varphi'_x(0, \lambda) = 0$ is fulfilled due to the representation (14.3).

Taking into account that $K_x(\ell, \cdot) \in L_1(0, \ell)$ and $k_n \to \infty$ we conclude that the integral term tends to zero

$$\int_0^\ell K_x(\ell, t) \cos k_n t \, dt \xrightarrow[n \to \infty]{} 0.$$

It follows that

$$a\_0 = \frac{K(\ell, \ell)}{\pi} = \frac{\int\_0^\ell q(t)dt}{2\pi}.$$

In other words, we have proven the asymptotics

$$k\_n(L\_q^{\rm st}(I)) = \frac{\pi}{\ell} \left( n + \frac{\int\_0^\ell q(t)dt}{\ell} \frac{1}{2} \left( \frac{\ell}{\pi} \right)^2 \frac{1}{n} + o(1/n) \right), \tag{14.8}$$

in the case where $\Gamma$ is a single interval of length $\ell$. The term $\frac 1\ell \int_0^\ell q(t)\,dt$ determining the first correction is nothing else than the mean value of the potential, and it is not surprising that it appears here, since adding a constant to the potential shifts all eigenvalues by the same constant.
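
The last remark is easy to test in a crude finite-difference model of the Neumann Schrödinger operator (a numerical sketch only; the grid, sample potential and shift below are arbitrary choices): the discrete Hamiltonian with potential $q + c$ is $H + cI$, so every eigenvalue moves by exactly $c$.

```python
import numpy as np

# Crude finite-difference model of the Neumann Schrödinger operator on
# [0, ell]; grid size, potential and shift are arbitrary sample choices.
ell, N = 1.0, 400
h = ell / N
x = (np.arange(N) + 0.5) * h                     # midpoint grid
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1.0                        # Neumann ends
K /= h ** 2

q = np.sin(3 * x)                                # sample potential
c = 2.5                                          # constant shift

eig_q = np.linalg.eigvalsh(K + np.diag(q))
eig_qc = np.linalg.eigvalsh(K + np.diag(q + c))

# H + cI has eigenvalues lambda_n + c: the whole spectrum is shifted.
assert np.allclose(eig_qc, eig_q + c)
```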

*Proof of Theorem 14.1* Assume that the spectrum of the standard Schrödinger and Laplace operators coincide

$$\lambda\_n(L\_q^{\mathrm{st}}(I)) = \left(\frac{\pi}{\ell}(n-1)\right)^2 = \lambda\_n(L^{\mathrm{st}}(I)), \ n = 1, 2, \dots$$

From the asymptotic formula (14.8) we conclude that

$$\int\_{0}^{\ell} q(t)dt = 0,\tag{14.9}$$

because the remainder term in formula (14.8) is *o(*1*/n)*, not O*(*1*/n)*. Therefore the function *u(x)* ≡ 1 is a minimiser for the Rayleigh quotient

$$\frac{\int_0^\ell |u'(x)|^2 dx + \int_0^\ell q(x) |u(x)|^2 dx}{\int_0^\ell |u(x)|^2 dx} = 0,$$

since the second integral in the numerator vanishes due to (14.9). Remembering that *λ*<sup>1</sup> = 0 is the unique lowest eigenvalue, we conclude that the function *u* ≡ 1 satisfies the differential equation

$$\underbrace{-u''(x)}_{=0} + q(x)\underbrace{u(x)}_{=1} = 0,$$

implying that *q(x)* ≡ 0*.*

To prove the theorem it was crucial that the functions from the domains of the Schrödinger and Laplace operators satisfy Neumann conditions at the endpoints.

To prove that $q(x) \equiv 0$ it is enough to require that $\lambda_1(L_q^{\mathrm{st}}(I)) = 0$ and $\lambda_n(L_q^{\mathrm{st}}(I)) - \lambda_n(L^{\mathrm{st}}(I)) = o\big(\frac 1n\big)$.
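
The heart of the proof, that a mean-zero potential not identically zero pushes the lowest Neumann eigenvalue strictly below zero (the constant $u \equiv 1$ has Rayleigh quotient $0$ but is not an eigenfunction), can be observed in the same kind of finite-difference sketch; the grid size and the sample potential are arbitrary choices.

```python
import numpy as np

ell, N = 1.0, 600
h = ell / N
x = (np.arange(N) + 0.5) * h
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1.0
K /= h ** 2                                  # Neumann FD Laplacian

q = 5 * np.cos(2 * np.pi * x / ell)          # mean zero, not identically zero
assert abs(q.sum()) < 1e-9                   # mean-zero on the symmetric grid

lam = np.linalg.eigvalsh(K + np.diag(q))
# u = 1 gives Rayleigh quotient ~0 but is not an eigenvector,
# so the lowest eigenvalue is strictly negative.
assert lam[0] < -0.05
```

A second-order perturbation estimate gives $\lambda_1 \approx -25/(8\pi^2) \approx -0.32$ for this sample potential, consistent with the computed value.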

### *14.1.2 Interval-Graph Is Exceptional: Geometric Version of Ambartsumian Theorem for Standard Laplacians*

Our goal in this section is to prove that the spectrum of the standard Laplacian on a metric graph coincides with the spectrum of the standard (Neumann) Laplacian on an interval if and only if the graph is the interval. In fact we are going to prove a much stronger result, namely that the spectral gaps of the standard Laplacians on a metric graph and on an interval of the same total length are equal if and only if the graph is the interval. The result holds only if we agree to remove all vertices of degree two and therefore to identify a chain of coupled intervals with one interval whose length equals the sum of the lengths in the chain.<sup>3</sup> It is interesting to note that the theorem we are going to prove does not require that **all** eigenvalues coincide (as in the classical Ambartsumian theorem), but just the first two (the ground state and $\lambda_2$). In fact the requirement that the ground states are the same is fulfilled automatically for standard Laplacians.

**Theorem 14.3** *Let $L^{\mathrm{st}}(\Gamma)$ be the standard Laplace operator on a connected finite compact metric graph $\Gamma$ of total length $\mathcal L(\Gamma)$. Assume that the first nontrivial eigenvalue of $L^{\mathrm{st}}(\Gamma)$ coincides with the first nontrivial eigenvalue of the standard Laplacian $L^{\mathrm{st}}(I)$ on the interval $I$ of the same length $\mathcal L(I) = \mathcal L(\Gamma)$*

$$
\lambda\_2(L^{\rm st}(\Gamma)) = \lambda\_2(L^{\rm st}(I)) \equiv \left(\frac{\pi}{\mathcal{L}}\right)^2,\tag{14.10}
$$

*then the graph $\Gamma$ coincides with the interval $I$.*

<sup>3</sup> We have already discussed that the corresponding operators are unitarily equivalent via an obvious identification of points on the graphs, hence there is no reason to distinguish such metric graphs. Remember that this holds for standard vertex conditions only.

*Proof* We are going to use the proof of Theorem 12.1, which states that $\lambda_2(L^{\mathrm{st}}(\Gamma))$ can be estimated from below by $\left(\frac{\pi}{\mathcal L(\Gamma)}\right)^2$. If you inspect the proof carefully, you will see that the graph for which the estimate is sharp is essentially unique.

Assume that $\lambda_2(L^{\mathrm{st}}(\Gamma)) = \left(\frac{\pi}{\mathcal L(\Gamma)}\right)^2$ and consider the corresponding eigenfunction $\psi_2$. As in the proof of Theorem 12.1, let us double all the edges in $\Gamma$ to get the graph $\Gamma^2$. We denote by $\hat\psi_2$ the extension of $\psi_2$ to the doubled graph $\Gamma^2$ assigning the same values on the new edges.

The graph $\Gamma^2$ is balanced (all vertices have even degree) and therefore there exists an Eulerian path, that is, a closed path visiting each edge precisely once. In our picture, this implies that the vertices can be chopped turning $\Gamma^2$ into the loop $S_{2\mathcal L}$. Hence the function $\hat\psi_2$ can be considered as a function on the loop $S_{2\mathcal L}$. Obviously, $\hat\psi_2$ is orthogonal to the constant function; therefore, since $\lambda_2(L^{\mathrm{st}}(\Gamma)) = \lambda_2(L^{\mathrm{st}}(S_{2\mathcal L})) = \left(\frac{\pi}{\mathcal L(\Gamma)}\right)^2$, the function $\hat\psi_2$ itself is an eigenfunction for the Laplacian on the loop corresponding to the first non-zero eigenvalue. Choosing a proper parametrisation of the loop, this function just coincides with $\cos \frac{\pi}{\mathcal L} x$. The function $\psi_2(x)$ can be reconstructed from $\hat\psi_2(x) = \cos \frac{\pi}{\mathcal L} x$ by gluing its values on the pairs of edges that appeared during doubling. The values of $\hat\psi_2$ cover the interval $[-1, 1]$ precisely twice, implying that there exists just one way to glue points on the loop together to get $\Gamma$ back. To get $\Gamma^2$ back from $S_{2\mathcal L}$ one needs to glue together the points corresponding to the vertices in $\Gamma^2$. The values of $\hat\psi_2$ at these points should be equal, hence any possible $\Gamma^2$ is obtained from $S_{2\mathcal L}$ by identifying a few points with equal values of $\hat\psi_2$. Hence $\Gamma^2$ is just a chain of cycles joined together. The corresponding $\Gamma$ is a chain of intervals joined at degree two vertices.

It follows that $\Gamma$ is essentially just one interval. It might happen that $\Gamma$ is formally given by a chain of intervals, but then there exists just one way to glue these intervals together keeping $\psi_2$ continuous and having $\psi_2' = 0$ at the endpoints. Since we agreed to remove vertices of degree two, the unique graph $\Gamma$ is the interval of length $\mathcal L(\Gamma)$.
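
The Eulerian-path step of the proof is purely combinatorial and can be tested directly: doubling every edge of a connected graph makes all degrees even, and Hierholzer's algorithm then produces a closed walk using each edge exactly once. A minimal sketch (the star graph is just a sample input):

```python
from collections import defaultdict

def eulerian_circuit(edges):
    """Hierholzer's algorithm: edges is a list of (u, v) pairs of a
    connected balanced multigraph; returns a closed walk using each
    edge exactly once."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:   # drop already-used edges
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# A 3-star is not balanced, but doubling every edge makes all degrees
# even, so the doubled graph admits an Eulerian circuit.
star = [(0, 1), (0, 2), (0, 3)]
doubled = star + star
walk = eulerian_circuit(doubled)
assert walk[0] == walk[-1]            # the walk is closed
assert len(walk) == len(doubled) + 1  # each edge traversed exactly once
```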

This theorem can be seen as a geometric version of the Ambartsumian theorem: it shows that the single interval, in relation to other metric graphs, plays the same exceptional role as the zero potential in the one-dimensional Schrödinger equation.

Theorem 14.3 has several important corollaries; we shall also reformulate the theorem in order to fit Ambartsumian's original formulation.

**Corollary 14.4** *If the spectral gap for the standard Laplacian on a compact finite connected metric graph coincides with the spectral gap for the single interval of the same total length, then all other eigenvalues coincide as well.* 

The theorem can also be reformulated as follows without introducing the total length of the graph.

**Corollary 14.5** *Let $L^{\mathrm{st}}(\Gamma)$ be the standard Laplace operator on a finite compact metric graph $\Gamma$. Assume that the first non-trivial eigenvalue of $L^{\mathrm{st}}(\Gamma)$ satisfies*

$$
\lambda\_2 = \lim\_{n \to \infty} \frac{\lambda\_n}{n^2},
\tag{14.11}
$$

*then the graph is formed by one edge.* 

This corollary can be proved by noting that

$$\mathcal{L}^2(\Gamma) = \lim\_{n \to \infty} \frac{\pi^2 n^2}{\lambda\_n}.$$

Therefore it is possible to get rid of the total length in the formulation by substituting it with the asymptotics.

**Theorem 14.6** *Assume that the metric graph $\Gamma$ is finite and compact. Then the spectrum of the standard Laplacian on $\Gamma$ coincides with the spectrum of the standard (Neumann) Laplacian on an interval $I$*

$$
\lambda\_n(L^{\rm st}(\Gamma)) = \lambda\_n(L^{\rm st}(I)) \tag{14.12}
$$

*if and only if the metric graph coincides with the interval.* 

*Proof* This statement is an easy corollary of Theorem 14.3, since the spectra of the standard Laplacians $L^{\mathrm{st}}(\Gamma)$ and $L^{\mathrm{st}}(I)$ satisfy Weyl's asymptotics (4.25) and hence can coincide only if the total lengths of the two graphs coincide, i.e. $\mathcal L(\Gamma) = \mathcal L(I)$. Equality of the second eigenvalues then allows us to apply Theorem 14.3.

Theorem 14.3 cannot be generalised directly to include balanced graphs. In other words, it is not true that if a standard Laplacian on a balanced graph has the same spectral gap as the standard Laplacian on a circle of the same total length, then the graph is a circle. We present here a counterexample.

**Example 14.7** Consider the figure eight graph $\Gamma_{(2.4)}$ shown in Fig. 14.1. It has the same spectral gap as the loop graph of the same total length; moreover, the first non-trivial eigenvalue is not degenerate.

On the other hand, the method presented in the proof of Theorem 14.3 can be generalised to show that all balanced graphs with the same spectral gap as the loop are given by a chain of circles coupled to each other as in Example 14.7.

**Fig. 14.1** Graph $\Gamma_{(2.4)}$: two loops attached

**Problem 65** Check all details in Example 14.7. Prove that a standard Laplacian on a balanced graph has the same spectral gap as on the circle of the same total length if and only if the graph is a chain of circles.

**Problem 66** Show that any balanced graph, such that the first nontrivial eigenvalue is degenerate, is a loop.

### *14.1.3 Standard Vertex Conditions Are Not Exceptional*

This subsection is rather elementary; we add it just in order to complete our search for Ambartsumian-type results when just one of the parameters varies. Our first intention was just to show that on the interval the spectrum of the standard Laplacian differs from the spectrum of the Laplacian with any other vertex conditions. It turns out that an analogous statement holds essentially for all other vertex conditions as well, implying that standard conditions are not exceptional in this case.

**Theorem 14.8** *The spectrum of the Laplace operator on an interval I determines the vertex conditions up to the exchange of the two endpoints.* 

*Proof* Consider first the general case, where Robin conditions are assumed at the endpoints:

$$
u'(0) = h_0 u(0), \quad -u'(\ell) = h_\ell u(\ell).
$$

Here we identified the interval *I* with [0*,* ]*.* Every eigenfunction can be written as

$$
\psi(x) = a \cos kx + b \sin kx.
$$

Substituting *ψ* into the vertex conditions we get the secular equation

$$(h_0 + h_\ell)k\cos k\ell + (h_0 h_\ell - k^2)\sin k\ell = 0.$$

The zeroes $k_j$ of this equation determine the eigenvalues $\lambda_j = k_j^2$ of the Laplacian. As we expected, the secular equation depends just on the sum and the product of the Robin parameters and therefore is invariant under the exchange $h_0 \leftrightarrow h_\ell$. Our goal is to prove that $h_0 + h_\ell$ and $h_0 h_\ell$ are uniquely determined by the spectrum. Assume on the contrary that the operator with other Robin parameters $\tilde h_0$ and $\tilde h_\ell$ has the same spectrum and therefore the same zeroes $k_j$ of the secular equation. It follows that

$$
(h_0 + h_\ell)(\tilde h_0 \tilde h_\ell - k_j^2) = (\tilde h_0 + \tilde h_\ell)(h_0 h_\ell - k_j^2).
$$

Since *kj* grow linearly with *j* due to Weyl's asymptotics, we necessarily have

$$h_0 + h_\ell = \tilde h_0 + \tilde h_\ell. \tag{14.13}$$

If $h_0 + h_\ell \neq 0$, then it follows that

$$h_0 h_\ell = \tilde h_0 \tilde h_\ell \tag{14.14}$$

and we have proven that the Robin parameters are determined up to the exchange of the endpoints.

Consider now the case $h_0 + h_\ell = 0$, which means that $h_0 = -h_\ell =: h$. It is easy to see that $\lambda = 0$ is not an eigenvalue unless $h = 0$, and the spectrum of the problem is determined by the secular equation

$$(h^2 + k^2) \sin k\ell = 0.$$

Hence there is one negative eigenvalue

$$
\lambda\_1 = -h^2
$$

with the eigenfunction $\psi_1 = e^{hx}$. All positive eigenvalues coincide with the positive eigenvalues of the Neumann Laplacian

$$
\lambda\_n = \left(\frac{\pi}{\ell}(n-1)\right)^2, \quad n = 2, 3, 4, \dots
$$

with the corresponding eigenfunctions

$$\psi\_n = \cos\frac{\pi}{\ell}(n-1)x + h\frac{\sin\frac{\pi}{\ell}(n-1)x}{\frac{\pi}{\ell}(n-1)}.$$

The spectrum determines $h$ up to a sign, i.e. up to the flipping of the endpoints.

It remains to consider the case where Dirichlet conditions are assumed at one or both endpoints. We leave this problem as an exercise.

**Problem 67** Show that the spectrum of the Dirichlet-Dirichlet Laplacian on an interval is different from the spectrum of any other Robin Laplacian on the same interval. Prove the same result for the Laplacian with Dirichlet condition at one endpoint and Robin condition at the other one.

In some sense, the fact that any vertex conditions are uniquely determined by the spectrum of the Laplacian on the interval is surprising. As we shall see later on, the vertex conditions are harder to determine than the potential or the graph.
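
The argument above can also be checked numerically: the zeroes of the secular equation for $(h_0, h_\ell) = (1, 2)$ and $(2, 1)$ coincide, while those for $(1, 3)$ differ already in the first digits. A small sketch (interval length, scan range and tolerances are arbitrary choices):

```python
import math

def secular_roots(h0, hl, ell=1.0, kmax=20.0, steps=4000):
    """Positive zeroes of (h0+hl) k cos(k ell) + (h0 hl - k^2) sin(k ell),
    located by bisection on sign changes of a fine scan."""
    def f(k):
        return ((h0 + hl) * k * math.cos(k * ell)
                + (h0 * hl - k * k) * math.sin(k * ell))
    grid = [0.05 + i * kmax / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if f(a) * f(b) < 0:
            for _ in range(60):          # bisection refinement
                m = (a + b) / 2
                if f(a) * f(m) > 0:
                    a = m
                else:
                    b = m
            roots.append((a + b) / 2)
    return roots

r12, r21, r13 = secular_roots(1, 2), secular_roots(2, 1), secular_roots(1, 3)
# Swapping the endpoints changes neither the sum nor the product of the
# Robin parameters, hence the zeroes coincide ...
assert all(abs(a - b) < 1e-9 for a, b in zip(r12, r21))
# ... while genuinely different parameters give different zeroes.
assert max(abs(a - b) for a, b in zip(r12, r13)) > 1e-2
```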

## **14.2 One Parameter Is Fixed, Two Parameters Vary**

Our next step is to study the uniqueness theorem in the case where just one of the parameters is fixed to coincide with the exceptional one, while the other two parameters are free.

### *14.2.1 Standard Vertex Conditions Are Exceptional: Schrödinger Operators on Arbitrary Graphs*

In this section we prove an Ambartsumian-type result in the case where the vertex conditions are assumed to be standard. More precisely, we show that if the spectrum of the standard Schrödinger operator with an absolutely integrable potential coincides with the spectrum of the standard (Neumann) Laplacian on an interval, then the graph is the interval and the potential is identically equal to zero. This statement is not just a formal combination of the classical Ambartsumian theorem (Theorem 14.1) and its geometric version (Theorem 14.6). On the other hand, the above mentioned theorems will help us: once we manage to show that the graph $\Gamma$ coincides with the interval $I$, Theorem 14.1 will do the rest.

Our strategy will be the following. In addition to $L_q^{\mathrm{st}}(\Gamma)$ consider the standard Laplacian on the same metric graph, $L^{\mathrm{st}}(\Gamma)$. The spectral estimates proven in the previous chapter imply that these operators are asymptotically isospectral. This implies that the standard Laplacians on $\Gamma$ and on the interval are asymptotically isospectral. Our key step is to prove that this is possible only if these operators are isospectral.

The key statement holds true in a more general situation: two standard Laplacians are asymptotically isospectral if and only if they are isospectral (Theorem 15.14). This observation implies that the spectrum of Laplace operators on metric graphs possesses a certain rigidity: by deforming the graph one cannot just shift a few eigenvalues a little bit. It is unavoidable that the changes of the spectrum destroy the estimate (11.30). The main reason for such behaviour of the spectrum is that it is given by the zeroes of a certain almost periodic function. More about this can be found in the next chapter, where the general result will be proven. Our goal right now is to prove this statement in the simplest case, where one of the metric graphs is an interval. The proof in this situation is not only much simpler, but we believe that it is rather elegant.

The proof is based on the fact that the spectrum of a standard Laplacian is given by zeroes of a trigonometric polynomial (see Theorem 6.1):

$$p(k) = \sum_{j=1}^{J} p_j e^{i\omega_j k} = 0.$$

We shall need two technical lemmas.

**Lemma 14.9** *[97] Given $\omega_1, \dots, \omega_J \in \mathbb R$ there exists a subsequence $\{m_i\}$ of the natural numbers such that*

$$\lim_{i \to \infty} e^{i\omega_j m_i} = 1 \tag{14.15}$$

*for each $\omega_j$, $j = 1, 2, \dots, J$.*

*Proof* Let us denote $\omega := (\omega_1, \omega_2, \dots, \omega_J)$. It is enough to consider the projections of the vectors $m\omega$ to the $J$-dimensional torus $\mathbb T^J = (\mathbb R / 2\pi \mathbb Z)^J$. The torus is compact and the sequence is infinite, hence there exists a subsequence $n_i \omega$ such that the corresponding projections converge on the torus. These projections form a Cauchy sequence and therefore for any $\epsilon$ there exists $\mathbf I = \mathbf I(\epsilon)$ such that for any $i_1, i_2 \geq \mathbf I(\epsilon)$ it holds

$$|e^{i\omega_j(n_{i_1} - n_{i_2})} - 1| < \epsilon.$$

Taking any sequence $\epsilon_i \xrightarrow[i \to \infty]{} 0$ we may choose $i_1(\epsilon_i) = \mathbf I(\epsilon_i)$ and $i_2(\epsilon_i) > \mathbf I(\epsilon_i)$ sufficiently large, so that

$$m_i := n_{i_2(\epsilon_i)} - n_{i_1(\epsilon_i)}$$

is an increasing sequence. Then we have $e^{im_i\omega_j} = e^{i(n_{i_2(\epsilon_i)} - n_{i_1(\epsilon_i)})\omega_j} \to 1$ for any $j$. It follows that (14.15) holds.
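
The lemma is non-constructive (it relies on compactness of the torus), but for concrete frequencies an integer $m$ with a simultaneous near-return can be found by direct search; the sketch below (frequencies, tolerance and search bound are arbitrary choices) finds one for $\omega = (\sqrt 2, \sqrt 3)$.

```python
import cmath
import math

def find_return(omegas, tol=0.1, mmax=200000):
    """Smallest m with every m*omega_j within tol of a multiple of 2*pi,
    i.e. with e^{i omega_j m} close to 1; None if none below mmax."""
    for m in range(1, mmax):
        # math.remainder gives the signed distance to the nearest
        # multiple of 2*pi, lying in [-pi, pi].
        if all(abs(math.remainder(m * w, 2 * math.pi)) < tol for w in omegas):
            return m
    return None

omegas = [math.sqrt(2), math.sqrt(3)]
m = find_return(omegas)
assert m is not None
# |e^{i theta} - 1| <= |theta|, so the return is simultaneous.
assert all(abs(cmath.exp(1j * w * m) - 1) < 0.1 for w in omegas)
```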

The following technical lemma on zeroes of such polynomials will be used:

**Lemma 14.10 (J. Boman)** *Let p be a trigonometric polynomial* 

$$p(k) = \sum\_{j=1}^{J} p\_j e^{i\omega\_j k},\tag{14.16}$$

*with ω*1*, ω*2*,...,ωJ* <sup>∈</sup> <sup>R</sup> *and p*1*, p*2*,...,pJ* <sup>∈</sup> <sup>C</sup>*. If the zeroes kn of <sup>p</sup> satisfy* 

$$\lim\_{n \to \pm \infty} (k\_n - n) = 0,\tag{14.17}$$

*then $k_n = n$ for all $n$.*<sup>4</sup>

<sup>4</sup> Note that enumeration of zeroes used in this Lemma does not follow our convention to denote the lowest eigenvalue of the standard Laplacian by *λ*<sup>1</sup> = 0 implying *k*<sup>1</sup> = 0.

*Proof* We assume that the sequence $m_i$ is chosen so that (14.15) holds. Denoting $k_n - n =: \gamma_n$, so that $\gamma_n$ tends to zero as $n \to \pm\infty$, we have for a fixed $n$

$$\begin{split} 0 &= p(k_{n+m_i}) = \sum_{j=1}^{J} p_j e^{i\omega_j(n+m_i+\gamma_{n+m_i})} \\ &= \sum_{j=1}^{J} p_j e^{i\omega_j n} e^{i\omega_j m_i} e^{i\omega_j \gamma_{n+m_i}} \xrightarrow[i \to \infty]{} \sum_{j=1}^{J} p_j e^{i\omega_j n} = p(n). \end{split} \tag{14.18}$$

The limit follows from the choice of $m_i$, the fact that $\gamma_{n+m_i}$ tends to zero and the finiteness of the sum.

Since the left hand side is zero by construction, we conclude that *p(n)* = 0*.* It follows in particular that

$$k\_n - k\_{-n} \le 2n + 1.$$

On the other hand, by assumption *kn* − *n* → 0 as *n* → ±∞, hence *kn* = *n.* This proves the lemma.

Using scaling arguments, the lemma can be generalised to the case where $k_n - \frac{\pi}{\mathcal L} n \to 0$ for some positive $\mathcal L$ instead of (14.17). We are going to use this statement in the case where instead of (14.17) we have $\lim_{n\to\infty}\big(k_n - (n-1)\big) = 0$.

We are ready to prove the theorem, which can be seen as **a strong version of the Ambartsumian theorem** for quantum graphs.

**Theorem 14.11 (Boman-Kurasov-Suhr [97])** *Let $\Gamma$ be a finite compact metric graph and $q$ a real absolutely integrable potential on $\Gamma$. If the spectrum of the standard Schrödinger operator $L_q^{\mathrm{st}}(\Gamma)$ coincides with the spectrum of the standard (Neumann) Laplacian on the interval $I$*

$$
\lambda\_n(L\_q^{\rm st}(\Gamma)) = \lambda\_n(L^{\rm st}(I)),\tag{14.19}
$$

*then the graph $\Gamma$ coincides with the interval $I$ and $q(x) \equiv 0$ almost everywhere.*

*Proof* Assume that all assumptions of the theorem hold, in particular that

$$\lambda\_n(L\_q^{\mathrm{st}}(\Gamma)) = \left(\frac{\pi(n-1)}{\mathcal{L}}\right)^2 \Rightarrow k\_n(L\_q^{\mathrm{st}}(\Gamma)) = \frac{\pi(n-1)}{\mathcal{L}}.\tag{14.20}$$

Theorem 11.8 implies that

$$|\lambda\_n(L\_q^{\mathrm{st}}(\Gamma)) - \lambda\_n(L^{\mathrm{st}}(\Gamma))| = \mathcal{O}(1).$$

It follows that the corresponding square roots are close to each other, i.e. the operators are asymptotically isospectral

$$|k\_n(L\_q^{\mathrm{st}}(\Gamma)) - k\_n(L^{\mathrm{st}}(\Gamma))| = \frac{|\lambda\_n(L\_q^{\mathrm{st}}(\Gamma)) - \lambda\_n(L^{\mathrm{st}}(\Gamma))|}{|k\_n(L\_q^{\mathrm{st}}(\Gamma)) + k\_n(L^{\mathrm{st}}(\Gamma))|} = \mathcal{O}(1/n),$$

since $\lambda_n(L^{\mathrm{st}}(\Gamma))$ and $\lambda_n(L_q^{\mathrm{st}}(\Gamma))$ satisfy Weyl's asymptotics. Taking into account (14.20) we see that

$$k_n(L^{\mathrm{st}}(\Gamma)) - \frac{\pi}{\mathcal L}(n-1) \to 0.$$

Now Lemma 14.10 can be applied to conclude that

$$k_n(L^{\mathrm{st}}(\Gamma)) = \frac{\pi}{\mathcal L}(n-1),$$

since the spectrum of $L^{\mathrm{st}}(\Gamma)$ is given by a trigonometric polynomial (5.47). It follows in particular that $\lambda_2(L^{\mathrm{st}}(\Gamma)) = \left(\frac{\pi}{\mathcal L}\right)^2$, implying that $\Gamma$ is an interval (Theorem 14.3). It remains to apply the classical Ambartsumian Theorem 14.1 to conclude that $q(x) \equiv 0$.

To prove the theorem we combined the classical Theorem 14.1 with its geometric version Theorem 14.3 using the elegant Lemma 14.10. It might appear that this step is rather trivial, but one should remember that we also used the spectral estimates (Theorem 11.8) forming the basis for our analysis.

**Problem 68** Assume that in the proof of Lemma 14.9 the set $\{m\omega\}$ projected on the torus is finite. How can the sequence $m_i$ be constructed explicitly?

### *14.2.2 Zero Potential: Laplacians on Graphs that Are Isospectral to the Interval*

Our next uniqueness theorem should be devoted to the Schrödinger operator with zero potential, i.e. to the Laplacian. Instead, we are going to present counterexamples showing that such a uniqueness theorem does not hold.

We start by providing two counterexamples constructed using elementary unitary transformations or by playing with the graph's topology.

Let $\Gamma$ be the graph formed by two edges of length $\pi/2$, $e_1 = [x_1, x_2] = [0, \pi/2]$ and $e_2 = [x_3, x_4] = [\pi/2, \pi]$, and vertices $V^1 = \{x_1\}$, $V^2 = \{x_2, x_3\}$, and $V^3 = \{x_4\}$. The graph $\Gamma$ can be seen as the interval $[0, \pi]$ with certain conditions introduced at the middle point $x = \pi/2$.

**Example 14.12 (Elementary Counterexample 1)** Consider any unimodular function *Θ(x)* constant on each of the two edges, say

$$\Theta\_{\theta}(\mathbf{x}) = \begin{cases} 1, & 0 < \mathbf{x} < \pi/2; \\ \exp(i\theta), & \pi/2 < \mathbf{x} < \pi. \end{cases}$$

Then the operator

$$A_\theta := \Theta_\theta \underbrace{L^{\mathrm{st}}(\Gamma)}_{= L^{\mathrm{st}}([0,\pi])} (\Theta_\theta)^{-1}$$

and $L^{\mathrm{st}}([0,\pi])$ are unitarily equivalent. The operator $A_\theta$ is the Laplace operator defined on the functions satisfying Neumann conditions at $V^1$ and $V^3$ and vertex conditions with $S^2 = \begin{pmatrix} 0 & e^{i\theta} \\ e^{-i\theta} & 0 \end{pmatrix}$ at the central vertex $V^2$. The operators $A_\theta$ and $L_0^{\mathrm{st}}$ are not only unitarily equivalent, but their eigenfunctions satisfy

$$\left|\psi_n^{A_\theta}(x)\right|^2 = \left|\psi_n^{L_0^{\mathrm{st}}}(x)\right|^2,\tag{14.21}$$

i.e. the probability densities of the eigenfunctions coincide. One may say that these operators are indistinguishable from the point of view of quantum mechanics.

**Example 14.13 (Elementary Counterexample 2)** Consider the Laplace operator $B$ defined on the functions satisfying Neumann conditions at both endpoints of the left edge $[x_1, x_2] = [0, \pi/2]$ and Dirichlet and Neumann conditions at the endpoints of the right edge $[x_3, x_4] = [\pi/2, \pi]$. The conditions at the middle vertex of $\Gamma$ are not properly connecting, and the corresponding operator is an orthogonal sum of the operators on the two edges. Hence the spectrum is the union of the two spectra $0, 2^2, 4^2, \dots$ and $1, 3^2, 5^2, \dots$, and therefore coincides with the spectrum of $L^{\mathrm{st}}([0, \pi])$.

The eigenfunctions have support on one of the half-intervals, hence there is no chance that a version of formula (14.21) holds. On the other hand, this example is not very interesting, since the graph corresponding to the operator $B$ is not connected. It illustrates that isospectrality can be achieved by changing the topology of the graph.
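A one-line numerical check (our illustration, not part of the original text) confirms that the two spectra combine into the full Neumann spectrum $n^2$ of the interval $[0, \pi]$:

```python
# Neumann-Neumann spectrum of an edge of length pi/2: (2m)^2, m = 0, 1, ...
# Dirichlet-Neumann spectrum of an edge of length pi/2: (2m+1)^2, m = 0, 1, ...
N = 50
union = sorted([(2 * m) ** 2 for m in range(N)] + [(2 * m + 1) ** 2 for m in range(N)])
# together they give exactly the Neumann spectrum n^2, n = 0, 1, 2, ... of [0, pi]
assert union == [n ** 2 for n in range(2 * N)]
```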

**Example 14.14 (Nontrivial Counterexample)** Consider again the graph $\Gamma$ and impose Neumann conditions at $V^1$ and $V^3$, as is done for standard conditions. At $V^2$ we impose conditions via a vertex scattering matrix $S_2$

$$S_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

We only require that $S_2$ is unitary, Hermitian and non-diagonal (which ensures that $S_2$ is properly connecting), so that $a, d \in \mathbb{R}$ and $c = \bar{b}$. From the normalisation of the columns we get $c = \sqrt{1-a^2}\, e^{-i\theta}$ for some $\theta \in [0, 2\pi)$, and so $b = \sqrt{1-a^2}\, e^{i\theta}$; from the orthogonality of the columns we get $a\sqrt{1-a^2}\, e^{i\theta} + d\sqrt{1-a^2}\, e^{i\theta} = 0$, and since $|a| < 1$ we have $d = -a$, so

$$S\_2(a,\theta) = \begin{pmatrix} a & \sqrt{1-a^2}e^{i\theta} \\ \sqrt{1-a^2}e^{-i\theta} & -a \end{pmatrix}, \quad |a| < 1.$$

As Neumann conditions at the endpoints $x_1$ and $x_4$ determine scattering that is just reflection with coefficient $1$, we get the total vertex scattering matrix for $\Gamma$

$$\mathbf{S}(a,\theta) \equiv \mathbf{S} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & a & \sqrt{1-a^2}\,e^{i\theta} & 0 \\ 0 & \sqrt{1-a^2}\,e^{-i\theta} & -a & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad |a| < 1,\ \theta \in [0, 2\pi). \tag{14.22}$$

The vertex conditions are scaling-invariant.

In this basis the edge scattering matrix $\mathbf{S}_{\mathrm{e}}$ is given by

$$\mathbf{S}_{\mathrm{e}}(k) = \begin{pmatrix} 0 & e^{ik\pi/2} & 0 & 0 \\ e^{ik\pi/2} & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{ik\pi/2} \\ 0 & 0 & e^{ik\pi/2} & 0 \end{pmatrix}. \tag{14.23}$$

Using that **S** is Hermitian, the secular equation (5.44) can be written as

$$\det(\mathbf{S\_e} - \mathbf{S}(a, \theta)) = \begin{vmatrix} -1 & e^{ik\pi/2} & 0 & 0\\ e^{ik\pi/2} & -a & -\sqrt{1 - a^2}e^{i\theta} & 0\\ 0 & -\sqrt{1 - a^2}e^{-i\theta} & a & e^{ik\pi/2} \\ 0 & 0 & e^{ik\pi/2} & -1 \end{vmatrix}$$

$$= -1 + e^{4 \cdot ik\pi/2} = -1 + e^{2\pi i k} = 0,$$

which is the same as the secular equation for the interval of length $\pi$ with standard, i.e. Neumann, conditions at the endpoints. To prove that $L_0^{S(a,\theta)}(\Gamma)$ has the same spectrum as $L_0^{\mathrm{st}}([0,\pi])$, it remains to inspect the ground state. The eigenvalue $\lambda = 0$ has multiplicity one for $L_0^{S(a,\theta)}(\Gamma)$, and the corresponding eigenfunction is given by

$$\psi\_0 = \begin{cases} \sqrt{1+a}e^{-i\theta}, & 0 < x < \pi/2, \\ \sqrt{1-a}, & \pi/2 < x < \pi. \end{cases}$$

Hence the operators $L_0^{S(a,\theta)}(\Gamma)$ and $L_0^{\mathrm{st}}([0,\pi])$ are isospectral. We note that the eigenfunctions are of the form

$$\psi_m(x) = \begin{cases} \sqrt{1+a}\, e^{-i\theta}\cos((m-1)x), & 0 < x < \pi/2, \\ \sqrt{1-a}\cos((m-1)x), & \pi/2 < x < \pi; \end{cases} \quad m = 1, 2, \dots$$

Hence no formula similar to (14.21) may hold unless $a = 0$. Note that both operators $A_\theta$ and $B$ can formally be included into the family $L_0^{S(a,\theta)}$:

$$A\_{\theta} = L\_0^{S(0,\theta)}, \quad B = L\_0^{S(1,\theta)}.\tag{14.24}$$

We have thus shown that the insertion of an extra middle vertex with vertex conditions given by an **arbitrary** non-diagonal unitary and Hermitian matrix does not change the spectrum of $L_0^{\mathrm{st}}([0,\pi])$. These conditions correspond to all possible self-adjoint scaling-invariant conditions at the new mid-vertex, except for the two cases of disjoint Neumann-Dirichlet or Neumann-Neumann intervals. For standard conditions a degree two vertex is removable, but we emphasise that, while the metric graphs considered above are topologically intervals, the middle vertex is in general non-removable for these conditions.

**Problem 69** Use trace formula (8.20) to explain isospectrality of the graphs in Example 14.14.
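Before turning to the trace formula, the determinant identity behind Example 14.14 can be verified numerically. The following sketch (our illustration, not from the book; the helper names `det` and `secular` are ours) evaluates $\det(\mathbf{S}_{\mathrm{e}}(k) - \mathbf{S}(a,\theta))$ for random admissible pairs $(a,\theta)$ and confirms that it equals $e^{2\pi i k} - 1$ independently of the chosen vertex conditions:

```python
import cmath
import math
import random

def det(M):
    # determinant by Laplace expansion along the first row (fine for 4x4)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def secular(k, a, theta):
    # det(S_e(k) - S(a, theta)) assembled from (14.22) and (14.23), edges of length pi/2
    s = cmath.exp(1j * k * math.pi / 2)
    b = math.sqrt(1 - a * a) * cmath.exp(1j * theta)
    return det([[-1, s, 0, 0],
                [s, -a, -b, 0],
                [0, -b.conjugate(), a, s],
                [0, 0, s, -1]])

random.seed(1)
for _ in range(5):
    a, theta = random.uniform(-0.99, 0.99), random.uniform(0.0, 2 * math.pi)
    # zeros exactly at integer k, i.e. the Neumann spectrum k^2 of [0, pi] ...
    for k in (1, 2, 3):
        assert abs(secular(k, a, theta)) < 1e-9
    # ... because the determinant equals e^{2 pi i k} - 1 for every (a, theta)
    assert abs(secular(0.7, a, theta) - (cmath.exp(2j * math.pi * 0.7) - 1)) < 1e-9
```

The assertions pass for any $|a| < 1$ and any $\theta$, reflecting that the spectrum does not feel the inserted vertex conditions at all.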

The presented counterexamples allow us to formulate the following theorem.

**Theorem 14.15** *Let a Laplace operator $L^S(\Gamma)$ with asymptotically properly connecting vertex conditions be isospectral to the standard Laplacian on the interval $I$:* $\sigma(L^S(\Gamma)) = \sigma(L_0^{\mathrm{st}}(I))$. *This does not in general imply that* $L^{S_{\mathrm{v}}(\infty)}(\Gamma_\infty) = L^{\mathrm{st}}(I)$.

We note that in the above example $S_2(a,\theta)$ is Hermitian, so we in fact have $\Gamma_\infty = \Gamma$. What is surprising with the counterexample is that it works with any $2 \times 2$ unitary Hermitian matrix.

# *14.2.3 Single Interval: Schrödinger Operators Isospectral to the Standard Laplacian*

The goal of this section is to show that there exists an infinite family of Schrödinger operators on a single interval which are isospectral to the Neumann Laplacian on it. We start with the Dirichlet Laplacian on the interval $[0,1]$. The spectrum is given by the eigenvalues

$$
\lambda\_n = \left(\pi n\right)^2, \quad n = 1, 2, \dots
$$

with the corresponding eigenfunctions

$$
\psi\_n = \sin \pi n x.
$$

To get the spectrum of the Neumann Laplacian it is enough to add just one eigenvalue, namely zero. We are going to obtain it by inverting Crum's procedure, which describes how to remove the lowest eigenvalue of a Schrödinger equation on an interval using the Darboux transform. We are going to follow the famous paper [150].

#### **Crum's Procedure**

Assume that a Schrödinger operator

$$L\_q^\mathbf{h} = -\frac{d^2}{dx^2} + q(\mathbf{x}), \ \mathbf{h} = (h\_0, h\_1) \in \mathbb{R}^2, q(\mathbf{x}) \in \mathbb{R},$$

with the domain consisting of functions from $W_2^2(0,1)$ satisfying the Robin conditions

$$
u'(0) = h_0 u(0), \quad -u'(1) = h_1 u(1),
$$

be given. Denote by

$$\lambda\_n := \lambda\_n^{L\_q^\mathbf{h}}, \quad \psi\_n := \psi\_n^{L\_q^\mathbf{h}}, \quad n = 1, 2, \dots$$

its eigenvalues and eigenfunctions respectively.

The main idea of the Darboux transform is to use several eigenfunctions to construct a new potential such that the eigenfunctions of the new Schrödinger operator can be expressed through the original eigenfunctions. We present here the simplest version of Crum's procedure, where just the ground state is used (see [150] for the general consideration).

Let us denote by *v* the logarithmic derivative of *ψ*<sup>1</sup>

$$v(\mathbf{x}) := \frac{\psi\_1'(\mathbf{x})}{\psi\_1(\mathbf{x})}.$$

It is non-singular, since the ground state eigenfunction never vanishes. The function *v* satisfies the Riccati equation

$$v' + v^2 = q(\mathbf{x}) - \lambda\_\mathbf{l}.\tag{14.25}$$

To see this, substitute $v = \psi_1'/\psi_1$ into (14.25): the Riccati equation turns into the eigenfunction equation

$$-\psi_1'' + q(x)\psi_1 = \lambda_1\psi_1,$$

which is obviously satisfied by $\psi_1$.

The deformed potential is given by the explicit formula

$$
\hat{q}(x) = q(x) - 2v'(x) = q(x) - 2\frac{d^2}{dx^2}\ln\psi_1(x).\tag{14.26}
$$

The new Schrödinger operator $L^{\mathrm{D}}_{\hat{q}} = -\frac{d^2}{dx^2} + \hat{q}(x)$ is defined on the functions satisfying Dirichlet boundary conditions. Its eigenvalues and eigenfunctions will be denoted as:

$$\hat{\lambda}\_n := \lambda\_n^{L^{\rm D}\_{\hat{q}}}, \quad \hat{\psi}\_n := \psi\_n^{L^{\rm D}\_{\hat{q}}}, \ n = 1, 2, \dots$$

The main point of Crum's procedure is that the spectrum of the new operator, as well as the corresponding eigenfunctions, can easily be calculated. The spectrum of $L^{\mathrm{D}}_{\hat{q}}$ coincides with the spectrum of the original operator $L^{\mathbf{h}}_q$ except that $\lambda_1(L^{\mathbf{h}}_q)$ is missing:

$$
\hat{\lambda}_n = \lambda_{n+1}, \quad n = 1, 2, \dots \tag{14.27}
$$

The corresponding eigenfunctions *ψ*ˆ*<sup>n</sup>* can easily be calculated from the eigenfunctions *ψn*+<sup>1</sup> of the original operator

$$
\hat{\psi}_n = \psi_1 \frac{d}{dx}\left(\frac{\psi_{n+1}}{\psi_1}\right). \tag{14.28}
$$

Let us first check that the functions $\hat{\psi}_n$ determined by this formula satisfy Dirichlet conditions. Differentiating we get

$$
\hat{\psi}\_n = \frac{\psi'\_{n+1}\psi\_1 - \psi\_{n+1}\psi'\_1}{\psi\_1}.
$$

Since $\psi_1$ and $\psi_{n+1}$ satisfy the same Robin conditions at the endpoints, $\hat{\psi}_n$ is equal to zero there.

Multiplying (14.28) by *ψ*<sup>1</sup> and differentiating we obtain:

$$\frac{d}{dx}\left(\psi_1\hat{\psi}_n\right) = \underbrace{\psi_{n+1}''}_{=(q-\lambda_{n+1})\psi_{n+1}}\psi_1 + \psi_{n+1}'\psi_1' - \psi_{n+1}'\psi_1' - \psi_{n+1}\underbrace{\psi_1''}_{=(q-\lambda_1)\psi_1},$$

implying

$$(\lambda_1 - \lambda_{n+1})\psi_1\psi_{n+1} = \frac{d}{dx}\left(\psi_1\hat{\psi}_n\right). \tag{14.29}$$

Since *ψ*ˆ*<sup>n</sup>* satisfies Dirichlet conditions we may easily integrate

$$
\psi_1(x)\hat{\psi}_n(x) = (\lambda_1 - \lambda_{n+1})\int_0^x \psi_1(t)\psi_{n+1}(t)dt = -(\lambda_1 - \lambda_{n+1})\int_x^1 \psi_1(t)\psi_{n+1}(t)dt.
$$

Dividing by *ψ*<sup>1</sup> and differentiating

$$
\hat{\psi}_n(x) = \frac{\lambda_1 - \lambda_{n+1}}{\psi_1(x)}\int_0^x \psi_1(t)\psi_{n+1}(t)dt
$$

twice we get

$$\begin{split} \hat{\psi}_n'(x) &= -\frac{\psi_1'(x)}{\psi_1^2(x)}(\lambda_1 - \lambda_{n+1})\int_0^x \psi_1(t)\psi_{n+1}(t)dt + \frac{1}{\psi_1(x)}(\lambda_1 - \lambda_{n+1})\psi_1(x)\psi_{n+1}(x)\\ &= (\lambda_1 - \lambda_{n+1})\psi_{n+1}(x) - v(x)\hat{\psi}_n(x); \end{split}$$

$$\begin{split} \hat{\psi}_n''(x) &= (\lambda_1 - \lambda_{n+1})\psi_{n+1}' - v'(x)\hat{\psi}_n(x) - v(x)\underbrace{\left((\lambda_1 - \lambda_{n+1})\psi_{n+1}(x) - v(x)\hat{\psi}_n(x)\right)}_{=\hat{\psi}_n'}\\ &= \left(v^2(x) - v'(x) + \lambda_1 - \lambda_{n+1}\right)\hat{\psi}_n(x) + (\lambda_1 - \lambda_{n+1})\underbrace{\left(\psi_{n+1}' - v(x)\psi_{n+1}(x) - \hat{\psi}_n(x)\right)}_{=0}\\ &= (\hat{q}(x) - \lambda_{n+1})\hat{\psi}_n(x), \end{split}$$

where in the last step we used the Riccati equation, relation (14.26) to substitute $q$ with $\hat{q}$, and the definition (14.28) of $\hat{\psi}_n$, implying that

$$
\hat{\psi}_n = \psi'_{n+1} - v\psi_{n+1}.
$$

Thus we have not only proven that the function *ψ*ˆ*<sup>n</sup>* satisfies Dirichlet boundary conditions, but that it is a solution to the eigenfunction equation

$$-\psi'' + \hat{q}\,\psi = \lambda\psi$$

with *λ* = *λn*+1*.*

Checking the number of zeroes of the calculated functions one may prove that we have obtained all eigenfunctions of the new Schrödinger operator. Another way to show this is to use formula (14.28) to express the original eigenfunctions through the new ones:

$$
\psi\_{n+1} = \frac{1}{\lambda\_1 - \lambda\_{n+1}} \cdot \frac{1}{\psi\_1} \frac{d}{dx} \left( \psi\_1 \hat{\psi}\_n \right). \tag{14.30}
$$

#### **Inverting Crum's Procedure**

We are looking for a Schrödinger operator $L^{\mathbf{h}}_q$ having the same spectrum as the Neumann Laplacian on the interval,

$$
\lambda_{n+1} = n^2\pi^2, \quad n = 0, 1, 2, \dots,
$$

and the eigenfunctions

$$
\hat{\psi}\_n = \sin \pi n x.
$$

Assume that we have carried out the elimination of the ground state and obtained the Dirichlet Laplacian $L^{\mathrm{D}} = L^{\mathrm{D}}_0$ with the spectrum

$$
\widehat{\lambda}\_n = n^2 \pi^2, \quad n = 1, 2, \ldots
$$

Our goal is to find *q*. To this end, let us examine the Riccati equation (14.25)

$$v' + v^2 = \underbrace{q(x)}_{=0+2v'} - \underbrace{\lambda_1}_{=0},$$

which implies

$$-\upsilon' + \upsilon^2 = 0.\tag{14.31}$$

One possible solution to this differential equation is

$$v(x) = \frac{-1}{x+1},$$

leading to the potential

$$q(\mathbf{x}) = 2\frac{d}{d\mathbf{x}}v(\mathbf{x}) = \frac{2}{(\mathbf{x}+1)^2}.$$

The ground state can be calculated, since *v* is just its logarithmic derivative

$$
\psi\_1(x) = \frac{1}{x+1}.
$$

To determine the operator $L^{\mathbf{h}}_q$ it remains to provide the Robin parameters, which are equal to

$$h\_0 = v(0) = -1, \; h\_1 = -v(1) = \frac{1}{2}.\tag{14.32}$$

The eigenfunctions corresponding to the higher eigenvalues can be calculated using formula (14.30)

$$
\psi_{n+1} = -\frac{1}{\pi^2 n^2}\left(\pi n \cos \pi n x - \frac{\sin \pi n x}{x+1}\right).
$$
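The constructed family can be checked numerically by finite differences. The sketch below (our illustration; all helper names are ours) verifies the eigenfunction equation with $q(x) = 2/(x+1)^2$ and $\lambda = \pi^2 n^2$, the Robin conditions (14.32), and the Darboux relation $\hat{\psi}_n = \psi_{n+1}' - v\psi_{n+1}$ with $v = -1/(x+1)$:

```python
import math

def q(x):
    return 2.0 / (x + 1.0) ** 2            # the deformed potential

def psi(n, x):
    # claimed eigenfunction for the eigenvalue lambda = (pi n)^2
    return -(1.0 / (math.pi * n) ** 2) * (math.pi * n * math.cos(math.pi * n * x)
                                          - math.sin(math.pi * n * x) / (x + 1.0))

def d(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)      # central first derivative

def dd(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2   # central second derivative

for n in (1, 2, 3):
    lam = (math.pi * n) ** 2
    f = lambda x, n=n: psi(n, x)
    # eigenfunction equation -psi'' + q psi = lambda psi on (0, 1)
    for x in (0.1, 0.35, 0.6, 0.85):
        assert abs(-dd(f, x) + q(x) * f(x) - lam * f(x)) < 1e-3
    # Robin conditions (14.32): psi'(0) = -psi(0) and -psi'(1) = psi(1)/2
    assert abs(d(f, 0.0) + f(0.0)) < 1e-6
    assert abs(-d(f, 1.0) - 0.5 * f(1.0)) < 1e-6
    # Darboux relation: psi'_{n+1} - v psi_{n+1} = sin(pi n x) with v = -1/(x+1)
    for x in (0.2, 0.7):
        assert abs(d(f, x) + f(x) / (x + 1.0) - math.sin(math.pi * n * x)) < 1e-6
```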

The constructed explicit counterexample proves the following theorem.

**Theorem 14.16** *Let $L^{\mathbf{h}}_q(I)$ be a Schrödinger operator on the interval $[0,1]$ with real-valued potential $q$ and boundary conditions determined by Robin parameters $h_0$ and $h_1$. Then there exists an infinite family of such operators which are isospectral to the Neumann Laplacian on $I$.*

*Proof* We constructed a single operator with the prescribed property. To obtain an infinite family one may consider different solutions to the differential equation (14.31). One just has to be careful to choose solutions without singularities inside the interval; the rest is similar to the calculations already carried out.

**Problem 70** Complete the proof of Theorem 14.16 by considering alternative solutions to the differential equation (14.31). What is the reason that relation

$$h\_0 + h\_1 + \frac{1}{2} \int\_0^1 q(x) dx = 0$$

holds for all constructed counterexamples?

Our studies in this section show that standard vertex conditions are crucial for Ambartsumian-type theorems to hold: fixing any other member of the triple (the graph or the potential) does not allow one to extend the Ambartsumian theorem. Only if we assume that the vertex conditions are standard are we able to extend Ambartsumian's result to graphs (Theorem 14.11).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 15 Further Theorems Inspired by Ambartsumian**

We continue to derive results in the spirit of the classical Ambartsumian Theorem 14.1. In the first part we use the heat kernel technique to show that a Schrödinger operator is isospectral to a Laplacian only if the potential is zero. This part is rather technical but does not require any a priori knowledge of heat kernel semigroups. In the second part the theory of almost periodic functions is used to obtain uniqueness results for Laplace and Schrödinger operators.

# **15.1 Ambartsumian-Type Theorem by Davies**

Our goal in this section is to prove that among all standard Schrödinger operators on a metric graph, only the operator with zero potential has the same spectrum as the Laplacian. In other words, the zero potential is unique among all other potentials if one just looks at the spectrum of a quantum graph. In the case where the metric graph is just an interval, this fact is the classical Ambartsumian theorem, but the graphs considered in this section are arbitrary finite compact metric graphs. We are going to assume that such a graph is fixed. In our presentation we are going to follow the proof by E.B. Davies [155], but we adapt it to the case of quantum graphs. The original proof goes as follows: it is first shown that the statement holds under rather general assumptions, then it is shown that the assumptions hold for quantum graphs with standard vertex conditions. Adapting the proof we managed to simplify some arguments and no deep knowledge of the heat kernel approach to spectral theory [154] will be required.

## *15.1.1 On a Sufficient Condition for the Potential to Be Zero*

If you examine the proof of the classical Ambartsumian theorem (Theorem 14.1) you will see that the crucial point is to show that the potential has average value zero. For a single interval this follows from explicit spectral asymptotics. For arbitrary compact graphs this fact follows from the asymptotic analysis of the heat kernel as *t* → 0*.* Therefore let us first show that proving that the average value of the potential is zero is enough to obtain an Ambartsumian-type theorem.

#### **Theorem 15.1** *If*

$$
\lambda\_1(L\_q^{\rm st}(\Gamma)) \ge 0 \tag{15.1}
$$

*and* 

$$\int\_{\Gamma} q(\mathbf{x})d\mathbf{x} \le 0,\tag{15.2}$$

*then q is equal to zero almost everywhere.* 

*Proof* Let us use *u(x)* ≡ 1 as a trial function for the quadratic form

$$\begin{aligned} \left. \mathcal{Q}(u,u) \right|\_{u=1} &= \int\_{\Gamma} |u'(\mathbf{x})|^2 d\mathbf{x} \Big|\_{u=1} + \int\_{\Gamma} q(\mathbf{x}) |u(\mathbf{x})|^2 d\mathbf{x} \Big|\_{u=1} \\ &= 0 + \int\_{\Gamma} q(\mathbf{x}) d\mathbf{x}. \end{aligned}$$

Inequality (15.1) implies that *Q(*1*,* 1*)* ≥ 0 and hence

$$\int\_{\Gamma} q(\boldsymbol{x})d\boldsymbol{x} \ge 0,$$

which in combination with (15.2) implies

$$\int_{\Gamma} q(x)dx = 0 \quad \text{and} \quad \lambda_1(L_q^{\mathrm{st}}(\Gamma)) = 0.$$

Therefore $u(x) \equiv 1$ is not only an eigenfunction of the standard Laplacian on $\Gamma$ but also an eigenfunction of $L^{\mathrm{st}}_q(\Gamma)$, since the lowest eigenvalue of the Schrödinger operator is simple. Hence, on every edge $u(x) \equiv 1$ satisfies the differential equation

$$-\frac{d^2}{dx^2}u(x) + q(x)u(x) = 0 \cdot u(x),$$

implying

$$q(x) = 0$$

almost everywhere.

## *15.1.2 Laplacian Heat Kernel*

Our goal in this subsection is to study properties of the heat kernel associated with the standard Laplacian on a finite compact metric graph $\Gamma$. The heat kernel $H_\Gamma(t,x,y)$ is defined as the kernel of the integral operator solving the heat equation

$$\begin{cases} \frac{\partial}{\partial t}u(t,x) = -L_q^{\mathrm{st}} u(t,x), & x \in \Gamma,\ t \in (0,\infty),\\ u(0,x) = u_0(x), & x \in \Gamma; \end{cases} \quad\Rightarrow\quad u(t,x) = \int_{\Gamma} H_{\Gamma}(t,x,y)u_0(y)dy. \tag{15.3}$$

In the case where all eigenfunctions *ψn* of *L*st *<sup>q</sup>* and eigenvalues *λn* are known, the solution can be presented as an absolutely converging series

$$u(t, \mathbf{x}) = \sum\_{n=1}^{\infty} e^{-\lambda\_n t} \psi\_n(\mathbf{x}) \langle \psi\_n, u\_0 \rangle\_{L^2(\Gamma)} \tag{15.4}$$

leading to an explicit formula for the Heat kernel

$$H\_{\Gamma}(t, \mathbf{x}, \mathbf{y}) = \sum\_{n=1}^{\infty} e^{-\lambda\_{n}t} \psi\_{n}(\mathbf{x}) \overline{\psi\_{n}(\mathbf{y})}.\tag{15.5}$$

Note that the complex conjugation is not needed if the eigenfunctions can be chosen real, for example in the case of standard vertex conditions considered here.

We first study the heat kernel for a single interval $[-a,a]$ with Dirichlet boundary conditions imposed at the endpoints, and use the obtained estimates to analyse the Laplacian heat kernel's behaviour for small times.

#### **Heat Kernel for the Dirichlet Laplacian on an Interval**

Consider the operator *L*D*(*−*a, a).* We are interested in the corresponding heat kernel. To obtain an explicit formula we use the eigenfunction expansion for the Dirichlet Laplacian. The normalised eigenfunctions and eigenvalues are

$$
\psi_n = \frac{1}{\sqrt{a}}\sin\left(\frac{x+a}{2a}\pi n\right), \quad \lambda_n = \left(\frac{\pi}{2a}\right)^2 n^2.
$$

The heat kernel is given by (15.5)

$$H\_{\left[-a,a\right]}(t,x,y) = \sum\_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t} \frac{1}{a} \sin\left(\frac{x+a}{2a}\pi n\right) \sin\left(\frac{y+a}{2a}\pi n\right). \tag{15.6}$$

The kernel is a positive continuous function on *(*0*,*∞*)* × [−*a, a*] × [−*a, a*]*.*

In what follows we shall need an estimate for $\int_{-a}^{a} H_{[-a,a]}(t,0,x)\,dx$.

**Lemma 15.2** *The heat kernel H*[−*a,a*]*(t,* 0*, x) for the Dirichlet Laplacian on*  [−*a, a*] *satisfies the integral estimate*<sup>1</sup>

$$1 > \int\_{-a}^{a} H\_{[-a,a]}(t,0,x)dx \ge 1 - \frac{4a}{\sqrt{\pi t}}e^{-a^2/(4t)}.\tag{15.7}$$

*Proof* We start by calculating explicitly

$$H_{[-a,a]}(t,0,x) = \sum_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t}\frac{1}{a}\sin\frac{\pi n}{2}\sin\frac{x+a}{2a}\pi n$$

and integrating

$$\begin{split} \int_{-a}^{a} H_{[-a,a]}(t,0,x)dx &= \sum_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t}\frac{1}{a}\sin\frac{\pi n}{2}\int_{-a}^{a}\sin\frac{x+a}{2a}\pi n\, dx\\ &= \sum_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t}\frac{1}{a}\sin\frac{\pi n}{2}\,\frac{2a}{\pi n}\left(1 - \cos\pi n\right)\\ &= \sum_{m=0}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 (2m+1)^2 t}\frac{4}{\pi(2m+1)}\sin\frac{\pi(2m+1)}{2}\\ &= \sum_{m=0}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 (2m+1)^2 t}\frac{4}{\pi(2m+1)}(-1)^m. \end{split}$$

All formulas depend on $t/a^2$ only, hence we can put $a = 1$ for a while and substitute $t$ with $t/a^2$ at the final stage. Using this convention the formula for the integral takes the form

$$\int_{-1}^{1} H_{[-1,1]}(t,0,x)dx = \sum_{m=-\infty}^{\infty} e^{-\pi^2(m+1/2)^2 t}\,\frac{\sin\pi(m+1/2)}{\pi(m+1/2)}. \tag{15.8}$$

<sup>1</sup> The lower estimate used in [155] is: $\int_{-a}^{a} H_{[-a,a]}(t,0,x)dx \ge 1 - 4e^{-a^2/(8t)}$.

To calculate the series we are going to use Poisson summation formula

$$\sum\_{n=-\infty}^{\infty} f(n) = \sum\_{n=-\infty}^{\infty} \hat{f}(n),\tag{15.9}$$

where *f*ˆ is the Fourier transform of *f.* Consider first an auxiliary function

$$g(x) := e^{-\pi^2 x^2 t}\,\frac{\sin\pi x}{\pi x}.$$

To calculate its Fourier transform $\hat{g}(p)$ we note that $g$ is a product of two functions,

$$e^{-\pi^2 x^2 t} \quad \text{and} \quad \frac{\sin\pi x}{\pi x},$$

with explicit formulas for their Fourier transforms

$$\int_{-\infty}^{\infty} e^{-\pi^2 x^2 t} e^{-2\pi i p x}dx = e^{-p^2/t}\int_{-\infty}^{\infty} e^{-(\pi x\sqrt{t} + ip/\sqrt{t})^2}dx = \frac{1}{\sqrt{\pi t}}e^{-p^2/t};$$

$$\int_{-\infty}^{\infty} \frac{\sin\pi x}{\pi x} e^{-2\pi i p x}dx = \begin{cases} 1, & |p| < 1/2, \\ 0, & \text{otherwise.} \end{cases}$$

Taking convolution we get

$$
\hat{g}(p) = \frac{1}{\sqrt{\pi t}}\int_{p-1/2}^{p+1/2} e^{-s^2/t}\,ds.
$$

To calculate the series (15.8) consider the function

$$f(\mathbf{x}) = \mathbf{g}(\mathbf{x} + 1/2)$$

implying

$$
\hat{f}(p) = e^{\pi i p} \hat{\mathfrak{g}}(p),
$$

and modify Poisson summation formula (15.9) as

$$\sum_{m=-\infty}^{\infty} g(m+1/2) = \sum_{m=-\infty}^{\infty} f(m) = \sum_{m=-\infty}^{\infty} \hat{f}(m) = \sum_{m=-\infty}^{\infty} \hat{g}(m)e^{\pi i m} = \sum_{m=-\infty}^{\infty} \hat{g}(m)(-1)^m.$$

Then the integral of the heat kernel is given by

$$\begin{split} \int_{-1}^{1} H_{[-1,1]}(t,0,x)dx &= \sum_{m=-\infty}^{\infty} \hat{g}(m)(-1)^m\\ &= \frac{2}{\sqrt{\pi t}}\left(\int_{0}^{1/2} e^{-s^2/t}ds - \int_{1/2}^{3/2} e^{-s^2/t}ds + \int_{3/2}^{5/2} e^{-s^2/t}ds - \dots\right)\\ &= \frac{2}{\sqrt{\pi}}\left(\int_{0}^{1/(2\sqrt{t})} e^{-s^2}ds - \int_{1/(2\sqrt{t})}^{3/(2\sqrt{t})} e^{-s^2}ds + \int_{3/(2\sqrt{t})}^{5/(2\sqrt{t})} e^{-s^2}ds - \dots\right). \end{split}$$

Using the Gaussian integral $1 = \frac{2}{\sqrt{\pi}}\int_0^{\infty} e^{-s^2}ds$ we obtain

$$1 - \int_{-1}^{1} H_{[-1,1]}(t,0,x)dx = \frac{4}{\sqrt{\pi}}\left(\int_{1/(2\sqrt{t})}^{3/(2\sqrt{t})} e^{-s^2}ds + \int_{5/(2\sqrt{t})}^{7/(2\sqrt{t})} e^{-s^2}ds + \int_{9/(2\sqrt{t})}^{11/(2\sqrt{t})} e^{-s^2}ds + \dots\right).$$

We are getting immediately the upper estimate, since every term in the series on the right hand side is positive:

$$1 - \int\_{-1}^{1} H\_{[-1,1]}(t,0,x)dx > 0 \quad \Rightarrow \quad \int\_{-1}^{1} H\_{[-1,1]}(t,0,x)dx < 1.$$

This estimate can also be obtained by noting that

$$H_{[-a,a]}(t,x,y) \le H_{(-\infty,\infty)}(t,x,y) = \frac{1}{\sqrt{4\pi t}}e^{-|x-y|^2/(4t)},$$

since the heat kernel is monotone with respect to the domain.

The lower estimate can be obtained with different precisions, for example we may use just the first integral in the series

$$1 - \int_{-1}^{1} H_{[-1,1]}(t,0,x)dx \le \frac{4}{\sqrt{\pi}}\int_{1/(2\sqrt{t})}^{3/(2\sqrt{t})} e^{-s^2}ds < \frac{4}{\sqrt{\pi t}}e^{-1/(4t)}.$$
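Estimate (15.7) is easy to test numerically for $a = 1$ by summing the alternating series for the integral of the heat kernel obtained in the proof (a sketch for illustration only; the function name is ours):

```python
import math

def heat_integral(t, terms=200):
    # int_{-1}^{1} H_[-1,1](t, 0, x) dx via the alternating series from the proof (a = 1)
    return sum(math.exp(-(math.pi / 2) ** 2 * (2 * m + 1) ** 2 * t)
               * 4.0 / (math.pi * (2 * m + 1)) * (-1) ** m
               for m in range(terms))

for t in (0.05, 0.1, 0.5, 1.0):
    I = heat_integral(t)
    lower = 1.0 - 4.0 / math.sqrt(math.pi * t) * math.exp(-1.0 / (4.0 * t))
    assert lower <= I < 1.0        # two-sided estimate (15.7) with a = 1
```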

Lemma 15.2 will allow us to show an explicit two-sided estimate for *H*[−1*,*1]*(t,* 0*,* 0*).* A series representation for this function follows directly from (15.6)

$$\begin{split} H_{[-a,a]}(t,0,x) &= \sum_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t}\frac{1}{a}\sin\frac{\pi n}{2}\sin\frac{x+a}{2a}\pi n\\ &= \sum_{n=1}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 n^2 t}\frac{1}{2a}\left(\cos\frac{\pi n x}{2a} - \cos\left(\frac{\pi n x}{2a} + \pi n\right)\right)\\ &= \sum_{m=0}^{\infty} e^{-\left(\frac{\pi}{2a}\right)^2 (2m+1)^2 t}\frac{1}{a}\cos\frac{\pi(2m+1)x}{2a}, \end{split}$$

and in particular

$$H_{[-1,1]}(t,0,0) = \sum_{m=0}^{\infty} e^{-\pi^2(m+1/2)^2 t} = \frac{1}{2}\vartheta_2\left(0, e^{-\pi^2 t}\right),$$

where $\vartheta_2$ is the elliptic theta function.<sup>2</sup> It does not help to take the Fourier transform and use the Poisson summation formula here, since the Fourier transform of a Gaussian kernel is again a Gaussian kernel.

**Lemma 15.3** *The heat kernel H*[−*a,a*]*(t,* 0*, x) for the Dirichlet Laplacian on*  [−*a, a*] *satisfies the estimate*<sup>3</sup>

$$\frac{1}{\sqrt{4\pi t}} \ge H\_{[-a,a]}(t,0,0) \ge \frac{1}{\sqrt{4\pi t}} - \frac{8a}{\pi t} e^{-a^2/(2t)}.\tag{15.10}$$

*Proof* The estimate is tight for small *t*, which can be illustrated by Fig. 15.1.

To prove the estimate we again use the approach developed in [155]. First of all we need the following identity for the heat kernel, which can easily be proven for any compact domain with the eigenfunctions *ψn* of the Dirichlet Laplacian:

$$H(2t,0,0) = \int\_{\Omega} H^2(t,0,x)dx,\tag{15.11}$$

where the integral is taken over the domain. We use the standard formula (15.5) for the heat kernel implying

$$\int_{\Omega} H^2(t,0,y)dy = \int_{\Omega} \sum_{n=1}^{\infty} e^{-\lambda_n t}\psi_n(0)\psi_n(y)\sum_{m=1}^{\infty} e^{-\lambda_m t}\psi_m(0)\psi_m(y)\,dy$$

<sup>3</sup> The lower estimate proven in [155] reads as $H_{[-1,1]}(t,0,0) \ge \frac{1}{\sqrt{4\pi t}}\left(1 - 15e^{-1/(4t)}\right)$.

<sup>2</sup> This special function was introduced in order to study solutions to the heat equation on a finite interval.

$$=\sum\_{n,m=1}^{\infty} e^{-\lambda\_n t} \psi\_n(0) e^{-\lambda\_m t} \psi\_m(0) \delta\_{nm}$$

$$=\sum\_{m=1}^{\infty} e^{-2\lambda\_m t} \left(\psi\_m(0)\right)^2 = H(2t,0,0).$$

Consider the following two positive functions

$$f(x) = \begin{cases} H_{[-a,a]}(t,0,x), & \text{if } |x| \le a, \\ 0, & \text{otherwise;} \end{cases}$$

$$g(\mathbf{x}) = H\_{(-\infty,\infty)}(t,0,\mathbf{x}) = \frac{1}{\sqrt{4\pi t}} e^{-\mathbf{x}^2/(4t)},$$

so that

$$0 \le f(\mathbf{x}) \le g(\mathbf{x}) \le \frac{1}{\sqrt{4\pi t}}$$

for all $x \in \mathbb{R}$ and $\int_{\mathbb{R}} g(x)dx = 1$. Using formula (15.11) we get

$$\begin{split} 0 &\le \frac{1}{\sqrt{8\pi t}} - H_{[-a,a]}(2t,0,0)\\ &= H_{(-\infty,\infty)}(2t,0,0) - H_{[-a,a]}(2t,0,0)\\ &= \int_{\mathbb{R}}\left(g^2(x) - f^2(x)\right)dx\\ &\le \int_{\mathbb{R}}\left(g(x) - f(x)\right)2g(x)\,dx\\ &\le \frac{1}{\sqrt{\pi t}}\int_{\mathbb{R}}\left(g(x) - f(x)\right)dx\\ &= \frac{1}{\sqrt{\pi t}}\left(1 - \int_{-a}^{a} f(x)dx\right)\\ &\le \frac{4a}{\pi t}e^{-a^2/(4t)}, \end{split}$$

where we used (15.7) on the last step. It remains to make a substitution of 2*t* with *t* leading to the second estimate in (15.10).
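The identity (15.11) used in the proof can be confirmed numerically for the Dirichlet heat kernel on $[-1,1]$, combining the eigenfunction series (15.6) with Simpson quadrature (our illustrative sketch; the function names are ours):

```python
import math

def H(t, x, y, terms=100):
    # Dirichlet heat kernel on [-1, 1] from the eigenfunction series (15.6), a = 1
    return sum(math.exp(-(math.pi / 2) ** 2 * n ** 2 * t)
               * math.sin((x + 1) / 2 * math.pi * n) * math.sin((y + 1) / 2 * math.pi * n)
               for n in range(1, terms))

def simpson(f, a, b, N=1000):
    # composite Simpson rule, N even
    h = (b - a) / N
    return (f(a) + f(b)
            + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, N))) * h / 3

t = 0.2
lhs = H(2 * t, 0.0, 0.0)
rhs = simpson(lambda x: H(t, 0.0, x) ** 2, -1.0, 1.0)
assert abs(lhs - rhs) < 1e-8    # identity (15.11) on Omega = [-1, 1]
```

The agreement rests on the orthonormality of the eigenfunctions, exactly as in the computation above.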

Lemma 15.3 implies that the heat kernel on the interval has a singularity like $\frac{1}{\sqrt{4\pi t}}$. Moreover, the two-sided estimate (15.10) allows one to conclude that the following limit exists, and to calculate it:

$$\lim_{t \to 0} \sqrt{4\pi t}\ H_{[-a,a]}(t, 0, 0) = 1,\tag{15.12}$$

since $\lim_{t \to 0}\frac{16a}{\sqrt{\pi t}}e^{-a^2/(2t)} = 0$.
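Both the two-sided estimate (15.10) and the limit (15.12) can be confirmed numerically for $a = 1$ (our illustrative sketch; the function name is ours):

```python
import math

def H00(t, terms=4000):
    # H_[-1,1](t, 0, 0) as the theta-type series
    return sum(math.exp(-math.pi ** 2 * (m + 0.5) ** 2 * t) for m in range(terms))

# two-sided estimate (15.10) with a = 1
for t in (0.05, 0.2, 1.0):
    upper = 1.0 / math.sqrt(4 * math.pi * t)
    lower = upper - 8.0 / (math.pi * t) * math.exp(-1.0 / (2.0 * t))
    assert lower <= H00(t) <= upper

# the limit (15.12): sqrt(4 pi t) H(t, 0, 0) -> 1 as t -> 0
t = 1e-3
assert abs(math.sqrt(4 * math.pi * t) * H00(t) - 1.0) < 1e-9
```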

#### **Heat Kernel for the Standard Laplacian on the Graph**

We conclude this section by proving that the limit (15.12) holds for the standard Laplacian on a metric graph for almost all points. More precisely, the set of exceptional points, where the limit may be violated, coincides with the set of vertices.

**Lemma 15.4** *Let* [−*a, a*] *be an interval on one of the edges of a finite compact metric graph* Γ*. Then the heat kernel H*[−*a,a*]*(t, x, y) for the Dirichlet Laplacian on* [−*a, a*] *and the heat kernel H*Γ*(t, x, y) for the standard Laplacian on* Γ*, restricted to the interval* [−*a, a*]*, satisfy the estimate:*

$$H\_{[-a,a]}(t,x,y) \le H\_{\Gamma}(t,x,y), \quad x,y \in [-a,a].\tag{15.13}$$

*Proof* Consider the graph Γ ∪−*a,a* Γ obtained from Γ ∪ Γ by joining pairwise the endpoints of the intervals [−*a, a*] on the two copies (see Fig. 15.2). We impose standard vertex conditions. The graph is invariant under the symmetry transformation *τ* mapping the points *x* and *x*′ occupying the same position on the two copies of Γ onto each other:

$$\tau x = x', \quad \tau x' = x.$$

The two copies of Γ, seen as subsets of Γ ∪−*a,a* Γ, will be denoted by Γ and Γ′*.*

Consider the corresponding heat kernel *H*Γ∪−*a,a*Γ*(t, x, y),* which is of course positive, since the composite graph is a metric graph with standard vertex conditions at the vertices. Hence the solution to the heat equation on Γ ∪−*a,a* Γ is given by

$$\begin{aligned} u(t, x) &= \int\_{\Gamma \cup\_{-a,a} \Gamma} H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, y) u\_0(y) dy \\ &= \int\_{\Gamma} H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, y) u\_0(y) dy + \int\_{\Gamma'} H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, y') u\_0(y') dy', \end{aligned}$$

where *u*<sup>0</sup> is the initial profile (15.3).

The following identities hold for the heat kernels:

$$H\_{\Gamma}(t, x, y) = H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, y) + H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, \tau y), \quad x, y \in \Gamma;$$

$$H\_{[-a,a]}(t, x, y) = H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, y) - H\_{\Gamma \cup\_{-a,a} \Gamma}(t, x, \tau y), \quad x, y \in [-a,a]. \tag{15.14}$$

To see this, consider first the heat flow with an even initial profile *u*0*(x)* = *u*0*(τ x).* The corresponding solution remains even and therefore, restricted to Γ, coincides with the heat flow on the original graph Γ with the initial data *u*0*(x), x* ∈ Γ*.* To get the heat kernel associated with the Dirichlet operator on [−*a, a*], consider the heat flow with an odd initial profile *u*0*(x)* = −*u*0*(τ x).* The heat flow remains odd and therefore satisfies Dirichlet conditions at the contact vertices between Γ and Γ′*.*

Taking into account that all kernels appearing in (15.14) are positive we conclude that

$$H\_{\Gamma}(t, x, y) - H\_{[-a, a]}(t, x, y) = 2H\_{\Gamma \cup\_{-a, a} \Gamma}(t, x, \tau y) \ge 0, \quad x, y \in [-a, a],$$

and inequality (15.13) is proven.
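In the simplest case where Γ is itself the interval [−1*,* 1] with Neumann endpoints, the doubled graph is a circle of length 4, and all three kernels in (15.14) have explicit image representations, so the identities can be verified directly. A sketch (Python; the interval, time, and truncation length are ad hoc choices):

```python
import math

def gauss(t, x):
    # free heat kernel on the line
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def h_circle(t, x, y, circumference=4.0, n=20):
    # heat kernel on a circle (method of images)
    return sum(gauss(t, x - y + circumference * m) for m in range(-n, n + 1))

def h_neumann(t, x, y, n=20):
    # Neumann kernel on [-1, 1]: reflections about +-1 generate period 4
    return sum(gauss(t, x - y + 4 * m) + gauss(t, x + y + 2 + 4 * m)
               for m in range(-n, n + 1))

def h_dirichlet(t, x, y, n=20):
    return sum(gauss(t, x - y + 4 * m) - gauss(t, x + y + 2 + 4 * m)
               for m in range(-n, n + 1))

# tau maps the point y of the first copy to the mirror point 2 - y on the circle
t, x, y = 0.3, 0.2, -0.5
lhs_sum = h_circle(t, x, y) + h_circle(t, x, 2 - y)    # should equal h_neumann
lhs_diff = h_circle(t, x, y) - h_circle(t, x, 2 - y)   # should equal h_dirichlet
print(lhs_sum - h_neumann(t, x, y), lhs_diff - h_dirichlet(t, x, y))
```

Both differences vanish up to truncation error, and the Dirichlet kernel is dominated by the Neumann kernel, in accordance with (15.13).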

Our next step is to prove an upper estimate for the Laplacian heat kernel. We show first that the order of the singularity cannot be higher than that of the heat kernel for the Laplacian on the line.

**Lemma 15.5** *The heat kernel H*Γ*(t, x, y) for the standard Laplacian on a finite compact metric graph* Γ *satisfies the upper estimate*

$$H\_{\Gamma}(t, x, y) \le C \frac{1}{\sqrt{t}}, \quad 0 < t < 1,\tag{15.15}$$

*where C is a certain constant depending on the graph.* 

*Proof* We use the spectral decomposition for the standard Laplacian on Γ and the explicit representation (15.5) for the heat kernel, leading to

$$H\_{\Gamma}(t, x, y) \le \sum\_{n=1}^{\infty} e^{-\lambda\_n t} \max\_{x \in \Gamma} \left| \psi\_n(x) \right|^2.$$

We use the fact that the normalised eigenfunctions of the standard Laplacian are uniformly bounded (11.37)

$$|\psi\_n(\mathbf{x})| \le c,$$

where *c* is independent of *x* and *n*, and the lower estimate for the eigenvalues

$$\left(\frac{\pi}{\mathcal{L}}\right)^2 \left(n - M\right)^2 \le \lambda\_n$$

to get

$$H\_{\Gamma}(t, x, x) \le c^2 \sum\_{n=1}^{\infty} e^{-\left(\frac{\pi}{\mathcal{L}}\right)^2 (n - M)^2 t}$$

$$ < c^2 \sum\_{n=-\infty}^{\infty} e^{-\left(\frac{\pi}{\mathcal{L}}\right)^2 n^2 t}$$


$$\begin{aligned} &< 2c^2 \sum\_{n=0}^{\infty} e^{-\left(\frac{\pi}{\mathcal{L}}\right)^2 n^2 t} \\ &< 2c^2 \Big( 1 + \underbrace{\sum\_{n=0}^{\infty} e^{-\left(\frac{\pi}{\mathcal{L}}\right)^2 (n+1/2)^2 t}}\_{= \mathcal{L} H\_{[-\mathcal{L}, \mathcal{L}]}(t, 0, 0)} \Big) \\ &< 2c^2 \Big( 1 + \mathcal{L} \frac{1}{\sqrt{4\pi t}} \Big) \\ &\le C \frac{1}{\sqrt{t}}, \end{aligned}$$

where the constant *C* can be chosen equal to

$$C = 2c^2 \left( 1 + \frac{\mathcal{L}}{\sqrt{4\pi}} \right).$$
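The chain of inequalities in the proof reduces to an estimate for a theta-type sum, which can be spot-checked numerically. A sketch (Python; the sampled values of 𝓛 and *t* are arbitrary):

```python
import math

def theta_sum(L, t, n_max=5000):
    # sum over n in Z of exp(-(pi/L)^2 * n^2 * t), truncated
    return 1 + 2 * sum(math.exp(-((math.pi / L) ** 2) * n * n * t)
                       for n in range(1, n_max))

# verify: sum_{n in Z} exp(-(pi/L)^2 n^2 t)  <  2 (1 + L / sqrt(4 pi t))
checks = []
for L in (1.0, 3.0, 10.0):
    for t in (0.001, 0.01, 0.1, 1.0):
        bound = 2 * (1 + L / math.sqrt(4 * math.pi * t))
        checks.append(theta_sum(L, t) < bound)
print(all(checks))
```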

Note that the constant *C* appearing in the proof of the lemma is far from optimal, but it was not our goal to obtain the best constant. On the other hand, it is important to understand that the constant is in general different from 1/√(4*π*) (appearing in the estimate for the heat kernel of the Dirichlet Laplacian on an interval). For example, the heat kernel for the Neumann Laplacian on the interval [−1*,* 1] is given by the infinite series obtained by reflecting the point *y* with respect to the boundary points −1 and 1:

$$\begin{aligned} H\_{[-1,1]}^N(t, x, y) &= \frac{1}{\sqrt{4\pi t}} e^{-|x - y|^2/(4t)} + \frac{1}{\sqrt{4\pi t}} e^{-|x + y + 2|^2/(4t)} \\ &+ \frac{1}{\sqrt{4\pi t}} e^{-|x + y - 2|^2/(4t)} + \dots, \end{aligned}$$

and is not bounded above by the free heat kernel.

The obtained upper estimate is enough to prove that √*t* *H*Γ*(t, x, x)* tends to 1/√(4*π*) for almost every *x.*

**Lemma 15.6** *Let* Γ *be a finite compact metric graph with the vertex set* **V** = ∪<sub>*m*=1</sub><sup>*M*</sup> *V*<sup>*m*</sup>*. Then for any x* ∈ Γ \ **V** *the following limit holds*

$$\lim\_{t \to 0} \sqrt{t}\ H\_{\Gamma}(t, x, x) = \frac{1}{\sqrt{4\pi}}, \quad x \in \Gamma \setminus \mathbf{V}. \tag{15.16}$$

*Proof (Following [155])* Consider any point *x* ∈ Γ \ **V** and let *a* be half the distance from *x* to the nearest vertex. Let us introduce the positive functions

$$f(y) = \begin{cases} H\_{[x-a,x+a]}(t,x,y), & \mathrm{dist}(x,y) \le a, \\ 0, & \text{otherwise}; \end{cases}$$

$$g(y) = H\_{\Gamma}(t,x,y),$$

where we consider the interval [*x* − *a, x* + *a*] as a subset of Γ*.* Estimate (15.13) implies that

$$f(y) \le g(y).$$

Moreover, conservation of heat implies

$$\int\_{\Gamma} g(y)dy = \int\_{\Gamma} H\_{\Gamma}(t, x, y)dy = 1.$$

Let *u(t, x)* be a solution to the heat equation on Γ; then it holds that

$$\begin{aligned} \frac{d}{dt} \int\_{\Gamma} u(t,x) dx &= \int\_{\Gamma} \frac{\partial}{\partial t} u(t,x) dx = \int\_{\Gamma} u\_{xx}(t,x) dx \\ &= -\sum\_{m=1}^{M} \underbrace{\left(\sum\_{x\_j \in V^m} \partial\_n u(t,x\_j)\right)}\_{=0} = 0, \end{aligned}$$

due to standard conditions at the vertices. More precisely we use just Kirchhoff conditions. Taking into account (15.7) we obtain

$$\begin{aligned} \int\_{\Gamma} (g(y) - f(y)) dy &= 1 - \int\_{x-a}^{x+a} H\_{[x-a, x+a]}(t, x, y) dy \\ &\leq \frac{4a}{\sqrt{\pi t}} e^{-a^2/(4t)}. \end{aligned}$$

Moreover with the help of the identity (15.11) we get

$$\begin{aligned} 0 \le H\_{\Gamma}(2t, x, x) - H\_{[x-a, x+a]}(2t, x, x) &= \int\_{\Gamma} \left(g^2(y) - f^2(y)\right) dy \\ &\le \int\_{\Gamma} (g(y) - f(y)) 2g(y) dy \\ &\le 2 \frac{C}{\sqrt{t}} \frac{4a}{\sqrt{\pi t}} e^{-a^2/(4t)}, \end{aligned}$$

where we used (15.15) for *x* = *y* and the just proven integral inequality. Taking into account that (1/√*t*) *e*<sup>−*a*²/(4*t*)</sup> → 0 as *t* → 0*,* we calculate the limit using (15.12).

The statement of the lemma holds for almost every *x*, since the vertices have measure zero in the metric graph Γ*.* It is very important to notice that the limit does not depend on the point *x* ∈ Γ \ **V***,* although the rate of convergence may be different.

## *15.1.3 On Schrödinger Semigroups*

In what follows we shall need formula (15.23) below, providing the first order correction to the trace of the heat semigroup in terms of the perturbation *q.* Although this is a standard fact, we indicate its proof here.

We start by differentiating

$$\frac{d}{dt}\left(e^{-L\_q t}e^{Lt}\right) = e^{-L\_q t}(-L\_q)e^{Lt} + e^{-L\_q t}Le^{Lt} = -e^{-L\_q t}qe^{Lt}$$

$$\Rightarrow e^{-L\_q t}e^{Lt} - I = -\int\_0^t e^{-L\_q s}qe^{Ls}ds,$$

implying

$$\begin{aligned} e^{-L\_q t} &= e^{-Lt} - \int\_0^t e^{-L\_q s} q e^{L(s-t)} ds \\ &= e^{-Lt} - \int\_0^t e^{-L\_q(t-s)} q e^{-Ls} ds, \end{aligned}$$

where we changed variables *s* → *t* − *s*.

In a similar way we get

$$e^{-L\_q t} = e^{-Lt} - \int\_0^t e^{-L(t-s)} q e^{-L\_q s} ds.$$
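The Duhamel-type formula just derived is an operator identity, so it can be tested on a finite-dimensional stand-in, with a discrete Laplacian matrix in place of *L* and a diagonal matrix in place of *q* (an illustration only; the matrices and the quadrature scheme are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
# discrete analogue of the Laplacian (symmetric, non-negative) and of the potential
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
q = np.diag(rng.uniform(-1.0, 1.0, N))
Lq = L + q

def expm_sym(A, t):
    # exp(-A t) for a symmetric matrix A via its eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-w * t)) @ V.T

# check: exp(-Lq t) = exp(-L t) - int_0^t exp(-L(t-s)) q exp(-Lq s) ds
t, M = 0.7, 400
h = t / M
s_vals = [h * j for j in range(M + 1)]
terms = [expm_sym(L, t - s) @ q @ expm_sym(Lq, s) for s in s_vals]
integral = h * (0.5 * terms[0] + sum(terms[1:-1]) + 0.5 * terms[-1])  # trapezoid
err = np.abs(expm_sym(Lq, t) - (expm_sym(L, t) - integral)).max()
print(err)  # small, limited only by the quadrature
```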

Iterating the equation once we get

$$e^{-L\_q t} = e^{-Lt} - \underbrace{\int\_0^t e^{-L(t-s)} q e^{-Ls} ds}\_{=: A\_1(t)} + \underbrace{\int\_0^t e^{-L(t-s)} q \int\_0^s e^{-L\_q(s-u)} q e^{-Lu} du\, ds}\_{=: B(t)}.\tag{15.17}$$

The operator *A*1*(t)* defined above can be considered as an integral operator with the kernel

$$L\_1(t, x, y) = \int\_{s=0}^{t} \int\_{z \in \Gamma} H\_{\Gamma}(t - s, x, z) q(z) H\_{\Gamma}(s, z, y) dz\, ds.$$

For essentially bounded potentials, taking into account positivity of the free heat kernel *H*Γ, we get

$$\begin{split} |L\_1(t, x, y)| &\leq \|q\|\_{\infty} \int\_{s=0}^{t} \int\_{z\in\Gamma} H\_{\Gamma}(t-s, x, z) H\_{\Gamma}(s, z, y) dz\, ds \\ &= \|q\|\_{\infty} \int\_{s=0}^{t} H\_{\Gamma}(t, x, y) ds \\ &\leq \|q\|\_{\infty} C t^{1/2}, \end{split} \tag{15.18}$$

where we used the estimate (15.15) and the fact that *H*Γ*(t, x, y)* is the kernel of a semigroup:

$$\int\_{z\in\Gamma} H\_{\Gamma}(t-s,x,z)H\_{\Gamma}(s,z,y)dz = H\_{\Gamma}(t,x,y).$$

Note that (15.18) is valid for *t <* 1 and almost everywhere with respect to *x* and *y,* more precisely, for all *x* and *y* not belonging to a vertex.

Continuing iterations one obtains the following formal series

$$\begin{aligned} e^{-L\_q t} &= e^{-Lt} - \int\_0^t e^{-L(t-s)} q e^{-Ls} ds \\ &+ \int\_0^t e^{-L(t-s)} q \int\_0^s e^{-L(s-u)} q e^{-Lu} du\, ds \\ &+ \dots \\ &+ (-1)^m \int\_0^t ds\_1\, e^{-L(t-s\_1)} q \int\_0^{s\_1} ds\_2\, e^{-L(s\_1 - s\_2)} q \dots \\ &\underbrace{\dots q \int\_0^{s\_{m-1}} ds\_m\, e^{-L(s\_{m-1} - s\_m)} q e^{-Ls\_m}}\_{=: A\_m(t)} \\ &+ \dots \end{aligned} \tag{15.19}$$

To prove convergence of this formal series one may introduce the integral kernel *Lm(t, x, y)* associated with *Am(t)* and use estimates similar to the ones we already carried out in order to get (15.18). Indeed, using positivity of the heat kernel, the semigroup property and essential boundedness of the potential, we get

$$\begin{aligned} |L\_m(t, \mathbf{x}, \mathbf{y})| &\leq \|q\|\_\infty^m \int\_0^t ds\_1 \int\_0^{s\_1} ds\_2 \dots \int\_0^{s\_{m-1}} ds\_m H\_\Gamma(t, \mathbf{x}, \mathbf{y}) \\ &\leq \|q\|\_\infty^m \frac{t^m}{m!} H\_\Gamma(t, \mathbf{x}, \mathbf{y}) \\ &\leq C \frac{t^{m-1/2}}{m!} \|q\|\_\infty^m. \end{aligned}$$

It follows that the perturbed semigroup can also be considered as an integral operator with a certain kernel *H*Γ*,q(t, x, y), x, y* ∈ Γ \ **V***,* satisfying the uniform estimate

$$H\_{\Gamma,q}(t,x,y) \le Ct^{-1/2} \tag{15.20}$$

with a possibly different constant *C.* We return to formula (15.17) in order to prove that the second integral is of order *t*<sup>3/2</sup> = *t*<sup>2−1/2</sup>*.* Indeed, the kernel *M(t, x, y)* of the operator *B(t)* satisfies

$$\begin{split} &|M(t,x,y)| \\ &= \left| \int\_0^t ds \int\_0^s du \int\_{\Gamma} dw \int\_{\Gamma} dz\, H\_{\Gamma}(t-s,x,z) q(z) H\_{\Gamma,q}(s-u,z,w) q(w) H\_{\Gamma}(u,w,y) \right| \\ &\leq \|q\|\_{\infty}^2 \int\_0^t ds \int\_0^s du \int\_{\Gamma} dw \int\_{\Gamma} dz\, H\_{\Gamma}(t-s,x,z) H\_{\Gamma,q}(s-u,z,w) H\_{\Gamma}(u,w,y) \\ &\leq \|q\|\_{\infty}^2 e^{\|q\|\_{\infty}t} \int\_0^t ds \int\_0^s du \int\_{\Gamma} dw \int\_{\Gamma} dz\, H\_{\Gamma}(t-s,x,z) H\_{\Gamma}(s-u,z,w) H\_{\Gamma}(u,w,y) \\ &= \|q\|\_{\infty}^2 e^{\|q\|\_{\infty}t} \int\_0^t ds \int\_0^s du\, H\_{\Gamma}(t,x,y) \\ &= \|q\|\_{\infty}^2 e^{\|q\|\_{\infty}t} \frac{t^2}{2} H\_{\Gamma}(t,x,y) \\ &\leq C \frac{\|q\|\_{\infty}^2}{2} e^{\|q\|\_{\infty}t} t^{3/2}, \end{split} \tag{15.21}$$

where we used the estimate

$$L - \|q\|\_{\infty} \le L\_q \le L + \|q\|\_{\infty} \implies e^{-\|q\|\_{\infty}t}e^{-Lt} \le e^{-L\_qt} \le e^{\|q\|\_{\infty}t}e^{-Lt},$$

implying in particular that the Schrödinger heat kernel is positive. Now formula (15.17) can be written using integral kernels as follows:

$$\begin{aligned} &H\_{\Gamma,q}(t, x, y) \\ &= H\_{\Gamma}(t, x, y) - \int\_0^t \int\_{\Gamma} H\_{\Gamma}(t - s, x, z) q(z) H\_{\Gamma}(s, z, y) dz\, ds \\ &+ \int\_0^t \int\_0^s \int\_{\Gamma} \int\_{\Gamma} H\_{\Gamma}(t - s, x, z) q(z) H\_{\Gamma,q}(s - u, z, w) q(w) H\_{\Gamma}(u, w, y) dz\, dw\, du\, ds. \end{aligned} \tag{15.22}$$

**Lemma 15.7** *The difference between the traces of the perturbed (Schrödinger) and unperturbed (Laplacian) semigroups satisfies* 

$$\text{tr}\left[e^{-L\_q t}\right] - \text{tr}\left[e^{-Lt}\right] = -t \int\_{\Gamma} H\_{\Gamma}(t, x, x) q(x) dx + \rho(t),\tag{15.23}$$

*where ρ(t)* = O(*t*<sup>3/2</sup>)*.*

*Proof* We put *x* = *y* in formula (15.22) and integrate with respect to *x*

$$\begin{aligned} \text{tr}\left[e^{-L\_q t}\right] &= \text{tr}\left[e^{-Lt}\right] - \int\_0^t \int\_\Gamma \int\_\Gamma H\_\Gamma(t-s,x,z)q(z)H\_\Gamma(s,z,x)dz\, dx\, ds + \rho(t) \\ &= \text{tr}\left[e^{-Lt}\right] - \int\_0^t \int\_\Gamma H\_\Gamma(t,z,z)q(z)dz\, ds + \rho(t) \\ &= \text{tr}\left[e^{-Lt}\right] - t\int\_\Gamma H\_\Gamma(t,x,x)q(x)dx + \rho(t), \end{aligned} \tag{15.24}$$

where we used that *H*Γ is the kernel of a semigroup. Here *ρ(t)* denotes the integral corresponding to the last term in (15.22). Its estimate as O(*t*<sup>3/2</sup>) follows directly from (15.21).
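A finite-dimensional analogue of (15.23), with matrices in place of operators, makes the structure of the expansion visible; for matrices the remainder is in fact O(*t*²), consistent with (and stronger than) the O(*t*<sup>3/2</sup>) bound above. A sketch with an ad hoc discrete Laplacian and potential:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
q = rng.uniform(-1.0, 1.0, N)          # "potential", acting by multiplication
Lq = L + np.diag(q)

def expm_sym(A, t):
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-w * t)) @ V.T

def remainder(t):
    # rho(t) = tr exp(-Lq t) - tr exp(-L t) + t * sum_x H(t,x,x) q(x)
    H = expm_sym(L, t)
    return (np.trace(expm_sym(Lq, t)) - np.trace(H)
            + t * float(np.sum(np.diag(H) * q)))

r_big, r_small = remainder(0.1), remainder(0.01)
print(r_big, r_small)  # both positive, shrinking roughly like t^2
```

The leading term of the remainder here is (*t*²/2) Σ *q*(*x*)², which explains both its sign and its decay rate.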

We have prepared all the tools needed to prove the main result of this section: a direct generalisation of Ambartsumian's theorem to the case where the fixed metric graph is not necessarily an interval.

## *15.1.4 A Theorem by Davies*

We start by proving a direct analog of the original Ambartsumian theorem.

**Theorem 15.8 (Davies [155])** *Let* Γ *be a finite compact metric graph. Then the standard Schrödinger operator is isospectral to the standard Laplacian on the same metric graph,*

$$
\lambda\_n(L\_q^{\rm st}(\Gamma)) = \lambda\_n(L^{\rm st}(\Gamma)), \tag{15.25}
$$

*for all n* = 1*,* 2*,..., if and only if q(x)* ≡ 0 *almost everywhere.* 

*Proof* The traces of the Schrödinger and Laplacian semigroups coincide, since the operators are isospectral. Then formula (15.23) reads as follows:

$$0 = -t \int\_{\Gamma} H\_{\Gamma}(t, x, x) q(x) dx + \rho(t).$$

We divide by √*t* and take the limit as *t* → 0. Formula (15.15) implies that √*t* *H*Γ*(t, x, x)* is uniformly bounded; moreover, for almost every *x* it holds that (see (15.16))

$$\lim\_{t \to 0} \sqrt{t}\ H\_{\Gamma}(t, x, x) = \frac{1}{\sqrt{4\pi}},$$

implying that

$$\int\_{\Gamma} q(x)dx = 0,$$

since (1/√*t*) *ρ(t)* = O(*t*)*.* It follows from Theorem 15.1 that *q* is zero almost everywhere, since *λ*₁(*L*<sup>st</sup>(Γ)) = 0 implies *λ*₁(*L*<sub>*q*</sub><sup>st</sup>(Γ)) = 0.

To prove the theorem it was important that the limit (15.16) does not depend on the point *x* ∈ Γ \ **V***.* This theorem can be strengthened as follows.

**Theorem 15.9 (Davies [155])** *Let be a finite compact metric graph. Assume that the eigenvalues of the standard Schrödinger and Laplace operators satisfy* 

*1.* *λ*₁(*L*<sub>*q*</sub><sup>st</sup>(Γ)) ≥ 0;
*2.* lim sup<sub>*n*→∞</sub> (*λ*<sub>*n*</sub>(*L*<sub>*q*</sub><sup>st</sup>(Γ)) − *λ*<sub>*n*</sub>(*L*₀<sup>st</sup>(Γ))) ≤ 0*,*

*then q(x) is equal to zero almost everywhere.* 

*Proof* To prove the theorem using the same method it is enough to show that

$$\limsup\_{t \to 0} \frac{1}{\sqrt{t}} \left( \text{tr} \left[ e^{-Lt} \right] - \text{tr} \left[ e^{-L\_q t} \right] \right) \le 0,\tag{15.26}$$

or equivalently

$$\limsup\_{t \to 0} \frac{1}{\sqrt{t}} \sum\_{n=1}^{\infty} \left( e^{-\lambda\_n(L\_0^{\text{st}})t} - e^{-\lambda\_n(L\_q^{\text{st}})t} \right) \le 0.$$

Condition 2 implies that given *ε* > 0 there exists *N* = *N(ε)* such that *λ*<sub>*n*</sub>(*L*<sub>*q*</sub>) − *λ*<sub>*n*</sub>(*L*) ≤ *ε* for all *n* ≥ *N.* We use the estimates

$$\begin{split} \frac{1}{\sqrt{t}} \sum\_{n=1}^{N-1} \left( e^{-\lambda\_n(L\_0^{\mathrm{st}})t} - e^{-\lambda\_n(L\_q^{\mathrm{st}})t} \right) &\leq \sqrt{t} \sum\_{n=1}^{N-1} |\lambda\_n(L\_q) - \lambda\_n(L)|; \\ \frac{1}{\sqrt{t}} \sum\_{n=N}^{\infty} \left( e^{-\lambda\_n(L\_0^{\mathrm{st}})t} - e^{-\lambda\_n(L\_q^{\mathrm{st}})t} \right) &\leq \frac{1}{\sqrt{t}} \sum\_{n=N}^{\infty} e^{-\lambda\_n(L)t} (1 - e^{-\epsilon t}) \\ &\leq \frac{\epsilon t}{\sqrt{t}} \sum\_{n=1}^{\infty} e^{-\lambda\_n(L)t} \\ &= \epsilon \sqrt{t}\ \mathrm{tr}\left[ e^{-Lt} \right] \\ &\leq C \mathcal{L} \epsilon, \end{split}$$

for a certain *C >* 0*.* We conclude that

$$\limsup\_{t \to 0} \frac{1}{\sqrt{t}} \sum\_{n=1}^{\infty} \left( e^{-\lambda\_n(L\_0^{\text{st}})t} - e^{-\lambda\_n(L\_q^{\text{st}})t} \right) \le C \mathcal{L} \epsilon,$$

where one may need to adjust the constant *C*. Since *ε* is arbitrary, estimate (15.26) holds.

The proven theorem implies that the zero potential, or more generally any constant potential, possesses unique properties allowing one to single out the spectrum of the Laplacian among the spectra of Schrödinger operators on a metric graph. The reason the spectrum of the Laplacian is so rigid is that it is given by the zeroes of a certain generalised trigonometric polynomial. We have already used this fact when proving Theorem 14.11.

## **15.2 On Asymptotically Isospectral Quantum Graphs**

Our goal in this section is to prove several geometric versions of Ambartsumian's theorem without assuming that the underlying graph is an interval. The main analytic tool will be the theory of almost periodic functions (see for example [92]), but we do not require any knowledge of this wonderful theory from the reader: all results will be proven using well-known facts.

# *15.2.1 On the Zeroes of Generalised Trigonometric Polynomials*

Our analysis is based on the fact that the spectrum of a scaling invariant Laplacian on a finite compact metric graph is given by zeroes of a generalised trigonometric polynomial (see Theorem 6.1). Our first step is to prove that generalised trigonometric polynomials with real *ωj* determine holomorphic almost periodic functions [92] in a strip along the real axis. This is not surprising, since one way to define almost periodic functions is to consider their approximations via generalised trigonometric polynomials.

**Lemma 15.10** *For any generalised trigonometric polynomial* 

$$p(k) = \sum\_{j=1}^{J} p\_j e^{i\omega\_j k},$$

*one may find shifts t (δ)* → ∞*, as δ* → 0*, such that* 

$$|p(k+t(\delta)) - p(k)| \le \delta \tag{15.27}$$

*for all <sup>k</sup>* <sup>∈</sup> <sup>C</sup>*,* <sup>|</sup>Im *<sup>k</sup>*<sup>|</sup> *<sup>&</sup>lt;* <sup>1</sup>*.*

*Proof* Lemma 14.9 implies that for any *ε* > 0 one may choose *t(ε)* such that

$$|e^{i(k+t(\epsilon))\omega\_j} - e^{ik\omega\_j}| < \epsilon$$

for all *ωj* and all *k* with |Im *k*| < 1. Then

$$|p(k+t(\epsilon)) - p(k)| \le \left(\sum\_{j=1}^{J} |p\_j|\right) \epsilon.$$

Choosing *ε(δ)* = *δ* / ∑<sub>*j*=1</sub><sup>*J*</sup> |*pj*|, the corresponding sequence *t(ε(δ))* satisfies the claim of the lemma.
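The shift mechanism of the proof can be illustrated numerically for a toy polynomial with the (ad hoc) rationally independent frequencies 1 and √2, sampling real *k* only; the lemma itself works in a complex strip:

```python
import cmath
import math

freqs = [1.0, math.sqrt(2)]    # rationally independent frequencies
coeffs = [1.0, -0.7]

def p(k):
    return sum(c * cmath.exp(1j * w * k) for c, w in zip(coeffs, freqs))

# search for a large shift along which every exponential nearly returns to 1
best_t, best_eps = 0.0, float("inf")
for m in range(1, 2000):
    t_shift = 2 * math.pi * m
    eps = max(abs(cmath.exp(1j * w * t_shift) - 1) for w in freqs)
    if eps < best_eps:
        best_t, best_eps = t_shift, eps

# |p(k + t) - p(k)| <= (sum |p_j|) * eps, uniformly in real k
bound = sum(abs(c) for c in coeffs) * best_eps
dev = max(abs(p(0.05 * j + best_t) - p(0.05 * j)) for j in range(-200, 201))
print(best_t, best_eps, dev, bound)
```

The continued-fraction convergents of √2 produce shifts along which both exponentials almost return to their values, exactly as Lemma 14.9 guarantees.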

The above lemma can be generalised to a finite set of generalised trigonometric polynomials, since the key point in the proof, Lemma 14.9, holds whenever the set of frequencies *ωj* is finite.

**Lemma 15.11** *For any finite set of generalised trigonometric polynomials* 

$$p^\ell = \sum\_{j=1}^{J\_\ell} p\_j^\ell e^{i\omega\_j^\ell k}, \qquad \ell = 1, 2, \dots, L,$$

*one may find common shifts t (δ)* → ∞*, as δ* → 0*, such that* 

$$|p^\ell(k+t(\delta)) - p^\ell(k)| \le \delta \tag{15.28}$$

*for all <sup>k</sup>* <sup>∈</sup> <sup>C</sup>*,* <sup>|</sup>Im *<sup>k</sup>*<sup>|</sup> *<sup>&</sup>lt;* <sup>1</sup>*.*

We will use the above lemma for two generalised trigonometric polynomials together with their derivatives, which is permitted since the derivative of a generalised trigonometric polynomial is again a generalised trigonometric polynomial. The following theorem is our main analytic tool for this section.

**Theorem 15.12** *Let p and q be generalised trigonometric polynomials* 

$$p(k) = \sum\_{l=1}^{J\_1} p\_l e^{i\omega\_l k}, \quad q(k) = \sum\_{j=1}^{J\_2} q\_j e^{i\nu\_j k} \tag{15.29}$$

*with real zeros kn and ln, respectively. If there exists a subsequence lmn of ln such that*

$$\lim\_{n \to \infty} (k\_n - l\_{m\_n}) = 0,\tag{15.30}$$

*then all the zeros of p are zeros of q with at least the same multiplicity.* 

*Proof* Let *k*0 be any zero of *p*, and denote its order by *m*0. We may find an *ε* > 0 such that *p* has no other zeros in the disc *B*2*ε(k*0*)* of radius 2*ε* centered at *k*0, and such that *p, p′, q,* and *q′* have no zeros on the boundaries of *Bε(k*0*)* and *B*2*ε(k*0*)*. This choice implies that there are common constants 0 < *c* < *C* such that

$$c < |p(k)|, |p'(k)|, |q(k)|, |q'(k)| < C,\tag{15.31}$$

on the boundaries *∂B (k*0*)*, *∂B*<sup>2</sup> *(k*0*)*.

Since there are no other zeroes inside the circles we have

$$\int\_{\partial B\_{\ell}(k\_0)} \frac{p'(k)}{p(k)} dk = \int\_{\partial B\_{2\ell}(k\_0)} \frac{p'(k)}{p(k)} dk = 2\pi i m\_0,\tag{15.32}$$

and

$$\int\_{\partial B\_{2\ell}(k\_0)} \frac{q'(k)}{q(k)} dk = 2\pi i m',\tag{15.33}$$

for some *m′* ∈ ℕ. We now choose *δ* satisfying

$$
\delta < \min \left\{ \frac{1}{16} \frac{c^2}{C\epsilon}, \frac{c}{2} \right\} \tag{15.34}
$$

(the specific choice of constants plays a role in the estimates of the integrals below (15.35)) and a common shift *t(ε, δ)* such that

$$|p(k + t(\epsilon,\delta)) - p(k)| \le \delta, \quad |p'(k + t(\epsilon,\delta)) - p'(k)| \le \delta,$$

$$|q(k + t(\epsilon,\delta)) - q(k)| \le \delta, \quad |q'(k + t(\epsilon,\delta)) - q'(k)| \le \delta$$

for all *k* with |Im *k*| < 1; such common shifts exist by Lemma 15.11 applied to *p, p′, q,* and *q′*, and by (15.30) the shift may be taken so that to every zero of *p* in *Bε(k*0 + *t(ε, δ))* there corresponds a zero of *q* at distance less than *ε.*
We first show that with this choice of *δ* and *t (, δ)* the integrals over the boundaries of the shifted discs *B (k*<sup>0</sup> + *t (, δ)), B*<sup>2</sup> *(k*<sup>0</sup> + *t (, δ))* and the unshifted discs are equal. Consider the difference:

$$\begin{aligned} \int\_{\partial B\_{\epsilon}(k\_0)} \frac{p'(k)}{p(k)} dk - \int\_{\partial B\_{\epsilon}(k\_0 + t(\epsilon, \delta))} \frac{p'(k)}{p(k)} dk \\ = \int\_{\partial B\_{\epsilon}(k\_0)} \left( \frac{p'(k)}{p(k)} - \frac{p'(k + t(\epsilon, \delta))}{p(k + t(\epsilon, \delta))} \right) dk. \end{aligned}$$

We estimate:

$$\begin{split} & \left| \int\_{\partial B\_{\epsilon}(k\_0)} \left( \frac{p'(k)}{p(k)} - \frac{p'(k + t(\epsilon, \delta))}{p(k + t(\epsilon, \delta))} \right) dk \right| \\ & \leq \int\_{\partial B\_{\epsilon}(k\_0)} \left| \frac{|p'(k) - p'(k + t(\epsilon, \delta))| \, |p(k + t(\epsilon, \delta))|}{|p(k)| \, |p(k + t(\epsilon, \delta))|} \right. \\ & \qquad \left. + \frac{|p'(k + t(\epsilon, \delta))| \, |p(k + t(\epsilon, \delta)) - p(k)|}{|p(k)| \, |p(k + t(\epsilon, \delta))|} \right| dk \\ & \leq 2\pi\epsilon \left( \frac{\delta(C + \delta)}{c(c - \delta)} + \frac{(C + \delta)\delta}{c(c - \delta)} \right) = \epsilon \frac{4\pi\delta(C + \delta)}{c(c - \delta)} \\ & \leq \epsilon \frac{4\pi\delta \cdot 2C}{c^2/2} = \epsilon \frac{16\pi C}{c^2} \delta < \pi, \end{split} \tag{15.35}$$

where we used that (15.34) implies *δ* < *c*/2 and *δ* < (1/16) *c*²/(*Cε*)*.* The reason to choose *δ* satisfying (15.34) is clear now. It follows that the difference between the two integrals is less than *π* and therefore they are equal, so

$$\int\_{\partial B\_{\epsilon}(k\_0 + t(\epsilon, \delta))} \frac{p'(k)}{p(k)} dk = 2\pi i m\_0,\tag{15.36}$$

and *m*<sup>0</sup> zeroes of *p(k)* lie inside the shifted disc *B (k*<sup>0</sup> + *t (, δ))*. In the same way one shows that

$$\int\_{\partial B\_{2\epsilon}(k\_0 + t(\epsilon, \delta))} \frac{p'(k)}{p(k)} dk = 2\pi i m\_0, \quad \int\_{\partial B\_{2\epsilon}(k\_0 + t(\epsilon, \delta))} \frac{q'(k)}{q(k)} dk = 2\pi i m',$$

so *p* has *m*0 zeros in *B*2*ε(k*0 + *t(ε, δ))*, all of which are actually contained in *Bε(k*0 + *t(ε, δ))*, and *q* has *m′* zeros in *B*2*ε(k*0 + *t(ε, δ))*.

By the choice of *t (, δ)* to every zero of *p* inside *B (k*<sup>0</sup> + *t (, δ))* corresponds a zero of *q* lying at a distance less than *.* All such zeroes lie inside the ball of double radius *B*<sup>2</sup> *(k*<sup>0</sup> + *t (, δ))*, hence *m* ≥ *m*0*.*

Letting *ε* → 0, while choosing suitable *δ*'s and shifts *t(ε, δ)*, we conclude that *k*0 is a zero of *q* of multiplicity at least *m*0.

The above theorem holds even for holomorphic almost periodic functions [357]. By applying Theorem 15.12 twice we obtain the following statement as a special case.

**Theorem 15.13** *Let p, q be generalised trigonometric polynomials* 

$$p(k) = \sum\_{l=1}^{J\_1} p\_l e^{i\omega\_l k}, \ q(k) = \sum\_{j=1}^{J\_2} q\_j e^{i\nu\_j k} \tag{15.37}$$

*with zeros kn and ln respectively. If*

$$\lim\_{n \to \infty} (k\_n - l\_n) = 0,\tag{15.38}$$

*then the zeros of the two functions are identical.*

This direction of research has already been continued in [220] where uniqueness theorems for Fourier quasicrystals were established.

## *15.2.2 Asymptotically Isospectral Quantum Graphs*

In this subsection we will apply Theorem 15.13 to the spectral theory of quantum graphs. As was shown in Theorem 6.1, the spectrum of *L*<sup>**S**</sup>(Γ) for a finite compact quantum graph with scaling invariant vertex conditions **S** is given by the zeros of a generalised trigonometric polynomial.

We shall also use the notion of asymptotically isospectral quantum graphs given in Definition 11.7. We start by showing that two asymptotically isospectral scaling invariant Laplacians are in fact isospectral. This implies that the spectrum of scaling invariant Laplacians possesses certain rigidity, so that it is determined by the asymptotics.

**Theorem 15.14** *Let L*<sup>**S**₁</sup>(Γ₁) *and L*<sup>**S**₂</sup>(Γ₂) *be two Laplace operators defined on finite compact metric graphs* Γ₁ *and* Γ₂ *by certain scaling invariant vertex conditions given by* **S**₁ *and* **S**₂ *respectively. If the operators are asymptotically isospectral, then they are isospectral.*

*Proof* Scaling invariant Laplacians are non-negative operators, which is easily seen from their quadratic forms given by Dirichlet integrals. The positive eigenvalues are given by the zeroes of certain generalised trigonometric polynomials (Theorem 6.1). Then Theorem 15.13 implies that the zeroes of these polynomials coincide, since they are asymptotically close. We have thus proven that all positive eigenvalues coincide.

It remains to show that the multiplicities of the eigenvalue zero coincide.<sup>4</sup> But this trivially follows from the fact that the lowest non-zero eigenvalues not only coincide but have the same index.

Taking into account the above theorem one may think that the notion of asymptotic isospectrality is redundant. This is not completely true: for example, a Schrödinger operator on an interval is asymptotically isospectral but not isospectral to the Laplacian, unless the potential is identically zero. More generally, Theorem 11.9 implies that the Schrödinger operator *L*<sub>*q*</sub><sup>**S**</sup>(Γ) is asymptotically isospectral to the reference Laplacian *L*<sup>**S**(∞)</sup>(Γ<sup>∞</sup>)*.*

Two asymptotically isospectral Schrödinger operators are not necessarily isospectral; Theorem 15.14 does not hold for Schrödinger operators. On the other hand, one may generalise it by proving that the corresponding reference Laplacians are isospectral.

**Theorem 15.15** *Let L*<sub>*q*₁</sub><sup>**S**₁</sup>(Γ₁) *and L*<sub>*q*₂</sub><sup>**S**₂</sup>(Γ₂) *be two Schrödinger operators on finite compact metric graphs* Γ*i with qi* ∈ *L*₁(Γ*i*) *and vertex conditions determined by unitary matrices* **S***i, for i* = 1*,* 2*. Suppose that the operators are asymptotically isospectral; then the corresponding reference Laplacians L*<sup>**S**₁(∞)</sup>(Γ₁<sup>∞</sup>) *and L*<sup>**S**₂(∞)</sup>(Γ₂<sup>∞</sup>) *are isospectral.*

*Proof* In accordance with Theorem 11.9 the operators

$$L\_{q\_1}^{\mathbf{S}\_1}(\Gamma\_1) \quad \text{and} \ L^{\mathbf{S}\_1(\infty)}(\Gamma\_1^{\infty}),$$

$$L\_{q\_2}^{\mathbf{S}\_2}(\Gamma\_2) \quad \text{and} \ L^{\mathbf{S}\_2(\infty)}(\Gamma\_2^{\infty})$$

are pairwise asymptotically isospectral. By assumption the Schrödinger operators *L*<sub>*q*₁</sub><sup>**S**₁</sup>(Γ₁) and *L*<sub>*q*₂</sub><sup>**S**₂</sup>(Γ₂) are asymptotically isospectral, hence the reference Laplacians are also asymptotically isospectral. Then Theorem 15.14 implies that they are in fact isospectral.

The above theorem does not imply that the underlying reference graphs Γ*i*<sup>∞</sup> coincide, since there exist isospectral scaling invariant, or even standard, Laplacians. The theorem just implies that the reference Laplacians belong to the same isospectral class.

One may strengthen the above theorem by assuming that the two Schrödinger operators are not necessarily asymptotically isospectral, but just have asymptotically close spectra. The only new point is that the multiplicities of zero as an eigenvalue of the two reference Laplacians may be different.

<sup>4</sup> Remember that the multiplicity of *k* = 0 as a zero of the secular trigonometric polynomial may be different from the spectral multiplicity of *λ* = 0.

**Theorem 15.16** *Let L*<sub>*q*₁</sub><sup>**S**₁</sup>(Γ₁) *and L*<sub>*q*₂</sub><sup>**S**₂</sup>(Γ₂) *be two Schrödinger operators on finite compact metric graphs* Γ*i with qi* ∈ *L*₁(Γ*i*) *and vertex conditions determined by unitary matrices* **S***i, for i* = 1*,* 2*. Suppose that their spectra*

$$\{k\_n^2\} = \Sigma(L\_{q\_1}^{\mathbf{S}\_1}(\Gamma\_1)) \quad \text{and} \quad \{l\_n^2\} = \Sigma(L\_{q\_2}^{\mathbf{S}\_2}(\Gamma\_2))$$

*are asymptotically close in the sense that* 

$$k\_{n+m} - l\_n \to 0, \text{ as } n \to \infty \tag{15.39}$$

*for some m* ∈ ℕ*. Then all non-zero eigenvalues of the corresponding reference Laplacians L*<sup>**S**₁(∞)</sup>(Γ₁<sup>∞</sup>) *and L*<sup>**S**₂(∞)</sup>(Γ₂<sup>∞</sup>) *coincide and m is the difference in the multiplicities of the eigenvalue* 0 *of the two operators.*

**Problem 71** Prove Theorem 15.16 in full detail.

The above theorem cannot be strengthened by showing that the eigenvalue 0 has the same multiplicity. Consider the circle graph **S**<sup>1</sup> of length 2*π* with one vertex with standard conditions, and the graph Γ consisting of two disjoint intervals of length *π* with standard (i.e. Neumann) conditions at all endpoints. Then Σ(*L*<sup>st</sup>(**S**<sup>1</sup>)) = {0*,* 1*,* 1*,* 2<sup>2</sup>*,* 2<sup>2</sup>*,...*} while Σ(*L*<sup>st</sup>(Γ)) = {0*,* 0*,* 1*,* 1*,* 2<sup>2</sup>*,* 2<sup>2</sup>*,...*}. All non-zero eigenvalues coincide, but the multiplicity of the eigenvalue *λ* = 0 is determined by the number of connected components of the corresponding graph.
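Both spectra in this example are explicit, so the comparison can be scripted directly (a trivial check; the lists are truncated at *n* = 5):

```python
# Eigenvalues of the standard Laplacian on a circle of length 2*pi:
# the simple eigenvalue 0 plus n^2 with multiplicity 2 for n >= 1.
circle = sorted([0.0] + [float(n * n) for n in range(1, 6) for _ in (0, 1)])

# Two disjoint intervals of length pi with Neumann endpoints:
# each interval contributes 0, 1, 4, 9, ...
intervals = sorted([float(n * n) for n in range(0, 6) for _ in (0, 1)])

print(circle)
print(intervals)
```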

# *15.2.3 When a Schrödinger Operator Is Isospectral to a Laplacian*

Our studies of asymptotically isospectral graphs lead us to the following unexpected generalisation of the Davies theorem (Theorem 15.8).

**Theorem 15.17** *Let $\Gamma_1$ be a finite compact metric graph, $q \in L_\infty(\Gamma_1)$, and suppose that $\Sigma(L^{\mathrm{st}}_q(\Gamma_1)) = \Sigma(L^{\mathrm{st}}_0(\Gamma_2))$ for some (possibly different) finite compact metric graph $\Gamma_2$. Then $q(x) \equiv 0$.*

*Proof* Theorem 15.15 implies that $\Sigma(L^{\mathrm{st}}_0(\Gamma_1)) = \Sigma(L^{\mathrm{st}}_0(\Gamma_2))$, since standard conditions are scaling invariant. Therefore $\Sigma(L^{\mathrm{st}}_q(\Gamma_1)) = \Sigma(L^{\mathrm{st}}_0(\Gamma_1))$, and from Theorem 15.8 we obtain $q(x) \equiv 0$.

Note that we do not claim that the graphs $\Gamma_1$ and $\Gamma_2$ coincide, just that the corresponding standard Laplacians are isospectral. On the other hand, there exist metric graphs that are uniquely determined by the spectrum of the corresponding standard Laplacian (see for example Sect. 9.4); for such graphs we may conclude in addition that $\Gamma_1 = \Gamma_2$.

**Theorem 15.18** *Let $\Gamma$ be a finite compact metric graph with rationally independent edge lengths. Then the spectrum of the standard Schrödinger operator $L^{\mathrm{st}}_q(\Gamma)$ with $q \in L_1(\Gamma)$ determines the metric graph $\Gamma$ uniquely.*

The theorem is an easy corollary of Theorems 9.11 and 15.17.

The above theorems show once again that standard vertex conditions and zero potential are exceptional for the inverse problem.

**Problem 72** Is it possible to strengthen the above theorem by assuming that the vertex conditions in the Schrödinger operator are just asymptotically standard (instead of standard conditions)?

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 16 Magnetic Fluxes**

We have already seen that the spectra of quantum graphs are independent of the particular form of the magnetic potential (5.8)—only integrals of the magnetic potential along the edges play a role. One may say that adding magnetic potential is equivalent to a special change of vertex conditions. The goal of this chapter is to look at this connection in more detail.

The integrals along the cycles can be interpreted as fluxes of the magnetic field. In particular, the spectra of trees are independent of the magnetic potential. This phenomenon is well known to physicists, who like to say that *there is no magnetic field in one dimension*. On the other hand, Y. Aharonov and D. Bohm predicted that the motion of charged quantum particles in a ring is affected by the magnetic field through the ring, even though the field may be zero on the ring itself. This phenomenon is known as the Aharonov-Bohm effect. What we are going to do is extend these studies to the case of several cycles.

It will also be shown that even though the number of parameters the spectrum depends on equals the number $\beta_1$ of independent cycles, it may happen that the dependence upon one of these parameters is suppressed, provided the other parameters are chosen in a special way. The reasons are purely topological; we therefore call this phenomenon topological damping of the Aharonov-Bohm effect.

# **16.1 Unitary Transformations via Multiplications and Magnetic Schrödinger Operators**

Consider the magnetic Schrödinger operator $L^{\mathbf{S}}_{q,a}(\Gamma)$ defined following Sect. 3.8 on a connected finite compact metric graph $\Gamma$ by the differential expression

$$
\tau_{q,a} u = \left(i\frac{d}{dx} + a(x)\right)^2 u + q(x)\, u,\tag{16.1}
$$


on the functions satisfying vertex conditions (3.53)

$$i \left( S_m - I \right) \vec{u}_m = \left( S_m + I \right) \partial \vec{u}_m, \quad m = 1, 2, \dots, M. \tag{16.2}$$

In quantum mechanics unitarily equivalent operators define the same physical model and therefore are usually identified. On the other hand, the probability density is given by the squared absolute value of the wave function, $\rho(x) = |\psi(x)|^2$. Therefore, if the configuration space is fixed, it is natural to identify operators $\tilde{L}$ and $L$ connected via the unitary transformation given by multiplication by any unimodular function $\mathbf{U}(x) = \exp(i\Theta(x))$, $\Theta(x) \in \mathbb{R}$:

$$\tilde{L} = \mathbf{U}^{-1} L \mathbf{U} = e^{-i\Theta(x)} L e^{i\Theta(x)}.\tag{16.3}$$

We have already seen in Sect. 4.1 that the magnetic potential on each edge can be eliminated, but the corresponding transformation affects the vertex conditions. In what follows we study this dependence systematically.

Let us first consider elementary examples where the function $\Theta$ is constant on the edges.

**Special Case 1** If the function $\Theta(x)$ is chosen equal to a fixed real constant, $\Theta(x) \equiv \theta$, $\theta \in \mathbb{R}$, on the whole graph $\Gamma$, then the operator of multiplication $\mathbf{U}$ commutes with any $L^{\mathbf{S}}_{q,a}$ and therefore the transformed operator $L^{\tilde{\mathbf{S}}}_{\tilde{q},\tilde{a}}$ is equal to the original one:

$$L^{\tilde{\mathbf{S}}}_{\tilde{q},\tilde{a}} = L^{\mathbf{S}}_{q,a}.$$

This case is trivial and may be ignored.

**Special Case 2** Let the function $\Theta$ be chosen equal to a separate constant on each edge of $\Gamma$:

$$
\Theta(x) = \theta_n, \quad x \in E_n,\tag{16.4}
$$

where $\theta_n$ are certain real parameters. With this choice the differential expressions $\tau_{\tilde{q},\tilde{a}}$ and $\tau_{q,a}$ coincide on every edge $E_n$, but the corresponding operators may be different, since the vertex conditions at a vertex $V^m$ are affected if the phases $\theta(x_j)$ are different for $x_j \in V^m$.

More precisely, assume without loss of generality that the edges joined together at the vertex $V^m$ are enumerated as $E_1, E_2, \dots, E_{d_m}$ and that the functions from the domain of $L^{\mathbf{S}}_{q,a}$ satisfy vertex conditions (16.2). Consider the diagonal unitary matrix $U_m$ given by

$$U_m = \operatorname{diag}\left\{ e^{i\theta_1}, e^{i\theta_2}, \dots, e^{i\theta_{d_m}} \right\},$$

where $\theta_n = \theta(x_j)$, provided $x_j \in E_n$. Then the unitary matrices $\tilde{S}_m$ and $S_m$ associated with the two operators are connected via

$$
\tilde{S}\_m = (U\_m)^{-1} \ S\_m U\_m. \tag{16.5}
$$

To see this assume that $u \in \operatorname{Dom}(L^{\tilde{\mathbf{S}}}_{q,a})$. Every such function is mapped by the transformation $\mathbf{U}$ to a function from the domain of $L^{\mathbf{S}}_{q,a}$ and therefore satisfies condition (16.2):

$$i(S_m - I) U_m \vec{u}_m = (S_m + I) U_m \partial\vec{u}_m.$$

Multiplying both sides by *(Um)* <sup>−</sup><sup>1</sup> we arrive at

$$i(\tilde{S}_m - I)\vec{u}_m = (\tilde{S}_m + I)\partial\vec{u}_m,$$

using (16.5).

Summing up, multiplication by a function that is constant on every edge leads to a magnetic Schrödinger operator given by the same differential expression, but with vertex conditions determined by matrices $\tilde{S}_m$ connected to $S_m$ via (16.5).

**General Case** Assume that the operators $L^{\tilde{\mathbf{S}}}_{\tilde{q},\tilde{a}}$ and $L^{\mathbf{S}}_{q,a}$ are related by the unitary transformation (16.3) above, where we assume that the function $\Theta(x)$ is continuously differentiable inside the edges. Applying the formula

$$\left(i\frac{d}{dx} + a(x)\right) e^{i\Theta(x)} u(x) = e^{i\Theta(x)} \left(i\frac{d}{dx} + a(x) - \Theta'(x)\right) u(x)$$

twice we obtain the following expression for the transformed differential operator

$$\begin{split} \tau_{\tilde{q},\tilde{a}} u &= e^{-i\Theta(x)} \left[ \left( i\frac{d}{dx} + a(x) \right)^2 + q(x) \right] e^{i\Theta(x)} u \\ &= \left( i\frac{d}{dx} + \underbrace{a(x) - \Theta'(x)}_{=\tilde{a}(x)} \right)^2 u + \underbrace{q(x)}_{=\tilde{q}(x)} u. \end{split} \tag{16.6}$$

It follows that the electric potential is not affected but the magnetic potential is changed

$$
\tilde{q}(x) = q(x), \quad \text{and} \quad \tilde{a}(x) = a(x) - \Theta'(x).\tag{16.7}
$$
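The displayed commutation formula behind this computation can be checked symbolically; a minimal sketch using sympy (applying the same identity twice yields (16.6)):

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')(x)
a = sp.Function('a')(x)          # magnetic potential a(x)
Theta = sp.Function('Theta')(x)  # gauge function Theta(x)

# The magnetic momentum expression i d/dx + a(x) ...
op = lambda f: sp.I * sp.diff(f, x) + a * f

# ... applied to e^{i Theta} u, compared with the right-hand side
# e^{i Theta} (i d/dx + a - Theta') u of the formula above.
lhs = op(sp.exp(sp.I * Theta) * u)
rhs = sp.exp(sp.I * Theta) * (sp.I * sp.diff(u, x) + (a - sp.diff(Theta, x)) * u)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```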

The transformation of the vertex conditions is the same as the one discussed in the second special case. The only difference is that the phases $\theta_n$ should be substituted with the limiting values $\Theta(x_j)$, $x_j \in V^m$. Note that the extended normal derivatives given by (2.26) are changed accordingly, since their definition depends on the value of the magnetic potential.

**Problem 73** Prove that the transformation (16.3) maps extended normal derivatives to extended normal derivatives and that the new vertex conditions at a vertex $V^m$ are given by the matrix $\tilde{S}_m$ connected to $S_m$ via (16.5).

In particular, the magnetic potential on every edge is eliminated if one chooses:

$$
\Theta(x) = \int_{x_0}^{x} a(y)\, dy.
$$

Observe that by eliminating the magnetic potential one introduces new phases in the vertex conditions, as given by (16.5). Hence, in order to study spectral properties of magnetic Schrödinger operators on metric graphs, it is enough to consider Schrödinger operators with zero magnetic potential but with extra phases in the vertex conditions. We are going to call these phases simply **vertex phases** (see (16.10) below) and will in particular study the dependence upon these phases.

**Special Case 3** If the graph $\Gamma$ is a tree, then the function eliminating the magnetic potential can be chosen continuous on the whole metric graph:

$$
\Theta(x) = \int_{x_0}^{x} a(y)\, dy,\tag{16.8}
$$

where the point $x_0 \in \Gamma$ is arbitrary and integration is along the shortest path on $\Gamma$ connecting $x_0$ and $x$. Note that integration along the edges forming the path should be carried out respecting their orientation: if the path goes along an edge in the positive direction, then the corresponding contribution should be taken with a $+$ sign, otherwise with a $-$ sign. This is necessary since under a change of edge orientation the magnetic potential is multiplied by $-1$.

It follows that the magnetic potential on a tree can be eliminated without changing the vertex conditions, i.e. the spectrum of a magnetic Schrödinger operator on a tree coincides with the spectrum of the non-magnetic Schrödinger operator with the same electric potential $q$ and the same vertex conditions.
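The mechanism has a well-known discrete analogue, which we sketch here under the assumption of a tight-binding (hopping) Laplacian with Peierls phases $e^{i\theta}$ on the edges; this is an illustration, not the continuous operator studied in this chapter. On a path graph (a tree) the phases gauge away completely, while on a cycle only the total flux survives:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
phases = rng.uniform(0, 2 * np.pi, n)  # one phase per edge

def hopping_spectrum(phases, cyclic):
    """Eigenvalues of a discrete magnetic hopping operator with
    Peierls phases exp(i*theta_j) on the edges of a path or a cycle."""
    m = len(phases) if cyclic else len(phases) - 1  # number of edges used
    H = np.zeros((n, n), dtype=complex)
    for j in range(m):
        k = (j + 1) % n
        H[j, k] = -np.exp(1j * phases[j])
        H[k, j] = -np.exp(-1j * phases[j])
    return np.sort(np.linalg.eigvalsh(H))

# On a path (a tree) the phases gauge away completely:
assert np.allclose(hopping_spectrum(phases, cyclic=False),
                   hopping_spectrum(np.zeros(n), cyclic=False))

# On a cycle only the total flux (the sum of the phases) matters:
concentrated = np.zeros(n)
concentrated[0] = np.sum(phases)
assert np.allclose(hopping_spectrum(phases, cyclic=True),
                   hopping_spectrum(concentrated, cyclic=True))
```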

Assume now that $\Gamma$ is not a tree but contains cycles. Consider then any spanning tree $T$ of $\Gamma$ obtained by chopping precisely $\beta_1 = N - M + 1$ vertices, one on each independent cycle in $\Gamma$. The magnetic potential on $T$ can then be eliminated as described above. Using the same function $\Theta$ to eliminate the magnetic potential on $\Gamma$ leads to introducing at most $\beta_1$ vertex phases, since the values of $\Theta$ at the different parts of the chopped vertices may be different. Assume that a vertex $V^m$ was divided into two vertices $V^{m'}$ and $V^{m''}$, one of them of degree $1$. If $\Theta(V^{m'})$ and $\Theta(V^{m''})$ differ modulo $2\pi$, then the matrix $S_m$ describing the vertex conditions on $\Gamma$ is transformed by (16.5). The matrix $U_m$ contains the factors $e^{i\Theta(V^{m'})}$ and $e^{i\Theta(V^{m''})}$, but one common factor in the similarity transformation can be cancelled. Hence the matrix $\tilde{S}_m$ depends on the difference $\Theta(V^{m'}) - \Theta(V^{m''})$, which turns out to be equal to the integral of the magnetic potential along the independent cycle to which $V^m$ belongs. We conclude that elimination of the magnetic potential on a graph with $\beta_1$ independent cycles $C_j$ leads to an operator with zero magnetic potential, the same electric potential, and new vertex conditions containing at most $\beta_1$ phases equal to

$$\Phi_j = \int_{C_j} a(y)\, dy, \quad j = 1, 2, \dots, \beta_1. \tag{16.9}$$

These phases should be interpreted as **magnetic fluxes**: the fluxes of the magnetic field through the independent cycles.

We have in particular proven the following theorem, which probably appeared first in [311].

**Theorem 16.1** *Consider the magnetic Schrödinger operator $L^{\mathbf{S}}_{q,a}$ on a finite compact metric graph $\Gamma$ with a fixed electric potential $q$ and fixed vertex conditions (16.2). Then as the magnetic potential $a$ varies, the spectrum of $L^{\mathbf{S}}_{q,a}$ depends on at most $\beta_1 = N - M + 1$ parameters, which can be identified with the fluxes $\Phi_j$ given by (16.9).*

This theorem states that the spectrum depends on at most $\beta_1$ parameters. We have strong reasons to believe that if all vertex conditions are chosen properly connecting, then the spectrum does depend on all $\beta_1$ magnetic fluxes. Observe that in very special cases it may happen that the spectrum is independent of one of the fluxes, provided the other fluxes are chosen in a special way (see Sect. 16.3 below).

## **16.2 Vertex Phases and Transition Probabilities**

We have seen that by eliminating the magnetic potential on the edges one obtains a Schrödinger operator with the same electric potential, but with vertex conditions given by $\tilde{S}_m = (U_m)^{-1} S_m U_m$ instead of $S_m$. The transformation above contains certain phases, which we are going to call **vertex phases**:

$$
\theta\_j = \Theta(\mathbf{x}\_j),
\tag{16.10}
$$

where $\Theta(x)$ is the function used to define the unitary transformation $\mathbf{U} = \exp(i\Theta(x))$.

The matrices $\tilde{S}_m$ and $S_m$ have the same absolute values of all entries:

$$|(\tilde{S}_m)_{lj}|^2 = |(S_m)_{lj}|^2. \tag{16.11}$$

It follows that waves penetrate such vertices with precisely equal transition probabilities given by $\rho_{ij} = |(S_m)_{ij}|^2$. Transition probabilities are physically relevant and in principle are much easier to measure in experiments than the scattering coefficients. Therefore one may discuss the possibility of defining the quantum graph model using just transition probabilities instead of vertex scattering coefficients.

It turns out that this point of view is not optimal, since a unitary matrix $S_m$ is in general not determined by the transition probabilities $|(S_m)_{ij}|^2$. First of all, it is trivial to see that the transformation

$$S\_m \mapsto \tilde{S}\_m = (D\_m)^{-1} S\_m D\_m,\tag{16.12}$$

where *Dm* is a diagonal unitary matrix

$$D_m = \operatorname{diag}\left\{ e^{i\varphi_1}, e^{i\varphi_2}, \dots, e^{i\varphi_{d_m}} \right\}, \quad \varphi_l \in \mathbb{R}, \tag{16.13}$$

preserves the transition probabilities. Note that this transformation coincides with (16.5) if all phases $\varphi_l$ are chosen equal to the values of $\theta_j$ at the corresponding endpoints. Then we have $D_m = U_m$, and considering two quantum graphs with the vertex conditions defined by $\tilde{S}_m$ and $S_m$ is equivalent to reintroducing magnetic potentials.
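This invariance is easy to check numerically; a minimal sketch with a random unitary matrix (the seed and the size $d_m = 4$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary S_m (the Q factor of a random complex matrix) ...
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S, _ = np.linalg.qr(A)

# ... and a random diagonal unitary D_m as in (16.13).
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))

S_tilde = np.linalg.inv(D) @ S @ D

# S_tilde is again unitary and has the same transition probabilities.
assert np.allclose(S_tilde.conj().T @ S_tilde, np.eye(4))
assert np.allclose(np.abs(S_tilde), np.abs(S))
```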

Transformation (16.12) in general does not characterize all unitary matrices with the same transition probabilities. As an example, consider the following one-parameter family of reflectionless equi-transmitting matrices of order 6 [358]:

$$C_{1,1,1} = \frac{1}{\sqrt{5}} \begin{pmatrix} 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & -1 & -1 \\ 1 & 1 & 0 & -1 & e^{i\varphi} & -e^{i\varphi} \\ 1 & 1 & -1 & 0 & -e^{i\varphi} & e^{i\varphi} \\ 1 & -1 & e^{-i\varphi} & -e^{-i\varphi} & 0 & 1 \\ 1 & -1 & -e^{-i\varphi} & e^{-i\varphi} & 1 & 0 \end{pmatrix},$$

where $\varphi \in \mathbb{R}$. All matrices from this family have the same reflection probabilities $|S_{jj}|^2 = 0$ (reflectionless) and the same transition probabilities $|S_{ij}|^2 = \frac{1}{5}$, $i \neq j$ (equi-transmitting). Of course, the considered example is very special, since many of the entries of $C_{1,1,1}$ have the same absolute value. Adding the similarity transformation (16.12) one obtains a six-parameter family of matrices with the same transition probabilities.

One may study quantum graph models fixing all $S_m$ up to the vertex phases described above, in other words considering vertex conditions given by $\tilde{S}_m = (D_m)^{-1} S_m D_m$ with $S_m$ fixed, $m = 1, 2, \dots, M$. Then with every vertex $V^m$ precisely $d_m - 1$ arbitrary phases are associated. The spectrum of the quantum graph depends on the vertex phases, but how many of these parameters are independent? Altogether we have $\sum_{m=1}^{M}(d_m - 1) = 2N - M$ vertex phases. It turns out that only $\beta_1 = N - M + 1$ parameters are independent. The proof is completely analogous to the proof of Theorem 16.1.
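The counting above can be sketched in a few lines; the helper function is ours, assuming a graph described by its list of vertex degrees:

```python
def phase_counts(degrees):
    """Return (all vertex phases, independent parameters) for a graph
    given by its vertex degrees d_1, ..., d_M."""
    M = len(degrees)
    N = sum(degrees) // 2               # each edge has two endpoints
    total = sum(d - 1 for d in degrees)
    assert total == 2 * N - M           # sum of (d_m - 1) = 2N - M
    return total, N - M + 1             # beta_1 = N - M + 1

# Figure-eight graph: one vertex of degree 4, two edges:
# three vertex phases, two independent fluxes.
assert phase_counts([4]) == (3, 2)

# A cycle on three vertices of degree 2:
# three vertex phases, one independent flux.
assert phase_counts([2, 2, 2]) == (3, 1)
```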

Let us study the following clarifying example.

**Example 16.2** Consider the figure-eight metric graph $\Gamma_{(2.4)}$ obtained by joining together two loops at a single vertex. The endpoints joined in the vertex $V^1$ are $x_1, x_2, x_3, x_4$.

Define the Laplace operator $L^{\tilde{S}}$ on $\Gamma_{(2.4)}$ on the functions from the Sobolev space $W_2^2([x_1, x_2] \cup [x_3, x_4])$ satisfying the vertex conditions

$$i(\tilde{S} - I)\vec{\psi}(V^1) = (\tilde{S} + I)\partial\vec{\psi}(V^1),\tag{16.14}$$

where *S*˜ = *D*∗*SD* with

$$S = \frac{1}{\sqrt{3 + \cos^2\beta}} \begin{pmatrix} \cos\beta & 1 & 1 & 1 \\ 1 & \cos\beta & -e^{i\beta} & -e^{-i\beta} \\ 1 & -e^{-i\beta} & -\cos\beta & e^{-i\beta} \\ 1 & -e^{i\beta} & e^{i\beta} & -\cos\beta \end{pmatrix}, \quad \beta \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \qquad (16.15)$$

and $D = \operatorname{diag}\left\{1, e^{i\varphi_1}, e^{i\varphi_2}, e^{i\varphi_3}\right\}$, $\varphi_j \in [-\pi, \pi)$. For $a = -\frac{e^{i\beta}}{\sqrt{3+\cos^2\beta}}$, $t = \frac{1}{\sqrt{3+\cos^2\beta}}$ and $r = \frac{\cos\beta}{\sqrt{3+\cos^2\beta}}$ we write the above matrix as follows

$$S = \begin{pmatrix} r & t & t & t \\ t & r & a & \overline{a} \\ t & \overline{a} & -r & -\overline{a} \\ t & a & -a & -r \end{pmatrix}.$$

This is a zero-trace equitransmitting unitary Hermitian matrix constructed in [348].

The operator $L^{\tilde{S}}$ so defined is determined by four real parameters: one in the matrix $S$ and three in $D$. The parameter in $S$ determines the transition probabilities through the central vertex, while not all of the phases in $D$ are important: some choices lead to unitarily equivalent operators. Let us calculate the spectrum explicitly in order to see this phenomenon.

The solution to the Laplace equation $-\psi'' = k^2\psi$ is given by

$$\psi(x) = \begin{cases} a_1 e^{ik|x - x_1|} + a_2 e^{ik|x - x_2|}, & x \in [x_1, x_2], \\ a_3 e^{ik|x - x_3|} + a_4 e^{ik|x - x_4|}, & x \in [x_3, x_4]. \end{cases}$$

Remember that $l_1 = x_2 - x_1$ and $l_2 = x_4 - x_3$. Since the unitary matrix $\tilde{S}$ is also Hermitian, it plays the role of the vertex scattering matrix connecting the amplitudes of the incoming and outgoing waves at the vertex. In other words, the vertex conditions (16.2) imposed on $\psi$ imply that

$$
\tilde{S} \begin{pmatrix} e^{ikl\_1} a\_2 \\ e^{ikl\_1} a\_1 \\ e^{ikl\_2} a\_4 \\ e^{ikl\_2} a\_3 \end{pmatrix} = \begin{pmatrix} a\_1 \\ a\_2 \\ a\_3 \\ a\_4 \end{pmatrix} . \tag{16.16}
$$

This equation can be written as

$$\left(\tilde{S}S\_{\mathfrak{e}} - I\right) \begin{pmatrix} a\_1 \\ a\_2 \\ a\_3 \\ a\_4 \end{pmatrix} = 0,\tag{16.17}$$

where

$$S\_{\mathbf{e}} = \begin{pmatrix} 0 & e^{ikl\_1} & 0 & 0 \\ e^{ikl\_1} & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{ikl\_2} \\ 0 & 0 & e^{ikl\_2} & 0 \end{pmatrix}.$$

Hence the spectrum of $L$ is determined by the secular equation $\det\left(\tilde{S}S_{\mathbf{e}} - I\right) = 0$.

We are going to show that the spectrum depends on just three parameters: $\beta$, the phase $\varphi_1$, and the difference $\varphi_3 - \varphi_2$; in other words, one of the phase parameters can be eliminated if we are interested in the spectrum only.

One may look at the secular equation directly:

$$\begin{aligned} 1 &+ \left\{ 7t^4 + 2r^2t^2 + rt^3\left(4e^{-i\beta} + e^{i\beta}\right) + \cos(2\beta) + rt^3 e^{i(\beta - \varphi_1)} \right\} e^{2ik(l_1+l_2)} \\ &+ 2t\left\{ \left(r^2 - 3t^2\right)e^{ik(2l_1+l_2)} + 2te^{ik(l_1+l_2)} - e^{ikl_2} \right\} \cos(\beta - \varphi_3 + \varphi_2) \\ &- 2te^{ikl_1}\left(t^2 e^{2ikl_2} + 1\right)\cos\varphi_1 - 2t\left(2t^2 - r^2 + rt\cos\beta\right)e^{ik(l_1+2l_2)}\cos\varphi_1 \\ &- 2rt^2 e^{ik(2l_1+l_2)}\left(\cos\varphi_1 + \cos(2\beta - \varphi_3 + \varphi_2)\right) = 0. \end{aligned}$$

Introducing $\phi_1 = \varphi_1$, $\phi_2 = \varphi_3 - \varphi_2$, we rewrite the equation as

$$\begin{aligned} 1 &+ \left\{ 7t^4 + 2r^2t^2 + rt^3\left(4e^{-i\beta} + e^{i\beta}\right) + \cos(2\beta) + rt^3 e^{i(\beta - \phi_1)} \right\} e^{2ik(l_1+l_2)} \\ &+ 2t\left\{ \left(r^2 - 3t^2\right)e^{ik(2l_1+l_2)} + 2te^{ik(l_1+l_2)} - e^{ikl_2} \right\} \cos(\beta - \phi_2) \\ &- 2te^{ikl_1}\left(t^2 e^{2ikl_2} + 1\right)\cos\phi_1 - 2t\left(2t^2 - r^2 + rt\cos\beta\right)e^{ik(l_1+2l_2)}\cos\phi_1 \\ &- 2rt^2 e^{ik(2l_1+l_2)}\left(\cos\phi_1 + \cos(2\beta - \phi_2)\right) = 0. \end{aligned}$$

It is clear that the spectrum is completely described by the three mentioned parameters.

Is it possible to see this using the unitary multiplication transformation described in the previous section? Consider the following function **U***(x)* constant on each of the edges:

$$\mathbf{U}\psi(x) = \begin{cases} \psi(x), & x \in E_1 = [x_1, x_2], \\ e^{-i\varphi_2}\psi(x), & x \in E_2 = [x_3, x_4]. \end{cases}$$

This unitary transformation does not change the differential operator but amends the vertex scattering matrix as follows

$$\hat{S} = \mathbf{U}^{-1}\tilde{S}\,\mathbf{U} = \operatorname{diag}\left(1, e^{-i\phi_1}, 1, e^{-i\phi_2}\right) S \operatorname{diag}\left(1, e^{i\phi_1}, 1, e^{i\phi_2}\right). \tag{16.18}$$

The transformation **U** obviously does not change the edge scattering matrix *S***e***,*

$$\mathbf{U}^{-1} S_{\mathbf{e}} \mathbf{U} = S_{\mathbf{e}},$$

hence

$$\det\left(\hat{S}S_{\mathbf{e}} - I\right) = \det\left(\mathbf{U}^{-1}\tilde{S}\mathbf{U} S_{\mathbf{e}} - \mathbf{U}^{-1}\mathbf{U}\right) = \dots = \det\left(\tilde{S}S_{\mathbf{e}} - I\right).$$

Thus the secular equation remains unchanged despite one of the parameters having been eliminated.
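The invariance can also be verified numerically; the following sketch (the edge lengths, parameter values, and helper names are our arbitrary choices) builds $\tilde{S} = D^*SD$ from (16.15) and checks that shifting $\varphi_2$ and $\varphi_3$ by a common constant leaves the secular function unchanged:

```python
import numpy as np

def S_matrix(beta):
    """The unitary Hermitian matrix (16.15)."""
    c = np.sqrt(3 + np.cos(beta)**2)
    e = np.exp(1j * beta)
    return np.array([[np.cos(beta), 1, 1, 1],
                     [1, np.cos(beta), -e, -np.conj(e)],
                     [1, -np.conj(e), -np.cos(beta), np.conj(e)],
                     [1, -e, e, -np.cos(beta)]]) / c

def secular_det(k, beta, p1, p2, p3, l1=1.0, l2=np.sqrt(2)):
    """det(S_tilde S_e - I) for the figure-eight example."""
    D = np.diag(np.exp(1j * np.array([0, p1, p2, p3])))
    S_t = D.conj().T @ S_matrix(beta) @ D
    Se = np.zeros((4, 4), dtype=complex)
    Se[0, 1] = Se[1, 0] = np.exp(1j * k * l1)
    Se[2, 3] = Se[3, 2] = np.exp(1j * k * l2)
    return np.linalg.det(S_t @ Se - np.eye(4))

# S is indeed unitary (and Hermitian), as claimed.
assert np.allclose(S_matrix(0.3) @ S_matrix(0.3).conj().T, np.eye(4))

# Shifting phi_2 and phi_3 by the same constant c does not change the
# secular function: only phi_1 and the difference phi_3 - phi_2 matter.
d0 = secular_det(1.7, 0.3, 0.5, 0.8, 1.1)
for c in (0.4, 2.0, -1.3):
    assert np.isclose(secular_det(1.7, 0.3, 0.5, 0.8 + c, 1.1 + c), d0)
```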

The parameters $\phi_{1,2}$ are associated with the two loops forming $\Gamma_{(2.4)}$ and can be interpreted as fluxes of the magnetic field through the loops, since these phases disappear if the magnetic potential on the edges is chosen appropriately, as described in the previous section.

## **16.3 Topological Damping of Aharonov-Bohm Effect**

This section is devoted to just one concrete example of a magnetic Schrödinger operator showing that the dependence of the spectrum upon some of the magnetic fluxes may be damped by choosing appropriate values of the other fluxes. We call this phenomenon topological damping, as explained at the end of the section. The example is taken from our recent paper [352].

## *16.3.1 Getting Started*

Consider again the figure-eight metric graph $\Gamma_{(2.4)}$ given in Fig. 16.1 and the magnetic Laplacian $L^{S}_{0,a}$ given by the differential expression

$$
\tau_{0,a} = \left( i \frac{d}{dx} + a(x) \right)^2 \tag{16.19}
$$

assuming vertex conditions

$$
i \left( S - I \right) \vec{u} = \left( S + I \right) \partial \vec{u}, \tag{16.20}
$$

where $\vec{u}$ and $\partial\vec{u}$ are the vectors of all limit values of the function and its extended normal derivatives at the vertex $V$:

$$
\vec{u} = \begin{pmatrix} u(x_1) \\ u(x_2) \\ u(x_3) \\ u(x_4) \end{pmatrix}, \quad \partial \vec{u} = \begin{pmatrix} u'(x_1) - ia(x_1)u(x_1) \\ -\left(u'(x_2) - ia(x_2)u(x_2)\right) \\ u'(x_3) - ia(x_3)u(x_3) \\ -\left(u'(x_4) - ia(x_4)u(x_4)\right) \end{pmatrix}.
$$

The matrix $S$ is unitary and is used to parametrise all possible matching conditions making the operator $L$ self-adjoint in $L_2(\Gamma_{(2.4)})$ when defined on all functions from the Sobolev space $W_2^2(\Gamma \setminus V)$ satisfying (16.20).

**Fig. 16.1** The figure-eight graph $\Gamma_{(2.4)}$

Our main interest is the dependence of the spectrum upon magnetic fluxes through the two loops

$$\phi_j = \int_{x_{2j-1}}^{x_{2j}} a(x)\, dx, \quad j = 1, 2. \tag{16.21}$$

Using the transformation

$$U_a: u(x) \mapsto \exp\left(i \int_{x_{2j-1}}^{x} a(y)\, dy\right) u(x)\tag{16.22}$$

the magnetic Laplacian is mapped to the Laplacian $L^{S^{\phi_1,\phi_2}} = L^{S^{\phi_1,\phi_2}}_{0,0}$ defined on the set of functions satisfying the vertex conditions

$$i\left(S^{\phi_1,\phi_2} - I\right)\vec{u} = \left(S^{\phi_1,\phi_2} + I\right)\partial_n\vec{u},\tag{16.23}$$

which are obtained from (16.20) by substituting the matrix *S* with

$$S^{\phi_1, \phi_2} = D^{-1}SD, \quad D = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & e^{i\phi_1} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\phi_2} \end{pmatrix},\tag{16.24}$$

and the vector of extended derivatives $\partial\vec{u}$ is replaced with the vector of normal derivatives

$$
\partial\_n \vec{u} = \begin{pmatrix} u'(x\_1) \\ -u'(x\_2) \\ u'(x\_3) \\ -u'(x\_4) \end{pmatrix}.
$$

In general the spectrum of the operator $L^{S^{\phi_1,\phi_2}}$ depends on the fluxes, but it may happen that only the sum of the fluxes counts. For example, if

$$S = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$

then the graph is equivalent to the loop of length $\mathcal{L} = x_2 - x_1 + x_4 - x_3$ and the spectrum obviously depends on the sum of the fluxes $\phi_1 + \phi_2$. This case is not interesting, since the anomalous behavior of the spectrum is due to the choice of vertex conditions that do not respect the geometry of the graph: the vertex $V$ can be divided into two vertices $V^1 = \{x_1, x_4\}$ and $V^2 = \{x_2, x_3\}$, and the vertex conditions connect separately the boundary values corresponding to the two new vertices. Such boundary conditions do not correspond to the figure-eight graph but rather to the loop graph.

Another degenerate example is when

$$S = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$

In this case the eigenvalues can be divided into two series, each one depending on one of the fluxes $\phi_1$ or $\phi_2$ only. The vertex conditions connect together the pairs of endpoints $(x_1, x_2)$ and $(x_3, x_4)$ separately. The corresponding metric graph is not $\Gamma_{(2.4)}$ but rather two separate loops formed by the two edges. This case is not interesting for us either.

In what follows we study the magnetic Schrödinger operator corresponding to the vertex conditions given by the following vertex scattering matrix:

$$S = \begin{pmatrix} 0 & 0 & \alpha & \beta \\ 0 & 0 & -\beta & \alpha \\ \alpha & -\beta & 0 & 0 \\ \beta & \alpha & 0 & 0 \end{pmatrix}, \quad \alpha, \beta \in \mathbb{R}, \ \alpha^2 + \beta^2 = 1. \tag{16.25}$$

This unitary matrix connects together boundary values at all four endpoints and therefore is properly connecting. One may visualize this by the following picture, where all possible scattering processes are indicated by curves (Fig. 16.2).

It will be shown that interesting effects can be observed if the probabilities of these passages are equal, which corresponds to the choice $\alpha = \beta = 1/\sqrt{2}$:

$$S = \begin{pmatrix} 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ 0 & 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 & 0\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \end{pmatrix}. \tag{16.26}$$

**Fig. 16.2** Visual representation of the connections determined by the vertex conditions with *S* given by (16.25): the curves indicate possible passages

## *16.3.2 Explicit Calculation of the Spectrum*

Our immediate goal is to derive the equation describing the spectrum of the operator $L^{S^{\phi_1,\phi_2}}$ depending on the fluxes $\phi_1$ and $\phi_2$ and the parameters $\alpha$ and $\beta$. The matrix $S$, and therefore the matrix $S^{\phi_1,\phi_2} = D^{-1}SD$ appearing in the vertex conditions, is not only unitary but also Hermitian. It follows that the corresponding vertex scattering matrix $S_v$ does not depend on the energy and coincides with

$$S^{\phi_1, \phi_2} = \begin{pmatrix} 0 & 0 & \alpha & e^{i\phi_2}\beta \\ 0 & 0 & -e^{-i\phi_1}\beta & e^{-i(\phi_1 - \phi_2)}\alpha \\ \alpha & -e^{i\phi_1}\beta & 0 & 0 \\ e^{-i\phi_2}\beta & e^{i(\phi_1 - \phi_2)}\alpha & 0 & 0 \end{pmatrix}. \tag{16.27}$$

The differential operator on the edges does not contain any electric or magnetic potential, hence the corresponding edge scattering matrix is

$$S\_{\mathfrak{e}} = \begin{pmatrix} 0 & e^{ik\ell\_1} & 0 & 0 \\ e^{ik\ell\_1} & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{ik\ell\_2} \\ 0 & 0 & e^{ik\ell\_2} & 0 \end{pmatrix},\tag{16.28}$$

where $\ell_j = x_{2j} - x_{2j-1}$, $j = 1, 2$, are the lengths of the edges. Then all non-zero eigenvalues are given by the solutions of the equation

$$\det\left(S\_{\mathfrak{e}}(k)S^{\phi\_1,\phi\_2} - \mathbb{I}\right) = 0,\tag{16.29}$$

which is equivalent to

$$\det \begin{pmatrix} -1 & 0 & -e^{ik\ell_1}e^{-i\phi_1}\beta & e^{ik\ell_1}e^{-i(\phi_1-\phi_2)}\alpha \\ 0 & -1 & e^{ik\ell_1}\alpha & e^{ik\ell_1}e^{i\phi_2}\beta \\ e^{ik\ell_2}e^{-i\phi_2}\beta & e^{ik\ell_2}e^{i(\phi_1-\phi_2)}\alpha & -1 & 0 \\ e^{ik\ell_2}\alpha & -e^{ik\ell_2}e^{i\phi_1}\beta & 0 & -1 \end{pmatrix} = 0$$

$$\Leftrightarrow 1 + \left(\alpha^2 + \beta^2\right)^2 e^{2ik(\ell\_1 + \ell\_2)} + 2\left(\cos(\phi\_1 + \phi\_2)\beta^2 - \cos(\phi\_1 - \phi\_2)\alpha^2\right) e^{ik(\ell\_1 + \ell\_2)} = 0. \tag{16.31}$$

Taking into account that $\alpha^2 + \beta^2 = 1$ and $e^{ik(\ell_1+\ell_2)} \neq 0$ we arrive at the following secular equation

$$\cos k(\ell\_1 + \ell\_2) = \alpha^2 \cos(\phi\_1 - \phi\_2) - \beta^2 \cos(\phi\_1 + \phi\_2). \tag{16.32}$$

The right hand side of this equation is a real constant between $-1$ and $1$, hence the solutions to the equation form a sequence that is periodic in $k$. The corresponding eigenvalues $\lambda = k^2$ in general depend on both magnetic fluxes $\phi_1$ and $\phi_2$.
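As a sanity check, any solution of the secular equation (16.32) should make the determinant in (16.29) vanish. The following sketch (the parameter values are arbitrary test choices) builds the matrices (16.27) and (16.28) directly:

```python
import numpy as np

def vertex_S(alpha, beta, p1, p2):
    """Vertex scattering matrix S^{phi_1,phi_2} of Eq. (16.27)."""
    return np.array([
        [0, 0, alpha, np.exp(1j*p2)*beta],
        [0, 0, -np.exp(-1j*p1)*beta, np.exp(-1j*(p1 - p2))*alpha],
        [alpha, -np.exp(1j*p1)*beta, 0, 0],
        [np.exp(-1j*p2)*beta, np.exp(1j*(p1 - p2))*alpha, 0, 0]])

def edge_S(k, l1, l2):
    """Edge scattering matrix S_e(k) of Eq. (16.28)."""
    e1, e2 = np.exp(1j*k*l1), np.exp(1j*k*l2)
    return np.array([[0, e1, 0, 0], [e1, 0, 0, 0],
                     [0, 0, 0, e2], [0, 0, e2, 0]])

alpha, beta = 0.6, 0.8          # alpha^2 + beta^2 = 1
p1, p2, l1, l2 = 0.7, 0.3, 1.0, 2.0

# solve cos k(l1 + l2) = alpha^2 cos(p1 - p2) - beta^2 cos(p1 + p2)
rhs = alpha**2*np.cos(p1 - p2) - beta**2*np.cos(p1 + p2)
k = np.arccos(rhs)/(l1 + l2)

# such k must annihilate the determinant in Eq. (16.29)
det = np.linalg.det(edge_S(k, l1, l2) @ vertex_S(alpha, beta, p1, p2)
                    - np.eye(4))
print(abs(det))                 # numerically zero
```

Any other admissible choice of $\alpha$, $\beta$, $\phi_{1,2}$, $\ell_{1,2}$ works the same way.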

An interesting phenomenon occurs if one chooses $\alpha = \beta = 1/\sqrt{2}$, i.e. the matrix $S$ given by (16.26). The secular equation (16.32) takes the form

$$\begin{split} \cos k(\ell\_1 + \ell\_2) &= \frac{\cos(\phi\_1 - \phi\_2) - \cos(\phi\_1 + \phi\_2)}{2} \\ &= \sin \phi\_1 \sin \phi\_2. \end{split} \tag{16.33}$$

It follows that if one of the magnetic fluxes is an integer multiple of $\pi$, then the spectrum is independent of the other flux. This is a trivial consequence of the secular equation (16.33), but we are interested in an intuitive explanation of this phenomenon. The Aharonov-Bohm effect tells us that the spectrum of a system like the magnetic Schrödinger operator on $\Gamma_{(2.4)}$ should depend on the magnetic fluxes. This dependence is damped only in very special cases. What is so special when one of the fluxes is zero? An explicit answer to this question is given in the following section. We use the trace formula connecting the spectrum of a quantum graph to the set of periodic orbits on the underlying metric graph.

In what follows we are interested in this special case; therefore, without loss of generality, let us assume that $\phi_1 = 0$.

Before proceeding, let us determine whether $\lambda = 0$ is an eigenvalue of the operator $L_{S^{\phi_1,\phi_2}}$ or not. $k = 0$ is a solution to the secular equation only if $\sin\phi_1 \sin\phi_2 = 1$. If one of the fluxes is zero, then $k = 0$ is not a solution to the secular equation. It follows that the *algebraic multiplicity*<sup>1</sup> $m_a(0)$ [332, 346] is zero.

Let us turn to the calculation of the spectral multiplicity $m_s(0)$—the number of linearly independent solutions to the equation $L_{S^{\phi_1,\phi_2}}\psi = 0$. In order to underline that only the lengths of the edges are important, let us parameterize the edges as follows

$$[x_1, x_2] = [0, \ell_1], \qquad [x_3, x_4] = [0, \ell_2].$$

All solutions to the differential equation are then given by:

$$\psi(x) = \begin{cases} a_1 x + b_1, & x \in [x_1, x_2], \\ a_2 x + b_2, & x \in [x_3, x_4]. \end{cases} \tag{16.34}$$

<sup>1</sup> The algebraic multiplicity is the order of zero in the secular equation, see Chap. 8.

Then at the endpoints we have

$$
\vec{\psi} = \begin{pmatrix} b_1 \\ a_1\ell_1 + b_1 \\ b_2 \\ a_2\ell_2 + b_2 \end{pmatrix}, \quad \partial_n\vec{\psi} = \begin{pmatrix} a_1 \\ -a_1 \\ a_2 \\ -a_2 \end{pmatrix}. \tag{16.35}
$$

The matrix $S^{\phi_1,\phi_2}$ is unitary and Hermitian, hence its eigenvalues are just $\pm 1$. Therefore the vertex conditions (16.23) are satisfied if and only if both the left and the right hand sides are equal to zero:

$$
\begin{pmatrix}
-\frac{1}{2} & 0 & \frac{1}{2\sqrt{2}} & \frac{e^{-i\phi_2}}{2\sqrt{2}} \\
0 & -\frac{1}{2} & -\frac{e^{i\phi_1}}{2\sqrt{2}} & \frac{e^{i\phi_1 - i\phi_2}}{2\sqrt{2}} \\
\frac{1}{2\sqrt{2}} & -\frac{e^{-i\phi_1}}{2\sqrt{2}} & -\frac{1}{2} & 0 \\
\frac{e^{i\phi_2}}{2\sqrt{2}} & \frac{e^{-i\phi_1 + i\phi_2}}{2\sqrt{2}} & 0 & -\frac{1}{2}
\end{pmatrix}
\begin{pmatrix} a_1 \\ -a_1 \\ a_2 \\ -a_2 \end{pmatrix} = 0,\tag{16.36}
$$

$$
\begin{pmatrix}
\frac{1}{2} & 0 & \frac{1}{2\sqrt{2}} & \frac{e^{-i\phi\_2}}{2\sqrt{2}} \\
0 & \frac{1}{2} & -\frac{e^{i\phi\_1}}{2\sqrt{2}} & \frac{e^{i\phi\_1 - i\phi\_2}}{2\sqrt{2}} \\
\frac{1}{2\sqrt{2}} & -\frac{e^{-i\phi\_1}}{2\sqrt{2}} & \frac{1}{2} & 0 \\
\frac{e^{i\phi\_2}}{2\sqrt{2}} & \frac{e^{-i\phi\_1 + i\phi\_2}}{2\sqrt{2}} & 0 & \frac{1}{2}
\end{pmatrix}
\begin{pmatrix}
b\_1 \\
a\_1l\_1 + b\_1 \\
b\_2 \\
a\_2l\_2 + b\_2
\end{pmatrix} = 0.
\tag{16.37}
$$

We omit the tedious computations and give just a sketch. From the first equation we obtain $a_1 = a_2 = 0$; plugging this result into the second equation we obtain $b_1 = b_2 = 0$ for any value of $\phi_2$. This proves that $\lambda = 0$ is not an eigenvalue, hence the spectral multiplicity (as well as the algebraic multiplicity) is zero in this case.
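The same conclusion can be checked numerically. The sketch below assumes the vertex conditions in the parametrisation $i(S^{\phi_1,\phi_2} - I)\vec{\psi} = (S^{\phi_1,\phi_2} + I)\partial\vec{\psi}$ (the form used in (17.5) below; (16.23) may differ in notation) and verifies that the resulting linear system for $(a_1, b_1, a_2, b_2)$ has full rank when $\phi_1 = 0$, for every $\phi_2$:

```python
import numpy as np

def vertex_S(p1, p2):
    """S^{phi_1,phi_2} of Eq. (16.27) with alpha = beta = 1/sqrt(2)."""
    a = b = 1/np.sqrt(2)
    return np.array([
        [0, 0, a, np.exp(1j*p2)*b],
        [0, 0, -np.exp(-1j*p1)*b, np.exp(-1j*(p1 - p2))*a],
        [a, -np.exp(1j*p1)*b, 0, 0],
        [np.exp(-1j*p2)*b, np.exp(1j*(p1 - p2))*a, 0, 0]])

l1, l2, p1 = 1.0, 2.0, 0.0
# unknowns u = (a1, b1, a2, b2): psi = a_j x + b_j on the two edges
A = np.array([[0, 1, 0, 0], [l1, 1, 0, 0],   # boundary values psi-vec = A u
              [0, 0, 0, 1], [0, 0, l2, 1]], dtype=complex)
B = np.array([[1, 0, 0, 0], [-1, 0, 0, 0],   # normal derivatives = B u
              [0, 0, 1, 0], [0, 0, -1, 0]], dtype=complex)

for p2 in np.linspace(0.0, 2*np.pi, 9):
    S = vertex_S(p1, p2)
    # vertex conditions i(S - I) psi-vec = (S + I) d-psi-vec
    M = 1j*(S - np.eye(4)) @ A - (S + np.eye(4)) @ B
    assert np.linalg.matrix_rank(M) == 4   # only the trivial solution a = b = 0
```

Full rank for all tested $\phi_2$ means the only linear solution is $\psi \equiv 0$, i.e. $\lambda = 0$ is not an eigenvalue.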

Summing up, the spectrum of the magnetic Schrödinger operator on $\Gamma_{(2.4)}$ is given by the solutions of the secular equation

$$
\cos k\mathcal{L} = 0, \ \mathcal{L} = \ell\_1 + \ell\_2,\tag{16.38}
$$

provided one of the magnetic fluxes is zero.

## *16.3.3 Topological Reasons for Damping*

We have seen that in the case $\phi_1 = 0$ the spectrum does not depend on the flux $\phi_2$: the Aharonov-Bohm effect is damped, which contradicts our intuition. The main goal of this section is to show that this effect has a topological explanation. We are going to use the trace formula (see Theorem 8.7) connecting the spectrum of a quantum graph to the set of periodic orbits on the metric graph. It will be shown that orbits that *feel* the magnetic flux $\phi_2$ give zero total contribution to the trace formula.

By a periodic orbit we understand any oriented closed path on the graph $\Gamma_{(2.4)}$ which is allowed to turn back at the unique vertex only. Paths having opposite directions are considered to be different. We repeat the trace formula (8.20):

$$\begin{aligned} \mu(k) &:= 2m_s(0)\delta(k) + \sum_{k_n \neq 0} \left( \delta(k - k_n) + \delta(k + k_n) \right) \\ &= \left(2m_s(0) - m_a(0)\right)\delta(k) + \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum_{\gamma \in \mathcal{P}} l(\mathrm{prim}\,(\gamma))\, S_v(\gamma)\cos kl(\gamma). \end{aligned} \tag{16.39}$$

The fluxes $\phi_1$ and $\phi_2$ enter through the products $S_v(\gamma)$, since the entries of $S_v \equiv S^{\phi_1,\phi_2}$ depend on the fluxes (see formula (16.27)). Therefore it is natural to expect that the left hand side depends on the fluxes as well. On the other hand, the left hand side of (16.39) is determined by the spectrum of $L_{S^{\phi_1,\phi_2}}$, which in the case $\phi_1 = 0$ is independent of $\phi_2$. More precisely, the spectrum is determined by $\cos k\mathcal{L} = 0$ (we have already shown that $\lambda = 0$ is not an eigenvalue in this case, $m_s(0) = 0$):

$$k\_n = \frac{\pi}{2\mathcal{L}} + \frac{\pi}{\mathcal{L}}n, \ n = 0, 1, 2, 3, \dots \tag{16.40}$$

Then the left hand side of the trace formula can be written as

$$\begin{aligned} \mu(k) &= \sum_{k_n \neq 0} \left( \delta(k - k_n) + \delta(k + k_n) \right) \\ &= \sum_{n \in \mathbb{Z}} \delta\left(k - \left(\frac{\pi}{2\mathcal{L}} + \frac{\pi}{\mathcal{L}} n\right)\right) \\ &= \sum_{m \in \mathbb{Z}} \delta\left(k - \frac{\pi}{2\mathcal{L}} m\right) - \sum_{m \in \mathbb{Z}} \delta\left(k - \frac{\pi}{\mathcal{L}} m\right). \end{aligned}$$

We now use the Poisson summation formula

$$\sum_{n \in \mathbb{Z}} \delta(x - Tn) = \frac{1}{T} \sum_{m \in \mathbb{Z}} e^{-i2\pi \frac{m}{T} x}$$

and rewrite the last expression as follows:

$$\mu(k) = \sum\_{n \in \mathbb{Z}} \delta \left( k - \left( \frac{\pi}{2\mathcal{L}} + \frac{\pi}{\mathcal{L}} n \right) \right) = \frac{2\mathcal{L}}{\pi} \sum\_{m \in \mathbb{Z}} e^{-i4\mathcal{L}mk} - \frac{\mathcal{L}}{\pi} \sum\_{m \in \mathbb{Z}} e^{-i2\mathcal{L}mk} \,. \tag{16.41}$$

This formula represents the distribution $\mu(k)$ as a formal exponential series. This series is independent of $\phi_2$, while the series on the right hand side of (16.39) formally contains $\phi_2$, since $S_v(\gamma)$ depends on the second flux. Let us examine the series over all periodic orbits in more detail in order to understand why all terms containing $\phi_2$ cancel.
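The Poisson summation formula used above can be tested against a rapidly decaying function: with the Gaussian $f(x) = e^{-\pi x^2}$, which is its own Fourier transform, both sides of $\sum_n f(Tn) = \frac{1}{T}\sum_m \hat{f}(m/T)$ can be computed directly (a numerical illustration, not needed for the argument):

```python
import numpy as np

# Poisson summation applied to the Gaussian f(x) = exp(-pi x^2),
# whose Fourier transform (kernel e^{-2 pi i xi x}) is f itself.
T = 0.7
n = np.arange(-60, 61)
lhs = np.sum(np.exp(-np.pi*(T*n)**2))        # sum_n f(T n)
rhs = np.sum(np.exp(-np.pi*(n/T)**2))/T      # (1/T) sum_m fhat(m/T)
print(lhs - rhs)                             # numerically zero
```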

As we have shown, both the algebraic and spectral multiplicities of $k = 0$ are equal to zero, hence the right hand side of the trace formula can be written as

$$
\mu(k) = \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum_{\gamma \in \mathcal{P}} l(\mathrm{prim}\,(\gamma))\, S_v(\gamma) \cos kl(\gamma).
$$

Let us note first that the sum in the trace formula contains contributions only from the paths that go around the left and the right loops equally many times. This is due to the fact that the coefficients $12$, $21$, $34$, and $43$ of the vertex scattering matrix are zero:

$$(S)_{12} = (S)_{21} = (S)_{34} = (S)_{43} = 0.$$

Therefore the length of each path with nontrivial $S_v(\gamma)$ is an integer multiple of the total length $\mathcal{L} := \ell_1 + \ell_2$.

The sum over all paths is taken over all closed paths, and $l(\mathrm{prim}\,(\gamma))$ is the length of the corresponding primitive path. It will be convenient for us to distinguish paths with different starting edges—the first edge the path comes across. Then the sum $\sum_{\gamma \in \mathcal{P}} l(\mathrm{prim}\,(\gamma))\, S_v(\gamma) \cos kl(\gamma)$ can be written as two sums—over the paths that go around the left loop first and over the paths that go around the right loop first:

$$\begin{split} \mu(k) &= \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} \sum_{\gamma \in \mathcal{P}} l(\mathrm{prim}\,(\gamma))\, S_v(\gamma) \cos kl(\gamma) \\ &= \frac{\mathcal{L}}{\pi} + \frac{\ell_1}{\pi} \sum_{\gamma \in \mathbb{P}_l} S_v(\gamma) \cos kl(\gamma) + \frac{\ell_2}{\pi} \sum_{\gamma \in \mathbb{P}_r} S_v(\gamma) \cos kl(\gamma), \end{split} \tag{16.42}$$

where $\mathbb{P}_{l,r}$ denote the sets of paths in which paths with different starting edges are considered different. The lower indices $l$ and $r$ indicate whether the path goes around the left or the right loop first. Each of the two sums can be treated in a similar way.

Let us first consider the series $\sum_{\gamma \in \mathbb{P}_l} S_v(\gamma) \cos kl(\gamma)$ over all paths starting by going into the left edge. After going around the left loop the path has to go around the right loop and then again around the left one: the left and right loops appear one after another. Every such path can be uniquely parametrised by a sequence of indices $\nu_j = \pm$ indicating whether the path goes around the corresponding loop in the positive ($+$) (clockwise, following the orientation of the edges) or negative ($-$) (anticlockwise) direction. All odd indices correspond to the left loop, all even ones to the right loop. The number of signs is even, which reflects the fact that every such path goes around the left and right loops equally many times. For example, the path indicated in Fig. 16.3 is parametrised as $(+, -, +, +)$.

**Fig. 16.3** A path of length $2(\ell_1 + \ell_2)$

The corresponding product of vertex scattering coefficients is

$$\begin{aligned} S_v(\gamma) &= (S_v)_{14} (S_v)_{32} (S_v)_{13} (S_v)_{42} \\ &= e^{i\phi_2}\beta \cdot (-e^{i\phi_1}\beta) \cdot \alpha \cdot e^{i(\phi_1 - \phi_2)}\alpha \\ &= -e^{2i\phi_1}\alpha^2\beta^2 = \frac{-1}{4}\, e^{i2\phi_1}. \end{aligned}$$

One may calculate the same product using the original vertex scattering matrix (16.26), taking into account that each time the path goes along the left or the right loop the product gains the phase factor $e^{\pm i\phi_1}$ or $e^{\pm i\phi_2}$, respectively. The sign corresponds to the positive or negative direction. Each time the path crosses the middle vertex, $S_v(\gamma)$ gains an extra factor $\pm\frac{1}{\sqrt{2}}$. Note that only the coefficients corresponding to the transitions $2 \to 3$ and $3 \to 2$ have a minus sign; all other coefficients are positive:

$$S_v(\gamma) = \underbrace{e^{i\phi_1}}_{\text{left loop}} \underbrace{\frac{1}{\sqrt{2}}}_{2\to 4} \underbrace{e^{-i\phi_2}}_{\text{right loop}} \underbrace{\frac{1}{\sqrt{2}}}_{3\to 1} \underbrace{e^{i\phi_1}}_{\text{left loop}} \underbrace{\frac{-1}{\sqrt{2}}}_{2\to 3} \underbrace{e^{i\phi_2}}_{\text{right loop}} \underbrace{\frac{1}{\sqrt{2}}}_{4\to 1} = \frac{-1}{4}\, e^{i2\phi_1}.$$
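This arithmetic is easy to confirm directly from the matrix (16.27); the sketch below multiplies the four scattering coefficients of the path $(+,-,+,+)$ (flux values are arbitrary test choices, indices are 0-based):

```python
import numpy as np

a = b = 1/np.sqrt(2)
p1, p2 = 0.7, 0.3
# S^{phi_1,phi_2} of Eq. (16.27) with alpha = beta = 1/sqrt(2)
S = np.array([
    [0, 0, a, np.exp(1j*p2)*b],
    [0, 0, -np.exp(-1j*p1)*b, np.exp(-1j*(p1 - p2))*a],
    [a, -np.exp(1j*p1)*b, 0, 0],
    [np.exp(-1j*p2)*b, np.exp(1j*(p1 - p2))*a, 0, 0]])

# (S)_{14} (S)_{32} (S)_{13} (S)_{42} in 0-based indexing
prod = S[0, 3]*S[2, 1]*S[0, 2]*S[3, 1]
assert np.isclose(prod, -0.25*np.exp(2j*p1))   # = -(1/4) e^{2 i phi_1}
```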

It will be convenient to see the product $S_v(\gamma)$ corresponding to the path $(\nu_1, \nu_2, \dots, \nu_{2n})$ as divided into three factors:

• the product of all phase factors

$$e^{i\sum_{j=1}^{n}\nu_{2j-1}\phi_1} \cdot e^{i\sum_{j=1}^{n}\nu_{2j}\phi_2};$$

• the product of absolute values of scattering coefficients

$$\left(\frac{1}{\sqrt{2}}\right)^{2n} = \frac{1}{2^n};$$

• the product of sign factors ±1*.*

Our next claim is that only the paths that every second time go around the right loop in a different direction give a nonzero contribution to the trace formula. Consider a path $p$ that contains the sequence $(\dots, +_{2m}, +_{2m+1}, +_{2m+2}, \dots)$. Then the contribution from the path $p'$ obtained from $p$ by reversing the direction with the number $2m+1$, i.e. given by $(\dots, +_{2m}, -_{2m+1}, +_{2m+2}, \dots)$, cancels the contribution from $p$. Really, the phase contributions from $p$ and $p'$ are the same and the absolute values are also the same, while the product of signs for $p'$ contains $(-1) \times 1$ corresponding to the transitions $3 \to 2$ and $1 \to 4$, in contrast to the product $1 \times 1$ appearing in the product for $p$ (corresponding to the transitions $3 \to 1$ and $2 \to 4$; all other coefficients are the same). Similarly the contributions from the paths given by

$$(\dots, \underbrace{-}_{2m}, \underbrace{-}_{2m+1}, \underbrace{-}_{2m+2}, \dots) \quad \text{and} \quad (\dots, \underbrace{-}_{2m}, \underbrace{+}_{2m+1}, \underbrace{-}_{2m+2}, \dots)$$

cancel each other.

Assume now that the path $p$ contains the sequence $(\dots, +_{2m}, +_{2m+1}, -_{2m+2}, \dots)$; then the contribution from the path $p'$ corresponding to $(\dots, +_{2m}, -_{2m+1}, -_{2m+2}, \dots)$ is just the same. It follows that only the paths of the form $(\nu_1, +, \nu_3, -, \nu_5, +, \nu_7, -, \dots)$ and $(\nu_1, -, \nu_3, +, \nu_5, -, \nu_7, +, \dots)$ survive in the series. Every such path has discrete length (the number of edges it comes across) equal to a multiple of $4$. The total phase contribution from such paths is zero, since we assumed $\phi_1 = 0$ and the $\phi_2$-phases alternate in sign. It follows that the sum over the periodic paths starting with the left loop does not depend on $\phi_2$. A similar result holds for the other sum, explaining why the spectrum of the magnetic Schrödinger operator on $\Gamma_{(2.4)}$ does not depend on $\phi_2$, provided $\phi_1 = 0$.

Let us continue the calculation of the sum over the periodic orbits. We have seen that only orbits of lengths $2n\mathcal{L}$ (discrete length $4n$) contribute to the series. Consider for example the orbits of length $2\mathcal{L}$. Only the orbits of the form $(\nu_1, +, \nu_3, -)$ and $(\nu_1, -, \nu_3, +)$ give nonzero contributions. The phase contribution is $e^{i0} = 1$. The absolute value contribution is $\left(\frac{1}{\sqrt{2}}\right)^4 = \frac{1}{4}$. The sign contribution is $-1$. Altogether there are $4 \times 2$ such orbits, since the signs $\nu_1, \nu_3$ can be chosen freely. So the total contribution to the series is $-2\cos k2\mathcal{L}$.

Similarly, the contribution from the orbits of length $2n\mathcal{L}$ is $2(-1)^n \cos k2\mathcal{L}n$. Taking into account the contribution from the paths from $\mathbb{P}_r$ (starting by first going around the right loop) we get the following expression

$$\begin{split} \mu(k) &= \frac{\mathcal{L}}{\pi} + \frac{1}{\pi} (\underbrace{\ell_1 + \ell_2}_{=\mathcal{L}}) \sum_{n=1}^{\infty} 2(-1)^n \cos k2\mathcal{L}n \\ &= \frac{\mathcal{L}}{\pi} + \frac{\mathcal{L}}{\pi} \left( \sum_{m=1}^{\infty} \left( 2e^{ik4\mathcal{L}m} - e^{ik2\mathcal{L}m} \right) + \sum_{m=1}^{\infty} \left( 2e^{-ik4\mathcal{L}m} - e^{-ik2\mathcal{L}m} \right) \right) \\ &= \frac{2\mathcal{L}}{\pi} \sum_{m \in \mathbb{Z}} e^{-ik4\mathcal{L}m} - \frac{\mathcal{L}}{\pi} \sum_{m \in \mathbb{Z}} e^{-ik2\mathcal{L}m}, \end{split}$$

which coincides with the expression (16.41) obtained using Poisson summation formula.
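The orbit count can be cross-checked without enumerating paths by hand: $\operatorname{tr}\left((S_e(k)S^{\phi_1,\phi_2})^4\right)$ collects the amplitudes of all closed discrete paths of four steps, so for $\alpha = \beta = 1/\sqrt{2}$ it should equal $-4\,e^{2ik\mathcal{L}}$ (a contribution $-2$ from each starting side) whenever $\phi_1 = 0$, independently of $\phi_2$. A numerical sketch with arbitrary test values:

```python
import numpy as np

def U(k, p1, p2, l1=1.0, l2=2.0):
    """One-step transfer matrix S_e(k) S^{phi_1,phi_2}, alpha = beta = 1/sqrt(2)."""
    a = b = 1/np.sqrt(2)
    S = np.array([
        [0, 0, a, np.exp(1j*p2)*b],
        [0, 0, -np.exp(-1j*p1)*b, np.exp(-1j*(p1 - p2))*a],
        [a, -np.exp(1j*p1)*b, 0, 0],
        [np.exp(-1j*p2)*b, np.exp(1j*(p1 - p2))*a, 0, 0]])
    e1, e2 = np.exp(1j*k*l1), np.exp(1j*k*l2)
    Se = np.array([[0, e1, 0, 0], [e1, 0, 0, 0],
                   [0, 0, 0, e2], [0, 0, e2, 0]])
    return Se @ S

k, L = 0.37, 3.0   # L = l1 + l2

def coef(p1, p2):
    """Coefficient of e^{2ikL} in the four-step trace."""
    return np.trace(np.linalg.matrix_power(U(k, p1, p2), 4))*np.exp(-2j*k*L)

# phi_1 = 0: the length-2L coefficient equals -4 for every phi_2
assert np.allclose([coef(0.0, p2) for p2 in (0.3, 1.1, 2.5)], -4)
# phi_1 != 0: the coefficient does depend on phi_2
assert not np.isclose(coef(0.5, 0.3), coef(0.5, 1.1))
```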

The calculations carried out above show the reason why the Aharonov-Bohm effect is not present if one of the fluxes is zero: the contributions from the periodic orbits going around the loop with non-zero flux cancel each other. On the other hand, if one of the fluxes is different from zero, then the spectrum depends on the other flux, as can be seen from Eq. (16.32). A similar result holds if one of the fluxes is an integer multiple of $\pi$. It is clear that the figure-eight graph is not unique: other graphs exhibiting the same effect can be found. The trace formula together with our explicit calculations provides a recipe for constructing such graphs.

**Problem 74** Investigate whether topological damping of Aharonov-Bohm effect can be observed for an equilateral flower graph.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 17 M-Functions: Definitions and Examples**

## **17.1 The Graph M-Function**

M-functions associated with quantum graphs provide an efficient tool not only to describe spectral properties of quantum graphs, but also to solve the inverse problems. The goal of this chapter is to give a self-consistent introduction to the theory of graph M-functions.

## *17.1.1 Motivation and Historical Hints*

The classical Titchmarsh-Weyl M-function $\mathbf{M}(\lambda)$ (see [501]) connects together the Dirichlet and Neumann data for any solution $\psi$ of the stationary Schrödinger equation on the half-line $[0, \infty)$:

$$-\psi''(\lambda, x) + q(x)\psi(\lambda, x) = \lambda\psi(\lambda, x) \Rightarrow \mathbf{M}(\lambda) = \frac{\psi'(\lambda, 0)}{\psi(\lambda, 0)}, \quad \operatorname{Im}\lambda \neq 0.$$

This function not only accumulates all information about the spectral properties of the one-dimensional Schrödinger operator with an arbitrary boundary condition at the origin, but can also be used to determine the potential $q$. The point $x = 0$ is the unique boundary point, and the inverse problem can be seen as the recovery of the potential from boundary observations. Our goal is to generalise this object to the case of quantum graphs.
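For the free half-line ($q \equiv 0$) everything is explicit: the square-integrable solution is $\psi(\lambda, x) = e^{i\sqrt{\lambda}x}$ with $\operatorname{Im}\sqrt{\lambda} > 0$, so $\mathbf{M}(\lambda) = i\sqrt{\lambda}$. A sketch checking its Herglotz property (the sign of $\operatorname{Im}\mathbf{M}$ follows the sign of $\operatorname{Im}\lambda$):

```python
import numpy as np

def M_free(lam):
    """Titchmarsh-Weyl function i*sqrt(lam) for q = 0, branch Im sqrt > 0."""
    rt = np.sqrt(complex(lam))
    if rt.imag < 0:
        rt = -rt
    return 1j*rt

for lam in (1 + 2j, -3 + 0.5j, 2 - 1j):
    M = M_free(lam)
    # Herglotz (Nevanlinna) property: Im M and Im lambda have the same sign
    assert np.sign(M.imag) == np.sign(complex(lam).imag)
```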

First of all we need to agree on what should be understood as a graph's boundary. One possibility could be to take all endpoints of the intervals forming the edges. We have already explored this direction in Sect. 5.3, where we derived the characteristic equation using the edge M-functions. In this approach the graph is considered as a collection of intervals and it is hard to see the graph's topological structure. One may say that this set is too large. Another possibility could be to identify the boundary of $\Gamma$ with all vertices of degree one. The boundary defined in this way appears rather natural and has a clear visual interpretation, especially if the graph under consideration is a tree plotted on a sheet of paper. This definition does not work for all graphs since there are obviously graphs without any degree one vertices. Therefore in what follows we shall speak about the graph's contact set $\partial\Gamma$—the set of vertices that are used to approach the graph $\Gamma$. Internal points on the edges can be seen as degree two vertices, hence the contact set may contain any finite set of points in $\Gamma$.

We shall also assume that the vertex conditions at the contact vertices are standard. This restriction is not essential, but will make all formulas more transparent.

## *17.1.2 The Formal Definition*

Let $\Gamma$ be a finite compact metric graph formed by $N$ edges joined together at $M$ vertices $V^m$. The **contact set** $\partial\Gamma$ is a fixed arbitrary subset of the vertices. Without loss of generality we assume that the vertices are enumerated so that the contact set is formed by the first $M_\partial \geq 1$ vertices: $\partial\Gamma = \{V^j\}_{j=1}^{M_\partial}$. Let us denote by $D_\partial$ the total degree of all vertices from $\partial\Gamma$

$$D\_{\partial} = \sum\_{j=1}^{M\_{\partial}} d(V^j). \tag{17.1}$$

All vertices in $\Gamma$ are thus divided into the contact vertices $V^1, \dots, V^{M_\partial}$ and the internal vertices $V^{M_\partial+1}, \dots, V^M$.


The magnetic Schrödinger operator $L^S_{q,a}(\Gamma)$ is defined by standard vertex conditions on the contact set $\partial\Gamma$ and arbitrary Hermitian conditions at all internal vertices $V^{M_\partial+1}, \dots, V^M$.

Consider any nonreal $\lambda$, $\operatorname{Im}\lambda \neq 0$, and any function $\psi(\lambda, x) \in W_2^2(\Gamma \setminus \mathbf{V})$ which is a solution to the stationary magnetic Schrödinger equation on every edge:

$$-\left(\frac{d}{dx} - ia(\mathbf{x})\right)^2 \psi(\lambda, \mathbf{x}) + q(\mathbf{x})\psi(\lambda, \mathbf{x}) = \lambda \psi(\lambda, \mathbf{x}).\tag{17.2}$$

Every such function is continuous and has a continuous first derivative—this is proven by repeating the arguments presented in Sect. 4.1, substituting in Eqs. (4.5) and (4.6) the function $f$ with $\lambda\psi$.

It follows that the limiting values of $\psi$ at the vertices are well-defined. We assume that $\psi$ satisfies the vertex conditions (3.51) at all internal vertices, but just the continuity condition at the contact vertices. Note that no condition on the derivatives is imposed on $\partial\Gamma$.

The vertex conditions at the internal vertices can be written using a single $(2N - D_\partial) \times (2N - D_\partial)$ unitary matrix $\mathbf{S}^{\mathrm{int}}$ as follows. We first introduce the notations

$$
\vec{\psi}^{\text{int}} = \{\psi(\mathbf{x}\_{j})\}\_{\mathbf{x}\_{j}\notin\partial\Gamma}, \ \partial\vec{\psi}^{\text{int}} = \{\partial\psi(\mathbf{x}\_{j})\}\_{\mathbf{x}\_{j}\notin\partial\Gamma},\tag{17.3}
$$

where the $(2N - D_\partial)$-dimensional vectors $\vec{\psi}^{\mathrm{int}}$ and $\partial\vec{\psi}^{\mathrm{int}}$ collect together all limiting values at the internal vertices. Then, putting together the vertex conditions (4.8) at each internal vertex, we get the $(2N - D_\partial) \times (2N - D_\partial)$ unitary matrix $\mathbf{S}^{\mathrm{int}}$, which is block-diagonal if the endpoints are ordered respecting the vertex structure

$$\mathbf{S}^{\text{int}} = \bigoplus\_{V^m \notin \partial \Gamma} S\_m,\tag{17.4}$$

where $S_m$ are the $d_m \times d_m$ irreducible unitary matrices parameterising the vertex conditions (4.8) at the vertices. In the rest of this chapter we shall always assume that the vertex conditions

$$i\left(\mathbf{S}^{\mathrm{int}} - \mathbf{I}\right)\vec{\psi}^{\mathrm{int}} = \left(\mathbf{S}^{\mathrm{int}} + \mathbf{I}\right)\partial\vec{\psi}^{\mathrm{int}}\tag{17.5}$$

are satisfied.

In addition, we introduce the *M∂* -dimensional vectors of the limiting values at contact vertices

$$\vec{\psi}^{\partial} = \{\psi(V^m)\}\_{m=1}^{M\_{\partial}}, \quad \partial \vec{\psi}^{\partial} = \{\sum\_{\boldsymbol{x}\_{j} \in V^m} \partial \psi(\boldsymbol{x}\_{j})\}\_{m=1}^{M\_{\partial}}.\tag{17.6}$$

It is important to remember that we assumed the function $\psi$ to be continuous at the contact vertices, hence the values $\psi(V^m)$ are well-defined for $m = 1, 2, \dots, M_\partial$. We use the sums of extended normal derivatives $\sum_{x_j \in V^m} \partial\psi(x_j)$ instead of the single values $\partial\psi(x_j)$ at the endpoints for two reasons


**Definition 17.1** The **graph's M-function** $\mathbf{M}_\Gamma(\lambda)$ is the $M_\partial \times M_\partial$ matrix-valued function defined by the map:

$$\mathbf{M}\_{\Gamma}(\lambda) : \vec{\psi}^{\partial} \mapsto \partial \vec{\psi}^{\partial}, \quad \text{Im}\,\lambda \neq 0,\tag{17.7}$$

where $\vec{\psi}^\partial$ and $\partial\vec{\psi}^\partial$ are the limiting values of an arbitrary function $\psi(\lambda, x)$ satisfying the differential equation (17.2) and the vertex conditions (17.5) at the internal vertices, and continuous at the contact vertices.

In order to justify this definition we need to show existence and uniqueness of the solutions to:

**Dirichlet Problem** For an arbitrary vector $\vec{f} \in \mathbb{C}^{M_\partial}$ find a function $\psi$ solving the differential equation (17.2), satisfying the vertex conditions (17.5) at the internal vertices, continuous on $\partial\Gamma$ and satisfying the boundary condition

$$
\psi(\lambda, \cdot)|\_{\partial \Gamma} = \vec{f}.\tag{17.8}
$$

To show the **existence**, let us denote by *L*min the magnetic Schrödinger operator defined on the functions satisfying vertex conditions (17.5) at all internal vertices and both Dirichlet and Neumann conditions at the contact vertices:

$$
\vec{\psi}^{\partial} = 0, \quad \partial \vec{\psi}^{\partial} = 0.
$$

This operator is clearly symmetric; its adjoint, to be denoted by $L^{\max}$, is given by the same differential expression on the domain of functions satisfying the vertex conditions (17.5) at the internal vertices and just the continuity condition at the contact vertices. Then Eq. (17.2) can be written as

$$(L^{\max} - \lambda)\psi(\lambda, x) = 0.$$

Let $\vec{f}$ be any vector from $\mathbb{C}^{M_\partial}$; then obviously there exists a function $w \in \operatorname{Dom}(L^{\max})$ such that $w|_{\partial\Gamma} = \vec{f}$. Consider now the function

$$\psi = -\underbrace{(L^{\mathrm{D}} - \lambda)^{-1}\underbrace{(L^{\max} - \lambda)w}_{\in L_2(\Gamma)}}_{\in \operatorname{Dom}(L^{\mathrm{D}}) \subset \operatorname{Dom}(L^{\max})} + w \in \operatorname{Dom}(L^{\max}),$$

where $L^{\mathrm{D}}$ denotes the Dirichlet magnetic Schrödinger operator defined on the functions satisfying conditions (17.5) at the internal vertices and Dirichlet conditions at the contact vertices. Here we used that the operator $L^{\mathrm{D}}$ is self-adjoint and its resolvent is defined on the whole Hilbert space. Clearly $L^{\max}$ is an extension of $L^{\mathrm{D}}$, implying that $(L^{\max} - \lambda)(L^{\mathrm{D}} - \lambda)^{-1}$ is the identity operator. It follows that $\psi$ belongs to the kernel of $L^{\max} - \lambda$:

$$(L^{\max} - \lambda)\psi = -(L^{\max} - \lambda)w + (L^{\max} - \lambda)w = 0.$$

Moreover, the restriction of $\psi$ to the contact vertices coincides with $\vec{f}$, since

$$\left(L^{\mathrm{D}} - \lambda\right)^{-1}(L^{\max} - \lambda)w \in \operatorname{Dom}\left(L^{\mathrm{D}}\right) \Rightarrow \left(\left(L^{\mathrm{D}} - \lambda\right)^{-1}(L^{\max} - \lambda)w\right)\Big|_{\partial\Gamma} = 0.$$

Summing up, $\psi$ is a solution to the Dirichlet problem formulated above.<sup>1</sup>

To show **uniqueness**, assume on the contrary that two solutions to the Dirichlet problem exist, say the functions $\psi_1$ and $\psi_2$. Then their difference $\psi_2 - \psi_1$ is zero on $\partial\Gamma$ and therefore belongs to the domain of $L^{\mathrm{D}}$. If $\psi_2 - \psi_1$ is not identically equal to zero, then it is an eigenfunction of $L^{\mathrm{D}}$ corresponding to a non-real $\lambda$, but this is impossible since the operator is self-adjoint. It follows that the solution to the Dirichlet problem is unique.

Let $\lambda > 0$ be fixed; then any symmetric matrix with real entries is the M-function of a certain compact metric graph for the given $\lambda$ [227].

To understand spectral properties of metric graphs it will be convenient to look at the **energy curves**—the eigenvalues of the M-function as functions of the real parameter $\lambda$.

## *17.1.3 Examples*

**Example 17.2** The M-function for the Laplacian on the single interval $I = [0, 1]$, the contact set being one of the endpoints, say $\partial I = \{0\}$. Two different vertex conditions at the internal vertex are considered:

(1) Neumann condition at *x* = 1

$$
\psi'(1) = 0.
$$

(2) Dirichlet condition at *x* = 1

$$
\psi(1) = 0.
$$

*Case (1)* In the first case the function $\psi(\lambda, x)$ satisfying the Neumann condition at $x = 1$ is

$$
\psi(\lambda, x) = \cos k(x - 1).
$$

<sup>1</sup> The algebraic character of the proof given above may give a wrong impression. We were able to carry it out only because we knew that the Dirichlet operator *LD* is self-adjoint, which is a highly non-trivial fact.

**Fig. 17.1** $\mathbf{M}_I^N(\lambda)$ – the M-function for the interval, Neumann condition at one of the endpoints

The M-function is given by the logarithmic derivative of *ψ* at *x* = 0

$$\mathbf{M}\_I^N(\lambda) = k \tan k.$$

See Fig. 17.1, where this function is plotted for real values of the spectral parameter. The function is piece-wise monotone with singularities at $\lambda = \pi^2\left(\frac{1}{2} + n\right)^2$, $n = 0, 1, 2, \dots$, corresponding to the eigenvalues of the Dirichlet-Neumann Laplacian on $[0, 1]$. The zeroes of $\mathbf{M}$ are situated at $\lambda = \pi^2 n^2$, $n = 0, 1, 2, \dots$, corresponding to the spectrum of the Neumann-Neumann Laplacian on $[0, 1]$.

*Case (2)* Similarly, for the Dirichlet condition at *x* = 1 we have

$$
\psi(\lambda, x) = \sin k(x - 1).
$$

The logarithmic derivative gives the M-function

$$\mathbf{M}\_I^D(\lambda) = -k \cot k.$$

The function is plotted in Fig. 17.2. The singularities are located at $\lambda = \pi^2 n^2$, $n = 1, 2, \dots$, corresponding to the spectrum of the Dirichlet-Dirichlet Laplacian; the zeroes at $\lambda = \pi^2\left(\frac{1}{2} + n\right)^2$, $n = 0, 1, 2, \dots$, corresponding to the spectrum of the Neumann-Dirichlet Laplacian.
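These locations are easily confirmed numerically; a sketch for the two logarithmic derivatives computed above:

```python
import numpy as np

def M_N(lam):
    """M-function k tan k (Neumann condition at x = 1), k = sqrt(lam)."""
    k = np.sqrt(lam)
    return k*np.tan(k)

def M_D(lam):
    """M-function -k cot k (Dirichlet condition at x = 1)."""
    k = np.sqrt(lam)
    return -k/np.tan(k)

# zero of M^N at lam = pi^2: the Neumann-Neumann spectrum
assert abs(M_N(np.pi**2)) < 1e-9
# zero of M^D at lam = (pi/2)^2: the Neumann-Dirichlet spectrum
assert abs(M_D((np.pi/2)**2)) < 1e-9
```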

These examples illustrate that the M-function depends on the conditions at the internal vertices, in this case on the condition at *x* = 1*.* The M-function is originally defined for non-real *λ* but to see the spectra of the operators corresponding to

**Fig. 17.2** $\mathbf{M}_I^D(\lambda)$ – the M-function for the interval, Dirichlet condition at one of the endpoints

different conditions at the contact vertex one has to consider the continuation of $\mathbf{M}(\lambda)$ to the real line. This continuation has singularities, which explains why only non-real $\lambda$ were used in the definition.

**Example 17.3** The M-function for the Laplace operator on the single interval $I = [0, \ell]$ with the contact set equal to the union of the endpoints, $\partial I = \{0, \ell\}$.

This function has already been calculated in Sect. 5.3 for the Schrödinger equation. We repeat here the explicit calculations in the case of zero potential. Any solution to (17.2) has the form

$$
\psi = a \cos kx + b \sin kx.
$$

The parameters $a$ and $b$ can be calculated from the values of the function $\psi$ at the contact points $0$ and $\ell$:

$$\psi(x) = \cos kx \,\psi(0) + \left( -\frac{\cos k\ell}{\sin k\ell} \psi(0) + \frac{1}{\sin k\ell} \psi(\ell) \right) \sin kx.$$

The normal derivatives are

$$\begin{cases} \psi'(0) = -k \dfrac{\cos k\ell}{\sin k\ell} \psi(0) + \dfrac{k}{\sin k\ell} \psi(\ell), \\[1ex] -\psi'(\ell) = \dfrac{k}{\sin k\ell} \psi(0) - k \dfrac{\cos k\ell}{\sin k\ell} \psi(\ell). \end{cases}$$

We can write this system in matrix form as

$$
\begin{pmatrix} \psi'(0) \\ -\psi'(\ell) \end{pmatrix} = \underbrace{\begin{pmatrix} -k\cot k\ell & \frac{k}{\sin k\ell} \\ \frac{k}{\sin k\ell} & -k\cot k\ell \end{pmatrix}}\_{=\mathbf{M}\_{\ell}(\lambda)} \begin{pmatrix} \psi(0) \\ \psi(\ell) \end{pmatrix} . \tag{17.9}
$$

The corresponding M-function is completely determined by the length of the interval. For almost all real $\lambda$, more precisely for $\ell^2 \pi^{-2} \lambda \neq 1, 4, \dots, n^2, \dots$, this is a Hermitian $2 \times 2$ matrix. Let us plot its eigenvalues (see Fig. 17.3). We see that for every nonsingular $\lambda$ there are precisely two energy values and the energy curves are monotone between the singularities. An interesting feature of this example is that, at the singular points, say $\lambda = 4$, one of the energy curves approaches $\pm \infty$, while the other curve crosses the real line. One may say that the singularities and zeroes of $\mathbf{M}(\lambda)$ occur simultaneously.
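A simple computation (not spelled out in the text) shows that the eigenvalues of the matrix (17.9) are $k \tan (k\ell/2)$ and $-k \cot (k\ell/2)$. The following sketch (Python with NumPy, assuming $\ell = \pi$ as in Fig. 17.3) confirms this closed form and the behaviour at the singular point $\lambda = 4$: one curve blows up while the other crosses zero.

```python
import math
import numpy as np

def M_interval(lam, ell):
    """Two-contact M-function of the interval [0, ell], formula (17.9)."""
    k = math.sqrt(lam)
    return np.array([[-k / math.tan(k * ell), k / math.sin(k * ell)],
                     [k / math.sin(k * ell), -k / math.tan(k * ell)]])

ell = math.pi
lam = 2.5                       # a nonsingular point
k = math.sqrt(lam)
ev = np.linalg.eigvalsh(M_interval(lam, ell))          # ascending order
expected = sorted([k * math.tan(k * ell / 2),          # the two energy curves
                   -k / math.tan(k * ell / 2)])
assert np.allclose(ev, expected)

# at the singular point lambda = 4 (k = 2, k ell = 2 pi):
assert abs(2 * math.tan(ell)) < 1e-12    # one curve crosses the real line
assert abs(2 / math.tan(ell)) > 1e12     # the other approaches +-infinity
```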

**Example 17.4** The M-function for the standard Laplacian on the lasso graph $\Gamma_{(2.2)}$ with the loop of length $\ell$ and outgrowth of length $s$. The contact set is formed by the unique vertex of degree one.

This graph is depicted in Fig. 17.4, where parametrisation of the edges is indicated.

To calculate the M-function it is enough to consider only even functions on the loop (odd functions are equal to zero at the inner vertex and are naturally continued

**Fig. 17.3** M-function for the interval with two contact points and $\ell = \pi$. (The energy curves)

**Fig. 17.4** The lasso graph $\Gamma_{(2.2)}$

by zero to the outgrowth):

$$\psi(x) = \begin{cases} \cos kx, & \text{on the loop,} \\ a\cos kx + b\sin kx, & \text{on the outgrowth.} \end{cases}$$

Standard conditions at the vertex of degree three give

$$\begin{cases} \cos\frac{k\ell}{2} &= a\cos ks + b\sin ks \\ 2k\sin\frac{k\ell}{2} &= -ak\sin ks + bk\cos ks \end{cases}$$

$$\Rightarrow \begin{cases} a = \cos\frac{k\ell}{2}\cos ks - 2\sin\frac{k\ell}{2}\sin ks \\ b = 2\sin\frac{k\ell}{2}\cos ks + \cos\frac{k\ell}{2}\sin ks \end{cases}$$

The M-function is just equal to the ratio *kb/a*

$$\mathbf{M}\_{\Gamma(2,2)}(\lambda) = k \frac{\cos\frac{k\ell}{2}\sin ks + 2\sin\frac{k\ell}{2}\cos ks}{\cos\frac{k\ell}{2}\cos ks - 2\sin\frac{k\ell}{2}\sin ks}.\tag{17.10}$$

Let us consider the special case $\ell = 2\pi$:

$$\mathbf{M}\_{\Gamma\_{(2,2)}}(\lambda) = k \frac{\cos k\pi \sin ks + 2 \sin k\pi \cos ks}{\cos k\pi \cos ks - 2 \sin k\pi \sin ks}.$$

The eigenfunctions supported just by the loop correspond to integer values of $k$: $\lambda = n^2$, $n \in \mathbb{N}$. These eigenvalues are not seen from the M-function as is indicated by the plot: the function is regular there

$$\mathbf{M}\_{\Gamma\_{(2,2)}}(n^2) = n \frac{\sin ns}{\cos ns},$$

unless $\cos ns = 0$. For example, for $n = 3$ and $s = \pi/6$ we have $\cos ns = \cos 3\pi/6 = 0$ and we observe a singularity there. But this singularity has nothing to do with the eigenfunctions supported by the loop: a small change of $s$ shifts the

**Fig. 17.5** M-function for the lasso graph with $\ell = 2\pi$, $s = \pi/6$

singularity to a neighbouring point, while the eigenfunction on the loop and the corresponding eigenvalue remain unchanged (Fig. 17.5).
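Formula (17.10) with $\ell = 2\pi$ makes this behaviour easy to test numerically. The sketch below (plain Python, not from the text) checks that at integer $k = n$ the function equals $n \tan ns$, that the accidental singularity at $n = 3$, $s = \pi/6$ is present, and that it disappears after a small change of $s$.

```python
import math

def M_lasso(k, s, ell=2 * math.pi):
    """M-function (17.10) of the lasso graph: loop of length ell, tail of length s."""
    c, sn = math.cos(k * ell / 2), math.sin(k * ell / 2)
    return k * (c * math.sin(k * s) + 2 * sn * math.cos(k * s)) / \
               (c * math.cos(k * s) - 2 * sn * math.sin(k * s))

# at integer k the loop eigenfunctions are invisible: M(n^2) = n tan(n s)
for n in (1, 2, 3):
    assert abs(M_lasso(n, 0.3) - n * math.tan(n * 0.3)) < 1e-8

# for n = 3, s = pi/6 the factor cos(ns) vanishes and M is singular there,
# while a small shift of s makes the value moderate again
assert abs(M_lasso(3, math.pi / 6 + 1e-6)) > 1e5
assert abs(M_lasso(3, math.pi / 6 + 0.1)) < 1e2
```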

**Example 17.5** The M-function for the standard Laplacian on the loop $\Gamma_{(2.3)}$ with two contact points. The lengths of the edges are $\ell_1$ and $\ell_2$.

Consider the graph formed by two intervals $[0, \ell_1]$ and $[0, \ell_2]$ joined pairwise at their endpoints. The contact set is formed by the two vertices (Fig. 17.6).

The corresponding M-function is equal to the sum of the M-functions for the two separate intervals of lengths $\ell_1$ and $\ell_2$ (see (17.9))

$$\mathbf{M}\_{\Gamma\_{(2,3)}}(\lambda) = \mathbf{M}\_{I\_{\ell\_1}}(\lambda) + \mathbf{M}\_{I\_{\ell\_2}}(\lambda)$$

$$= \begin{pmatrix} -k\cot k\ell\_1 - k\cot k\ell\_2 & \frac{k}{\sin k\ell\_1} + \frac{k}{\sin k\ell\_2} \\ \frac{k}{\sin k\ell\_1} + \frac{k}{\sin k\ell\_2} & -k\cot k\ell\_1 - k\cot k\ell\_2 \end{pmatrix}. \tag{17.11}$$

To understand this formula consider solutions to the Schrödinger equation on the two intervals. Their normal derivatives are related via the corresponding M-functions to the values at the endpoints. Hence the sum of the M-functions maps the values at the vertices of the loop $\Gamma_{(2.3)}$ to the sums of normal derivatives.

If the standard Laplacian on the loop has eigenfunctions equal to zero at both contact points, then the corresponding eigenvalues do not cause any singularities of $\mathbf{M}(\lambda)$. This fact is best illustrated by plotting the energy curves for $\ell_1 = \ell_2$ (symmetric case) and $\ell_1 \neq \ell_2$ (non-symmetric case) (see Fig. 17.7). One can see that the number of singularities in the non-symmetric case is doubled compared to the symmetric case. In the non-symmetric case, we have chosen $\ell_1$ close to $\ell_2$;

this resulted in the appearance of almost vertical energy curves close to the points $\ell^2 \pi^{-2} \lambda = 1, 2^2, \dots$ corresponding to the energies of the invisible eigenfunctions in the symmetric case.

**Problem 75** Calculate the M-function for the watermelon graph formed by three parallel edges of lengths $\ell_1$, $\ell_2$, and $\ell_3$ joining together two vertices. Consider two cases


Plot the corresponding energy curves for different values of $\ell_j$ including the case where all lengths are equal.

## **17.2 Explicit Formulas Using Eigenfunctions**

The goal of this section is to present explicit formulas for M-functions in terms of the corresponding eigenfunctions. These formulas will be used to study properties of the M-functions; they may also be used to justify the definition itself, since the formulas we obtain will show once more that the function $\psi$ can be calculated for any vector of boundary values $\vec{f} = \vec{\psi}^{\partial}$. The M-function will be given in terms of the eigenfunctions of two differential operators on the same graph $\Gamma$:

$$L\_{q,a}^{\mathbf{S}^{\text{int}}, \text{st}}(\Gamma) \text{ and } L\_{q,a}^{\mathbf{S}^{\text{int}}, \text{D}}(\Gamma).$$

These self-adjoint operators in $L_2(\Gamma)$ are defined by the same differential expression (2.17), the same vertex conditions (17.5) at the internal vertices, and standard and Dirichlet conditions, respectively, at the contact vertices. Slightly abusing the terminology, we are going to call these operators standard and Dirichlet operators, even though the vertex conditions at the internal vertices are not assumed to be standard or Dirichlet. Just in this chapter we are going to use the short notations

$$L^{\rm st} := L\_{q,a}^{\rm S^{\rm int}, \rm st}(\Gamma) \text{ and } L^{\rm D} := L\_{q,a}^{\rm S^{\rm int}, \rm D}(\Gamma) \tag{17.12}$$

hoping that this will not lead to any misunderstanding.

**Fig. 17.7** M-function for the graph *(*2*.*3*)*.(The energy curves)

One of the usual ways to prove the existence of solutions of boundary value problems is to calculate the resolvent of the corresponding differential operator. Consider the standard operator $L^{\rm st}$. This is a self-adjoint operator with discrete spectrum and we denote the corresponding eigenvalues and eigenfunctions by $\lambda_n^{\rm st}$ and $\psi_n^{\rm st}$, respectively. The eigenfunctions are assumed to form an orthonormal basis, hence for any $f \in L_2(\Gamma)$ we have the spectral resolution

$$f = \sum\_{n=1}^{\infty} \langle \psi\_n^{\rm st}, f \rangle\_{L\_2(\Gamma)} \psi\_n^{\rm st}. \tag{17.13}$$

This equality can be written using the integral kernel

$$k(\mathbf{x}, \mathbf{y}) = \sum\_{n=1}^{\infty} \psi\_n^{\mathrm{st}}(\mathbf{x}) \overline{\psi\_n^{\mathrm{st}}(\mathbf{y})}$$

as follows

$$f(\mathbf{x}) = \int\_{\Gamma} k(\mathbf{x}, \mathbf{y}) f(\mathbf{y}) d\mathbf{y}.\tag{17.14}$$

If $f \in {\rm Dom}\,(L^{\rm st})$ then

$$L^{\rm st}f = \sum\_{n=1}^{\infty} \lambda\_n^{\rm st} \langle \psi\_n^{\rm st}, f \rangle\_{L^2(\Gamma)} \psi\_n^{\rm st}. \tag{17.15}$$

Similar formulas hold for the functions of the operator, in particular, the resolvent is given by

$$\left(L^{\rm st} - \lambda\right)^{-1} f = \sum\_{n=1}^{\infty} \frac{1}{\lambda\_n^{\rm st} - \lambda} \langle \psi\_n^{\rm st}, f \rangle\_{L\_2(\Gamma)} \psi\_n^{\rm st},\tag{17.16}$$

or as the bounded integral operator with the Hilbert-Schmidt kernel

$$r\_{\lambda}(\mathbf{x}, \mathbf{y}) = \sum\_{n=1}^{\infty} \frac{1}{\lambda\_n^{\rm st} - \lambda} \psi\_n^{\rm st}(\mathbf{x}) \overline{\psi\_n^{\rm st}(\mathbf{y})}. \tag{17.17}$$

We start with the simplest example of the interval graph *I* = [0*,* 1] with the contact point *x* = 0 and Neumann condition at *x* = 1 (case *(*1*)* in Example 17.2). The corresponding M-function is

$$\mathbf{M}\_I(\lambda) = k \frac{\sin k}{\cos k}.\tag{17.18}$$

We calculate now the resolvent kernel *rλ(x, y)* explicitly. It is a solution to the following differential equation

$$-r\_{\rm xx}(\mathbf{x}, \mathbf{y}) - \lambda r(\mathbf{x}, \mathbf{y}) = \delta(\mathbf{x} - \mathbf{y}).\tag{17.19}$$

Outside the point *x* = *y*, the kernel is a solution to the homogeneous equation. Taking into account boundary conditions at *x* = 0 and *x* = 1 we get

$$r\_{\lambda}(x, y) = \begin{cases} \alpha \cos kx, & x < y, \\ \beta \cos k(x - 1), & x > y. \end{cases}$$

The parameters *α, β* should be chosen so that the function *rλ(*·*, y)* is continuous at *x* = *y* and its first derivative has jump −1, then differentiating *rλ* twice one gets the delta-function as required:

$$r\_\lambda(x, y) = \begin{cases} -\frac{\cos k(1-y)}{k \sin k} \cos kx, & x < y, \\ -\frac{\cos ky}{k \sin k} \cos k(x - 1), & x > y. \end{cases} \tag{17.20}$$

We observe that the following formula holds:

$$-\left(r\_{\lambda}(0,0)\right)^{-1} = k \frac{\sin k}{\cos k} = \mathbf{M}\_I(\lambda). \tag{17.21}$$
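Both the jump condition defining the kernel (17.20) and the relation (17.21) can be checked numerically. The sketch below (plain Python, not from the text) does this for $\lambda = 2$; the jump of the first derivative is tested by finite differences.

```python
import math

def r(lam, x, y):
    """Resolvent kernel (17.20): Laplacian on [0, 1], Neumann condition at x = 1."""
    k = math.sqrt(lam)
    if x <= y:
        return -math.cos(k * (1 - y)) * math.cos(k * x) / (k * math.sin(k))
    return -math.cos(k * y) * math.cos(k * (x - 1)) / (k * math.sin(k))

lam = 2.0
k = math.sqrt(lam)

# continuity holds by construction; the first derivative jumps by -1 at x = y:
h, y = 1e-6, 0.4
jump = (r(lam, y + h, y) - r(lam, y, y)) / h - (r(lam, y, y) - r(lam, y - h, y)) / h
assert abs(jump + 1) < 1e-4

# relation (17.21): -1 / r_lambda(0, 0) equals the M-function k tan k
assert abs(-1 / r(lam, 0, 0) - k * math.tan(k)) < 1e-10
```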

The formula connecting $r_\lambda$ and $\mathbf{M}_I$ is not just a coincidence; it can be generalised to arbitrary graphs. But let us first understand the reason why this formula holds in our example. We consider the limit $\lim_{\epsilon \to 0} r_\lambda(x, \epsilon)$. The function $r_\lambda(x, \epsilon)$ satisfies the Neumann condition at $x = 0$, hence for small $\epsilon > 0$ we have approximately

$$(\frac{\partial}{\partial x}r\_{\lambda})(\epsilon+0,\epsilon) \sim -1.$$

It follows that *rλ(x,* 0*)* is a solution to the homogeneous differential equation satisfying the boundary condition

$$\left(\frac{\partial}{\partial x}r\_{\lambda}\right)(0,0) = -1.$$

Then the M-function can be calculated as

$$\mathbf{M}\_I(\lambda) = \frac{\left(\frac{\partial}{\partial x} r\_\lambda\right)(0,0)}{r\_\lambda(0,0)} = - (r\_\lambda(0,0))^{-1} \,. \tag{17.22}$$

This formula can be easily generalised to the case of any finite compact metric graph with the contact set $\partial \Gamma$: the corresponding M-function is the matrix-valued function

$$\mathbf{M}\_{\Gamma}(\lambda) = -\left( \left\{ r\_{\lambda}(V^{i}, V^{j}) \right\}\_{V^{i}, V^{j} \in \partial \Gamma} \right)^{-1} . \tag{17.23}$$

This formula holds for Schrödinger operators and arbitrary vertex conditions at the internal vertices, but with standard conditions on the contact set $\partial \Gamma$.
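Formula (17.23) can be tested on the interval with both endpoints in the contact set. The Neumann-Neumann resolvent kernel used below is the analogue of (17.20) on $[0, \ell]$; it is an assumption of this sketch, not a formula printed in the text, and the result is compared with (17.9).

```python
import math
import numpy as np

def r_NN(lam, x, y, ell):
    """Resolvent kernel of the Laplacian on [0, ell] with Neumann (standard)
    conditions at both endpoints; analogue of (17.20), assumed here."""
    k = math.sqrt(lam)
    x, y = min(x, y), max(x, y)
    return -math.cos(k * x) * math.cos(k * (ell - y)) / (k * math.sin(k * ell))

ell, lam = 1.0, 2.0
k = math.sqrt(lam)
R = np.array([[r_NN(lam, 0, 0, ell), r_NN(lam, 0, ell, ell)],
              [r_NN(lam, ell, 0, ell), r_NN(lam, ell, ell, ell)]])
M = np.array([[-k / math.tan(k * ell), k / math.sin(k * ell)],
              [k / math.sin(k * ell), -k / math.tan(k * ell)]])   # formula (17.9)

# formula (17.23): minus the inverse of the bordered resolvent is the M-function
assert np.allclose(-np.linalg.inv(R), M)
```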

The proof follows the same lines as the proof of (17.22) and we are going to assume that the magnetic potential is identically zero, $a(x) \equiv 0$. This assumption is not restrictive, since the vertex conditions at the internal vertices are arbitrary and elimination of $a$ leads to a special change of those conditions. Consider the resolvent kernel $r_\lambda(V^j, y)$, $V^j \in \partial \Gamma$, as a function of the second argument $y$, the vertex $V^j$ being fixed. For any function $\varphi \in {\rm Dom}\,(L^{\rm st})$ we have

$$(L^{\rm st} - \lambda)^{-1} (L^{\rm st} - \lambda) \varphi = \varphi,$$

implying in particular

$$\int\_{\Gamma} r\_{\lambda}(V^{j}, \mathbf{y}) \Big( -\varphi''(\mathbf{y}) + q(\mathbf{y})\varphi(\mathbf{y}) - \lambda\varphi(\mathbf{y}) \Big) d\mathbf{y} = \varphi(V^{j}).\tag{17.24}$$

Taking first $\varphi \in C_0^\infty(E_n)$, $n = 1, 2, \dots, N$, we conclude that the resolvent kernel is a weak solution of the differential equation

$$-\frac{\partial^2}{\partial \mathbf{y}^2} r\_\lambda(V^j, \mathbf{y}) + q(\mathbf{y}) r\_\lambda(V^j, \mathbf{y}) = \lambda r\_\lambda(V^j, \mathbf{y})$$

on every edge. Every such solution is continuous and has continuous derivative inside the edges, hence we may integrate by parts in (17.24) taking *ϕ* smooth on each closed edge:

$$\begin{aligned} &\sum\_{m=1}^{M} \left( \sum\_{\mathbf{x}\_{i}\in V^{m}} r\_{\lambda}(V^{j}, \mathbf{x}\_{i}) \partial\varphi(\mathbf{x}\_{i}) \right) - \sum\_{m=1}^{M} \left( \sum\_{\mathbf{x}\_{i}\in V^{m}} \partial r\_{\lambda}(V^{j}, \mathbf{x}\_{i}) \varphi(\mathbf{x}\_{i}) \right) \\ &+ \int\_{\Gamma} \underbrace{\left( -\frac{\partial^{2}}{\partial \mathbf{y}^{2}} r\_{\lambda}(V^{j}, \mathbf{y}) + q(\mathbf{y}) r\_{\lambda}(V^{j}, \mathbf{y}) - \lambda r\_{\lambda}(V^{j}, \mathbf{y}) \right)}\_{=\mathbf{0}} \varphi(\mathbf{y}) d\mathbf{y} = \varphi(V^{j}). \end{aligned}$$

Consider now test-functions $\varphi$ with the support including the vertex $V^j$ and no other vertex, and having all normal derivatives at $x_i \in V^j$ equal to zero. It follows that<sup>2</sup>

$$-\sum\_{x\_i \in V^j} \partial r\_\lambda(V^j, x\_i) = 1,\tag{17.25}$$

where we used that *ϕ* is continuous at *V <sup>j</sup>* due to standard conditions there.

<sup>2</sup> Remember that the derivatives are taken with respect to the second argument here.

Relaxing condition that the derivatives are zero and considering all possible test functions we conclude that

$$\sum\_{x\_i \in V^j} r\_\lambda(V^j, x\_i) \partial \varphi(x\_i) = 0$$

holds whenever $\sum_{x_i \in V^j} \partial\varphi(x_i) = 0$, implying that $r_\lambda(V^j, y)$ is continuous at $y = V^j$.

Essentially the same calculations imply that the resolvent kernel satisfies standard vertex conditions at all other contact vertices $V^m$, $m \neq j$. At all internal vertices $r_\lambda(V^j, y)$ satisfies the vertex conditions described by $\mathbf{S}^{\rm int}$, that is, the same conditions as the functions from the domain of $L^{\rm st} \, (= L_{q,a}^{\mathbf{S}^{\rm int}, \rm st})$.

Summing up, the resolvent kernel $r_\lambda(\cdot, V^1)$ is a solution to the differential equation (4.32) satisfying standard vertex conditions outside the boundary, continuous at the boundary vertices and having the following boundary values by (17.25):

$$r\_{\lambda}(V^1, \cdot)|\_{\partial \Gamma} = \begin{pmatrix} r\_{\lambda}(V^1, V^1) \\ r\_{\lambda}(V^1, V^2) \\ \vdots \\ r\_{\lambda}(V^1, V^K) \end{pmatrix}, \quad \partial r\_{\lambda}(V^1, \cdot)|\_{\partial \Gamma} = \begin{pmatrix} -1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

Similar formulas hold for $r_\lambda(V^i, \cdot)\vert_{\partial \Gamma}$ and $\partial r_\lambda(V^i, \cdot)\vert_{\partial \Gamma}$, implying that the matrix $-\left\{ r_\lambda(V^i, V^j) \right\}_{V^i, V^j \in \partial \Gamma}$ is inverse to $\mathbf{M}(\lambda)$.

**Theorem 17.6** *Let us denote by $\lambda_n^{\rm st}$ and $\psi_n^{\rm st}$ the eigenvalues and ortho-normalised eigenfunctions of the Schrödinger operator $L^{\rm st} = L_{q,a}^{\mathbf{S}^{\rm int}, \rm st}$ on a compact finite metric graph $\Gamma$ with the contact set $\partial \Gamma$. Let standard conditions be assumed at the contact vertices $V^j \in \partial \Gamma$ and arbitrary Hermitian conditions at the internal vertices $V^j \notin \partial \Gamma$. Then the M-function for $\Gamma$ is given by*

$$\mathbf{M}\_{\Gamma}(\lambda) = -\left(\sum\_{n=1}^{\infty} \frac{\langle \boldsymbol{\psi}\_{n}^{\mathrm{st}} |\_{\partial \Gamma}, \cdot \rangle\_{\ell\_{2}(\partial \Gamma)} \boldsymbol{\psi}\_{n}^{\mathrm{st}} |\_{\partial \Gamma}}{\lambda\_{n}^{\mathrm{st}} - \lambda} \right)^{-1}.\tag{17.26}$$

*Proof* We use the explicit expression for the resolvent (17.17) to get:

$$r\_{\lambda}(\cdot,\chi) = \sum\_{n=1}^{\infty} \frac{1}{\lambda\_n^{\rm st} - \lambda} \overline{\psi\_n^{\rm st}(\chi)} \psi\_n^{\rm st}(\cdot).$$

The series is convergent in *L*2*()*. Our goal is to prove that the convergence is pointwise.

We show first that the series

$$\sum\_{n=1}^{\infty} \frac{|\psi\_n^{\rm st}(x)|^2}{\lambda\_n^{\rm st} + C} \tag{17.27}$$

is (absolutely) convergent for any $x \in (\Gamma \setminus \mathbf{V}) \cup \partial \Gamma$ and a certain sufficiently large positive $C$.

Consider the case of the Laplacian (with the same vertex conditions). Let us denote the corresponding eigenvalues and normalised eigenfunctions by $\lambda_n^0$ and $\psi_n^0(x)$. We already established that the Laplacian eigenfunctions are uniformly bounded in the case of standard vertex conditions (11.37), but the proof did not use that the vertex conditions are standard; therefore we have

$$|\psi\_n^0(\mathbf{x})| \le c \underbrace{\|\psi\_n^0\|\_{L\_2(\Gamma)}}\_{=1}, \quad c \in \mathbb{R}\_+,\tag{17.28}$$

for any conditions at the internal vertices. The eigenvalues $\lambda_n^0$ satisfy Weyl's asymptotics (4.25), hence the series (17.27) is absolutely convergent for $C > -\lambda_1^0$.

The delta distribution *δx* , where *x* is any internal point on an edge, is a bounded functional with respect to the quadratic form of the Laplacian. Then the series (17.27) gives the norm of the delta distribution. To see this consider any test function *ϕ* from the domain of the Laplacian's quadratic form and consider the action of the delta distribution on it

$$|\langle \delta\_{x}, \varphi \rangle| = |\varphi(x)| = \left| \sum\_{n} \psi\_{n}^{0}(x) \varphi\_{n}^{0} \right| = \left| \sum\_{n} (\lambda\_{n}^{0} + C)^{-1/2} \psi\_{n}^{0}(x) (\lambda\_{n}^{0} + C)^{1/2} \varphi\_{n}^{0} \right|$$

$$\leq \left( \sum\_{n} \frac{|\psi\_{n}^{0}(x)|^{2}}{\lambda\_{n}^{0} + C} \right)^{1/2} \underbrace{\left( \sum\_{n} (\lambda\_{n}^{0} + C) |\varphi\_{n}^{0}|^{2} \right)^{1/2}}\_{= \langle \varphi, (L\_{0} + C) \varphi \rangle^{1/2}},\tag{17.29}$$

where $\varphi_n^0 = \langle \psi_n^0, \varphi \rangle_{L_2(\Gamma)}$ are the Fourier coefficients of the function $\varphi$ with respect to the orthonormal system $\{\psi_n^0\}_{n=1}^\infty$. Since the coefficients $(\lambda_n^0 + C)|\varphi_n^0|^2$ can be chosen arbitrarily (of course subject to the convergence of the series), the positive series $\left( \sum_n \frac{|\psi_n^0(x)|^2}{\lambda_n^0 + C} \right)^{1/2}$ gives the norm of the delta distribution. The Sobolev-type estimate (11.11)

$$|\langle qu, u \rangle| \le \epsilon \langle L\_0 u, u \rangle + \frac{2}{\epsilon} \|u\|^2, \quad q \in L\_1(\Gamma),$$

with a certain $0 < \epsilon < 1$, implies that the quadratic forms of the Laplacian and of the Schrödinger operator with $L_1$ potential are equivalent:

$$(1 - \epsilon) \langle L\_0 u, u \rangle + (C - \frac{2}{\epsilon}) \|u\|^2 \le \langle L\_q u, u \rangle + C \|u\|^2 \le (1 + \epsilon) \langle L\_0 u, u \rangle + (C + \frac{2}{\epsilon}) \|u\|^2.$$

One might need to adjust $C$ to satisfy $C > \frac{2}{\epsilon}$. Hence the delta function is a bounded functional on the domain of the Schrödinger quadratic form. The norm of this functional is calculated as above just changing the upper index from $0$ to ${\rm st}$:

$$|\langle \delta\_{\mathbf{x}}, \varphi \rangle| \le \left( \sum\_{n} \frac{|\psi\_{n}^{\mathrm{st}}(\mathbf{x})|^{2}}{\lambda\_{n}^{\mathrm{st}} + C} \right)^{1/2} \left( \sum\_{n} (\lambda\_{n}^{\mathrm{st}} + C) |\varphi\_{n}|^{2} \right)^{1/2}. \tag{17.30}$$

It follows that the norm is given by (17.27) and the series is absolutely convergent.
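For the Neumann Laplacian on the unit interval the series (17.27) can be summed against a closed form: it equals the diagonal value of the Green function of $L_0 + C$, which for Neumann conditions is $\cosh(\sqrt{C}x)\cosh(\sqrt{C}(1-x))/(\sqrt{C}\sinh\sqrt{C})$ (obtained from (17.20) by $k \to i\sqrt{C}$; this closed form is a derivation of this sketch, not a formula from the text).

```python
import math
import numpy as np

C, x = 1.0, 0.3
# Neumann Laplacian on [0, 1]: lambda_0 = 0 with psi = 1,
# lambda_n = (pi n)^2 with psi_n = sqrt(2) cos(pi n x), n >= 1
n = np.arange(1, 200001)
series = 1.0 / C + np.sum(2 * np.cos(math.pi * n * x) ** 2 /
                          ((math.pi * n) ** 2 + C))

# closed form of the Green function of L_0 + C on the diagonal
s = math.sqrt(C)
closed = math.cosh(s * x) * math.cosh(s * (1 - x)) / (s * math.sinh(s))
assert abs(series - closed) < 1e-4
```

The truncation error of the partial sum is of order $1/N$, hence the tolerance above.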

We show now that the series $\sum_{n=1}^\infty \frac{\overline{\psi_n^{\rm st}(y)}}{\lambda_n + C}\, \psi_n^{\rm st}(\cdot)$ converges to $r_\lambda(\cdot, y)$ in the norm given by the quadratic form of $L_q$:

$$\left\langle \sum\_{n=1}^{N} \frac{\overline{\psi\_n^{\rm st}(\cdot)} \overline{\psi\_n^{\rm st}(\mathbf{y})}}{\lambda\_n + C}, (L\_q + C) \sum\_{n=1}^{N} \frac{\psi\_n^{\rm st}(\cdot) \overline{\psi\_n^{\rm st}(\mathbf{y})}}{\lambda\_n + C} \right\rangle = \sum\_{n=1}^{N} \frac{|\psi\_n^{\rm st}(\mathbf{y})|^2}{\lambda\_n + C}$$

$$\xrightarrow[N \to \infty]{} \sum\_{n=1}^{\infty} \frac{|\psi\_n^{\rm st}(\mathbf{y})|^2}{\lambda\_n + C} = \left\langle r\_\lambda(\cdot, \mathbf{y}), (L\_q + C) r\_\lambda(\cdot, \mathbf{y}) \right\rangle.$$

Equivalence of the Laplace and Schrödinger quadratic forms means that the series converges to $r_\lambda(\cdot, y)$ in $W_2^1$-norm, implying that the convergence is pointwise.

In particular we have that the positive series $\sum_{n=1}^\infty \frac{|\psi_n^{\rm st}(x)|^2}{\lambda_n + C}$ converges pointwise to the function $r_\lambda(x, x)$, which is continuous everywhere on $\Gamma$ outside an arbitrarily small neighbourhood of the internal vertices, where no continuity condition is assumed. Dini's theorem implies then that the convergence is uniform, including the contact set $\partial \Gamma$, where we have standard conditions.

In particular we have

$$r\_{\lambda}(V^{i}, V^{j}) = \sum\_{n=1}^{\infty} \frac{\psi\_{n}^{\rm st}(V^{i}) \overline{\psi\_{n}^{\rm st}(V^{j})}}{\lambda\_{n}^{\rm st} - \lambda}, \quad V^{i}, V^{j} \in \partial \Gamma,$$

where we used that the values $\psi_n^{\rm st}(V^j)$ are well-defined due to standard vertex conditions on $\partial \Gamma$. It remains to take into account formula (17.23). For details see [345].

Implications of this explicit formula will be discussed in the following section. Our goal right now is to obtain a similar formula using the eigenfunctions of the self-adjoint operator $L^{\rm D} = L_{q,a}^{\mathbf{S}^{\rm int}, \rm D}$ defined by Dirichlet conditions at all contact vertices. Formula (17.26) can be obtained using the theory of finite rank singular perturbations [23, 471, 473]. One may consider perturbations of the operator $L_{q,a}^{\mathbf{S}^{\rm int}, \rm st}$ by the delta-distributions $\delta_{V^j}$ with support at the contact vertices

$$
\langle \delta\_{V^j}, \varphi \rangle := \varphi(V^j). \tag{17.31}
$$

Formula (17.30) implies that $\delta_{V^j}$ is a bounded linear functional on the domain of the quadratic form of $L_{q,a}^{\mathbf{S}^{\rm int}, \rm st}$. The perturbed operator is formally given by

$$L\_{q,a}^{\mathbf{S}^{\text{int}},\text{st}} + \sum\_{j=1}^{M\_{\partial}} \alpha\_j \delta\_{V^j} = L\_{q,a}^{\mathbf{S}^{\text{int}},\text{st}} + \sum\_{j=1}^{M\_{\partial}} \alpha\_j \langle \delta\_{V^j}, \cdot \rangle \,\delta\_{V^j},\tag{17.32}$$

where *αj* <sup>∈</sup> <sup>R</sup> are certain coupling parameters, and we use that for any continuous function *ϕ* it holds

$$
\delta\_{V^j}(x)\varphi(x) = \varphi(V^j)\delta\_{V^j} = \langle \delta\_{V^j}, \varphi \rangle \,\delta\_{V^j}.
$$

Such perturbations are called form-bounded [23, 442] and can be uniquely determined in terms of the quadratic forms: the perturbed operator is given by the same differential expression, the same vertex conditions at internal vertices, but by delta vertex conditions on $\partial \Gamma$. The central role is played by Krein's $Q$-function, which appears in the formula describing the resolvent of the perturbed operator and therefore encodes the spectral properties. This matrix-valued Herglotz-Nevanlinna function (see Sect. 18.1 below) is given by the bordered resolvent

$$\begin{split} \left( \mathbf{Q}(\lambda) \right)\_{jl} &:= \langle \delta\_{V^{j}}, \left( L^{\text{st}} - \lambda \right)^{-1} \delta\_{V^{l}} \rangle \\ &= \left\langle \delta\_{V^{j}}, \sum\_{n=1}^{\infty} \frac{1}{\lambda\_{n}^{\text{st}} - \lambda} \langle \psi\_{n}^{\text{st}}, \delta\_{V^{l}} \rangle \psi\_{n}^{\text{st}} \right\rangle \\ &= \sum\_{n=1}^{\infty} \frac{\psi\_{n}^{\text{st}}(V^{j}) \overline{\psi\_{n}^{\text{st}}(V^{l})}}{\lambda\_{n}^{\text{st}} - \lambda} = r\_{\lambda}(V^{j}, V^{l}) . \end{split} \tag{17.33}$$

We have $\mathbf{Q}(\lambda) = -\mathbf{M}^{-1}(\lambda)$ as matrices.

Let us turn now to the perturbations of the Dirichlet operator $L^{\rm D} = L_{q,a}^{\mathbf{S}^{\rm int}, \rm D}$. The corresponding eigenvalues and eigenfunctions will be denoted by $\lambda_n^{\rm D}$ and $\psi_n^{\rm D}$, $n = 1, 2, \dots$ To perturb operators with Dirichlet conditions the delta-distributions with support at the contact vertices cannot be used, since they vanish on the functions from the domain of the operators. One has to use more singular distributions like the derivative of the delta-function. Consider for example the distributions $\partial \delta_{V^j}$

$$\langle \partial \delta\_{V^j}, \varphi \rangle := \sum\_{\mathbf{x}\_l \in V^j} \partial \varphi(\mathbf{x}\_l), \quad j = 1, 2, \dots, M\_{\partial}.$$

These distributions are well-defined on the functions that are continuously differentiable on the edges, in particular on the domain of the Dirichlet operator. But these distributions are not bounded with respect to the quadratic form of the operator; in other words, they are not defined on all functions from the domain of the quadratic form. Such distributions are called form-unbounded [23, 442]. Roughly speaking, the formal expression generalising (17.32) does not determine the perturbed operator uniquely (even using the quadratic form technique):

$$L^{\rm D} + \sum\_{j=1}^{M\_{\partial}} \alpha\_j \langle \partial \delta\_{V^j}, \cdot \rangle \partial \delta\_{V^j}.\tag{17.34}$$

To understand why such perturbations are not uniquely defined by the formal expression (17.34), let us examine the corresponding bordered resolvent:

$$
\langle \partial \delta\_{V^{j}}, \frac{1}{L^{\mathrm{D}} - \lambda} \partial \delta\_{V^{j}} \rangle. \tag{17.35}
$$

The scalar product in this formula cannot be understood even in the sense of distributions, since the element $\left( L^{\rm D} - \lambda \right)^{-1} \partial \delta_{V^j}$ belongs to the Hilbert space, but not to the domain of the operator. To get around this difficulty one considers the difference between the values of this function at two different points, say $\lambda$ and $\lambda'$:

$$
\langle \partial \delta\_{V^{j}}, \frac{1}{L^{\mathrm{D}} - \lambda} \partial \delta\_{V^{j}} \rangle - \langle \partial \delta\_{V^{j}}, \frac{1}{L^{\mathrm{D}} - \lambda'} \partial \delta\_{V^{j}} \rangle = \langle \partial \delta\_{V^{j}}, \frac{\lambda - \lambda'}{\left(L^{\mathrm{D}} - \lambda\right) \left(L^{\mathrm{D}} - \lambda'\right)} \partial \delta\_{V^{j}} \rangle. \tag{17.36}
$$

The expression on the right hand side is well-defined since

$$\underbrace{\left(\lambda-\lambda'\right)\frac{1}{L^{\mathrm{D}}-\lambda}\underbrace{\frac{1}{L^{\mathrm{D}}-\lambda'}\partial\delta\_{V^{j}}}\_{\in L\_{2}(\Gamma)}}\_{\in\mathrm{Dom}\,(L^{\mathrm{D}})}\in\mathrm{Dom}\,(L^{\mathrm{D}}).$$

In other words, one needs to regularise the integral determining the *Q*-function in this case.

Without going further into the theory of singular interactions we formulate the second formula for the M-function, interested readers may consult Chapter 3 of [23] or [345]:

**Theorem 17.7** *Let us denote by $\lambda_n^{\rm D}$ and $\psi_n^{\rm D}$ the eigenvalues and ortho-normalised eigenfunctions of the Schrödinger operator $L^{\rm D} = L_{q,a}^{\mathbf{S}^{\rm int}, \rm D}(\Gamma)$ on a compact finite metric graph $\Gamma$ with the contact set $\partial \Gamma$. Dirichlet conditions at the contact vertices and arbitrary Hermitian conditions at the internal vertices are assumed; then the M-function for $\Gamma$ satisfies the identity*

$$\mathbf{M}\_{\Gamma}(\boldsymbol{\lambda}) - \mathbf{M}\_{\Gamma}(\boldsymbol{\lambda}') = \sum\_{n=1}^{\infty} \frac{\boldsymbol{\lambda} - \boldsymbol{\lambda}'}{(\boldsymbol{\lambda}\_{n}^{D} - \boldsymbol{\lambda})(\boldsymbol{\lambda}\_{n}^{D} - \boldsymbol{\lambda}')} \langle \boldsymbol{\partial} \boldsymbol{\psi}\_{n}^{D}|\_{\boldsymbol{\partial}\Gamma}, \cdot \rangle\_{\ell\_{2}(\partial\Gamma)} \partial \boldsymbol{\psi}\_{n}^{D}|\_{\boldsymbol{\partial}\Gamma}. \tag{17.37}$$

We see that this formula does not allow one to calculate the M-function directly, but just the difference of its values at two regular points. One may say that the Dirichlet spectral data allow one to determine the M-function up to a constant matrix. The two preceding theorems can be combined to get the following explicit formula:

$$\begin{split} \mathbf{M}\_{\Gamma}(\lambda) &= \underbrace{-\left(\sum\_{n=1}^{\infty} \frac{\langle \boldsymbol{\psi}\_{n}^{\rm st}|\_{\partial \Gamma}, \cdot \rangle\_{\ell\_{2}(\partial \Gamma)} \boldsymbol{\psi}\_{n}^{\rm st}|\_{\partial \Gamma}}{\lambda\_{n}^{\rm st} - \lambda'}\right)^{-1}}\_{\mathbf{M}\_{\Gamma}(\lambda')} \\ &\quad + \underbrace{\sum\_{n=1}^{\infty} \frac{\lambda - \lambda'}{(\lambda\_{n}^{D} - \lambda)(\lambda\_{n}^{D} - \lambda')} \langle \partial \boldsymbol{\psi}\_{n}^{D} |\_{\partial \Gamma}, \cdot \rangle\_{\ell\_{2}(\partial \Gamma)} \partial \boldsymbol{\psi}\_{n}^{D} |\_{\partial \Gamma}}\_{\mathbf{M}\_{\Gamma}(\lambda) - \mathbf{M}\_{\Gamma}(\lambda')}. \end{split} \tag{17.38}$$

We finish this section by providing a couple of clarifying examples.

**Example 17.8** M-function for the interval *I* = [0*,* 1] with Neumann condition at *x* = 1 and the boundary point *x* = 0.

To illustrate our methods we calculate the M-function **M***<sup>I</sup> (λ)* using formula (17.26). The spectrum and the eigenfunctions of the Neumann Laplacian are

$$\lambda\_n^{\rm st} = \left(\pi(n-1)\right)^2, n = 1, 2, \dots;$$

$$\psi\_1^{\rm st}(\mathbf{x}) = 1, \quad \psi\_n^{\rm st}(\mathbf{x}) = \sqrt{2}\cos\pi(n-1)\mathbf{x}, \quad n = 2, 3, \dots.$$

Substitution into (17.26) gives

$$\begin{split} \mathbf{M}\_{I}(\lambda) &= -\left(-\frac{1}{\lambda} + \sum\_{n=2}^{\infty} \frac{2}{(\pi(n-1))^{2} - \lambda} \right)^{-1} \\ &= -\left(-\frac{1}{\lambda} - \frac{2}{\pi^{2}} \sum\_{n=1}^{\infty} \frac{1}{(k/\pi)^{2} - n^{2}} \right)^{-1} \\ &= -\left(-\frac{1}{\lambda} - \frac{2}{\pi^{2}} \left(\cot k - \frac{1}{k}\right) \frac{\pi^{2}}{2k} \right)^{-1} \\ &= k \tan k, \end{split} \tag{17.39}$$

where we used formula 1.421.3 from [245]:

$$\cot \pi x = \frac{1}{\pi x} + \frac{2x}{\pi} \sum\_{n=1}^{\infty} \frac{1}{x^2 - n^2}.$$

The result, of course, coincides with formula (17.18).
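The same computation can be repeated numerically by truncating the series in (17.26). The sketch below (plain Python with NumPy, not from the text) sums the first $2 \cdot 10^5$ terms at $\lambda = 2$ and compares with $k \tan k$; the truncation error of the series is of order $1/N$, which sets the tolerance.

```python
import math
import numpy as np

lam = 2.0
k = math.sqrt(lam)
# Neumann interval: psi_1 = 1 (lambda = 0), psi_n = sqrt(2) cos(pi (n-1) x),
# so |psi_n(0)|^2 = 2 for n >= 2; the truncated series of (17.26) at x = 0:
n = np.arange(2, 200001)
series = -1.0 / lam + np.sum(2.0 / ((math.pi * (n - 1)) ** 2 - lam))
assert abs(-1.0 / series - k * math.tan(k)) < 1e-3
```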

In particular, we observe that

$$\mathbf{M}\_I(0) = 0.$$

Hence formula (17.38) takes the form

$$\mathbf{M}\_I(\boldsymbol{\lambda}) = 0 + \sum\_{n=1}^{\infty} \frac{\boldsymbol{\lambda}}{\lambda\_n^D (\lambda\_n^D - \boldsymbol{\lambda})} \langle \boldsymbol{\partial} \boldsymbol{\psi}\_n^D |\_{\boldsymbol{\partial}\boldsymbol{\Gamma}}, \cdot \rangle\_{\ell\_2(\partial\boldsymbol{\Gamma})} \boldsymbol{\partial} \boldsymbol{\psi}\_n^D |\_{\boldsymbol{\partial}\boldsymbol{\Gamma}}.$$

It remains to substitute the eigenvalues and eigenfunctions of the Dirichlet-Neumann problem:

$$\begin{aligned} \lambda\_n^D &= \left(\frac{\pi}{2}(2n-1)\right)^2, \quad n = 1, 2, \dots; \\ \psi\_n^{D}(x) &= \sqrt{2}\sin\frac{\pi}{2}(2n-1)x, \quad n = 1, 2, \dots \end{aligned}$$

We use formula 1.421.1 from [245]

$$\tan\frac{\pi}{2}x = \frac{4x}{\pi}\sum\_{n=1}^{\infty}\frac{1}{(2n-1)^2 - x^2}$$

to get

$$\begin{split} \mathbf{M}\_{I}(\lambda) &= \sum\_{n=1}^{\infty} \frac{\lambda}{\left(\frac{\pi}{2}(2n-1)\right)^{2} \left(\left(\frac{\pi}{2}(2n-1)\right)^{2} - \lambda\right)} 2 \left(\frac{\pi}{2}(2n-1)\right)^{2} \\ &= \frac{8\lambda}{\pi^{2}} \sum\_{n=1}^{\infty} \frac{1}{(2n-1)^{2} - (2k/\pi)^{2}} \\ &= k \tan k, \end{split} \tag{17.40}$$

as one should expect.
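The Dirichlet-Neumann expansion can be checked numerically in the same spirit: each term of (17.40) equals $2\lambda/(\lambda_n^D - \lambda)$, and the truncated sum should again approach $k\tan k$. A minimal sketch (names and truncation level are our choices):

```python
import math

def m_interval_dirichlet_series(lam, n_terms=200_000):
    """Sum the Dirichlet-Neumann expansion (17.40): each term equals
    2*lam / (lam_n^D - lam) with lam_n^D = (pi/2*(2n-1))^2."""
    total = 0.0
    for n in range(1, n_terms):
        lam_d = (math.pi / 2 * (2 * n - 1)) ** 2
        total += 2.0 * lam / (lam_d - lam)
    return total

lam = 2.0
approx = m_interval_dirichlet_series(lam)
exact = math.sqrt(lam) * math.tan(math.sqrt(lam))
print(approx, exact)
```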

# **17.3 Hierarchy of M-Functions for Standard Vertex Conditions**

The easiest way to calculate M-functions for graphs is to use the M-functions associated with the single edges. This procedure looks especially simple if standard vertex conditions are assumed at all vertices—only these vertex conditions will be considered in the current section. The functions from the domain of the operator are continuous at the vertices, hence it is possible to introduce the function values at the vertices $\psi(V^m)$, building the $M$-dimensional vector $\vec{\psi}_{\mathbf{v}} = \{\psi(V^j)\}_{j=1}^M$.

The first $M_\partial$ components of this vector coincide with the vector $\vec{\psi}^\partial$. Similarly the entries of the vector $\partial\vec{\psi}_{\mathbf{v}}$ are equal to the sums of normal derivatives at the vertices, and its first $M_\partial$ components coincide with the vector $\partial\vec{\psi}^\partial$. Denoting by $\vec{\psi}^{\rm int}$ and $\partial\vec{\psi}^{\rm int}$ the limiting values at the internal vertices we have

$$
\vec{\psi}_{\mathbf{v}} = \begin{pmatrix} \vec{\psi}^{\partial} \\ \vec{\psi}^{\mathrm{int}} \end{pmatrix}, \quad \partial \vec{\psi}_{\mathbf{v}} = \begin{pmatrix} \partial \vec{\psi}^{\partial} \\ \partial \vec{\psi}^{\mathrm{int}} \end{pmatrix}. \tag{17.41}$$

This leads to the natural division of the $M$-dimensional space $\mathbb{C}^M \ni \vec{\psi}_{\mathbf{v}}, \partial\vec{\psi}_{\mathbf{v}}$ into the orthogonal sum of $M_\partial$- and $(M - M_\partial)$-dimensional subspaces:

$$\underbrace{\mathbb{C}^{M}}_{\ni\, \vec{\psi}_{\mathbf{v}}} = \underbrace{\mathbb{C}^{M_{\partial}}}_{\ni\, \vec{\psi}^{\partial}} \oplus \underbrace{\mathbb{C}^{M-M_{\partial}}}_{\ni\, \vec{\psi}^{\text{int}}}.$$

The graph's M-function $\mathbf{M}_\Gamma(\lambda)$ is defined as the matrix connecting the limiting values of any function $\psi(\lambda, x)$ solving Eq. (17.2) on the edges and satisfying standard conditions at the internal vertices:

$$\mathbf{M}_{\Gamma}(\lambda)\, \vec{\psi}^{\partial} = \partial\vec{\psi}^{\partial}.\tag{17.42}$$

On the other hand, the matrix function $\mathbf{M}^{\rm st}(\lambda)$ introduced in Sect. 5.3.4 describes the relation between the boundary values at all vertices. Denoting by $\mathbf{M}^{\rm st}_{ij}(\lambda)$, $i, j = 1, 2$, the block components of $\mathbf{M}^{\rm st}$ in the decomposition $\mathbb{C}^M = \mathbb{C}^{M_\partial} \oplus \mathbb{C}^{M - M_\partial}$ we write this relation as

$$\begin{cases} \mathbf{M}_{11}^{\text{st}}(\lambda)\vec{\psi}^{\partial} + \mathbf{M}_{12}^{\text{st}}(\lambda)\vec{\psi}^{\text{int}} = \partial\vec{\psi}^{\partial} \\ \mathbf{M}_{21}^{\text{st}}(\lambda)\vec{\psi}^{\partial} + \mathbf{M}_{22}^{\text{st}}(\lambda)\vec{\psi}^{\text{int}} = \partial\vec{\psi}^{\text{int}} \end{cases}.\tag{17.43}$$

Taking into account that standard conditions at the internal vertices imply that

$$
\partial \vec{\psi}^{\rm int} = 0
$$

the second equation in (17.43) gives us

$$\mathbf{M}_{21}^{\mathrm{st}}(\lambda)\vec{\psi}^{\partial} + \mathbf{M}_{22}^{\mathrm{st}}(\lambda)\vec{\psi}^{\mathrm{int}} = 0.$$

The block $\mathbf{M}^{\rm st}_{22}(\lambda)$ is invertible for $\operatorname{Im}\lambda \neq 0$: otherwise it would have a nontrivial kernel, implying that the self-adjoint Schrödinger operator on the same metric graph with Dirichlet conditions at $V^1, V^2, \dots, V^{M_\partial}$ and standard conditions at $V^{M_\partial+1}, \dots, V^M$ has a non-real eigenvalue.

Invertibility of $\mathbf{M}^{\rm st}_{22}(\lambda)$ means that the vector $\vec{\psi}^{\rm int}$ is determined by $\vec{\psi}^\partial$,

$$\vec{\psi}^{\rm int} = -\left(\mathbf{M}_{22}^{\rm st}(\lambda)\right)^{-1}\mathbf{M}_{21}^{\rm st}(\lambda)\vec{\psi}^{\partial},$$

leading to the explicit Frobenius-Schur formula for the graph's M-function:

$$\mathbf{M}\_{\Gamma}(\lambda) = \mathbf{M}\_{11}^{\mathrm{st}}(\lambda) - \mathbf{M}\_{12}^{\mathrm{st}}(\lambda) \left(\mathbf{M}\_{22}^{\mathrm{st}}(\lambda)\right)^{-1} \mathbf{M}\_{21}^{\mathrm{st}}(\lambda). \tag{17.44}$$

This formula will be useful in the abstract analysis of inverse problems for graphs, but it is often not very practical when one is interested in calculating the spectrum or the graph M-function for a particular graph.
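In matrix terms, (17.44) is nothing but the Schur complement of the block $\mathbf{M}^{\rm st}_{22}$. A minimal numerical sketch (the function name and the $2\times 2$ test data are our illustrative choices, not from the text):

```python
import numpy as np

def contract_to_contact_set(M_full, n_contact):
    """Frobenius-Schur reduction (17.44): eliminate the internal
    degrees of freedom, i.e. return M11 - M12 M22^{-1} M21."""
    M11 = M_full[:n_contact, :n_contact]
    M12 = M_full[:n_contact, n_contact:]
    M21 = M_full[n_contact:, :n_contact]
    M22 = M_full[n_contact:, n_contact:]
    return M11 - M12 @ np.linalg.solve(M22, M21)

# one contact vertex, one internal vertex
M_st = np.array([[2.0, 1.0],
                 [1.0, 4.0]])
M_gamma = contract_to_contact_set(M_st, 1)
print(M_gamma)      # 2 - 1 * (1/4) * 1 = 1.75
```

The reduced matrix relates $\vec{\psi}^\partial$ to $\partial\vec{\psi}^\partial$ exactly as (17.42) requires, once $\vec{\psi}^{\rm int}$ has been eliminated.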

In the proof of (17.44) we did not use that the original M-function $\mathbf{M}^{\rm st}(\lambda)$ is associated with **all** vertices in $\Gamma$. It is enough to know the M-function associated with a larger contact set $\partial'\Gamma$ containing all vertices from $\partial\Gamma$. We summarise this observation as:

**Theorem 17.9** *Let $\Gamma$ be a finite compact metric graph with the contact sets $\partial\Gamma$ and $\partial'\Gamma$ satisfying*

$$
\partial \Gamma \subset \partial' \Gamma. \tag{17.45}
$$

*Then the M-function* $\mathbf{M}'(\lambda)$ *associated with the larger contact set* $\partial'\Gamma$ *determines the M-function* $\mathbf{M}_\Gamma(\lambda)$ *associated with the smaller contact set* $\partial\Gamma$:

$$\mathbf{M}\_{\Gamma}(\lambda) = \mathbf{M}\_{11}^{\prime}(\lambda) - \mathbf{M}\_{12}^{\prime}(\lambda) \left(\mathbf{M}\_{22}^{\prime}(\lambda)\right)^{-1} \mathbf{M}\_{21}^{\prime}(\lambda),\tag{17.46}$$

*where* $\mathbf{M}'_{ij}(\lambda)$, $i, j = 1, 2$, *are the blocks of* $\mathbf{M}'(\lambda)$ *in the decomposition* $\mathbb{C}^{M'} = \mathbb{C}^{M} \oplus \mathbb{C}^{M' - M}$ *with* $M' = |\partial'\Gamma|$, $M = |\partial\Gamma|$.

**Example 17.10** Let us calculate the M-function for the compact lasso graph $\Gamma_{(2.2)}$ depicted in Fig. 17.8, assuming that the Schrödinger operator is determined by the standard vertex conditions.

The graph is formed by the edges $[x_1, x_2]$ and $[x_3, x_4]$ joined together at the vertices $V^1 = \{x_4\}$ and $V^2 = \{x_1, x_2, x_3\}$. The first edge forms the loop and the second edge is the outgrowth. Let us denote the corresponding edge M-functions by $M^{1,2}(\lambda)$, each being a $2 \times 2$ matrix function. To build up the function $\mathbf{M}^{\rm st}$ we need to write the M-functions associated with the edges in the basis of the vertices:

$$\mathbf{M}^{1}(\lambda) = \begin{pmatrix} 0 & 0\\ 0 & M^{1}_{11} + M^{1}_{22} + M^{1}_{12} + M^{1}_{21} \end{pmatrix}; \quad \mathbf{M}^{2}(\lambda) = \begin{pmatrix} M^{2}_{22} & M^{2}_{21} \\ M^{2}_{12} & M^{2}_{11} \end{pmatrix}.$$

The scalar M-function associated with the loop is obtained from the M-function for the interval by summing up its entries. Writing it in the basis of the vertices we get a matrix function with all entries except one equal to zero.

Then the $2 \times 2$ M-function $\mathbf{M}^{\rm st}$ is given by

$$\mathbf{M}^{\rm st}(\lambda) = \mathbf{M}^{1}(\lambda) + \mathbf{M}^{2}(\lambda) = \begin{pmatrix} M_{22}^{2} & M_{21}^{2} \\ M_{12}^{2} & M_{11}^{1} + M_{22}^{1} + M_{12}^{1} + M_{21}^{1} + M_{11}^{2} \end{pmatrix}. \tag{17.47}$$

Formula (17.44) gives the graph's M-function with the contact set $\partial\Gamma = \{V^1\}$:

$$\mathbf{M}\_{\Gamma}(\lambda) = M\_{22}^2 - M\_{21}^2 \left( M\_{11}^1 + M\_{22}^1 + M\_{12}^1 + M\_{21}^1 + M\_{11}^2 \right)^{-1} M\_{12}^2,\tag{17.48}$$

which is a scalar Herglotz-Nevanlinna function. We see immediately that the M-function depends just on the sum $M^1_{11} + M^1_{22} + M^1_{12} + M^1_{21}$, not on the particular form of the entries in $M^1(\lambda)$. It will be shown later that precisely this feature of the lasso's M-function makes it impossible to solve the inverse problem in the case of standard vertex conditions.

In the case of the Laplace operator (zero magnetic and electric potentials) we may use formulas (5.55) to get

$$\mathbf{M}^{\text{st}}(\lambda) = \begin{pmatrix} -k \cot k\ell\_2 & \frac{k}{\sin k\ell\_2} \\ \frac{k}{\sin k\ell\_2} & -2k \cot k\ell\_1 + \frac{2k}{\sin k\ell\_1} - k \cot k\ell\_2 \end{pmatrix} \tag{17.49}$$

and

$$\mathbf{M}\_{\Gamma}(\lambda) = -k \cot k \ell\_2 - \left(\frac{k}{\sin k \ell\_2}\right)^2 \left(-2k \cot k \ell\_1 + \frac{2k}{\sin k \ell\_1} - k \cot k \ell\_2\right)^{-1} . \tag{17.50}$$
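The condensation of the matrix (17.49) to the contact vertex can be evaluated numerically, which also illustrates the Herglotz property of the resulting scalar function. A sketch with sample edge lengths of our choosing (the lengths and sample points are not from the text):

```python
import numpy as np

def m_lasso(lam, l1=1.0, l2=0.7):
    """Evaluate the entries of M^st from (17.49) and condense to the
    contact vertex V^1 by the Schur complement, i.e. formula (17.50)."""
    k = np.sqrt(complex(lam))
    m11 = -k / np.tan(k * l2)
    m12 = k / np.sin(k * l2)
    m22 = -2 * k / np.tan(k * l1) + 2 * k / np.sin(k * l1) - k / np.tan(k * l2)
    return m11 - m12 ** 2 / m22

val_real = m_lasso(1.69)            # real regular point: value should be real
val_upper = m_lasso(2.0 + 1.0j)     # upper half-plane: Im should be positive
print(val_real, val_upper)
```

Note that every entry depends on $k$ through even functions only, so the result is independent of the branch of $k = \sqrt{\lambda}$.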

**Problem 76** Show that formulas (17.10) and (17.50) are identical, provided $\ell = \ell_1$, $s = \ell_2$.

**Example 17.11** Calculation of the M-function for the Laplacian on the equilateral star graph.

Let $\mathcal{S}_d$ be the star graph formed by $d$ edges of length $\ell$. Consider the Laplace operator on $\mathcal{S}_d$ defined on the functions satisfying the most general vertex conditions (3.21) at the central vertex:

$$i(\mathcal{S} - I)\vec{u} = (\mathcal{S} + I)\partial\vec{u}.$$

To calculate the M-function we need to solve the eigenfunction equation

$$-\vec{u}''(x) = \lambda \vec{u}(x), \quad \lambda = k^2,$$

subject to vertex conditions at the origin. Every solution to this differential equation can be written as

$$
\vec{u} = e^{-ikx}\vec{b} + e^{ikx}\vec{a}.
$$

Taking into account the vertex conditions we arrive at (3.13):

$$
\vec{a} = \mathcal{S}\_{\mathsf{V}}(k)\vec{b} = \frac{(k+1)\mathcal{S} + (k-1)I}{(k-1)\mathcal{S} + (k+1)I}\vec{b}.
$$

It follows that the boundary values of such a solution are related via the following matrix:

$$\begin{aligned} \vec{u}^{\partial} &= e^{-ik\ell}\vec{b} + e^{ik\ell}\vec{a} = \left(e^{-ik\ell} + e^{ik\ell}S_{\mathbf{V}}(k)\right)\vec{b}, \\ \partial\vec{u}^{\partial} &= ike^{-ik\ell}\vec{b} - ike^{ik\ell}\vec{a} = ik\left(e^{-ik\ell} - e^{ik\ell}S_{\mathbf{V}}(k)\right)\vec{b}, \end{aligned}$$

$$\Rightarrow \mathbf{M}(\lambda) = ik\frac{e^{-ik\ell} - e^{ik\ell}S\_{\mathbf{V}}(k)}{e^{-ik\ell} + e^{ik\ell}S\_{\mathbf{V}}(k)}. \tag{17.51}$$
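Formula (17.51) is straightforward to evaluate numerically. In the sketch below we take standard conditions at the central vertex, for which the vertex scattering matrix is $k$-independent, $S_{\mathbf{V}} = \frac{2}{d}J - I$ ($J$ the all-ones matrix), and check the Herglotz property; $d = 3$ and $\ell = 1$ are our sample choices:

```python
import numpy as np

d, ell = 3, 1.0
S = (2.0 / d) * np.ones((d, d)) - np.eye(d)   # scattering matrix, standard conditions

def m_star(lam):
    """Evaluate formula (17.51) for the equilateral star graph."""
    k = np.sqrt(complex(lam))
    A = np.exp(-1j * k * ell) * np.eye(d) - np.exp(1j * k * ell) * S
    B = np.exp(-1j * k * ell) * np.eye(d) + np.exp(1j * k * ell) * S
    return 1j * k * A @ np.linalg.inv(B)

M = m_star(2.0 + 1.0j)
im_part = (M - M.conj().T) / 2j
print(np.linalg.eigvalsh(im_part))    # all positive in the upper half-plane
M_real = m_star(2.0)                  # at a real regular point M is Hermitian
```

Diagonalising $S$ shows that the eigenvalues of $\mathbf{M}(\lambda)$ are $k\tan k\ell$ (on the constant vector) and $-k\cot k\ell$, which anticipates the simplification asked for in Problem 77.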

**Problem 77** Consider the case of the star graph with standard vertex conditions. Simplify formula (17.51) and explain the mechanism behind the simplification.

**Problem 78** Consider the case of the equilateral star graph with standard vertex conditions. Simplify formulas (17.26) and (17.37) and calculate the corresponding M-function.

**Problem 79** Describe the relation between the graph and the edge M-functions in the case of arbitrary Hermitian vertex conditions at inner vertices.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 18 M-Functions: Properties and First Applications**

# **18.1 M-Function as a Matrix-Valued Herglotz-Nevanlinna Function**

We start with the definition of matrix-valued Herglotz-Nevanlinna functions.

**Definition 18.1** A matrix-valued function $\mathbf{M}(\lambda)$, $\lambda \in \mathbb{C}$, is called Herglotz-Nevanlinna if and only if

(1) it is analytic outside the real axis, $\operatorname{Im}\lambda \neq 0$;

(2) its imaginary part<sup>1</sup> is non-negative in the upper half-plane,

$$\operatorname{Im}\lambda > 0 \Rightarrow \operatorname{Im}\mathbf{M}(\lambda) \geq 0;\tag{18.1}$$

(3) it is symmetric with respect to the real axis

$$\mathbf{M}^\*(\lambda) = \mathbf{M}(\overline{\lambda}).\tag{18.2}$$

Slightly different definitions of Herglotz-Nevanlinna functions can be found in the literature: for example, one may consider **M***(λ)* defined in the upper half-plane only, but then it is natural to extend it to the lower half-plane using (18.2). We refer to [239, 284] for comprehensive surveys on Herglotz-Nevanlinna functions.

We shall always consider any Herglotz-Nevanlinna function defined on the maximal domain, even for real values of *λ* if the corresponding non-tangential limit exists.

<sup>1</sup> Let $M$ be a square complex matrix. Then $\operatorname{Re} M = \frac{1}{2}(M + M^*)$ and $\operatorname{Im} M = \frac{1}{2i}(M - M^*)$ denote its real and imaginary parts, respectively. The imaginary part is non-negative if the quadratic form of $\operatorname{Im} M$ is non-negative.


All singularities of a Herglotz-Nevanlinna function lie on the real axis $\lambda \in \mathbb{R}$, since it is not assumed that the function is analytic there. If a function $\mathbf{M}$ is singular at a certain point $\lambda_0$, this does not necessarily mean that it is in some sense infinite there: the function may have a jump at $\lambda_0$, so that the limits

$$\mathbf{M}(\lambda_0 \pm i0) := \lim_{\epsilon \searrow 0} \mathbf{M}(\lambda_0 \pm i\epsilon)$$

exist but are different.

The subclass of Herglotz-Nevanlinna functions we are going to use can be called **Wigner functions** [344, 502, 503]; it is characterised by the additional requirement that the function is analytic on $\mathbb{C}$ outside a discrete set of real singularities. It will be shown that in this case the singularities are nothing else than the eigenvalues of the Dirichlet operator $L^{\rm D}$, as formula (17.37) suggests, but certain care is needed; therefore we postpone this discussion until the end of the current section.

Let us prove now that the graph's M-function is a matrix-valued Herglotz-Nevanlinna function. One has to show that the matrix is analytic, has full rank, and that its imaginary part is non-negative for $\operatorname{Im}\lambda > 0$. In order to prove the first two statements we are going to use the explicit formula (17.7). This step might look easy, but one should remember that deriving the explicit formula required the theory of singular perturbations [23]. Once existence is proven, showing that the M-function has non-negative imaginary part is a relatively easy exercise—one either uses the same explicit formula or integration by parts. The second approach does not require any knowledge of the theory of singular perturbations, therefore both proofs will be presented.

**Theorem 18.2** *Let $\Gamma$ be a compact finite metric graph with the contact set $\partial\Gamma$ formed by $M_\partial$ vertices. Then the graph's M-function $\mathbf{M}_\Gamma(\lambda)$ is a matrix-valued Herglotz-Nevanlinna function.*

*Proof* We have already established that the solution to the Dirichlet problem (17.8) is unique. The existence of the solution also follows from formula (17.7), which provides an explicit expression for $\mathbf{M}_\Gamma(\lambda)$ in terms of the eigenfunctions of $L^{\rm st}(\Gamma)$. In fact the formula shows that the M-function is given by a sum of projectors with the coefficients $\frac{1}{\lambda_n^{\rm st} - \lambda}$. The matrix-valued function determined by this formula is *analytic outside the real axis*, since the eigenvalues $\lambda_n^{\rm st}$ are real and the series is absolutely convergent.

To prove that the *imaginary part is positive in the upper half-plane* Im *λ >* 0, one may use the same formula and take into account that

$$\operatorname{Im} \frac{1}{\lambda\_n^{\rm st} - \lambda} = \frac{1}{|\lambda\_n^{\rm st} - \lambda|^2} \operatorname{Im} \lambda,$$

which implies that the imaginary part of **M** is a sum of projectors with positive coefficients.
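The projector argument can be illustrated numerically: any finite sum of rank-one terms with real poles, mimicking the structure of (17.26), automatically enjoys both Herglotz properties. In the sketch below the "eigenvalues" and "boundary traces" are random stand-ins, not spectra of any particular graph:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_n = np.sort(rng.uniform(0.0, 50.0, size=8))   # stand-in real eigenvalues
traces = rng.normal(size=(8, 2))                  # stand-in boundary traces

def herglotz_sum(lam):
    """Sum of rank-one projectors with coefficients 1/(lam_n - lam)."""
    M = np.zeros((2, 2), dtype=complex)
    for ln, t in zip(lam_n, traces):
        M += np.outer(t, t) / (ln - lam)
    return M

lam = 3.0 + 2.0j
M = herglotz_sum(lam)
im_part = (M - M.conj().T) / 2j                   # positive definite here
sym_defect = np.abs(herglotz_sum(np.conj(lam)) - M.conj().T).max()
print(np.linalg.eigvalsh(im_part), sym_defect)    # symmetry (18.2) holds too
```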

Positivity of the M-function's imaginary part can also be proven directly via integration by parts. Let $\psi(\lambda, \cdot)$ be a solution to the Dirichlet problem (17.8); then we have

$$\begin{split} \lambda \|\psi(\lambda)\|^2_{L_2(\Gamma)} &= \langle \psi(\lambda), \tau_{q,a} \psi(\lambda) \rangle_{L_2(\Gamma)} \\ &= \langle \vec{\psi}^\partial, \partial \vec{\psi}^\partial \rangle_{\mathbb{C}^{M_\partial}} + \langle \vec{\psi}^{\text{int}}, \partial \vec{\psi}^{\text{int}} \rangle \\ &\quad + \sum_{n=1}^N \int_{E_n} \left( |\psi'(\lambda, x) - ia(x) \psi(\lambda, x)|^2 + q(x) |\psi(\lambda, x)|^2 \right) dx \\ &= \langle \vec{\psi}^\partial, \partial \vec{\psi}^\partial \rangle_{\mathbb{C}^{M_\partial}} + \sum_{m=M_\partial+1}^M \langle \vec{\psi}^m, A_{S^m} \vec{\psi}^m \rangle_{\mathbb{C}^{d_m}} \\ &\quad + \sum_{n=1}^N \int_{E_n} \left( |\psi'(\lambda, x) - ia(x) \psi(\lambda, x)|^2 + q(x) |\psi(\lambda, x)|^2 \right) dx. \end{split}$$

Taking into account that the operators $A_{S^m}$ are Hermitian,

$$\operatorname{Im} \langle \vec{\psi}^m, A_{S^m} \vec{\psi}^m \rangle_{\mathbb{C}^{d_m}} = 0,$$

and the potential *q* is real-valued

$$\operatorname{Im} \int_{E_n} \left( \left| \psi'(\lambda, x) - ia(x)\psi(\lambda, x) \right|^2 + q(x) \left| \psi(\lambda, x) \right|^2 \right) dx = 0,$$

we conclude that

$$\operatorname{Im}\lambda\, \|\psi(\lambda)\|^2 = \operatorname{Im}\left\langle\vec{\psi}^{\partial}, \partial\vec{\psi}^{\partial}\right\rangle_{\mathbb{C}^{M_\partial}} = \langle\vec{\psi}^{\partial}, \operatorname{Im}\mathbf{M}_{\Gamma}(\lambda)\vec{\psi}^{\partial}\rangle_{\mathbb{C}^{M_\partial}}.$$

The latter equality implies that the matrix $\mathbf{M}_\Gamma(\lambda)$ has non-negative imaginary part in the upper half-plane of $\lambda$. Of course we need to take into account that $\vec{\psi}^\partial$ is arbitrary (the Dirichlet problem (17.8) is always solvable).

To prove the *symmetry of the M-function* consider any two solutions $\psi(\lambda)$ and $\hat\psi(\overline\lambda)$ of the Dirichlet problem (17.8) for the spectral parameters $\lambda$ and $\overline\lambda$. Then it holds:

$$\begin{split} 0 &= \langle \tau_{q,a} \hat{\psi}(\overline{\lambda}), \psi(\lambda) \rangle_{L_{2}(\Gamma)} - \langle \hat{\psi}(\overline{\lambda}), \tau_{q,a} \psi(\lambda) \rangle_{L_{2}(\Gamma)} \\ &= \langle \partial \vec{\hat{\psi}}^{\partial}(\overline{\lambda}), \vec{\psi}^{\partial}(\lambda) \rangle_{\mathbb{C}^{M_{\partial}}} - \langle \vec{\hat{\psi}}^{\partial}(\overline{\lambda}), \partial \vec{\psi}^{\partial}(\lambda) \rangle_{\mathbb{C}^{M_{\partial}}} \\ &= \langle \mathbf{M}_{\Gamma}(\overline{\lambda}) \vec{\hat{\psi}}^{\partial}, \vec{\psi}^{\partial} \rangle_{\mathbb{C}^{M_{\partial}}} - \langle \vec{\hat{\psi}}^{\partial}, \mathbf{M}_{\Gamma}(\lambda) \vec{\psi}^{\partial} \rangle_{\mathbb{C}^{M_{\partial}}}, \end{split}$$

which implies $\mathbf{M}_\Gamma(\lambda) = \mathbf{M}_\Gamma^*(\overline\lambda)$, since the vectors $\vec{\psi}^\partial$ and $\vec{\hat\psi}^\partial$ are again arbitrary.

Formulas (17.26) and (17.37) indicate that the eigenvalues of the operators on $\Gamma$ can be seen from the M-function: the eigenvalues of $L^{\rm st}$ appear as certain zeroes of $\mathbf{M}_\Gamma$, while the spectrum of the Dirichlet operator $L^{\rm D}$ corresponds to the singularities. This connection is particularly clear when $\Gamma$ is given by just one interval. Let us return to Example 17.8, where the M-function was calculated for the interval $I = [0, 1]$ with Neumann condition at $x = 1$ and the contact set given by $x = 0$. The M-function was expressed using the spectra of the Neumann-Neumann and Dirichlet-Neumann problems. In the first approach one gets (17.39)

$$-\mathbf{M}\_I(\lambda)^{-1} = -\frac{1}{\lambda} + \sum\_{n=2}^{\infty} \frac{2}{(\pi(n-1))^2 - \lambda}.$$

The singularities of $-\mathbf{M}_I(\lambda)^{-1}$, i.e. zeroes of $\mathbf{M}_I(\lambda)$, occur when $\lambda = \pi^2 n^2$, $n = 0, 1, \dots$—on the spectrum of the Neumann-Neumann problem. Similarly, the second formula (17.40)

$$\mathbf{M}_I(\lambda) = \sum_{n=1}^{\infty} \frac{2\lambda}{\left(\frac{\pi}{2}(2n-1)\right)^2 - \lambda}$$

implies that the singularities of **M***<sup>I</sup> (λ)* are situated at the eigenvalues of the Dirichlet-Neumann problem.

This observation cannot be extended to arbitrary graphs without any modification, for at least two reasons:

- the M-function is in general matrix-valued, so one first has to explain what should be understood as its zeroes and singularities;
- not all eigenfunctions have non-trivial traces on the contact set, so not every eigenvalue is visible in the M-function.
Our immediate goal now is to discuss how to extend these observations for arbitrary graphs.

In the first step we need to define what should be understood as zeroes and singularities of a matrix-valued function. We are going to use the following natural definitions:

**Definition 18.3** A matrix-valued function $\mathbf{M}(\lambda)$ has a **singularity** at $\lambda = \mu$ if and only if there exists a vector $\vec{b}$ such that

$$\|\mathbf{M}(\lambda)b\| \to \infty \text{ as } \lambda \to \mu. \tag{18.3}$$

The multiplicity $m_{\rm sing}(\mu)$ of the singularity is equal to the dimension of the subspace spanned by all vectors $\vec{b}$ satisfying (18.3).

Of course, this definition requires that the function $\mathbf{M}$ is defined in a certain neighbourhood of $\mu$, but maybe not at $\lambda = \mu$ itself.

**Definition 18.4** A matrix-valued function $\mathbf{M}(\lambda)$ has a **generalised zero** at $\mu$ if and only if $-\mathbf{M}^{-1}(\lambda)$ has a singularity at $\mu$. The multiplicity $m_{\rm zero}(\mu)$ of the zero coincides with the multiplicity $m_{\rm sing}(\mu)$ of the corresponding singularity of $-\mathbf{M}^{-1}(\lambda)$.

Note that this definition does not necessarily imply that **M** is well-defined at *λ* = *μ*, and hence it may be inappropriate to require that the kernel of **M***(μ)* is non-trivial. On the other hand, if **M***(μ)* is well-defined, then *μ* is a generalised zero if and only if the kernel of **M***(μ)* is non-trivial.

One might wonder what the reason is that to define generalised zeroes we use the inverse matrix and do not try to find a vector $\vec{b} \neq \vec{0}$ such that

$$\|\mathbf{M}(\lambda)b\| \to 0,\text{ as }\lambda \to \mu\tag{18.4}$$

(as was done for the singularity in (18.3)). This criterion works in many cases but, as the following example shows, cannot always be used.

**Example 18.5** Consider the 2 × 2 matrix

$$\mathbf{M}(\lambda) = \begin{pmatrix} -\frac{1}{\lambda} & 1\\ 1 & \lambda \end{pmatrix}.$$

The matrix function $\mathbf{M}(\lambda)$ is a sum of three Herglotz-Nevanlinna functions

$$\begin{pmatrix} -\frac{1}{\lambda} & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 0 & \lambda \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

and hence is a Herglotz-Nevanlinna function.

Point *λ* = 0 is a singular point, since

$$\lim_{\lambda \to 0} \left\|\mathbf{M}(\lambda) \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\| = \lim_{\lambda \to 0} \sqrt{\frac{1}{\lambda^2}+1} = \infty.$$

Point *λ* = 0 is a generalised zero, since for the inverse matrix

$$\mathbf{M}^{-1}(\lambda) = \frac{-1}{2} \begin{pmatrix} \lambda & -1 \\ -1 & -\frac{1}{\lambda} \end{pmatrix}$$

we have

$$\lim_{\lambda \to 0} \left\|\mathbf{M}^{-1}(\lambda) \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\| = \lim_{\lambda \to 0} \frac{1}{2}\sqrt{1 + \frac{1}{\lambda^2}} = \infty.$$

On the other hand, it is impossible to find a vector $\vec{b} = (b_1, b_2) \neq (0, 0)$ such that (18.4) holds. Indeed, we have

$$\mathbf{M}(\lambda)\vec{b} = \begin{pmatrix} -\frac{1}{\lambda}b\_1 + b\_2\\ b\_1 + \lambda b\_2 \end{pmatrix}.$$

Looking at the first component we see that the limit is zero only if both coordinates of $\vec{b}$ are zero. Hence $\vec{b} = \vec{0}$, which is excluded.

Instead of using (18.4) for the definition of generalised zeroes, one may require that there exists a normalised sequence $\vec{b}(\lambda)$, $\|\vec{b}(\lambda)\| = 1$, such that

$$\lim\_{\lambda \to \mu} \langle \tilde{b}(\lambda), \mathbf{M}(\lambda)\tilde{b}(\lambda) \rangle = 0$$

holds. For the matrix $\mathbf{M}$ above, the sequence can be chosen as $\vec{b}(\lambda) = (\lambda, \sqrt{1 - \lambda^2})$, $|\lambda| < 1$. For this choice of the vector sequence we have

$$
\langle \vec{b}(\lambda), \mathbf{M}(\lambda)\vec{b}(\lambda) \rangle = -\frac{1}{\lambda}\lambda^2 + 2\lambda\sqrt{1-\lambda^2} + \lambda(1-\lambda^2) \to 0.
$$
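The behaviour described in this example is easy to confirm numerically. The sketch below (sample values of $\lambda$ are our choice) shows that the quadratic form along the moving vector $\vec{b}(\lambda)$ vanishes even though no fixed vector is annihilated:

```python
import numpy as np

def M(lam):
    """The 2x2 matrix of Example 18.5."""
    return np.array([[-1.0 / lam, 1.0],
                     [1.0, lam]])

lam = 1e-3
b = np.array([lam, np.sqrt(1.0 - lam ** 2)])      # the moving vector b(lam)
quad_form = b @ M(lam) @ b                        # tends to 0 as lam -> 0
fixed_norm = np.linalg.norm(M(lam) @ np.array([1.0, 0.0]))   # blows up
print(quad_form, fixed_norm)
```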

In our opinion Definition 18.4 is easier to work with, and that definition will be used in the sequel.<sup>2</sup>

Using the definitions above we may provide an explicit characterisation of certain parts of the spectra of $L^{\rm st}$ and $L^{\rm D}$ via the corresponding M-function.

**Theorem 18.6** *The M-function determines certain eigenvalues of the standard and Dirichlet Schrödinger operators $L^{\rm st}(\Gamma)$ and $L^{\rm D}(\Gamma)$ as follows:*

- *the generalised zeroes of $\mathbf{M}_\Gamma(\lambda)$ are precisely those eigenvalues $\lambda_n^{\rm st}$ of $L^{\rm st}(\Gamma)$ possessing an eigenfunction with non-trivial trace on $\partial\Gamma$;*
- *the singularities of $\mathbf{M}_\Gamma(\lambda)$ are precisely those eigenvalues $\lambda_n^{\rm D}$ of $L^{\rm D}(\Gamma)$ possessing an eigenfunction with non-trivial trace of the normal derivatives on $\partial\Gamma$;*
- *between the singularities the M-function is a monotonically increasing function of $\lambda \in \mathbb{R}$.*
*Proof* As in the example above, our proof will be based on formulas (17.26) and (17.37). Let us first re-write (17.26) as

$$-\mathbf{M}\_{\Gamma}(\boldsymbol{\lambda})^{-1} = \sum\_{n=1}^{\infty} \frac{\langle \boldsymbol{\psi}\_{n}^{\mathrm{st}} |\_{\partial \Gamma}, \cdot \rangle\_{\mathbf{C}^{M\_{\partial}}} \boldsymbol{\psi}\_{n}^{\mathrm{st}} |\_{\partial \Gamma}}{\lambda\_{n}^{\mathrm{st}} - \boldsymbol{\lambda}}. \tag{18.5}$$

The series giving the entries of the matrix-valued function are absolutely convergent for any $\lambda \neq \lambda_n^{\rm st}$. Hence the singularities of $-\mathbf{M}_\Gamma(\lambda)^{-1}$ may occur only at the points $\lambda = \lambda_n^{\rm st}$—the

<sup>2</sup> The author is grateful to A. Luger for providing this explicit example.

spectrum of the standard operator on $\Gamma$. Choosing $\vec{b} = \psi_n^{\rm st}|_{\partial\Gamma}$ we see that

$$\|\mathbf{M}_{\Gamma}^{-1}(\lambda)\vec{b}\| \sim \frac{\|\psi_{n}^{\rm st}|_{\partial\Gamma}\|^{2}}{\lambda_{n}^{\rm st} - \lambda} \to \infty, \quad \text{as } \lambda \to \lambda_{n}^{\rm st}.$$

Moreover, the multiplicity $m_{\rm zero}(\lambda_n^{\rm st})$ of the generalised zero coincides with the dimension of the linear span of the traces on the contact set $\partial\Gamma$ of all eigenfunctions corresponding to the eigenvalue $\lambda_n^{\rm st}$:

$$m\_{\text{zero}}(\lambda\_n^{\text{st}}) = \text{dim span} \left\{ \psi\_j |\_{\partial \Gamma} \right\}\_{\lambda\_j = \lambda\_n^{\text{st}}}.\tag{18.6}$$

It is clear that the multiplicity of $\lambda_n^{\rm st}$ as a zero of $\mathbf{M}_\Gamma$ cannot exceed the multiplicity of $\lambda_n^{\rm st}$ as an eigenvalue of $L^{\rm st}$, but it is easy to construct examples where these multiplicities are different: take any $\lambda_n^{\rm st}$ such that there exists an eigenfunction $\psi_n^{\rm st}$ with zero trace on $\partial\Gamma$: $\psi_n^{\rm st}|_{\partial\Gamma} = \vec{0}$. An eigenvalue $\lambda_n^{\rm st}$ can be detected from the M-function only if at least one of the corresponding eigenfunctions has non-zero trace on the contact set $\partial\Gamma$.

The singularities of **M** can be analysed in a similar way. Let us re-write (17.37) as

$$\mathbf{M}\_{\Gamma}(\boldsymbol{\lambda}) = \mathbf{M}\_{\Gamma}(\boldsymbol{\lambda}') + \sum\_{n=1}^{\infty} \frac{\boldsymbol{\lambda} - \boldsymbol{\lambda}'}{(\boldsymbol{\lambda}\_{n}^{D} - \boldsymbol{\lambda})(\boldsymbol{\lambda}\_{n}^{D} - \boldsymbol{\lambda}')} \langle \boldsymbol{\partial}\boldsymbol{\psi}\_{n}^{D}|\_{\partial\Gamma}, \cdot \rangle\_{\ell\_{2}(\partial\Gamma)} \partial\boldsymbol{\psi}\_{n}^{D}|\_{\partial\Gamma}.\tag{18.7}$$

The singularities occur, as above, only at the points $\lambda = \lambda_n^{\rm D}$—the spectrum of the Dirichlet operator on $\Gamma$. The multiplicity of the singularity can be calculated using

$$m\_{\rm sing}(\lambda\_n^D) = \dim \text{span} \left\{ \partial \psi\_j^D \vert\_{\partial \Gamma} \right\}\_{\lambda\_j = \lambda\_n^D}. \tag{18.8}$$

As above, a Dirichlet eigenvalue can be detected from the M-function only if there exists an eigenfunction with non-zero trace $\partial\psi_j^{\rm D}|_{\partial\Gamma}$, and the multiplicity of the singularity does not exceed the multiplicity of the Dirichlet eigenvalue.

Monotonicity of the matrix function can be proven by differentiating (18.7)

$$\frac{\partial}{\partial \lambda} \mathbf{M}\_{\Gamma}(\lambda) = \sum\_{n=1}^{\infty} \frac{1}{(\lambda\_n^D - \lambda)^2} \langle \partial \psi\_n^D |\_{\partial \Gamma}, \cdot \rangle\_{\ell\_2(\partial \Gamma)} \partial \psi\_n^D |\_{\partial \Gamma} \tag{18.9}$$

implying that for non-singular values of $\lambda$ the matrix-valued function $\frac{\partial}{\partial\lambda}\mathbf{M}_\Gamma(\lambda)$ is a sum of projectors with positive coefficients.
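The monotonicity between singularities is easy to observe numerically for the interval example, where $\mathbf{M}_I(\lambda) = k\tan k$ and the singularities sit at the Dirichlet-Neumann eigenvalues $\left(\frac{\pi}{2}(2n-1)\right)^2$. A short sketch (the sample points are our choice):

```python
import math

def m_I(lam):
    """Scalar M-function of the interval example: k tan k, lam = k^2."""
    k = math.sqrt(lam)
    return k * math.tan(k)

# Sample points between the consecutive singularities
# (pi/2)^2 ~ 2.47 and (3*pi/2)^2 ~ 22.2: the values must increase.
pts = [3.0, 5.0, 9.0, 15.0, 20.0]
vals = [m_I(p) for p in pts]
print(vals)
```

Note that the function runs from $-\infty$ up to $+\infty$ on this interval, passing through the generalised zero $\lambda = \pi^2$ on the way.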

Note that this theorem does not imply that **all** eigenvalues of the standard and Dirichlet operators can be determined from the M-function: only the eigenvalues with non-trivial traces on the contact set can be detected. This fact may sound disappointing, but we are going to show that at least the lowest eigenvalues can always be determined, provided the vertex conditions at the internal vertices are generalised delta conditions. The corresponding operators with standard and Dirichlet conditions on the contact set will be denoted by $L_q^{\alpha^{\rm int},\rm st}(\Gamma)$ and $L_q^{\alpha^{\rm int},\rm D}(\Gamma)$ respectively.

**Theorem 18.7** *Let $\mathbf{M}_\Gamma(\lambda)$ be the M-function for the Schrödinger operator $L_q^{\alpha^{\rm int},\rm st}(\Gamma)$ on the finite compact metric graph $\Gamma$ with the contact set $\partial\Gamma$ and delta or generalised delta-couplings at the internal vertices. Then the lowest eigenvalues of the standard and Dirichlet operators on $\Gamma$ can be detected from $\mathbf{M}_\Gamma$:*

- *the lowest eigenvalue $\lambda_1 = \lambda_1\big(L_q^{\alpha^{\rm int},\rm st}(\Gamma)\big)$ coincides with the lowest generalised zero of $\mathbf{M}_\Gamma$;*
- *the lowest eigenvalue $\lambda_1\big(L_q^{\alpha^{\rm int},\rm D}(\Gamma)\big)$ coincides with the lowest singularity of $\mathbf{M}_\Gamma$.*

*The function $\mathbf{M}_\Gamma(\lambda)$ is negative on the interval $(-\infty, \lambda_1)$.*

*Proof* Theorem 4.16 states that the ground state eigenfunction can always be chosen non-zero in the case of delta-couplings and generalised delta-couplings at the vertices. Hence the trace of $\psi_1^{\rm st}$ on the contact set is non-zero. Moreover, the lowest eigenvalues for $L^{\alpha^{\rm int},\rm st}$ and $L^{\alpha^{\rm int},\rm D}$ cannot coincide, since otherwise some linear combination of the corresponding eigenfunctions

$$c\_1 \psi\_1^{\rm st} + c\_2 \psi\_1^{\rm D}$$

would minimise the quadratic form for $L^{\alpha^{\rm int},\rm st}$, which is impossible since the ground state is simple. It follows that $\mathbf{M}_\Gamma(\lambda)$ is regular at $\lambda_1\big(L^{\alpha^{\rm int},\rm st}(\Gamma)\big)$, and therefore its determinant is zero there.

In the case where Dirichlet conditions are introduced at certain vertices, the same Theorem 4.16 states that the ground state eigenfunction can be chosen strictly positive, i.e. it is equal to zero only at the vertices where the Dirichlet conditions are imposed. Let us show that in this case the traces $\partial\psi_1^{\rm D}|_{\partial\Gamma}$ are non-trivial. Assume the opposite, i.e. that $\partial\psi_1^{\rm D}|_{\partial\Gamma} = \vec{0}$. At any contact vertex this would imply that not only the sum, but all normal derivatives are zero, since the function is non-negative. Then on each edge adjacent to the contact vertex the ground state eigenfunction is identically equal to zero, since it satisfies the second-order differential equation $-\psi_1'' + q(x)\psi_1 = \lambda_1\psi_1$ with zero Cauchy data. We get a contradiction to the fact that $\psi_1^{\rm D}$ attains zero only at the Dirichlet vertices.

To complete the proof consider formula (17.26) for $\lambda < \lambda_1^{\rm st}$ to get

$$-\mathbf{M}\_{\Gamma}^{-1}(\lambda) = \sum\_{n=1}^{\infty} \frac{\langle \boldsymbol{\psi}\_{n}^{\mathrm{st}}|\_{\partial \Gamma}, \cdot \rangle\_{\ell\_{2}(\partial \Gamma)} \boldsymbol{\psi}\_{n}^{\mathrm{st}}|\_{\partial \Gamma}}{\lambda\_{n}^{\mathrm{st}} - \lambda} \ge 0$$

as a sum of projectors with positive coefficients. 

Note that we have proven a slightly stronger statement: the vectors $\psi_1^{\rm st}|_{\partial\Gamma}$ and $\partial\psi_1^{\rm D}|_{\partial\Gamma}$ do not have zero components.
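The negativity below the lowest eigenvalue is visible already in the interval example: there $\lambda_1 = 0$ (the lowest Neumann eigenvalue), and for $\lambda = -s^2 < 0$ the closed form $k\tan k$ becomes $-s\tanh s$. A one-line numerical check (the sample points are our choice):

```python
import math

# M_I(lambda) = k tan k with lambda = k^2; for lambda = -s^2 < 0 this
# reads -s tanh s, which is negative, as Theorem 18.7 predicts.
vals = [-math.sqrt(-lam) * math.tanh(math.sqrt(-lam))
        for lam in (-0.5, -2.0, -10.0)]
print(vals)
```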

The Herglotz-Nevanlinna functions considered here belong to the Wigner class, having discrete sets of singularities and generalised zeroes, both tending to $+\infty$. Let us denote the singularities and zeroes, ordered monotonically, by $a_j$ and $b_j$ respectively.

As before we introduce the **energy curves**—the eigenvalue branches of $\mathbf{M}_\Gamma(\lambda)$ on the real axis. The M-function is Hermitian for $\lambda \in \mathbb{R}$, and therefore for every regular $\lambda$ there are precisely $M_\partial$ eigenvalues of $\mathbf{M}_\Gamma(\lambda)$ depending continuously on $\lambda$. It follows from Theorem 18.6 that between the singularities the energy curves are monotonically increasing functions of $\lambda$. It is clear that these curves may cross each other. Understanding the behaviour of the energy curves globally and locally (near the singularities) is our next task.

The structure of Wigner functions close to their singularities can be described by formula (18.10) below. For the proof we use representations (17.26) and (17.37) for simplicity; the statement has a general nature and can be proven without using any explicit formula.

**Lemma 18.8** *Let $a_j$ be a singularity of the M-function $\mathbf{M}_\Gamma(\lambda)$. Then the following representation is valid in a certain complex neighbourhood of $a_j$:*

$$\mathbf{M}_{\Gamma}(\lambda) = \frac{1}{a_j - \lambda} C_j + F(\lambda),\tag{18.10}$$

*where* 

$$C\_j = \sum\_{\lambda\_n = a\_j} \langle \partial \psi\_n^{\mathbf{D}} |\_{\partial \Gamma}, \cdot \rangle\_{\ell\_2(\partial \Gamma)} \partial \psi\_n^{\mathbf{D}} |\_{\partial \Gamma}$$

*is a non-negative Hermitian matrix and F (λ) is analytic in the neighbourhood.* 

*The rank of $C_j$ is equal to the multiplicity $m_{\mathrm{sing}}(a_j)$. The eigenvalue branches $\mu_i(\lambda)$ of $\mathbf{M}_\Gamma(\lambda)$ for real $\lambda$ near $a_j$ can be divided into two classes: branches remaining regular (analytic) at $a_j$, and $m_{\mathrm{sing}}(a_j)$ singular branches satisfying*

$$
\mu_i(\lambda) = \frac{\sigma_i}{a_j - \lambda} + \mathcal{O}(1), \quad \lambda \to a_j,\tag{18.11}
$$

*where $\sigma_i$, $i = 1, 2, \dots, m_{\mathrm{sing}}(a_j)$, are the non-zero eigenvalues of $C_j$ (counted with multiplicities).*

*Proof* Representation (18.10) follows directly from formula (17.37). Consider the $M_\partial \times M_\partial$ matrix-valued function

$$(a\_j - \lambda) \mathbf{M}\_\Gamma(\lambda) = C\_j + (a\_j - \lambda) F(\lambda).$$

It is analytic in the neighbourhood and Hermitian on the real axis; therefore its eigenvalue branches $\tilde{\mu}_i(\lambda)$ can be chosen analytic [283] such that

$$
\tilde{\mu}_i(\lambda) = \sigma_i + \mathcal{O}(a_j - \lambda), \quad \lambda \to a_j, \quad i = 1, 2, \dots, M_\partial,
$$

where $\sigma_i$ are all $M_\partial$ eigenvalues of $C_j$. Therefore the eigenvalue branches $\mu_i(\lambda)$ of $\mathbf{M}_\Gamma(\lambda)$ can be chosen so that $\mu_i(\lambda) = \dfrac{\sigma_i}{a_j - \lambda} + \mathcal{O}(1)$.
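The residue formula (18.10) is easy to test numerically in the simplest situation. The sketch below is our own illustration, not from the book: it takes a single edge $[0,1]$ with both endpoints as contact vertices, and `edge_M` is a helper of ours implementing the well-known $2\times 2$ Dirichlet-to-Neumann matrix of an interval (normal derivatives pointing into the edge).

```python
import numpy as np

# Our own numerical check (not from the book) of the residue formula (18.10)
# for the simplest case: a single edge [0, 1] with both endpoints taken as
# contact vertices.  For -u'' = lambda u, k = sqrt(lambda), the 2x2
# Dirichlet-to-Neumann matrix (normal derivatives pointing into the edge) is
#   M(k) = (k / sin k) [[-cos k, 1], [1, -cos k]].
def edge_M(k):
    return (k / np.sin(k)) * np.array([[-np.cos(k), 1.0],
                                       [1.0, -np.cos(k)]])

# First singularity a_1 = pi^2, Dirichlet ground state sqrt(2) sin(pi x);
# its normal-derivative trace is v = sqrt(2) pi (1, 1), hence
# C_1 = <v, .> v = 2 pi^2 [[1, 1], [1, 1]], a rank-one matrix.
lam = np.pi**2 - 1e-4                       # point close to the singularity
residue = (np.pi**2 - lam) * edge_M(np.sqrt(lam))
C1 = 2 * np.pi**2 * np.array([[1.0, 1.0], [1.0, 1.0]])
print(np.round(residue, 3))                 # approximately C_1
```

The computed residue matches $C_1$ up to the analytic remainder $F(\lambda)$, and $\operatorname{rank} C_1 = 1 = m_{\mathrm{sing}}(\pi^2)$, as the lemma predicts.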

The following geometric lemma is central for our method, since it determines a relation between the number of generalised zeroes and singularities of the M-function to the left of any real point $x$. This result is of general interest and hopefully will be used not only for graph M-functions. The proof presented here can be generalised without difficulty to arbitrary meromorphic operator-valued Herglotz-Nevanlinna functions having no singularities below some $\lambda_0 \in \mathbb{R}$ (for example, functions from the Stieltjes class).

**Lemma 18.9** *Let $\mathbf{M}_\Gamma(\lambda)$ be the M-function for the graph $\Gamma$. Then the number $r(x)$ of generalised zeroes (counted with multiplicities) strictly to the left of any point $x \in \mathbb{R}$ can be calculated using the following formula*

$$r(x) = \sum_{a_j < x} \underbrace{\operatorname{rank} C_j}_{=m_{\mathrm{sing}}(a_j)} + \lim_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}_\Gamma(x - \epsilon) \right\}. \tag{18.12}$$

**Corollary 18.10** *To the left of each $x$ there exist at least $\sum_{a_j < x} m_{\mathrm{sing}}(a_j)$ generalised zeroes of $\mathbf{M}_\Gamma(\lambda)$.*

*Proof* The inverse function $-\mathbf{M}_\Gamma^{-1}(\lambda)$ is also a Herglotz-Nevanlinna function: it is analytic outside the real axis and has non-negative imaginary part in the upper half-plane. The symmetry relation is also satisfied. The singularities of $-\mathbf{M}_\Gamma^{-1}(\lambda)$ may be situated only at the spectrum of the standard Laplacian (see (17.26)) and therefore form a discrete set.

Between the singularities the *energy curves* $\mu_j(\lambda)$ are continuous monotonic functions. Following Lemma 18.8 let us divide the energy curves near any singular point $a_j$ into two classes: the *singular* curves, which tend to infinity at $a_j$ as described by (18.11), and the *regular* curves, which remain bounded near $a_j$.
The singular curves are obviously not monotonic on any interval containing the corresponding singular point. In order to repair this, let us apply a modified arctan map as follows. The function $\arctan x$ is defined up to $\pi n$, $n \in \mathbb{Z}$. In accordance with Theorem 18.7, in the region $\lambda < \lambda_1$ all energy curves are negative and regular; therefore we define $y_j(\lambda) = \arctan \mu_j(\lambda)$ to take values in the interval $(-\pi/2, 0)$. For general real $\lambda$ each curve $\arctan \mu_j(\lambda)$ is defined so as to be continuous and monotone (globally for all $\lambda \in (-\infty, \infty)$): each time $\mu_j$ crosses its singular point (jumping from $+\infty$ down to $-\infty$) we add an extra $+\pi$ to the value of the function $\arctan \mu_j(\lambda)$.

**Fig. 18.1** M-function for the loop, two contact points

To illustrate the introduced transformation we plot in Fig. 18.1 both the original and the transformed eigenvalue curves for the symmetric loop with $\ell_1 = \ell_2 = \pi$ taken from Example 17.5.

The described procedure gives us precisely *M∂* continuous monotonic curves on the *(λ, y)*-(half)-plane

$$\{ -\infty < \lambda < \infty;\ -\pi/2 < y < \infty \}.$$

Note that we do not assume global analyticity of the branches $y_j(\lambda)$. Generalised zeroes of $\mathbf{M}_\Gamma(\lambda)$ correspond to those $\lambda$ for which one of the modified energy curves crosses the horizontal lines $y = \pi n$, $n = 0, 1, 2, \dots$. The horizontal lines $y = \pi/2 + \pi n$, $n = 0, 1, 2, \dots$, correspond to the singularities of $\mathbf{M}_\Gamma(\lambda)$.

Consider any nonsingular point $x \neq a_j$, $j = 1, 2, \dots$. The branch $y_j$ crosses the horizontal lines $y = \pi n$, $n = 0, 1, 2, \dots$, on the interval $(-\infty, x]$ precisely $\left\lfloor \frac{y_j(x)}{\pi} \right\rfloor + 1$ times, where $\lfloor y \rfloor$ denotes the integer part, i.e. the largest integer not greater than $y$. Hence the total number of crossings is $M_\partial + \sum_{j=1}^{M_\partial} \left\lfloor \frac{y_j(x)}{\pi} \right\rfloor$.

The obtained formula is possibly hard to apply, since to calculate $r(x)$ one needs to know the *history* of all energy curves $y_j$ for $\lambda < x$. To derive a more explicit formula, let us note that by monotonicity each energy curve crosses the lines $y = \pi n$ on the interval $(-\infty, x)$ at least as many times as the lines $y = \pi/2 + \pi n$. The difference is at most equal to one and is different from zero if and only if $\mu_j(x) > 0$. Hence the total number of generalised zeroes to the left of $x$ is equal to the number of singular points to the left of $x$, counted by summing up the ranks of the matrices $C_j$, plus the number of positive eigenvalues of $\mathbf{M}_\Gamma(x)$. We obtain formula (18.12) for nonsingular points $x$.

To prove the formula for all $x \in \mathbb{R}$ it remains to consider the case where $x = a_{j_0}$ for a certain $j_0$. Take any nonsingular point $s < x$ close enough to $x$ so that, in particular, no other singular point or zero lies between $s$ and $x$. Then obviously $r(s) = r(x)$ and we may use (18.12) in the regular situation. We get formula (18.12) in the limit $s \to a_{j_0} - 0$.
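The counting procedure in this proof can be traced numerically. The sketch below is our own (not the book's code): the two branch formulas are computed by hand for the symmetric loop with $\ell_1 = \ell_2 = \pi$ and two contact vertices from Example 17.5, the modified arctan map is applied to both energy curves, and the resulting crossing count is compared with the right-hand side of (18.12) at a regular point $x = 2.489^2$.

```python
import numpy as np

# Our own sketch of the counting procedure behind Lemma 18.9, for the
# symmetric loop with two contact vertices and both edge lengths equal to pi
# (Example 17.5).  The two energy curves of M(lambda), lambda = k^2,
# computed by hand for this graph, are
#   mu_plus(k)  =  2 k tan(k pi / 2)   (eigenvector (1, 1)),
#   mu_minus(k) = -2 k cot(k pi / 2)   (eigenvector (1, -1)).
k = np.arange(1e-3, 2.49, 1e-3)
mu_plus = 2 * k * np.tan(k * np.pi / 2)
mu_minus = -2 * k / np.tan(k * np.pi / 2)

def modified_arctan(mu):
    """Continuous monotone branch: add +pi each time mu jumps from +inf to -inf."""
    y = np.arctan(mu)
    jumps = (np.diff(y) < -1.5)        # a drop by ~pi marks a singularity
    y_mod = y + np.pi * np.concatenate(([0], np.cumsum(jumps)))
    return y_mod, int(jumps.sum())

zeros = 0      # crossings of the lines y = pi n, n >= 0 (generalised zeroes)
sing = 0       # singularities passed (lines y = pi/2 + pi n)
positive = 0   # positive eigenvalues of M at the right endpoint x = k[-1]**2
for mu in (mu_plus, mu_minus):
    y, s = modified_arctan(mu)
    zeros += int(np.floor(y[-1] / np.pi)) + 1
    sing += s
    positive += int(mu[-1] > 0)

# Lemma 18.9: r(x) = (sum of rank C_j over a_j < x) + # positive eigenvalues.
print(zeros, sing + positive)
```

Both counts agree (here they equal three, corresponding to the generalised zeroes at $k = 0, 1, 2$), matching formula (18.12).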

This lemma in particular gives us an effective tool to estimate a few of the lowest detectable, or **visible eigenvalues**, i.e. the eigenvalues that are generalised zeroes of the M-function. For example, in the case of the standard Laplace operator on $\Gamma$ we have:


Most of the statements are direct corollaries of formulas (17.26) and (17.37). One needs to take into account that the standard Laplacian is non-negative with the unique eigenfunction $\psi_1(x) \equiv 1$ corresponding to the lowest eigenvalue $\lambda_1 = 0$, provided the graph is connected. This eigenfunction has the trace $\psi_1|_{\partial \Gamma} = (1, 1, \dots, 1)$ and can be seen from $\mathbf{M}_\Gamma$.

## **18.2 Gluing Procedure and the Spectral Gap**

In this section we apply the developed theory of M-functions to study the behaviour of the spectrum when two graphs are glued together. It turns out that we are able to fully analyse the behaviour of the spectral gap, the difference between the two lowest eigenvalues. This question has already been discussed in Sect. 12.5, but the methods used there (just estimates involving the Rayleigh quotient) are too rough to give a precise answer to the question of what happens to the spectral gap. The answer given there is exhaustive only in the case when the graphs are glued at one vertex. How complicated this question can be in the case of gluing at two vertices can be seen from Examples 7.12 and 7.13, where we analysed the spectral gap behaviour under adding a single interval to the simplest graphs formed by one and two edges. If the original graphs were more complicated, no precise analysis could be carried out. A precise answer can be given only by analysing the structure of the M-functions associated with the glued parts. Our discussion will be restricted to the case of standard Laplacians, but all ideas can be easily modified to cover more general Schrödinger operators with delta or generalised delta-couplings at the vertices.

Consider two metric graphs $\Gamma_1$ and $\Gamma_2$. Pick two subsets of vertices $\partial \Gamma_j = \{V^{m(j)}\}_{m=1}^{M_\partial}$ of the same size. Then the **glued graph** $\Gamma$ is the union of the original graphs $\Gamma_1 \cup \Gamma_2$ with the vertices belonging to $\partial \Gamma_1$ and $\partial \Gamma_2$ identified pairwise. We are going to use the following notation

$$
\Gamma = \Gamma\_1 \sqcup\_{\partial} \Gamma\_2,
$$

assuming that the boundaries of the original graphs and the way they are paired together are fixed.

The following operators will play an important role in this section: the standard Laplacians $L^{\mathrm{st}}(\Gamma)$ and $L^{\mathrm{st}}(\Gamma_j)$, and the Dirichlet Laplacians $L^{\mathrm{D}}(\Gamma_j)$ on $\Gamma_j$, with Dirichlet conditions imposed at the contact vertices from $\partial \Gamma_j$ and standard conditions at all other vertices.
We shall warm up by proving the following elementary theorem using two different methods: via the Rayleigh quotient (as in Sect. 12.5) and via the graphs' M-functions.

**Theorem 18.11** *The first nontrivial (i.e. non-zero) eigenvalue $\lambda_2(\Gamma)$ of the standard Laplacian on $\Gamma$ cannot exceed the second lowest point in the joint spectrum (counted with multiplicities) $\sigma(L^{\mathrm{D}}(\Gamma_1)) \cup \sigma(L^{\mathrm{D}}(\Gamma_2))$ of the Dirichlet operators on $\Gamma_1$ and $\Gamma_2$:*

$$\lambda\_2(\Gamma) \le \min \left\{ \max \{ \lambda\_1^{\mathcal{D}}(\Gamma\_1), \lambda\_1^{\mathcal{D}}(\Gamma\_2) \}, \min \{ \lambda\_2^{\mathcal{D}}(\Gamma\_1), \lambda\_2^{\mathcal{D}}(\Gamma\_2) \} \right\}. \tag{18.13}$$

*The inequality is strict unless $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$.*

*Proof Using Rayleigh Quotient* The first non-trivial eigenvalue is given as the minimum of the Rayleigh quotient

$$\lambda_2(\Gamma) = \min_{u \perp 1} \frac{\langle L u, u \rangle}{\|u\|^2}. \tag{18.14}$$

The lowest points in $\sigma(L^{\mathrm{D}}(\Gamma_1)) \cup \sigma(L^{\mathrm{D}}(\Gamma_2))$ are either the two Dirichlet ground states, or the two lowest eigenvalues of one of the Dirichlet operators, say on $\Gamma_1$. Let us denote the corresponding eigenfunctions by $\psi^1$ and $\psi^2$, extending them by zero to the whole of $\Gamma$. These functions are orthogonal, $\psi^1 \perp \psi^2$: either they have disjoint supports, or they are eigenfunctions of the same self-adjoint operator. The corresponding eigenvalues will be denoted by $\lambda^1$ and $\lambda^2$ respectively. Consider the function $\psi$ given as a non-trivial linear combination of the introduced functions

$$
\psi = \alpha \psi^1 + \beta \psi^2.
$$

It is clear that the coefficients *α* and *β* can be found to satisfy the orthogonality condition *ψ* ⊥ 1. For any such values of the parameters we have

$$\begin{split} \lambda\_2(\Gamma) &\leq \frac{\langle L\psi,\psi\rangle}{\|\psi\|^2} = \frac{\lambda^1|\alpha|^2 + \lambda^2|\beta|^2}{|\alpha|^2 + |\beta|^2} \\ &\leq \min\left\{ \max\{\lambda\_1^{\mathrm{D}}(\Gamma\_j)\}\_{j=1}^2, \min\{\lambda\_2^{\mathrm{D}}(\Gamma\_j)\}\_{j=1}^2 \right\}, \end{split}$$

where we assumed that *ψ*1*,*<sup>2</sup> are normalised.

The equality may occur only if $\lambda^1 = \lambda^2$. Since we already know that the ground state is not degenerate, $\lambda_1^{\mathrm{D}}(\Gamma_j) < \lambda_2^{\mathrm{D}}(\Gamma_j)$, the equality may occur only if $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$.

*Proof Using M-Functions* One may divide the proof into two parts, showing that $\lambda_2(\Gamma)$ does not exceed $\max\{\lambda_1^{\mathrm{D}}(\Gamma_j)\}_{j=1}^2$ and $\min\{\lambda_2^{\mathrm{D}}(\Gamma_j)\}_{j=1}^2$ separately, but there is no need for this. Consider any regular point $x$ to the right of the right-hand side of (18.13). In both cases there are at least two singular points of $\mathbf{M}_\Gamma(\lambda)$ to the left of $x$: either the two Dirichlet ground state energies $\lambda_1^{\mathrm{D}}(\Gamma_1)$ and $\lambda_1^{\mathrm{D}}(\Gamma_2)$, or the two lowest Dirichlet eigenvalues of one of the graphs.
Since the number of positive eigenvalues is a non-negative integer, Lemma 18.9 implies that $r(x) \geq 2$. We have proven the non-strict inequality in (18.13).

If $\lambda_1^{\mathrm{D}}(\Gamma_1) \neq \lambda_1^{\mathrm{D}}(\Gamma_2)$, then the lowest singular points for $\mathbf{M}_\Gamma$ are distinct; let us denote them by $\lambda^1 < \lambda^2$. Then for a regular point $\lambda^2 - \epsilon$ with sufficiently small $\epsilon > 0$ there is just one singular point to the left, and the M-matrix has at least one positive eigenvalue, since $\mathbf{M}_\Gamma$ has a generalised pole at $\lambda^2$. Lemma 18.9 implies that there are at least two zeroes to the left, i.e. the inequality is strict.
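Estimate (18.13) can be illustrated numerically. The sketch below is our own and uses the example treated later in Sect. 18.2.1: intervals of lengths $1$ and $1/2$ glued at both endpoints, so that $\Gamma$ is a cycle of total length $3/2$, whose standard spectrum is $(2\pi n/\mathcal{L})^2$.

```python
import numpy as np

# Our own numerical illustration of (18.13) for intervals of lengths 1 and
# 1/2 glued at both endpoints (the example of Sect. 18.2.1).
# Dirichlet spectrum of an interval of length l: (n pi / l)^2, n = 1, 2, ...
lam_D = lambda l, n: (n * np.pi / l) ** 2

l1, l2 = 1.0, 0.5
# Second lowest point of the joint Dirichlet spectrum: the right-hand side
# of (18.13).
rhs = min(max(lam_D(l1, 1), lam_D(l2, 1)),
          min(lam_D(l1, 2), lam_D(l2, 2)))

# lambda_2 of the glued graph: the cycle of total length 3/2.
lam2 = (2 * np.pi / (l1 + l2)) ** 2
print(lam2 < rhs)   # strict inequality, as the Dirichlet ground states differ
```

Here $\lambda_1^{\mathrm{D}}(\Gamma_1) = \pi^2 \neq 4\pi^2 = \lambda_1^{\mathrm{D}}(\Gamma_2)$, so the theorem predicts a strict inequality, confirmed by the computation.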

We are interested in the behaviour of the spectral gap under the gluing procedure. As we know, the eigenvalues of a quantum graph are inversely proportional to the square of the total length. Hence it is natural to expect that the spectral gap decreases under gluing, since the total length of the glued graph is obviously larger than the length of each of the original graphs. It turns out that this is not always the case; therefore let us investigate under which conditions the spectral gap becomes larger under gluing.

The answer to this question can be given in terms of the corresponding M-functions. Let us first note that the M-function for the glued graph is just equal to the sum of the M-functions associated with the original parts:

$$M\_{\Gamma\_1 \sqcup\_\partial \Gamma\_2}(\lambda) = M\_{\Gamma\_1}(\lambda) + M\_{\Gamma\_2}(\lambda),\tag{18.15}$$

provided, of course, that $\partial \Gamma$ is precisely the set of glued-together vertices. This formula holds only because we assume standard vertex conditions at the vertices of the glued graph; therefore one should be careful when conditions other than standard are introduced on $\partial \Gamma$.

It follows that all singularities remain and their multiplicities are just the sums of the multiplicities of the detectable eigenfunctions of the parts. Therefore one may easily calculate the first sum in (18.12). The generalised zeroes, on the contrary, are not preserved. The generalised zero $\lambda_1$ is preserved if and only if the traces of the two eigenfunctions on $\Gamma_1$ and $\Gamma_2$ are parallel:

$$
\psi^{\Gamma\_1}(\lambda\_1)|\_{\partial \Gamma\_1} \parallel \psi^{\Gamma\_2}(\lambda\_1)|\_{\partial \Gamma\_2}.
$$

**Lemma 18.12** *The eigenvalues of the standard Laplacian on $\Gamma = \Gamma_1 \sqcup_\partial \Gamma_2$ situated below the ground states of both Dirichlet Laplacians on $\Gamma_1$ and $\Gamma_2$ are always visible from $\mathbf{M}_\Gamma$.*

*Proof* Assume that $\lambda_j^{\mathrm{st}}(\Gamma) < \lambda_1^{\mathrm{D}}(\Gamma_1), \lambda_1^{\mathrm{D}}(\Gamma_2)$. The M-function is regular at the point $\lambda_j^{\mathrm{st}}(\Gamma)$, since it lies to the left of the spectra of the Dirichlet Laplacians. Hence the eigenvalue is invisible only if the trace of the corresponding eigenfunction on $\partial \Gamma$ is identically zero. This would imply in particular that $\lambda_j^{\mathrm{st}}$ belongs to the spectrum of the Dirichlet operator on $\Gamma_1$ or $\Gamma_2$, which is impossible since $\lambda_j^{\mathrm{st}}$ is situated below the ground states.

**Theorem 18.13** *Let $\Gamma = \Gamma_1 \sqcup_\partial \Gamma_2$ be the metric graph obtained by gluing together two finite compact graphs $\Gamma_1$ and $\Gamma_2$. The spectral gap for the standard Laplacian on $\Gamma$ is less than the ground state energies of the Dirichlet Laplacians on $\Gamma_1$ and $\Gamma_2$ if and only if the M-function immediately to the left of $\lambda = \min\{\lambda_1^{\mathrm{D}}(\Gamma_1), \lambda_1^{\mathrm{D}}(\Gamma_2)\}$ has at least two positive eigenvalues, i.e.*

$$
\lambda\_2(\Gamma) < \min \{ \lambda\_1^D(\Gamma\_1), \lambda\_1^D(\Gamma\_2) \},
$$

$\Leftrightarrow \quad \mathbf{M}_\Gamma\big(\min\{\lambda_1^{\mathrm{D}}(\Gamma_1), \lambda_1^{\mathrm{D}}(\Gamma_2)\} - \epsilon\big)$ *has at least* 2 *positive eigenvalues*

*for sufficiently small $\epsilon > 0$.*

*Proof* Every eigenvalue of $L^{\mathrm{st}}(\Gamma)$ below $\lambda_1^{\mathrm{D}}(\Gamma_1)$ and $\lambda_1^{\mathrm{D}}(\Gamma_2)$ is visible (Lemma 18.12). Consider any point $\min\{\lambda_1^{\mathrm{D}}(\Gamma_1), \lambda_1^{\mathrm{D}}(\Gamma_2)\} - \epsilon$ for sufficiently small $\epsilon > 0$. Obviously there are no singular points to the left, and Lemma 18.9 implies that there are two zeroes there. It follows that for all such $\epsilon$ the M-function should have two positive eigenvalues.

The following theorem provides the answer to our main question: under which conditions does the spectral gap increase under the gluing procedure?

**Theorem 18.14** *Consider the standard Laplace operators on two compact finite metric graphs $\Gamma_1, \Gamma_2$, and on the glued graph $\Gamma = \Gamma_1 \sqcup_\partial \Gamma_2$. The spectral gap does not decrease under the gluing procedure:*

$$
\lambda\_2(\Gamma) \ge \min\_j \{ \lambda\_2(\Gamma\_j) \}, \tag{18.16}
$$

*if and only if one of the following conditions is satisfied* 

*(1)* $\min_j \{\lambda_2(\Gamma_j)\} \leq \min_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$ *and*

$$\lim\_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}\_{\Gamma}(\min\_{j} \{ \lambda\_2(\Gamma\_j) \} - \epsilon) \right\} = 1; \tag{18.17}$$

*(2)* $\min_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\} < \min_j \{\lambda_2(\Gamma_j)\} < \max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$ *and*

$$\lim_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}_{\Gamma}(\min_{j} \{ \lambda_2(\Gamma_j) \} - \epsilon) \right\} = 0;\tag{18.18}$$

*(3)* $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2) = \min_j \{\lambda_2(\Gamma_j)\}$ *and*

$$\lim_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}_{\Gamma}(\min_{j} \{ \lambda_2(\Gamma_j) \} - \epsilon) \right\} = 1. \tag{18.19}$$

*Proof* To prove the theorem one needs to consider the following four cases, covering all possibilities in terms of the spectra of the standard and Dirichlet Laplacians on $\Gamma_j$. The theorem is proved by contradiction in most of the cases.

(1) $\min_j \{\lambda_2(\Gamma_j)\} \leq \min_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$.

Every eigenvalue $\lambda_j(\Gamma)$ which is less than $\min_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$ is visible due to Lemma 18.12. Therefore every eigenvalue below $\min_j \{\lambda_2(\Gamma_j)\}$ is visible.

Consider any regular point $\min_j \{\lambda_2(\Gamma_j)\} - \epsilon$, $0 < \epsilon \ll 1$, and apply Lemma 18.9. The root counting function for $\mathbf{M}_\Gamma(\lambda)$ is equal to one, $r(\min_j \{\lambda_2(\Gamma_j)\} - \epsilon) = 1$, if and only if the only zero to the left is the point $\lambda = 0$, since there are no singular points there. In other words, (18.16) holds if and only if $\mathbf{M}_\Gamma(\min_j \{\lambda_2(\Gamma_j)\} - \epsilon)$ has just one positive eigenvalue for sufficiently small $\epsilon$, implying (18.17).

(2) $\min_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\} < \min_j \{\lambda_2(\Gamma_j)\} < \max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$. Without loss of generality assume:

$$
\lambda\_1^{\mathcal{D}}(\Gamma\_1) < \min\_j \{ \lambda\_2(\Gamma\_j) \} < \lambda\_1^{\mathcal{D}}(\Gamma\_2).
$$

We apply Lemma 18.9 to the point $\min_j \{\lambda_2(\Gamma_j)\} - \epsilon$. There is one singular point, $\lambda_1^{\mathrm{D}}(\Gamma_1)$, to the left, with multiplicity one (see Theorem 4.16). There is no other singular point in that region, since otherwise $\lambda_2^{\mathrm{D}}(\Gamma_1) < \lambda_2(\Gamma_1)$, which contradicts the operator inequality $L^{\mathrm{D}}(\Gamma_1) \geq L(\Gamma_1)$.

Assume that $\lambda_2(\Gamma)$ is visible. Then $r(\min_j \{\lambda_2(\Gamma_j)\} - \epsilon) = 1$ if and only if $\mathbf{M}_\Gamma(\min_j \{\lambda_2(\Gamma_j)\} - \epsilon)$ has no positive eigenvalues for sufficiently small $\epsilon$, i.e. it is strictly negative. We get condition (18.18).

If $\lambda_2(\Gamma)$ is *invisible*, then it cannot be less than $\min_j \{\lambda_2(\Gamma_j)\}$. Indeed, assume the opposite. Then the corresponding non-zero eigenfunction satisfies Dirichlet conditions on $\partial \Gamma$. Its restriction to $\Gamma_2$ is identically equal to zero, since otherwise it would be a Dirichlet eigenfunction on $\Gamma_2$ with energy less than $\lambda_1^{\mathrm{D}}(\Gamma_2)$. Consider the restriction of the eigenfunction to $\Gamma_1$: it is not identically zero and satisfies both Dirichlet and standard conditions on $\partial \Gamma_1$, and the corresponding non-zero eigenvalue belongs to the spectrum of $L(\Gamma_1)$, which is impossible below $\min_j \{\lambda_2(\Gamma_j)\}$. Condition (18.18) is again satisfied.

(3) $\max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\} = \min_j \{\lambda_2(\Gamma_j)\}$.

If $\lambda_1^{\mathrm{D}}(\Gamma_1) \neq \lambda_1^{\mathrm{D}}(\Gamma_2)$, then there exist at least two zeroes of the M-function to the left of $\max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$, and therefore due to our assumption the spectral gap decreases; hence this case should be excluded. It remains to study the case where

$$
\lambda\_1^{\mathcal{D}}(\Gamma\_1) = \lambda\_1^{\mathcal{D}}(\Gamma\_2) = \min\_j \{ \lambda\_2(\Gamma\_j) \}.
$$

Every eigenvalue of $L(\Gamma)$ to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$ is visible (Lemma 18.12). Hence there are no positive zeroes of the M-function to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$ if and only if $\mathbf{M}_\Gamma(\min_j \{\lambda_2(\Gamma_j)\} - \epsilon)$ has precisely one positive eigenvalue for all sufficiently small $\epsilon > 0$. We get condition (18.19).

(4) $\max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\} < \min_j \{\lambda_2(\Gamma_j)\}$. If $\lambda_1^{\mathrm{D}}(\Gamma_1) \neq \lambda_1^{\mathrm{D}}(\Gamma_2)$, then there exist at least two zeroes of the M-function to the left of $\max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$ and the spectral gap decreases. Indeed, in this case Theorem 12.11 implies that $L(\Gamma)$, in addition to $\lambda = 0$, has at least one further eigenvalue less than or equal to $\max_j \{\lambda_1^{\mathrm{D}}(\Gamma_j)\}$, implying diminishing of the spectral gap.

The same proof applies if $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$ and the matrices $C_1^{\Gamma_1}$ and $C_1^{\Gamma_2}$ for $\Gamma_1$ and $\Gamma_2$ in the representation (18.10) are not proportional.

In the doubly degenerate case, where $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$ and the matrices $C_1^{\Gamma_1}$ and $C_1^{\Gamma_2}$ are proportional, Lemma 18.9 cannot be applied due to the invisibility of $\lambda_2(\Gamma)$. However, considering as in the proof of Theorem 12.11 a linear combination of the ground states of the Dirichlet operators on $\Gamma_1$ and $\Gamma_2$, extended by zero, one obtains a trial function on $\Gamma$ orthogonal to the constant function with Rayleigh quotient less than or equal to $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$. This is less than $\min_j \{\lambda_2(\Gamma_j)\}$, implying that the spectral gap always decreases.

 

The result just proven can be formulated in a more transparent way assuming, by symmetry *almost* without loss of generality, that

$$
\lambda\_1^D(\Gamma\_1) < \lambda\_1^D(\Gamma\_2). \tag{18.20}
$$

**Corollary 18.15** *Assume (18.20); then the following holds:*

• *if* $\min_j \{\lambda_2(\Gamma_j)\} < \lambda_1^{\mathrm{D}}(\Gamma_1)$*, then*

$$\lambda_2(\Gamma) > \min_j \{\lambda_2(\Gamma_j)\} \Leftrightarrow \mathbf{M}_\Gamma(\min_j \{\lambda_2(\Gamma_j)\}) \text{ has exactly one positive eigenvalue}; \tag{18.21}$$

• *if* $\lambda_1^{\mathrm{D}}(\Gamma_1) < \min_j \{\lambda_2(\Gamma_j)\} < \lambda_1^{\mathrm{D}}(\Gamma_2)$*, then*

$$
\lambda\_2(\Gamma) > \min\_j \{ \lambda\_2(\Gamma\_j) \} \Leftrightarrow \mathbf{M}\_\Gamma(\min\_j \{ \lambda\_2(\Gamma\_j) \}) < 0; \tag{18.22}
$$

• *if* $\lambda_1^{\mathrm{D}}(\Gamma_2) < \min_j \{\lambda_2(\Gamma_j)\}$*, then*

$$
\lambda\_2(\Gamma) < \min\_j \{ \lambda\_2(\Gamma\_j) \}. \tag{18.23}
$$

The above Corollary is just a reformulation of Theorem 18.14 (and Theorem 12.11) ignoring the border cases. Below we present further implications of this theorem. All these statements can be proven using Lemma 18.9.

**Theorem 18.16** *Consider the graph $\Gamma = \Gamma_1 \sqcup_\partial \Gamma_2$ obtained by gluing together two compact finite graphs $\Gamma_1$ and $\Gamma_2$. Assume in addition (18.20); then the spectral gap of $\Gamma$ lies between the ground states of the Dirichlet Laplacians on $\Gamma_1$ and $\Gamma_2$ if and only if the M-function is negative immediately to the right of the lowest Dirichlet ground state (namely $\lambda_1^{\mathrm{D}}(\Gamma_1)$):*

$$
\lambda\_1^{\mathcal{D}}(\Gamma\_1) < \lambda\_2(\Gamma) < \lambda\_1^{\mathcal{D}}(\Gamma\_2)
$$

$$
\Leftrightarrow \begin{cases}
\lim\_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}\_{\Gamma}(\lambda\_1^{\mathcal{D}}(\Gamma\_1) - \epsilon) \right\} = 1 \\
\text{and } \lambda\_1^{\mathcal{D}}(\Gamma\_1) \text{ is not a generalised zero}
\end{cases}
$$

$$
\Leftrightarrow \lim\_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}\_{\Gamma}(\lambda\_1^{\mathcal{D}}(\Gamma\_1) + \epsilon) \right\} = 0.
$$

*Proof* In this case $\lambda_1^{\mathrm{D}}(\Gamma_1)$ is a simple singularity of $\mathbf{M}_\Gamma(\lambda)$. We apply Lemma 18.9 to the point $\lambda_1^{\mathrm{D}}(\Gamma_1) + \epsilon$ for sufficiently small $\epsilon > 0$. There exists just one singularity to the left, with $\operatorname{rank} C_1 = 1$; hence the number of zeroes is also equal to one, i.e. $\lambda_2(\Gamma) > \lambda_1^{\mathrm{D}}(\Gamma_1)$, if and only if the M-matrix is negative at this point. Remember that in accordance with Lemma 18.12 all eigenvalues below $\lambda_1^{\mathrm{D}}(\Gamma_1)$ are visible. On the other hand, Theorem 12.11 implies that $\lambda_2(\Gamma) < \lambda_1^{\mathrm{D}}(\Gamma_2)$.
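Theorem 18.16 can be checked numerically on the glued-segments example of Sect. 18.2.1. The sketch below is our own; `edge_M` is a helper of ours implementing the well-known $2\times 2$ Dirichlet-to-Neumann matrix of an interval with both endpoints as contact vertices (inward normal derivatives), and by (18.15) the M-function of the glued graph is the sum over the two parts.

```python
import numpy as np

# Our own sketch checking Theorem 18.16 on the example of Sect. 18.2.1:
# intervals of lengths 1 and 1/2 glued at both endpoints.
# edge_M: 2x2 Dirichlet-to-Neumann matrix of an interval of length ell with
# both endpoints as contact vertices (inward normal derivatives).
def edge_M(k, ell):
    return (k / np.sin(k * ell)) * np.array([[-np.cos(k * ell), 1.0],
                                             [1.0, -np.cos(k * ell)]])

# By (18.15) the M-function of the glued graph is the sum over the two parts.
# Here lambda_1^D(Gamma_1) = pi^2 < lambda_2(Gamma) < 4 pi^2 = lambda_1^D(Gamma_2),
# so M should be negative immediately to the right of pi^2.
k = np.pi + 0.01                  # just to the right of the first Dirichlet level
M = edge_M(k, 1.0) + edge_M(k, 0.5)
print(np.linalg.eigvalsh(M))      # both eigenvalues negative
```

Both eigenvalues come out negative, in agreement with the second equivalent condition of the theorem.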

The following theorem is also a straightforward corollary of our result:

**Theorem 18.17** *Assume (18.20) and that $\lambda_2(\Gamma)$ is visible; then*

$$\lambda\_2(\Gamma) = \lambda\_1^{\mathrm{D}}(\Gamma\_1) \Leftrightarrow \begin{cases} \lim\_{\epsilon \searrow 0} \# \left\{ \operatorname{positive} \operatorname{eigenvalues of } \mathbf{M}\_{\Gamma}(\lambda\_1^{\mathrm{D}}(\Gamma\_1) - \epsilon) \right\} = 1; \\ \lim\_{\epsilon \searrow 0} \# \left\{ \operatorname{positive} \operatorname{eigenvalues of } \mathbf{M}\_{\Gamma}(\lambda\_1^{\mathrm{D}}(\Gamma\_1) + \epsilon) \right\} = 1. \end{cases}$$

*Proof* We base our proof on applying Lemma 18.9 to the points $\lambda_1^{\mathrm{D}}(\Gamma_1) \pm \epsilon$ for sufficiently small $\epsilon > 0$. The point $\lambda_1^{\mathrm{D}}(\Gamma_1)$ is the first nontrivial eigenvalue of $L(\Gamma)$ if and only if there is just one zero of $\mathbf{M}_\Gamma$ to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) - \epsilon$ and there are two zeroes to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) + \epsilon$, for any sufficiently small positive $\epsilon$. Taking into account that there are no singular points to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) - \epsilon$ and precisely one singular point to the left of $\lambda_1^{\mathrm{D}}(\Gamma_1) + \epsilon$, we use Lemma 18.9 to conclude that $\mathbf{M}_\Gamma(\lambda_1^{\mathrm{D}}(\Gamma_1) \pm \epsilon)$ should have precisely one positive eigenvalue.

Consider now the degenerate case.

**Theorem 18.18** *Assume $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$; then*

$$
\lambda\_2(\Gamma) \le \lambda\_1^{\mathcal{D}}(\Gamma\_1) = \lambda\_1^{\mathcal{D}}(\Gamma\_2).
$$

*Consider the matrix $C$ describing the singularity of $\mathbf{M}_\Gamma(\lambda)$ at $\lambda = \lambda_1^{\mathrm{D}}(\Gamma_{1,2})$. The inequality is strict,*

$$
\lambda\_2(\Gamma) < \lambda\_1^D(\Gamma\_{1,2})
$$

*if and only if one of the following two conditions is satisfied:*

*(1)* $\operatorname{rank} C = 2$;

*(2)* $\operatorname{rank} C = 1$ *and*

$$\lim_{\epsilon \searrow 0} \# \left\{ \text{positive eigenvalues of } \mathbf{M}_{\Gamma}(\lambda_{1}^{\mathrm{D}} - \epsilon) \right\} \ge 2. \tag{18.24}$$

Note that the cases *(1)* and *(2)* can be joined together by just requiring that (18.24) holds. This condition is always satisfied in the case *(1)*.

*Proof* The first inequality is just a special case of Theorem 12.11 already proven using two different techniques.

The matrix $C$ is the sum of the matrices $C_1^{\Gamma_1}$ and $C_1^{\Gamma_2}$ describing the singularities of $\mathbf{M}_{\Gamma_1}$ and $\mathbf{M}_{\Gamma_2}$, each having rank one. Hence the matrix $C$ may have either rank two or rank one; it cannot have rank zero, since $C_1^{\Gamma_1}$ and $C_1^{\Gamma_2}$ are non-trivial.

Assume first that $\operatorname{rank} C = 2$. Consider any point $\lambda_1^{\mathrm{D}}(\Gamma_{1,2}) - \epsilon$, $0 < \epsilon \ll 1$. The matrix $\mathbf{M}_\Gamma(\lambda_1^{\mathrm{D}}(\Gamma_{1,2}) - \epsilon)$ is dominated by the term $\frac{1}{\epsilon} C$ and therefore has at least two positive eigenvalues. Then Lemma 18.9 implies that there are at least two zeroes to the left of $\lambda_1^{\mathrm{D}}(\Gamma_{1,2}) - \epsilon$.

Assume now that $\operatorname{rank} C = 1$ and again consider any point $\lambda_1^{\mathrm{D}}(\Gamma_{1,2}) - \epsilon$, $0 < \epsilon \ll 1$. There are at least two zeroes to the left if and only if the matrix $\mathbf{M}_\Gamma(\lambda_1^{\mathrm{D}}(\Gamma_{1,2}) - \epsilon)$ has at least two positive eigenvalues for sufficiently small $\epsilon$. The converse statement follows from Lemma 18.12.

We denote by $\psi_1^{\mathrm{D}, \Gamma_j}$ the Dirichlet ground states for the graphs $\Gamma_j$. Then the matrices $C_1^{\Gamma_1}$ and $C_1^{\Gamma_2}$ are the projectors onto the vectors of normal derivatives

$$
\partial \psi\_1^{\mathbf{D}, \Gamma\_1} \vert\_{\partial \Gamma\_1} \quad \text{and} \quad \partial \psi\_1^{\mathbf{D}, \Gamma\_2} \vert\_{\partial \Gamma\_2}.
$$

The rank of the matrix $C = C_1^{\Gamma_1} + C_1^{\Gamma_2}$ is one if and only if the vectors of normal derivatives are proportional. In that case one may construct an eigenfunction of $L(\Gamma)$ by taking a linear combination of the Dirichlet ground states on $\Gamma_1$ and $\Gamma_2$. The corresponding eigenvalue is $\lambda_1^{\mathrm{D}}(\Gamma_1) = \lambda_1^{\mathrm{D}}(\Gamma_2)$, and it is the third eigenvalue of $L(\Gamma)$ if and only if $\mathbf{M}_\Gamma(\lambda)$ has two positive eigenvalues immediately to the left of this point.

The obtained conditions may appear hard to check in concrete examples; however, our goal was to explain the reason why the spectral gap may grow under gluing. The answer is given using the M-function, which appears to be the most natural object for the studied problem. Note that checking the conditions on the M-functions does not require calculation of their spectra, which might be a complicated computational problem: it is enough to determine the number of positive or negative eigenvalues, which can be done using quadratic form techniques.

It might be interesting to analyse what happens when more than two graphs are glued together. The developed methods can also be applied to investigate higher eigenvalues. It might also be interesting to combine the obtained results with the estimates for the negative eigenvalues obtained in [66].

## *18.2.1 Examples*

**Gluing Two Segments** This is an illustration of case (1) in Theorems 18.13 and 18.14. We return to Example 7.12. Consider two segments of lengths $a = 1$ (the graph $\Gamma_1$) and $b = 0.5$ (the graph $\Gamma_2$) joined together as shown in Fig. 18.2.

The corresponding M-functions are plotted in Fig. 18.3 with the variable $k = \operatorname{sign}(\lambda)\sqrt{|\lambda|}$ on the horizontal axis. The lengths are adjusted so that:

$$
\lambda_2(\Gamma_1) = \lambda_1^D(\Gamma_1) = \pi^2, \qquad \lambda_2(\Gamma_2) = \lambda_1^D(\Gamma_2) = (2\pi)^2.
$$

This example is degenerate in the following sense: the matrices $\mathbf{M}_1$ and $\mathbf{M}_2$ share the same set of eigenvectors $(1, 1)$ and $(1, -1)$. As a result the matrix $\mathbf{M}$ has the same eigenvectors and its energy curves are obtained by summing the energy curves for $\mathbf{M}_j$, $j = 1, 2$. It is clear that the first zero of the blue energy curve corresponding to the eigenvector $(1, -1)$ shifts to the right independently of the actual lengths of the edges.

We check that just to the left of the point $\pi^2 = \lambda_2(\Gamma_1)$ the function $\mathbf{M}$ has one negative eigenvalue, implying

$$\underbrace{\lambda_2(\Gamma)}_{=(4\pi/3)^2} > \underbrace{\lambda_2(\Gamma_1)}_{=\pi^2}.$$

Note that in order to make our conclusion we needed to look at the energy curves—no information about the eigenvectors was needed. Our conclusion is not surprising since the cycle graph has higher connectivity than the segments.
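The energy-curve picture can be reproduced numerically. The sketch below assumes the standard two-contact M-function of an interval of length $\ell$ for the Laplacian with inward normal derivatives, $\mathbf{M}_\ell(k) = \frac{k}{\sin k\ell}\begin{pmatrix}-\cos k\ell & 1\\ 1 & -\cos k\ell\end{pmatrix}$, whose eigenvalues on the eigenvectors $(1,\pm 1)$ are $k\tan(k\ell/2)$ and $-k\cot(k\ell/2)$; this is a direct computation, taken here as an assumption, not a formula from the text.

```python
import numpy as np

def interval_M(k, ell):
    # Two-contact M-function of an interval of length ell (inward derivatives).
    s, c = np.sin(k * ell), np.cos(k * ell)
    return (k / s) * np.array([[-c, 1.0], [1.0, -c]])

a, b = 1.0, 0.5
# Shared eigenvectors (1,1) and (1,-1): the energy curves of M = M_a + M_b
# are sums of the scalar curves of the two intervals.
mu_plus = lambda k: k * (np.tan(k * a / 2) + np.tan(k * b / 2))      # (1, 1)
mu_minus = lambda k: -k / np.tan(k * a / 2) - k / np.tan(k * b / 2)  # (1,-1)

# Consistency check of the curves against the matrix M at a sample point.
k0, v = 2.0, np.array([1.0, -1.0])
M0 = interval_M(k0, a) + interval_M(k0, b)
assert np.allclose(M0 @ v, mu_minus(k0) * v)

# Just to the left of pi^2 = lambda_2(Gamma_1) the matrix M has exactly
# one negative eigenvalue, as claimed in the text.
k_left = np.pi - 1e-3
assert mu_plus(k_left) > 0 > mu_minus(k_left)
M_left = interval_M(k_left, a) + interval_M(k_left, b)
assert (np.linalg.eigvalsh(M_left) < 0).sum() == 1

# Bisection for the first nonzero root of the "blue" curve mu_minus.
lo, hi = 3.5, 4.5
for _ in range(60):
    mid = (lo + hi) / 2
    if mu_minus(lo) * mu_minus(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo)  # the root sits at k = 4*pi/3, the spectral gap of the cycle of length 3/2
```

The located root $k = 4\pi/3$ corresponds to the double eigenvalue $\lambda_2(\Gamma) = \lambda_3(\Gamma)$ of the cycle of circumference $a + b = 3/2$.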

**Gluing Two 3-Stars** This is another illustration of case (1) in Theorem 18.14. Consider two 3-star graphs with edge lengths $2, 1.5, 1$ and $0.4, 0.2, 0.5$ glued together (see Fig. 18.4). We assume that vertices at the same height are glued together.

The corresponding $3 \times 3$ M-functions are plotted in Fig. 18.5 using the variable $k = \operatorname{sign}(\lambda)\sqrt{|\lambda|}$ on the horizontal axis. The lengths of the edges are chosen so that

**Fig. 18.4** Gluing together two star graphs

$\lambda_2(\Gamma_1) \approx (0.8)^2 < \lambda_1^D(\Gamma_1) \approx (1.05)^2$. It is easy to see that $\mathbf{M}(\lambda_2(\Gamma_1))$ has precisely one positive eigenvalue. It follows that the spectral gap increases under gluing.

**Fig. 18.5** Eigenvalue curves for the star graphs with edge lengths $2, 1.5, 1$ and $0.4, 0.2, 0.5$ and for the glued graph, $k = \sqrt{\lambda}$, $\lambda > 0$. The vertical lines are asymptotes of the branches

Note that when gluing two segments there was no reason to choose the lengths in a special way. For the three-star graphs the edge lengths were chosen specially to guarantee $\lambda_2(\Gamma_1) < \lambda_1^D(\Gamma_1)$.

**Problem 80** The result in Example 7.12 depended on the ratio between the lengths $a$ and $b$. Why is the result independent of this ratio when two segments are glued?

**Problem 81** Use the M-function approach to show that the spectral gap always decreases if a chord is added to the cycle graph. In other words, prove the result presented in Example 7.13 using the newly developed approach.

## **18.3 Gluing Graphs and M-Functions**

We continue our studies by deriving an explicit formula for the M-function of the graph obtained by gluing together two metric graphs. We shall consider the case of the most general vertex conditions, even at the contact vertices.

## *18.3.1 The M-Function for General Vertex Conditions at the Contact Set*

Let $\Gamma$ be a finite compact metric graph with a preselected set of contact vertices $\partial\Gamma$. Consider the extended graph $\Gamma^{\mathrm{ext}}$ obtained from $\Gamma$ by attaching $M_\partial = |\partial\Gamma|$ edges $E_n = [x_{2n-1}, \infty)$, $n = N+1, \dots, N+M_\partial$, one to each contact vertex. Assume that the potentials $q$ and $a$ originally defined on $\Gamma$ are extended by zero to the rest of $\Gamma^{\mathrm{ext}}$:

$$q(\mathbf{x}) \equiv 0, \quad a(\mathbf{x}) \equiv 0, \quad \mathbf{x} \in \Gamma^{\text{ext}} \backslash \Gamma. \tag{18.25}$$

We also assume that the vertex conditions at all internal vertices are fixed, parametrising them via certain $d_m \times d_m$ unitary matrices $S_m$, where $d_m$ is the degree of the internal vertex $V^m$. For the contact vertices we select $(d_m+1) \times (d_m+1)$ unitary matrices $S_m$, where $d_m+1$ is the degree of the contact vertex $V^m$ in the graph $\Gamma^{\mathrm{ext}}$ (and $d_m$ the degree of $V^m$ in $\Gamma$).

Consider now solutions $\psi(\lambda, x)$ of the eigenfunction differential equation (17.2) on the edges of $\Gamma^{\mathrm{ext}}$ satisfying the vertex conditions at all (internal and contact) vertices. We introduce the limiting values of the function $\psi$ at the contact endpoints:

$$\begin{aligned} \vec{\psi}^{\partial} &:= (\psi(\mathbf{x}\_{2N+1}), \psi(\mathbf{x}\_{2N+3}), \dots, \psi(\mathbf{x}\_{2(N+M\_{\partial})-1})), \\ \partial \vec{\psi}^{\partial} &:= -(\psi'(\mathbf{x}\_{2N+1}), \psi'(\mathbf{x}\_{2N+3}), \dots, \psi'(\mathbf{x}\_{2(N+M\_{\partial})-1})). \end{aligned} \tag{18.26}$$

The extra minus sign in the definition of the normal derivatives is related to the fact that we are interested in the derivatives pointing inside the original graph $\Gamma$. We may generalise Definition 17.1 as follows.

**Definition 18.19** The **graph's M-function** $\mathbf{M}_\Gamma(\lambda)$ is the $M_\partial \times M_\partial$ matrix-valued function defined by the map:

$$\mathbf{M}\_{\Gamma}(\lambda) : \vec{\psi}^{\partial} \mapsto \partial \vec{\psi}^{\partial}, \quad \text{Im}\,\lambda \neq 0,\tag{18.27}$$

where *ψ <sup>∂</sup>* and *∂ψ <sup>∂</sup>* are the limiting values on *∂* for an arbitrary function *ψ(λ, x)* solving the differential equation (17.2) on ext and satisfying the vertex conditions at all vertices of ext.

The analysis carried out in Chap. 17 to justify Definition 17.1 can be repeated without much modification. The M-function is again a matrix-valued Herglotz-Nevanlinna function. The only essential difference is that the new definition requires that the $(d_m+1) \times (d_m+1)$ unitary matrices at contact vertices be selected. There was no necessity to select such matrices in the original definition since we used only standard conditions at the contact vertices. That convention determines uniquely the vertex conditions at the contact vertices both for $\Gamma$ and for $\Gamma^{\mathrm{ext}}$. Under this convention the new definition coincides with the original one.

The M-function is determined by the metric graph $\Gamma$, the contact set of vertices $\partial\Gamma$, the electric and magnetic potentials $q$ and $a$ (on $\Gamma$) and the vertex conditions on $\Gamma^{\mathrm{ext}}$.
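The Herglotz-Nevanlinna property can be illustrated numerically in the simplest case. The sketch below assumes the two-contact M-function of a single interval for the Laplacian with inward normal derivatives (a direct computation, not a formula from the text), with $k = \sqrt{\lambda}$ taken in the upper half plane; the operator imaginary part $(\mathbf{M} - \mathbf{M}^*)/2i$ must then be positive definite.

```python
import numpy as np

def interval_M(lam, ell=1.0):
    # Two-contact M-function of [0, ell] for -psi'' = lam * psi, mapping
    # boundary values to inward normal derivatives.
    k = np.sqrt(complex(lam))      # principal branch: Im k > 0 for Im lam > 0
    s, c = np.sin(k * ell), np.cos(k * ell)
    return (k / s) * np.array([[-c, 1.0], [1.0, -c]])

lam = 2.0 + 1.0j
M = interval_M(lam)
ImM = (M - M.conj().T) / 2j        # imaginary part in the operator sense
print(np.all(np.linalg.eigvalsh(ImM) > 0))  # True: Im M > 0 for Im lambda > 0
```

The positivity follows from Green's identity, $\langle \mathbf{M}\vec\psi^\partial, \vec\psi^\partial\rangle = \lambda\|\psi\|^2 - \|\psi'\|^2$, whose imaginary part equals $\operatorname{Im}\lambda \cdot \|\psi\|^2$.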

## *18.3.2 Gluing Graphs with General Vertex Conditions*

Let $\Gamma_1$ and $\Gamma_2$ be two finite compact metric graphs with preselected contact sets $\partial\Gamma_j$. We denote by $N_j$ and $M_j$ the numbers of edges and vertices in $\Gamma_j$ and by $M_{\partial_j}$ the number of vertices in the contact sets. We assume further that the Schrödinger differential expressions on $\Gamma_j$ and the vertex conditions on the extended graphs $\Gamma_j^{\mathrm{ext}}$ are selected. We denote by $\mathbf{M}_{\Gamma_j}(\lambda)$ the corresponding M-functions.

To define gluing we assume that certain vertices from $\partial\Gamma_1$ and $\partial\Gamma_2$ are glued together and these vertices are removed from the contact set for the new graph. To this end let us denote by $\delta(\Gamma_j) \subset \partial(\Gamma_j)$ the subsets of contact vertices that have to be joined. We assume that these sets contain an equal number of vertices $M_\delta$. Identifying the vertices from these subsets we get the new graph $\Gamma$ with the set of edges equal to the union of the edges in $\Gamma_1$ and $\Gamma_2$:

$$\{E\_n(\Gamma)\}\_{n=1}^{N\_1+N\_2} = \left\{E\_{n\_1}(\Gamma\_1)\right\}\_{n\_1=1}^{N\_1} \cup \left\{E\_{n\_2}(\Gamma\_2)\right\}\_{n\_2=1}^{N\_2}.\tag{18.28}$$

To describe the vertices in the glued graph $\Gamma$ let us without loss of generality enumerate the vertices in $\Gamma_j$ so that:

$$V^n(\Gamma\_j) \in \delta(\Gamma\_j), \quad n = 1, 2, \dots, M\_\delta, \ j = 1, 2.$$

Then the $M_1 + M_2 - M_\delta$ vertices in $\Gamma$ are

$$V^n(\Gamma) = V^n(\Gamma\_1) \cup V^n(\Gamma\_2), \quad n = 1, 2, \dots, M\_\delta;$$

$$V^n(\Gamma) = V^n(\Gamma\_1), \quad n = M\_\delta + 1, \dots, M\_1; \tag{18.29}$$

$$V^{M\_1 - M\_\delta + n}(\Gamma) = V^n(\Gamma\_2), \quad n = M\_\delta + 1, \dots, M\_2.$$

We should remember that the vertices are considered as equivalence classes of endpoints, hence the vertices from the first series are obtained by joining the equivalence classes corresponding to the vertices in $\Gamma_1$ and $\Gamma_2$. The contact set for $\Gamma$ is chosen equal to the union of the contact sets in $\Gamma_1$ and $\Gamma_2$ with the glued vertices taken away:

$$
\partial \Gamma = \underbrace{V^{M_\delta+1} \cup \dots \cup V^{M_{\partial_1}}}_{\text{the vertices inherited from } \partial \Gamma_1} \cup \underbrace{V^{M_1+1} \cup \dots \cup V^{M_1+M_{\partial_2}-M_\delta}}_{\text{the vertices inherited from } \partial \Gamma_2}.
\tag{18.30}
$$

It is natural to keep vertex conditions at the preserved vertices, but we need to select the vertex conditions at the glued vertices

$$\delta(\Gamma) := \{ V^1(\Gamma), V^2(\Gamma), \dots, V^{M_\delta}(\Gamma) \}.$$

Consider any solution $\psi(\lambda, x)$ of the differential equation (17.2) on $\Gamma$. The restriction of $\psi$ to $\Gamma_j$ possesses a unique extension to $\Gamma_j^{\mathrm{ext}}$ solving the same differential equation on $\Gamma_j^{\mathrm{ext}}$. We introduce the vectors $\vec{\psi}^{\delta}(\Gamma_j^{\mathrm{ext}})$, $\partial\vec{\psi}^{\delta}(\Gamma_j^{\mathrm{ext}})$ with the entries given by the limiting values of the solution $\psi$ at the contact vertices from $\delta(\Gamma_j)$. Then we require, in addition to the vertex conditions at $V^n(\Gamma_1)$ and $V^n(\Gamma_2)$, $n = 1, 2, \dots, M_\delta$, the following gluing conditions

$$\begin{cases} \vec{\psi}^{\delta}(\Gamma\_1^{\text{ext}}) = \vec{\psi}^{\delta}(\Gamma\_2^{\text{ext}}), \\\\ \partial \vec{\psi}^{\delta}(\Gamma\_1^{\text{ext}}) = -\partial \vec{\psi}^{\delta}(\Gamma\_2^{\text{ext}}). \end{cases} \tag{18.31}$$

The gluing conditions (18.31) are a certain generalisation of the standard conditions at degree two vertices. These conditions can be written using the standard form (3.21) involving just the limiting values at the glued vertex in $\Gamma$.

To understand where these conditions come from, consider the graph $\tilde\Gamma$ obtained from $\Gamma_1$ and $\Gamma_2$ not by gluing the first $M_\delta$ vertices directly, but by connecting these vertices pairwise by short edges. In the limit where the lengths of these tiny edges go to zero we obtain the above conditions. This procedure is reminiscent of the contraction of edges in graphs as described in Sect. 7.1. We leave the justification that the obtained conditions are Hermitian as a problem for the reader.

**Problem 82** Let $S_1$ and $S_2$ be two unitary matrices of arbitrary dimensions $d_1+1$ and $d_2+1$ respectively. Then the system of linear relations

$$\begin{cases} i(S\_1 - I) \begin{pmatrix} \vec{\psi}\_1 \\ a\_1 \end{pmatrix} = (S\_1 + I) \begin{pmatrix} \partial \vec{\psi}\_1 \\ b\_1 \end{pmatrix}, \ \vec{\psi}\_1, \ \partial \vec{\psi}\_1 \in \mathbb{C}^{d\_1}; \\\\ i(S\_2 - I) \begin{pmatrix} \vec{\psi}\_2 \\ a\_2 \end{pmatrix} = (S\_2 + I) \begin{pmatrix} \partial \vec{\psi}\_2 \\ b\_2 \end{pmatrix}, \ \vec{\psi}\_2, \ \partial \vec{\psi}\_2 \in \mathbb{C}^{d\_2}; \\\\ a\_1 = a\_2, & a\_1, a\_2 \in \mathbb{C}; \\\\ b\_1 = -b\_2, & b\_1, b\_2 \in \mathbb{C} \end{cases} \tag{18.32}$$

after excluding $a_j$, $b_j$, can be written as

$$i(\mathcal{S} - I) \begin{pmatrix} \vec{\psi}_1 \\ \vec{\psi}_2 \end{pmatrix} = (\mathcal{S} + I) \begin{pmatrix} \partial\vec{\psi}_1 \\ \partial\vec{\psi}_2 \end{pmatrix},\tag{18.33}$$

where $\mathcal{S}$ is a certain $(d_1+d_2) \times (d_1+d_2)$ unitary matrix. The matrix $\mathcal{S}$ is irreducible if the matrices $S_1$ and $S_2$ are.

The following lemma describes the relation between the M-functions associated with $\Gamma_j$ and $\Gamma$. It is not surprising that the M-functions for the glued components determine the M-function for the resulting graph.

**Lemma 18.20** *Let $\Gamma$ be the finite compact metric graph obtained by gluing together certain graphs $\Gamma_1$ and $\Gamma_2$ by identifying the sets $\delta(\Gamma_j) \subset \partial(\Gamma_j)$, $j = 1, 2$, where $\partial(\Gamma_j)$ are the contact sets for $\Gamma_j$. Assume that the contact set for $\Gamma$ is inherited from the contact sets of the glued graphs as described in* (18.30)*. Then the M-functions associated with the graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma$ are related via*

$$\mathbf{M}_{\Gamma}(\lambda) = \begin{pmatrix} M_1^{22} - M_1^{21} (M_1^{11} + M_2^{11})^{-1} M_1^{12} & -M_1^{21} (M_1^{11} + M_2^{11})^{-1} M_2^{12} \\ -M_2^{21} (M_1^{11} + M_2^{11})^{-1} M_1^{12} & M_2^{22} - M_2^{21} (M_1^{11} + M_2^{11})^{-1} M_2^{12} \end{pmatrix}, \quad \operatorname{Im} \lambda \neq 0, \tag{18.34}$$

*where $M_j^{lm}$, $l, m = 1, 2$, come from the block decomposition of $\mathbf{M}_{\Gamma_j}$*

$$\mathbf{M}_{\Gamma_j}(\lambda) = \begin{pmatrix} M_j^{11}(\lambda) & M_j^{12}(\lambda) \\ M_j^{21}(\lambda) & M_j^{22}(\lambda) \end{pmatrix},\tag{18.35}$$

*with the principal block $M_j^{11}$ having dimension $M_\delta \times M_\delta$.*

*Proof* The matrix $M_1^{11}(\lambda) + M_2^{11}(\lambda)$ is invertible as a sum of two nontrivial Herglotz-Nevanlinna functions. Consider any solution to the differential equation (17.2) satisfying the vertex conditions on the extended graphs $\Gamma_j^{\mathrm{ext}}$. Then its values on the contact sets for $\Gamma_j$ are connected via the corresponding M-functions $\mathbf{M}_{\Gamma_j}(\lambda)$:

$$\mathbf{M}_{\Gamma_j} \vec{\psi}^{\partial}(\Gamma_j) = \partial\vec{\psi}^{\partial}(\Gamma_j).$$

To use the block decomposition (18.35) let us denote, following (18.31), by $\vec{\psi}^{\delta}$ the vector of common (for $\Gamma_1$ and $\Gamma_2$) values of the solution at the glued vertices, and by $\vec{\psi}_j^{\partial}$ the complementary vectors of limiting values on the contact sets $\partial\Gamma_j$:

$$
\vec{\psi}^{\partial}(\Gamma_j) = \begin{pmatrix} \vec{\psi}^{\delta} \\ \vec{\psi}_j^{\partial} \end{pmatrix}, \quad j = 1, 2.
$$

We get the *(M∂* + *Mδ)* × *(M∂* + *Mδ)* matrix system

$$
\begin{pmatrix}
M_1^{11} + M_2^{11} & M_1^{12} & M_2^{12} \\
M_1^{21} & M_1^{22} & 0 \\
M_2^{21} & 0 & M_2^{22}
\end{pmatrix}
\begin{pmatrix}
\vec{\psi}^{\delta} \\
\vec{\psi}_1^{\partial} \\
\vec{\psi}_2^{\partial}
\end{pmatrix}
=
\begin{pmatrix}
0 \\
\partial\vec{\psi}_1^{\partial} \\
\partial\vec{\psi}_2^{\partial}
\end{pmatrix},
\tag{18.36}
$$

where we have taken into account the gluing conditions (18.31). Excluding $\vec{\psi}^{\delta}$ using the first equation we obtain

$$\begin{split} \partial \vec{\psi}^{\partial}(\Gamma\_{1}) &= M\_{1}^{22} \vec{\psi}\_{1}^{\partial} - M\_{1}^{21} \left( M\_{1}^{11} + M\_{2}^{11} \right)^{-1} \left( M\_{1}^{12} \vec{\psi}\_{1}^{\partial} + M\_{2}^{12} \vec{\psi}\_{2}^{\partial} \right), \\ \partial \vec{\psi}^{\partial}(\Gamma\_{2}) &= M\_{2}^{22} \vec{\psi}\_{2}^{\partial} - M\_{2}^{21} \left( M\_{1}^{11} + M\_{2}^{11} \right)^{-1} \left( M\_{1}^{12} \vec{\psi}\_{1}^{\partial} + M\_{2}^{12} \vec{\psi}\_{2}^{\partial} \right), \end{split} \tag{18.37}$$

leading to formula (18.34). 
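Formula (18.34) admits a simple numerical sanity check in a case where the answer is known in closed form: gluing two intervals at one endpoint produces an interval of the total length, since a standard degree-two vertex is invisible. The sketch below assumes the two-contact M-function of an interval for the Laplacian with inward normal derivatives (a direct computation, not a formula from the text), ordering each contact set with the glued vertex first.

```python
import numpy as np

def interval_M(k, ell):
    # Two-contact M-function of [0, ell]; first row/column corresponds to
    # the endpoint that will be glued.
    s, c = np.sin(k * ell), np.cos(k * ell)
    return (k / s) * np.array([[-c, 1.0], [1.0, -c]])

def glue(M1, M2, m):
    # Formula (18.34): glue along the first m contact vertices of each graph.
    A = np.linalg.inv(M1[:m, :m] + M2[:m, :m])
    return np.block([
        [M1[m:, m:] - M1[m:, :m] @ A @ M1[:m, m:], -M1[m:, :m] @ A @ M2[:m, m:]],
        [-M2[m:, :m] @ A @ M1[:m, m:], M2[m:, m:] - M2[m:, :m] @ A @ M2[:m, m:]],
    ])

a, b, k = 0.7, 0.4, 1.3   # k away from the singularities of all three M-functions
M_glued = glue(interval_M(k, a), interval_M(k, b), 1)
print(np.allclose(M_glued, interval_M(k, a + b)))  # True
```

The agreement with $\mathbf{M}_{a+b}$ can also be verified by hand using $\cot ka + \cot kb = \sin(k(a+b))/(\sin ka \sin kb)$.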

This at first glance elementary lemma has a very important implication: it can be used to solve inverse problems for graphs, primarily for trees. It turns out that if the gluing set consists of just one vertex, then not only is $\mathbf{M}_\Gamma(\lambda)$ uniquely determined by $\mathbf{M}_{\Gamma_1}(\lambda)$ and $\mathbf{M}_{\Gamma_2}(\lambda)$, but also $\mathbf{M}_\Gamma(\lambda)$ and $\mathbf{M}_{\Gamma_1}(\lambda)$ determine $\mathbf{M}_{\Gamma_2}(\lambda)$.

**Theorem 18.21** *Let $\Gamma$ be the finite compact metric graph obtained by gluing the graphs $\Gamma_1$ and $\Gamma_2$ as explained in Lemma 18.20. Assume in addition that the graphs are connected and the gluing set $\delta\Gamma$ consists of just one vertex, $M_\delta = 1$. Then any two of the three M-functions associated with $\Gamma_1$, $\Gamma_2$, and $\Gamma$ determine the third one.*

*Proof* Lemma 18.20 states that $\mathbf{M}_\Gamma$ is determined by $\mathbf{M}_{\Gamma_1}$ and $\mathbf{M}_{\Gamma_2}$ via formula (18.34). Therefore it remains to show that $\mathbf{M}_\Gamma$ and $\mathbf{M}_{\Gamma_2}$ determine $\mathbf{M}_{\Gamma_1}$.

Let us examine formula (18.34). The matrix function $\mathbf{M}_{\Gamma_2}(\lambda)$ is not block-diagonal since the graph $\Gamma_2$ is assumed connected. In particular the entries $M_2^{12}(\lambda)$ and $M_2^{21}(\lambda)$ and their product are not identically zero. Then the lower diagonal block of $\mathbf{M}_\Gamma(\lambda)$ determines the function $M_1^{11}(\lambda)$:

$$M_2^{22}(\lambda) - (\mathbf{M}_\Gamma(\lambda))_{22} = \left(M_1^{11}(\lambda) + M_2^{11}(\lambda)\right)^{-1} \underbrace{M_2^{21}(\lambda)M_2^{12}(\lambda)}_{\text{known non-zero matrix function}}.$$

We used here that $M_1^{11}$ and $M_2^{11}$ are scalar functions and therefore commute with $M_2^{21}$. The scalar function $\left(M_1^{11}(\lambda) + M_2^{11}(\lambda)\right)^{-1}$ appears as a proportionality coefficient between the left and right hand sides.

The off-diagonal blocks determine $M_1^{21}$ and $M_1^{12}$. Finally the principal block of $\mathbf{M}_\Gamma(\lambda)$ is used to determine $M_1^{22}(\lambda)$, and thus the reconstruction of $\mathbf{M}_{\Gamma_1}(\lambda)$ is accomplished. $\square$

The assumption that the gluing set consists of just one vertex is very important for our proof and cannot be removed. Therefore the M-function alone does not allow one to solve the inverse problem for graphs with cycles, while the inverse problem for trees can be solved by chopping off the edges one-by-one (see Sect. 20.6).

## **Appendix 1: Scattering from Compact Graphs**

With every finite compact graph having a nontrivial contact set we associate a scattering matrix, despite the fact that the spectrum of the corresponding magnetic Schrödinger operator is purely discrete. This scattering matrix is a straightforward generalisation of the single interval scattering matrix introduced in Sect. 5.2. It is just a fractional linear transformation of the corresponding M-function and therefore encodes all information which can be obtained in experiments not destroying the structure of the graph.

We follow the notation from Sect. 18.3.1. Consider the extended graph $\Gamma^{\mathrm{ext}}$ obtained from $\Gamma$, as above, by attaching $M_\partial$ semi-infinite edges $E_n = [x_{2n-1}, \infty)$, $n = N+1, \dots, N+M_\partial$, to all contact vertices. We assume that the vertex conditions on $\Gamma^{\mathrm{ext}}$ are selected. It is natural to see the original graph $\Gamma$ as a subset of $\Gamma^{\mathrm{ext}}$. Both the magnetic and electric potentials are extended by zero outside $\Gamma$.

Consider any function $\psi(\lambda, x)$ solving Eq. (17.2) on the whole of $\Gamma^{\mathrm{ext}}$ and satisfying the prescribed conditions at all vertices. Outside the original graph $\Gamma$ the function $\psi$ is given by a combination of plane waves

$$\psi(\lambda, x) = a_{2n-1}e^{-ik|x - x_{2n-1}|} + b_{2n-1}e^{ik|x - x_{2n-1}|}. \tag{18.38}$$

Then the **graph scattering matrix** $\mathbf{S}_\Gamma(\lambda)$ is the $M_\partial \times M_\partial$ matrix connecting the amplitudes of the incoming waves $\vec{A}^{\partial} = \{a_{2n-1}\}_{n=N+1}^{N+M_\partial}$ and the outgoing waves $\vec{B}^{\partial} = \{b_{2n-1}\}_{n=N+1}^{N+M_\partial}$:

$$
\vec{B}^{\partial} = \mathbf{S}\_{\Gamma}(\lambda)\vec{A}^{\partial}.\tag{18.39}
$$

The scattering matrix is determined by the graph's M-function.

**Theorem 18.22** *The M-function $\mathbf{M}_\Gamma(\lambda)$ and the scattering matrix $\mathbf{S}_\Gamma(\lambda)$ for the same compact finite quantum graph are related as follows for almost every $\lambda \in \mathbb{R}_+$:*

$$\mathbf{S}\_{\Gamma}(\lambda) = \frac{ik\mathbf{I} - \mathbf{M}\_{\Gamma}(\lambda)}{ik\mathbf{I} + \mathbf{M}\_{\Gamma}(\lambda)}.\tag{18.40}$$

*Proof* Let $\psi(\lambda, x)$ be a solution of Eq. (17.2) for the magnetic Schrödinger operator on $\Gamma^{\mathrm{ext}}$ satisfying the prescribed vertex conditions at all vertices. Consider the limiting values of the solution as given by (18.26); they are related via the graph's M-function

$$
\partial \vec{\psi}^{\partial} = \mathbf{M}\_{\Gamma}(\lambda) \vec{\psi}^{\partial} .
$$

The same limiting values can be calculated directly from the representation (18.38)

$$\begin{cases} \vec{\psi}^{\partial} &= \vec{A}^{\partial} + \vec{B}^{\partial}, \\ -\partial \vec{\psi}^{\partial} &= -ik\vec{A}^{\partial} + ik\vec{B}^{\partial}. \end{cases}$$

The extra sign on the left hand side of the last equality appears due to definition (18.26).

We get the following equation connecting the amplitudes of incoming and outgoing waves

$$\mathbf{M}\_{\Gamma}(\lambda) \left( \vec{A}^{\partial} + \vec{B}^{\partial} \right) = ik \vec{A}^{\partial} - ik \vec{B}^{\partial},$$

which implies (18.40) almost everywhere, more precisely for all $\lambda$ which are not singularities of $\mathbf{M}_\Gamma(\lambda)$. We know that the singularities of the M-function form a uniformly discrete set on the real line. $\square$
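Away from the singularities, the unitarity of (18.40) for real $\lambda$ is a purely algebraic consequence of $\mathbf{M}_\Gamma(\lambda)$ being Hermitian there. A minimal numerical sketch, with a random Hermitian matrix standing in for the M-function (sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 1.7                            # k = sqrt(lambda) > 0
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = X + X.conj().T                       # Hermitian stand-in for M_Gamma(lambda)
I = np.eye(n)
S = (1j * k * I - M) @ np.linalg.inv(1j * k * I + M)   # formula (18.40)
print(np.allclose(S @ S.conj().T, I))    # True: S is unitary
```

Note that $ik\mathbf{I} + \mathbf{M}$ is always invertible for real $k \neq 0$, since the eigenvalues of the Hermitian matrix $\mathbf{M}$ are real.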

**Theorem 18.23** *The scattering matrix $\mathbf{S}_\Gamma(\lambda)$ can be extended to a continuous unitary matrix function for all $\lambda \in \mathbb{R}_+$.*

*Proof* We prove the theorem for the case of standard vertex conditions on the contact set. The proof for general vertex conditions follows essentially the same lines but requires proving the representation (18.10) for such vertex conditions.

The formula (18.40) determines the scattering matrix for all real $\lambda$ different from the singularities of the M-function. In the case of standard conditions at the contact vertices the singularities of $\mathbf{M}_\Gamma$ are situated at the eigenvalues $\lambda_n^D$ of the Dirichlet operator—the operator determined by Dirichlet conditions at the contact set. This follows directly from the representation (18.10).

We consider a small neighbourhood of a point $\lambda_n^D$ from the Dirichlet spectrum and calculate explicitly the limit

$$\lim\_{\lambda \to \lambda\_n^D} \mathbf{S}\_{\Gamma}(\lambda).$$

The M-function possesses the representation (18.10). Consider the decomposition of the space $\mathbb{C}^{M_\partial}$ associated with the kernel of the Hermitian matrix $C_n$ determining the behaviour of $\mathbf{M}_\Gamma$ near the singularity:

$$
\mathbb{C}^{M\_{\partial}} = \left(\text{Ker } \mathcal{C}\_n\right)^\perp \oplus \text{Ker } \mathcal{C}\_n.
$$

Observe that the matrix $C_n$ restricted to $(\operatorname{Ker} C_n)^\perp$ is invertible. Using this decomposition the representation (18.10) takes the form

$$\mathbf{M}_{\Gamma}(\lambda) = \frac{1}{\lambda_n^D - \lambda} \begin{pmatrix} C_n & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} F_{11}(\lambda) & F_{12}(\lambda) \\ F_{21}(\lambda) & F_{22}(\lambda) \end{pmatrix},\tag{18.41}$$

where $F_{ij}(\lambda)$ are analytic matrix-valued functions. For real $\lambda \neq \lambda_n^D$ the M-function is Hermitian, hence we have

$$F\_{11}^\*(\lambda) = F\_{11}(\lambda), \quad F\_{22}^\*(\lambda) = F\_{22}(\lambda), \quad F\_{12}^\*(\lambda) = F\_{21}(\lambda).$$

Formula (18.40) can be rewritten as

$$\begin{split} \mathbf{S}_{\Gamma}(\lambda) &= -\mathbf{I} + 2ik \left( ik \mathbf{I} + \mathbf{M}_{\Gamma}(\lambda) \right)^{-1} \\ &= -\mathbf{I} + 2ik \begin{pmatrix} ik \mathbf{I}_{(\operatorname{Ker} C_{n})^{\perp}} + \frac{1}{\lambda_{n}^{D} - \lambda} C_{n} + F_{11}(\lambda) & F_{12}(\lambda) \\ F_{21}(\lambda) & ik \mathbf{I}_{\operatorname{Ker} C_{n}} + F_{22}(\lambda) \end{pmatrix}^{-1}. \end{split}$$

Both diagonal entries are invertible, since the diagonal entries of the M-function are Hermitian matrices themselves. Hence we may use Schur complements to calculate the inverse using the formula

$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B \mathcal{S}^{-1} C A^{-1} & -A^{-1} B \mathcal{S}^{-1} \\ -\mathcal{S}^{-1} C A^{-1} & \mathcal{S}^{-1} \end{pmatrix}, \tag{18.42}
$$

where the Schur complement is

$$\mathcal{S} := D - CA^{-1}B.$$

We use explicit representations to determine the limits of the involved matrices

$$\begin{split} A^{-1}(\lambda) &= \left(ik\mathbf{I}_{(\operatorname{Ker} C_{n})^{\perp}} + \frac{1}{\lambda_{n}^{D} - \lambda}C_{n} + F_{11}(\lambda)\right)^{-1} \xrightarrow[\lambda \to \lambda_{n}^{D}]{} 0, \\ B(\lambda) &= F_{12}(\lambda) \xrightarrow[\lambda \to \lambda_{n}^{D}]{} F_{12}(\lambda_{n}^{D}), \\ C(\lambda) &= F_{21}(\lambda) \xrightarrow[\lambda \to \lambda_{n}^{D}]{} F_{21}(\lambda_{n}^{D}), \\ D(\lambda) &= ik\mathbf{I}_{\operatorname{Ker} C_{n}} + F_{22}(\lambda) \xrightarrow[\lambda \to \lambda_{n}^{D}]{} ik\mathbf{I}_{\operatorname{Ker} C_{n}} + F_{22}(\lambda_{n}^{D}), \\ \mathcal{S}(\lambda) &= D(\lambda) - C(\lambda)A^{-1}(\lambda)B(\lambda) \xrightarrow[\lambda \to \lambda_{n}^{D}]{} ik\mathbf{I}_{\operatorname{Ker} C_{n}} + F_{22}(\lambda_{n}^{D}). \end{split}$$

Summing up we have proven that

$$\begin{pmatrix} ik\mathbf{I}\_{\left(\text{Ker }C\_{n}\right)^{\perp}} + \frac{1}{\lambda\_{n}^{D} - \lambda} C\_{n} + F\_{11}(\lambda) & F\_{12}(\lambda) \\ F\_{21}(\lambda) & ik\mathbf{I}\_{\text{Ker }C\_{n}} + F\_{22}(\lambda) \end{pmatrix}^{-1}$$

$$\xrightarrow[\lambda \to \lambda\_{n}^{D}]{} \begin{pmatrix} 0 & 0 \\ 0 \left(ik\mathbf{I}\_{\text{Ker }C\_{n}} + F\_{22}(\lambda\_{n}^{D})\right)^{-1} \end{pmatrix}.$$

For the scattering matrix we get

$$\lim\_{\lambda \to \lambda\_n^D} \mathbf{S}\_\Gamma(\lambda) = \begin{pmatrix} -\mathbf{I}\_{\mathrm{(Ker } \mathcal{C}\_n)^\perp} & \mathbf{0} \\ \mathbf{0} & \frac{ik\mathbf{I}\_{\mathrm{Ker } \mathcal{C}\_n} - F\_{22}(\lambda\_n^D)}{ik\mathbf{I}\_{\mathrm{Ker } \mathcal{C}\_n} + F\_{22}(\lambda\_n^D)} \end{pmatrix}. \tag{18.43}$$

This formula determines a unitary matrix in $\mathbb{C}^{M_\partial} = (\operatorname{Ker} C_n)^\perp \oplus \operatorname{Ker} C_n$. $\square$
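The block-inverse identity (18.42) used in the proof is easy to verify numerically on a random partitioned matrix (the sizes below are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2
T = rng.standard_normal((p + q, p + q))
A, B = T[:p, :p], T[:p, p:]
C, D = T[p:, :p], T[p:, p:]

Ai = np.linalg.inv(A)
Schur = D - C @ Ai @ B                  # Schur complement of A
Si = np.linalg.inv(Schur)
inv = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                [-Si @ C @ Ai,              Si]])
print(np.allclose(inv, np.linalg.inv(T)))  # True
```

The identity holds whenever $A$ and the Schur complement $\mathcal{S} = D - CA^{-1}B$ are invertible, which is the situation in the proof above.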

The graph's scattering matrix can be used to determine detectable eigenvalues of both Dirichlet and standard operators on the graph:

(1) if *λ*<sup>0</sup> is a solution to the equation

$$\det(\mathbf{S}\_{\Gamma}(\lambda) - \mathbf{I}) = 0,\tag{18.44}$$

then $\lambda_0$ belongs to the spectrum of the standard Laplacian $L^{\mathrm{st}} = L_{q,a}^{\mathbf{S}^{\mathrm{int}},\mathrm{st}}$;

(2) if $\lambda_0$ is a solution to the equation

$$\det(\mathbf{S}\_{\Gamma}(\lambda) + \mathbf{I}) = 0,\tag{18.45}$$

then $\lambda_0$ belongs to the spectrum of the Dirichlet Laplacian $L^{\mathrm{D}} = L_{q,a}^{\mathbf{S}^{\mathrm{int}},\mathrm{D}}$.

**Problem 83** Prove formulas (18.44) and (18.45) above.
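For the simplest graph both conditions can be checked by hand. Take $\Gamma = [0, \ell]$ with contact set $\{0\}$ and the Neumann condition at $x = \ell$; a direct computation (taken here as an assumption, not a formula from the text) gives the scalar M-function $\mathbf{M}_\Gamma(\lambda) = k \tan k\ell$, so (18.40) becomes a scalar function. Then $\mathbf{S}_\Gamma = 1$ forces $\tan k\ell = 0$, the spectrum of the standard (Neumann) Laplacian, while $\mathbf{S}_\Gamma = -1$ forces $\cos k\ell = 0$, the spectrum of the Dirichlet operator at the contact vertex:

```python
import numpy as np

ell = 1.0
def S(k):
    # Scalar version of (18.40) with M(lambda) = k * tan(k * ell).
    return (1j * k - k * np.tan(k * ell)) / (1j * k + k * np.tan(k * ell))

k_std = np.pi / ell        # standard (Neumann) eigenvalue: tan(k*ell) = 0
k_dir = np.pi / (2 * ell)  # Dirichlet eigenvalue: cos(k*ell) = 0
print(abs(S(k_std) - 1))          # ~ 0: det(S - I) = 0, cf. (18.44)
print(abs(S(k_dir - 1e-9) + 1))   # ~ 0: det(S + I) = 0, cf. (18.45)
```

At the Dirichlet point itself the M-function is singular, so the scattering matrix is evaluated slightly off the singularity, in agreement with the continuous extension of Theorem 18.23.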

**Problem 84** Calculate the scattering matrix for the Laplace operator on the lasso graph $G(2.2)$ given in Fig. 6.11. Compare the result to the M-function calculated in Example 17.4.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 19 Boundary Control: BC-Method**

## **19.1 Inverse Problems: First Look**

With this chapter we start the discussion on how to solve in full generality the inverse problems for Schrödinger operators on metric graphs. When considering Ambartsumian type theorems in Chaps. 14 and 15, we have already pointed out (see the introductory comments to Chap. 14) that the solution of the inverse problem means recovering all three members of the triple


We do not discuss how to reconstruct the magnetic potential since it can be eliminated, leading to different vertex conditions (see Chap. 16). On the contrary, we are going to consider spectral data dependent on the magnetic fluxes through the cycles in the graph, thus allowing a non-destructive investigation of quantum graphs in real world experiments. The fact that spectral and transport properties of nano-systems depend on the magnetic fluxes $\Phi_j$ is well known to physicists as the Aharonov-Bohm effect [9, 110, 113, 459, 470, 485, 508]. More precisely, we shall use spectral data for the magnetic fluxes equal to $0$ and $\pi$. These spectral data correspond to the standard Schrödinger operators on $\Gamma$ with zero magnetic potential and possibly extra signing conditions (3.43) introduced on every cycle.

The proven Ambartsumian type theorems allow us to solve the inverse spectral problem in certain very specific cases. In general, a single spectrum is not enough to solve the inverse problem. For example, a potential on an interval is determined by the two spectra corresponding to different boundary conditions at one of the endpoints [375]. In the case of metric graphs one may extend the set of spectral data by adding the spectra of the problems obtained by amending vertex conditions at different vertices. We do not pursue this direction, since knowledge of such spectra for all vertices trivialises reconstruction of the metric graph and the problem is highly overdetermined for large graphs.

Our set of spectral data will contain the M-functions associated with a relatively small set of vertices, to be called the **contact set**. One should imagine that contact vertices are used to approach the graph. For example, in the case of trees the contact set can be chosen to coincide with all degree one vertices. On one hand, drawing an arbitrary tree on a sheet of paper the degree one vertices naturally form the graph's boundary. On the other hand, the M-function's diagonal entry associated with any degree one vertex determines the potential on the corresponding edge. This reconstruction can be carried out using the **Boundary Control** method described in this chapter. As a result we end up with the M-function associated with a smaller tree. Repeating the procedure we solve the inverse problem step-by-step. This procedure is described in Chap. 20.

In general the contact set should be allowed to contain higher degree vertices, since there are graphs without degree one vertices. The following assumption will be in force in the rest of the book.

**Assumption 19.1** *The contact set $\partial\Gamma$ is a non-empty subset of the vertex set that contains all degree one vertices.*

To guarantee the unique solvability of the inverse problem the contact vertices should be well-distributed inside the graph $\Gamma$, taking into account its topology. Without any knowledge of the graph's structure it is hard to formulate explicit conditions on how the contact vertices should be placed. The number of required contact vertices may be reduced if one considers the M-functions depending on the magnetic fluxes through the cycles. This will allow us to reconstruct the M-function for a spanning tree associated with the original metric graph $\Gamma$. We call the corresponding method **Magnetic Boundary Control** as it uses ideas from the classical Boundary Control method, but the spectral data are magnetic flux dependent.

Using the MBC-method different approaches having local and global characters are combined. The local approach we are going to use is the **Boundary Control method** (BC-method) due to Belishev [67, 70–72], who formulated and developed this approach bringing ideas from control theory to the area of inverse problems. Local approaches to inverse problems have been used earlier independently by Gopinath and Sondhi [241, 242] and by Blagoweshchenskii [91], but it was Belishev, who turned BC-method into a standard tool to solve inverse problems, with the help of numerous collaborators and colleagues, in particular: S. Avdonin, D. Korikov, Ya. Kurylev, L. Oksanen, L. Pestov, and A. Vakulenko. BC-method takes particular simple form in one dimension, where it is closely related to the solution of the inverse problem using the asymptotics of the M-function, suggested recently by Simon et al. [238, 441, 472], the relations are well-described in [460]. As the name suggests, the BC-method uses ideas from control theory to solve inverse problems using boundary observations. The Laplace transform connects the response operator appearing in the BC-method to the graph's M-function (see (19.12)). The procedure

of opening the cycles has a global character and is based entirely on the connection between the M-functions for the graph and its spanning tree.

In this chapter we give a comprehensive introduction to the Boundary Control method and discuss how it can be used to solve the inverse problem for the star graph. This approach emerged from our papers [37, 44].

Before we proceed let me mention that the inverse problems for operators on metric graphs have been discussed using alternative sets of spectral data. It is not always straightforward to establish connections between these approaches. The method of spectral mapping was used by V. A. Yurko and collaborators; let me mention the most important publications: [223, 224, 509–524]. M. Belishev with collaborators employed the BC-method, in particular its variant used to reconstruct Riemann surfaces, to solve inverse problems for graphs [68, 69, 73–75, 280]. See also [98–101, 109, 122, 151, 331, 426, 427, 429, 430, 447, 458, 507].

## **19.2 How to Use BC-Method for Graphs**

With the Schrödinger operator $L_q = -\frac{d^2}{dx^2} + q$ on $[0,\infty)$ one associates the wave equation

$$\begin{cases} \frac{\partial^2}{\partial t^2} u + L_q u = 0, & x > 0,\ t > 0, \\ u(x,0) = \frac{\partial}{\partial t} u(x,0) \equiv 0, \\ u(0,t) = f(t). \end{cases}\tag{19.1}$$

The function $f$ is called the **boundary control**. Solving the wave equation one obtains a certain differentiable function $u^f(x,t)$. The linear operator

$$\mathbf{R}\colon \underbrace{f}_{=u^f(0,\cdot)} \mapsto \frac{\partial}{\partial x} u^f(0,t) \tag{19.2}$$

is called the **response operator**; it contains all information that an observer placed at the origin can possibly obtain sending waves into $[0,\infty)$ and collecting their response. In the theory of one-dimensional inverse problems it is proven that the response operator determines the potential $q$ (see Sect. 19.4). The reconstruction procedure is local in the sense that in order to reconstruct the potential on the interval $[0,\ell]$ one needs to know the response operator only for all $t \le T = 2\ell$. Since the propagation speed is equal to $1$, the time $T = 2\ell$ is precisely the time needed for the wave to travel from the boundary point $x = 0$ to $x = \ell$ and back. It is clear that this result is optimal, since the response operators for any $T' < T$ are independent of

the form of the potential on the interval $x > T'/2$. A precise solution of the inverse problem following the BC-method is described in Sect. 19.4.

The described properties of the BC-method show that it may be applied to Schrödinger operators on graphs in order to recover the potential on the edges having a degree one vertex as one of their endpoints. No serious modification is required, since the wave evolution on a metric graph has finite speed of propagation. Let us assume that the boundary control is applied at some degree one vertex. Then for small values of time $t$ the waves initiated by the boundary control may reach only a small neighbourhood of the vertex. More precisely, the wave function may be different from zero only at points $x$ at distances less than or equal to $t$ from the vertex. If $T$ is less than double the length of the pendant edge, then the response operator $\mathbf{R}$ coincides with the response operator for the half-axis with the same potential on the interval $[0, T/2)$. This principle can be extended to compare response operators for two arbitrary quantum graphs with equal potentials on one of the pendant edges: the corresponding entries of the response operators are identical for sufficiently small values of $t$.

## **19.3 The Response Operator and the M-Function**

Assume that a magnetic Schrödinger operator $L^{\mathbf{S}}_{q,a}(\Gamma)$ is given. On the metric graph $\Gamma$ we select any non-empty contact set of vertices $\partial\Gamma$ satisfying Assumption 19.1. As before we assume standard vertex conditions on $\partial\Gamma$ to facilitate our presentation. The vertex conditions at the internal vertices are arbitrary (and are given in (17.5)).

For a given continuous function $u$, let $\vec{u}^{\,\partial}$ be the vector whose entries are the values of $u$ at the contact vertices $\partial\Gamma$. Similarly, the vector of extended normal derivatives $\partial\vec{u}^{\,\partial}$ will have the coordinates:

$$\partial u(V^m) = \sum_{x_j \in V^m} \partial u(x_j), \quad V^m \in \partial\Gamma.$$

These vectors have dimension $M_\partial = \#\,\partial\Gamma$, the number of vertices in the contact set.

Consider the wave equation on $\Gamma$,

$$\frac{\partial^2}{\partial t^2} u(x,t) + \left(i\frac{\partial}{\partial x} + a(x)\right)^2 u(x,t) + q(x)u(x,t) = 0, \quad x \in \Gamma,\ t > 0, \tag{19.3}$$

with zero initial data

$$\begin{cases} u(x,0) = 0, \\ \frac{\partial}{\partial t} u(x,0) = 0, \end{cases}\tag{19.4}$$

subject to the matching conditions (17.5) at all internal vertices, and to the continuity condition and boundary control

$$
\vec{u}^{\partial}(t) = \vec{f}(t),\tag{19.5}
$$

at the contact vertices.

The **boundary control** $\vec{f}$ is a vector valued function with values in $\mathbb{C}^{M_\partial}$. If the boundary control is not a smooth function, then one has to consider weak solutions to the wave equation. On the other hand, if $\vec{f}$ is at least two times continuously differentiable and both potentials are sufficiently regular,

$$\vec{f} \in C^2(\mathbb{R}_+, \mathbb{C}^{M_\partial}), \quad \vec{f}(0) = \vec{f}\,'(0) = 0, \qquad a \in C^2(\Gamma\setminus\mathbf{V}),\ q \in C(\Gamma\setminus\mathbf{V}), \tag{19.6}$$

then the function $u(x,t)$ satisfies Eq. (19.3) in the classical sense.

The solution to this wave equation will be denoted by $u^f$. We are going to study the properties of this solution in Sect. 19.4. In particular, since the wave equation has a finite speed of propagation, the solution $u^f(x,t)$ will be equal to zero outside the $t$-neighbourhood of the contact set.

The dynamical **response operator R** is then given by the equality

$$\left(\mathbf{R}\vec{f}\right)(t) = \partial\vec{u}^{\,f,\partial}(t). \tag{19.7}$$

The dynamical response operator is a natural generalisation of the Dirichlet-to-Neumann map and therefore is sometimes referred to as the *dynamical Dirichlet-to-Neumann map*, since it connects the Dirichlet and Neumann boundary data for any solution to the wave equation on $\Gamma$. This operator, originally defined on functions satisfying conditions (19.6), can be extended to the set of $L_2^{\mathrm{loc}}$-functions by continuity. The dynamical response operator collects all information that an observer may obtain about the quantum graph via boundary measurements. As we shall immediately see, it is closely related to the graph's M-function and the scattering matrix.

Assume that the boundary control has compact support as a function of time

$$\vec{f} \in C_0^{\infty}\big((0,\infty); \mathbb{C}^{M_\partial}\big). \tag{19.8}$$

Then for sufficiently large $t$ (to the right of the support of $\vec{f}$) the evolution is described by the wave equation on $\Gamma$ with Dirichlet conditions on the contact set, hence the energy is preserved. It follows in particular that the $L_2(\Gamma)$ norm of $u$ is uniformly bounded, and the Laplace transform can be used to solve the wave equation (19.3), yielding

$$\hat{u}(x,s) = \int_0^\infty e^{-st} u(x,t)\,dt, \quad \operatorname{Re} s > 0. \tag{19.9}$$

The function $\hat{u}$ is a solution of the following differential equation,

$$s^2 \hat{u}(x,s) + \left(i\frac{\partial}{\partial x} + a(x)\right)^2 \hat{u}(x,s) + q(x)\hat{u}(x,s) = 0, \quad x \in \Gamma,\ \operatorname{Re} s > 0, \tag{19.10}$$

satisfying the matching conditions (17.5) at all internal vertices, continuous on $\partial\Gamma$ and satisfying the condition

$$\hat{\vec{u}}^{\,\partial}(s) = \hat{\vec{f}}(s) \tag{19.11}$$

on the contact set, where $\hat{\vec{f}}(s)$ is the Laplace transform of $\vec{f}$.

The extended normal derivatives of the function $\hat{u}(x,s)$ on the contact set may be calculated using the graph M-function (17.7). The result is

$$\partial\hat{\vec{u}}^{\,\partial}(s) = \mathbf{M}_\Gamma(-s^2)\,\hat{\vec{u}}^{\,\partial}(s).$$

We have proven the following remarkable formula

$$\widehat{\left(\mathbf{R}\vec{f}\right)}(s) = \mathbf{M}_\Gamma(-s^2)\,\hat{\vec{f}}(s), \tag{19.12}$$

which implies that the M-function and the dynamical response operator are in one-to-one correspondence (this was first noticed in [37]). The connection between the scattering matrices and M-functions was established in the previous chapter (see (18.40)).

In what follows we are going to switch between the three equivalent sets of spectral data: the graph's M-function, the dynamical response operator, and the scattering matrix.


**Problem 85** Calculate the response operator for the Laplacian on the interval [0*,* 1] if the contact set is given by


**Problem 86** Check that formula (19.12) connecting the M-function and the response operator is valid for the interval $\mathbf{I} = [0,1]$ with the contact set $\partial\mathbf{I} = \{0\}$ and the Neumann condition at $x = 1$.
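The following numerical sketch illustrates Problem 86 under two assumptions of ours: the normal derivative is taken as $\partial u/\partial x$ at $x = 0$, as in (19.2), in which case the M-function of the Neumann interval works out to $M(-s^2) = -s\tanh s$, and the response is computed by the method of images (reflection coefficient $+1$ at the Neumann end, $-1$ at the controlled end).

```python
import numpy as np

# Interval I = [0, 1], Dirichlet control at x = 0, Neumann condition at x = 1.
# Method of images gives R f(t) = -f'(t) + 2 * sum_{m>=1} (-1)^(m-1) f'(t - 2m).
# Claim (19.12): the Laplace transform of R f equals M(-s^2) * Laplace(f),
# with M(-s^2) = -s*tanh(s) under the sign convention of (19.2).

def f(tau):                     # smooth control with f(0) = f'(0) = 0
    return np.where(tau > 0, tau**3 * np.exp(-tau), 0.0)

def fp(tau):                    # its derivative
    return np.where(tau > 0, (3*tau**2 - tau**3) * np.exp(-tau), 0.0)

T, dt = 40.0, 1e-3
t = np.arange(0.0, T, dt)
Rf = -fp(t) + 2 * sum((-1)**(m - 1) * fp(t - 2*m) for m in range(1, 21))

for s in (0.5, 1.0):
    laplace_Rf = dt * np.sum(np.exp(-s * t) * Rf)   # Riemann sum; endpoints ~ 0
    f_hat = 6.0 / (s + 1.0)**4                      # Laplace transform of t^3 e^{-t}
    exact = -s * np.tanh(s) * f_hat                 # M(-s^2) * f_hat
    assert abs(laplace_Rf - exact) < 1e-4
print("formula (19.12) verified numerically on the Neumann interval")
```

The check is self-consistent: both sides are computed under the same sign convention for the normal derivative.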

# **19.4 Inverse Problem for the One-Dimensional Schrödinger Equation**

In this section we describe how to use the BC-method to reconstruct the potential in the one-dimensional Schrödinger equation. As we already explained, this reconstruction is local in the sense that knowledge of the response operator for relatively small values of the time parameter $t$ allows one to reconstruct the potential $q$ on a part of the interval $[0,\infty)$ close to the point $x = 0$, where the control is applied. Here we are going to follow [36, 37, 40–42], see also the recent review papers [43, 460], where relations between the BC-method and other local inverse methods are clarified.

The method works under rather weak assumptions on the potential, such as $q \in L_{1,\mathrm{loc}}[0,\infty)$ as described in [460], but we restrict our presentation to continuous potentials in order to make the presentation transparent. In this case all equations are satisfied in the classical sense and there is no need to work with weak solutions. We essentially follow [460] in our presentation of the BC-method.

Consider the wave equation on the interval [0*,*∞*)* with boundary control at the point *x* = 0:

$$\begin{cases} -\frac{\partial^2}{\partial x^2} u(x,t) + q(x)u(x,t) = -\frac{\partial^2}{\partial t^2} u(x,t), \\ u(x,t) = 0, \quad t < x \quad \text{(causality condition)}, \\ u(x,0) = \frac{\partial}{\partial t} u(x,0) = 0, \\ u(0,t) = f(t). \end{cases}\tag{19.13}$$

We will first be interested in the solution of this problem for sufficiently small $t \le T$.

The solution possesses the following representation:

$$u^f(\mathbf{x}, t) = \begin{cases} f(t - \mathbf{x}) + \int\_{\mathbf{x}}^t w(\mathbf{x}, \mathbf{s}) f(t - \mathbf{s}) d\mathbf{s}, & \mathbf{x} \le t, \\ 0, & t \le \mathbf{x}, \end{cases} \tag{19.14}$$

where *w(x, t)* is the unique solution to the Goursat problem (Fig. 19.1)

$$\begin{cases} \left(-\frac{\partial^2}{\partial x^2} + q(x)\right) w = -\frac{\partial^2}{\partial t^2} w, \\ w(0,t) = 0, \\ w(x,x) = -\frac{1}{2}\int_0^x q(y)\,dy. \end{cases}\tag{19.15}$$

Formula (19.14) is nothing other than Duhamel's principle: the solution at any point $(x,t)$ can be written as a linear combination of the boundary controls at $\tau \in [0, t-x]$:

$$u^f(x,t) = f(t-x) + \int_0^{t-x} w(x, t-\tau) f(\tau)\,d\tau, \quad x \le t. \tag{19.16}$$

The boundary control at times $\tau > t - x$ cannot influence the value of the wave function at $(x,t)$ due to the finite propagation speed. The following definition introduces the control operator connecting the boundary control $f$ to the solution of the wave equation.

**Definition 19.2 (Control Operator)** The operator *W<sup>T</sup>* on *L*2*(*0*,T)* defined by

$$\left(W^T f\right)(x) = f(T-x) + \int_x^T w(x,\tau) f(T-\tau)\,d\tau, \tag{19.17}$$

where $w(x,\tau)$ solves the Goursat problem (19.15), is called the **control operator**.

The control operator is invertible and bounded on *L*2*(*0*,T)*, since it can be inverted by solving a Volterra equation of the second kind. The inverse operator solves the Boundary Control problem:

*Given a fixed time $T > 0$ and a function $g \in L_2(0,T)$, find a boundary control $f \in L_2(0,T)$ such that*

$$\left(W^T f\right)(x) = g(x) \text{ for all } x \in (0,T). \tag{19.18}$$
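Numerically, inverting $W^T$ amounts to solving a Volterra equation of the second kind, which discretises to a triangular linear system. The sketch below uses a made-up kernel $w(x,\tau)$ (our choice, not derived from any particular potential) purely to illustrate the round trip $f \mapsto W^T f \mapsto f$.

```python
import numpy as np

# Discretize (W^T f)(x) = f(T - x) + \int_x^T w(x, tau) f(T - tau) dtau.
# Substituting h(x) = f(T - x) turns this into a Volterra equation of the
# second kind for h, whose matrix is the identity plus a triangular part.
T, n = 1.0, 400
x = np.linspace(0.0, T, n)
dx = x[1] - x[0]

w = lambda x_, tau: 0.3 * x_ * np.sin(tau)    # hypothetical Goursat kernel

X, Tau = np.meshgrid(x, x, indexing="ij")
W = np.eye(n) + dx * np.where(Tau >= X, w(X, Tau), 0.0)

f = np.sin(3 * x) * x**2                      # some control function
h = f[::-1]                                   # h(x) = f(T - x)
g = W @ h                                     # forward application, g = W^T f

h_rec = np.linalg.solve(W, g)                 # invert the Volterra operator
f_rec = h_rec[::-1]
assert np.max(np.abs(f_rec - f)) < 1e-10      # round trip recovers the control
```

Because the matrix is a small perturbation of the identity, the system is well conditioned, which is the discrete counterpart of the bounded invertibility of $W^T$.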

The representation (19.14) allows one to calculate the response operator $\mathbf{R}^T$ (already introduced in the previous section by (19.2)),

$$\left(\mathbf{R}^T f\right)(t) = \frac{\partial}{\partial x} u^f(0,t),$$

where $u^f$ is the unique solution to the boundary control problem (19.13). One obtains

$$\left(\mathbf{R}^T f\right)(t) = -f'(t) + \int\_0^t r(t-\tau)f(\tau)d\tau,\tag{19.19}$$

where

$$r(t) = \frac{\partial}{\partial x} w(0, t). \tag{19.20}$$

Here we used the fact that the kernel $w$ is differentiable, being a solution to the Goursat problem with a continuous potential.

**Definition 19.3 (Connecting Operator)** The operator on *L*2*(*0*,T)* defined by

$$\mathbf{C}^T = \mathbf{W}^{T\*} \mathbf{W}^T \tag{19.21}$$

is called the **connecting operator**.

The connecting operator can also be defined using its quadratic form

$$\langle C^T f, g\rangle_{L_2[0,T]} = \langle (W^T)^* W^T f, g\rangle_{L_2[0,T]} = \langle W^T f, W^T g\rangle_{L_2[0,T]} = \langle u^f(\cdot,T), u^g(\cdot,T)\rangle_{L_2[0,T]}. \tag{19.22}$$

Observe that *C<sup>T</sup>* is boundedly invertible and positive, since *W<sup>T</sup>* is boundedly invertible,

$$C^T > 0 \text{ in } L\_2(0, T).$$

In what follows we are going to prove two important properties of the connecting operator forming the core of the BC-method.

**Lemma 19.4** *The connecting operator C<sup>T</sup> admits the following representation:* 

$$\left(C^T f\right)(t) = f(t) + \int\_0^T \left[p(2T - t - s) - p(|t - s|)\right] f(s)ds,\tag{19.23}$$

*where* 

$$p(t) = \frac{1}{2}\int_0^t r(s)\,ds. \tag{19.24}$$

*Proof* To prove this representation it will be convenient to introduce the following rather simple linear operators:

• the operator of odd continuation *S<sup>T</sup>* :

$$\left(\mathcal{S}^T f\right)(t) = \begin{cases} f(t), & 0 \le t \le T, \\\\ -f(2T - t), & T < t \le 2T; \end{cases}$$

• the operator extracting the odd part *Q*2*<sup>T</sup>* :

$$\left(Q_{2T} f\right)(t) = \frac{1}{2}\left[f(t) - f(2T-t)\right];$$

• the operator of restriction *N<sup>T</sup>* :

$$N^T f = f|\_{[0,T]};$$

• the integration operator *J*2*<sup>T</sup>* :

$$\left(J\_{2T}f\right)(t) = \int\_0^t f(s)ds, \quad 0 \le t \le 2T.$$

It is easy to check that

$$\left(\boldsymbol{\mathcal{S}}^{T}\right)^{\*} = 2\boldsymbol{N}^{T}\boldsymbol{\mathcal{Q}}\_{2T}.\tag{19.25}$$

To prove representation (19.23), consider arbitrary functions $f, g \in C_0^\infty[0,T]$. Let $f_- = S^T f$ and set

$$w^{f,g}(s,t) := \int_0^T u^{f_-}(x,s)\,\overline{u^g}(x,t)\,dx. \tag{19.26}$$

Our goal is to calculate this function for *s* = *t* = *T* since formula (19.22) implies

$$w^{f, \mathbf{g}}(T, T) = \langle \mathbf{C}^T f, \mathbf{g} \rangle\_{L\_2[0, T]}.\tag{19.27}$$

To calculate $w^{f,g}(T,T)$ we first show that the function $w^{f,g}$ satisfies an inhomogeneous wave equation and then use d'Alembert's formula to get its solution. Integration by parts gives the following equalities:

$$\begin{aligned} \left[\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial s^2}\right] w^{f,g}(s,t) &= \int_0^T \left[u^{f_-}(x,s)\,\overline{u^g_{tt}}(x,t) - u^{f_-}_{ss}(x,s)\,\overline{u^g}(x,t)\right] dx \\ &= \int_0^T \Big[ u^{f_-}(x,s)\big(\overline{u^g_{xx}}(x,t) - q(x)\overline{u^g}(x,t)\big) \\ &\qquad - \big(u^{f_-}_{xx}(x,s) - q(x) u^{f_-}(x,s)\big)\overline{u^g}(x,t)\Big]\, dx \\ &= \int_0^T \left[u^{f_-}(x,s)\,\overline{u^g_{xx}}(x,t) - u^{f_-}_{xx}(x,s)\,\overline{u^g}(x,t)\right] dx \\ &= \left[u^{f_-}(x,s)\,\overline{u^g_x}(x,t) - u^{f_-}_x(x,s)\,\overline{u^g}(x,t)\right]\Big|_{x=0}^{T} \\ &= -f_-(s)\,\overline{(\mathbf{R}^T g)}(t) + (\mathbf{R}_{2T} f_-)(s)\,\overline{g}(t), \end{aligned}\tag{19.28}$$

where we used that

$$u^{f_-}_{ss} = u^{f_-}_{xx} - q(x)u^{f_-}, \qquad u^g_{tt} = u^g_{xx} - q(x)u^g,$$

and that

$$u^{f_-}(T,s) = u^g(T,t) \equiv 0,$$

since *f, g* ∈ *C*<sup>∞</sup> <sup>0</sup> [0*, T* ]*.*

Summing up, the function $w^{f,g}$ is a solution to the inhomogeneous wave equation

$$w^{f,g}_{tt} - w^{f,g}_{ss} = -f_-(s)\,\overline{(\mathbf{R}^T g)}(t) + (\mathbf{R}_{2T} f_-)(s)\,\overline{g}(t) \tag{19.29}$$

in the region 0 ≤ *s* ≤ 2*T ,* 0 ≤ *t* ≤ *T* with zero initial conditions

$$w^{f,g}(s,0) = w^{f,g}_t(s,0) = 0. \tag{19.30}$$

The solution is given by d'Alembert's formula [182], which we use for *t* = *s* = *T*

$$w^{f,g}(T,T) = -\frac{1}{2}\int_0^T d\eta \int_\eta^{2T-\eta} d\xi\,\left[ f_-(\xi)\,\overline{(\mathbf{R}^T g)}(\eta) - (\mathbf{R}_{2T} f_-)(\xi)\,\overline{g}(\eta)\right]. \tag{19.31}$$

Taking into account that $\int_\eta^{2T-\eta} f_-(\xi)\,d\xi = 0$ ($f_-$ is an odd function with respect to $\xi = T$) we have

$$w^{f, \mathfrak{g}}(T, T) = \frac{1}{2} \int\_0^T d\eta \left( \int\_{\eta}^{2T - \eta} d\xi (\mathbf{R}\_{2T} f\_-)(\xi) \right) \overline{\mathbf{g}}(\eta). \tag{19.32}$$

On the other hand we have

$$\int_\eta^{2T-\eta} (\mathbf{R}_{2T} f_-)(\xi)\,d\xi = \left(J_{2T}\mathbf{R}_{2T} f_-\right)(2T-\eta) - \left(J_{2T}\mathbf{R}_{2T} f_-\right)(\eta) = -2\left(Q_{2T} J_{2T}\mathbf{R}_{2T} f_-\right)(\eta),$$

and expression (19.32) takes the form (using (19.25))

$$\begin{split} w^{f,g}(T,T) &= -\int_0^T \left( N^T Q_{2T} J_{2T}\mathbf{R}_{2T} S^T f\right)(\eta)\,\overline{g}(\eta)\,d\eta \\ &= -\frac{1}{2}\langle (S^T)^* J_{2T}\mathbf{R}_{2T} S^T f, g\rangle_{L_2[0,T]}. \end{split}\tag{19.33}$$

Remembering (19.22) and (19.27) we obtain

$$\boldsymbol{C}^{T} = -\frac{1}{2} (\boldsymbol{S}^{T})^{\*} \boldsymbol{J}\_{2T} \mathbf{R}\_{2T} \boldsymbol{S}^{T},\tag{19.34}$$

since *C*∞ <sup>0</sup> [0*, T* ] is dense in *L*2[0*, T* ]*.*

Using representation (19.19) one obtains formula (19.23) for the connecting operator.

**Problem 87** Using change of variables check that (19.34) implies representation (19.23) for the connecting operator.
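A quick consistency check (our remark, not in the original argument): for the zero potential the Goursat problem (19.15) has only the trivial solution, and the representation (19.23) collapses to the identity,

$$q \equiv 0 \;\Longrightarrow\; w \equiv 0 \;\Longrightarrow\; r \equiv 0,\ p \equiv 0 \;\Longrightarrow\; \left(C^T f\right)(t) = f(t),$$

in agreement with the fact that for $q \equiv 0$ the control operator $(W^T f)(x) = f(T-x)$ is a unitary reflection on $L_2(0,T)$, so $C^T = (W^T)^* W^T = I$.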

Let *y* be the solution of the Sturm-Liouville problem

$$\begin{cases} -\mathbf{y}''(\mathbf{x}) + q(\mathbf{x})\mathbf{y}(\mathbf{x}) = \mathbf{0}, \\\\ \mathbf{y}(\mathbf{0}) = \mathbf{0}, \ \mathbf{y}'(\mathbf{0}) = \mathbf{1}. \end{cases} \tag{19.35}$$

Consider the boundary control problem (19.18): find the control function *f <sup>T</sup>* such that

$$\left(W^{T}f^{T}\right)(\mathbf{x}) = \begin{cases} \mathbf{y}(\mathbf{x}), \; \mathbf{x} \le T, \\\\ \mathbf{0}, \; \quad \mathbf{x} > T. \end{cases} \tag{19.36}$$

One may prove that on such a boundary control the connecting operator acts in a very simple way.

**Lemma 19.5** *Let $f^T$ be the boundary control leading to the solution of the Sturm–Liouville problem $y$ for $x < T$, defined by formula (19.36). Then the connecting operator $C^T$ maps $f^T$ to the linear function $T - x$:*

$$\left(\mathbb{C}^{T}f^{T}\right)(\mathbf{x}) = T - \mathbf{x}, \quad \mathbf{x} \in [0, T]. \tag{19.37}$$

*Proof* Consider an arbitrary function $g \in C_0^\infty(0,T)$, that is, a smooth function with compact support inside $(0,T)$. In our calculations we are going to use that the wave equation has finite speed of propagation and therefore

$$u^g(x,t) = 0 \text{ and } \frac{\partial}{\partial t} u^g(x,t) = 0 \text{ provided } x > t.$$

In particular we have:

$$u^g(T,t) = 0 \text{ and } u^g_x(T,t) = 0, \quad t \le T.$$


Then we may perform the following calculations, mostly using integration by parts

$$\begin{split} \langle C^T f^T, g\rangle_{L_2(0,T)} &= \langle W^T f^T, W^T g\rangle_{L_2(0,T)} \\ &= \int_0^T y(x)\, u^g(x,T)\,dx \\ &= \int_0^T \left(\int_0^T y(x)\, u^g_t(x,t)\,dx\right) dt + \int_0^T y(x)\underbrace{u^g(x,0)}_{=0}\,dx \\ &= -(T-t)\int_0^T y(x)\, u^g_t(x,t)\,dx\,\Big|_{t=0}^{T} + \int_0^T (T-t)\left(\int_0^T y(x)\, u^g_{tt}(x,t)\,dx\right) dt \\ &= T\int_0^T y(x)\underbrace{u^g_t(x,0)}_{=0}\,dx + \int_0^T (T-t)\left(\int_0^T y(x)\, u^g_{tt}(x,t)\,dx\right) dt \\ &= \int_0^T (T-t)\left(\int_0^T y(x)\big(u^g_{xx}(x,t) - q(x)u^g(x,t)\big)\,dx\right) dt \\ &= \int_0^T (T-t)\left(y(x)u^g_x(x,t) - y'(x)u^g(x,t)\right)\Big|_{x=0}^{T}\,dt \\ &= \int_0^T (T-t)\,y(T)\underbrace{u^g_x(T,t)}_{\equiv 0}\,dt - \int_0^T (T-t)\underbrace{y(0)}_{=0}\,u^g_x(0,t)\,dt \\ &\quad - \int_0^T (T-t)\,y'(T)\underbrace{u^g(T,t)}_{\equiv 0}\,dt + \int_0^T (T-t)\underbrace{y'(0)}_{=1}\,u^g(0,t)\,dt \\ &= \int_0^T (T-t)\,g(t)\,dt. \end{split}\tag{19.38}$$

Here we used that $u^g$ satisfies the wave equation (19.13) and $y$ the Sturm–Liouville equation (19.35). Since the function $g$ is arbitrary, we get the operator equality (19.37).

**Problem 88** Consider formula (19.38) and check all steps.

We are finally ready to describe the solution of the inverse problem using the BC-method.

#### **Algorithm to solve the inverse problem using BC-method**

(1) **Reconstruct the kernel** $r(\tau)$, $0 < \tau < 2T$, **of the response operator** $\mathbf{R}_{2T}$, assuming that it is given by (19.19):

$$\left(\mathbf{R}\_{2T}f\right)(t) = -f'(t) + \int\_0^t r(t-\tau)f(\tau)d\tau.$$

(2) **Calculate the connecting operator** using formula (19.23)

$$\left(\mathbb{C}^T f\right)(t) = f(t) + \int\_0^T \left[p(2T - t - s) - p(|t - s|)\right] f(s)ds,$$

where

$$p(t) = \frac{1}{2} \int\_0^t r(s)ds, \quad t \in [0, 2T],$$

is determined by the kernel of the response operator.

(3) **Invert the connecting operator**, i.e. solve Eq. (19.37)

$$\left(\mathbb{C}^T f^T\right)(\mathbf{x}) = T - \mathbf{x}, \quad \mathbf{x} \in [0, T]$$

to find the boundary control function $f^T(x)$ leading to the solution $y(x)$ on the interval $x \in [0,T]$ as a result of the boundary control. Note that the boundary control function $f^T(\cdot)$ depends on $T$, i.e. to get the linear function $T - x$ as the result of applying the connecting operator, different boundary controls depending on $T$ should be used.

(4) **Calculate the solution** *y* using its relation with the control function *f <sup>T</sup>* via (19.17)

$$\left(W^T f^T\right)(x) = f^T(T-x) + \int_x^T w(x,\tau) f^T(T-\tau)\,d\tau,$$

leading to

$$\mathbf{y}(T) = \left(W^T f^T\right)(T - 0) = f^T(T - T + 0) = f^T(+0).$$

The value of $y(T)$ is precisely the initial value of the boundary control function $f^T$ that has to be applied to get $y$ on the interval $[0,T]$.

(5) **Calculate the potential** *q* using that *y* is a solution to the Schrödinger equation

$$q(T) = \frac{\mathbf{y}^{\prime\prime}(T)}{\mathbf{y}(T)} = \frac{\frac{d^2}{dT^2} f^T(+0)}{f^T(+0)}.\tag{19.39}$$

Note that in formula (19.39) one needs to take the limit first and only then differentiate the control function $f^T(+0)$ with respect to $T$.
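Steps (2)–(5) above can be sketched numerically, under our own simplifying assumptions: the kernel $r$ is taken as given, the connecting operator is discretised by a Riemann sum, and everything is tested on the trivial case $q \equiv 0$, where $r \equiv 0$, so $f^T(x) = T - x$ and the reconstructed potential must vanish.

```python
import numpy as np

def f_T_at_zero(r, T, n=400):
    """Steps (2)-(3): build C^T from the kernel r via (19.23)-(19.24),
    solve (C^T f)(x) = T - x, and return f^T(+0), which equals y(T)."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    # p(t) = 1/2 * integral_0^t r(s) ds, tabulated on [0, 2T]
    s = np.linspace(0.0, 2.0 * T, 2 * n)
    rs = r(s)
    p_tab = 0.5 * np.concatenate(([0.0],
                np.cumsum(0.5 * (rs[1:] + rs[:-1]) * (s[1] - s[0]))))
    p = lambda u: np.interp(u, s, p_tab)
    # connecting operator (19.23) as a matrix
    Ti, Sj = np.meshgrid(t, t, indexing="ij")
    C = np.eye(n) + dt * (p(2.0 * T - Ti - Sj) - p(np.abs(Ti - Sj)))
    f = np.linalg.solve(C, T - t)    # step (3): invert the connecting operator
    return f[0]                      # step (4): y(T) = f^T(+0)

# Step (5) on the trivial case q = 0, where r vanishes identically:
r0 = lambda u: np.zeros_like(u)
h = 0.01
yT = [f_T_at_zero(r0, T) for T in (1.0 - h, 1.0, 1.0 + h)]
q_est = (yT[0] - 2.0 * yT[1] + yT[2]) / h**2 / yT[1]    # formula (19.39)
assert abs(yT[1] - 1.0) < 1e-12     # for q = 0: y(T) = f^T(+0) = T
assert abs(q_est) < 1e-8            # reconstructed potential vanishes
```

For a nonzero potential one would feed in the true kernel $r$ from (19.20); the grid sizes here are arbitrary choices.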

We summarise our studies as

**Theorem 19.6** *The response operator $\mathbf{R}^T$ for the Schrödinger differential expression $-\frac{d^2}{dx^2} + q(x)$ on $[0,\infty)$ with locally integrable potential $q$ determines the potential uniquely on the interval $[0, T/2]$.*

The advantage of the described method is that the solution of the inverse problem is essentially reduced to the reconstruction of the kernel of an integral operator and the inversion of another integral operator. The rest is just integration and differentiation. The nature of this method is local: to recover the potential close to $x = 0$ one needs to know the response operator $\mathbf{R}^T$ only for small values of $T$.

An alternative approach to inverse spectral problems in one dimension, based on the $A$-amplitude, was developed by B. Simon and collaborators [238, 441, 443, 472]. This approach rests on the analysis of the M-function. The two approaches have already been compared in [43, 460].

# **19.5 BC-Method for the Standard Laplacian on the Star Graph**

In this section we study the boundary response operator for the Laplacian on an arbitrary equilateral star graph with standard vertex conditions at the middle vertex. The case of the Laplacian (zero potential) is important since, as we shall see later on, the

singularities in the kernel of the response operator for the Schrödinger evolution are determined precisely by the response operator for the Laplacian. Let us denote by $\ell$ the (common) length of the edges and by $N$ the number of edges. Every edge is glued by one endpoint to the central vertex and the opposite endpoints belong to the contact set. It will be convenient to identify functions $u$ from $L_2(\Gamma)$ with the vector-valued functions $\vec{u} \in L_2([0,\ell], \mathbb{C}^N)$, so that the central vertex corresponds to $x = 0$.

To calculate the dynamical response operator we need to find the unique solution to the wave equation

$$\frac{\partial^2}{\partial t^2}\vec{u}(x,t) - \frac{\partial^2}{\partial x^2}\vec{u}(x,t) = 0, \quad x \in (0,\ell),\ t \in (0,T), \tag{19.40}$$

satisfying standard vertex conditions at the origin, subject to the boundary control

$$\vec{u}(\ell,t) = \vec{f}(t), \tag{19.41}$$

and with zero initial data (19.4)

$$\begin{cases} \vec{u}(x,0) = 0, \\ \frac{\partial}{\partial t}\vec{u}(x,0) = 0. \end{cases}\tag{19.42}$$

It is clear that solutions to the differential equation can be written as a combination of d'Alembert waves

$$\vec{u}(x,t) \equiv \vec{u}^{\vec{f}}(x,t) = \vec{b}(t+x) + \vec{a}(t-x), \tag{19.43}$$

where $\vec{b}$ and $\vec{a}$ denote, respectively, the waves going toward the central vertex and coming from it. The boundary control initiates waves on the edges $E_n$, $n = 1, 2, \dots, N$, which reach the central vertex at the time $t = \ell$. Therefore for sufficiently small $t$ ($t < \ell$) the solution is given by just one travelling wave

$$\vec{u}(x,t) = \vec{f}(t+x-\ell), \quad t < \ell. \tag{19.44}$$

The argument is chosen in a special way in order to satisfy the boundary control (19.41):

$$\vec{u}(\ell,t) = \vec{f}(t+\ell-\ell) = \vec{f}(t), \quad t < \ell.$$

For such relatively small values of $t$ the value of $\vec{u}$ on each edge $E_n$ is determined by the corresponding component $(\vec{f}\,)_n$ of the boundary control function.

For $t$ slightly larger than $\ell$ (more precisely, for $\ell < t < 2\ell$) the solution on $E_n$, in addition to the wave initiated by $(\vec{f}\,)_n$, contains a wave going away from the central vertex

$$
\vec{u}(\mathbf{x},t) = \vec{f}(t+\mathbf{x}-\ell) + \vec{a}(t-\mathbf{x}).\tag{19.45}
$$

The incoming wave remains the same, since no wave coming from the central vertex may turn back inside the edge, and the time $t < 2\ell$ is not enough to reach the central vertex and return to any of the boundary vertices. It turns out that the outgoing wave $\vec{a}$ can be taken equal to $S^{\mathrm{st}}_{\mathbf{v}}\vec{f}(t-x-\ell)$, leading to the solution

$$\vec{u}(x,t) = \vec{f}(t+x-\ell) + S^{\mathrm{st}}_{\mathbf{v}}\vec{f}(t-x-\ell), \quad \ell < t < 2\ell.$$

Here $S^{\mathrm{st}}_{\mathbf{v}}$ is the vertex scattering matrix corresponding to the standard conditions (3.41). The intuition behind this formula should be rather clear: the waves coming to the central vertex penetrate into the other edges with amplitudes equal to the entries of the vertex scattering matrix $S^{\mathrm{st}}_{\mathbf{v}}$. Let us check directly from the definition that this formula gives the correct solution. First of all, any combination of d'Alembert waves gives a solution to the wave equation. The boundary control (19.41) is satisfied since the second term in the solution is identically equal to zero:

$$\vec{f}(t-x-\ell)|\_{\mathbf{x}=\ell} = \vec{f}(\underbrace{t-2\ell}\_{<\mathbf{0}}) \equiv \vec{0}, \quad t < 2\ell.$$

It remains to check that standard conditions are satisfied at the central vertex. The matrix *S*st **<sup>v</sup>** possesses the representation

$$S\_\mathbf{v}^{\rm st} = -\mathbf{I} + 2P\_{(1,1,\ldots,1)},\tag{19.46}$$

where *P(*1*,*1*,...,*1*)* is the orthogonal projector on the vector *(*1*,* <sup>1</sup>*,...,* <sup>1</sup>*)* <sup>∈</sup> <sup>C</sup>*<sup>N</sup> .* We get

$$
\vec{u}(0,t) = \vec{f}(t-\ell) + S\_\mathbf{v}^\mathrm{st}\,\vec{f}(t-\ell) = 2P\_{(1,1,...,1)}\vec{f}(t-\ell),
$$

implying that all coordinates of the vector $\vec{u}(0,t)$ are equal. To check the Kirchhoff condition on the normal derivatives we calculate

$$\begin{split} \partial\vec{u}(0,t) &= \frac{\partial}{\partial x}\Big(\vec{f}(t+x-\ell) + S^{\mathrm{st}}_{\mathbf{v}}\vec{f}(t-x-\ell)\Big)\Big|_{x=0} \\ &= \big(\mathbf{I} - S^{\mathrm{st}}_{\mathbf{v}}\big)\vec{f}\,'(t-\ell) \\ &= 2\big(\mathbf{I} - P_{(1,1,\dots,1)}\big)\vec{f}\,'(t-\ell) = 2P^{\perp}_{(1,1,\dots,1)}\vec{f}\,'(t-\ell), \end{split}$$

where $P^{\perp}_{(1,1,\dots,1)} = \mathbf{I} - P_{(1,1,\dots,1)}$ is the projector on the subspace orthogonal to $(1,1,\dots,1)$ in $\mathbb{C}^N$. It follows that the sum of the normal derivatives is zero.
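The two vertex conditions can be confirmed numerically; the following sketch (with $N = 3$ and an arbitrary sample vector standing in for $\vec{f}(t-\ell)$, both our choices) verifies that $(\mathbf{I} + S^{\mathrm{st}}_{\mathbf{v}})v$ has equal components, that the components of $(\mathbf{I} - S^{\mathrm{st}}_{\mathbf{v}})v$ sum to zero, and that $(S^{\mathrm{st}}_{\mathbf{v}})^2 = \mathbf{I}$.

```python
import numpy as np

N = 3                                   # number of edges (sample choice)
P = np.full((N, N), 1.0 / N)            # orthogonal projector on (1, 1, ..., 1)
S = -np.eye(N) + 2.0 * P                # standard vertex scattering matrix (19.46)

v = np.array([0.3, -1.2, 2.5])          # stands in for the vector f(t - l)

cont = (np.eye(N) + S) @ v              # u(0, t) up to the common argument
assert np.allclose(cont, cont[0])       # continuity: all components equal

kirch = (np.eye(N) - S) @ v             # normal derivatives up to f'(t - l)
assert abs(kirch.sum()) < 1e-12         # Kirchhoff: sum of derivatives vanishes

assert np.allclose(S @ S, np.eye(N))    # (S_v^st)^2 = I
```

The last identity is exactly what simplifies the general formula for the solution below.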

On the next interval $2\ell < t < 3\ell$ the waves initiated by the boundary control for $0 < t < \ell$ have enough time not only to reach the central vertex, but also to return and reflect from the contact vertices. The solution is given by

$$\vec{u}(x,t) = \vec{f}(t+x-\ell) + S_\mathrm{v}^{\mathrm{st}}\vec{f}(t-x-\ell) - S_\mathrm{v}^{\mathrm{st}}\vec{f}(t+x-3\ell), \quad 2\ell < t < 3\ell. \tag{19.47}$$

To check that this formula is correct there is no need to reconsider the central vertex, since the third term is identically equal to zero there:

$$\vec{f}(t+0-3\ell) \equiv \vec{0}, \quad t < 3\ell.$$

On the other hand, the second and third terms cancel each other on the boundary:

$$S_\mathrm{v}^{\mathrm{st}}\,\vec{f}(t-\ell-\ell) - S_\mathrm{v}^{\mathrm{st}}\,\vec{f}(t+\ell-3\ell) \equiv 0.$$

In fact formula (19.47) can be applied for any $0 < t < 3\ell$, since the third term is identically equal to zero for $t < 2\ell$ and the second term for $t < \ell.$

Continuing this procedure it is straightforward to obtain the solution to the boundary control problem for arbitrary $t > 0$:

$$\vec{u}(x,t) = \begin{cases}
\displaystyle\sum_{m=0}^{n-1} (-1)^m \Big( (S_\mathrm{v}^{\mathrm{st}})^m \vec{f}(t+x-(2m+1)\ell) + (S_\mathrm{v}^{\mathrm{st}})^{m+1} \vec{f}(t-x-(2m+1)\ell) \Big) \\
\displaystyle\qquad + (-1)^n (S_\mathrm{v}^{\mathrm{st}})^n \vec{f}(t+x-(2n+1)\ell), & 2n\ell < t < (2n+1)\ell; \\[2ex]
\displaystyle\sum_{m=0}^{n} (-1)^m \Big( (S_\mathrm{v}^{\mathrm{st}})^m \vec{f}(t+x-(2m+1)\ell) + (S_\mathrm{v}^{\mathrm{st}})^{m+1} \vec{f}(t-x-(2m+1)\ell) \Big), \\
& (2n+1)\ell < t < (2n+2)\ell.
\end{cases} \tag{19.48}$$

Note that the formula may be simplified taking into account that $(S_\mathrm{v}^{\mathrm{st}})^2 = \mathbf{I}$, which eliminates all even powers of the vertex scattering matrix and substitutes the odd powers with just $S_\mathrm{v}^{\mathrm{st}}.$
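As a sanity check, formula (19.48) can be evaluated numerically and compared with the boundary control at $x = \ell$. The sketch below (a smooth control supported on $t > 0$ and the parameters $N = 3$, $\ell = 1$ are illustrative choices) implements the two cases of (19.48) term by term:

```python
import numpy as np

N, ell = 3, 1.0
P = np.full((N, N), 1.0 / N)
S = -np.eye(N) + 2 * P                       # S_v^st via (19.46)

def f(t):
    # smooth control vanishing for t <= 0 (illustrative choice)
    return np.where(t > 0, np.sin(t) ** 2, 0.0) * np.ones(N)

def u(x, t):
    # formula (19.48); n indexes the interval containing t
    n = int(t // (2 * ell))
    Sm = lambda m: np.linalg.matrix_power(S, m)
    terms = sum((-1) ** m * (Sm(m) @ f(t + x - (2 * m + 1) * ell)
                             + Sm(m + 1) @ f(t - x - (2 * m + 1) * ell))
                for m in range(n))
    if t < (2 * n + 1) * ell:                # first case of (19.48)
        terms = terms + (-1) ** n * Sm(n) @ f(t + x - (2 * n + 1) * ell)
    else:                                    # second case: include m = n fully
        terms = terms + (-1) ** n * (Sm(n) @ f(t + x - (2 * n + 1) * ell)
                                     + Sm(n + 1) @ f(t - x - (2 * n + 1) * ell))
    return terms

for t in [0.3, 1.7, 2.4, 3.6, 5.1]:
    assert np.allclose(u(ell, t), f(t))      # boundary control: u(ell, t) = f(t)
```

The telescoping of consecutive terms at $x = \ell$ is exactly the cancellation used in the text for (19.47).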

Let us now calculate the dynamical response operator for $T \in (0, 3\ell)$ using formula (19.47):

$$\begin{split}
\left(\mathbf{R}^T\vec{f}\right)(t) &= \partial_n \vec{u}(\ell,t) = -\frac{\partial}{\partial x}\vec{u}(x,t)\Big|_{x=\ell} \\
&= -\frac{d}{dt}\vec{f}(t) + 2S_\mathrm{v}^{\mathrm{st}}\frac{d}{dt}\vec{f}(t-2\ell) \\
&= -\vec{f}\,'(t) + 2S_\mathrm{v}^{\mathrm{st}}\vec{f}\,'(t-2\ell).
\end{split} \tag{19.49}$$

The response operator can be seen as a convolution operator with the generalised kernel

$$-\delta'(t) + 2S\_\mathbf{v}^{\mathrm{st}}\delta'(t-2\ell). \tag{19.50}$$

We see that the kernel of the response operator is singular, and the singularities occur at the time delays corresponding to the time needed for the wave to travel from the contact set to the central vertex and back. It is important to note that the second and third terms in the solution determine the same singularity in the kernel, hence the coefficient 2 in the formula. It follows that the response operator determines the distance to the nearest vertex in the case of standard conditions.
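This last observation can be illustrated numerically: sampling the scalar analogue of (19.49), $-f'(t) + 2s f'(t-2\ell)$, for a smooth control and locating the first instant where it deviates from $-f'(t)$ recovers the delay $2\ell$, and hence the distance to the nearest vertex. The reflection coefficient $s$ and the control below are illustrative choices:

```python
import numpy as np

ell = 0.75                                   # distance to the nearest vertex
s = -0.5                                     # illustrative reflection coefficient
t = np.linspace(0.0, 3.0, 3001)
f = np.where(t > 0, np.sin(t) ** 2, 0.0)     # smooth control vanishing for t <= 0
fp = np.gradient(f, t)

fp_delayed = np.interp(t - 2 * ell, t, fp, left=0.0)
Rf = -fp + 2 * s * fp_delayed                # scalar analogue of (19.49)

residual = Rf + fp                           # removes the instantaneous part -f'(t)
delay = t[np.argmax(np.abs(residual) > 1e-6)]
assert abs(delay - 2 * ell) < 0.01           # the delay recovers 2*ell
```
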

# **19.6 BC-Method for the Laplacian on the Star Graph with General Vertex Conditions**

Our goal now is to obtain an explicit formula for the solution of the boundary control problem on the star graph assuming the most general vertex conditions at the central vertex. We again use vertex notations $\vec{u} \in L^2([0,\ell], \mathbb{C}^N)$, implying that the vertex conditions (3.21) can be written as

$$i(S-\mathbf{I})\,\vec{u}(0) = (S+\mathbf{I})\,\vec{u}\,'(0). \tag{19.51}$$

To calculate the dynamical response operator we need to find the unique solution to the wave equation (19.40) satisfying the vertex conditions (19.51), subject to the boundary control (19.41) and zero initial data (19.42). The same ideas as before can be applied. Hence we start by calculating the solution for $0 < t < 2\ell$:

$$\vec{u}(x,t) = \vec{f}(t+x-\ell) + \vec{a}(t-x). \tag{19.52}$$

Our immediate aim is to determine the function $\vec{a}$ from the vertex conditions (19.51). It will be more convenient to write these conditions using Hermitian matrices, as was done in Sect. 3.4:

$$\begin{cases} P_{-1}\vec{u}(0) = 0, \\[1ex] (\mathbf{I}-P_{-1})\vec{u}\,'(0) = A(\mathbf{I}-P_{-1})\vec{u}(0), \end{cases} \tag{19.53}$$

where $P_{-1}$ is the projection on the eigensubspace of $S$ corresponding to the eigenvalue $-1$ and

$$A = (\mathbf{I}-P_{-1})\, i\,\frac{S-\mathbf{I}}{S+\mathbf{I}}\,(\mathbf{I}-P_{-1}).$$
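The structure of $A$ is transparent in the eigenbasis of $S$: on an eigenvector with eigenvalue $e^{i\theta} \neq -1$ the expression $i(S-\mathbf{I})/(S+\mathbf{I})$ acts as multiplication by $-\tan(\theta/2)$, so $A$ is Hermitian. A small numerical sketch (the eigenphases and the random unitary are illustrative) confirms this and checks that conditions (19.53) are equivalent to (19.51):

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.array([np.pi, 0.4, 1.3, -0.9])        # e^{i*pi} = -1 is present
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
S = Q @ np.diag(np.exp(1j * thetas)) @ Q.conj().T # unitary with chosen spectrum

P1 = np.outer(Q[:, 0], Q[:, 0].conj())            # projector P_{-1}
# A acts as -tan(theta/2) on the remaining eigenvectors of S
A = sum(-np.tan(th / 2) * np.outer(Q[:, j], Q[:, j].conj())
        for j, th in enumerate(thetas) if j > 0)

assert np.allclose(A, A.conj().T)                 # A is Hermitian

I = np.eye(4)
u0 = (I - P1) @ rng.normal(size=4)                # P_{-1} u(0) = 0
du0 = A @ u0 + P1 @ rng.normal(size=4)            # (I - P_{-1}) u'(0) = A u(0)
# conditions (19.53) imply the original form (19.51):
assert np.allclose(1j * (S - I) @ u0, (S + I) @ du0)
```
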

The boundary values at the origin of the solution given by (19.52) are

$$\begin{aligned} \vec{u}(0,t) &= \vec{f}(t-\ell) + \vec{a}(t), \\ \vec{u}\,'_x(0,t) &= \vec{f}\,'(t-\ell) - \vec{a}\,'(t). \end{aligned} \tag{19.54}$$

Substitution into vertex conditions (19.53) yields

$$\begin{cases} P_{-1}\left(\vec{f}(t-\ell) + \vec{a}(t)\right) = 0, \\[1ex] (\mathbf{I}-P_{-1})\left(\vec{f}\,'(t-\ell) - \vec{a}\,'(t)\right) = A(\mathbf{I}-P_{-1})\left(\vec{f}(t-\ell) + \vec{a}(t)\right). \end{cases} \tag{19.55}$$

These equations can easily be solved. The only difficulty is that the signs in front of $\vec{a}$ on the two sides of the second equation are different. Here are the explicit solutions:

$$\begin{aligned}
P_{-1}\vec{a}(t) &= -P_{-1}\vec{f}(t-\ell), \\
(\mathbf{I}-P_{-1})\vec{a}(t) &= (\mathbf{I}-P_{-1})\vec{f}(t-\ell) - 2Ae^{-At}\int_{\ell}^{t} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau-\ell)\,d\tau, \\
\Rightarrow \quad \vec{a}(t) &= \underbrace{(\mathbf{I}-2P_{-1})}_{=S_\mathrm{v}(\infty)}\vec{f}(t-\ell) - 2Ae^{-At}\int_{\ell}^{t} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau-\ell)\,d\tau,
\end{aligned} \tag{19.56}$$

where we used notation (3.31) for the high energy limit $S_\mathrm{v}(\infty)$ of the vertex scattering matrix $S_\mathrm{v}(k)$ given by (3.20). The integral from $\ell$ to $t$ should be interpreted as being equal to zero whenever $t \le \ell.$

Having calculated $\vec{a}$ we obtain the solution to the wave equation satisfying the vertex conditions for $t < 2\ell$:

$$\begin{split} \vec{u}(x,t) &= \vec{f}(t+x-\ell) + S_\mathrm{v}(\infty)\vec{f}(t-x-\ell) \\ &\quad - 2Ae^{-A(t-x-\ell)}\int_0^{t-x-\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau. \end{split} \tag{19.57}$$

Note that we decided to change the argument in the convolution integral so that the solution $\vec{u}(x,t)$ is given as a linear combination of $\vec{f}(\tau),\ 0 \le \tau \le t-x-\ell.$ Here $x+\ell$ is precisely the delay time needed for a wave to travel from the contact point ($x=\ell$) to the central vertex (the point $x=0$) and back to the point $x.$

It should be clear to the reader that our next step is to obtain an explicit formula for the solution of the wave equation for $t \in (2\ell, 3\ell).$ The central vertex can be treated in the same way as before, and we obtain precisely the same formula for the wave $\vec{a}.$ The only difference is that we have to take into account the reflection of this wave from the contact set $x = \ell.$ Reflection due to the Dirichlet condition results in multiplication by $-1.$ It is easy to check that the following function satisfies not only the vertex conditions at the central vertex, but also the boundary control for $t \in (2\ell, 3\ell)$:

$$\begin{split} \vec{u}(x,t) &= \vec{f}(t+x-\ell) \\ &\quad + S_\mathrm{v}(\infty)\vec{f}(t-x-\ell) \\ &\quad - 2Ae^{-A(t-x-\ell)}\int_0^{t-x-\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau \\ &\quad - S_\mathrm{v}(\infty)\vec{f}(t+x-3\ell) \\ &\quad + 2Ae^{-A(t+x-3\ell)}\int_0^{t+x-3\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau. \end{split} \tag{19.58}$$

As before, the formula can be used for any $t < 3\ell$ since the second and third terms vanish for $t < \ell$, while the fourth and fifth terms vanish for $t < 2\ell.$

To check that this function satisfies the boundary control consider

$$\begin{aligned} \vec{u}(\ell,t) &= \vec{f}(t) + S_\mathrm{v}(\infty)\vec{f}(t-2\ell) - 2Ae^{-A(t-2\ell)}\int_0^{t-2\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau \\ &\quad - S_\mathrm{v}(\infty)\vec{f}(t-2\ell) + 2Ae^{-A(t-2\ell)}\int_0^{t-2\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau \\ &= \vec{f}(t). \end{aligned}$$

Checking the vertex condition at $x = 0$ one should just take into account that the last two terms in (19.58) vanish at $x = 0$ for $t < 3\ell$:

$$\begin{aligned}
&\left. -S_\mathrm{v}(\infty)\vec{f}(t+x-3\ell) + 2Ae^{-A(t+x-3\ell)}\int_0^{t+x-3\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau \right|_{x=0,\ t<3\ell} \\
&= \left. -S_\mathrm{v}(\infty)\vec{f}(t-3\ell) + 2Ae^{-A(t-3\ell)}\int_0^{t-3\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau \right|_{t<3\ell} \equiv 0,
\end{aligned}$$

implying that all formulas are identical to those just considered in the case $0 < t < 2\ell.$

It is clear that the process can be continued further, adding more and more waves, as we have already done for the standard Laplacian. For any finite $T$ the formula for the solution for $t < T$ will contain a finite number of terms. For the solution of the inverse problem it will be enough to have $T = 3\ell.$ Let us analyse the obtained solution (19.58), given by five terms: the incoming control wave, the wave reflected from the central vertex together with its convolution tail, and their reflections from the contact set.


The dynamical response operator for $T \in (0, 3\ell)$ is given by

$$\begin{split}
\left(\mathbf{R}^T\vec{f}\right)(t) &= \partial_n \vec{u}(\ell,t) = -\frac{\partial}{\partial x}\vec{u}(x,t)\Big|_{x=\ell} \\
&= -\vec{f}\,'(t) + 2S_\mathrm{v}(\infty)\vec{f}\,'(t-2\ell) - 4A(\mathbf{I}-P_{-1})\vec{f}(t-2\ell) \\
&\quad + 4A^2 e^{-A(t-2\ell)}\int_0^{t-2\ell} e^{A\tau}(\mathbf{I}-P_{-1})\vec{f}(\tau)\,d\tau,
\end{split} \tag{19.59}$$

where the third term appears as a result of differentiating the integral. The generalised kernel $r(t-\tau)$ of the response operator is

$$\begin{aligned} r(t) = -\delta'(t) &+ 2S_\mathrm{v}(\infty)\delta'(t-2\ell) - 4A(\mathbf{I}-P_{-1})\delta(t-2\ell) \\ &+ 4A^2 e^{-A(t-2\ell)}(\mathbf{I}-P_{-1})\theta(t-2\ell), \end{aligned} \tag{19.60}$$

where $\theta$ is the Heaviside function. The term $4A^2 e^{-A(t-2\ell)}(\mathbf{I}-P_{-1})\theta(t-2\ell)$ is locally $L^2$, therefore we have the following classification of the singularities in the kernel:

- the instantaneous $\delta'$-singularity $-\delta'(t)$;
- the delayed $\delta'$-singularity $2S_\mathrm{v}(\infty)\delta'(t-2\ell)$;
- the delayed $\delta$-singularity $-4A(\mathbf{I}-P_{-1})\delta(t-2\ell)$.

In addition there is an integral operator with the bounded kernel $4A^2 e^{-A(t-2\ell)}(\mathbf{I}-P_{-1})\theta(t-2\ell).$

These properties of the dynamical response operator will be very important for our future analysis, especially for the solution of the inverse problem. Let us remember that the kernel of the dynamical response operator contains the delayed $\delta$- and $\delta'$-singularities, of course provided $P_{-1} \neq \mathbf{I}.$ The delay is equal to the time needed for the wave to travel from the contact vertex to the central vertex and back. It follows that, in general, examining the response operator we may determine the distance to the nearest vertex; it is not really important that the graph we consider is a star graph. The only possible obstacle is that the reflection coefficient from the nearest vertex could be identically zero.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 20 Inverse Problems for Trees**

This chapter is devoted to the solution of the inverse problem for Schrödinger operators on metric trees. The problem has been studied in [68, 69, 73, 109, 509, 510] (see also later references by the authors). We follow the approach developed in [37, 44], based on the BC-method for the one-dimensional Schrödinger equation.

For trees it is customary to consider as the contact set the set of all degree one vertices

$$\left\{ V^m \colon \deg V^m = 1 \right\} \tag{20.1}$$

—the graph's boundary. The magnetic potential on any metric tree can be eliminated, hence the set of spectral data will consist of the two equivalent sets:

- the Titchmarsh-Weyl M-function $\mathbf{M}(\lambda)$ associated with the contact set;
- the dynamical response operator $\mathbf{R}^T$ associated with the contact set.
The inverse problem for trees can be solved by a **leaf-peeling procedure** where the metric tree, the potential, and the vertex conditions are recovered step by step, starting from a region situated close to the boundary (Fig. 20.1). In this way the inverse problem is reduced to the inverse problem on a smaller tree, and the whole operator is reconstructed step by step.

By using the two different sets of spectral data (the M-function and the dynamical response operator) we are going to make our presentation more transparent and avoid difficulties which appear if just one set is used. For example the dynamical response operator is more suitable for the reconstruction of the potential on the **pendant edges** having as one of their endpoints a degree one contact vertex, while the reduction to a smaller tree is easier if the Titchmarsh-Weyl M-functions are used.

**Fig. 20.1** A metric tree

## **20.1 Obvious Ambiguities and Limitations**

Before solving the inverse problem let us discuss necessary assumptions and possible ambiguities.

**Observation 20.1** The magnetic potential in the Schrödinger equation on a tree can be removed, leading to a certain similarity transformation of the M-function. Let us choose any point $x_0 \in \mathbf{T}$ and consider the unitary transformation of multiplication by the function (generalising (4.10))

$$\exp\left(i\int_{x_0}^{x} a(y)\,dy\right),$$

where $a$ is the magnetic potential and the integration is taken along the shortest path connecting $x_0$ and $x.$<sup>1</sup> Then the operators $L_{q,a}$ and $L_q$ are connected by

$$e^{-i\int_{x_0}^{x} a(y)dy}\, L_{q,a}\, e^{i\int_{x_0}^{x} a(y)dy} = L_q.$$

Introducing the diagonal $M_\partial \times M_\partial = (N+1)\times(N+1)$ matrix

$$\mathbf{U}_{x_j,x_j} = e^{i\int_{x_0}^{x_j} a(y)dy}, \quad x_j \in \partial\mathbf{T}, \tag{20.2}$$

we get an explicit relation connecting the corresponding M-functions

$$\mathbf{M}\_{L\_{q,a}}(\lambda) = \mathbf{U} \mathbf{M}\_{L\_q}(\lambda) \mathbf{U}^{-1}. \tag{20.3}$$

<sup>1</sup> In fact the integral does not depend on the path, since any path different from the shortest one has to return back along the same way leading to cancellations of the corresponding contributions.

It follows that it is impossible to reconstruct the exact form of the magnetic potential—only the integrals $\int_{x_i}^{x_j} a(y)dy$ between the contact points. Having this in mind we restrict our considerations in this chapter to Schrödinger operators with zero magnetic potential.
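A quick numerical illustration of (20.3), with an arbitrary symmetric matrix standing in for the M-function and illustrative flux values: the transformation $\mathbf{U}\mathbf{M}\mathbf{U}^{-1}$ leaves the diagonal entries and the absolute values of all entries unchanged, so only the phase differences, i.e. the flux integrals between contact points, are visible in the data.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = M + M.T                                  # M-functions are symmetric matrices

phi = np.array([0.0, 0.7, -1.1])             # flux integrals int_{x_0}^{x_j} a(y) dy
U = np.diag(np.exp(1j * phi))                # the matrix (20.2)
Ma = U @ M @ np.linalg.inv(U)                # relation (20.3)

assert np.allclose(np.diag(Ma), np.diag(M))  # diagonal entries are unchanged
assert np.allclose(np.abs(Ma), np.abs(M))    # entries change only by phases
# the phase of the (i, j) entry shifts by phi_i - phi_j:
assert np.allclose(Ma[0, 1], M[0, 1] * np.exp(1j * (phi[0] - phi[1])))
```
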

Physicists like to say that *there is no magnetic field in one dimension*.

**Observation 20.2** Let $L_q^{\mathbf{S}}$ be a Schrödinger operator on a metric tree $\mathbf{T}.$ Let $\theta(x)$ be a real-valued function on $\mathbf{T}$, constant on every edge $E_n$ and equal to zero on all pendant edges:

$$\theta(\mathbf{x}) = \begin{cases} \theta\_n, \ x \in E\_n, \ E\_n \text{ is not a pendant edge;} \\ 0, \ x \in E\_n, \ E\_n \text{ is a pendant edge.} \end{cases}$$

Then the similarity transformation

$$L_q^{\mathbf{S}} \to \quad L_q^{\hat{\mathbf{S}}(\theta)} = e^{-i\theta(x)} L_q^{\mathbf{S}} e^{i\theta(x)} \tag{20.4}$$

preserves the M-function. In other words, the operators $L_q^{\mathbf{S}}$ and $L_q^{\hat{\mathbf{S}}(\theta)}$ have precisely the same Titchmarsh-Weyl matrices, but the vertex conditions at the internal vertices may be different, since they are described by the scattering matrices $\mathbf{S}$ and $\hat{\mathbf{S}}.$ The relation between these matrices is rather explicit, but we leave its exact form as a problem for the readers.

**Problem 89** Determine the formula that describes the relation between the matrices $\mathbf{S}$ and $\hat{\mathbf{S}}$, explaining how the vertex conditions are changed under the similarity transformation (20.4).

It follows that the inverse problem can be solved only up to the similarity transformation (20.4). Note that this transformation does not change the metric graph **T** and the potential *q*, but does affect the vertex conditions.

**Observation 20.3** If the matrix parametrising the vertex conditions has zero entries, then it might happen that the metric graph cannot be reconstructed uniquely. Such a counterexample was first presented in [335], where the Laplace operator $L^{\mathbf{S}}$ on the cross graph depicted in Fig. 20.2 was considered. If the vertex scattering matrix $S^1$ associated with the central vertex $V^1 = \{x_2, x_4, x_6, x_8\}$ is chosen such that there is no transition between the opposite branches of the cross and no reflection from the central vertex, then all crosses with equal distances between the *neighbouring* pendant vertices $d(x_1,x_3),\ d(x_3,x_5),\ d(x_5,x_7),\ d(x_7,x_1)$ may have identical M-functions.

The matrix *S*<sup>1</sup> possessing the described properties and being scaling invariant has the form

$$S^1 = \begin{pmatrix} 0 & \alpha & 0 & \beta \\ \alpha & 0 & \beta & 0 \\ 0 & \beta & 0 & -\alpha \\ \beta & 0 & -\alpha & 0 \end{pmatrix}, \quad \alpha^2 + \beta^2 = 1,\ \alpha, \beta \in (-1,1).$$

The corresponding M-function is calculated in Appendix 1:

$$\mathbf{M}(\lambda) = \frac{-k}{\alpha^2 s_{1-3}s_{2-4} - s_{2+3}s_{1+4}}
\begin{pmatrix}
\alpha^2 c_{1-3}s_{2-4} - c_{1+4}s_{2+3} & \alpha s_{3+4} & \alpha\beta s_{2-4} & \beta s_{2+3} \\
\alpha s_{3+4} & \alpha^2 c_{2-4}s_{1-3} - c_{2+3}s_{1+4} & \beta s_{1+4} & \alpha\beta s_{1-3} \\
\alpha\beta s_{2-4} & \beta s_{1+4} & -\alpha^2 c_{1-3}s_{2-4} + c_{2+3}s_{1+4} & -\alpha s_{1+2} \\
\beta s_{2+3} & \alpha\beta s_{1-3} & -\alpha s_{1+2} & -\alpha^2 c_{2-4}s_{1-3} + c_{1+4}s_{2+3}
\end{pmatrix}, \tag{20.5}$$

where we have introduced the short notations

$$c_{i\pm j} := \cos k(l_i \pm l_j), \quad s_{i\pm j} := \sin k(l_i \pm l_j), \quad i,j = 1,2,3,4. \tag{20.6}$$

The matrix $\mathbf{M}(\lambda)$ does not depend on all four length parameters $l_j,\ j = 1,2,3,4,$ determining the cross $\mathbf{T}.$ To see this one may introduce the following three new length parameters:

$$\begin{aligned}
\mathcal{L} &= l_1 + l_2 + l_3 + l_4 && \text{the total length of the graph,} \\
L_{1+2} &= l_1 + l_2 && \text{the distance between the vertices } V^1 \text{ and } V^2, \\
L_{1+4} &= l_1 + l_4 && \text{the distance between the vertices } V^1 \text{ and } V^4.
\end{aligned} \tag{20.7}$$

It is easy to see that all relevant combinations of $l_j$ appearing in (20.5) can be expressed in terms of $\mathcal{L},\ L_{1+2},\ L_{1+4}$:

$$\begin{cases} l\_2 + l\_3 = \mathcal{L} - L\_{1+4}, \\ l\_1 - l\_3 = L\_{1+2} + L\_{1+4} - \mathcal{L}, \\ l\_2 - l\_4 = L\_{1+2} - L\_{1+4}. \end{cases} \tag{20.8}$$

We have proven that the M-functions for the 4-star graphs with the edge lengths $l_1, l_2, l_3, l_4$ and with the edge lengths $l_1+l,\ l_2-l,\ l_3+l,\ l_4-l$ are identical for any $0 \le l < \min l_j.$
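This ambiguity is easy to confirm numerically: under the shift $l \mapsto (l_1+l,\ l_2-l,\ l_3+l,\ l_4-l)$ the three parameters (20.7), and hence every combination in (20.8) entering (20.5), are unchanged. A minimal check with illustrative lengths:

```python
import numpy as np

l = np.array([1.0, 2.0, 1.5, 0.8])                # edge lengths l_1, ..., l_4
shift = 0.3                                       # any 0 <= shift < min(l)
l2 = l + shift * np.array([1.0, -1.0, 1.0, -1.0])

def params(l):
    # the parameters (20.7): total length, L_{1+2}, L_{1+4}
    return (l.sum(), l[0] + l[1], l[0] + l[3])

assert np.allclose(params(l), params(l2))         # M(lambda) sees only these
assert not np.allclose(l, l2)                     # although the graphs differ
```
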

It follows that, to ensure that the metric graph is uniquely determined by the M-function, one should require that the vertex scattering matrices $S_\mathrm{v}^m(k)$ or their high energy limits $S_\mathrm{v}^m(\infty)$ do not have identically vanishing entries. This assumption is rather strong and is not necessary for most of our considerations, but it is easy to formulate and to verify.

**Problem 90** Consider the cross graph shown in Fig. 20.2 and the corresponding Laplace operator defined on the set of functions satisfying the vertex conditions (20.32). Calculate the dynamical response operator $\mathbf{R}^T$ associated with the contact set $\partial\mathbf{T} = \{x_1, x_3, x_5, x_7\}$ (use Definition 19.3). Compare the result with the M-function calculated above. Do you see that the response operator depends on the distances between the *neighbouring* pendant vertices, but is independent of the distances between the opposite ones?

**Remark 20.4** As we have already mentioned, degree two vertices should be ignored in the case of standard vertex conditions, since the corresponding two edges may be substituted by one longer edge without changing the spectral data for the whole graph. Note that these vertex conditions are excluded if one requires that $S_\mathrm{v}^m(k)$ or $S_\mathrm{v}^m(\infty)$ does not have entries identically equal to zero.

Based on our observations, we are going to assume that the following assumption is satisfied:

**Assumption 20.5** *The high energy limits of the vertex scattering matrices* $S_\mathrm{v}^m(\infty)$ *do not have zero entries.*

The assumption implies in particular that the $S_\mathrm{v}^m(\infty)$ are irreducible; it holds for the standard vertex conditions with the exception of degree two vertices.

Under Assumption 20.5 the spectral data, *i.e.* the M-function or the dynamical response operator $\mathbf{R}^T$ associated with all degree one vertices, determine

- the metric tree $\mathbf{T}$;
- the potential $q$;
- the vertex conditions at the internal vertices (up to the similarity transformation (20.4)).
In what follows we are going to describe how to solve all three subproblems.

## **20.2 Subproblem I: Reconstruction of the Metric Tree**

The BC-method allows one to reconstruct either the whole tree at once or just parts of it that look like bunches. We shall speak about global and local procedures respectively. Consider Fig. 20.1 presenting a tree. Instead of reconstructing the whole tree at once one might be interested in recovering just the part looking like the bunch formed by the edges $E_1, E_2$, and $E_3$ (marked in red in the figure).

The main idea behind our method of reconstruction of the metric graph is the fact that the singularities in the response operators for the Schrödinger and Laplace evolutions coincide, provided of course the vertex conditions are the same. This can already be seen from formula (19.14) representing the solution using the Goursat kernel: if the potential is identically zero, then the solution is given by a travelling wave; if the potential is not zero, then the solution contains, in addition to the same travelling wave, an integral term with a bounded kernel.

## *20.2.1 Global Reconstruction of the Metric Tree*

The whole metric tree **T** can be reconstructed at once if the distances between all contact points are known.

**Lemma 20.6** *Let* **T** *be a metric tree with the contact set ∂***T** *given by all degree one vertices. Then* **T** *is uniquely determined by the set of all distances between the contact points:* 

$$\text{dist}\,(V^i, V^j), \quad V^i, V^j \in \partial\mathbf{T}. \tag{20.9}$$

#### **Fig. 20.3** Subtree $\mathbf{T}_{1,2,3}$

*Proof* The distance between a pair of pendant vertices is equal to the length of the unique shortest path connecting them. Consider any three vertices from $\partial\mathbf{T}$, say $V^1, V^2,$ and $V^3$ (Fig. 20.3). Let us denote by $W^1$ the unique vertex where the three paths connecting $V^1$ with $V^2$, $V^2$ with $V^3$, and $V^3$ with $V^1$ intersect. Then the distance between $V^1$ and $W^1$ can be calculated as

$$\text{dist}\left(V^1, W^1\right) = \frac{\text{dist}\left(V^1,V^2\right) + \text{dist}\left(V^1,V^3\right) - \text{dist}\left(V^2,V^3\right)}{2}.$$

Hence we are able to reconstruct the subtree **T**1*,*2*,*<sup>3</sup> ⊂ **T** covered by the shortest paths connecting the three vertices.

Consider now the next vertex, say $V^4.$ Our immediate goal is to reconstruct the subtree $\mathbf{T}_{1,2,3,4} \subset \mathbf{T}$ covered by the six shortest paths connecting $V^i$ with $V^j$, $i,j = 1,2,3,4.$ In general $\mathbf{T}_{1,2,3,4}$ has two internal vertices $W^1$ and $W^2$, which might also coincide in the degenerate case. Let us calculate the distance between $V^4$ and $\mathbf{T}_{1,2,3}$ considered as a subtree of $\mathbf{T}_{1,2,3,4}$:

$$\text{dist}\left(V^4, \mathbf{T}_{1,2,3}\right) = \min_{\substack{i,j = 1,2,3 \\ i \neq j}} \frac{\text{dist}\left(V^4,V^i\right) + \text{dist}\left(V^4,V^j\right) - \text{dist}\left(V^i,V^j\right)}{2}. \tag{20.10}$$

Consider any pair of indices $(i_0, j_0)$ realising the minimum above. Then the tree $\mathbf{T}_{1,2,3,4}$ is obtained from $\mathbf{T}_{1,2,3}$ by creating a new vertex $W^2$ (if necessary) at the distance

$$\text{dist}\left(V^{i_0}, W^2\right) = \frac{\text{dist}\left(V^{i_0},V^{j_0}\right) + \text{dist}\left(V^{i_0},V^4\right) - \text{dist}\left(V^{j_0},V^4\right)}{2}$$

and attaching an edge of length given by (20.10) (Fig. 20.4).

Continuing this process the whole finite metric tree **T** is reconstructed step by step.
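The proof is constructive and translates directly into an algorithm. The sketch below uses a hypothetical tree: $V^1, V^2, V^3$ hang at $W^1$ with pendant lengths $1, 2, 3$; a second vertex $W^2$ sits on the $W^1$-$V^3$ path at distance $1$ from $W^1$ and carries a pendant edge of length $0.75$ to $V^4$. Only the leaf-to-leaf distances are given, and the three-point formula together with (20.10) recovers the internal structure:

```python
# pairwise distances dist(V^i, V^j) for the hypothetical tree described above
d = {(1, 2): 3.0, (1, 3): 4.0, (2, 3): 5.0,
     (1, 4): 2.75, (2, 4): 3.75, (3, 4): 2.75}
d.update({(j, i): v for (i, j), v in d.items()})

# three-point formula: distance from V^1 to the branching vertex W^1
dist_V1_W1 = (d[1, 2] + d[1, 3] - d[2, 3]) / 2
assert abs(dist_V1_W1 - 1.0) < 1e-9

# formula (20.10): distance from V^4 to the subtree T_{1,2,3}
pairs = [(1, 2), (1, 3), (2, 3)]
gap = lambda i, j: (d[i, 4] + d[j, 4] - d[i, j]) / 2
dist_V4_T = min(gap(i, j) for i, j in pairs)
assert abs(dist_V4_T - 0.75) < 1e-9         # the pendant-edge length of V^4

# attachment point: distance from V^{i0} to the new vertex W^2
i0, j0 = min(pairs, key=lambda p: gap(*p))
dist_Vi0_W2 = (d[i0, j0] + d[i0, 4] - d[j0, 4]) / 2
assert abs(dist_Vi0_W2 - 2.0) < 1e-9        # 1 (V^1 to W^1) + 1 (W^1 to W^2)
```
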

 

The distances between the contact points may be determined using travelling times, introduced below.

#### **Fig. 20.4** Subtree $\mathbf{T}_{1,2,3,4}$

**Definition 20.7** Let $\mathbf{R}^T$ be the dynamical response operator associated with the metric tree $\mathbf{T}$ and the contact set $\partial\mathbf{T}.$ Let $V^i$ and $V^j$ be any two vertices from $\partial\mathbf{T}.$ Then the travelling time $t(V^i, V^j)$ between the vertices is defined by

$$t(V^i, V^j) = \sup\left\{ T : R^T_{V^i,V^j} \equiv 0 \right\}, \tag{20.11}$$

where $R^T_{V^i,V^j}$ denotes the entry of the matrix operator $\mathbf{R}^T$ associated with the vertices $V^i$ and $V^j$.

Consider the wave evolution on $\mathbf{T}$ initiated by a boundary control applied just at the vertex $V^i$:

$$\vec{f}(t) = f(t)\,\vec{e}_i, \quad f|_{t<0} \equiv 0,$$

where $\vec{e}_i$ is the $i$-th standard basis vector in $\mathbb{C}^{M_\partial}.$ Then the travelling time between the vertices $V^i$ and $V^j$ is the smallest time at which the wave initiated by such a boundary control $\vec{f}(t)$ may reach the vertex $V^j.$

The relation between the travelling times and the distances between the pendant vertices in a tree is described by the following lemma.

**Lemma 20.8** *Consider the Schrödinger equation on a finite metric tree* $\mathbf{T}$ *with vertex conditions at the internal vertices such that the high energy limits of the vertex scattering matrices* $S_\mathrm{v}^m(\infty)$ *do not have zero entries. Then the travelling time between any two vertices* $V^i$ *and* $V^j$ *from the contact set is equal to the distance* $\mathrm{dist}(V^i, V^j)$ *between the vertices.*

*Proof* The wave evolution has unit speed of propagation, hence the travelling time cannot exceed the distance between the vertices. It remains to prove the opposite inequality.

Consider the shortest path from $V^j$ to $V^i$ and denote the endpoints along it by $x_1, x_2, x_3, \dots, x_{2s}$, with $x_1 \in V^j$ and $x_{2s} \in V^i$, so that $x_{2n}$ and $x_{2n+1}$, $n = 1,2,\dots,s-1$, belong to the same vertex. Formula (19.14), describing the solution to the wave equation on an arbitrary interval using the Goursat kernel, implies that the travelling times between the endpoints of the same edge are always equal to its length. Formula (19.59) for the kernel of the response operator on the star graph implies that the travelling times between endpoints belonging to the same vertex are always equal to zero, provided $S_\mathrm{v}^m(\infty)$ does not have zero entries. It follows that the travelling time between the pendant vertices coincides with the distance.

Combining the previous two lemmas we get

**Theorem 20.9** *Consider the Schrödinger operator on a finite compact metric tree* $\mathbf{T}$ *with contact set* $\partial\mathbf{T}$*. Assume that the high energy limits of the vertex scattering matrices associated with all internal vertices do not have zero entries. Then knowledge of the response operator* $\mathbf{R}^T$ *associated with the boundary for a certain*

$$T > \text{diam}(\mathbf{T}),$$

*determines the metric tree. Here* $\mathrm{diam}(\mathbf{T})$ *is the diameter of the metric tree, i.e. the maximal distance between any two points on* $\mathbf{T}$*.*

Several of our assumptions can be weakened. For example it is not necessary to require that all entries of $S_\mathrm{v}^m(\infty)$ are different from zero: it would be enough that the entries used in the calculation of the travelling times are non-zero. Note that our definition of the travelling times is valid for metric trees only: in the case of graphs with cycles formula (20.11) may give an infinite value despite the graph being finite and compact.

**Problem 91** Find an example of a finite compact metric graph with cycles, such that formula (20.11) determines an infinite travelling time between certain two vertices.

Another possible weakening of the assumptions is to use the principal $(M_\partial-1)\times(M_\partial-1)$ block of the response operator instead of the whole matrix. Using this block, the travelling times between any two of the selected $M_\partial - 1$ pendant vertices can be determined. This information allows one to reconstruct the whole tree $\mathbf{T}$ except the pendant edge attached to the excluded pendant vertex. Comparing the principal block of the response operator associated with $\mathbf{T}$ with the response operator for the reconstructed subtree allows us to calculate the length of the last pendant edge and where in $\mathbf{T}_{1,2,\dots,M_\partial-1}$ it should be attached. As before it is enough to compare the response operator for the Schrödinger evolution with the response operator for the Laplace evolution, since their singularities coincide.

In the case of the Laplacian the metric tree can be reconstructed from **M***(*0*)* := **M***(λ)*|*λ*=<sup>0</sup> (see Appendix 2).

## *20.2.2 Local Reconstruction of the Metric Tree*

Our main tool will be a bunch-peeling procedure where the potential and the vertex conditions are reconstructed locally on a part of the metric tree. Applying this procedure, there is no need to reconstruct the whole graph at once. Therefore we describe here how to reconstruct a bunch of a metric tree.

**Step 1. Reconstruction of the Pendant Edges** Let $V^1$ be any vertex from $\partial\mathbf T$. Then the diagonal entry $R^T_{V^1,V^1}$ of the response operator has a kernel of the form

$$r_{V^1,V^1}(t) = -\delta'(t) + 2\bigl(S^m_{\mathbf v}(\infty)\bigr)_{22}\,\delta'(t - 2\ell_1) + H(t),$$

where $\ell_1$ is the length of the pendant edge $[x_1, x_2]$ starting at $V^1$, $\bigl(S^m_{\mathbf v}(\infty)\bigr)_{22}$ is the high energy limit of the reflection coefficient from the nearest vertex (the endpoint $x_2$ belongs to this vertex), and $H(t)$ is a certain $L_{1,\mathrm{loc}}$-function. This formula is implied by (19.14) and (19.59).

It follows that the knowledge of the diagonal part of the response operator allows one to reconstruct the lengths of all pendant edges.
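As a toy numerical illustration (not part of the text), the lengths can be read off by halving the delays of the second $\delta'$-singularities in the diagonal kernels; the vertex names and delay values below are invented for the example:

```python
# Hypothetical delays of the second delta'-singularity observed in the
# diagonal kernels r_{V^j,V^j}; the singularity sits at t = 2*ell_j, so
# each pendant-edge length is half the observed delay.
observed_delays = {"V1": 1.0, "V2": 3.0, "V3": 0.4}
pendant_lengths = {v: d / 2 for v, d in observed_delays.items()}
print(pendant_lengths)  # {'V1': 0.5, 'V2': 1.5, 'V3': 0.2}
```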

**Step 2. Identification of the Bunches** We want to know which pendant edges form bunches like the edges $E_1$, $E_2$, and $E_3$ in Fig. 20.1. To this end we use the travelling times again: two pendant vertices $V^i$ and $V^j$ belong to the same bunch if and only if the travelling time between them is equal to the sum of the lengths of the corresponding pendant edges

$$T(V^i, V^j) = \ell_i + \ell_j,$$

where $\ell_i$ and $\ell_j$ are the lengths of the pendant edges starting at $V^i$ and $V^j$ respectively. This is an equivalence relation in the case of a metric tree. The bunches in the tree are then simply the equivalence classes of pendant edges.
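The equivalence-class computation can be sketched numerically; everything below (vertex names, the union-find helper, the toy travelling times) is illustrative and not part of the text:

```python
from itertools import combinations

def bunches(lengths, travel_time, tol=1e-9):
    """Group pendant vertices into bunches: V^i ~ V^j iff the travelling
    time between them equals the sum of their pendant-edge lengths."""
    verts = list(lengths)
    parent = {v: v for v in verts}

    def find(v):  # union-find root with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for vi, vj in combinations(verts, 2):
        if abs(travel_time(vi, vj) - (lengths[vi] + lengths[vj])) < tol:
            parent[find(vi)] = find(vj)
    classes = {}
    for v in verts:
        classes.setdefault(find(v), set()).add(v)
    return sorted(map(frozenset, classes.values()), key=len)

# Toy data: V1 and V2 share a root vertex; V3 is attached through an
# inner path of length 0.7, so its travelling times exceed the length sums.
lengths = {"V1": 1.0, "V2": 2.0, "V3": 1.5}
times = {frozenset({"V1", "V2"}): 3.0,
         frozenset({"V1", "V3"}): 1.0 + 0.7 + 1.5,
         frozenset({"V2", "V3"}): 2.0 + 0.7 + 1.5}
tt = lambda a, b: times[frozenset({a, b})]
print([sorted(b) for b in bunches(lengths, tt)])  # [['V3'], ['V1', 'V2']]
```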

We summarise the results of our studies as:

**Theorem 20.10** *Consider the Schrödinger operator on a finite compact metric tree*  **T** *with contact set ∂***T***. Then under Assumption 20.5 the knowledge of the response operator* **R***<sup>T</sup> associated with the boundary for a certain* 

$$T > 2\max_{V^j \in \partial\mathbf T} \ell_j,\tag{20.12}$$

*determines all bunches in the tree. Here $\ell_j$ are the lengths of the pendant edges emanating from the contact vertices $V^j$.*

Note that reconstruction of a bunch includes not only determining which pendant edges are connected together at a certain inner vertex but also calculation of their lengths. In other words the bunches are reconstructed as metric subtrees of **T***.*

Moreover, local reconstruction of a bunch requires knowledge of the response operator for $T$ just above twice the length of the longest pendant edge in the bunch (see (20.12)), which could be much less than the diameter of the tree.

## **20.3 Subproblem II: Reconstruction of the Potential**

In this section we assume that we have already reconstructed either the whole metric tree or one of its bunches. Our goal is to describe how the potential can be recovered on the pendant edges.

**Theorem 20.11** *Let $V^1$ be any pendant vertex of a metric tree* **T**. *Let $\ell_1$ be the length of the pendant edge emanating from $V^1$. Then the diagonal entry $R^T_{V^1,V^1}$ of the response operator for $T = 2\ell_1$ completely determines the potential on the pendant edge.*

*Proof* Without loss of generality we assume that the pendant edge *E*<sup>1</sup> = [*x*1*, x*2] is parametrised so that the endpoint *x*<sup>1</sup> corresponds to the pendant vertex *V* <sup>1</sup>*.*

To reconstruct the potential on the pendant edge we are going to compare the diagonal of the response operator for **T** with the (scalar) response operator **R***<sup>T</sup>* <sup>0</sup> for the Schrödinger operator on the half-axis [*x*1*,*∞*)* with the same potential *q* on the interval [*x*1*, x*2]. One may assume that the potential is extended to be zero outside the interval.

We are interested in comparing $\mathbf R^T_0$ with the diagonal element $R^T_{V^1,V^1}$ of the original response operator. To determine these operators we consider the wave equation on $[x_1,\infty)$ and **T**, respectively, subject to boundary control at $x_1 = V^1$. For the original tree we assume zero control at all other contact vertices and vertex conditions at the internal vertices. Let us denote by $u^f(x,t)$ and $u_0^f(x,t)$ the solutions of the wave equations on **T** and $[x_1,\infty)$ respectively.

The response operator for [*x*1*,*∞*)* is determined by solving the initial-value problem (see (19.1)):

$$\begin{aligned} \frac{\partial^2}{\partial t^2}u_0 - \frac{\partial^2}{\partial x^2}u_0 + q(x)u_0 &= 0, \qquad x\in[x_1,\infty),\ t\in[0,T],\\ u_0(x,0) = \frac{\partial}{\partial t}u_0(x,0) &= 0,\\ u_0(x_1,t) &= f(t). \end{aligned}$$

The solution is given by (19.14) as

$$u_0^f(x,t) = \begin{cases} f(t-x+x_1) + \displaystyle\int_{x-x_1}^{t} w(x,s)f(t-s)\,ds, & x-x_1\le t,\\ 0, & t\le x-x_1, \end{cases} \quad x\in[x_1,\infty),\tag{20.13}$$

where *w* is the Goursat kernel. The solution is identically equal to zero for *x > t* + *x*1.
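The free case $q = 0$, where the Goursat kernel $w$ vanishes and (20.13) reduces to a single travelling wave, can be sketched in a few lines; the function names are illustrative only:

```python
# Minimal numerical sketch of (20.13) with q = 0 (so w = 0): the boundary
# control f travels to the right with unit speed,
#   u0(x, t) = f(t - (x - x1))  for  x - x1 <= t,  and 0 ahead of the front.
def u0(f, x, t, x1=0.0):
    return f(t - (x - x1)) if x - x1 <= t else 0.0

f = lambda s: s**2 if s >= 0 else 0.0  # a control switched on at t = 0
print(u0(f, 0.5, 2.0))  # behind the front: f(1.5) = 2.25
print(u0(f, 3.0, 2.0))  # ahead of the front: 0.0
```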

Note that in order to determine $\mathbf R^T_{V^1,V^1}$ for $T \le \ell_1$ one needs to solve the same wave equation for $x \in [x_1, x_2]$. The causality condition $u(x,t) = 0$ for $x > x_1 + t$ is automatically satisfied due to the unit speed of wave propagation and the zero boundary control at the contact vertices different from $V^1$. It follows that the two solutions are identical:

$$
u_0^f(x,t) = u^f(x,t), \quad 0 \le t < \ell_1,\ x\in[x_1,x_2).
$$

The two solutions coincide because the waves generated by the boundary control do not have enough time to reach the nearest internal vertex in **T** and are localised to the pendant edge [*x*1*, x*2]. They are independent of the rest of the graph **T***.*

Consider now the time interval $t \in [\ell_1, 2\ell_1]$. For such values of $t$ the waves generated by the boundary control at $x_1$ have enough time to reach the closest internal vertex, but the wave reflected from this vertex may affect only the region

$$x \ge 2x_2 - x_1 - t.$$

This can be seen from formula (19.14), since the wave reflected from the nearest vertex may be obtained by solving the wave equation on $[x_1, x_2]$ with a certain boundary control at $x_2$. It follows that $u^f$ coincides with $u_0^f$ not only for $(x,t) \in [x_1,x_2]\times[0,\ell_1]$ (as above), but also in the triangular region

$$t \in [\ell_1, 2\ell_1], \quad x_1 \le x \le 2x_2 - x_1 - t.$$

It follows in particular that the two solutions coincide in a neighbourhood of $x = x_1$, implying that the diagonal entries of the response operators coincide:

$$\mathbf R_0^T = R^T_{V^1,V^1}, \quad T < 2\ell_1.$$

As is proven in Sect. 19.4 (Theorem 19.6), the response operator **R***<sup>T</sup>* <sup>0</sup> determines the potential in the Schrödinger equation for *x* − *x*<sup>1</sup> *< T/*2, *i.e.* on the interval *x* ∈ [*x*1*, x*2*).* 

Since the pendant edge in Theorem 20.11 was arbitrary, we conclude that the knowledge of the diagonal entries in the response operator **R***<sup>T</sup>* for *T* greater than or equal to double the maximal length of the pendant edges determines the potential on all pendant edges. The procedure is local, hence to determine the potential on a bunch it is enough to know the diagonal elements of the response operator associated with the bunch. The optimal value of *T* is also determined by the lengths of the edges in the bunch.

Note that we were not yet able to reconstruct the potential on the whole tree **T**. Before we proceed it is necessary to determine the vertex conditions at the nearest internal vertex. If the vertex conditions are known *a priori*, then one may proceed directly to Sect. 20.6.

# **20.4 Subproblem III: Reconstruction of the Vertex Conditions**

Our goal in this section is to describe how the vertex conditions at the root of any bunch can be reconstructed from the response operator associated with the bunch. We shall limit our studies here to the case of zero potential on the bunch, leaving the general case for the following section, where the most general inverse problem is considered. Throughout this section we assume that the block of the response operator $\mathbf R^T$ associated with the bunch is known. The time parameter $T$ should be slightly greater than double the length of the longest edge in the bunch. As has been shown in Sect. 19.6, the response operator has a particularly simple form in the equilateral case, since one is able to use vector notation. Therefore, to reconstruct the vertex conditions, we are first going to trim the bunch, making all pendant edges equilateral, and then solve the inverse problem for the equilateral bunch.

## *20.4.1 Trimming a Bunch*

Let the bunch in **T** be formed by the pendant edges $E_1, E_2, \dots, E_{N-1}$ with pendant vertices $V^1, \dots, V^{N-1}$. Let us denote by $V^0$ the root vertex of the bunch and by $E_N$ the inner edge connected to it. It is assumed that the entries

$$R^T_{V^i, V^j}, \quad i,j = 1,2,\dots,N-1,$$

of the response operator are known.

Let us modify the tree **T** to get a new tree $\hat{\mathbf T}$ by substituting all pendant edges from the bunch with equilateral edges of any length

$$\ell \le \min_{k=1,\dots,N-1} \ell_k.$$

The new edges and pendant vertices will be denoted by *E*ˆ*<sup>j</sup>* and *V*ˆ *<sup>j</sup>* respectively. We are going to say that **T**ˆ is obtained from **T** by **trimming** the edges from the selected bunch (Fig. 20.5).

Let us choose in addition a parameter $\epsilon$, $0 < \epsilon < \min\{\ell, \ell_N\}$, where $\ell_N$ is the length of $E_N$. Taking into account that the solution to the wave equation on the pendant edges is given by a sum of travelling waves, we get the following formula connecting the response operators $\mathbf R^T$ and $\hat{\mathbf R}^T$ for the original and trimmed trees **T**

**Fig. 20.5** Trimming a bunch

and **T**ˆ :

$$\begin{aligned} \mathbf R^{\ell_i+\ell_j+\epsilon}(V^i,V^j)(t+\ell_i+\ell_j) &= \hat{\mathbf R}^{2\ell+\epsilon}(\hat V^i,\hat V^j)(t+2\ell), \quad 0\le t<\epsilon;\\ \mathbf R^{\ell_i+\ell_j+\epsilon}(V^i,V^j)(t) &= 0, \quad 0\le t<\ell_i+\ell_j;\\ \hat{\mathbf R}^{2\ell+\epsilon}(\hat V^i,\hat V^j)(t) &= 0, \quad 0\le t<2\ell. \end{aligned}\tag{20.14}$$

We formulate our observation.

**Lemma 20.12** *The block of the response operator $\mathbf R^{2(\max_{k=1,\dots,N-1}\ell_k)+\epsilon}$ associated with any bunch in a metric tree* **T**, *with any $0 < \epsilon < \min\{\ell, \ell_N\}$, determines the corresponding block of the response operator $\hat{\mathbf R}^{2\ell+\epsilon}$ for the trimmed tree.*

# *20.4.2 Recovering the Vertex Conditions for an Equilateral Bunch*

In what follows we assume that a bunch in the metric tree **T** is selected, the bunch is equilateral in the sense that all pendant edges have the same length $\ell$, and the potential on the bunch is identically equal to zero. Assume that the $(N-1)\times(N-1)$ block of the response operator associated with the bunch is known for $T = 2\ell + \epsilon$, $\epsilon > 0$. For $\epsilon$ sufficiently small the block of the response operator is determined by the solution of the wave equation on the star graph formed by $E_1,\dots,E_{N-1},E_N$ joined together at the root vertex $V^0$. The rest of the tree has no influence on the selected block for $t < 2\ell + \epsilon$.

Formula (19.59) determines the response operator in the case of zero potential not only on the pendant edges, but on the edge $E_N$ as well. We have assumed that the potential is zero on the pendant edges, which is reasonable since the potential on the pendant edges is determined from the response operator and can be cleaned away; this will be explained in Sect. 20.5.1 below. On the other hand, we do not know how to determine the potential on $E_N$, and it is less clear how to clean it away. Therefore, to determine the principal $(N-1)\times(N-1)$ block of the response operator, we need to repeat the calculations from Sect. 19.6 taking into account that the potential on $E_N$ may be nonzero.

At $t = \ell$ the first waves generated by the boundary control reach the root vertex $V^0$ and penetrate into $E_N$. Therefore, at least for $\ell \le t \le \ell + \epsilon$, the (scalar) response from $E_N$ is identical to the response for the one-dimensional Schrödinger operator on the half-axis with the potential inherited from $E_N$, as in Sect. 20.3. It follows that there exists a differentiable kernel $r_N(t)$ (depending on the potential on $E_N$) such that

$$\frac{\partial}{\partial x}u_N(0,t) = -\frac{\partial}{\partial t}u_N(0,t) + \int_{\ell}^{t} r_N(t-s)\,u_N(0,s)\,ds,\tag{20.15}$$

where $u_N(x,t)$ is the solution on $E_N$ and we parametrise the edge as $[0,\ell_N]$ starting from the vertex $V^0$. Let us recall that we agreed that definite integrals are considered to be zero when the upper limit is smaller than the lower one. In the rest of this subsection the solution on $E_N$ will be substituted with the response (20.15).

**Step** 0*.* We start by noting that the solution is identically equal to zero in the area

$$x < \ell - t, \quad 0 < t < \ell,$$

due to the unit propagation speed. This area is marked by 0 in Fig. 20.6. The solution on $E_N$ is also identically zero for $t \le \ell$.

**Step** 1*.* We proceed now to determining the solution on the bunch. It will be convenient to denote all $(N-1)$-dimensional vectors from $\mathbb C^{N-1}$ by the upper index $*$, sometimes identifying these vectors with the vectors from $\mathbb C^N$ having the last entry equal to zero. Following Sect. 19.6 we conclude that for $t \le \ell$ the solution on the

**Fig. 20.6** Waves on the equilateral bunch: 0—red area—the solution is identically zero; 1—orange area—solution contains one wave given by (20.16); 2—green area—solution contains two waves given by (20.21); 3—blue area—solution contains three waves given by (20.22)

bunch is given by a single travelling wave directly determined by the boundary control function $\vec f^*$:

$$
\vec u^*(x,t) = \vec f^*(t + x - \ell), \quad 0 \le t < \ell, \tag{20.16}
$$

where we have taken into account that the potential on the pendant edges is zero and the edges are parametrised so that $x = 0$ corresponds to the root vertex $V^0$ and $x = \ell$ to the contact vertices. The same formula holds for $\vec u^*(x,t)$ even in the triangular area

$$
t - \ell < x < \ell, \quad \ell < t < 2\ell,
$$

since the waves reflected from *V* <sup>0</sup> do not have time to penetrate into it. The whole area where the solution on the bunch is given by just one travelling wave as in (20.16) is indicated by 1 in Fig. 20.6.

**Step** 2*.* Consider now the trapezoidal area

$$\max\{0,\, t - \ell - \epsilon\} < x < \min\{t - \ell,\, 3\ell - t\},$$

indicated by 2 in Fig. 20.6, taking into account that $\epsilon \le \ell$. The solution on the pendant edges contains, in addition to the travelling wave generated by the control function, the wave reflected from the root vertex

$$
\vec{u}^\*(\mathbf{x}, t) = \vec{f}^\*(t + \mathbf{x} - \ell) + \vec{a}^\*(t - \mathbf{x}).\tag{20.17}
$$

The wave reflected from the root does not have enough time to approach the pendant vertices of the bunch and, after reflection, reach the indicated area. To determine $\vec a^*$ we substitute formulas (20.15) and (20.17) into the vertex conditions (19.53)

$$\begin{cases} P_{-1}\left(\vec f^*(t-\ell) + \vec a^*(t) + u_N(0,t)\,\vec e_N\right) = 0,\\[1ex] (I - P_{-1})\left(\dfrac{\partial}{\partial t}\vec f^*(t-\ell) - \dfrac{\partial}{\partial t}\vec a^*(t) - \dfrac{\partial}{\partial t}u_N(0,t)\,\vec e_N + \displaystyle\int_{\ell}^{t} r_N'(t-s)\,u_N(0,s)\,ds\;\vec e_N\right)\\[1ex] \qquad = A(I - P_{-1})\left(\vec f^*(t-\ell) + \vec a^*(t) + u_N(0,t)\,\vec e_N\right), \end{cases}\tag{20.18}$$

where $\vec e_N$ is the $N$-th vector from the standard basis in $\mathbb C^N$. Introducing the notation

$$
\vec a(t) = \begin{pmatrix} \vec a^*(t) \\ u_N(0,t) \end{pmatrix},
$$

the system of equations can be simplified to

$$\begin{cases} P_{-1}\left(\vec f(t-\ell) + \vec a(t)\right) = 0,\\[1ex] (I - P_{-1})\left(\vec f'(t-\ell) - \vec a'(t) + \displaystyle\int_{\ell}^{t} r_N'(t-s)\,u_N(0,s)\,ds\;\vec e_N\right) = A(I - P_{-1})\left(\vec f(t-\ell) + \vec a(t)\right), \end{cases}$$

where the control vector $\vec f(t) = \begin{pmatrix}\vec f^*(t)\\ 0\end{pmatrix}$ has zero last component. As in Sect. 19.6 the first equation gives us

$$P\_{-1}\vec{a}(t) = -P\_{-1}\vec{f}(t-\ell).$$

The second equation can be turned into an integral equation assuming that *uN (*0*,t)* is known, yielding

$$\begin{aligned} (I - P\_{-1})\vec{a}(t) &= (I - P\_{-1})\vec{f}(t - \ell) - 2A \int\_{\ell}^{t} e^{-A(t - \tau)} (I - P\_{-1}) \vec{f}(\tau - \ell) d\tau \\ &+ \int\_{\ell}^{t} e^{-A(t - \tau)} \left( \int\_{\ell}^{\tau} r\_N'(\tau - s) u\_N(0, s) ds \right) (I - P\_{-1}) \vec{e}\_N d\tau. \end{aligned}$$

Summing up the last two equalities we arrive at

$$\begin{split} \vec a(t) &= \underbrace{(I - 2P_{-1})}_{=S_{\mathbf v}(\infty)} \vec f(t-\ell) - 2A\int_{\ell}^{t} e^{-A(t-\tau)}(I-P_{-1})\vec f(\tau-\ell)\,d\tau\\ &\quad + \int_{\ell}^{t} e^{-A(t-\tau)}\left(\int_{\ell}^{\tau} r_N'(\tau-s)\,u_N(0,s)\,ds\right)(I-P_{-1})\vec e_N\,d\tau. \end{split}\tag{20.19}$$

This is an integral equation because *uN* , being the last component in *a* on the left hand side, appears also in the last integral on the right hand side. We write the same equation in the more transparent form

$$\begin{aligned} \begin{pmatrix} \vec a^*(t) \\ u_N(0,t) \end{pmatrix} &= S_{\mathbf v}(\infty)\begin{pmatrix} \vec f^*(t-\ell) \\ 0 \end{pmatrix}\\ &\quad - 2A\int_{\ell}^{t} e^{-A(t-\tau)}(I-P_{-1})\begin{pmatrix} \vec f^*(\tau-\ell) \\ 0 \end{pmatrix}d\tau\\ &\quad + \int_{\ell}^{t} e^{-A(t-\tau)}(I-P_{-1})\begin{pmatrix} \vec 0^* \\ \int_{\ell}^{\tau} r_N'(\tau-s)\,u_N(0,s)\,ds \end{pmatrix}d\tau, \end{aligned}$$

where $\vec 0^*$ is the zero vector in $\mathbb C^{N-1}$. Note that the vector $\vec a^*(t)$ is coupled to $u_N(0,t)$ not only via the second (integral) term on the right hand side, but also via the last term. It follows that even in the case of zero matrix $A$ (for example if standard vertex conditions are assumed) the coupling is not trivial due to the projector $I - P_{-1}$ appearing in the last term.

Equation (20.19) is a Volterra equation of the second kind with a continuous kernel and can be solved by iterations leading to the solution formula

$$\begin{split} \vec{a}(t) &= S\_{\mathsf{V}}(\infty) \vec{f}^\*(t-\ell) - 2A \int\_{\ell}^{t} e^{-A(t-\tau)} (I - P\_{-1}) \vec{f}^\*(\tau - \ell) d\tau \\ &+ \int\_{\ell}^{t} H(t, \tau) \vec{f}^\*(\tau - \ell) d\tau, \end{split} \tag{20.20}$$

with a continuous matrix kernel $H(t,\tau)$ identically equal to zero in the region

$$\tau \ge t.$$
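The claim that a Volterra equation of the second kind with a continuous kernel can be solved by iterations is easy to check numerically; the grid solver below is a sketch (with $K \equiv 1$ and $g \equiv 1$ the exact solution of $a(t) = g(t) + \int_0^t K(t,s)\,a(s)\,ds$ is $a(t) = e^t$):

```python
import numpy as np

def volterra_picard(g, K, T=1.0, n=400, iters=30):
    """Solve a(t) = g(t) + int_0^t K(t, s) a(s) ds by Picard iteration
    on a uniform grid, using the trapezoidal rule for the integral."""
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    a = g(t).astype(float)
    for _ in range(iters):
        new = g(t).astype(float)
        for i in range(1, n + 1):
            vals = K(t[i], t[: i + 1]) * a[: i + 1]
            new[i] += np.sum(vals[1:] + vals[:-1]) * 0.5 * h
        a = new
    return t, a

# With K = 1 and g = 1 the exact solution is a(t) = exp(t).
t, a = volterra_picard(lambda t: np.ones_like(t), lambda ti, s: np.ones_like(s))
print(abs(a[-1] - np.e) < 1e-3)  # True
```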

Let **<sup>P</sup>** denote the projector in C*<sup>N</sup>* onto the subspace C*N*−<sup>1</sup> <sup>⊂</sup> <sup>C</sup>*<sup>N</sup>* orthogonal to the last vector *<sup>e</sup><sup>N</sup>* <sup>∈</sup> <sup>C</sup>*<sup>N</sup>* from the standard basis. Then the solution of the control problem restricted to the bunch can be written as

$$\begin{split} \mathbf P\vec u(x,t) &= \vec f^*(t+x-\ell) + \mathbf P S_{\mathbf v}(\infty)\vec f^*(t-x-\ell)\\ &\quad - 2\mathbf P A\int_0^{t-x-\ell} e^{-A(t-x-\tau-\ell)}(I-P_{-1})\vec f^*(\tau)\,d\tau\\ &\quad + \mathbf P\int_0^{t-x-\ell} H(t-x,\tau+\ell)\,\vec f^*(\tau)\,d\tau. \end{split}\tag{20.21}$$

This formula gives the solution to the control problem in the trapezoidal area indicated by 2 in Fig. 20.6.

**Step** 3*.* For our purposes it remains to determine the solution in the triangular region

$$3\ell - t < x < \ell, \quad 2\ell < t < \ell + x + \epsilon,$$

indicated by 3 in Fig. 20.6. For points from this region the wave reflected from the root *V* <sup>0</sup> reaches the boundary of the bunch and reflects from it. Hence the solution on the bunch contains three waves:

$$\vec u^*(x,t) = \vec f^*(t+x-\ell) + \vec a^*(t-x) - \vec a^*(t+x-2\ell).$$
The second reflected wave is easy to calculate, since the boundary control introduced on the contact set acts on the outgoing wave as if a Dirichlet condition were imposed there. The solution is given by

$$\begin{split} \mathbf P\vec u(x,t) &= \vec f^*(t+x-\ell) + \mathbf P S_{\mathbf v}(\infty)\vec f^*(t-x-\ell)\\ &\quad - 2\mathbf P A\int_0^{t-x-\ell} e^{-A(t-x-\tau-\ell)}(I-P_{-1})\vec f^*(\tau)\,d\tau\\ &\quad + \mathbf P\int_0^{t-x-\ell} H(t-x,\tau+\ell)\,\vec f^*(\tau)\,d\tau\\ &\quad - \mathbf P S_{\mathbf v}(\infty)\vec f^*(t+x-3\ell)\\ &\quad + 2\mathbf P A\int_0^{t+x-3\ell} e^{-A(t+x-\tau-3\ell)}(I-P_{-1})\vec f^*(\tau)\,d\tau\\ &\quad - \mathbf P\int_0^{t+x-3\ell} H(t+x-2\ell,\tau+\ell)\,\vec f^*(\tau)\,d\tau. \end{split}\tag{20.22}$$

The block of the response operator associated with the bunch is then given by

$$\begin{split} \mathbf P\bigl(\mathbf R^T\vec f^*\bigr)(t) &= -\frac{\partial}{\partial x}\mathbf P\vec u(x,t)\Big|_{x=\ell}\\ &= -\frac{d}{dt}\vec f^*(t) + 2\mathbf P S_{\mathbf v}(\infty)\frac{d}{dt}\vec f^*(t-2\ell) - 4\mathbf P A(I-P_{-1})\vec f^*(t-2\ell)\\ &\quad + 4\mathbf P A^2\int_0^{t-2\ell} e^{-A(t-\tau-2\ell)}(I-P_{-1})\vec f^*(\tau)\,d\tau\\ &\quad - 2\mathbf P A\int_0^{t-2\ell} H(t-\ell,\tau+\ell)\,\vec f^*(\tau)\,d\tau, \qquad T < 2\ell + 2\epsilon, \end{split}\tag{20.23}$$

where, when differentiating the two integral terms containing the continuous kernel $H$, we have taken into account that $H(t-\ell, t-\ell) = 0$. Here we extended the projector $\mathbf P$ so that it acts in $\mathbb C^{M_\partial} \supset \mathbb C^N \supset \mathbb C^{N-1}$. The singularities in the response operator coincide with the singularities for the Laplacian on the star graph (even with nonzero potential on the edge $E_N$).

**Lemma 20.13** *The block $\mathbf P\mathbf R^T\mathbf P$ of the response operator associated with an equilateral bunch in a metric tree* **T**, *known for the time parameter $T$ slightly larger than double the length of the pendant edges in the bunch, determines uniquely the following matrices associated with the vertex conditions at the root:*

$$\mathbf{PS}\_{\mathbf{V}}(\infty)\mathbf{P} \quad and \quad \mathbf{PAP}.\tag{20.24}$$

*Proof* The matrices (20.24) are determined by identifying the $\delta'$- and $\delta$-singularities in the response operator given by formula (20.23).

It remains to show that knowing the matrices (20.24) is enough to recover the unitary parameter *S* in the vertex conditions up to the phase factor associated with the edge *EN* in accordance with Observation 20.2.

**Theorem 20.14** *Consider the set of N* ×*N irreducible Hermitian unitary matrices S***v***(*∞*) having the same principal (N* − 1*)* × *(N* − 1*) block* **P***S***v***(*∞*)***P***. This family of matrices can be described using one real phase parameter so that* 

$$S\_{\mathbf{v}}^{\theta}(\infty) = \mathcal{R}\_{\theta} S\_{\mathbf{v}}^{0}(\infty) \mathcal{R}\_{-\theta}, \ \theta \in [0, \pi), \tag{20.25}$$

*where S*<sup>0</sup> **<sup>v</sup>** *(*∞*) is a certain particular member of the family and* R*<sup>θ</sup> is given by* 

$$\mathcal R_\theta = \operatorname{diag}\{1,1,\dots,1,e^{i\theta}\} = \begin{pmatrix} 1 & 0 & \dots & 0 & 0\\ 0 & 1 & \dots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \dots & 1 & 0\\ 0 & 0 & \dots & 0 & e^{i\theta} \end{pmatrix}, \quad \theta \in [0, 2\pi).\tag{20.26}$$

*Proof* Reconstruction of any unitary $N\times N$ matrix from its principal $(N-1)\times(N-1)$ block in general involves two arbitrary phase parameters, but the matrix $S_{\mathbf v}(\infty) = \{s_{ij}\}$ is in addition Hermitian. This reduces the number of arbitrary parameters to one.

Let us describe the reconstruction procedure. The entries of any unitary matrix satisfy the normalisation and orthogonality conditions:

$$\begin{cases} \displaystyle\sum_{j=1}^{N} |s_{ij}|^2 = 1, \quad \sum_{i=1}^{N} |s_{ij}|^2 = 1,\\[1ex] \displaystyle\sum_{j=1}^{N} s_{ij}\overline{s_{lj}} = 0,\ i\ne l, \quad \sum_{i=1}^{N} s_{ij}\overline{s_{il}} = 0,\ j\ne l. \end{cases}$$

Assume that the principal $(N-1)\times(N-1)$ block of the matrix $S_{\mathbf v}(\infty)$ is known. Consider the last row of the matrix. The absolute values of $s_{Nj} = (S_{\mathbf v}(\infty))_{Nj}$, $j = 1,2,\dots,N-1$, can be calculated from the normalisation conditions. At least one of these numbers is different from zero, since otherwise the matrix $S_{\mathbf v}(\infty)$ is reducible. Consider any such nonzero element, say $s_{N1}$. All possible values of this element can be described by one real phase parameter $\alpha$ as $s_{N1} = s^0_{N1}e^{i\alpha}$, where $s^0_{N1}$ is any complex number with the prescribed absolute value. Then all other elements $s_{Nj}$, $j = 2,\dots,N-1$, can be calculated using the orthogonality conditions. In the same way one may consider the last column and introduce a certain parameter $\beta \in \mathbb R$ such that $s_{1N} = s^0_{1N}e^{i\beta}$. Then all elements $s_{jN}$, $j = 2,3,\dots,N$, are uniquely determined.

Let us summarise our calculations by stating the following result: the family of unitary matrices having the same principal $(N-1)\times(N-1)$ block can be described using two real parameters so that

$$S^{\alpha,\beta}_{\mathbf v}(\infty) = \mathcal R_\alpha\, S^{0,0}_{\mathbf v}(\infty)\, \mathcal R_\beta,\tag{20.27}$$

where $S^{0,0}_{\mathbf v}(\infty)$ is a certain particular member of the family.

Let us recall that the limit scattering matrix is not only unitary but also Hermitian (as follows from (3.31); its eigenvalues are equal to $\pm 1$). Assume that $S^{0,0}_{\mathbf v}(\infty)$ is Hermitian; then the matrix $S^{\alpha,\beta}_{\mathbf v}(\infty)$ is Hermitian if and only if $\beta = -\alpha$:

$$\left(S^{\alpha,\beta}_{\mathbf v}(\infty)\right)^* = \mathcal R_{-\beta}\underbrace{\left(S^{0,0}_{\mathbf v}(\infty)\right)^*}_{S^{0,0}_{\mathbf v}(\infty)}\mathcal R_{-\alpha}.$$

Summing up, all possible matrices *S***v***(*∞*)* having the same principal *(N* − 1*)* × *(N* − 1*)* block are described by formula (20.25). 
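Theorem 20.14 can be checked on a concrete example; the matrix below is an arbitrary Hermitian unitary reflection chosen for illustration:

```python
import numpy as np

# An arbitrary Hermitian unitary matrix: a Householder reflection S0.
# Every entry of S0 is nonzero for this v, so the matrix is irreducible.
v = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)
S0 = np.eye(3) - 2.0 * np.outer(v, v)

theta = 0.7
R = np.diag([1.0, 1.0, np.exp(1j * theta)])     # R_theta as in (20.26)
S = R @ S0 @ R.conj().T                          # R_theta S0 R_{-theta}

print(np.allclose(S, S.conj().T))                # True: still Hermitian
print(np.allclose(S @ S.conj().T, np.eye(3)))    # True: still unitary
print(np.allclose(S[:2, :2], S0[:2, :2]))        # True: same principal block
```

Conjugation by $\mathcal R_\theta$ preserves both the Hermitian unitary structure and the principal block, in agreement with (20.25).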

The assumption of Theorem 20.14 that $S_{\mathbf v}(\infty)$ is irreducible can be weakened. In fact we used just the fact that $\bigl(S_{\mathbf v}(\infty)\bigr)_{NN} \ne \pm 1$, in other words that $S_{\mathbf v}(\infty)$ is not block-diagonal with $(N-1)\times(N-1)$ and $1\times 1$ blocks.

In the following lemma we discuss the possibility to reconstruct the unitary matrix *S* determining the vertex condition at the root.

**Lemma 20.15** *Let $S$ be the unitary $N\times N$ matrix determining the vertex condition at the root of a bunch in a metric tree. Let the matrix $S$ be irreducible and let us denote by $N_{-1}$ its eigensubspace corresponding to the eigenvalue $-1$. Let $A$ be the corresponding matrix appearing in the Hermitian parametrisation of the vertex conditions (see* (3.28)*). Then the knowledge of the subspace $N_{-1}$ and of the $(N-1)\times(N-1)$ matrix*

$$\mathbf{P}(I - P\_{-1})A(I - P\_{-1})\mathbf{P} \tag{20.28}$$

*determines the unique matching condition, i.e. the unique matrix S.*

*Proof* Consider the $(N-1)\times(N-1)$ Hermitian matrix (20.28). We extend it to a Hermitian $N\times N$ matrix $\hat A = A \oplus O_{N_{-1}}$, where $O_{N_{-1}}$ is the zero matrix in the subspace $N_{-1}$. The kernel of $\hat A$ contains the whole subspace $N_{-1}$. Since $S_{\mathbf v}(\infty)$ is irreducible, the subspace $N_{-1}$ is not trivial and contains at least one vector with nonzero $N$-th component, since otherwise $\bigl(S_{\mathbf v}(\infty)\bigr)_{NN} = 1$ and $S_{\mathbf v}(\infty)$ is reducible. Applying the matrix $\hat A$ to this vector we should get the zero vector. This fact allows us to calculate the elements $\hat a_{jN}$, $j = 1,2,\dots,N-1$, of the last column in $\hat A$. Using the fact that $\hat A$ is Hermitian we reconstruct the last row except the element $\hat a_{NN}$, which again can be calculated using the fact that $\hat A$ maps every vector from $N_{-1}$ to the zero vector.

The result is

$$S = \frac{i I\_{N\_{-1}^\perp} + A}{i I\_{N\_{-1}^\perp} - A} \oplus (-1)I\_{N\_{-1}},\tag{20.29}$$

where $I_{N_{-1}^\perp}$ and $I_{N_{-1}}$ are the identity operators in $N_{-1}^\perp$ and $N_{-1}$ respectively, and $(iI_{N_{-1}^\perp} + A)(iI_{N_{-1}^\perp} - A)^{-1}$ is considered as a unitary operator in $N_{-1}^\perp$.
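The completion step in the proof can be sketched numerically; `complete_hermitian` is a hypothetical helper name, and the test matrix is an arbitrary projector chosen so that its kernel vector has a nonzero last entry:

```python
import numpy as np

def complete_hermitian(block, kernel_vec):
    """Complete a Hermitian N x N matrix from its principal (N-1) x (N-1)
    block and one kernel vector whose last entry is nonzero."""
    n = len(kernel_vec)
    v, vN = kernel_vec[:-1], kernel_vec[-1]
    A = np.zeros((n, n), dtype=complex)
    A[:-1, :-1] = block
    A[:-1, -1] = -(block @ v) / vN             # rows 1..N-1 of A @ kernel_vec = 0
    A[-1, :-1] = A[:-1, -1].conj()             # Hermitian symmetry
    A[-1, -1] = (-(A[-1, :-1] @ v) / vN).real  # last row of A @ kernel_vec = 0
    return A

k = np.array([1.0, 1.0j, 2.0])                      # kernel vector, k[-1] != 0
A_true = np.eye(3) - np.outer(k, k.conj()) / 6.0    # Hermitian, A_true @ k = 0
A = complete_hermitian(A_true[:2, :2], k)
print(np.allclose(A, A_true))  # True
```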

The previous lemma may give the impression that using the knowledge of the principal *(N* −1*)*×*(N* −1*)* block of *S***v***(*∞*)* and of **P***(I* −*P*−1*)A(I* −*P*−1*)***P** allows us to reconstruct unique matching conditions. This is not true, since the principal *(N* −1*)*×*(N* −1*)* block of *S***v***(*∞*)* allows one to reconstruct *S***v***(*∞*)* up to the unitary transformation (20.25), *i.e.* the subspace *N*−<sup>1</sup> is determined up to multiplication by *Rθ .* Choosing different possible subspaces *N*−<sup>1</sup> one gets different possible matrices *S* (described in fact by the same unitary transformation (20.25)).

We summarize our studies in the following theorem.

**Theorem 20.16** *Let* **T** *be a tree graph with a certain selected equilateral bunch formed by $N-1$ edges of length $\ell$ connected together at the root vertex $V^0$. Consider the Schrödinger operator $L = -\frac{d^2}{dx^2} + q(x)$ with zero potential on the bunch. Assume that the vertex conditions at the root $V^0$ are parametrised by a certain unitary matrix $S$ via* (3.21)*. If the limit scattering matrix $S_{\mathbf v}(\infty)$ is irreducible, then the $(N-1)\times(N-1)$ block of the response operator $\mathbf R^T$, $T > 2\ell$, determines the matching conditions at the root vertex up to the unitary transformation*

$$\mathcal{S}^{\theta} = \mathcal{R}\_{\theta} \mathcal{S}^{0} \mathcal{R}\_{-\theta}, \ \theta \in [0, \pi), \tag{20.30}$$

*where $S^0$ is any particular member of the family and $\mathcal R_\theta$ is determined by* (20.26)*.*

The theorem is proven under the assumption that $S_{\mathbf v}(\infty)$ is irreducible, but the statement holds true under the weaker assumption that just $S$ is irreducible (the corresponding proof is more involved). We omit this proof since, when reconstructing the metric tree, we already assumed that $S_{\mathbf v}(\infty)$ is irreducible.

## **20.5 Cleaning and Pruning Using the M-functions**

The three subproblems described above may give the impression that the inverse problem for trees is now solved completely. In fact only the metric tree is completely recovered: the potential is determined only on the pendant edges and the vertex conditions only at the root of the selected bunch.


The three described subproblems can be used as building blocks to solve the inverse problem for trees completely. In order to glue these blocks together we need to clarify two points: how to remove a known potential from the pendant edges, and how to cut away a bunch or branch whose characteristics are already known.


It turns out that answers to both questions can easily be given using the language of M-functions. Exploiting the connection (19.12) one obtains a relation between the response operators, but such a connection does not look straightforward anymore: it might be a challenging task to write the formulas connecting the two problems using the language of response operators instead of M-functions.

## *20.5.1 Cleaning the Edges*

We are not able to write an explicit formula connecting the response operators for Schrödinger and Laplace operators on the same tree, but following our methodology of reconstructing the quantum graph locally, we are interested in the formula relating the response operators or M-functions for two Schrödinger operators with zero and non-zero potentials on the pendant edges. Let $L^{\mathbf S}_q(\mathbf T)$ be an arbitrary Schrödinger operator on **T**. Let us denote by $q_0$ the potential obtained from $q$ by restricting it to the inner edges in **T**:

$$q_0|_{E_n} = \begin{cases} 0, & E_n \text{ is a pendant edge;} \\ q|_{E_n}, & E_n \text{ is not a pendant edge.} \end{cases} \tag{20.31}$$

We are going to call this procedure **cleaning** of the pendant edges (from the potential). This transformation will allow us to use Subproblem III to recover the vertex conditions at the root.
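The cleaning map (20.31) is a purely bookkeeping operation. As a toy illustration (the edge names and the representation of potentials as callables are our own, not from the text), it can be sketched as:

```python
# A toy sketch of the cleaning map q -> q_0 of (20.31).  The representation
# of the potential as a dict of callables and the edge names are hypothetical.
def clean(q, pendant_edges):
    """Return q_0: zero on the pendant edges, unchanged on the inner edges."""
    return {e: ((lambda x: 0.0) if e in pendant_edges else qe)
            for e, qe in q.items()}

q = {'E1': (lambda x: x), 'E2': (lambda x: 2.0 * x)}   # E1 pendant, E2 inner
q0 = clean(q, {'E1'})
assert q0['E1'](3.0) == 0.0 and q0['E2'](3.0) == 6.0
```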

**Theorem 20.17** *Let $q$ be an arbitrary real-valued $L_1$-potential on a given finite compact metric tree* **T** *with contact set $\partial\mathbf T$, and let $q_0$ be the restriction of the potential to the inner part of* **T** *(if any) as described by* (20.31)*. Assume that the potential $q$ on the pendant edges is known. Then the M-functions for $L_q(\mathbf T)$ and $L_{q_0}(\mathbf T)$ are in one-to-one correspondence.*

*Proof* We may clean the potential on the pendant edges one-by-one. Let us choose one pendant edge, say $E_1$. We denote by $\mathbf T_1$ the graph formed by the single edge $E_1$ having two pendant vertices $V^1 = \{x_1\}$ (also a pendant vertex in **T**) and $V^2 = \{x_2\}$. Let us denote by $\mathbf T_2$ the tree obtained from **T** by removing $E_1$. The set of contact points for $\mathbf T_2$ consists of all degree one vertices and the vertex, say $V^0$, to which the edge $E_1$ is attached in the original tree. One can get the original tree **T** by gluing together $\mathbf T_1$ and $\mathbf T_2$ by identifying the vertices $V^2$ (in $\mathbf T_1$) and $V^0$ (in $\mathbf T_2$). We denote by $q_1$ the restriction of the potential $q$ to $\mathbf T_1 \subset \mathbf T$. Formula (18.34) connects the M-functions for the graphs $\mathbf T$, $\mathbf T_1$, and $\mathbf T_2$ for $q$ and $q_1$ if one of the following identifications is made:

$$\begin{array}{lcl} \mathbf M_{\Gamma} &=& \mathbf M_{L_q(\mathbf T)} \ \text{ or } \ \mathbf M_{L_{q_1}(\mathbf T)};\\ \mathbf M_{\Gamma_1} &=& \mathbf M_{L_q(\mathbf T_1)} \ \text{ or } \ \mathbf M_{L_0(\mathbf T_1)};\\ \mathbf M_{\Gamma_2} &=& \mathbf M_{L_q(\mathbf T_2)}. \end{array}$$

The gluing set consists of just one vertex, hence in accordance with Theorem 18.21 any two out of the three M-functions determine the third one. The response operator for $L_q(\mathbf T)$ and the potential on $E_1$ determine the M-functions $\mathbf M_{L_q(\mathbf T)}$ and $\mathbf M_{L_q(\mathbf T_1)}$, so Theorem 18.21 implies that $\mathbf M_{L_q(\mathbf T_2)}$ is uniquely determined. Knowing $\mathbf M_{L_q(\mathbf T_2)}$ and $\mathbf M_{L_0(\mathbf T_1)}$ (corresponding to the zero potential on $E_1$ and given by (5.55)) we obtain $\mathbf M_{L_{q_1}(\mathbf T)}$ using formula (18.34) one more time.

Repeating this procedure as many times as there are pendant edges, we clean all pendant edges from the potential and conclude that the response operator for $L_{q_0}(\mathbf T)$ is uniquely determined by the response operator for $L_q(\mathbf T)$.

**Corollary 20.18** *If it is not assumed that* **T** *and the potential on the pendant edges are known, then $\mathbf M_{L_q(\mathbf T)}$ determines $\mathbf M_{L_{q_0}(\mathbf T)}$, but not the other way around.*

## *20.5.2 Pruning Branches and Bunches*

We assume now that one of the bunches in **T** is identified, the potential on the pendant edges from the bunch is determined and the vertex conditions at the root of the bunch are recovered using Subproblem III. One may say that for a certain bunch all its characteristics are known: the geometric structure (the number of edges and their lengths), the potential on the bunch and the vertex conditions at the root (up to the phase parameter discussed in Sect. 20.4). Then the M-function for **T** determines the M-function for the tree $\mathbf T_2$ obtained from **T** by cutting away the bunch. We call this procedure **pruning the tree** since it resembles what gardeners do every autumn—remove dead and undesirable branches from their fruit trees. Our method works equally well if not just a bunch but a whole branch is known. By a **branch** we mean a subtree of a metric tree which is connected to the rest of the tree at just one vertex, called the branch root. All degree one vertices in the branch, except the root, belong to the contact set of the original tree.

The reconstruction procedure is again based on Theorem 18.21.

**Theorem 20.19** *Let $L_q(\mathbf T)$ be a Schrödinger operator on a certain finite compact metric tree* **T** *with the contact set $\partial\mathbf T$ given by all degree one vertices. Assume that one of the branches in* **T***, say the subtree $\mathbf T_1$, is known together with the potential on it and the vertex conditions at all its vertices including the root. Let us denote by $\mathbf T_2$ the tree obtained by cutting away the selected branch $\mathbf T_1$ from* **T***. Then the M-functions for $L_q(\mathbf T)$ and $L_q(\mathbf T_2)$ are in one-to-one correspondence.*

*Proof* The proof follows the same lines as the proof of Theorem 20.17. The reason is very simple: the original tree **T** is again obtained from **T**<sup>1</sup> and **T**<sup>2</sup> by gluing them at a single vertex. The only difference is that the graph **T**<sup>1</sup> is slightly more complicated—it is a subtree, but formula (18.34) is still valid and Theorem 18.21 can be used. 

We may take into account that the response operator determines bunches in a tree leading to the following corollary.

**Corollary 20.20** *Assume that the response operator for a Schrödinger operator on a metric tree is known. Let* **T**<sup>1</sup> *be any bunch in* **T***. Then the response operator for* **T** *determines the response operator for the pruned tree* **T**<sup>2</sup> *obtained from* **T** *by cutting away* **T**1*.*

## **20.6 Complete Solution of the Inverse Problem for Trees**

In this section we describe in detail how the inverse problem on a finite compact metric tree can be solved by combining the subproblems I-III with cleaning and pruning procedures.

Let $L^{\mathbf S}_q(\mathbf T)$ be a Schrödinger operator on a finite compact metric tree **T** with the vertex conditions determined by certain irreducible unitary matrices $S^m$ at the internal vertices and standard conditions on the contact set $\partial\mathbf T$ formed by all degree one vertices. Assume that the response operator $\mathbf R^T$ associated with $\partial\mathbf T$ is known for times $T$ just greater than the diameter of the tree.

Assume that a certain bunch in **T** is selected. Let us denote by $\mathbf T'$ the metric tree obtained from the original tree **T** by removing the selected bunch. The cutting bunches procedure consists essentially in the calculation of the response operator associated with the new tree $\mathbf T'$ from the original response operator. To perform this reduction we are going to use formula (19.12) connecting the response operator to the M-function.

**Step 1: Selecting a Bunch in the Tree** As described in Sect. 20.2 we are able either to reconstruct the whole tree **T** or to select a bunch directly using just a block of the response operator (Subproblem I).

**Step 2: Reconstruction of the Potential on the Bunch** Every edge from the bunch is a pendant edge and the potential on it can be recovered from the corresponding diagonal element of the response operator as described in Sect. 20.3 (Subproblem II).

**Step 3: Cleaning of the Bunch—Removing the Potential** Let $q_0$ be the restriction of the original potential $q$ to $\mathbf T'$. Then knowing the response operator for $q$ one may calculate the response operator for $q_0$ (see Sect. 20.5.1).

**Step 4: Trimming of the Bunch** This step is described in Sect. 20.4.1 and allows us to determine the response operator for **T** with the selected bunch now being equilateral.

**Step 5: Recovering Vertex Conditions at the Root** Knowing the block of the response operator associated with the equilateral bunch allows us to reconstruct the vertex conditions at the root as described in Sect. 20.4.2 (Subproblem III). The reconstruction contains one free phase parameter that cannot be avoided (see Observation 20.2).

**Step 6: Cutting Away the Bunches—Pruning of the Tree** Knowing the vertex conditions at the root and the response operator associated with **T** one may recover the response operator for $\mathbf T'$ with standard conditions at the new pendant vertex $V^0$ (see Sect. 20.5.2).

Repeating this procedure sufficiently many times we not only recover the whole metric tree **T**, but also the potential $q$ on the whole tree and the vertex conditions at the inner vertices. The vertex conditions are recovered up to $M - M_\partial - 1 = N - M_\partial$ phases.
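The combinatorial skeleton of the six steps above can be sketched as a loop over bunches. The following toy code (graph representation and vertex names are our own, and all spectral steps are reduced to comments) only illustrates the order in which bunch roots would be processed:

```python
# A toy sketch of the peeling loop (geometric part only): Step 1 selects a
# bunch root (an internal vertex all of whose neighbours, except at most one,
# are leaves); Steps 2-5, which need spectral data, are left as comments;
# Step 6 prunes the bunch.
from collections import defaultdict

def peel(edge_list):
    """Return the order in which bunch roots of a tree are processed."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v); adj[v].add(u)
    order = []
    while len(adj) > 2:
        # Step 1: select a bunch root
        root = next(v for v in adj
                    if len(adj[v]) >= 2
                    and sum(len(adj[w]) == 1 for w in adj[v]) >= len(adj[v]) - 1)
        # Steps 2-5: recover the potential on the bunch, clean, trim,
        # recover the vertex conditions at the root (response operator needed)
        leaves = [w for w in adj[root] if len(adj[w]) == 1]
        for w in leaves:                 # Step 6: prune the bunch
            adj[root].discard(w); del adj[w]
        order.append(root)
    return order

# a caterpillar tree: two bunches, rooted at 'a' and then at 'd'
assert peel([('a', 'b'), ('a', 'c'), ('a', 'd'), ('d', 'e'), ('d', 'f')]) == ['a', 'd']
```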

See Fig. 20.7, where we illustrate how the inverse problem for a tree may be solved by peeling branches one-by-one. Red colour indicates the branch deleted at each step.

We summarise our studies in the following theorem.

**Fig. 20.7** Peeling trees

**Theorem 20.21** *Let $L^{\mathbf S}_q(\mathbf T)$ be the Schrödinger operator $-\frac{d^2}{dx^2} + q(x)$ defined on a finite compact metric tree* **T** *and let* $\mathbf S = \{S^1, S^2, \dots, S^M\}$ *collect the unitary matrices determining the vertex conditions. Assume in addition that*


*Then the Titchmarsh-Weyl M-function, or equivalently the dynamical response operator, associated with the contact vertices ∂***T** *uniquely determines*


Note that the proposed reconstruction procedure is local and explicit. The first two conditions may easily be relaxed by allowing $q$ to be just integrable, $q \in L_1(\mathbf T)$, and $S^m_{\mathbf v}(\infty)$ to have a few zero entries. The fourth condition is not essential and is related to the formula used to define the M-function.

# **Appendix 1: Calculation of the M-function for the Cross Graph**

The matrix *S*<sup>1</sup> is chosen Hermitian and irreducible; therefore the corresponding vertex scattering matrix is energy independent. It follows that the vertex conditions can be written using two projectors (see (3.33))

$$
\begin{pmatrix} -1 & \alpha & 0 & \beta \\ \alpha & -1 & \beta & 0 \end{pmatrix} \begin{pmatrix} u(x_2) \\ u(x_4) \\ u(x_6) \\ u(x_8) \end{pmatrix} = \vec{0}, \quad \begin{pmatrix} 1 & \alpha & 0 & \beta \\ \alpha & 1 & \beta & 0 \end{pmatrix} \begin{pmatrix} u'(x_2) \\ u'(x_4) \\ u'(x_6) \\ u'(x_8) \end{pmatrix} = \vec{0}. \tag{20.32}
$$

To calculate the M-function we have to consider any solution to the equation $-u'' = \lambda u$, $\lambda = k^2$, $\operatorname{Im}\lambda \neq 0$, satisfying the vertex conditions at the central vertex. Then the M-function connects the boundary values of the solution:

$$
\begin{pmatrix} u'(x_1) \\ u'(x_3) \\ u'(x_5) \\ u'(x_7) \end{pmatrix} = \mathbf{M}(\lambda) \begin{pmatrix} u(x_1) \\ u(x_3) \\ u(x_5) \\ u(x_7) \end{pmatrix}. \tag{20.33}
$$

Every solution to the differential equation can be written as a combination of sin and cos functions

$$u(x) = p_j \sin k(x - x_{2j-1}) + q_j \cos k(x - x_{2j-1}), \quad x \in (x_{2j-1}, x_{2j}), \quad p_j, q_j \in \mathbb{C}.$$

Substitution into the vertex conditions (20.32) yields

$$P\vec{p} + \mathcal{Q}\vec{q} = \vec{0}$$

with

$$P = \begin{pmatrix} -\sin kl_1 & \alpha \sin kl_2 & 0 & \beta \sin kl_4 \\ \alpha \sin kl_1 & -\sin kl_2 & \beta \sin kl_3 & 0 \\ -\cos kl_1 & -\alpha \cos kl_2 & 0 & -\beta \cos kl_4 \\ -\alpha \cos kl_1 & -\cos kl_2 & -\beta \cos kl_3 & 0 \end{pmatrix}$$

and

$$\mathcal{Q} = \begin{pmatrix} -\cos kl_1 & \alpha \cos kl_2 & 0 & \beta \cos kl_4 \\ \alpha \cos kl_1 & -\cos kl_2 & \beta \cos kl_3 & 0 \\ \sin kl_1 & \alpha \sin kl_2 & 0 & \beta \sin kl_4 \\ \alpha \sin kl_1 & \sin kl_2 & \beta \sin kl_3 & 0 \end{pmatrix}.$$

Taking into account that $u(x_{2j-1}) = q_j$, $u'(x_{2j-1}) = kp_j$, $j = 1, 2, 3, 4$, we conclude that the M-function is given by

$$\mathbf{M}(\lambda) = -kP^{-1}\mathcal{Q}.$$

The determinant of *P* is

$$\det P = \beta^2 \left( \alpha^2 \sin k(l_1 - l_3) \sin k(l_2 - l_4) - \sin k(l_1 + l_4) \sin k(l_2 + l_3) \right), \tag{20.34}$$

which explains the following short notations (extending (20.6)):

$$\begin{cases} c_j := \cos kl_j, & s_j := \sin kl_j, \\ c_{i \pm j} := \cos k(l_i \pm l_j), & s_{i \pm j} := \sin k(l_i \pm l_j), \end{cases} \quad i, j = 1, 2, 3, 4. \tag{20.35}$$

Then the inverse matrix is

$$P^{-1} = \frac{1}{\beta^2\left(\alpha^2 s_{1-3}s_{2-4} - s_{1+4}s_{2+3}\right)} \begin{pmatrix} \beta^2 s_{2+3}c_4 & \alpha\beta^2 s_{2-4}c_3 & \beta^2 s_{2+3}s_4 & \alpha\beta^2 s_{2-4}s_3 \\ \alpha\beta^2 s_{1-3}c_4 & \beta^2 s_{1+4}c_3 & \alpha\beta^2 s_{1-3}s_4 & \beta^2 s_{1+4}s_3 \\ -\alpha\beta s_{1+2}c_4 & -\beta\left(\alpha^2 s_{2-4}c_1 + s_{1+4}c_2\right) & -\alpha\beta s_{1+2}s_4 & -\beta\left(\alpha^2 s_{2-4}s_1 - s_{1+4}s_2\right) \\ -\beta\left(\alpha^2 s_{1-3}c_2 + s_{2+3}c_1\right) & -\alpha\beta s_{1+2}c_3 & -\beta\left(\alpha^2 s_{1-3}s_2 - s_{2+3}s_1\right) & -\alpha\beta s_{1+2}s_3 \end{pmatrix}. \tag{20.36}$$
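As a sanity check, the determinant (20.34) and the inverse (20.36) can be verified numerically for generic parameter values (the particular numbers below are arbitrary test values of ours):

```python
# Numerical sanity check of (20.34) and (20.36) for generic parameters.
import numpy as np

k, a, b = 1.3, 0.5, 0.6                           # k, alpha, beta
l = np.array([0.7, 1.1, 0.9, 1.4])                # l_1, ..., l_4
s, c = np.sin(k * l), np.cos(k * l)               # s_j, c_j of (20.35)
sp = lambda i, j: np.sin(k * (l[i-1] + l[j-1]))   # s_{i+j}
sm = lambda i, j: np.sin(k * (l[i-1] - l[j-1]))   # s_{i-j}

P = np.array([[-s[0],   a*s[1],  0,       b*s[3]],
              [ a*s[0], -s[1],   b*s[2],  0     ],
              [-c[0],  -a*c[1],  0,      -b*c[3]],
              [-a*c[0], -c[1],  -b*c[2],  0     ]])

det = b**2 * (a**2 * sm(1, 3) * sm(2, 4) - sp(1, 4) * sp(2, 3))   # (20.34)
assert abs(np.linalg.det(P) - det) < 1e-12

A = np.array([                                                     # (20.36)
    [ b**2*sp(2,3)*c[3],   a*b**2*sm(2,4)*c[2],  b**2*sp(2,3)*s[3],   a*b**2*sm(2,4)*s[2]],
    [ a*b**2*sm(1,3)*c[3], b**2*sp(1,4)*c[2],    a*b**2*sm(1,3)*s[3], b**2*sp(1,4)*s[2]],
    [-a*b*sp(1,2)*c[3], -b*(a**2*sm(2,4)*c[0] + sp(1,4)*c[1]),
     -a*b*sp(1,2)*s[3], -b*(a**2*sm(2,4)*s[0] - sp(1,4)*s[1])],
    [-b*(a**2*sm(1,3)*c[1] + sp(2,3)*c[0]), -a*b*sp(1,2)*c[2],
     -b*(a**2*sm(1,3)*s[1] - sp(2,3)*s[0]), -a*b*sp(1,2)*s[2]]])
assert np.allclose(A / det, np.linalg.inv(P))
```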

## **Appendix 2: Calderón Problem**

Let $L_{\rm st}(\mathbf T)$ be the standard Laplacian on any finite compact metric tree **T**. Then the metric tree is uniquely determined by the value of the M-function at the origin. This inverse problem can be called a *Calderón problem* since $\mathbf M(0)$ maps Dirichlet to Neumann data for any solution of the Laplace equation on the tree.

Assume that **T** is a connected metric tree and **M***(*0*)* is the Dirichlet-to-Neumann map associated with all degree one vertices and the standard Laplacian on **T**. Then:


$$
\vec{e}^{\,ij} = (0, 0, \ldots, 0, \underbrace{1}_{i}, 0, \ldots, \underbrace{-1}_{j}, 0, \ldots, 0).
$$

Clearly $\vec e^{\,ij} \in \mathbf 1^{\perp}$ since $\left\langle \vec e^{\,ij}, \mathbf 1 \right\rangle = 0$.

(4) Consider the vector $\vec u^{\,ij} = \mathbf M^{-1}(0)\,\vec e^{\,ij}$.

(5) The vectors $\vec u^{\,ij}$ and $\partial \vec u^{\,ij} = \vec e^{\,ij}$ determine the unique function $u$ on **T** satisfying the equation $u'' = 0$ on the edges:

– *u* is equal to a linear function with slope 1 on the path connecting the terminating vertices *i* and *j* ;


$$d_{ij} = u_j^{ij} - u_i^{ij}.$$

(6) The distances between all terminating points in a tree determine the tree, as we have already seen in Lemma 20.6.
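Steps (4) and (5) can be illustrated on the simplest tree, a 3-star, where $\mathbf M(0)$ is easy to assemble by hand: harmonic functions are linear on the edges and the central value is fixed by the standard (Kirchhoff) condition. The set-up below is a toy example of ours, not taken from the text; since constants lie in the kernel of $\mathbf M(0)$, the equation is solved with a pseudoinverse.

```python
# Toy illustration of steps (4)-(5) on a 3-star with edge lengths l.
import numpy as np

def star_M0(l):
    """Dirichlet-to-Neumann map at lambda = 0 for a star graph."""
    l = np.asarray(l, dtype=float)
    n = len(l)
    M = np.zeros((n, n))
    for j in range(n):
        f = np.zeros(n); f[j] = 1.0
        c = np.sum(f / l) / np.sum(1.0 / l)    # central value (Kirchhoff condition)
        M[:, j] = (c - f) / l                  # inward normal derivatives
    return M

l = (1.0, 2.0, 3.0)
M0 = star_M0(l)
assert np.allclose(M0 @ np.ones(3), 0.0)       # constants lie in the kernel

e12 = np.array([1.0, -1.0, 0.0])               # e^{12}, orthogonal to 1
u = np.linalg.pinv(M0) @ e12                   # step (4)
d12 = u[1] - u[0]                              # step (5): d_ij = u_j - u_i
assert abs(d12 - (l[0] + l[1])) < 1e-10        # recovers the distance l_1 + l_2
```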

Thus we have proven

**Theorem 20.22 ([237])** *The Dirichlet-to-Neumann map* **M***(*0*) for the standard Laplacian on a finite compact metric tree determines the metric tree uniquely.* 

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 21 Boundary Control for Graphs with Cycles: Dismantling Graphs**

The goal of this chapter is two-fold: we first describe the general strategy to solve the inverse problems for graphs with cycles; the second part describes how the classical BC-method may be applied to such graphs.

# **21.1 Inverse Problems for Graphs with Cycles: Boundary Control Versus Magnetic Boundary Control**

This and the following two chapters are devoted to the solution of the inverse problem for graphs with cycles. We shall always assume that together with the metric graph $\Gamma$ we fix a certain set of contact vertices $\partial\Gamma$ and use the corresponding M-function as the spectral data. If the contact set is large (for example contains all vertices), then the inverse problem is overdetermined. Therefore it is natural to look for **optimal** contact sets, *i.e.* the smallest sets ensuring unique solvability of the inverse problem. We already know that for trees the set of all boundary vertices except one is optimal. For example in the case of the 3-star graph, it is enough to know the M-function associated with just two boundary vertices (instead of 4 vertices). It turns out that the optimal sets are difficult to characterise in the presence of cycles, therefore we shall often be working with sets which are close to optimal (resembling the set of all boundary vertices in the case of trees). For clarity of the presentation we shall generally assume standard vertex conditions, except where introducing other Hermitian conditions does not influence the result.

In what follows we shall assume that the contact set $\partial\Gamma$ contains all degree one vertices. With this convention the contact set will not be optimal even in the case of trees; to make it optimal only one point has to be removed. On the other hand this convention will drastically simplify our studies for the following reason: using locality of the BC-method all branches in a graph can be reconstructed and the inverse problem may be reduced to a smaller graph. Let $\mathbf T \subset \Gamma$ be any branch (subtree) in the original graph $\Gamma$ with $|\partial\mathbf T|$ pendant vertices. All pendant vertices in $\partial\mathbf T$ except one also belong to $\partial\Gamma$. Hence the M-function for $\Gamma$, or more precisely its block associated with $\partial\mathbf T \cap \partial\Gamma$, determines the subtree and the potential on it.

Every metric graph with several cycles can be seen as a collection of trees attached to a subgraph without degree one vertices. Such a maximal subgraph will be called the **graph's core** (see Fig. 21.1, where all contact vertices are marked by dots distinguishing between the degree one (pendant) vertices (the black dots) and higher degree vertices (the red dots)). The graphs which coincide with their core will be called **pendant free** as they lack degree one vertices.

As a result the inverse problem is reduced to the graph's core with the contact set given by all original contact vertices that do not have degree one, and the formerly internal vertices to which the branches were attached. All new contact vertices can be seen as descendants of some degree one contact vertices in the original graph. One may say that the contact set for the core is inherited from the contact set on the original graph. Keeping in mind the reduction just described, we limit our studies to pendant free graphs, unless otherwise explicitly stated.

We have seen that dependence of the spectral properties on the magnetic potential is rather explicit—only fluxes of the magnetic field through the cycles play a role. This is well-known as the celebrated Aharonov-Bohm effect [9, 176]. Therefore the following approach appears attractive: reconstruct the metric graph and electric potential $q$ from the M-function considered as a function not only of the spectral parameter $\lambda$ but also of the magnetic fluxes through the cycles. In this way the contact set required to solve the problem sometimes may be drastically reduced: we shall see examples where one contact vertex is enough to solve the inverse problem for a complicated graph with several cycles. We call this new approach the **Magnetic Boundary Control-method** (MBC-method) since it uses ideas from the Boundary Control-method (BC-method) enriched by adding nontrivial magnetic fields. As in the case of trees, the connection between the response operator and the M-function will be intensively exploited.

We start our studies in this chapter by solving the inverse problem using the traditional BC-method assuming that the magnetic potential is zero. Our driving idea will be reduction of the problem to a tree or a set of trees, taking into account that the inverse problem for trees is already solved (see Chap. 20). It is effective to look at the graph globally, assuming that cutting the graph at the contact set turns it into a set of trees. This procedure will be called **dismantling graphs** and it reduces the inverse problem for general graphs to the inverse problem for trees. To determine the M-functions for subtrees we use two ideas: hierarchy of the M-functions described in Sect. 17.3, and the explicit representations for quantum graph M-functions (17.26) and (17.37). To make this procedure work one has to assume that any two subtrees have at most one vertex in common and that their spectra are disjoint.

After that, in Chaps. 22 and 23, we consider the MBC-method. It seems natural to start with graphs having just one cycle, but it turns out that this case is slightly more difficult than the case of several cycles. Therefore we start by looking at graphs with several cycles in Chap. 22. Our approach is based on an operation we call **dissolving vertices**. Under this operation a graph $\Gamma$ with a vertex $V^0$ of degree $d_0 \geq 3$ is transformed into a new graph with the same set of edges and the same vertices except $V^0$, which is split into $d_0$ degree one vertices. More precisely, the equivalence class corresponding to the vertex $V^0$ is substituted with $d_0$ single-element equivalence classes. All other vertices remain unchanged. In this way all edges joined at $V^0$ become pendant in the new graph. Using the dependence on the magnetic fluxes, the M-function for the new graph can be determined, provided $V^0 \in \partial\Gamma$ and that it is not a *bottleneck* (which we will define in Definition 22.8). Applying the peeling procedure to the pendant edges in the new graph, the original graph reduces to a strictly smaller subgraph. Repetition of this procedure leads to two possible scenarios:


In other words, not all graphs can be reconstructed starting from just one contact vertex—this is not surprising. We call the subgraph that is reconstructable starting from a certain contact vertex the **infiltration domain**. To accomplish the reconstruction one has to start again from another contact vertex and repeat the described procedure until the whole of $\Gamma$ is recovered or the remaining subgraph is easy to handle.

Finally, in Chap. 23 we first solve the inverse problem for the loop and lasso graphs. This result implies in particular that loops in arbitrary graphs cause non-uniqueness, which cannot be removed using the trick with the magnetic flux. We proceed to arbitrary graphs with one cycle in order to illustrate how the MBC-method works, and prove its effectiveness in the case of two vertices on the cycle and its redundancy for any higher number of vertices on the cycle. The reason for redundancy is simple: any three points on a cycle dismantle it into three parts, each pair having one common vertex, allowing one to solve the inverse problem using the conventional BC-method (without involving magnetic fluxes).

We have thus described our strategy towards solution of the inverse problem for graphs with cycles. To guarantee a unique solution of the problem we shall use two types of additional conditions:


These conditions will not always be optimal, but the necessity of such conditions will be clear from the explicit examples.

## **21.2 Dismantling Graphs I: Independent Subtrees**

## *21.2.1 General Strategy*

In this section we are going to assume that the M-function for a Schrödinger operator on a finite compact pendant free metric graph $\Gamma$ is given. It is associated with a given nonempty set of contact points $\partial\Gamma$. As before our aim is to recover the metric graph, the (electric) potential $q$ and the vertex conditions. We are going to assume standard vertex conditions at all contact vertices, hence only the vertex conditions at the internal vertices (not from $\partial\Gamma$) need to be recovered.

To solve the inverse problem we are going to reduce it to the inverse problem on a collection of (sub)trees spanning the original metric graph. The inverse problem for trees is already solved (see Chap. 20). The reduction can be formally divided into two steps:

• **geometric reduction**:

dismantling the original metric graph into a collection of subtrees;

• **analytic reduction**: recovering the M-functions for the subtrees from the M-function for the original graph.

Let us remember that the contact set $\partial\mathbf T$ for a tree **T** contains all its pendant vertices. It is clear that these two steps cannot be separated from each other, especially if one is interested in obtaining optimal results: given a metric graph $\Gamma$ one is interested in finding the smallest contact set $\partial\Gamma$ which guarantees solvability of the inverse problem. The interconnection between the two reductions is sophisticated, hence switching between the two strategies one may obtain rather different results.
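The geometric reduction can be illustrated combinatorially: cutting the graph at a contact vertex splits that vertex into one copy per incident edge, after which one checks whether the result is a forest. The representation below (edge lists and a union-find cycle test) is a toy sketch of ours:

```python
# A combinatorial toy model of dismantling: split each contact vertex into
# one new vertex per incident edge, then test whether the result is a forest.
from collections import defaultdict

def dismantle(edges, contacts):
    """Split every contact vertex into one new vertex per incident edge."""
    new_edges, count = [], defaultdict(int)
    for u, v in edges:
        cut_u = (u, count[u]) if u in contacts else u
        if u in contacts: count[u] += 1
        cut_v = (v, count[v]) if v in contacts else v
        if v in contacts: count[v] += 1
        new_edges.append((cut_u, cut_v))
    return new_edges

def is_forest(edges):
    """Union-find: the edge set is acyclic iff no edge joins one component to itself."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                        # found a cycle
        parent[ru] = rv
    return True

cycle = [(1, 2), (2, 3), (3, 1)]                # a triangle: one cycle
assert not is_forest(cycle)
assert is_forest(dismantle(cycle, {1}))         # one cut vertex suffices here
```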

The analytic reduction will be based on the explicit formulas (17.26) and (17.37). In order to apply these formulas to inverse problems we shall use the following two fundamental results:

(1) for each singularity of the M-function at least two diagonal entries are singular, provided the metric graph is a tree;

(2) the singularities and the corresponding residues uniquely determine the M-functions.

The first of these results follows from the fact that each Dirichlet eigenfunction on a metric tree is non-zero close to at least two pendant vertices.

## *21.2.2 M-functions and Their Singularities*

In this section we prove the aforementioned two important facts concerning M-functions and their singularities.

**Lemma 21.1** *Let* **T** *be a metric tree with the Schrödinger operator $L_q(\mathbf T)$ determined by standard conditions at the boundary vertices $\partial\mathbf T$ and arbitrary Hermitian conditions at the internal vertices, and let $\mathbf M(\lambda)$ be the M-function associated with all pendant vertices in* **T***. Then it holds that*


*Proof* Let us prove the lemma for *standard vertex conditions* first. Consider any Dirichlet eigenfunction $\psi^{\rm D}_n$ on the tree **T**. Assume that all its normal derivatives vanish at all boundary vertices. This eigenfunction is identically zero on all pendant edges as a solution to the second-order differential equation satisfying zero Cauchy data (both the function value and the derivative are zero at the corresponding pendant vertices). Consider any vertex $V^0$ of degree $d_0$ to which at least $d_0 - 1$ pendant edges are attached. Standard conditions imply that $\psi^{\rm D}_n(V^0) = 0$ and the normal derivative on the unique non-pendant edge emanating from $V^0$ is also zero. Hence $\psi^{\rm D}_n$ is identically equal to zero on that edge as well. Continuing in this way we conclude that $\psi^{\rm D}_n$ is identically zero on all of **T**. Thus statement (1) is proven.

To prove statement (2) it is enough to note that the above procedure can be carried out even in the case where we initially know that $\psi^{\rm D}_n$ has zero normal derivative at all pendant vertices except one.

Statement (3) then follows from the explicit formula (17.37): if two normal derivatives of $\psi^{\rm D}_n$ are nonzero, then at least two diagonal elements of $\mathbf M(\lambda)$ have singularities at $\lambda^{\rm D}_n$.

To prove the lemma for *general vertex conditions* we remember that we always assume that these conditions at each vertex $V^m$ are given by irreducible unitary matrices $S^m$ via (4.8). Therefore it is enough to show the following fact: if $V^0$ is a vertex of degree $d_0$ and the eigenfunction $\psi^{\rm D}_n$ is identically equal to zero on $d_0 - 1$ edges joined at $V^0$, then the eigenfunction is also identically zero on the remaining edge joined at $V^0$. The function $\psi^{\rm D}_n$ satisfies the vertex conditions at $V^0$ if either


The second possibility does not occur since *S<sup>m</sup>* is always assumed irreducible. Zero Cauchy data implies that the eigenfunction is identically zero on the remaining edge. It remains to repeat the procedure as done above.

Lemma 21.1 cannot be generalised to include arbitrary graphs with cycles due to possible invisible eigenfunctions with support not overlapping with the contact set.

Assume that all singularities of the M-function are known. Then formula (17.37) allows one to reconstruct the M-function up to a constant matrix **M***(λ )*. In the following lemma we are going to prove explicit asymptotics for the M-function allowing us to determine the constant matrix **M***(λ )*.

**Lemma 21.2** *Under the assumptions of Lemma 21.1, the following asymptotic representation for the M-function holds:* 

$$\mathbf{M}(-s^2) = -sI + o(1), \quad s \to \infty. \tag{21.1}$$

*Proof* Formula (19.12) relates the kernel of the response operator to the M-function. In particular the asymptotics of **M***(λ)* is determined by the short time behaviour of the response operator. For short times (less than double the length of the shortest pendant edge) the response operator for any tree coincides with the diagonal response operator for a collection of |*∂***T**| intervals with the potential inherited from the pendant edges. Therefore for short times the response operator can be written as an integral convolution operator with the generalised kernel

$$-\delta'(t)I + \mathrm{diag}\left\{r_j(t)\right\},\tag{21.2}$$

where the $r_j(t)$ are the kernels of the scalar response operators and are locally integrable [43].

To use (19.12) we need to take the Laplace transform. Let us first prove the representation (21.1) in the case that $r_j$ is an $L_1$-function. For any $\epsilon > 0$, by taking $\delta$ sufficiently small we can ensure that $\int_0^\delta |r_j(t)|\, dt \le \epsilon/2$, leading to

$$\left| \widehat{r}_j(s) \right| = \left| \int_0^\infty r_j(t) e^{-st}\, dt \right| \le \underbrace{\int_0^\delta |r_j(t)|\, dt}_{\le\, \epsilon/2} + \underbrace{e^{-\delta s} \int_\delta^\infty |r_j(t)|\, dt}_{\le\, \|r_j\|_{L_1} e^{-\delta s}}.$$

Then for $s \ge \frac{1}{\delta} \ln \frac{2 \|r_j\|_{L_1}}{\epsilon}$ the second term is also less than $\epsilon/2$, so that $|\widehat{r}_j(s)| \le \epsilon$. The Laplace transform of $\delta'(t)$ is just $s$.
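The decay of the Laplace transform of an $L_1$ kernel can be checked numerically. The following Python sketch uses an exponential kernel and quadrature parameters that are purely illustrative assumptions (not taken from the text); the exact transform of $e^{-t}$ is $1/(s+1)$, which tends to zero as $s \to \infty$ exactly as claimed.

```python
import math

def laplace(r, s, T=50.0, n=100000):
    # crude trapezoidal approximation of int_0^T r(t) e^{-st} dt
    h = T / n
    total = 0.5 * (r(0.0) + r(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += r(t) * math.exp(-s * t)
    return h * total

# an L1 kernel chosen for illustration; its exact transform is 1/(s + 1)
r = lambda t: math.exp(-t)

for s in (1.0, 10.0, 100.0):
    # the transform tends to 0 as s -> infinity, as claimed for L1 kernels
    print(s, laplace(r, s))
```
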

Essentially the same calculations lead to formula (21.1) in the case of the interval $[0, \ell]$ with two contact points $x_1 = \ell$ and $x_2 = 0$:

$$\mathbf{M}_{[0,\ell]}(-s^2) = \begin{pmatrix} M^{11}_{[0,\ell]} & M^{12}_{[0,\ell]} \\ M^{21}_{[0,\ell]} & M^{22}_{[0,\ell]} \end{pmatrix} = -sI + o(1), \quad s \to \infty,\tag{21.3}$$

where *I* is the unit 2 × 2 matrix. The kernel of the response operator contains *δ* singularities corresponding to the reflections from the opposite endpoints, but these singularities are delayed and therefore do not contribute to the asymptotics of the M-function.

Consider now an arbitrary metric tree $\mathbf{T}$ with $|\partial \mathbf{T}|$ pendant vertices. We choose any $\ell > 0$ less than the length of every pendant edge. Let us denote by $\mathbf{T}_2$ the tree obtained from $\mathbf{T}$ by cutting away an interval of length $\ell$ from each of the pendant edges. Then the original tree $\mathbf{T}$ can be seen as the union of $\mathbf{T}_2$ and its complement $\mathbf{T}_1$ in $\mathbf{T}$:

$$\mathbf{T} = \mathbf{T}\_1 \cup \mathbf{T}\_2.$$

The graph $\mathbf{T}_1$ is just a union of $|\partial \mathbf{T}|$ edges of length $\ell$ and the corresponding M-function can be written in block form (in analogy with (21.3)):

$$\mathbf{M}_1(-s^2) = \begin{pmatrix} M_1^{11} & M_1^{12} \\ M_1^{21} & M_1^{22} \end{pmatrix} = -sI + o(1), \quad s \to \infty,\tag{21.4}$$

where the matrices $M_1^{ij}$ have dimension $|\partial \mathbf{T}| \times |\partial \mathbf{T}|$ and the unit matrix $I$ has dimension $2|\partial \mathbf{T}| \times 2|\partial \mathbf{T}|$. We are going to use Lemma 18.20 and in particular formula (18.34). In our notation the tree $\mathbf{T}_1$ has $2|\partial \mathbf{T}|$ contact points: the first $|\partial \mathbf{T}|$ points correspond to the inner points on the pendant edges and the second $|\partial \mathbf{T}|$ points form the contact set for $\mathbf{T}$. The graph $\mathbf{T}_2$ has $|\partial \mathbf{T}|$ contact points, all corresponding to the inner points on the pendant edges in $\mathbf{T}$. Formula (18.34) does not acquire block structure in the current case, as all contact vertices in $\mathbf{T}$ come from the contact vertices in $\mathbf{T}_1$:

$$\mathbf{M}_{\mathbf{T}}(\lambda) = M_1^{22}(\lambda) - M_1^{21}(\lambda) \left( M_1^{11}(\lambda) + \mathbf{M}_2(\lambda) \right)^{-1} M_1^{12}(\lambda),$$

where $\mathbf{M}_2(\lambda)$ is the M-function for $\mathbf{T}_2$. The matrix valued functions $M_1^{11}(-s^2)$ and $M_1^{22}(-s^2)$ are asymptotically close to $-sI$; this follows from (21.4). The explicit representation (17.26) implies that the matrix valued function $\mathbf{M}_2(-s^2)$ is negative definite for sufficiently large $s$. Hence the inverse matrix function $\left( M_1^{11}(-s^2) + \mathbf{M}_2(-s^2) \right)^{-1}$ is uniformly bounded as $s \to \infty$. It follows that

$$M_1^{21}(-s^2)\left(M_1^{11}(-s^2) + \mathbf{M}_2(-s^2)\right)^{-1}M_1^{12}(-s^2) = o(1), \quad s \to \infty,$$

where we have taken into account that $M_1^{21}(-s^2),\, M_1^{12}(-s^2) = o(1)$, which again follows from (21.4).

The asymptotics of $\mathbf{M}_{\mathbf{T}}(-s^2)$ coincides with the asymptotics of $M_1^{22}(-s^2)$ and therefore satisfies (21.1).

The same result holds for any graph whose contact set consists of degree one vertices, since our proof was based on the explicit representation of the response operator for short times: we did not use that $\mathbf{M}_2(\lambda)$ is associated with a metric tree. To generalise the result to arbitrary graphs we need to take into account the degrees of the contact vertices.

**Lemma 21.3** *Let $\Gamma$ be a finite metric graph with the associated Schrödinger operator $L_{q,a}(\Gamma)$ and let $\mathbf{M}(\lambda)$ be the M-function associated with the set $\partial \Gamma$ of contact vertices. Let $L_{q,a}(\Gamma)$ be defined by standard vertex conditions on $\partial \Gamma$ and arbitrary Hermitian conditions at all other (inner) vertices. Then the following asymptotic representation for the M-function holds:*

$$\mathbf{M}(-s^2) = -s\, \mathrm{diag}\left\{d_j\right\} + o(1), \quad s \to \infty,\tag{21.5}$$

*where $d_j$ is the degree of the contact vertex $V^j \in \partial \Gamma$.*

*Proof* The proof of Lemma 21.2 is based on the relation between the asymptotic behaviour of the M-function and the short-time behaviour of the dynamical response operator. For sufficiently short times the response operator for $\Gamma$ is diagonal and each entry coincides with the response operator for the star graph of degree $d_j$, with the potential inherited from $\Gamma$. The response operator for any star graph coincides with the sum of the response operators for single edges, since we assumed standard vertex conditions at the contact vertices. It follows that the kernel of the response operator for small $t$ possesses the representation

$$-\delta'(t)\, \mathrm{diag}\left\{d_j\right\} + \mathrm{diag}\left\{r_j(t)\right\}$$

instead of (21.2).
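The asymptotics (21.5) can be checked by hand in the simplest situation: zero potential and a star of $d_j$ edges of equal length with Dirichlet conditions at the far ends. Each edge then contributes the scalar Dirichlet-to-Neumann value $-s \coth(s\ell)$ at the contact vertex, and the standard (Kirchhoff) coupling adds the contributions. The sign convention (normal derivative taken into the edge) and the numerical values in this sketch are our illustrative assumptions.

```python
import math

def m_edge(s, l):
    # m-function of one edge: f(x) = sinh(s(l - x)) solves -f'' = -s^2 f with
    # f(l) = 0, so f'(0)/f(0) = -s*coth(s*l)  ->  -s as s -> infinity
    return -s / math.tanh(s * l)

def m_star(s, d, l):
    # standard (Kirchhoff) coupling at the contact vertex: contributions add up
    return d * m_edge(s, l)

for s in (1.0, 5.0, 20.0):
    # the difference from the predicted asymptotics -d*s decays exponentially
    print(s, m_star(s, 3, 1.0) + 3 * s)
```
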

The rest of this section is devoted to describing the solution of the inverse problem through the method of dismantling graphs.

## *21.2.3 Dismantling Graphs I: Independent Subtrees*

Our goal in this subsection is to study how to solve the inverse problem by dismantling metric graphs into subtrees assuming that the magnetic potential is zero. Adding a magnetic potential does not help in solving the inverse problem using this method.

**Definition 21.4** We say that a set of vertices **dismantles** a metric graph $\Gamma$ if and only if by completely separating the equivalence classes corresponding to these vertices the graph $\Gamma$ is turned into a collection of (sub)trees $\mathbf{T}_j$ completely covering $\Gamma$.

Dismantling graphs can be illustrated by introducing Dirichlet conditions at the selected vertices. Assume that the vertex conditions at a vertex $V^0$ of degree $d_0 \neq 1$ are replaced with Dirichlet conditions. Then the metric graph corresponding to the new operator is not the original one, but the graph with the vertex $V^0$ separated into $d_0$ pendant Dirichlet vertices.

It will be convenient to consider the resulting metric trees $\mathbf{T}_j$ as subsets of the original graph. Therefore two different trees may have common points (vertices). This happens if their pendant vertices come from the same vertices in the original graph; hence trees with common vertices are unavoidable unless $\Gamma$ is itself a tree. It will be important to distinguish the cases where pairs of subtrees have one or several common points.

**Definition 21.5** A set of subtrees $\mathbf{T}_j$ of a metric graph $\Gamma$ is called **independent** if any pair has at most one common vertex. Otherwise the set is called **dependent**.

The case of dependent subtrees is more subtle and requires using the Magnetic Boundary Control method—this direction will be investigated in Sect. 23.4. We restrict our studies here to the case of independent subtrees.

Formulating our results we shall separate assumptions having topological nature (enumerated by numbers) from the generically satisfied spectral assumptions (enumerated by letters).

**Theorem 21.6** *Let $L^{\mathrm{st}}_{q,0}(\Gamma)$ be the standard Schrödinger operator on a pendant free metric graph $\Gamma$ with a selected non-empty contact set $\partial \Gamma$ that dismantles the graph into a set of trees $\{\mathbf{T}_j\}$ such that*

*(1) no subtree $\mathbf{T}_j$ has two pendant vertices coming from the same vertex in $\Gamma$;*

*(2) the subtrees $\mathbf{T}_j$ are independent, i.e. any two subtrees have at most one common point.*
*Then the M-function associated with the contact set ∂ generically determines the metric graph and the potential q, provided that* 

*(a) the Schrödinger operators $L^{\mathrm{st,D}}_{q,0}(\mathbf{T}_j)$, $j = 1, 2, \dots$, with Dirichlet conditions at the pendant vertices and standard vertex conditions at all internal vertices, have disjoint spectra:*

$$
\lambda_n^{\mathrm{D}}(\mathbf{T}_j) \neq \lambda_m^{\mathrm{D}}(\mathbf{T}_l), \quad j \neq l. \tag{21.6}
$$

*Proof* The M-function for $\Gamma$ is completely determined by the M-functions for the subtrees $\mathbf{T}_j$. In principle the matrix functions $\mathbf{M}_j(\lambda)$ associated with the subtrees have dimension $|\partial \mathbf{T}_j| \times |\partial \mathbf{T}_j|$, but we shall see them as $|\partial \Gamma| \times |\partial \Gamma|$ matrices with zero entries corresponding to contact points from $\partial \Gamma \setminus \partial \mathbf{T}_j$. Under this convention the M-function for $\Gamma$ is just equal to the sum of the M-functions for the subtrees:

$$\mathbf{M}(\lambda) = \sum\_{j} \mathbf{M}\_{j}(\lambda). \tag{21.7}$$
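The padding convention behind (21.7) is easy to mimic in code: each subtree M-function, evaluated at a fixed $\lambda$, is embedded into a $|\partial \Gamma| \times |\partial \Gamma|$ matrix supported on its own contact vertices, and the embedded matrices are summed. The matrices and vertex labels below are invented purely for illustration.

```python
def embed(m_small, vertices, n):
    # pad a small matrix, indexed by the subtree's contact vertices,
    # to an n x n matrix with zeros elsewhere
    big = [[0.0] * n for _ in range(n)]
    for a, va in enumerate(vertices):
        for b, vb in enumerate(vertices):
            big[va][vb] = m_small[a][b]
    return big

def m_total(blocks, n):
    # sum of the embedded subtree M-functions, cf. (21.7)
    total = [[0.0] * n for _ in range(n)]
    for m_small, vertices in blocks:
        big = embed(m_small, vertices, n)
        for i in range(n):
            for j in range(n):
                total[i][j] += big[i][j]
    return total

# two independent subtrees sharing exactly one contact vertex (number 1)
M1 = ([[-2.0, 0.5], [0.5, -2.0]], [0, 1])
M2 = ([[-3.0, 0.1], [0.1, -3.0]], [1, 2])
M = m_total([M1, M2], 3)
```

Note that only the diagonal entry at the shared contact vertex receives contributions from both subtrees.
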

We denote by $\lambda_n^{\mathrm{D}}(\mathbf{T}_j)$ the eigenvalues of the Schrödinger operator determined by Dirichlet conditions at pendant vertices on $\mathbf{T}_j$ and standard conditions at all internal vertices. Formula (17.37) implies that the matrix Herglotz-Nevanlinna functions $\mathbf{M}_j(\lambda)$ may have singularities at $\lambda_n^{\mathrm{D}}(\mathbf{T}_j)$; every singularity is present in the case of trees since every eigenfunction is visible (Lemma 21.1, statement (1)). Moreover, at each $\lambda_n^{\mathrm{D}}$ at least two diagonal entries of $\mathbf{M}_j(\lambda)$ are singular (Lemma 21.1, statement (3)).

All singularities are preserved in $\mathbf{M}(\lambda)$ since the eigenvalues corresponding to different subtrees are different (assumption (a)).

To illustrate the structure of the M-function associated with the original graph, it will be convenient to use colors. Consider the graph presented in Fig. 21.2 with all subtrees coloured (Fig. 21.3).

**Fig. 21.2** Dismantling a metric graph. Red dots—contact vertices in $\Gamma$; black dots—contact vertices for subtrees

**Fig. 21.3** Coloured subtrees

In the formula for $\mathbf{M}(\lambda)$ the entries corresponding to different subtrees are coloured, while the zero entries are left empty.

Our next step is to identify the non-disjoint subsets $\partial \mathbf{T}_j \subset \partial \Gamma$ and sort the Dirichlet eigenvalues $\lambda_n^{\mathrm{D}}(\Gamma)$ into disjoint subsets corresponding to the $\mathbf{T}_j$. Consider first $\lambda_1^{\mathrm{D}}(\Gamma)$. This is the lowest Dirichlet eigenvalue and therefore it is the ground state energy for one of the trees. The corresponding eigenfunction has non-zero derivatives at all pendant vertices of the subtree. Hence taking all contact points from $\partial \Gamma$ for which the corresponding diagonal entry of $\mathbf{M}(\lambda)$ is singular at $\lambda = \lambda_1^{\mathrm{D}}(\Gamma)$ we get **all** contact points for one of the trees. Without loss of generality we denote this set by $\partial \mathbf{T}_1$.

Let us continue with $\lambda_2^{\mathrm{D}}(\Gamma)$. There are two alternatives:

(1) $\lambda_2^{\mathrm{D}}(\Gamma)$ is the second lowest Dirichlet eigenvalue on $\mathbf{T}_1$;

(2) $\lambda_2^{\mathrm{D}}(\Gamma)$ is the lowest Dirichlet eigenvalue for another subtree.

To distinguish these cases we identify which diagonal entries of $\mathbf{M}(\lambda)$ are singular at $\lambda = \lambda_2^{\mathrm{D}}(\Gamma)$. The first alternative occurs if $\mathbf{M}(\lambda)$ is singular at $\lambda_2^{\mathrm{D}}(\Gamma)$ at the vertices from $\partial \mathbf{T}_1$ and nowhere else. We get the second alternative if at most one of the singular entries corresponds to a vertex from $\partial \mathbf{T}_1$, and all other entries correspond to vertices outside $\partial \mathbf{T}_1$. In the second case we form the new set $\partial \mathbf{T}_2$ containing all vertices where the diagonal elements of $\mathbf{M}(\lambda)$ are singular at $\lambda = \lambda_2^{\mathrm{D}}(\Gamma)$.

This process can be continued. Assume that the sets $\partial \mathbf{T}_1, \partial \mathbf{T}_2, \dots, \partial \mathbf{T}_m$ have been identified and we are about to consider $\lambda_n^{\mathrm{D}}(\Gamma)$, $n \ge m$. We again have two alternatives:

(1) $\lambda_n^{\mathrm{D}}$ is one of the higher Dirichlet eigenvalues on one of the $\mathbf{T}_j$, $j = 1, \dots, m$;

(2) $\lambda_n^{\mathrm{D}}$ is the lowest Dirichlet eigenvalue for a new subtree $\mathbf{T}_{m+1}$.

The second alternative is selected if the diagonal of $\mathbf{M}(\lambda_n^{\mathrm{D}})$ has at most one singular entry at the vertices corresponding to each $\partial \mathbf{T}_j$, $j = 1, \dots, m$. In that case we build a new set $\partial \mathbf{T}_{m+1}$; otherwise $\lambda_n^{\mathrm{D}}$ is added to the set of Dirichlet eigenvalues for one of the already selected subtrees. In a finite number of steps all contact sets $\partial \mathbf{T}_j$ are identified. Any two sets $\partial \mathbf{T}_j$ and $\partial \mathbf{T}_i$, $j \neq i$, have at most one common point. Then all Dirichlet eigenvalues are sorted into the disjoint sets $\{\lambda_n^{\mathrm{D}}(\mathbf{T}_j)\}$ using the fact that the eigenfunctions on $\mathbf{T}_j$ have at least two non-zero normal derivatives at the vertices from $\partial \mathbf{T}_j$ and the subtrees are independent.

The obtained information together with formula (17.37) (or equivalently (18.7)) allows us to reconstruct $\mathbf{M}_j(\lambda)$ up to the constant matrices $\mathbf{A}_j = \mathbf{M}_j(\lambda')$:

$$\mathbf{M}_j(\lambda) = \mathbf{A}_j + \sum_{\lambda_n^{\mathrm{D}}(\mathbf{T}_j)} \frac{\lambda - \lambda'}{(\lambda_n^{\mathrm{D}} - \lambda)(\lambda_n^{\mathrm{D}} - \lambda')} \langle \partial \psi_n^{\mathrm{D}}|_{\partial \Gamma}, \cdot \rangle_{\ell_2(\partial \Gamma)}\, \partial \psi_n^{\mathrm{D}}|_{\partial \Gamma}, \tag{21.8}$$

subject to

$$\sum\_{j} \mathbf{A}\_{j} = \mathbf{M}(\lambda'). \tag{21.9}$$

It remains to determine the Hermitian matrices $\mathbf{A}_j$. The representation (21.1) implies that the matrices $\mathbf{A}_j$ (as well as the matrix $\mathbf{M}(\lambda')$) are uniquely determined provided the singular part in (21.8) is known:

$$\mathbf{A}_j = \lim_{s \to \infty} \left( \sum_{\lambda_n^{\mathrm{D}}(\mathbf{T}_j)} \frac{s^2 + \lambda'}{(\lambda_n^{\mathrm{D}} + s^2)(\lambda_n^{\mathrm{D}} - \lambda')} \langle \partial \psi_n^{\mathrm{D}}|_{\partial \Gamma}, \cdot \rangle_{\ell_2(\partial \Gamma)}\, \partial \psi_n^{\mathrm{D}}|_{\partial \Gamma} - s I_j \right). \tag{21.10}$$

Here $I_j$ denotes the diagonal matrix with entries equal to 1 at positions corresponding to vertices in $\partial \mathbf{T}_j$ and 0 otherwise. Note that the asymptotics of $\mathbf{M}(\lambda)$ is determined by the degrees $d_m = d(V^m)$ of the contact vertices:

$$\mathbf{M}(-s^2) = -s\, \mathrm{diag}\{d(V^m)\} + o(1), \quad s \to \infty. \tag{21.11}$$

The M-functions for the subtrees determine the subtrees and the potential there (Theorem 20.21). Here it is important that we assumed standard vertex conditions everywhere. Having the subtrees in our hands together with the sets $\partial \mathbf{T}_j$ allows us to reconstruct the original graph $\Gamma$: the subsets $\partial \mathbf{T}_j \subset \partial \Gamma$ indicate how to glue the subtrees together. Then the Schrödinger operator on $\Gamma$ is obtained by introducing standard vertex conditions at the contact vertices.

Let us note that we never explicitly used in the proof that the vertex conditions at the internal vertices are standard; the only fact we needed is that the $\mathbf{M}_j(\lambda)$ determine the corresponding trees. Theorem 20.21 states that under mild assumptions on the vertex conditions the M-function determines the tree, the potential $q$ and the vertex conditions at internal vertices. Hence we have in fact proved the following stronger result:

**Theorem 21.7** *Let $L^{\mathbf{S}}_{q,0}(\Gamma)$ be a Schrödinger operator on a pendant free metric graph $\Gamma$ with a selected non-empty contact set $\partial \Gamma$ that dismantles the graph into a set of trees $\{\mathbf{T}_j\}$ such that*

*(1) no subtree $\mathbf{T}_j$ has two pendant vertices coming from the same vertex in $\Gamma$;*

*(2) the subtrees $\mathbf{T}_j$ are independent, i.e. any two subtrees have at most one common point.*

*Let the vertex conditions determining the Schrödinger operator $L^{\mathbf{S}}_{q,0}(\Gamma)$ be generalised delta couplings (see Sect. 3.7) at all internal vertices and standard at all contact vertices $\partial \Gamma$.*

*Assume in addition the following generically satisfied assumption:* 

*(a) the Schrödinger operators L***S***,*<sup>D</sup> *q,*<sup>0</sup> *(***T***<sup>j</sup> ), j* = 1*,* 2*,... with the vertex conditions at the internal vertices inherited from and Dirichlet conditions at the pendant vertices have disjoint spectra* 

$$
\lambda_n^{\mathrm{D}}(\mathbf{T}_j) \neq \lambda_m^{\mathrm{D}}(\mathbf{T}_l), \quad j \neq l. \tag{21.12}
$$

*Then the M-function associated with the contact vertices generically determines the metric graph, the potential q and the vertex conditions at internal vertices.* 

**Problem 92** *Check that the proof of Theorem 21.6 can be adapted to justify Theorem 21.7, pointing out all necessary changes. It is assumed that the vertex conditions at the inner vertices are generalised delta couplings; why is it not enough to require that the conditions at internal vertices are asymptotically properly connecting (see Definition 11.4)?*

The conditions of the above theorems are not optimal. Let us go through all assumptions, discussing how they can be weakened without much affecting the proofs:


As a corollary of Theorem 21.6 we may prove that the graph and the potential on it are uniquely determined by the M-function associated with **all** vertices.

**Theorem 21.8** *Let $L^{\mathrm{st}}_{q,0}(\Gamma)$ be the standard Schrödinger operator on a metric graph $\Gamma$ without loops or parallel edges. Assume the generically satisfied condition that*

*(a) the Dirichlet Schrödinger operators $L^{\mathrm{D}}_{q,0}(E_n)$, $n = 1, 2, \dots, N$, on the edges have disjoint spectra*

$$
\lambda_n^{\mathrm{D}}(E_j) \neq \lambda_m^{\mathrm{D}}(E_l), \quad j \neq l. \tag{21.13}
$$

*Then the M-function associated with all vertices in $\Gamma$ determines the metric graph and the potential $q$.*

*Proof* To prove the theorem we need to check that all conditions of Theorem 21.6 are satisfied. The graph's edges form the simplest possible trees. The absence of loops implies condition (1). The subtrees are independent (condition (2)) because parallel edges are excluded.

**Problem 93** *Prove Theorem 21.7 assuming that the conditions at internal vertices are asymptotically properly connecting but are not necessarily generalised delta couplings.* 

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 22 Magnetic Boundary Control I: Graphs with Several Cycles**

This is the first chapter devoted to the Magnetic Boundary Control method (MBC-method). It appears that this method can effectively be applied to graphs having several independent cycles, while graphs with just one cycle may require special attention and are described in the following chapter. The MBC-method is based on the following idea: using the dependence of the M-function on the magnetic fluxes one may recover the M-function for the graph on the same set of edges but with some of the cycles opened. In some sense the new graph is *closer* to a tree than the original graph. For example, consider an arbitrary graph $\Gamma$ and some contact vertex $V^0$ having sufficiently large degree $d_0 \ge 3$. Let $\Gamma_1$ be the metric graph obtained from $\Gamma$ by splitting the vertex $V^0$ into $d_0$ degree one vertices. We say that the vertex $V^0$ is **dissolved**. The M-function for $\Gamma$, known for several different values of the magnetic fluxes, determines the M-function for $\Gamma_1$. Then the classical BC-method can be used to recover the potential on the pendant edges in $\Gamma_1$. Peeling these edges away as described in Chap. 20 reduces the inverse problem to a smaller graph. For some graphs, repeating the procedure reduces the inverse problem to one on a tree, so that it can be solved completely, while for other graphs the procedure terminates, leaving a major part of the graph unknown.

We develop here ideas proposed in [334, 336]; it might be interesting to see connections with [453].

## **22.1 Dissolving the Vertices**

Let us study how to determine the M-function when one of the vertices is dissolved.

**Definition 22.1** We say that the metric graph $\Gamma_1$ is obtained from a metric graph $\Gamma$ by dissolving a certain vertex $V^0$ in $\Gamma$ if:

• the metric graphs $\Gamma$ and $\Gamma_1$ share the same set of edges $\{E_n\}_{n=1}^N$;

**Fig. 22.2** Breaking cycles by dissolving vertices


See Fig. 22.1, where the dissolving procedure is presented schematically. The green area represents the part of the graph which is not affected by the procedure. The degree four vertex $V^0$ is substituted with four degree one vertices $V^1, \dots, V^4$.

We are going to exclude the case where dissolution of $V^0$ disconnects the original graph. Then the numbers of cycles in the original graph $\Gamma$ and in the new graph $\Gamma_1$ differ by $d_0 - 1$:

$$\left.\begin{array}{l} \beta_1(\Gamma) = N + 1 - M; \\ \beta_1(\Gamma_1) = \underbrace{N_1}_{=N} + 1 - \underbrace{M_1}_{=M + d_0 - 1} \end{array}\right\} \Rightarrow \beta_1(\Gamma) - \beta_1(\Gamma_1) = d_0 - 1. \tag{22.1}$$

In other words, dissolution of $V^0$ breaks precisely $d_0 - 1$ cycles in the original graph. This number does not depend on whether the vertex $V^0$ is situated well inside the graph or on its periphery (see the two graphs presented in Fig. 22.2).
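The count (22.1) is a one-line computation with the first Betti number $\beta_1 = N + 1 - M$ of a connected graph with $N$ edges and $M$ vertices; dissolving a vertex of degree $d_0$ keeps the edges and adds $d_0 - 1$ vertices. The edge and vertex counts below are hypothetical.

```python
def beta1(n_edges, n_vertices):
    # first Betti number of a connected graph, cf. (22.1)
    return n_edges + 1 - n_vertices

# hypothetical counts: N edges, M vertices, a vertex of degree d0 dissolved
N, M, d0 = 8, 5, 4
cycles_before = beta1(N, M)            # beta_1(Gamma)
cycles_after = beta1(N, M + d0 - 1)    # beta_1(Gamma_1): same edges, d0 - 1 extra vertices
```
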

Our goal is to compare the M-functions

$$\mathbf{M}(\lambda) := \mathbf{M}_{\Gamma}(\lambda) \quad \text{and} \quad \mathbf{M}_1(\lambda) := \mathbf{M}_{\Gamma_1}(\lambda)$$

corresponding to $\Gamma$ and $\Gamma_1$ respectively. In our context these functions depend not only on the spectral parameter $\lambda$ but also on the magnetic fluxes. Let us denote by $\vec{\Phi}$ and $\vec{\Phi}^1$ the vectors collecting all fluxes for $\Gamma$ and $\Gamma_1$, respectively. Every cycle in $\Gamma_1$ corresponds to a certain cycle in $\Gamma$, hence one may naturally assume that the entries in the vector $\vec{\Phi}^1$ correspond to certain $\beta_1(\Gamma_1)$ entries in the vector $\vec{\Phi}$. The remaining $\beta_1(\Gamma) - \beta_1(\Gamma_1) = d_0 - 1$ entries in $\vec{\Phi}$ correspond to the cycles that are broken under the dissolution of $V^0$. It will be convenient to denote the corresponding fluxes by $\vec{\Phi}^2$, so that we have:

$$
\vec{\Phi} = (\vec{\Phi}^1, \vec{\Phi}^2). \tag{22.2}
$$

We assume from now on that the fluxes $\vec{\Phi}^1$ through the preserved cycles are fixed and omit indicating the dependence of the M-functions on $\vec{\Phi}^1$.

We denote by $V^1, \dots, V^{d_0}$ the pendant vertices in $\Gamma_1$ coming from the vertex $V^0$ in $\Gamma$ and let $C_j$ be a path connecting $V^{d_0}$ to $V^j$, $j = 1, 2, \dots, d_0 - 1$. The paths in $\Gamma_1$ correspond to the cycles in $\Gamma$ that are broken under the dissolution. The corresponding fluxes are

$$\Phi\_j = \int\_{C\_j} a(\mathbf{y}) d\mathbf{y} = \int\_{V^{d\_0}}^{V^j} a(\mathbf{y}) d\mathbf{y}, \quad j = 1, 2, \dots, d\_0 - 1. \tag{22.3}$$

These fluxes form the vector $\vec{\Phi}^2$. It will be convenient to view $\vec{\Phi}^2$ as an element of $\mathbb{R}^{d_0}$, even though only $d_0 - 1$ of its coordinates may be non-zero:

$$
\vec{\Phi}^2 = (\Phi\_1, \Phi\_2, \dots, \Phi\_{d\_0 - 1}, 0).
$$

To reconstruct the M-function for $\Gamma_1$ it is enough to consider fluxes equal to 0 and $\pi$; therefore we introduce the notations

$$\begin{aligned} \mu_j &:= e^{i\Phi_j}, \quad j = 1, 2, \dots, d_0; \\ \mu &= (\mu_1, \mu_2, \dots, \mu_{d_0}) = e^{i\vec{\Phi}^2}; \end{aligned} \tag{22.4}$$


and consider the M-functions depending on the indices $\mu_j$ instead of the phases $\Phi_j$. To get the corresponding spectral data it is enough to consider the standard operators with zero magnetic potential and additional signing conditions (3.43) introduced on some of the cycles. These operators will be denoted $L^{\mathrm{sign}}_q(\Gamma)$ and called **signed Schrödinger operators**.

Our first step is to establish the relation between the diagonal element

$$\mathbf{M}^{00}(\lambda,\mu) =: \mathbb{M}(\lambda,\mu)$$

associated with the vertex $V^0$ and the diagonal $d_0 \times d_0$ block of $\mathbf{M}_1(\lambda, \mu)$ associated with the degree one vertices in $\Gamma_1$ (coming from $V^0$). We shall find an explicit relation between the scalar Herglotz-Nevanlinna function $\mathbb{M}(\lambda, \mu)$ and the $d_0 \times d_0$ matrix valued Herglotz-Nevanlinna function $\mathbb{M}_1(\lambda, \mu) := \left( \mathbf{M}_1^{ij}(\lambda, \mu) \right)_{i,j=1}^{d_0}$.

The dependence of M1*(λ,μ)* upon *μ* is trivial:

$$\mathbb{M}\_{\mathbf{l}}(\lambda,\,\vec{\Phi}^2) = \text{diag}\,\{\mu\_j\}\,\mathbb{M}\_{\mathbf{l}}(\lambda,\,\mathbf{1})\,\underbrace{\text{diag}\,\{\mu\_j\}^{-1}}\_{=\text{diag}\,\{\mu\_j\}},\quad\mathbf{1} = (1,\,1,\ldots,1).\tag{22.5}$$

To see this, let us eliminate the magnetic potential starting from *V <sup>d</sup>*<sup>0</sup> by using the transformation

$$f(x) \mapsto g(x) = e^{-i\int_{V^{d_0}}^{x} a(y)\,dy} f(x).$$

Under this transformation we have

$$f(V^j) = e^{i\Phi_j} g(V^j) = \mu_j g(V^j),$$

implying

$$
\begin{pmatrix} f(V^1) \\ \vdots \\ f(V^{d\_0}) \end{pmatrix} = \text{diag}\left\{ \mu\_j \right\} \begin{pmatrix} g(V^1) \\ \vdots \\ g(V^{d\_0}) \end{pmatrix},
$$

which leads to (22.5).

The diagonal entry M*(λ,μ)* is equal to the sum of all entries in M1*(λ,μ)*:

$$\underbrace{\mathbb{M}(\lambda, \mu)}_{=\mathbf{M}^{00}(\lambda, \mu)} = \sum_{i,j=1}^{d_0} \mu_i \mu_j\, \mathbb{M}_1^{ij}(\lambda, \mathbf{1}). \tag{22.6}$$

This formula determines the M-function for any signed operator on $\Gamma$ through the M-function for $\Gamma_1$.

The key idea behind the reconstruction of $\mathbb{M}_1$ from $\mathbb{M}$ is to use formula (17.37), which expresses each of these two Herglotz-Nevanlinna functions through the normal derivatives of the Dirichlet eigenfunctions, i.e. the eigenfunctions satisfying Dirichlet conditions at $V^0$ in $\Gamma$ and at $V^1, \dots, V^{d_0}$ in $\Gamma_1$. These eigenfunctions simply coincide, since the Dirichlet condition does not *feel* whether the pendant vertices are glued together or not.

Let $\psi_n^{\mathrm{D}}$ denote the eigenfunction corresponding to zero fluxes through the broken cycles. These eigenfunctions can be chosen real-valued since they satisfy standard and Dirichlet vertex conditions and the fluxes in $\vec{\Phi}^1$ are all either 0 or $\pi$. Then the normal derivatives of the Dirichlet eigenfunctions for non-zero fluxes are given by

$$
\mu\_j \partial \psi\_n^{\mathbf{D}}(V^j),
$$

implying in particular that the normal derivative at *V* <sup>0</sup> is

$$\sum\_{j=1}^{d\_0} \mu\_j \partial \psi\_n^{\rm D}(V^j). \tag{22.7}$$

It follows that the singularity of M*(λ,μ)* is of the form

$$\begin{aligned} \mathbb{M}(\lambda, \mu) &\underset{\lambda \to \lambda_n^{\mathrm{D}}}{\sim} \frac{1}{\lambda_n^{\mathrm{D}} - \lambda} \sum_{i,j=1}^{d_0} \mu_i \mu_j\, \partial \psi_n^{\mathrm{D}}(V^i)\, \partial \psi_n^{\mathrm{D}}(V^j) \\ &= \frac{1}{\lambda_n^{\mathrm{D}} - \lambda} \Big( \sum_{i=1}^{d_0} \big( \partial \psi_n^{\mathrm{D}}(V^i) \big)^2 + \sum_{\substack{i,j=1, \\ i \neq j}}^{d_0} \mu_i \mu_j\, \partial \psi_n^{\mathrm{D}}(V^i)\, \partial \psi_n^{\mathrm{D}}(V^j) \Big), \end{aligned} \tag{22.8}$$

where we have used that *∂ψ*<sup>D</sup> *<sup>n</sup>* are real-valued.

Introducing the notation $a_j := \partial \psi_n^{\mathrm{D}}(V^j)$, we are faced with the following elementary problem:

• Determine *aj* if the numbers

$$\left(\pm a\_1 \pm a\_2 \pm \dots \pm a\_{d\_0 - 1} + a\_{d\_0}\right)^2$$

are known for all possible combinations of the signs.

It is clear that this reconstruction is possible only up to the multiplication of all *aj* by −1, which corresponds to the multiplication of the corresponding eigenfunctions by −1.

The sum of the squares can be obtained by averaging over all possible signs:

$$\sum_{j=1}^{d_0} a_j^2 = \frac{1}{2^{d_0 - 1}} \sum_{\mu \in (\{1, -1\}^{d_0 - 1}, 1)} \left( \mu_1 a_1 + \mu_2 a_2 + \dots + \mu_{d_0 - 1} a_{d_0 - 1} + a_{d_0} \right)^2. \tag{22.9}$$

Hence we are able to determine the following combinations of the *aj* 's

$$\sum_{\substack{i,j=1, \\ i \neq j}}^{d_0} \mu_i \mu_j a_i a_j = \left( \sum_{i=1}^{d_0} \mu_i a_i \right)^2 - \sum_{j=1}^{d_0} a_j^2. \tag{22.10}$$

We recover the products by averaging a second time

$$a_k a_l = \frac{1}{2^{d_0 - 1}} \sum_{\substack{\mu \in (\{1, -1\}^{d_0 - 1}, 1), \\ \mu_k = \mu_l}} \Big( \sum_{\substack{i,j=1, \\ i \neq j}}^{d_0} \mu_i \mu_j a_i a_j \Big), \quad k \neq l. \tag{22.11}$$

The product $a_k a_l = a_l a_k$ appears in the double sum precisely $2^{d_0 - 1}$ times, while all other products cancel since $\mu_i \mu_j$ attains $+1$ and $-1$ equally many times.

If at least three of the coefficients are nonzero, then the squares $a_i^2$ are determined as

$$a_i^2 = \frac{(a_i a_j)\,(a_i a_l)}{(a_j a_l)}, \quad \text{provided} \quad a_j, a_l \neq 0. \tag{22.12}$$

We are able to recover one nonzero $a\_j$ up to a sign, but then all other non-zero coefficients are determined from the products $a\_j a\_i$. We conclude that if the squared sums $\big(\sum\_{j=1}^{d\_0} \mu\_j a\_j\big)^2$ are known for all $\mu$ of the form $\mu \in (\{1,-1\}^{d\_0-1}, 1)$, then the coefficients $a\_j$ are determined up to a common sign.
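The averaging scheme (22.9)–(22.11) together with the ratio trick (22.12) can be checked numerically. The following sketch is purely illustrative (the coefficient values are hypothetical): given only the squared sums for all sign vectors with last entry $+1$, it recovers the coefficients up to a common sign.

```python
# Illustrative sketch of (22.9)-(22.12): recover a_1,...,a_{d0} (up to a
# common sign) from the squared sums (mu_1 a_1 + ... + a_{d0})^2.
from itertools import product

a = [0.7, -1.3, 2.1, 0.4]          # hypothetical coefficients, three nonzero
d0 = len(a)

# The "measurements": squared sums for all sign vectors with mu_{d0} = 1.
signs = [mu + (1,) for mu in product((1, -1), repeat=d0 - 1)]
S = {mu: sum(m * x for m, x in zip(mu, a)) ** 2 for mu in signs}

# (22.9): averaging over all signs recovers the sum of squares.
sum_sq = sum(S.values()) / 2 ** (d0 - 1)

# (22.10): the off-diagonal combinations for each sign vector mu.
offdiag = {mu: S[mu] - sum_sq for mu in signs}

# (22.11): a second averaging, over mu with mu_k = mu_l, isolates a_k a_l.
def prod_kl(k, l):
    return sum(offdiag[mu] for mu in signs if mu[k] == mu[l]) / 2 ** (d0 - 1)

P = {(k, l): prod_kl(k, l) for k in range(d0) for l in range(d0) if k != l}

# (22.12): one square from three pairwise products; fix its sign to be +.
i, j, l = 0, 1, 2                   # indices with a_i, a_j, a_l nonzero
a0 = (P[(i, j)] * P[(i, l)] / P[(j, l)]) ** 0.5
rec = [a0] + [P[(0, k)] / a0 for k in range(1, d0)]

# The reconstruction agrees with a up to a common sign.
sgn = 1 if rec[0] * a[0] > 0 else -1
assert all(abs(sgn * r - x) < 1e-9 for r, x in zip(rec, a))
```

The assertion confirms that the procedure returns the original coefficients up to the unavoidable global sign.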

It follows that the diagonal element $\mathcal M(\lambda,\mu)$ known for all $\mu \in (\{1,-1\}^{d\_0-1}, 1)$ determines the vector

$$
\partial\vec{\psi}\_n^{\rm D} := \left( \partial\psi\_n^{\rm D}(V^1), \partial\psi\_n^{\rm D}(V^2), \dots, \partial\psi\_n^{\rm D}(V^{d\_0}) \right),
$$

up to the common sign, hence the singular part of $\mathbb M\_1(\lambda, \vec 0)$ is determined, which as before allows us to reconstruct it up to the constant matrix $\mathbb A$, yielding

$$\mathbb{M}\_{1}(\lambda,\vec{0}) = \mathbb{A} + \sum\_{\lambda\_{n}^{\rm D}(\Gamma\_{1})} \frac{\lambda - \lambda'}{(\lambda\_{n}^{\rm D} - \lambda)(\lambda\_{n}^{\rm D} - \lambda')} \left\langle \partial\vec{\psi}\_{n}^{\rm D}, \cdot \right\rangle \partial \vec{\psi}\_{n}^{\rm D}.\tag{22.13}$$

To determine $\mathbb A$ we remember that the M-function possesses the asymptotics

$$\mathbb{M}\_1(-s^2,\vec{0}) = -s\, I\_{d\_0} + o(1), \quad s \to \infty,$$

(see (21.5)). We are now ready to prove the main result of this section:

**Theorem 22.2** *Let $\Gamma$ be a pendant free metric graph with contact set including the vertex $V^0$, and let $\Gamma\_1$ be the metric graph obtained from $\Gamma$ by dissolving the vertex $V^0$. Assume that*

- *dissolving the vertex $V^0$ does not disconnect the graph;*
- *the degree $d\_0$ of $V^0$ is at least three.*

*Let $L^{\rm st}\_{q,a}$ be the standard magnetic Schrödinger operator. Consider the M-functions for $\Gamma$ and $\Gamma\_1$ dependent on the spectral parameter $\lambda$ and the magnetic fluxes through the cycles $\vec\Phi = (\vec\Phi^1, \vec\Phi^2)$, where, following* (22.2)*, $\vec\Phi^1$ collects the fluxes corresponding to the cycles that are preserved under the dissolution of $V^0$.*

*Assume in addition two generically satisfied assumptions:* 

	- *among the normal derivatives either all derivatives are zero, or at least three derivatives are different from zero.*

*Then for any fixed $\vec\Phi^1 \in \{0, \pi\}^{\beta\_1(\Gamma\_1)}$ the $|\partial\Gamma| \times |\partial\Gamma|$ matrix-valued M-function $\mathbf M\_{\Gamma}(\lambda, \vec\Phi)$ taken for all possible values of $\vec\Phi^2 \in \{0, \pi\}^{d\_0 - 1}$ determines the $(|\partial\Gamma| + d\_0 - 1) \times (|\partial\Gamma| + d\_0 - 1)$ matrix-valued M-function $\mathbf M\_{\Gamma\_1}(\lambda, \vec\Phi^1)$.*

*Proof* We are going to assume that the magnetic fluxes $\vec\Phi^1$ through the cycles in $\Gamma\_1$ are fixed, and will omit indication that the M-functions and the eigenfunctions depend on $\vec\Phi^1$.

Let us present the M-function for $\Gamma\_1$ in the following block form separating the preserved and pendant vertices

$$\mathbf{M}\_{1}(\lambda) = \begin{pmatrix} \mathbb{M}\_{1}^{00}(\lambda) & \mathbb{M}\_{1}^{01}(\lambda) \\ \mathbb{M}\_{1}^{10}(\lambda) & \mathbb{M}\_{1}^{11}(\lambda) \end{pmatrix},\tag{22.14}$$

where the square $d\_0 \times d\_0$ block $\mathbb M\_1^{00}$ corresponds to the pendant vertices in $\Gamma\_1$ (originating from $V^0$) and $\mathbb M\_1^{11}$ is the square $(|\partial\Gamma| - 1) \times (|\partial\Gamma| - 1)$ block corresponding to the preserved vertices $V^j$ from $\Gamma$, $V^j \neq V^0$.

The first diagonal block $\mathbb M\_1^{00}(\lambda)$ coincides with the matrix $\mathbb M\_1$ already reconstructed above. The second diagonal block $\mathbb M\_1^{11}(\lambda)$ coincides with the corresponding block in the M-function for $\Gamma$.

It remains to reconstruct the non-diagonal blocks having the singularities determined by $\partial\psi\_n^{\rm D}(V^i)\,\partial\psi\_n^{\rm D}(V^j)$ with $i = 1, 2, \dots, d\_0$ and $V^j$ being one of the preserved vertices. Knowing the corresponding singularity in the $0j$-entry of the original M-function

$$\mathbf{M}^{0j}(\lambda, \vec{\Phi}^2) \underset{\lambda \to \lambda\_n^{\rm D}}{\sim} \frac{1}{\lambda\_n^{\rm D} - \lambda}\, \partial \psi\_n^{\rm D}(V^j) \left( \sum\_{l=1}^{d\_0} e^{i\Phi\_l}\, \partial \psi\_n^{\rm D}(V^l) \right), \tag{22.15}$$

allows us to reconstruct $\partial\psi\_n^{\rm D}(V^j)$.

Hence the blocks $\mathbb M\_1^{01}$ and $\mathbb M\_1^{10}$ are determined using formula (17.37) and taking into account the asymptotics (21.5).

The theorem implies that the M-function for $\Gamma\_1$ can be recovered, provided the M-functions of all signed operators on $\Gamma$ are known.

Theorem 22.2 can be proved for any fixed $\vec\Phi^1$, not necessarily from $\{0, \pi\}^{\beta\_1(\Gamma\_1)}$. The reason we restrict our statements to $\vec\Phi^1 \in \{0, \pi\}^{\beta\_1(\Gamma\_1)}$ is that only those values of $\Phi\_j$ will be used when we shall dissolve further vertices.

The assumption that at least three normal derivatives are non-zero can be weakened: one may require instead that the eigenfunction $\psi\_n^{\rm D}$ has a non-zero normal derivative at one of the preserved vertices.

# **22.2 Geometric Ideas Behind the MBC-Method: First Examples**

In this section we discuss how to apply the MBC-method to solve inverse problems for metric graphs. As before we assume that the original graphs have no pendant edges. We start by presenting examples when the whole graph can be reconstructed starting from one vertex. We then continue by discussing what may prevent complete reconstruction of the graph.

**Example 22.3** Consider the graph presented in Fig. 22.3 and assume that the contact set consists of the single vertex $V$. Dissolving the vertex $V$ and peeling away the pendant vertices we arrive at a smaller graph. Repeating the procedure by dissolving the vertices $V'$ and $V''$, the inverse problem is reduced to a tree with all pendant vertices in the contact set (see the upper sequence in Fig. 22.3). The MBC-method allows us to solve the inverse problem for this graph.

The inverse problem for this graph can be solved by dissolving the vertices $V$, $V^{\ast}$, and $V^{\ast\ast}$ instead (see the lower sequence in Fig. 22.3). The resulting graph is the

**Fig. 22.3** The whole graph may be reconstructed by the MBC-method

cycle with 3 contact points—the inverse problem is again solvable by dismantling the cycle into three intervals (see Sect. 21.2).

This example shows that the MBC-method allows us to solve the inverse problem for rather complicated graphs with an arbitrary number of cycles and very few contact points.

**Example 22.4** Consider the graph presented in Fig. 22.4 and assume that the contact set is given by the vertex *V* . Dissolving the vertex *V* and peeling away the pendant vertices we arrive at a graph with three contact vertices, each having degree 2. Theorem 22.2 cannot be applied to such vertices; the fat edges form a **wall** separating already reconstructed edges from the rest of the graph. Note that the graph in Fig. 22.4 is obtained from the graph in Fig. 22.3 by removing two internal edges.

**Example 22.5** Figure 22.5 presents another graph with a single contact vertex $V$. After dissolving $V$ and removing the pendant edges we get a graph with three vertices. We may dissolve only the vertex $V'$, as the remaining two contact vertices have degree two. This leads to a graph with three contact vertices: two degree two vertices and one bottleneck vertex $V''$; dissolution of this vertex would disconnect the graph (see Definition 22.8). The inverse problem for the remaining graph cannot be solved by dismantling it, since the corresponding trees are not independent. Note that the original graph in this example is again a slight modification of the graph presented in Fig. 22.3.

It is not surprising that not all pendant-free graphs may be reconstructed starting from a single contact vertex: the described procedure may terminate immediately or after a few steps. As Examples 22.4 and 22.5 show, there are two reasons for the termination:

- all new contact vertices may have degree two, so that a wall is formed (Example 22.4);
- a contact vertex may be a bottleneck, whose dissolution would disconnect the remaining graph (Example 22.5).

**Problem 94** Find new examples of graphs that can be reconstructed by dissolving vertices, starting from a single contact vertex.

## **22.3 Infiltration Domains, Walls and Bottlenecks**

Let us have a closer look at how graphs or at least parts of them may be reconstructed using the method presented in Theorem 22.2.

**Definition 22.6** Let $\Gamma$ be a finite compact pendant free metric graph with contact set $\partial\Gamma$. Consider any single contact vertex $V^j \in \partial\Gamma$ and apply the MBC-method by dissolving $V^j$ and the new contact vertices appearing after peeling away the pendant edges. We repeat this procedure until it terminates or the whole graph $\Gamma$ is recovered without involving the other original contact vertices from $\partial\Gamma \setminus \{V^j\}$. The maximal subgraph $D\_j \subset \Gamma$ recovered in this way is called the **infiltration domain**. The MBC-method determines not only the metric subgraph $D\_j$ but also the potential $q$ on it.

Of course we do not exclude the case where the infiltration domain coincides with the whole original graph $\Gamma$, but we are also interested in the mechanisms preventing this.

Our first observation is that the reconstructed domain may depend on the order in which the vertices are dissolved (see Example 22.3). The infiltration domain obtained using $V$, $V^{\ast}$, and $V^{\ast\ast}$ is smaller than the infiltration domain obtained by dissolving $V$, $V'$, and $V''$.

In what follows we are going to choose the largest possible infiltration domains; this will be our convention for the rest of this chapter.

For metric graphs it is natural to modify the notion of the set complement as follows.

**Definition 22.7** Let $\Gamma\_1 = (\mathbf E^1, \mathbf V^1)$ be a subgraph of the metric graph $\Gamma = (\mathbf E, \mathbf V)$. Then the graph's **complement** $\Gamma \setminus \Gamma\_1 =: \Gamma\_2$ is the metric graph on the edge set $\mathbf E^2 = \mathbf E \setminus \mathbf E^1$ and the vertex set $\mathbf V^2 = \big\{ V^m(\Gamma) \setminus V^m(\Gamma\_1) \big\}\_{V^m \in \mathbf V(\Gamma)}$.

In other words, the complement graph is built from all edges in $\Gamma$ that are not edges in $\Gamma\_1$: the connections between the edges are inherited from $\Gamma$, that is, two edges in $\Gamma \setminus \Gamma\_1$ are connected at a vertex if they were connected at the same vertex in $\Gamma$. The corresponding equivalence class simply loses all endpoints of edges that belong to $\Gamma\_1$. Note that the graph's complement may have non-trivial intersection with $\Gamma\_1$: the intersection consists of all vertices that belong to both $\Gamma\_1$ and $\Gamma \setminus \Gamma\_1$. These vertices form the boundary of $\Gamma\_1$ with respect to the original graph $\Gamma$:

$$\delta\_{\Gamma} \Gamma\_{1} = \Gamma\_{1} \bigcap \left( \Gamma \backslash \Gamma\_{1} \right). \tag{22.16}$$

It consists of all vertices in $\Gamma$ that belong to both $\Gamma\_1$ and $\Gamma\_2$. Another way to characterise the subgraph's boundary is

$$\delta\_{\Gamma} \Gamma\_{1} = \left\{ V^{m} \in \Gamma\_{1} \, : \, \deg\_{\Gamma\_{1}} (V^{m}) < \deg\_{\Gamma} (V^{m}) \right\},\tag{22.17}$$

where $\deg\_{\Gamma}$ and $\deg\_{\Gamma\_1}$ denote the degrees of the vertices with respect to $\Gamma$ and to $\Gamma\_1$, respectively.
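The combinatorial content of Definition 22.7 and formula (22.17) can be made concrete with a small computational sketch; the toy graph and vertex names below are hypothetical:

```python
# Illustrative sketch of Definition 22.7 and formula (22.17): the
# complement of a subgraph is taken edge-wise, and the boundary of the
# subgraph consists of the vertices whose degree drops.
from collections import Counter

# A graph as a list of edges (a triangle V1-V2-V3 with an extra edge V1-V4).
edges = [("V1", "V2"), ("V2", "V3"), ("V3", "V1"), ("V1", "V4")]
sub = [("V1", "V2"), ("V2", "V3")]        # the subgraph Gamma_1

def degrees(edge_list):
    """Vertex degrees: each endpoint contributes one to the degree."""
    return Counter(v for e in edge_list for v in e)

# Complement Gamma \ Gamma_1: all edges of Gamma not in Gamma_1.
complement = [e for e in edges if e not in sub]

# Boundary (22.17): vertices of Gamma_1 with deg_{Gamma_1} < deg_Gamma.
deg_full, deg_sub = degrees(edges), degrees(sub)
boundary = sorted(v for v in deg_sub if deg_sub[v] < deg_full[v])

print(complement)   # [('V3', 'V1'), ('V1', 'V4')]
print(boundary)     # ['V1', 'V3']
```

Here $V^2$ keeps its full degree inside the subgraph and therefore does not lie on the boundary, while $V^1$ and $V^3$ lose edges and do.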

Let an infiltration domain $D\_j$ be determined. Then for every vertex from the boundary $\delta\_\Gamma D\_j$ at least one of the two topological conditions required by Theorem 22.2 fails to be satisfied:

(1) Dissolution of the vertex disconnects the complement graph.

Consider the example presented in Fig. 22.6: $V^0$ is the original contact vertex and $V^1$ is a new contact vertex; dissolving $V^1$, the complement graph falls into two disconnected components. The infiltration domain $D\_0$ is marked in red with $V^1$ being its unique boundary vertex.

(2) The degree of the vertex in the complement graph equals 2.

Consider the example presented in Fig. 22.7. The infiltration domain $D\_0$ corresponding to the original contact vertex $V^0$ is marked in red. The four boundary vertices $V^1, V^2, V^3, V^4$ have degree two (with respect to the graph complement to $D\_0$) and therefore cannot be dissolved. The edges attached to these vertices (marked by fat red curves) can be seen as a wall surrounding the infiltration domain. We do not assume that walls belong to infiltration domains.

Degree two vertices are excluded in the original graph (they can be removed since we assume standard vertex conditions), but such vertices may appear after the reduction.

**Definition 22.8** A vertex $V^0$ in a connected graph is called a **bottleneck** vertex if the graph becomes disconnected when this vertex is dissolved.

**Fig. 22.8** Graph with a bottleneck vertex

This notion is close to that of bridges in a graph, that is, edges whose removal makes the graph disconnected. Bottleneck vertices play for metric graphs a role analogous to that of bridges in discrete graphs. Consider the graph presented in Fig. 22.8: only the vertex $V^1$ is a bottleneck.
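Since dissolving a vertex keeps all edges (each incident edge merely acquires a pendant endpoint), the graph disconnects under dissolution exactly when the vertex is a cut vertex of the underlying discrete graph, provided loops are excluded. A minimal sketch, on a hypothetical graph of two triangles sharing a vertex:

```python
# Illustrative sketch of Definition 22.8: a vertex is a bottleneck iff
# the graph minus that vertex is disconnected (no loops assumed).
from collections import defaultdict

def is_bottleneck(edges, v):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    rest = [u for u in adj if u != v]
    if len(rest) <= 1:
        return False
    # Depth-first search on the graph with v removed.
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        u = stack.pop()
        for w in adj[u] - {v} - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) < len(rest)

# Two triangles sharing the single vertex "B": only B is a bottleneck.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("B", "D"), ("D", "E"), ("E", "B")]
print([v for v in "ABCDE" if is_bottleneck(edges, v)])   # ['B']
```

The analogy with bridges is visible in the code: a bridge test would instead remove one edge and check connectivity.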

One should not think that the resulting graph always has just two connected components; see, for example, the initial graph in Fig. 22.6.

**Definition 22.9** Let $D\_j$ be an infiltration domain in a metric graph $\Gamma$. Then the domain's **wall** $W\_j$ is the union of all edges in the complement of $D\_j$ connected by at least one of their endpoints to the boundary $\delta\_\Gamma D\_j$ (taken with respect to the original graph $\Gamma$).

Consider the graph in Fig. 22.7. The infiltration domain is marked in red and is given by the 4-star. The boundary is given by the pendant vertices in the star, namely $V^1, V^2, V^3, V^4$. The wall is marked by thick red lines and forms a single cycle connected via 5 other vertices to the rest of the graph.

With these definitions one may say that every infiltration domain is separated from the rest of the graph by its wall and the set of bottleneck vertices. Note that the two edges connected to degree two bottleneck vertices belong to the wall.

Bottlenecks connecting more than two components in the original graph always remain if the graph is reduced starting from a single contact vertex and therefore prevent further expansion of infiltration domains (see Fig. 22.6). On the other hand bottlenecks connecting just two components in the original graph are not dangerous and disappear, provided the degree of such vertices with respect to each of the graph components is not 2 (see Fig. 22.9).

One should not imagine that the infiltration domains are always surrounded by walls as in Fig. 22.7; such a picture is suitable for planar graphs only. In fact any metric graph may serve as a wall; the only restriction is that every edge should be connected by one of its ends to a degree two vertex.

**Fig. 22.9** Degree two bottlenecks may allow dissolution

**Fig. 22.10** Any metric graph could be a wall

**Fig. 22.11** The infiltration domain for the graph $K\_{3,3}$

Let us return to the graph presented in Fig. 22.3: as shown in Fig. 22.10, this graph serves as a wall for the infiltration domain on a larger graph. One places degree two dummy vertices in the middle of each edge in the original graph and connects all these vertices by a star. The middle vertex in the star serves as a contact vertex for the new larger graph.

**Example 22.10** Consider the graph $K\_{3,3}$, the complete bipartite graph on $3+3$ vertices presented in Fig. 22.11. Assume without loss of generality that $V^4$ is a contact vertex. Dissolving the vertex and peeling the three pendant edges we get three new contact vertices $V^1, V^3,$ and $V^5$. The procedure stops since all these vertices have degree two with respect to the remaining graph. It follows that the infiltration domain is just this 3-star. What is interesting is that the whole remaining graph forms the wall for the infiltration domain. The wall can be seen as a watermelon graph on three parallel edges with extra contact vertices in the middle of the edges. We see that the wall not only contains cycles but also that these cycles are not independent.

**Fig. 22.12** Graphs with disjoint infiltration domains and with (**a**) a common wall; (**b**) partially common walls; (**c**) disjoint walls; (**d**) disjoint walls

We summarise our studies of infiltration domains as

**Observation 22.11** *Consider any infiltration domain and its wall in a finite graph $\Gamma$. The following possibilities may occur:*


One may continue these studies by investigating how two infiltration domains and their walls may be situated in relation to each other. See Fig. 22.12 for illustrations. Note that two subgraphs are considered disjoint if they share no more than a finite number of points.

## **22.4 Solution of the Inverse Problem Via the MBC-Method**

The idea behind solving the inverse problem for general graphs is to find a sufficient number of contact vertices so that the corresponding infiltration domains cover, or almost cover, the original graph. In other words we are going to assume first that the **skeleton** 

$$\mathbb{S} := \Gamma \backslash \left( \bigcup\_{V^j \in \partial \Gamma} D\_j \right) \tag{22.18}$$

is empty or sufficiently thin. Remember that the graph complement is understood in the sense of Definition 22.7. We shall present theorems providing sufficient conditions for graph reconstruction; the theorems are ordered so that the skeleton becomes progressively larger. Naturally, in each new theorem stronger assumptions on the spectrum or eigenfunctions will be required. As before we distinguish topological assumptions (enumerated by numbers) from spectral ones (enumerated by letters). The spectral assumptions are generically satisfied.

The first result (see Theorem 22.12 below) may seem rather straightforward, but to prove it rigorously a few important points must be clarified:


**Theorem 22.12** *Let $\Gamma$ be a finite compact pendant free metric graph without degree two vertices and loops, and let $L^{\rm st}\_{q,a}$ be the corresponding standard Schrödinger operator. Assume that*

*(1) the contact set $\partial\Gamma$ is chosen so that the infiltration domains corresponding to the vertices $V^j \in \partial\Gamma$ cover the original graph $\Gamma$,*

$$\bigcup\_{V^j \in \partial \Gamma} D\_j = \Gamma,\tag{22.19}$$

*i.e. the skeleton* S *is empty,* 

$$
\mathbb{S} = \emptyset. \tag{22.20}
$$

*Assume in addition the following generically satisfied assumption:* 

*(a) the Dirichlet eigenfunctions on connected subgraphs*<sup>1</sup> *of $\Gamma$ do not vanish identically on any edge.*

*Then the M-function associated with the contact set $\partial\Gamma$ and known for all possible signings (magnetic fluxes $\Phi\_i = 0, \pi$, $i = 1, 2, \dots, \beta\_1$) determines the graph $\Gamma$ and the potential $q$.*

*Proof* Two properties of graph M-functions in the case of standard vertex conditions will be used:


Only the second statement needs a proof. This connection has already been established for metric trees (see Theorem 20.9); the same ideas can be applied to graphs with cycles as follows. 

**Step 1: M-function and Distances Between Contact Points** We will establish a relation between the M-function and the distances between the contact points. As was the case with trees the simplest way to establish such a connection is via the dynamical response operator. Definition 20.7 can be generalised as follows:

**Definition 22.13** Let $\mathbf R^T$ be the dynamical response operator associated with the metric graph $\Gamma$ and the contact set $\partial\Gamma$. Let $V^i$ and $V^j$ be any two vertices from $\partial\Gamma$. Then the travelling time $t(V^i, V^j)$ between the vertices is given by

$$t(V^{i}, V^{j}) = \sup \left\{ T : \mathbf{R}^{T}\_{V^{i}, V^{j}} \equiv 0 \right\},\tag{22.21}$$

where $\mathbf R^T\_{V^i, V^j}$ denotes the entry of the matrix operator $\mathbf R^T$ associated with the vertices $V^i$ and $V^j$.

<sup>1</sup> Remember that we consider only subgraphs $\Gamma\_1$ obtained by selecting several edges in $\Gamma$ and preserving all possible connections between them, *i.e.* keeping the equivalence relations given by the vertices in $\Gamma$. The corresponding Dirichlet operator is given by introducing Dirichlet conditions at all boundary vertices with respect to $\Gamma$, *i.e.* at the vertices in $\Gamma\_1$ whose equivalence classes are strictly smaller than those in $\Gamma$. The subgraphs are connected if and only if the Dirichlet conditions at the boundary vertices do not make them disconnected.

Then Lemma 20.8 is modified as follows:

**Lemma 22.14** *Consider the Schrödinger equation on a finite metric graph with standard vertex conditions. Then the travelling time between any two contact vertices $V^i$ and $V^j$ is equal to the distance $\mathrm{dist}(V^i, V^j)$ between the vertices.*

*Proof of Lemma 22.14* The proof is almost identical to the proof of Lemma 20.8, since in determining the travelling time one checks how the front of the wave initiated at the contact vertex $V^j$ spreads along the graph. If the path connecting $V^j$ to $V^i$ is unique, then the front of the wave evolves as if it were travelling along a certain tree cut from $\Gamma$: the tree contains the path connecting $V^j$ with $V^i$ and arbitrarily short but non-zero pendant edges attached to each vertex on the path. The entry of the response operator associated with the vertices $V^j$ and $V^i$ contains a $\delta$ term delayed by the length of the shortest path. The waves reflected from the vertices and/or coming along any other path are further delayed and do not contribute to this singularity. Then Lemma 20.8 implies that the travelling time coincides with the distance between the vertices.

Consider the case in which there are several shortest paths connecting the two vertices. The same argument as above applies to each path. It follows that the kernel of the $V^i, V^j$ entry of the response operator contains the sum of delayed $\delta$ terms coming from each of the shortest paths. The amplitude of each term is equal to the product of the transmission coefficients from the vertices along the corresponding path. In the case of standard vertex conditions, all transmission coefficients are positive (see (3.41)), hence contributions from different shortest paths cannot cancel each other and the travelling time again equals the distance.
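The identification of travelling time with graph distance reduces, computationally, to a shortest-path problem on the metric graph. A minimal sketch (with a hypothetical cycle graph as edge data) computes $\mathrm{dist}(V^i, V^j)$ with Dijkstra's algorithm:

```python
# Illustrative sketch for Lemma 22.14: the wave front emitted at a
# contact vertex reaches another vertex after the time dist(V^i, V^j),
# i.e. the length of the shortest path, computed here by Dijkstra.
import heapq

def dist(edges, src, dst):
    """Shortest metric distance between two vertices; edges carry lengths."""
    adj = {}
    for a, b, length in edges:
        adj.setdefault(a, []).append((b, length))
        adj.setdefault(b, []).append((a, length))
    best, queue = {src: 0.0}, [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == dst:
            return d
        if d > best.get(u, float("inf")):
            continue                      # stale queue entry
        for w, length in adj[u]:
            nd = d + length
            if nd < best.get(w, float("inf")):
                best[w] = nd
                heapq.heappush(queue, (nd, w))
    return float("inf")

# A cycle on four vertices: two shortest paths of equal length exist;
# their delta-contributions add up and cannot cancel, since all
# transmission coefficients are positive for standard conditions.
edges = [("V1", "V2", 1.0), ("V2", "V3", 1.0), ("V3", "V4", 1.0), ("V4", "V1", 1.0)]
print(dist(edges, "V1", "V3"))   # 2.0 (via V2 and, equally, via V4)
```

The two equal-length paths in the example mirror the second case treated in the proof above.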

**Step 2: Recovery of the Infiltration Domains** We start by dissolving the contact vertex $V^1$, leading to the metric graph $\Gamma\_1$ with $d\_1 = \deg V^1$ pendant edges. Choosing any of the pendant vertices, we recover the length and the potential on the corresponding edge, which we denote by $E\_1$. The length is reconstructed in the same way as for trees, since for small times the nearest vertex acts as if it were a part of a star graph (see Sect. 20.2.2). Let us denote by $V'$ the vertex to which the pendant edge $E\_1$ is attached. Note that $V'$ cannot coincide with $V^1$ since $\Gamma$ has no loops. Now the edge $E\_1$ can be peeled away. The contact set for the new graph contains $V'$ and all $d\_1 - 1$ pendant vertices.

Now we turn to the second pendant vertex and reconstruct the corresponding pendant edge $E\_2$ (*i.e.* its length and the potential on it). The second edge is connected to the vertex $V'$ if and only if the travelling time between the second pendant vertex and $V'$ is equal to the length of $E\_2$. If this is the case, then the edges $E\_1$ and $E\_2$ were parallel. Otherwise we denote by $V''$ the new vertex to which $E\_2$ is connected. We can peel away the edge $E\_2$.

Repeating this procedure $d\_1$ times, one gets a new graph $\Gamma\_2$. All newly-labeled vertices turn into contact vertices for $\Gamma\_2$. The number of contact vertices is between 1 (all peeled pendant edges are parallel) and $d\_1$ (no two peeled pendant edges are parallel). If $\Gamma$ is a watermelon, then $\Gamma\_1$ is a star graph and the reconstruction is accomplished since $\Gamma\_2$ is trivial as a metric graph (one vertex, no edges). If $\Gamma\_2$ contains pendant vertices we repeat this procedure until we obtain a pendant free graph.

The dissolution-peeling procedure is applied again to the smaller graph $\Gamma\_2$. The only difference is that the graph may have not one but several contact vertices. Therefore each time a pendant edge is peeled away, one has to compare its length to the distance to each of the contact vertices: if the length and the distance are equal, then the edge is attached to that particular vertex; otherwise one introduces a new vertex to which the edge is attached.
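The identification step just described is a simple comparison of the peeled edge's length with the known travelling times; the following fragment (function name and vertex labels are hypothetical) makes the decision rule explicit:

```python
# Illustrative sketch of the identification step in the dissolution-
# peeling procedure: a peeled pendant edge of length L is attached to
# an already recovered vertex W iff the travelling time from the
# pendant vertex to W equals L; otherwise a new vertex is introduced.
def attach_to(edge_length, travel_times, tol=1e-9):
    """travel_times: {vertex name: travelling time from the pendant vertex}."""
    for w, t in travel_times.items():
        if abs(t - edge_length) < tol:
            return w          # the edge ends at the known vertex w
    return None               # a new vertex must be introduced

assert attach_to(1.5, {"V'": 1.5, "V2": 2.3}) == "V'"
assert attach_to(0.9, {"V'": 1.5, "V2": 2.3}) is None
```

In exact arithmetic the comparison is an equality; the tolerance is only a numerical convenience.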

This procedure stops when either the whole graph $\Gamma$ is recovered or all contact vertices are either of degree two or bottlenecks with respect to the unrecovered part of $\Gamma$. The corresponding recovered subgraph of $\Gamma$ is the infiltration domain $D\_1$. (We reiterate that $D\_1$ may depend on the order in which the vertices are dissolved.)

**Step 3: Connecting Different Infiltration Domains Together** Starting from different contact vertices $V^1, \dots, V^{|\partial\Gamma|} \in \partial\Gamma$, the corresponding infiltration domains $D\_j$ are recovered. Under condition (22.19) some of these domains should have common vertices. One has to understand how the $D\_j$'s are connected to each other. One should not exclude the case when $D\_i \subset D\_j$ for certain $i \neq j$.

Assume that $D\_1$ is recovered. It cannot be excluded that some other original contact vertices belong to $D\_1$; therefore each time pendant edges are peeled away when reconstructing $D\_1$, one should not only compare the length of the edge to be peeled to the distance between the pendant vertex and any other recovered vertex in $D\_1$, but also to all original contact vertices in $\Gamma$: if the length and the distance are equal, then the vertex the pendant edge is attached to should be identified with the already known contact vertex.

The same procedure should be applied when any other infiltration domain is recovered: peeling away pendant edges one compares the length of the edge to the distance to all recovered vertices and identifies the vertex the edge is connected to in the case of equality. In this way connections of new infiltration domains to already recovered domains are also established.

Note that when reconstructing infiltration domains, we do not pay attention to synergy effects that may come from the interaction between neighbouring infiltration domains. This is in order to make the formulation of the theorem more transparent.

Under condition (22.19) the graph $\Gamma$ is completely recovered when all infiltration domains are determined and it is known how they are glued together.

In the above theorem the skeleton is assumed to be empty, which guarantees direct reconstruction of $\Gamma$ from the $D\_j$. In the following theorem the skeleton may be non-empty but is assumed to be the smallest possible: every edge in the skeleton connects the infiltration domains.

**Theorem 22.15** *Let $\Gamma$ be a finite compact pendant free metric graph without degree two vertices and loops and let $L^{\rm st}\_{q,a}$ be the corresponding standard Schrödinger operator. Assume that*

*(1) the contact set $\partial\Gamma$ is chosen so that the union of infiltration domains corresponding to the vertices $V^j \in \partial\Gamma$ contains all vertices in the original graph $\Gamma$:*

$$\bigcup\_{V^j \in \partial \Gamma} D\_j \supset \bigcup\_{V^m \in \mathbf{V}} V^m. \tag{22.22}$$

*Then it holds that every edge in the skeleton $\mathbb{S} = \Gamma \backslash \bigcup\_{V^j \in \partial\Gamma} D\_j$ connects two of its contact vertices.*

*Assume in addition the following generically satisfied assumptions:*

*(a) the Dirichlet eigenfunctions on connected subgraphs*<sup>2</sup> *of $\Gamma$ do not vanish identically on any edge;*

*(b) the Dirichlet spectra on the edges are disjoint.*

*Then the M-function associated with the contact set $\partial\Gamma$ and known for all possible signings (magnetic fluxes $\Phi\_i = 0, \pi$, $i = 1, 2, \dots, \beta\_1$) determines the graph $\Gamma$ and the potential $q$.*

*Proof* Assume that the contact set $\partial\Gamma$ is fixed so that all assumptions of the theorem are fulfilled. Repeating the proof of Theorem 22.12 we conclude that all infiltration domains (including the potential on them) and their connections are recovered. It remains to determine the skeleton $\mathbb S$ and the potential on it (Fig. 22.13). Removing all infiltration domains one obtains the M-function for the skeleton associated with all skeleton contact vertices, coinciding with those vertices in the skeleton which are simultaneously boundary vertices for a certain infiltration domain:

$$\partial\mathbb{S} = \left(\bigcup\_{V^j \in \partial\Gamma} \delta\_{\Gamma} D\_j\right) \bigcap \mathbb{S}.\tag{22.23}$$

Under condition (22.22) every vertex in $\mathbb S$ is a contact vertex.

<sup>2</sup> See footnote 1 on page 546.

If the original graph and hence the skeleton contain no parallel edges, then uniqueness of the skeleton and potential on it follows directly from Theorem 21.8.

To reconstruct the skeleton in the general case we need to modify the proofs of Theorems 21.6 and 23.6 and hence of Theorem 21.8. We shall see possible multiple edges in $\mathbb S$ as watermelon graphs $\mathbb W\_j$ connecting two vertices. The Dirichlet eigenvalues on the edges give singularities of the M-function for the skeleton. Each eigenfunction determines precisely two singularities, since the Dirichlet spectra on the edges are disjoint (assumption *(b)*) and loops are not allowed. In this way we obtain the Dirichlet spectra of each watermelon graph $\mathbb W\_j$. Similar to (21.8) the M-functions are determined up to constant matrices $\mathbf A\_j$

$$\mathbf{M}\_{j}(\lambda) = \mathbf{A}\_{j} + \sum\_{\lambda\_{n}^{\mathrm{D}}(\mathbb{W}\_{j})} \frac{\lambda - \lambda'}{(\lambda\_{n}^{\mathrm{D}} - \lambda)(\lambda\_{n}^{\mathrm{D}} - \lambda')} \langle \partial \boldsymbol{\psi}\_{n}^{\mathrm{D}}|\_{\partial \mathbb{S}}, \cdot \rangle\_{\ell\_{2}(\partial \mathbb{S})} \partial \boldsymbol{\psi}\_{n}^{\mathrm{D}}|\_{\partial \mathbb{S}}.\tag{22.24}$$

In this formula we do not require knowledge of the normal derivatives $\partial\psi\_n^{\rm D}$; instead we just sum the corresponding singularities in $\mathbf M\_{\mathbb S}$. To determine $\mathbf A\_j$ from the asymptotics given by (21.11) we need to know the number of parallel edges in each watermelon graph $\mathbb W\_j$. We check whether the singularities of $\mathbf M\_{\mathbb W\_j}$ depend on any magnetic flux: if $\mathbf M\_{\mathbb W\_j}$ depends on $n$ fluxes, then the degrees of the vertices in $\mathbb W\_j$ are equal to $N\_j = n + 1$. Modifying formula (21.10) we obtain the matrices $\mathbf A\_j$ and therefore accomplish the reconstruction of $\mathbf M\_{\mathbb W\_j}$.

If no parallel edges are present, then we are done, since we know that the M-function for a single interval determines its length and the potential on it. It remains to prove that the M-function for a watermelon graph determines the edge lengths and the potential on it.

Suppose $\mathbb W\_j$ is a watermelon graph formed by $N\_j$ edges connecting the vertices $V^1$ and $V^2$, which we assume to be contact vertices. If $N\_j = 2$, then the edges form a cycle of discrete length two. The reconstruction follows from Theorem 23.4, case (2) from Chap. 23. Therefore in what follows we assume that the number of edges in the watermelon graph is at least three. Let us remove one of the two vertices, say $V^2$, from the set of contact vertices. Dissolving the remaining vertex $V^1$ we get the M-function for the star graph and can therefore determine the lengths of all edges in the watermelon and the potential on it. This completes the proof of the theorem since the skeleton is completely recovered.

Assumptions *(a)* and *(b)* on the spectrum and eigenfunctions are generically satisfied in the following sense: assume that a potential *q* on a copy of the real line ℝ is fixed. Then choosing the edges *E<sub>n</sub>* arbitrarily, and in this way fixing the potential *q* on Γ, almost surely leads to a quantum graph satisfying the conditions above. More precisely, the set of endpoints for which some of the Dirichlet eigenfunctions vanish identically on certain edges is meagre. This can be proved following [82] since it is assumed that the graphs do not have loops and the number of subgraphs involved is finite.

Analysing the proof we see that the assumptions are too restrictive and may be weakened without actually changing the proof. We decided not to include such weaker but cumbersome assumptions in order to make the formulation of the main theorem clearer—these assumptions are generically satisfied anyway. The most obvious extensions of the theorem are as follows:


In the following theorem the skeleton is permitted to be even bigger; nevertheless the infiltration domains still cover a major part of the original graph. To compensate for the relaxation, a new assumption *(2)* is introduced: it is not generically satisfied and is of a topological nature. In what follows it will be convenient to consider single intervals as a special case of star graphs.

**Theorem 22.16** *Let* Γ *be a finite compact pendant-free metric graph without degree two vertices and loops and let L*<sup>st</sup><sub>*q,a*</sub> *be the corresponding standard Schrödinger operator. Assume that*

*(1) the contact set ∂*Γ *is chosen so that the union of the infiltration domains corresponding to the vertices from ∂*Γ *and their walls W<sub>j</sub> covers the original graph* Γ*:*

$$\bigcup\_{V^{j} \in \partial \Gamma} \left( D\_{j} \cup W\_{j} \right) \supset \Gamma \supset \bigcup\_{V^{j} \in \mathbf{V}} V^{j};\tag{22.25}$$

*(2) the skeleton contains no cycles of discrete length less than or equal to* 4*.*

*Then the skeleton* S = Γ \ ⋃<sub>*V<sup>j</sup>* ∈ *∂*Γ</sub> *D<sub>j</sub>* *is a union of star graphs joined at the skeleton contact vertices ∂*S*.*

*Assume in addition the following generically satisfied assumptions:* 


*Then the M-function associated with the contact set ∂*Γ *and known for all possible signings (magnetic fluxes Φ<sub>i</sub>* = 0*, π, i* = 1*,* 2*,...,β*<sub>1</sub>*) determines the graph* Γ *and the potential q.*

<sup>3</sup> See footnote 1, page 546.

*Proof* Our first step is to prove that the skeleton is formed by star graphs connected at contact vertices. Consider formula (22.23) for the skeleton contact set. Every edge in the skeleton belongs to a wall, and therefore at least one of its endpoints is contained in ⋃<sub>*V<sup>j</sup>* ∈ *∂*Γ</sub> *∂D<sub>j</sub>*. This vertex must be from *∂*S, for otherwise the edge does not belong to the skeleton. Summing up, every edge in the skeleton has at least one endpoint from the contact set *∂*S. Introducing Dirichlet conditions at the skeleton contact points dissolves the skeleton into a set of star graphs and single edges, which are treated as star graphs as well.

Repeating the arguments used in the proof of Theorem 22.15, we conclude that the M-function for Γ determines the M-function for S associated with all contact vertices from *∂*S. We are going to use Theorem 21.7 to reconstruct the skeleton, so let us check that all conditions in the theorem are satisfied:<sup>4</sup>

- *(1)* cycles of discrete length 2 are forbidden in the skeleton, hence no star graph in S has two pendant vertices coming from the same contact vertex;
- *(2)* cycles of discrete length 2, 3 and 4 are forbidden in the skeleton, hence no two star graphs have more than one common vertex;
- *(a)* it is assumed that the spectra of the Dirichlet operators on the star graphs forming the skeleton are disjoint.

We see that all conditions of Theorem 21.7 are satisfied for the skeleton dismantled by the contact vertices into star graphs. Hence the corresponding M-function determines the skeleton and the potential on it. This completes the reconstruction of Γ and *q*.

As we already pointed out, Assumptions *(a)* and *(b)* are generically satisfied, while Assumptions *(1)* and *(2)* are related to the topology of Γ and the choice of *∂*Γ. Assumption *(2)* can be weakened as follows: reconstructing the skeleton we have not used the dependence of the corresponding M-function on the magnetic fluxes associated with the skeleton. One may allow parallel edges and parallel stars. We leave this as a problem for the reader.

**Problem 95** Study whether Assumption (3) can be weakened while still ensuring reconstruction of Γ and *q* by applying the MBC-method to the skeleton.

Another possible generalisation concerns vertex conditions: it is enough to assume that the vertex conditions at the skeleton inner vertices are generalised delta couplings (required by Theorem 21.7).

<sup>4</sup> The list refers to conditions in Theorem 21.7.

**Matryoshka-Type Structure of Reconstructable Graphs** It is clear that the graphs described by the three theorems above do not exhaust the whole family of graphs reconstructable via the MBC-method. Our goal here is to indicate how such a family can be described. We are going to focus on topological properties of the graphs, assuming that the generically satisfied spectral conditions are always fulfilled. It will be convenient to see any metric graph as a pair *(*Γ*, ∂*Γ*)* consisting of the metric graph Γ and the contact set *∂*Γ.

The families of graphs covered by Theorems 22.12, 22.15, and 22.16 have one common feature: the inverse problem for the corresponding skeleton has already been solved (or can be easily solved). One may determine an inductive procedure characterising pairs that are reconstructable by our methods. This procedure brings to mind Russian matryoshka dolls.

Assume that we have already characterised a family F<sub>0</sub> of reconstructable pairs, for example the one given by Theorem 22.16. Then we may also reconstruct all pairs whose skeletons belong to the original family F<sub>0</sub>. Denoting the new family by F<sub>1</sub> we may repeat our argument and obtain families F<sub>2</sub>, F<sub>3</sub>, *...* .

Not all pairs are reconstructable: bottlenecks and degree two vertices make it impossible to reconstruct certain graphs, in the same way that they prevent further growth of infiltration domains. In fact, every time the infiltration domain does not cover the whole graph we obtain an example of a metric graph with one contact point that does not belong to the family (see Figs. 22.4, 22.5, and 22.7). We provide here a few more examples:


It is clear that the family F = ⋃<sub>*n*</sub> F<sub>*n*</sub> of all reconstructable pairs possesses a certain monotonicity property:

*Increasing the contact set, the pair remains in the family:*

$$\left\{ \begin{array}{l} (\Gamma, \partial \Gamma) \in \mathcal{F}, \\ \partial \Gamma \subset \partial' \Gamma \end{array} \right\} \Rightarrow (\Gamma, \partial' \Gamma) \in \mathcal{F}. \tag{22.26}$$

Moreover Theorem 21.8 implies that every pair with the maximal possible contact set (consisting of all vertices) is reconstructable, provided the graph has no loops or parallel edges. Note that the assumption concerning parallel edges may be removed as was done in the proof of Theorem 22.15.

On the other hand, fixing the contact set and making the metric graph smaller does **not** necessarily guarantee reconstructability:

$$\left\{ \begin{array}{l} (\Gamma, \partial \Gamma) \in \mathcal{F}, \\ \partial \Gamma \subset \Gamma' \subset \Gamma \end{array} \right\} \not\Rightarrow (\Gamma', \partial \Gamma) \in \mathcal{F}. \tag{22.27}$$

Consider for example the graphs presented in Figs. 22.3 and 22.4: the second graph is obtained from the first one by removing two edges.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 23 Magnetic Boundary Control II: Graphs on One Cycle and Dependent Subtrees**

The MBC-method as formulated in the previous chapter can only be applied to graphs with several independent cycles, since it is required that dissolving vertices leads to at least two cycles being broken. Hence to complete the picture it is necessary to understand how to apply the ideas of the MBC-method to graphs with one cycle. The set of such graphs is rather limited: loops and cycles with several contact vertices (remember that we need to look at pendant-free graphs only). We present explicit procedures proving that the inverse problem for cycle graphs is generically solvable, while loops create problems that justify the exclusion of graphs with loops from our studies in Chap. 22. The methods we discuss here are specific to graphs with one cycle and cannot be directly extended to arbitrary graphs. This is the reason we study such graphs in a separate chapter.

Another result included in this chapter is the generalisation of the dismantling procedure presented first in Chap. 21: it turns out that the MBC-method helps to solve the inverse problem in the case where subtrees are parallel. This approach is close to the solution of the inverse problem for graphs with one cycle.

## **23.1 Inverse Problem for the Loop**

## *23.1.1 On Edge and Loop M-functions*

Our goal here is to establish an explicit connection between the edge and loop M-functions. This problem was considered for the first time in [188].

Consider any compact edge [*x*<sub>1</sub>*, x*<sub>2</sub>] and any magnetic differential expression *τ<sub>q,a</sub>* (see (23.1)) on it. Two possible metric graphs can be formed from this edge: the single edge graph Γ<sup>edge</sup><sub>ℓ<sub>1</sub></sub> = Γ<sub>(1.1)</sub> and the loop graph Γ<sup>loop</sup><sub>ℓ<sub>1</sub></sub> = Γ<sub>(1.2)</sub>, ℓ<sub>1</sub> = *x*<sub>2</sub> − *x*<sub>1</sub>*.* The corresponding Schrödinger operators are then uniquely defined if we assume standard vertex conditions. An M-function is associated with each of these graphs:

• the 2 × 2 matrix edge M-function introduced in (5.51)

$$\mathbf{M}^{\text{edge}} = \mathbf{M}\_{\Gamma\_{\ell\_1}^{\text{edge}}};$$

• the scalar loop M-function

$$\mathbf{M}^{\text{loop}} = \mathbf{M}\_{\Gamma\_{\ell\_1}^{\text{loop}}}$$

(see (23.2) below).

The edge M-function relates the boundary values of an arbitrary solution of the differential equation on the edge, hence it is not surprising that **M**<sup>edge</sup> determines **M**<sup>loop</sup>, while the opposite reconstruction is not always possible.

To calculate the (scalar) M-function **M**loop, one has to solve the magnetic Schrödinger equation

$$
\tau\_{q,a}\psi = \left(-\frac{1}{i}\frac{d}{dx} + a(\mathbf{x})\right)^2\psi + q(\mathbf{x})\psi = \lambda\psi\tag{23.1}
$$

subject to the continuity condition at *V*<sup>1</sup>. The M-function is

$$\mathbf{M}^{\text{loop}} = \frac{\partial \psi(V^1)}{\psi(V^1)} = \frac{\left(\psi'(\mathbf{x}\_1) - ia(\mathbf{x}\_1)\psi(\mathbf{x}\_1)\right) + \left(-\psi'(\mathbf{x}\_2) + ia(\mathbf{x}\_2)\psi(\mathbf{x}\_2)\right)}{\underbrace{\psi(\mathbf{x}\_1)}\_{\equiv\ \psi(\mathbf{x}\_2)}}. \tag{23.2}$$

Consider the unitary transformation

$$
\psi(\mathbf{x}) \mapsto \hat{\psi}(\mathbf{x}) = \left( U\psi \right)(\mathbf{x}) = \exp\left( -i \int\_{\mathbf{x}\_1}^{\mathbf{x}} a(\mathbf{y}) d\mathbf{y} \right) \psi(\mathbf{x}). \tag{23.3}
$$

Direct calculations show that this transformation eliminates the magnetic potential on the loop:

$$U\tau\_{q,a}U^{-1} = \tau\_{q,0},$$

and results in the modification of the boundary values

$$\begin{cases} \psi(V^1) = \hat{\psi}(\mathbf{x}\_1) = e^{i\Phi} \hat{\psi}(\mathbf{x}\_2), \\\\ \partial \psi(V^1) = \hat{\psi}'(\mathbf{x}\_1) - e^{i\Phi} \hat{\psi}'(\mathbf{x}\_2), \end{cases} \tag{23.4}$$

#### 23.1 Inverse Problem for the Loop 557

where

$$\Phi = \int\_{\mathbf{x}\_1}^{\mathbf{x}\_2} a(\mathbf{y}) d\mathbf{y} \tag{23.5}$$

is the flux of the magnetic field through the loop. Note that the magnetic potential is eliminated from the vertex conditions as well—conventional derivatives are used instead of the extended derivatives as in (23.2). It is clear that the M-function depends on the flux Φ, not on the particular form of the magnetic potential:

$$\mathbf{M}^{\rm loop} = \mathbf{M}^{\rm loop}(\lambda, \Phi).$$

In what follows we are going to use the Schrödinger operator given by the differential expression *τ<sub>q,0</sub>* = −*d*<sup>2</sup>*/dx*<sup>2</sup> + *q(x)*, moving Φ into the definition of the boundary values. There will be no reason to continue placing hats over *ψ.*

We repeat our calculations to derive

$$\mathbf{M}^{\rm loop}(\lambda, \Phi) = \frac{\partial \psi(V^1)}{\psi(V^1)} = \frac{\psi'(\mathbf{x}\_1) - e^{i\Phi} \psi'(\mathbf{x}\_2)}{\psi(\mathbf{x}\_1)},\tag{23.6}$$

where *ψ* is a solution to the stationary Schrödinger equation, free from the magnetic potential,

$$
\tau\_{q,0}\psi = -\psi'' + q(\mathbf{x})\psi = \lambda\psi,\quad \text{Im}\,\lambda \neq 0. \tag{23.7}
$$

In fact we do not need to know how the solution looks inside the interval; it is enough to know how its boundary values are related. This relation is given by the edge M-function introduced in (5.51)

$$\mathbf{M}\_{\mathbf{e}}(\lambda) = \begin{pmatrix} -\frac{t\_{11}(k)}{t\_{12}(k)} & \frac{1}{t\_{12}(k)}\\ \frac{1}{t\_{12}(k)} & -\frac{t\_{22}(k)}{t\_{12}(k)} \end{pmatrix} : \begin{pmatrix} \psi(\mathbf{x}\_{1})\\ \psi(\mathbf{x}\_{2}) \end{pmatrix} \mapsto \begin{pmatrix} \psi'(\mathbf{x}\_{1})\\ -\psi'(\mathbf{x}\_{2}) \end{pmatrix},\tag{23.8}$$

where *t<sub>ij</sub>* are the entries of the transfer matrix *T<sub>q</sub>(λ)* (introduced in Sect. 5.1.1),

$$T\_q(\lambda; \mathbf{x}\_1, \mathbf{x}\_2) = \begin{pmatrix} t\_{11}(k) \ t\_{12}(k) \\ t\_{21}(k) \ t\_{22}(k) \end{pmatrix} : \begin{pmatrix} \psi(\mathbf{x}\_1) \\ \psi'(\mathbf{x}\_1) \end{pmatrix} \mapsto \begin{pmatrix} \psi(\mathbf{x}\_2) \\ \psi'(\mathbf{x}\_2) \end{pmatrix}. \tag{23.9}$$

Recall that when defining the edge M-function, we assumed that the magnetic potential is zero (as in (23.7)). The formula for the M-function holds for any Im *λ* ≠ 0 since *t*<sub>12</sub>*(k)* ≠ 0 outside the real axis of *λ* = *k*<sup>2</sup>; otherwise the (self-adjoint) Dirichlet Schrödinger operator on [*x*<sub>1</sub>*, x*<sub>2</sub>] would have non-real eigenvalues.

**Problem 96** Justify that if *t*<sub>12</sub>*(k<sub>j</sub>)* = 0, then *λ<sub>j</sub>* = *k<sub>j</sub>*<sup>2</sup> is an eigenvalue of the Dirichlet Schrödinger operator on [*x*<sub>1</sub>*, x*<sub>2</sub>] with the same potential *q(x).*

In what follows we shall need asymptotic representations for the entries of the transfer matrix. All these representations follow directly from the integral equations (5.6), which can be solved by iteration. All entries of the transfer matrix are entire functions of *k* of exponential type ℓ<sub>1</sub> and the following asymptotic representations hold in the complex plane *k* ∈ ℂ

$$\begin{aligned} t\_{11}(k) &= \cos \ell\_1 k + \mathcal{O}\left(\frac{e^{\ell\_1|\text{Im}\,k|}}{|k|}\right), \\ t\_{12}(k) &= \frac{1}{k} \sin \ell\_1 k + \mathcal{O}\left(\frac{e^{\ell\_1|\text{Im}\,k|}}{|k|^2}\right). \end{aligned} \tag{23.10}$$

The asymptotic representations for real *<sup>k</sup>* <sup>∈</sup> <sup>R</sup> are

$$\begin{split} t\_{11}(k) \equiv c(k, \mathbf{x}\_2) &= \cos k \ell\_1 - \frac{\sin k \ell\_1}{2k} \int\_{\chi\_1}^{\chi\_2} q(t) dt \\ &- \frac{1}{2k} \int\_{\chi\_1}^{\chi\_2} \sin k (\mathbf{x}\_1 + \mathbf{x}\_2 - 2t) q(t) dt + \mathcal{O}\left(\frac{1}{k^2}\right), \\ t\_{12}(k) \equiv s(k, \mathbf{x}\_2) &= \frac{\sin k \ell\_1}{k} + \frac{\cos k \ell\_1}{2k^2} \int\_{\chi\_1}^{\chi\_2} q(t) dt \\ &- \frac{1}{2k^2} \int\_{\chi\_1}^{\chi\_2} \cos k (\mathbf{x}\_1 + \mathbf{x}\_2 - 2t) q(t) dt + \mathcal{O}\left(\frac{1}{k^3}\right), \\ t\_{22}(k) \equiv s'(k, \mathbf{x}\_2) &= \cos k \ell\_1 - \frac{\sin k \ell\_1}{2k} \int\_{\chi\_1}^{\chi\_2} q(t) dt \\ &+ \frac{1}{2k} \int\_{\chi\_1}^{\chi\_2} \sin k (\mathbf{x}\_1 + \mathbf{x}\_2 - 2t) q(t) dt + \mathcal{O}\left(\frac{1}{k^2}\right). \end{split} \tag{23.11}$$

**Problem 97** Prove formulas (23.11) by iterating the integral equation (5.6) for the standard solutions *c(k, x)* and *s(k, x)*.
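The transfer matrix and its asymptotics can also be checked numerically. The following Python sketch is an illustration added here, not part of the book: the helper `transfer_matrix` and the sample potential are our own choices. It approximates *q* by a piecewise-constant function and multiplies the exact propagators of each step; since the chosen *q* has zero mean, the leading correction terms in (23.10)–(23.11) vanish and *t*<sub>11</sub>, *t*<sub>12</sub> are close to their free values already for moderate *k*.

```python
import math

def transfer_matrix(q, lam, x1, x2, steps=4000):
    """Approximate T_q(lam): (psi(x1), psi'(x1)) -> (psi(x2), psi'(x2)).
    q is frozen at the midpoint of each small step, where the propagator
    of -psi'' + (q - lam) psi = 0 is known in closed form."""
    h = (x2 - x1) / steps
    T = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(steps):
        w2 = lam - q(x1 + (n + 0.5) * h)   # omega^2 = lambda - q at the midpoint
        if w2 > 0:
            w = math.sqrt(w2)
            P = [[math.cos(w*h), math.sin(w*h)/w],
                 [-w*math.sin(w*h), math.cos(w*h)]]
        elif w2 < 0:
            w = math.sqrt(-w2)
            P = [[math.cosh(w*h), math.sinh(w*h)/w],
                 [w*math.sinh(w*h), math.cosh(w*h)]]
        else:
            P = [[1.0, h], [0.0, 1.0]]
        T = [[P[0][0]*T[0][0] + P[0][1]*T[1][0], P[0][0]*T[0][1] + P[0][1]*T[1][1]],
             [P[1][0]*T[0][0] + P[1][1]*T[1][0], P[1][0]*T[0][1] + P[1][1]*T[1][1]]]
    return T

q = lambda x: 3.0 * math.cos(2 * math.pi * x)   # sample potential on [0, 1], zero mean
k = 40.0
T = transfer_matrix(q, k * k, 0.0, 1.0)
det = T[0][0]*T[1][1] - T[0][1]*T[1][0]
print(abs(det - 1.0) < 1e-6)          # det T_q = 1, cf. (5.7)
print(abs(T[0][0] - math.cos(k)) < 0.05)        # t11 ~ cos(l1 k)
print(abs(T[0][1] - math.sin(k)/k) < 0.01)      # t12 ~ sin(l1 k)/k
```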

We introduce the functions

$$
u\_{\pm}(k) = \frac{1}{2} \left( t\_{11}(k) \pm t\_{22}(k) \right),
$$

where *u*<sub>+</sub>*(k)* is called the **Lyapunov function** due to its importance for the periodic Schrödinger (Hill) operator: the equation

$$u\_{+}(k) = \pm 1$$

determines the endpoints of the bands of the continuous spectrum of the periodic operator. The asymptotic formulae for *u*<sub>±</sub> can be obtained from (5.6) (see [382] and [231, 232]):

$$u\_{+}(k) = \frac{1}{2} \left( t\_{11}(k) + t\_{22}(k) \right) = \cos \ell\_1 k + \frac{\sin \ell\_1 k}{2k} \int\_0^{\ell\_1} q(t)dt + \mathcal{O}(k^{-2});$$

$$u\_{-}(k) = \frac{1}{2} \left( t\_{11}(k) - t\_{22}(k) \right) = \int\_0^{\ell\_1} \frac{\sin(\ell\_1 - 2t)k}{2k} q(t)dt + \mathcal{O}(k^{-2}),\tag{23.12}$$

implying in particular that

$$ku\_{-}(k)\in L\_{2}(\mathbb{R}),\tag{23.13}$$

provided *q* ∈ *L*<sub>2</sub>*(*ℝ*).* This property will be crucial for the solution of the inverse problem.

To calculate the scalar function **M**<sup>loop</sup> from (23.6) we need the solution of equation (23.7) with the boundary values

$$
\psi(\mathbf{x}\_1) = \psi(V^1), \quad \psi(\mathbf{x}\_2) = e^{-i\Phi}\psi(V^1),
$$

as follows from (23.4). The normal derivatives for this solution are given by the edge M-function:

$$\begin{cases} \psi'(\mathbf{x}\_1) = M\_{11}\psi(\mathbf{x}\_1) + M\_{12}\psi(\mathbf{x}\_2) = \left(M\_{11} + e^{-i\Phi}M\_{12}\right)\psi(V^1), \\\\ -\psi'(\mathbf{x}\_2) = M\_{21}\psi(\mathbf{x}\_1) + M\_{22}\psi(\mathbf{x}\_2) = \left(M\_{21} + e^{-i\Phi}M\_{22}\right)\psi(V^1), \end{cases}$$

where *M<sub>ij</sub>* are the entries of the matrix **M**<sup>edge</sup>*(λ)*. Then the normal derivative *∂ψ(V*<sup>1</sup>*)* is

$$\partial \psi(V^1) = \psi'(\mathbf{x}\_1) - e^{i\Phi}\psi'(\mathbf{x}\_2) = \left(M\_{11} + e^{-i\Phi}M\_{12} + e^{i\Phi}M\_{21} + M\_{22}\right)\psi(V^1),$$

implying that

$$\begin{split} \mathbf{M}^{\text{loop}}(\lambda, \Phi) &= M\_{11} + 2 \cos \Phi M\_{12} + M\_{22} \\ &= \frac{2 \cos \Phi - \text{Tr} \, T(k)}{t\_{12}(k)}, \end{split} \tag{23.14}$$

where we have taken into account that *M*<sub>12</sub> = *M*<sub>21</sub>*.*

Knowing **M**<sup>loop</sup>*(λ, Φ)* for Φ = 0 and Φ = *π*, one may reconstruct the trace and the non-diagonal entry of the edge M-function,

$$\begin{aligned} M\_{12}(\lambda) & \quad (= M\_{21}(\lambda)) = \frac{1}{4} \left( \mathbf{M}^{\text{loop}}(\lambda, 0) - \mathbf{M}^{\text{loop}}(\lambda, \pi) \right), \\ \text{Tr}\, \mathbf{M}^{\text{edge}}(\lambda) &= M\_{11}(\lambda) + M\_{22}(\lambda) = \frac{1}{2} \left( \mathbf{M}^{\text{loop}}(\lambda, 0) + \mathbf{M}^{\text{loop}}(\lambda, \pi) \right). \end{aligned} \tag{23.15}$$

It is clear from formula (23.14) that considering values of the flux other than 0 and *π* will **not** help to determine further entries of **M**<sup>edge</sup>*(λ)*.

We conclude that, reconstructing the transfer matrix, we are able to determine only the entry *t*<sub>12</sub>*(k)* and the Lyapunov function *u*<sub>+</sub>*(k)*:

$$t\_{12}(k) = \frac{1}{M\_{12}(\lambda)} = \frac{4}{\mathbf{M}^{\mathrm{loop}}(\lambda, 0) - \mathbf{M}^{\mathrm{loop}}(\lambda, \pi)};$$

$$u\_{+}(k) := \frac{1}{2} (t\_{11}(k) + t\_{22}(k)) = -\frac{1}{2} \frac{\mathrm{Tr} \, \mathbf{M}^{\mathrm{edge}}(\lambda)}{M\_{12}(\lambda)} = -\frac{\mathbf{M}^{\mathrm{loop}}(\lambda, 0) + \mathbf{M}^{\mathrm{loop}}(\lambda, \pi)}{\mathbf{M}^{\mathrm{loop}}(\lambda, 0) - \mathbf{M}^{\mathrm{loop}}(\lambda, \pi)}. \tag{23.16}$$
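These formulae can be sanity-checked for the zero potential, where everything is explicit: *t*<sub>11</sub> = *t*<sub>22</sub> = cos ℓ<sub>1</sub>*k*, *t*<sub>12</sub> = sin ℓ<sub>1</sub>*k*/*k*, and **M**<sup>loop</sup> is given by (23.14). The Python sketch below is our own illustration (the numeric values are arbitrary); note that with the convention (23.8) one has Tr **M**<sup>edge</sup> = −Tr *T*/*t*<sub>12</sub>, which is the source of the minus sign.

```python
import math

l1, k = 1.3, 2.7                       # arbitrary loop length and real momentum; q = 0

# (23.14) with t11 = t22 = cos(l1 k), t12 = sin(l1 k)/k:
def M_loop(phi):
    return (2*math.cos(phi) - 2*math.cos(l1*k)) / (math.sin(l1*k)/k)

t12 = 4.0 / (M_loop(0.0) - M_loop(math.pi))                          # recovers t12(k)
u_plus = -(M_loop(0.0) + M_loop(math.pi)) / (M_loop(0.0) - M_loop(math.pi))
print(abs(t12 - math.sin(l1*k)/k) < 1e-12)       # matches sin(l1 k)/k
print(abs(u_plus - math.cos(l1*k)) < 1e-12)      # matches the Lyapunov function
```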

Knowledge of *t*<sub>12</sub>*(k)* and *u*<sub>+</sub>*(k)* in general is not enough to reconstruct the potential. We are going to show that all possible potentials can be parameterised by certain infinite sequences of signs.

## *23.1.2 Reconstructing Potential on the Loop*

The inverse problem for the loop was first discussed in [334] and our presentation here is inspired by that paper, but the new approach is much more straightforward and transparent. Formulae (23.16) reduce the solution of the inverse problem for the loop to the problem of reconstructing the whole transfer matrix *T<sub>q</sub>(k)* from the entry *t*<sub>12</sub>*(k)* and the Lyapunov function *u*<sub>+</sub>*(k)*. This problem was already solved by V.A. Marchenko and I.V. Ostrovskii [382] when all periodic potentials leading to a prescribed band spectrum of the periodic Schrödinger (Hill) operator were characterised. They showed that this family is uniquely parametrised by the spectrum of the Dirichlet-Dirichlet operator on one period and a certain sequence of signs *ν<sub>j</sub>* = ±1 associated with the Dirichlet-Dirichlet eigenvalues. In what follows we sketch how to solve the inverse problem. Note that the paper [382] also provides a characterisation of all possible spectral data, while we already assume that the functions *t*<sub>12</sub>*(k)* and *u*<sub>+</sub>*(k)* come from equation (23.7).

We consider the zeroes *k<sub>j</sub>*<sup>D</sup> of *t*<sub>12</sub>*(k)*, determining the Dirichlet spectrum *λ<sub>j</sub>*<sup>D</sup> = *(k<sub>j</sub>*<sup>D</sup>*)*<sup>2</sup> for the edge. The zeroes satisfy Weyl asymptotics, allowing us to determine the edge length ℓ<sub>1</sub>:

$$\lambda\_j^{\mathrm{D}} \sim \left(\frac{\pi}{\ell\_1}\right)^2 j^2 \quad \Rightarrow \quad k\_j^{\mathrm{D}} \sim \frac{\pi}{\ell\_1} j \quad \Rightarrow \quad \ell\_1 = \lim\_{j \to \infty} \frac{\pi j}{k\_j^{\mathrm{D}}}.\tag{23.17}$$
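The Weyl limit is robust: a toy Python computation (our own illustration, not from the book) shows that a bounded shift of the eigenvalues, mimicking the influence of a potential, does not change the recovered length.

```python
import math

# For q = 0 on an interval of length l1 the Dirichlet eigenvalues are (pi j / l1)^2.
# Shift them all by +5 (a stand-in for a bounded potential) and recover l1 via (23.17).
l1 = 0.83
k_D = [math.sqrt((math.pi * j / l1) ** 2 + 5.0) for j in range(1, 2001)]
l1_recovered = math.pi * 2000 / k_D[-1]
print(abs(l1_recovered - l1) < 1e-3)
```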

The transfer matrix has unit determinant, det *T<sub>q</sub>(k)* = 1 (*cf.* (5.7)), hence we get the following system of equations involving the values *t*<sub>11</sub>*(k<sub>j</sub>*<sup>D</sup>*), t*<sub>22</sub>*(k<sub>j</sub>*<sup>D</sup>*)*:

$$\begin{cases} t\_{11}(k\_j^{\rm D})t\_{22}(k\_j^{\rm D}) = 1, \\\\ t\_{11}(k\_j^{\rm D}) + t\_{22}(k\_j^{\rm D}) = 2u\_+(k\_j^{\rm D}). \end{cases} \tag{23.18}$$

The system leads to the quadratic equation

$$t + \frac{1}{t} = 2u\_+(k\_j^{\mathrm{D}}) \quad \Rightarrow \quad t^2 - 2u\_+(k\_j^{\mathrm{D}})t + 1 = 0,\tag{23.19}$$

which has two possible solutions:

$$\begin{cases} t\_{11}(k\_j^{\rm D}) = u\_+(k\_j^{\rm D}) + \nu\_j \sqrt{(u\_+(k\_j^{\rm D}))^2 - 1}, \\ t\_{22}(k\_j^{\rm D}) = u\_+(k\_j^{\rm D}) - \nu\_j \sqrt{(u\_+(k\_j^{\rm D}))^2 - 1}, \end{cases} \quad \nu\_j = \pm 1,\tag{23.20}$$

*i.e.* the values *t*<sub>11</sub>*(k<sub>j</sub>*<sup>D</sup>*), t*<sub>22</sub>*(k<sub>j</sub>*<sup>D</sup>*)* are determined by the sequence *ν<sub>j</sub>*.
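The two roots in (23.20) are elementary to verify; the short Python sketch below (ours, with an arbitrary sample value of the Lyapunov function) checks that either choice of the sign solves the system (23.18).

```python
import math

u = 1.8                          # a sample value of the Lyapunov function with |u| > 1
for nu in (+1, -1):              # the sign nu_j selects one of the two roots of (23.19)
    t11 = u + nu * math.sqrt(u * u - 1)
    t22 = u - nu * math.sqrt(u * u - 1)
    print(abs(t11 * t22 - 1.0) < 1e-12)       # first equation of (23.18)
    print(abs(t11 + t22 - 2 * u) < 1e-12)     # second equation of (23.18)
```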

We shall prove now that these values determine the functions *t*<sub>11</sub>*(k)* and *t*<sub>22</sub>*(k)*. Assume that two functions *t*<sub>11</sub>*(k)* and *t̃*<sub>11</sub>*(k)* of exponential type ℓ<sub>1</sub>, satisfying the first equation in (23.20) with the asymptotics given by (23.10), are found. Then their difference Δ*t*<sub>11</sub>*(k)* := *t̃*<sub>11</sub>*(k)* − *t*<sub>11</sub>*(k)* is again a function of exponential type at most ℓ<sub>1</sub>, with the asymptotics

$$
\Delta t\_{11}(k) = \mathcal{O}\left(\frac{e^{\ell\_1|\text{Im}\,k|}}{|k|}\right).
$$

and it is equal to zero at *k<sub>j</sub>*<sup>D</sup>*.* We already know one such function: *t*<sub>12</sub>*(k).* Consider the quotient Δ*t*<sub>11</sub>*(k)/t*<sub>12</sub>*(k)*: it is an entire uniformly bounded function and therefore a constant function.

We conclude that if *t̂*<sub>11</sub>*(k)* is one possible function having the prescribed values at *k<sub>j</sub>*<sup>D</sup>, then any other solution is given by

$$t\_{11}(k) = \hat{t}\_{11}(k) + \alpha t\_{12}(k), \quad \alpha \in \mathbb{R}.$$

Since the Lyapunov function *u*<sub>+</sub>*(k)* is known, the general solution for *t*<sub>22</sub>*(k)* is given by

$$t\_{22}(k) = \hat{t}\_{22}(k) - \alpha t\_{12}(k),$$

where *t̂*<sub>22</sub>*(k)* is a particular solution.

Then the function *u*<sub>−</sub> is given by

$$
u\_{-}(k) = \frac{1}{2} \left( \hat{t}\_{11}(k) - \hat{t}\_{22}(k) \right) + \alpha t\_{12}(k). \tag{23.21}
$$

For real *k* the function *t*<sub>12</sub>*(k)* has the asymptotics

$$
t\_{12}(k) = \frac{1}{k} \sin \ell\_1 k + \mathcal{O}(k^{-2}), \tag{23.22}
$$

hence

$$kt\_{12}(k) = \sin \ell\_1 k + \mathcal{O}(k^{-1})$$

does not belong to *L*<sub>2</sub>*(*ℝ*)*. Therefore there is a unique *α* such that *u*<sub>−</sub>*(k)* as given by (23.21) is square integrable and therefore satisfies (23.13).

We conclude that the entries *t*<sub>11</sub>*(k)* and *t*<sub>22</sub>*(k)* are uniquely determined by *t*<sub>12</sub>*(k)*, *u*<sub>+</sub>*(k)* and the sequence of signs *ν<sub>j</sub>*. We recover the entry *t*<sub>21</sub>*(k)* using that the determinant of the transfer matrix is equal to 1:

$$t\_{21}(k) = \frac{t\_{11}(k)t\_{22}(k) - 1}{t\_{12}(k)}.\tag{23.23}$$

Our studies can be summarised as the following theorem.

**Theorem 23.1** *Consider the loop graph* Γ<sub>(1.2)</sub> = Γ<sup>loop</sup><sub>ℓ<sub>1</sub></sub> *depicted in Fig. 23.1 and assume that the unique vertex V*<sup>1</sup> *is a contact vertex. Let L*<sup>st</sup><sub>*q,a*</sub>*(*Γ<sup>loop</sup><sub>ℓ<sub>1</sub></sub>*) be the standard Schrödinger operator determined by a fixed (electric) potential q* ∈ *C(*Γ<sub>(1.2)</sub>*) and varying magnetic potential a* ∈ *C(*Γ<sub>(1.2)</sub>*). Let* **M**<sup>loop</sup>*(λ, Φ) be the corresponding (scalar) M-function depending on the spectral parameter λ and the magnetic flux* Φ = ∫<sub>*x*<sub>1</sub></sub><sup>*x*<sub>2</sub></sup> *a(x)dx.*

**Fig. 23.1** The loop graph Γ<sub>(1.2)</sub>

*Then the spectral data consisting of* 


*determine the length of the edge and the potential q on it.*

*Proof* We have proven that *t*<sub>12</sub>*(k)*, *u*<sub>+</sub>*(k)* and the sequence *ν<sub>j</sub>* determine ℓ<sub>1</sub> and the transfer matrix *T<sub>q</sub>(k).* The transfer matrix determines the edge M-function (see (23.8)), which in turn allows us to determine the unique potential on the edge (see Theorem 20.11 in Sect. 20.3).

Reconstruction of the potential on the edge from the M-function or the transfer matrix can also be done using classical results due to Levitan-Gasymov [375] or Marchenko-Ostrovskii [382, 383]. In particular one may directly exploit the fact that the Dirichlet-Dirichlet and Dirichlet-Neumann spectra (*i.e.* the zeroes of *t*<sub>12</sub>*(k)* and *t*<sub>22</sub>*(k)*) determine the potential.

In the case where the potential is identically zero, there is no need to provide the sequence of signs to recover it. This is a very special case related to the Ambartsumian theorem discussed in detail in Chaps. 14 and 15.

The values *t*<sub>22</sub>*(k<sub>j</sub>*<sup>D</sup>*)* can be interpreted via possibly non-real fluxes, for which the eigenfunction *ψ<sub>n</sub>*<sup>D</sup> is invisible from the loop M-function. Indeed, putting

$$e^{i\Phi^\*} = t\_{22}(k\_j^{\mathrm{D}}),\tag{23.24}$$

we see that the eigenfunction is invisible if Φ = Φ<sup>∗</sup>. Depending on whether |*t*<sub>22</sub>*(k<sub>j</sub>*<sup>D</sup>*)*| is greater or less than 1, the corresponding Φ<sup>∗</sup> lies in the lower or in the upper half-plane. Thus we may interpret the sequence *ν<sub>j</sub>* as an indicator of the half-plane in which the flux determining the invisible eigenfunction is situated.

## **23.2 Inverse Problem for the Lasso**

The goal of this section is to illustrate how the MBC-method works for graphs with one loop and no other cycles. Looking at the lasso graph will allow us to clarify some connections and to develop our intuition further.

Consider the lasso graph Γ<sub>(2.2)</sub> = Γ<sup>lasso</sup><sub>ℓ<sub>1</sub>,ℓ<sub>2</sub></sub>, where ℓ<sub>1</sub> is the length of the loop and ℓ<sub>2</sub> is the length of the outgrowth (see Fig. 23.2). The contact set is formed by the degree one vertex *V*<sup>2</sup>*.* The inverse problem for the lasso graph is solved by reducing it to the inverse problem for the loop, since the outgrowth and the potential on it can be recovered using the BC-method for trees.

Let us discuss the dependence of the lasso M-function on the magnetic potential *a* on the two edges. Of course, the particular form of the potential plays no role and, removing the potentials on the loop and on the outgrowth, we see that the M-function depends on the flux of the magnetic potential through the loop, but is independent of the magnetic potential on the pendant: *M* = *M(λ, Φ)*, Φ = ∫<sub>*x*<sub>1</sub></sub><sup>*x*<sub>2</sub></sup> *a(y)dy.*

**Fig. 23.2** Lasso and loop graphs *(*2*.*2*)* and *(*1*.*2*)*

**Theorem 23.2** *Let* Γ<sub>(2.2)</sub> = Γ<sup>lasso</sup><sub>ℓ<sub>1</sub>,ℓ<sub>2</sub></sub> *be any compact lasso graph, as depicted in Fig. 23.2, with the contact vertex V*<sup>2</sup>*. Let L*<sup>st</sup><sub>*q,a*</sub>*(*Γ<sup>lasso</sup><sub>ℓ<sub>1</sub>,ℓ<sub>2</sub></sub>*) be the standard Schrödinger operator determined by a fixed (electric) potential q* ∈ *C(*Γ<sub>(2.2)</sub>*) and varying magnetic potential a* ∈ *C(*Γ<sub>(2.2)</sub>*). Let* **M**<sup>lasso</sup><sub>ℓ<sub>1</sub>,ℓ<sub>2</sub></sub>*(λ, Φ) be the corresponding M-function depending on the spectral parameter λ and the magnetic flux* Φ = ∫<sub>*x*<sub>1</sub></sub><sup>*x*<sub>2</sub></sup> *a(x)dx through the loop.*

*Then the spectral data consisting of* 


*determine the lasso graph and the potential q on it uniquely.*

*Proof* In solving the inverse problem for the lasso graph we are going to use two complementary ideas: reconstruction of the pendant edge via the BC-method and an explicit relation between the following three M-functions:

	- **M**lasso for the lasso,
	- **M**loop for the loop,
	- **M**pend for the pendant edge,

which allows one to recover any one of these M-functions if the other two are known. We may therefore reduce the inverse problem to the loop.

*Reconstruction of the Pendant Edge* The response operator for *T* < 2ℓ<sub>2</sub> + ℓ<sub>1</sub>/2 coincides with the response operator for the three-star graph as explained above. The kernel of this response operator contains the *δ* singularity delayed by 2ℓ<sub>2</sub> (see formula (19.59)), hence ℓ<sub>2</sub> is recovered. The response operator for *T* < 2ℓ<sub>2</sub> coincides with the response operator for the Schrödinger operator on [*x*<sub>3</sub>*, x*<sub>4</sub>] and therefore determines the potential *q* on the pendant edge.

*Reduction to the Loop* Let us establish an explicit formula connecting the three M-functions listed above. Let *ψ* be any solution to the Schrödinger equation on the lasso; then its values at the endpoints of [*x*<sub>3</sub>*, x*<sub>4</sub>] are related via

$$\begin{aligned} \mathbf{M}^{\text{lasso}} \psi(\mathbf{x}\_4) &= -\psi'(\mathbf{x}\_4); \\\\ \mathbf{M}^{\text{loop}} \psi(\mathbf{x}\_3) &= -\psi'(\mathbf{x}\_3); \\\\ \mathbf{M}^{\text{pend}} \begin{pmatrix} \psi(\mathbf{x}\_3) \\ \psi(\mathbf{x}\_4) \end{pmatrix} &= \begin{pmatrix} \psi'(\mathbf{x}\_3) \\ -\psi'(\mathbf{x}\_4) \end{pmatrix} \end{aligned}$$

To exclude *ψ(x*<sub>3</sub>*)* and *ψ′(x*<sub>3</sub>*)* we write the last equation explicitly,

$$\begin{cases} M\_{11}^{\text{pend}} \psi(\mathbf{x}\_3) + M\_{12}^{\text{pend}} \psi(\mathbf{x}\_4) = \psi'(\mathbf{x}\_3), \\\\ M\_{21}^{\text{pend}} \psi(\mathbf{x}\_3) + M\_{22}^{\text{pend}} \psi(\mathbf{x}\_4) = -\psi'(\mathbf{x}\_4). \end{cases}$$


to get

$$\mathbf{M}^{\text{lasso}}(\lambda,\Phi) = M\_{22}^{\text{pend}}(\lambda) - M\_{21}^{\text{pend}}(\lambda) \left( M\_{11}^{\text{pend}}(\lambda) + \mathbf{M}^{\text{loop}}(\lambda,\Phi) \right)^{-1} M\_{12}^{\text{pend}}(\lambda). \tag{23.25}$$

It follows that the M-function for the loop is determined by the M-function for the lasso. This reduction from the lasso to the loop is essentially the pruning procedure introduced in Sect. 20.5. It remains to use Theorem 23.1 to conclude that the length of the loop and potential *q* on it are uniquely determined. 
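Formula (23.25) can be checked numerically for the zero potential, where all three M-functions are explicit. In the Python sketch below (our own illustration; the lengths, momentum and flux are arbitrary sample values, and λ = *k*² is taken real, away from the singularities) the Schur complement (23.25) is compared with −*ψ*′(*x*₄)/*ψ*(*x*₄) computed from an explicit solution on the pendant that satisfies the matching condition with the loop.

```python
import math

# Sample data: zero potential, loop length l1, pendant length l2, flux phi.
l1, l2, k, phi = 1.0, 0.7, 1.9, 0.9

s = math.sin(k * l2) / k                      # t12 of the pendant (q = 0)
Mp = [[-math.cos(k * l2) / s, 1.0 / s],       # edge M-function of [x3, x4], cf. (23.8)
      [1.0 / s, -math.cos(k * l2) / s]]
# loop M-function, cf. (23.14), with t11 = t22 = cos(k l1), t12 = sin(k l1)/k:
Ml = (2 * math.cos(phi) - 2 * math.cos(k * l1)) / (math.sin(k * l1) / k)

# (23.25): M^lasso = M22 - M21 (M11 + M^loop)^{-1} M12
M_lasso = Mp[1][1] - Mp[1][0] * Mp[0][1] / (Mp[0][0] + Ml)

# Independent check: psi = cos(kx) + B sin(kx) on the pendant ([x3, x4] = [0, l2]),
# with the matching condition psi'(0) = -M^loop psi(0) at the loop vertex.
B = -Ml / k
psi4 = math.cos(k * l2) + B * math.sin(k * l2)
dpsi4 = -k * math.sin(k * l2) + B * k * math.cos(k * l2)
print(abs(M_lasso * psi4 + dpsi4) < 1e-9)     # M^lasso psi(x4) = -psi'(x4)
```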

The ideas developed above can be applied to arbitrary graphs with one loop. To perform the reduction to the loop, it is not important that the outgrowth is formed by a single edge: the same method works if a finite tree is attached. One may also attach any set of trees.

## **23.3 Inverse Problems for Graphs with One Cycle**

Let us turn to arbitrary graphs with one cycle, *i.e.* with $\beta_1 = 1$, excluding graphs with loops since the inverse problem for such graphs has already been solved in the preceding section. One might think that the inverse problem for such graphs is more sophisticated since slightly more complicated graph structures are allowed. In fact the situation is completely the opposite: the magnetic-flux-dependent M-function is in general sufficient to determine the potential, while in the case of loops, determination of the potential requires an additional sequence of signs. To simplify our presentation we assume that the vertex conditions at all vertices are standard.

The spectral data as usual consists of the magnetic flux-dependent M-function associated with the contact vertices which include all degree one vertices. The solution of the inverse problem can be divided into two steps:


**Step 1: Reduction to the Cycle** Assume that the graph with one cycle is given. Then it can be seen as a cycle formed by several edges with subtrees attached to the vertices on the cycle. The reconstruction procedure for trees developed in Chap. 20 is local, hence the M-function or the response operator associated with all degree one vertices allows us to reconstruct the attached subtrees and the potential on them. Pruning the attached trees we obtain the M-function for the cycle. This M-function is associated with all vertices on the cycle.

On the left in Fig. 23.3, we present a typical graph with one cycle. The red vertices indicate the minimal set of contact points.

Pruning all subtrees one obtains the cycle graph on the right. All vertices where the subtrees were attached now turn into contact vertices, again marked in red. Note that all these vertices have degree two. The dependence of **M** on the magnetic flux has not been used so far; this reflects the fact that the magnetic potential on each individual subtree can be eliminated.

**Step 2: Inverse Problem for the Cycle** Assume that the M-function for the cycle is known. To solve the inverse problem one may dismantle the cycle into the edges it is formed of. If the number of vertices is at least three, then the problem can be

**Fig. 23.3** Reduction to the cycle

solved without using the dependence of the M-function on the magnetic flux in the way that was done in Sect. 21.2. In the case of two contact vertices on the cycle, the edges are parallel and Theorem 21.7 cannot be applied: reconstruction of the potential even in the generic situation requires using the dependence of the spectral data on the magnetic flux. If the graph has a single vertex, then the cycle is a loop; this case is excluded.

The reconstruction procedure is generic and we shall always assume the following.

**Assumption 23.3** *The spectra of the Dirichlet operators on the edges forming the cycle have no common points.* 

The assumption is generically satisfied with respect to the edge lengths.

**Two Contact Points** Let us solve the inverse problem in the case where the cycle is formed by two edges $[x_1, x_2]$ and $[x_3, x_4]$ connected as in Fig. 23.4, forming two contact vertices $V^1 = \{x_1, x_4\}$ and $V^2 = \{x_2, x_3\}$.

Let us denote by **M**<sup>1</sup> and **M**<sup>2</sup> the M-functions for zero magnetic potential on the edges [*x*1*, x*2] and [*x*3*, x*4] respectively:

$$\begin{array}{l} \mathbf{M}^{1}(\lambda,0) = \begin{pmatrix} M\_{11}^{1}(\lambda) \ M\_{12}^{1}(\lambda) \\ M\_{21}^{1}(\lambda) \ M\_{22}^{1}(\lambda) \end{pmatrix}, \\\ \mathbf{M}^{2}(\lambda,0) = \begin{pmatrix} M\_{22}^{2}(\lambda) \ M\_{21}^{2}(\lambda) \\ M\_{12}^{2}(\lambda) \ M\_{11}^{2}(\lambda) \end{pmatrix}. \end{array} \tag{23.26}$$

It will be convenient to indicate the dependence of the M-functions on the integrals $\Phi^j$ of the magnetic potential along the edges. Note that we exchanged the indices in the matrix $\mathbf{M}^2$ since the edge $[x_3, x_4]$ is oriented opposite to the interval $[x_1, x_2]$.

We need a relation connecting the M-functions for zero and non-zero magnetic potential. Such a relation between the interval transfer matrices $T^j_{q,a}$ has already been derived in (5.12), namely

$$T\_{q,a}^{j} = e^{i\Phi^{j}} T\_{q,0}^{j},$$

with

$$
\Phi^j = \int\_{\mathfrak{X}\_{2j-1}}^{\mathfrak{X}\_{2j}} a(x) dx,
$$

being the integral of the magnetic potential along one of the edges. Denoting by $t^1_{ij}$ the entries of the transfer matrix for $[x_1, x_2]$ corresponding to zero magnetic potential, we repeat the calculations (5.50), yielding

$$\begin{cases} e^{i\Phi^1} \left( t^1_{11}\, g(x_1) + t^1_{12}\, \partial g(x_1) \right) = g(x_2), \\[2pt] e^{i\Phi^1} \left( t^1_{21}\, g(x_1) + t^1_{22}\, \partial g(x_1) \right) = \partial g(x_2), \end{cases} \;\Rightarrow\; \begin{cases} \partial g(x_1) = -\dfrac{t^1_{11}}{t^1_{12}}\, g(x_1) + e^{-i\Phi^1} \dfrac{1}{t^1_{12}}\, g(x_2), \\[6pt] -\partial g(x_2) = e^{i\Phi^1} \dfrac{1}{t^1_{12}}\, g(x_1) - \dfrac{t^1_{22}}{t^1_{12}}\, g(x_2), \end{cases}$$

to get

$$\mathbf{M}^1(\lambda, \Phi^1) = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Phi^1} \end{pmatrix} \mathbf{M}^1(\lambda, 0) \begin{pmatrix} 1 & 0 \\ 0 & e^{-i\Phi^1} \end{pmatrix}. \tag{23.27}$$

In the formula for $\mathbf{M}^2(\lambda, \Phi^2)$ the phase gains an extra minus sign as the interval $[x_3, x_4]$ is oriented from $V^2$ to $V^1$:

$$\mathbf{M}^2(\lambda, \Phi^2) = \begin{pmatrix} 1 & 0 \\ 0 & e^{-i\Phi^2} \end{pmatrix} \mathbf{M}^2(\lambda, 0) \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Phi^2} \end{pmatrix}. \tag{23.28}$$

Hence the cycle M-function is given by

$$\begin{aligned} \mathbf{M}(\lambda, \Phi) &= \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Phi^1} \end{pmatrix} \mathbf{M}^1 \begin{pmatrix} 1 & 0 \\ 0 & e^{-i\Phi^1} \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & e^{-i\Phi^2} \end{pmatrix} \mathbf{M}^2 \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Phi^2} \end{pmatrix} \\[4pt] &= \begin{pmatrix} M^1_{11} + M^2_{22} & M^1_{12} e^{-i\Phi^1} + M^2_{21} e^{i\Phi^2} \\ M^1_{21} e^{i\Phi^1} + M^2_{12} e^{-i\Phi^2} & M^1_{22} + M^2_{11} \end{pmatrix} \\[4pt] &= \begin{pmatrix} M^1_{11} + M^2_{22} & \left( M^1_{12} e^{-i\Phi} + M^2_{21} \right) e^{i\Phi^2} \\ \left( M^1_{21} e^{i\Phi} + M^2_{12} \right) e^{-i\Phi^2} & M^1_{22} + M^2_{11} \end{pmatrix} \\[4pt] &= \begin{pmatrix} -\dfrac{t^1_{11}}{t^1_{12}} - \dfrac{t^2_{22}}{t^2_{12}} & \left( \dfrac{1}{t^1_{12}} e^{-i\Phi} + \dfrac{1}{t^2_{12}} \right) e^{i\Phi^2} \\[8pt] \left( \dfrac{1}{t^1_{12}} e^{i\Phi} + \dfrac{1}{t^2_{12}} \right) e^{-i\Phi^2} & -\dfrac{t^1_{22}}{t^1_{12}} - \dfrac{t^2_{11}}{t^2_{12}} \end{pmatrix}, \end{aligned} \tag{23.29}$$

where $\Phi = \Phi^1 + \Phi^2$ is the total flux through the cycle and $t^1_{ij}$ and $t^2_{ij}$ are the entries of the transfer matrices associated with $[x_1, x_2]$ and $[x_3, x_4]$ respectively.

In general two Herglotz–Nevanlinna functions are not uniquely determined by their sum. Therefore, to recover $\mathbf{M}^1$ and $\mathbf{M}^2$ we are forced to use the dependence of $\mathbf{M}$ on the magnetic flux.

The non-diagonal entries of $\mathbf{M}^j(\lambda)$ may be reconstructed using the non-diagonal entries of $\mathbf{M}(\lambda, \Phi)$ for real $\lambda$ and $\Phi = 0, \pi$. In particular (23.29) implies that

$$\begin{cases} \left| M_{12}(\lambda, 0) \right| = \left| M^1_{12}(\lambda) + M^2_{21}(\lambda) \right| = \left| \dfrac{1}{t^1_{12}(k)} + \dfrac{1}{t^2_{12}(k)} \right|, \\[8pt] \dfrac{1}{4} \left( \left| M_{12}(\lambda, 0) \right|^2 - \left| M_{12}(\lambda, \pi) \right|^2 \right) = M^1_{12}(\lambda)\, M^2_{21}(\lambda) = \dfrac{1}{t^1_{12}(k)}\, \dfrac{1}{t^2_{12}(k)}. \end{cases} \tag{23.30}$$

It is clear that this system of equations is invariant under exchange of the edges. Moreover, the set of solutions is invariant under simultaneous multiplication of $t^1_{12}$ and $t^2_{12}$ by $-1$. In fact these transformations exhaust all possibilities, implying that (23.30) allows one to recover $t^1_{12}$ and $t^2_{12}$ up to multiplication by $-1$. The sign is easily fixed using the fact that $\mathbf{M}_{12}(\lambda, 0) = M^1_{12}(\lambda) + M^2_{21}(\lambda)$. Alternatively, one may remember that $t^j_{12}(k)$ possess the explicit asymptotics (23.11).
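The recovery step behind (23.30) is elementary: for real $\lambda$ the reciprocals $a = 1/t^1_{12}(k)$ and $b = 1/t^2_{12}(k)$ are real, their product and the modulus of their sum are known, so $\{a, b\}$ solves a quadratic equation up to the common sign. A numerical sketch for zero potential, where $t_{12}(k) = \sin k\ell / k$; the edge lengths and the value of $k$ are arbitrary choices of ours:

```python
import numpy as np

# Free (q = 0) edges: t12(k) = sin(k*l)/k, hence 1/t12 = k/sin(k*l).
l1, l2, k = 1.0, np.sqrt(2.0), 1.3
a = k / np.sin(k * l1)          # 1/t12 for the upper edge
b = k / np.sin(k * l2)          # 1/t12 for the lower edge

# Data read off the flux-dependent M-function, formula (23.30):
S = abs(a + b)                  # |M12(lambda, 0)|
P = a * b                       # (|M12(.,0)|^2 - |M12(.,pi)|^2)/4

# {a, b} and {-a, -b} are the two root pairs of z^2 -/+ S*z + P = 0,
# reflecting the simultaneous sign ambiguity discussed above:
pairs = [np.roots([1.0, -s, P]) for s in (S, -S)]
assert any(np.allclose(np.sort(p), np.sort([a, b])) for p in pairs)
```

One of the two sign choices reproduces the true pair, and the remaining ambiguity is exactly the simultaneous multiplication by $-1$.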

Consider the diagonal entry

$$\mathbf{M}_{11}(\lambda, 0) = -\frac{t^1_{11}}{t^1_{12}} - \frac{t^2_{22}}{t^2_{12}} = -\frac{t^1_{11}(k)\, t^2_{12}(k) + t^2_{22}(k)\, t^1_{12}(k)}{t^1_{12}(k)\, t^2_{12}(k)}.$$

The functions $t^j_{12}(k)$ are already determined, hence it remains to discuss how to recover $t^1_{11}(k)$ and $t^2_{22}(k)$ from the numerator

$$f(k) := t\_{11}^1(k)t\_{12}^2(k) + t\_{22}^2(k)t\_{12}^1(k).$$

Consider the Dirichlet–Dirichlet spectrum on the upper edge: $t^1_{12}(k^{\mathrm{D},1}_j) = 0$. The values of $t^1_{11}$ at these points are

$$t\_{11}^1(k\_j^{\mathbf{D},1}) = \frac{f(k\_j^{\mathbf{D},1})}{t\_{12}^2(k\_j^{\mathbf{D},1})},$$

where we use that $t^2_{12}(k^{\mathrm{D},1}_j) \neq 0$ due to Assumption 23.3. Let us discuss how to get the unique exponential type function $t^1_{11}(k)$. Assume that one such function $\hat{t}^1_{11}$ with the prescribed values $t^1_{11}(k^{\mathrm{D},1}_j)$ is obtained. Then any other solution is given by

$$
\hat{t}^1_{11}(k) + \alpha\, t^1_{12}(k), \quad \alpha \in \mathbb{R}.
$$

The constant *α* is unique, as follows from the asymptotics (23.11).
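For zero potential the interpolation step can be checked numerically: $t_{11}(k) = t_{22}(k) = \cos k\ell$ and $t_{12}(k) = \sin k\ell / k$, so the values forced on $t^1_{11}$ at the Dirichlet momenta of the upper edge must reproduce $\cos(j\pi) = (-1)^j$. A sketch under these assumptions (the lengths are our own sample values):

```python
import numpy as np

# Free transfer-matrix entries: t11 = t22 = cos(k*l), t12 = sin(k*l)/k.
l1, l2 = 1.0, np.sqrt(2.0)
t11 = lambda k, l: np.cos(k * l)
t12 = lambda k, l: np.sin(k * l) / k

def f(k):
    """Numerator of M11, known from the data (t22 = t11 since q = 0)."""
    return t11(k, l1) * t12(k, l2) + t11(k, l2) * t12(k, l1)

# Dirichlet momenta of the upper edge: t12(k, l1) = 0 at k_j = j*pi/l1.
for j in range(1, 6):
    kj = j * np.pi / l1
    recovered = f(kj) / t12(kj, l2)   # value forced on t11 of edge 1
    assert abs(recovered - t11(kj, l1)) < 1e-9
```

The incommensurability of $\ell_1$ and $\ell_2$ guarantees that the denominators $t^2_{12}(k^{\mathrm{D},1}_j)$ do not vanish, in accordance with Assumption 23.3.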

**Fig. 23.5** Cycle graph on three edges

All other entries of the transfer matrices associated with the upper and lower edges are reconstructed in the same way. We conclude that the potential on the edges is uniquely determined.

If Assumption 23.3 is not satisfied, then the potential on the edges may not be unique. Generically for every pair of common eigenvalues one has to provide one extra sign in order to determine the edge M-functions. This is very similar to the reconstruction of the edge M-function from the loop M-function, where an infinite sequence of signs was required to ensure uniqueness. This procedure is described in detail in [334].

**Three and More Contact Points** Assume first that the cycle has three contact points; it will become clear how to generalise the method for a larger number of contact points. Of course we make (the generically satisfied) Assumption 23.3.

The metric graph shown in Fig. 23.5 is formed by three edges $[x_1, x_2]$, $[x_3, x_4]$, and $[x_5, x_6]$ joined at the contact vertices $V^1 = \{x_1, x_6\}$, $V^2 = \{x_2, x_3\}$, and $V^3 = \{x_4, x_5\}$. The corresponding M-function for zero magnetic flux is

$$\mathbf{M}(\lambda, 0) = \begin{pmatrix} M^1_{11} + M^3_{22} & M^1_{12} & M^3_{21} \\ M^1_{21} & M^1_{22} + M^2_{11} & M^2_{12} \\ M^3_{12} & M^2_{21} & M^2_{22} + M^3_{11} \end{pmatrix}. \tag{23.31}$$

Under our assumption the singularities of the edge M-functions $\mathbf{M}^j(\lambda)$ do not coincide. Each singularity appears in precisely four entries of $\mathbf{M}(\lambda, 0)$: two diagonal and two off-diagonal. Hence every singularity of $\mathbf{M}$ can be identified as a Dirichlet eigenvalue for one of the edges by just examining the corresponding non-diagonal elements. Note that Dirichlet eigenfunctions on an interval cannot have normal derivatives equal to zero. Hence the edge M-functions are determined up to certain $2 \times 2$ block matrices $\mathbf{A}^j$ by the explicit formula

$$\mathbf{M}^{j}(\lambda) = \mathbf{A}^{j} + \sum\_{n=1}^{\infty} \frac{\lambda - \lambda^{\prime}}{(\lambda\_{n}^{\mathrm{D},j} - \lambda)(\lambda\_{n}^{\mathrm{D},j} - \lambda^{\prime})} P\_{\boldsymbol{\psi}\_{n}^{\mathrm{D},j}}, \quad j = 1, 2, 3,\tag{23.32}$$

with

$$P\_{\boldsymbol{\psi}\_{n}^{\rm D,j}} := \langle \partial \boldsymbol{\psi}\_{n}^{\rm D,j} |\_{\partial \Gamma}, \cdot \rangle\_{\mathbb{C}^{B}} \partial \boldsymbol{\psi}\_{n}^{\rm D,j} |\_{\partial \Gamma}, \tag{23.33}$$

subject to the condition

$$\mathbf{A}^1 + \mathbf{A}^2 + \mathbf{A}^3 = \mathbf{M}(\lambda').$$

Here $\psi^{\mathrm{D},j}_n$ denote the normalised eigenfunctions of the Dirichlet–Dirichlet operators on each of the three edges, extended by zero to the remaining two edges.

The matrices $\mathbf{A}^j$ are uniquely determined, taking into account the asymptotic formula (21.1). This completes the reconstruction of $\mathbf{M}^j(\lambda)$ and hence of the edge lengths and of the potential $q$.
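The identification of singularities can be mimicked numerically: at a Dirichlet eigenvalue of one edge the residue of $\mathbf{M}$ is a rank-one matrix supported on the two vertices of that edge, cf. (23.33). A toy sketch in which the normal-derivative values are hypothetical numbers of ours:

```python
import numpy as np

# Vertices supporting each edge in the 3x3 M-function (23.31), 0-indexed:
edge_vertices = {1: (0, 1), 2: (1, 2), 3: (2, 0)}

def residue_matrix(edge, d):
    """Rank-one residue of M at a Dirichlet eigenvalue of `edge`;
    d holds the normal derivatives of the eigenfunction at its endpoints."""
    R = np.zeros((3, 3))
    i, j = edge_vertices[edge]
    R[np.ix_([i, j], [i, j])] = np.outer([d[0], d[1]], [d[0], d[1]])
    return R

R = residue_matrix(2, (0.7, -1.2))   # hypothetical eigenvalue on edge 2
# Exactly four entries are non-zero; the two singular diagonal entries
# identify the edge carrying the eigenvalue:
singular = {m for m in range(3) if R[m, m] != 0}
assert singular == set(edge_vertices[2])
```

Scanning the diagonal of the residue in this way is precisely how each singularity of $\mathbf{M}$ is attributed to one of the three edges.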

Generalisation to the case $N \geq 4$ is straightforward. Let us summarise our studies as follows.

**Theorem 23.4** *Let $\Gamma$ be a finite metric graph with one cycle ($\beta_1 = 1$) formed by $N_{\mathrm{cycle}}$ edges, and let the non-empty contact set $\partial\Gamma$ contain all degree one vertices. Assume in addition the following generically satisfied assumption:*

*(a) the spectra of the Dirichlet–Dirichlet operators on the edges forming the cycle are disjoint.*

*Then the inverse problem is uniquely solvable.*


Having extra contact points just increases the size of $\mathbf{M}$, and one may always assume that $\partial\Gamma$ contains just all degree one vertices (excluding the case of the single loop). We see that it is easier to recover the potential if $N$ is large. This is not surprising since by increasing the number of contact points on a cycle, one obtains more information about the operator.

## **23.4 Dismantling Graphs II: Dependent Subtrees**

In solving the inverse problem by dismantling graphs into trees (see Sect. 21.2) the dependence on the magnetic fluxes is not used. On the other hand, as the example of the cycle graph on two edges shows, magnetic fields may help to solve the inverse

**Fig. 23.6** Cutting graphs: several contact points

problem when the subtrees **T***<sup>j</sup>* are dependent. The goal of this section is to pursue this observation further and prove unique solvability of the inverse problem in the case where the subtrees are not necessarily independent. We adopt notations from Sect. 23.3 (Fig. 23.6).

To simplify the description we are going to assume that the subtrees are not parallel:

**Definition 23.5** Two subtrees **T***<sup>j</sup>* and **T***<sup>i</sup>* are called **parallel** if either

$$
\partial \mathbf{T}_j \subset \partial \mathbf{T}_i \quad \text{or} \quad \partial \mathbf{T}_i \subset \partial \mathbf{T}_j. \tag{23.34}
$$

In other words, two subtrees are parallel if the boundary of one subtree is contained in the boundary of the other one. Assuming that subtrees are not parallel will simplify our presentation but is not necessary (see Problem 98 below).

The counterpart to Theorem 21.6 can be formulated as follows; note that assumption *(2)* is replaced by an assumption of a completely different nature.

**Theorem 23.6** *Let $L^{\mathrm{st}}_{q,a}(\Gamma)$ be a standard magnetic Schrödinger operator on a pendant-free metric graph $\Gamma$ with a selected non-empty contact set $\partial\Gamma$ that dismantles the graph into a set of trees $\{\mathbf{T}_j\}$, such that*


*Assume in addition the following generically satisfied assumption:* 

*(a) the spectra $\Sigma(\mathbf{T}_j) = \{\lambda^{\mathrm{D}}_n(\mathbf{T}_j)\}$ of the Schrödinger operators on $\mathbf{T}_j$, with Dirichlet conditions at the pendant vertices and with the vertex conditions at all internal vertices inherited from $L_{q,a}(\Gamma)$, are disjoint,*

$$
\lambda^{\mathrm{D}}_n(\mathbf{T}_j) \neq \lambda^{\mathrm{D}}_m(\mathbf{T}_i), \quad j \neq i. \tag{23.35}
$$

*Let us denote by $\Phi_i$ the magnetic fluxes associated with the loops in $\Gamma$. Then the M-function $\mathbf{M}(\lambda, \Phi_i)$, associated with the contact vertices and known for $\Phi_i = 0, \pi$, uniquely determines the metric graph, the potential $q$, and the conditions at the non-contact vertices.*

Note that the spectra of magnetic Schrödinger operators on the subtrees are independent of the magnetic potential. Moreover, as will be seen from the proof, it is enough to know the dependence of the M-function on the magnetic fluxes through just the cycles formed by pairs of subtrees, but we might not know *a priori* which fluxes correspond to such cycles.

*Proof* We are going to modify the proof of Theorem 21.6. As before all Dirichlet eigenvalues on the subtrees can be seen as singularities of the M-function for $\Gamma$, and in particular formula (21.7) holds.

Our first step is to identify the subsets $\partial\mathbf{T}_j \subset \partial\Gamma$. Lemma 21.3 implies that for each $V^m \in \partial\Gamma$, $\mathbf{M}(\lambda)$ determines the number of subtrees to which $V^m$ belongs. It will be convenient to view $\partial\Gamma$ as a multiset so that each contact vertex has multiplicity equal to its degree $d_m$.

With each $\lambda^{\mathrm{D}}_n(\Gamma)$ we associate the set $B_n \subset \partial\Gamma$ consisting of the contact vertices at which the corresponding eigenfunctions have non-zero derivatives:

$$B\_n := \{ V^m \in \partial \Gamma : \partial \psi\_n^D(V^m) \neq 0 \}. \tag{23.36}$$

These sets can be identified by checking the diagonal elements of $\mathbf{M}(\lambda)$ and selecting those which are singular at $\lambda^{\mathrm{D}}_n(\Gamma)$. Each $\partial\mathbf{T}_j$ coincides with the set $B_n$ corresponding to the ground state on $\mathbf{T}_j$, since for the ground state the derivatives of the eigenfunction are non-zero at all boundary points $\partial\mathbf{T}_j$ of the corresponding subtree (see Theorem 4.16).

It is clear that $B_1$ is the contact set for a certain subtree, which we will denote by $\partial\mathbf{T}_1$. Consider the set $B_n$ with the smallest index $n$ such that $B_n \not\subset B_1$; then we denote $\partial\mathbf{T}_2 = B_n$. Here we use that no two subtrees are parallel (assumption *(2)*). This process can be continued: assuming that the first few subsets $\partial\mathbf{T}_j$, $j \in J$, are identified, a new subset $\partial\mathbf{T}_i$, $i \notin J$, can be chosen equal to the set $B_n$ with the lowest index $n$ such that

$$B\_n \not\subset \partial \mathbf{T}\_j, \quad j \in J.$$

Assumption *(2)* guarantees that all subtree's boundaries are identified in this way. The process terminates when the sets

$$\bigcup\_{j \in J} \partial \mathbf{T}\_j \quad \text{and} \quad \partial \Gamma$$

coincide as multisets.

As before we have not only identified the contact sets $\partial\mathbf{T}_j$ for the subtrees, but also the ground state energy $\lambda^{\mathrm{D}}_1(\mathbf{T}_j)$ for every subtree. Let us repeat that the ground state eigenfunctions on each subtree have non-zero normal derivatives at all boundary points $\partial\mathbf{T}_j$. We proceed by separating the eigenvalues $\lambda^{\mathrm{D}}_n(\Gamma)$ into the subsets $\Sigma(\mathbf{T}_j) := \left\{ \lambda^{\mathrm{D}}_m(\mathbf{T}_j) \right\}_{m=1}^\infty$. If there is just one set $\partial\mathbf{T}_{j_0}$ which contains $B_n$, then the corresponding eigenvalue belongs to $\Sigma(\mathbf{T}_{j_0})$.

It remains to separate the eigenvalues corresponding to $B_n$ belonging to several sets $\partial\mathbf{T}_j$ simultaneously. The dependence of the M-function on the magnetic fluxes $\Phi_j$ will help us. Of course only fluxes for cycles passing through $B_n$ are relevant, but we do not assume we know *a priori* which fluxes are important. Lemma 21.1 implies that each set $B_n$ contains at least two vertices. Assume without loss of generality that the vertices $V^1$ and $V^2$ belong to $B_n$. We denote by $J_0$ the set of trees that may have eigenfunctions with non-zero normal derivatives on $B_n$:

$$j \in J\_0 \Leftrightarrow B\_n \subset \partial \mathbf{T}\_j. \tag{23.37}$$

Let us denote by $\mathbf{M}_{12}$ the non-diagonal entry of $\mathbf{M}$ associated with the vertices $V^1$ and $V^2$. This entry has singularities at $\lambda^{\mathrm{D}}_n(\Gamma)$ and at the ground state energies $\lambda^{\mathrm{D}}_1(\mathbf{T}_j)$, $j \in J_0$, of all potential candidates for the subtree. The residue coincides with the product of the normal derivatives of the eigenfunctions at $V^1$ and $V^2$. Its absolute value is non-zero and is independent of the magnetic fluxes $\Phi_i$, while the phase depends on some of the fluxes. For all eigenfunctions associated with the same subtree the dependence of the derivatives on the magnetic fluxes coincides, hence the corresponding residues depend on the fluxes in the same way. This allows one to identify with which particular subtree $\mathbf{T}_j$ the residue is associated by comparing it with the residues for the ground states on the subtrees. The unique $j_0$ such that $\lambda^{\mathrm{D}}_n(\Gamma) \in \Sigma(\mathbf{T}_{j_0})$ is selected by the following equality that holds for all magnetic fluxes $\Phi_i$:

$$\lim_{\lambda \to \lambda^{\mathrm{D}}_n(\Gamma)} \frac{\mathbf{M}_{12}(\lambda, \vec{\Phi})}{\mathbf{M}_{12}(\lambda, \vec{0})} = \lim_{\lambda \to \lambda^{\mathrm{D}}_1(\mathbf{T}_{j_0})} \frac{\mathbf{M}_{12}(\lambda, \vec{\Phi})}{\mathbf{M}_{12}(\lambda, \vec{0})}. \tag{23.38}$$

It is clear that this equality holds if $\lambda^{\mathrm{D}}_n(\Gamma) \in \Sigma(\mathbf{T}_{j_0})$, but we need to show that such $j_0$ is unique. Assume on the contrary that certain $j', j'' \in J_0$ satisfy the above equality for any $\vec{\Phi}$; then it follows that

$$\frac{\displaystyle\lim_{\lambda \to \lambda^{\mathrm{D}}_1(\mathbf{T}_{j'})} \frac{\mathbf{M}_{12}(\lambda, \vec{\Phi})}{\mathbf{M}_{12}(\lambda, \vec{0})}}{\displaystyle\lim_{\lambda \to \lambda^{\mathrm{D}}_1(\mathbf{T}_{j''})} \frac{\mathbf{M}_{12}(\lambda, \vec{\Phi})}{\mathbf{M}_{12}(\lambda, \vec{0})}} = e^{i\Phi_{i_0}},$$

where $\Phi_{i_0}$ is the flux through the cycle formed by $\mathbf{T}_{j'}$ and $\mathbf{T}_{j''}$ and containing $V^1$ and $V^2$. The quotient is different from one if for example $\Phi_{i_0} = \pi$.<sup>1</sup>

<sup>1</sup> The above reasoning does not imply that the derivatives of the Dirichlet eigenfunctions supported by any of the subtrees depend on the magnetic flux. It might happen that the magnetic potential is always identically zero on one of the subtrees, but this may not occur for two subtrees having at least two common vertices.

Repeating this procedure for each eigenvalue we end up with the division of $\{\lambda^{\mathrm{D}}_n(\Gamma)\}$ into the non-intersecting subsets $\Sigma(\mathbf{T}_j)$. Then the singular part of each $\mathbf{M}^j(\lambda)$ can be reconstructed as in (21.8). The constant matrices $\mathbf{A}^j$ appearing in the representation can be determined from the asymptotics using (21.10).
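The attribution test (23.38) can be illustrated with a toy model in which the residue of $\mathbf{M}_{12}$ near an eigenvalue of $\mathbf{T}_j$ acquires a subtree-specific phase pattern under the fluxes; the patterns $s_j$ and the constant $c$ below are hypothetical, chosen only to make the comparison concrete:

```python
import numpy as np

# Hypothetical flux-phase patterns: eigenfunctions on T_j pick up the
# phase exp(i * s_j . Phi) in their normal derivatives at V^1, V^2.
s = {1: np.array([1, 0]), 2: np.array([0, 1])}

def residue_ratio(j, Phi, c=0.8):
    """res M12(., Phi) / res M12(., 0) near an eigenvalue of T_j."""
    return (c * np.exp(1j * (s[j] @ Phi))) / c

Phi = np.array([np.pi, 0.0])
# An unattributed eigenvalue whose residue transforms like subtree 1:
unknown = residue_ratio(1, Phi)
ground = {j: residue_ratio(j, Phi) for j in (1, 2)}
# Comparison with the ground-state residues, as in (23.38):
attributed = [j for j in (1, 2) if np.isclose(unknown, ground[j])]
assert attributed == [1]
```

At $\Phi = (\pi, 0)$ the two candidate ratios differ by the factor $e^{i\Phi_{i_0}} = -1$, so the comparison singles out the correct subtree, mirroring the uniqueness argument above.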

The subtrees and the potentials on them are uniquely determined by $\mathbf{M}^j(\lambda)$, which therefore gives the complete solution of the inverse problem for the Schrödinger operator on $\Gamma$. Here we used that the vertex conditions at the internal vertices of $\mathbf{T}_j$ are standard. $\square$

Let us illustrate the proof with some informal diagrams. Assume that $\Gamma$ is dismantled into two subtrees $\mathbf{T}_1$ and $\mathbf{T}_2$. The M-function for $\Gamma$ is equal to the sum of the partial M-functions $\mathbf{M}^j(\lambda) \equiv \mathbf{M}_{\mathbf{T}_j}(\lambda)$:

$$\mathbf{M}_{\Gamma}(\lambda) = \mathbf{M}_1(\lambda) + \mathbf{M}_2(\lambda). \tag{23.39}$$

We denote by $B$ the set of common contact points of the two subtrees

$$B = \partial \mathbf{T}_1 \cap \partial \mathbf{T}_2.$$

Let us order the contact points of $\partial\Gamma$ so that all contact points from $\partial\mathbf{T}_1 \setminus B$ come first, followed by the points from $B$, and finally by the points from $\partial\mathbf{T}_2 \setminus B$. Using this ordering, formula (23.39) can be illustrated as in Fig. 23.7.

In Fig. 23.7 the entries of $\mathbf{M}_\Gamma$ which might be non-zero are marked by different shades of grey. Our goal is to reconstruct the singular parts of $\mathbf{M}^j$, $j = 1, 2$, from the singular part of $\mathbf{M}_\Gamma$. Consider any singular point $\lambda^{\mathrm{D}}_n(\Gamma)$. It is assumed that the spectra of $\mathbf{T}_1$ and $\mathbf{T}_2$ are disjoint, hence this particular $\lambda$ may be a singular point for just one of the partial M-functions. We have three possibilities, as illustrated in Fig. 23.8. We indicate only the entries which may be singular at the selected point. It is clear that in the first two cases the eigenvalue should be attributed to $\mathbf{T}_1$ and $\mathbf{T}_2$ respectively. In the third case it is not obvious whether the eigenvalue belongs to $\Sigma(\mathbf{T}_1)$ or $\Sigma(\mathbf{T}_2)$. This case does not occur if the subtrees are independent (or have just one common vertex) and was ignored in the proof of Theorem 21.6.

If the subtrees are allowed to be parallel, then we have to take into account that in accordance with Lemma 21.1 at least two normal derivatives are non-zero, and hence at least two diagonal entries of **M** are singular at each eigenvalue. In proving Theorem 23.6, the third case cannot be ignored, but the MBC-method helps

**Fig. 23.7** Structure of $\mathbf{M}_\Gamma(\lambda)$


**Fig. 23.8** Possible structure of the singularities in **M***(λ)*

to allocate the eigenvalue. Note that there is no need to use the MBC-method if each eigenfunction has non-zero normal derivatives outside *B*, but adding such an assumption to the formulation of the theorem appears cumbersome.

One may strengthen the result by weakening the assumptions of Theorem 23.6 as follows:

	- the ground state functions on $\mathbf{T}_j$ do not have zero normal derivatives at the contact points;
	- the inverse problems for the subtrees are uniquely solvable.

**Problem 98** Prove Theorem 23.6 dropping the assumption that no subtrees are parallel.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 24 Discrete Graphs**

The spectra of equilateral metric graphs are essentially determined by the spectra of the normalised or averaging Laplacian matrices associated with the corresponding discrete graphs. We are going to prove an explicit formula connecting these spectra, despite the fact that metric graphs have infinitely many eigenvalues while the spectra of Laplacian matrices are finite.

The second original goal was to check how the idea of topological perturbations (developed originally for metric graphs, see Sect. 12.5) works for discrete graphs, a well-established area of discrete mathematics. After the chapter was completed, we learned that this question for the normalised Laplacian matrices had been studied earlier by H. Urakawa and collaborators [285, 413].1 We follow our original presentation in order to make it possible to compare the two approaches.

# **24.1 Laplacian Matrices: Definitions and Elementary Properties**

Let *G* be a discrete graph with *M* vertices and *N* edges connecting some of the vertices. As before we are going to consider mostly finite graphs, *i.e.* graphs with a finite number of vertices and edges. Moreover, for simplicity we assume that no

<sup>1</sup> The author would like to thank Delio Mugnolo for discovering these references.

P. Kurasov, *Spectral Geometry of Graphs*, Operator Theory: Advances and Applications 293, https://doi.org/10.1007/978-3-662-67872-5\_24

loops and parallel edges are present. With such a graph one naturally associates the following matrices:

• the connectivity, or adjacency matrix *C* = {*cnm*}

$$c_{nm} = \begin{cases} 1, & \text{the vertices } n \text{ and } m \text{ are neighbours, i.e. connected by an edge;} \\ 0, & \text{otherwise,} \end{cases}$$

• the (diagonal) degree matrix *D* = diag {*d*1*, d*2*,...,dM*}*,* where *dm* are the degrees (valencies) of the corresponding vertices.

We shall be interested in so-called Laplacian matrices and their spectral properties. Laplacian matrices are certain generalisations of the (differential) Laplace operator to the case where the set of points is discrete. All these matrices are defined on the finite dimensional Hilbert space $\ell_2(G) = \mathbb{C}^M \ni \psi = (\psi(1), \psi(2), \dots, \psi(M))$, but using different formulas:

• **Combinatorial Laplacian** *L(G)* [152, 153, 391]

$$(L(G)\psi)\left(m\right) = \sum\_{n\sim m} \left(\psi\left(m\right) - \psi\left(n\right)\right),\tag{24.1}$$

where the sum is taken over all neighbouring vertices.2 This matrix can also be defined using the connectivity matrix *C* and the degree matrix *D*:

$$L(G) = D - C.$$
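The two descriptions of $L(G)$ can be checked on a small example; here we verify that $L = D - C$ agrees with the pointwise formula (24.1) on the cycle graph $C_4$ (the choice of test graph and test vector is ours):

```python
import numpy as np

# Cycle graph C4: vertices 0..3, edges (0,1), (1,2), (2,3), (3,0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
M = 4
C = np.zeros((M, M), dtype=int)
for n, m in edges:
    C[n, m] = C[m, n] = 1          # adjacency (connectivity) matrix
D = np.diag(C.sum(axis=1))         # degree matrix
L = D - C                          # combinatorial Laplacian

# Check against the pointwise definition (24.1):
psi = np.array([1.0, -2.0, 0.5, 3.0])
for m in range(M):
    assert np.isclose((L @ psi)[m],
                      sum(psi[m] - psi[n] for n in range(M) if C[m, n]))

# The Laplacian annihilates constant vectors, as the definition shows:
assert np.allclose(L @ np.ones(M), 0)
```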

• **normalised Laplacian** *LN (G)* [132, 146]

$$(L\_N(G)\psi)\left(m\right) = \psi\left(m\right) - \frac{1}{\sqrt{d\_m}} \sum\_{n\sim m} \frac{1}{\sqrt{d\_n}} \psi\left(n\right),\tag{24.2}$$

also given by

$$L\_N(G) = D^{-1/2}L(G)D^{-1/2} = I - D^{-1/2}CD^{-1/2}.\tag{24.3}$$

The normalised Laplacian is similar to another Laplacian matrix to be called **averaging Laplacian** 

$$(L\_A(G)\psi)\left(m\right) = \psi(m) - \frac{1}{d\_m} \sum\_{n \sim m} \psi\left(n\right),\tag{24.4}$$

<sup>2</sup> Writing $n \sim m$ we indicate that there is an edge between the vertices $n$ and $m$.

or in matrix form

$$L\_A = I - D^{-1}C.\tag{24.5}$$

The second term here gives the average value of $\psi$ over all vertices neighbouring $m$. It follows that any solution to the Laplace equation $L_A\psi = 0$ possesses the following property:

*The value of ψ(m) is equal to the average value of ψ over all neighbouring vertices:* 

$$L\_A \psi = 0 \Rightarrow \psi(m) = \frac{1}{d\_m} \sum\_{n \sim m} \psi(n). \tag{24.6}$$

This property is reminiscent of the Poisson formula for the differential Laplace operator. The averaging and normalised Laplacians are similar and therefore their spectra coincide:

$$L\_A(G) = D^{-1/2} L\_N(G) D^{1/2}.\tag{24.7}$$

Both the combinatorial and the normalised Laplacians are Hermitian, since they are given by (finite) real symmetric matrices. The averaging Laplacian has real spectrum (since $L_N$ is Hermitian), but the corresponding matrix in general is not Hermitian. The operator associated with the matrix $L_A$ is self-adjoint in the weighted Hilbert space $\ell^D_2(G) = \mathbb{C}^M$ with the scalar product given by

$$\langle \psi, \phi \rangle\_{\ell\_2^D(G)} = \langle D\psi, \phi \rangle\_{\mathbb{C}^M} = \sum\_{m=1}^M d\_m \psi\_m \overline{\phi\_m}. \tag{24.8}$$
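The similarity (24.7) and the resulting coincidence of spectra can be verified numerically, e.g. on the star graph with three edges (our choice of example):

```python
import numpy as np

# Star graph: centre 0 joined to vertices 1, 2, 3 (degrees 3, 1, 1, 1).
C = np.zeros((4, 4))
C[0, 1:] = C[1:, 0] = 1
D = np.diag(C.sum(axis=1))
Dh = np.diag(np.sqrt(np.diag(D)))              # D^{1/2}

LN = np.eye(4) - np.linalg.inv(Dh) @ C @ np.linalg.inv(Dh)   # (24.3)
LA = np.eye(4) - np.linalg.inv(D) @ C                        # (24.5)

# L_A is not symmetric, yet similar to the Hermitian L_N, cf. (24.7):
assert not np.allclose(LA, LA.T)
assert np.allclose(np.linalg.inv(Dh) @ LN @ Dh, LA)

# Hence the spectra coincide and are real:
assert np.allclose(np.sort(np.linalg.eigvals(LA).real),
                   np.sort(np.linalg.eigvalsh(LN)))
```

For this star graph both spectra consist of the eigenvalues $0, 1, 1, 2$, even though $L_A$ itself is not a Hermitian matrix.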

**Problem 99** For which graphs is the averaging Laplacian given by a Hermitian matrix in the original space $\ell_2(G) = \mathbb{C}^M$?

All three Laplacian matrices are generalisations of the second difference matrix. For example, consider the one-dimensional chain with subsequent vertices connected to each other. Then the degrees of all vertices are equal to 2 and all three Laplacians remind one of the discrete approximation of $-\frac{d^2}{dx^2}$:

$$\begin{aligned} \frac{1}{2}(L\psi)(m) &= (L_N\psi)(m) = (L_A\psi)(m) \\ &= \frac{1}{2}\left(\psi(m) - \psi(m+1) + \psi(m) - \psi(m-1)\right) \\ &= -\frac{\psi(m+1) - 2\psi(m) + \psi(m-1)}{2}. \end{aligned} \tag{24.9}$$
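The coincidence of the three Laplacians on a degree-two chain, as in (24.9), can be checked directly. A sketch on an assumed 5-cycle (every vertex has degree 2, so $L_N = L_A = \frac{1}{2}L$), with arbitrary test data:

```python
import math

# Assumed graph: a cycle with 5 vertices, so every degree equals 2.
M = 5
psi = [math.sin(2 * math.pi * m / M) + 0.3 * m for m in range(M)]

for m in range(M):
    left, right = (m - 1) % M, (m + 1) % M
    half_L = 0.5 * (2 * psi[m] - psi[left] - psi[right])       # (1/2)(L psi)(m)
    avg_L = psi[m] - 0.5 * (psi[left] + psi[right])            # (L_A psi)(m)
    second_diff = -(psi[right] - 2 * psi[m] + psi[left]) / 2   # -(second difference)/2
    assert abs(half_L - avg_L) < 1e-12
    assert abs(avg_L - second_diff) < 1e-12
```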

Different Laplacian matrices are widely used in applications. The study of their spectral properties is a well-developed branch of modern discrete mathematics. It is not our goal to give an overview of these results; we shall focus instead on the relation between spectral properties of discrete Laplacian matrices and standard Laplacians on metric graphs. We shall also show how our methods, originally developed for metric graphs, work in the discrete case.

All three Laplacian matrices are nonnegative as operators acting in the respective spaces. This can be seen from their quadratic forms

$$\begin{split} \langle L(G)\psi,\psi\rangle_{\ell_2(G)} &= \sum_{m=1}^{M} \left( \sum_{n\sim m} \left(\psi(m) - \psi(n)\right) \right) \overline{\psi(m)} \\ &= \frac{1}{2} \sum_{n,m:\, n\sim m} |\psi(m) - \psi(n)|^{2}, \\ \langle L_N(G)\psi,\psi\rangle_{\ell_2(G)} &= \frac{1}{2} \sum_{n,m:\, n\sim m} \left| \frac{1}{\sqrt{d_m}} \psi(m) - \frac{1}{\sqrt{d_n}} \psi(n) \right|^{2}, \\ \langle L_A(G)\psi,\psi\rangle_{\ell_2^D(G)} &= \langle DL_A(G)\psi,\psi\rangle_{\ell_2(G)} \\ &= \langle L_N(G)D^{1/2}\psi,D^{1/2}\psi\rangle_{\ell_2(G)} \\ &= \frac{1}{2} \sum_{n,m:\, n\sim m} |\psi(m) - \psi(n)|^{2}. \end{split} \tag{24.10}$$

**Problem 100** Prove that the quadratic forms for all three Laplacian matrices are given by formulas (24.10).
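A numerical spot check of the quadratic form identities (24.10) is easy to set up; it does not replace the proof requested in Problem 100, but it catches sign and index errors. The sketch below uses an assumed graph, a triangle with one pendant edge; note that the sum over ordered adjacent pairs with the factor $\frac{1}{2}$ equals a single sum over edges.

```python
import math

# Assumed graph: a triangle {0,1,2} with a pendant vertex 3 attached to 2.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
M = 4
d = [0] * M
for a, b in edges:
    d[a] += 1
    d[b] += 1

psi = [1.0, -0.5, 2.0, 0.25]

def L_apply(v):                          # combinatorial Laplacian (24.1)
    out = [d[m] * v[m] for m in range(M)]
    for a, b in edges:
        out[a] -= v[b]
        out[b] -= v[a]
    return out

def LN_apply(v):                         # normalised Laplacian (24.2)
    out = list(v)
    for a, b in edges:
        out[a] -= v[b] / math.sqrt(d[a] * d[b])
        out[b] -= v[a] / math.sqrt(d[a] * d[b])
    return out

# <L psi, psi> equals the sum of |psi(a) - psi(b)|^2 over edges
lhs = sum(L_apply(psi)[m] * psi[m] for m in range(M))
rhs = sum((psi[a] - psi[b]) ** 2 for a, b in edges)
assert abs(lhs - rhs) < 1e-12

# the analogous identity for the normalised Laplacian
lhs_N = sum(LN_apply(psi)[m] * psi[m] for m in range(M))
rhs_N = sum((psi[a] / math.sqrt(d[a]) - psi[b] / math.sqrt(d[b])) ** 2
            for a, b in edges)
assert abs(lhs_N - rhs_N) < 1e-12
```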

**Problem 101** The quadratic forms of *L(G)* and *LA(G)* are given by the same expressions. Is it possible to conclude that these operators are isospectral? Explain the reason and provide explicit examples to support your conclusion.

## **24.2 Topology and Spectra: Discrete Graphs**

In this section we derive a few elementary properties of Laplacian matrices connected to topological characteristics of discrete graphs such as the number of connected components and Euler characteristic.

**The Number of Connected Components** We see that all three operators have *μ*<sup>1</sup> = 0 as an eigenvalue. For the standard and averaging Laplacians the corresponding eigenfunction is equal to a constant on each connected component of *G*. For the normalised Laplacian the zero-energy eigenfunction should be modified as

$$
\psi\_1(m) = \sqrt{d\_m}c,\tag{24.11}
$$

where *c* is a constant which can be chosen different for each connected component. In fact no other eigenfunctions corresponding to zero eigenvalue are present (see Lemma 24.2 below). Therefore we conclude that the number *β*<sup>0</sup> of connected components in *G* is equal to the multiplicity of the ground state *μ*<sup>1</sup> = 0

$$
\beta\_0(G) = m(0),
\tag{24.12}
$$

where $m(0)$ is the multiplicity of the eigenvalue $\mu_1 = 0$. In particular, if the graph $G$ is connected, then the ground state has multiplicity 1.
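The kernel description (24.11)-(24.12) can be illustrated on an assumed disconnected graph, a triangle plus a separate edge: the vector $\psi(m) = \sqrt{d_m}\,c$, with an independent constant $c$ on each component, is annihilated by $L_N$.

```python
import math

# Assumed disconnected graph: a triangle {0,1,2} plus a separate edge {3,4}.
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]
M = 5
d = [0] * M
for a, b in edges:
    d[a] += 1
    d[b] += 1

def LN_apply(v):                         # normalised Laplacian (24.2)
    out = list(v)
    for a, b in edges:
        out[a] -= v[b] / math.sqrt(d[a] * d[b])
        out[b] -= v[a] / math.sqrt(d[a] * d[b])
    return out

# (24.11): psi(m) = sqrt(d_m) * c with a different constant per component;
# this 2-component graph therefore has a 2-dimensional kernel.
c = [1.0, 1.0, 1.0, -2.5, -2.5]
psi = [math.sqrt(d[m]) * c[m] for m in range(M)]
assert all(abs(v) < 1e-12 for v in LN_apply(psi))
```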

In what follows we are going to consider connected graphs only, since all results can easily be reformulated to include not necessarily connected graphs. For connected graphs it is then natural to denote the eigenvalues $\mu_j(L(G))$ of the Laplacian matrices as follows:

$$0 = \mu\_1 < \mu\_2 \le \mu\_3 \le \cdots \le \mu\_M. \tag{24.13}$$

**The Volume** The volume of a discrete graph $G$ is just the number $M$ of vertices. If we know the spectrum of any of the Laplacian matrices, then the volume of $G$ is equal to the number of eigenvalues counted with multiplicities:

$$\#\{\mu\_j\} = M.\tag{24.14}$$

**Euler Characteristic** The trace of a Hermitian matrix is equal to the sum of its eigenvalues. Consider the combinatorial Laplacian matrix and calculate its trace in two different ways: using the eigenvalues and summing the diagonal elements:<sup>3</sup>

$$\sum\_{j=1}^{M} \mu\_j(L(G)) = \text{Tr} \, L(G) = \text{Tr} \, D = d\_1 + d\_2 + \dots + d\_M = 2N.$$

This formula allows one to calculate the number of edges from the spectrum leading to the formula for the Euler characteristic

$$\chi(G) = \#\{\mu\_m(L(G))\} - \frac{1}{2} \sum \mu\_m(L(G)). \tag{24.15}$$
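The trace identity and formula (24.15) can be checked on an assumed example, the star $K_{1,3}$, whose combinatorial Laplacian spectrum $0, 1, 1, 4$ is known (for $K_{1,n}$ the spectrum is $0$, $1$ with multiplicity $n-1$, and $n+1$):

```python
# Assumed example: the star K_{1,3} with M = 4 vertices and N = 3 edges.
edges = [(0, 1), (0, 2), (0, 3)]
M, N = 4, len(edges)
d = [0] * M
for a, b in edges:
    d[a] += 1
    d[b] += 1

assert sum(d) == 2 * N                       # Tr L(G) = Tr D = 2N

# Known spectrum of L(K_{1,n}): 0, 1 (n-1 times), n + 1; here 0, 1, 1, 4.
mus = [0, 1, 1, 4]
assert sum(mus) == 2 * N                     # the trace via the eigenvalues
assert len(mus) - sum(mus) / 2 == M - N      # Euler characteristic (24.15)
```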

This result cannot be generalised for normalised Laplacians, since it is not hard to provide examples of graphs, which are isospectral with respect to *LN* but have different Euler characteristic [114–116, 146]. Consider for example the following two graphs: three-star and four-cycle, shown in Fig. 24.1.

<sup>3</sup> Remember that we assumed that *<sup>G</sup>* has no loops and therefore Tr*(C)* <sup>=</sup> <sup>0</sup>*.*

Both graphs have 4 vertices and the corresponding normalised Laplacians are given by

$$L\_N^1 = \begin{pmatrix} 1 & -1/\sqrt{3} & -1/\sqrt{3} & -1/\sqrt{3} \\ -1/\sqrt{3} & 1 & 0 & 0 \\ -1/\sqrt{3} & 0 & 1 & 0 \\ -1/\sqrt{3} & 0 & 0 & 1 \end{pmatrix},$$

$$L\_N^2 = \begin{pmatrix} 1 & -1/2 & 0 & -1/2 \\ -1/2 & 1 & -1/2 & 0 \\ 0 & -1/2 & 1 & -1/2 \\ -1/2 & 0 & -1/2 & 1 \end{pmatrix}.\tag{24.16}$$

The characteristic polynomials coincide and are given by

$$p(\lambda) = \det(L\_N^j - \lambda) = (1 - \lambda)^4 - (1 - \lambda)^2 \tag{24.17}$$

showing that both normalised Laplacians have eigenvalues 0*,* 1*,* 1*,* 2*.* Obviously the two graphs have different Euler characteristic. Note that the corresponding eigenfunctions are different

$$\begin{aligned} \psi\_1^1 &= \begin{pmatrix} 1 \\ 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix}, \psi\_2^1 = \begin{pmatrix} 1 \\ -1/\sqrt{3} \\ -1/\sqrt{3} \\ -1/\sqrt{3} \end{pmatrix}, \psi\_3^1 = \begin{pmatrix} 0 \\ 1 \\ -1 \\ 0 \end{pmatrix}, \psi\_4^1 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}; \\\ \psi\_1^2 &= \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \qquad \psi\_2^2 = \begin{pmatrix} 1 \\ 1 \\ -1 \\ -1 \end{pmatrix}, \quad \psi\_3^2 = \begin{pmatrix} 1 \\ -1 \\ -1 \\ 1 \end{pmatrix}, \quad \psi\_4^2 = \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}. \end{aligned} \tag{24.18}$$
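The isospectrality of the pair (24.16) and the eigenvectors (24.18) can be verified directly with a few matrix-vector products; the sketch below checks each listed vector against the corresponding matrix and compares the resulting eigenvalue multisets.

```python
import math

s = 1 / math.sqrt(3)
LN1 = [[1, -s, -s, -s],        # 3-star, (24.16)
       [-s, 1, 0, 0],
       [-s, 0, 1, 0],
       [-s, 0, 0, 1]]
LN2 = [[1, -0.5, 0, -0.5],     # 4-cycle, (24.16)
       [-0.5, 1, -0.5, 0],
       [0, -0.5, 1, -0.5],
       [-0.5, 0, -0.5, 1]]

def eigenvalue(A, v):
    """Check that v is an eigenvector of A and return its eigenvalue."""
    Av = [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]
    i0 = max(range(4), key=lambda i: abs(v[i]))
    mu = Av[i0] / v[i0]
    assert all(abs(Av[i] - mu * v[i]) < 1e-12 for i in range(4))
    return mu

vecs1 = [[1, s, s, s], [1, -s, -s, -s], [0, 1, -1, 0], [0, 1, 0, -1]]
vecs2 = [[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, -1, 1], [1, -1, 1, -1]]

spec1 = sorted(eigenvalue(LN1, v) for v in vecs1)
spec2 = sorted(eigenvalue(LN2, v) for v in vecs2)
for spec in (spec1, spec2):
    assert all(abs(a - b) < 1e-12 for a, b in zip(spec, [0, 1, 1, 2]))
```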

**Problem 102** Construct your own pair of discrete graphs with isospectral combinatorial Laplacians.

## **24.3 Normalised Laplacians and Equilateral Metric Graphs**

We are going to discuss the relations between the eigenvalues of normalised Laplacians $L_N(G)$ on discrete graphs $G$ and the spectrum of the standard Laplacians $L^{\rm st}(\Gamma)$ on metric graphs $\Gamma$. Recall that the normalised and averaging Laplacian matrices are isospectral, hence all our results apply to averaging Laplacians as well. Moreover, both matrices will be used in the proofs.

In order to be able to compare these spectra, we need a one-to-one correspondence between discrete graphs $G$ and metric graphs $\Gamma$. With any finite discrete graph $G$ we may associate a unique metric graph $\Gamma$ by assigning unit length to all edges in $G$. This rule establishes a one-to-one correspondence between discrete and equilateral metric graphs and will be used throughout this chapter. If the metric graph $\Gamma$ does not have degree two vertices, then the corresponding discrete graph $G$ is the same as the discrete graph used in Chap. 6 to get secular polynomials.

Consider any discrete graph $G$ with $M$ vertices $1, 2, \dots, M$ and the corresponding normalised Laplacian $L_N(G)$. The quadratic form is nonnegative as can be seen from formula (24.10). The quadratic form of $L_N - 2$ is nonpositive:

$$\begin{split} \langle (L_N(G) - 2) \psi, \psi \rangle_{\ell_2(G)} &= \frac{1}{2} \sum_{n \sim m} \left| \frac{1}{\sqrt{d_m}} \psi(m) - \frac{1}{\sqrt{d_n}} \psi(n) \right|^2 - 2 \sum_m |\psi(m)|^2 \\ &= -\frac{1}{2} \sum_{n \sim m} \left| \frac{1}{\sqrt{d_m}} \psi(m) + \frac{1}{\sqrt{d_n}} \psi(n) \right|^2. \end{split} \tag{24.19}$$

It follows that the eigenvalues of the normalised Laplacian always lie between 0 and 2. Let us denote the eigenvalues of $L_N$ by $\mu_j(L_N)$, ordering them following (24.13):

$$0 = \mu_1(L_N) \le \mu_2(L_N) \le \dots \le \mu_M(L_N) \le 2.$$

The number *μ* = 0 is always an eigenvalue, while *μM* can be less than 2*.* The eigenvalues *μ* = 0 and *μ* = 2 will be called *extremal*.

The spectrum of the metric graph $\Gamma$ is discrete, tending to $+\infty$. It is easy to see that the spectrum is $2\pi$-periodic if one uses the variable $k$, $k^2 = \lambda$, instead of $\lambda$ and ignores that the multiplicities of $\lambda = 0$ and $\lambda = (2\pi)^2$ could be different. Moreover, the spectrum in the $k$-scale is symmetric with respect to the origin, hence it is enough to study the eigenvalues with $k$ between $0$ and $\pi$. We shall use our standard convention to denote the eigenvalues of the standard Laplacian

$$0 = \lambda\_1 \le \lambda\_2 \le \dots \le \lambda\_n \le \dots \tag{24.20}$$

It turns out that while the correspondence between the eigenvalues $\mu \ne 0, 2$ and $\lambda \ne (m\pi)^2$, $m \in \mathbb{Z}$, can be described by an explicit formula, the correspondence between the extremal eigenvalues is slightly more involved. This is related to the fact that $(m\pi)^2$ are the eigenvalues of the Dirichlet Laplacian on the unit interval. Therefore extremal and all other (to be called generic) eigenvalues will be considered separately.

**Generic Eigenvalues** Let us discuss the relation between the eigenvalues $\mu_n \ne 0, 2$ and $\lambda_j \ne \pi^2 m^2$, $m \in \mathbb{Z}$, first.

**Theorem 24.1** *Assume that $\mu_n$ are the eigenvalues of the normalised Laplacian $L_N$ on a discrete graph $G$ and $\lambda_j$ are the eigenvalues of the standard Laplacian $L^{\rm st}(\Gamma)$ on the corresponding equilateral metric graph $\Gamma$ with the common edge length one. Then $\lambda_j \ne \pi^2 m^2$, $m \in \mathbb{Z}$, is an eigenvalue of $L^{\rm st}(\Gamma)$ if and only if*

$$1 - \cos\sqrt{\lambda\_j} = \mu\_n,\tag{24.21}$$

*for a certain $\mu_n \ne 0, 2$ from the spectrum of $L_N(G)$. Moreover, the multiplicities of the eigenvalues coincide.*

*Proof* It is much easier to prove the theorem using the averaging Laplacian $L_A$ instead of the normalised Laplacian $L_N$. These two matrices are isospectral due to (24.7). Consider any eigenvalue $\mu$ and any corresponding eigenvector $\psi$,

$$L\_A \psi = \mu \psi,\tag{24.22}$$

or in more details

$$
\psi(m) - \frac{1}{d_m} \sum_{l \sim m} \psi(l) = \mu \,\psi(m), \ m = 1, 2, \dots, M. \tag{24.23}
$$

Let us construct an eigenfunction $\tilde\psi(x)$ of $L^{\rm st}(\Gamma)$ corresponding to a certain positive eigenvalue $\lambda = k^2$ so that it attains the same values at the vertices as $\psi$:

$$
\tilde{\psi}(V^m) = \psi(m), \ m = 1, 2, \dots, M. \tag{24.24}
$$

The eigenvalue $\lambda$ is not determined yet; we shall see in a few steps which values are possible.

Consider any edge in $\Gamma$, say connecting the vertices $V^{m_1}$ and $V^{m_2}$. Then the unique function satisfying the differential equation

$$-\tilde{\psi}^{\prime\prime}(\mathbf{x}) = k^2 \tilde{\psi}(\mathbf{x})$$

and having values at the endpoints prescribed by (24.24) is given by

$$\begin{split} \tilde{\psi}(\mathbf{x}) &= \frac{\psi(m\_1) - \cos k \,\psi(m\_2)}{\sin^2 k} \cos \left( k \,\text{dist}\{\mathbf{x}, \, V^{m\_1}\} \right) \\ &+ \frac{\psi(m\_2) - \cos k \,\psi(m\_1)}{\sin^2 k} \cos \left( k \,\text{dist}\{\mathbf{x}, \, V^{m\_2}\} \right). \end{split} \tag{24.25}$$

The normal derivative of $\tilde\psi$ at the endpoints can be calculated as well. For example we have:

$$
\partial\_n \tilde{\psi}(V^{m\_1}) = \frac{k}{\sin k} \left( -\cos k \,\,\psi(m\_1) + \psi(m\_2) \right). \tag{24.26}
$$

Note that the last formula may also be obtained directly from (5.55).

We repeat this procedure for every edge in the metric graph $\Gamma$. The function $\tilde\psi$ obtained in this way is continuous at the vertices by construction and satisfies the same differential equation on each edge. Hence in order to check that it is really an eigenfunction of $L^{\rm st}(\Gamma)$ it remains to show that the sum of the normal derivatives at each vertex is zero:

$$\sum\_{\alpha\_l \in V^m} \partial\_n \tilde{\psi} (\alpha\_l) = 0.$$

Using (24.26) this equation can be written as

$$\sum\_{l \sim m} \frac{k}{\sin k} \left( \cos k \,\,\psi(m) - \psi(l) \right) = 0 \tag{24.27}$$
 
$$\Leftrightarrow d\_m \cos k \,\,\psi(m) = \sum\_{\ell \sim m} \psi(\ell).$$

It is easy to transform the equation into the following form

$$
\psi(m) - \frac{1}{d\_m} \sum\_{l \sim m} \psi(l) = (1 - \cos k)\psi(m),
\tag{24.28}
$$

which is precisely the eigenfunction equation for *ψ*, provided *λ* and *μ* satisfy (24.21). Of course if *μ* is fixed, then Eq. (24.21) possesses infinitely many solutions *k.*

The constructed mapping

$$
\psi \leftrightarrow \tilde{\psi}
$$

is one-to-one, provided *λ* is fixed: the functions *ψ* and *ψ*˜ have the same values at the vertices and *ψ*˜ on each edge is uniquely determined by its values at the endpoints. This implies that the multiplicities of the two eigenvalues connected via (24.21) coincide.

Formula (24.21) is often referred to as the **von Below** formula as it appeared for the first time in [488]. One may find different generalisations of this formula for equilateral metric graphs in [418, 422].

Formula (24.21) implies that every eigenvalue $\mu_n$ of $L_A(G)$ determines an infinite series of eigenvalues of $L^{\rm st}(\Gamma)$, since Eq. (24.21) has infinitely many solutions with respect to $\lambda$: if $\lambda_j$ is a solution, then any $\left(\pm\sqrt{\lambda_j} + 2\pi m\right)^2$, $m \in \mathbb{Z}$, is also a solution. One may use the variable $k$ instead of $\lambda = k^2$ to describe the spectrum. The observation above means that the set of $k \notin \pi\mathbb{Z}$ in the spectrum of $L^{\rm st}(\Gamma)$ is periodic and symmetric with respect to the origin. In the interval $(0, 2\pi)$ the eigenvalues are symmetric with respect to the middle point $k = \pi$. The number of eigenvalues of $L^{\rm st}(\Gamma)$ in the interval $(0, \pi)$ is equal to the number of eigenvalues of $L_N(G)$ different from 0 and 2.
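The series of $k$-values generated by a single generic eigenvalue $\mu$ through (24.21) can be listed explicitly. A sketch for the assumed value $\mu = 1$ (which occurs, e.g., for the graphs of Fig. 24.1): the principal solution is $k_0 = \arccos(1 - \mu)$, and all solutions are $\pm k_0 + 2\pi m$.

```python
import math

mu = 1.0                               # an assumed generic eigenvalue, mu != 0, 2
k0 = math.acos(1 - mu)                 # principal solution of 1 - cos k = mu

# all positive solutions k = |±k0 + 2*pi*m| up to three periods
ks = sorted({round(abs(s * k0 + 2 * math.pi * m), 12)
             for s in (1, -1) for m in range(3)})
for k in ks:
    assert abs((1 - math.cos(k)) - mu) < 1e-9    # every k solves (24.21)

# inside one period (0, 2*pi) the two solutions are symmetric about k = pi
assert abs(ks[0] + ks[1] - 2 * math.pi) < 1e-9
```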

**Extremal Eigenvalues** It remains to study the case where $\mu_n = 0, 2$ and $\lambda_j = \pi^2 n^2$, $n \in \mathbb{Z}$.

We are going to prove four lemmas describing these extremal cases.


The following Lemma determines the multiplicity of the zero eigenvalue for both normalised and standard Laplacians.

**Lemma 24.2** *The point zero is an eigenvalue for both the normalised Laplacian LN (G) and the standard Laplacian L*st*(). The multiplicity of the eigenvalue is equal to the number of connected components in the graph.* 

*Proof* It is clear from the construction that the numbers of connected components in the discrete graph $G$ and in the corresponding metric graph $\Gamma$ are equal.

Consider first the quadratic form of $L_N(G)$ given by (24.10). If $G$ is connected, then the vector $\psi_1(m) = \sqrt{d_m}\,c$ is an eigenvector corresponding to $\mu_1 = 0$. This vector is unique (up to multiplication by a constant, of course). For non-connected graphs the multiplicity is equal to the number of connected components, since the constant $c$ can be chosen differently on each connected component.

For the standard Laplacian we repeat the proof of Lemma 4.10. The quadratic form is given by formula (3.55), which simplifies as follows:

$$\langle L^{\rm st}(\Gamma)u, u\rangle_{L_2(\Gamma)} = \sum_{n=1}^{N} \int_{E_n} |u'(x)|^2 dx. \tag{24.29}$$

The constant function $\psi_1(x) = c$ is an eigenfunction corresponding to $\lambda_1 = 0$. Every function that minimises the quadratic form is a constant function on every edge. Standard matching conditions imply that the function is equal to a constant on every connected component of $\Gamma$. It follows that the multiplicity of the zero eigenvalue is equal to the number of connected components.

This Lemma implies in particular that the multiplicities of the zero eigenvalue for the normalised Laplacian and standard Laplacian coincide.

**Lemma 24.3** *Let G be a connected discrete graph. Then the point μ* = 2 *is an eigenvalue of the normalised Laplacian if and only if the graph G is bipartite.*<sup>4</sup>

**Problem 103** Prove Lemma 24.3.

If the graph *G* is not connected, then *μ* = 2 is an eigenvalue of the averaging Laplace matrix with the multiplicity equal to the number of bipartite components. Here multiplicity zero means that *μ* = 2 is not an eigenvalue.
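For a bipartite graph the eigenvector with $\mu = 2$ can be written down explicitly: $\psi(m) = \pm\sqrt{d_m}$ with the sign alternating between the two vertex classes (the counterpart of (24.11)). A sketch on an assumed bipartite example, the path $0\text{-}1\text{-}2$:

```python
import math

# Assumed bipartite graph: the path 0-1-2 with classes {0, 2} and {1}.
edges = [(0, 1), (1, 2)]
M = 3
d = [1, 2, 1]
sign = [1, -1, 1]                       # +1 on one class, -1 on the other
psi = [sign[m] * math.sqrt(d[m]) for m in range(M)]

def LN_apply(v):                        # normalised Laplacian (24.2)
    out = list(v)
    for a, b in edges:
        out[a] -= v[b] / math.sqrt(d[a] * d[b])
        out[b] -= v[a] / math.sqrt(d[a] * d[b])
    return out

# psi is an eigenvector with the extremal eigenvalue mu = 2
assert all(abs(LN_apply(psi)[m] - 2 * psi[m]) < 1e-12 for m in range(M))
```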

**Lemma 24.4** *Let the equilateral metric graph $\Gamma$ with the common edge length one be connected. Then the points $\lambda = (1+2m)^2\pi^2$, $m \in \mathbb{Z}$, are eigenvalues of the standard Laplace operator on $\Gamma$ with the multiplicities equal to*<sup>5</sup>

$$\begin{cases} \beta_1 + 1, & \text{if } G \text{ is bipartite}; \\ \beta_1 - 1, & \text{otherwise}. \end{cases}$$

*Here $\beta_1$ is the number of independent cycles either on the metric graph $\Gamma$, or on the discrete graph $G$.*

*Proof* Let us prove the lemma for $k = \pi$; then any solution to the eigenfunction equation on each edge is given by

$$
\psi(x) = \alpha \cos \pi x + \beta \sin \pi x.
$$

It follows that the values of the function at the two endpoints of every edge have the same absolute value but opposite signs:

$$
\tilde{\psi}(x_{2j-1}) = -\tilde{\psi}(x_{2j}).
$$

Let us denote by $a$ the common absolute value of the function at all vertices:

$$a := |\tilde{\psi}(x_j)|.$$

<sup>4</sup> We repeat that a graph *G* is called bipartite if the vertices can be divided into two classes, so that the edges connect only vertices from different classes.

<sup>5</sup> Note that if *<sup>β</sup>*<sup>1</sup> <sup>=</sup> 0, then the graph is a tree, and therefore it is bipartite. Hence the multiplicities determined by the Lemma are never negative.

An eigenfunction with $a \ne 0$ exists if and only if the graph is bipartite. Such an eigenfunction resembles the eigenfunction of the averaging Laplacian for $\mu = 2$ (see Lemma 24.3). Let us denote the corresponding (unique up to a multiplier) eigenfunction by $\tilde\psi^0$.

If $a = 0$, then the eigenfunctions are given by $\beta_j \sin \pi(x - x_{2j-1})$ on each interval. Such an eigenfunction exists if and only if one is able to combine the sine functions on the edges so that the sum of derivatives at each vertex is equal to zero (a certain balance condition is fulfilled).

Assume now that the graph is bipartite; then all cycles in it have even length and it is easy to construct eigenfunctions supported on such cycles using multiples of $\sin \pi(x - x_{2j-1})$. Every graph $G$ can be turned into a tree $\mathbf{T}$ by cutting away certain $\beta_1$ edges. Let us denote these edges by $\Delta_i$, $i = 1, 2, \dots, \beta_1$, and by $\tilde\psi^i$ the eigenfunctions supported on the (shortest) cycles contained in $\mathbf{T} \cup \Delta_i$. We have constructed $\beta_1 + 1$ eigenfunctions corresponding to $k = \pi$. It follows that the multiplicity of the corresponding eigenvalue is at least $\beta_1 + 1$.

Let $\tilde\psi$ be any eigenfunction corresponding to $\lambda = \pi^2$. Consider then the function

$$
\hat{\psi}(x) = \tilde{\psi}(x) - f_0 \tilde{\psi}^0(x) - \sum_{i=1}^{\beta_1} f_i \tilde{\psi}^i(x),
$$

where the constants $f_0, f_i$ are chosen so that the function $\hat\psi$ is equal to zero at all vertices and on all edges $\Delta_i$, $i = 1, 2, \dots, \beta_1$. Then the function $\hat\psi$ is supported on the tree $\mathbf{T}$ and therefore it is identically equal to zero. To show this, let us look at the pendant edges in $\mathbf{T}$. Obviously the function $\hat\psi$ is equal to zero on every such edge. One may take away this edge from the tree $\mathbf{T}$ and repeat the argument. Since the tree is finite we conclude that $\hat\psi$ is identically zero.

It remains to study the case where the graph $G$ is not bipartite and the eigenfunction is equal to zero at all vertices. Consider again the edges $\Delta_i$ and the (shortest) cycles on $\mathbf{T} \cup \Delta_i$ introduced above. Among such cycles there exists at least one cycle of odd length, since otherwise the graph would be bipartite. Without loss of generality we assume that this cycle corresponds to $\Delta_{\beta_1}$. For cycles of even length there exists an eigenfunction supported on the cycle. For cycles corresponding to $\Delta_i$ of odd length, there exists an eigenfunction supported on $\mathbf{T} \cup \Delta_i \cup \Delta_{\beta_1}$. Hence we have constructed $\beta_1 - 1$ linearly independent eigenfunctions, to be denoted by $\tilde\psi^i$.

To prove that the multiplicity of the eigenvalue is really equal to $\beta_1 - 1$, consider an arbitrary eigenfunction $\tilde\psi$ and the function $\hat\psi$ given by

$$
\hat{\psi}(x) = \tilde{\psi} - \sum_{i=1}^{\beta_1 - 1} f_i \tilde{\psi}^{i}(x),
$$

where as before the parameters $f_i$ are adjusted so that the function $\hat\psi$ is not only equal to zero at all vertices but on all edges $\Delta_i$, $i = 1, 2, \dots, \beta_1 - 1$, as well. Then the function $\hat\psi$ is an eigenfunction supported on $\mathbf{T} \cup \Delta_{\beta_1}$. This graph contains only one cycle and this cycle has odd length. As before, the function $\hat\psi$ is equal to zero on all pendant edges. Repeating the argument we deduce that $\hat\psi$ is supported by the unique cycle in $\mathbf{T} \cup \Delta_{\beta_1}$, but this cycle has odd length and therefore $\hat\psi \equiv 0$. It follows that the multiplicity of the eigenvalue is $\beta_1 - 1$ in this case.

The proof for $k = (1 + 2m)\pi$, $m \ne 0$, is almost identical.

The Lemma can easily be generalised for the case of not connected graphs by repeating the argument for every connected component.

**Lemma 24.5** *The points $\lambda = 4\pi^2 m^2$, $m = 1, 2, \dots$, are eigenvalues of $L^{\rm st}(\Gamma)$ with multiplicities $\beta_1 + 1$.*

*Proof* We assume first that $k = 2\pi$. Then it is easy to construct $\beta_1 + 1$ linearly independent eigenfunctions corresponding to this particular $k$, where $\beta_1$ is the genus of $\Gamma$. Let us denote by $\tilde\psi^0(x, k)$ the eigenfunction given by

$$
\tilde{\psi}^0(\mathbf{x}, k) = \cos k(\mathbf{x} - \mathbf{x}\_{2j-1}), \ \mathbf{x} \in [\mathbf{x}\_{2j-1}, \mathbf{x}\_{2j}].
$$

This function satisfies the differential equation on each edge, is continuous at all vertices (in fact it is equal to 1 at all vertices) and all normal derivatives are equal to zero, which implies that their sums at each vertex are also zero and the standard vertex conditions are satisfied.

If $\Gamma$ is not a tree, then it can be transformed into a tree $\mathbf{T}$ by removing exactly $\beta_1$ edges, denoted without loss of generality by $\Delta_1, \Delta_2, \dots, \Delta_{\beta_1}$. Let $C_i$ be the (shortest) cycle on $\mathbf{T} \cup \Delta_i$ passing $\Delta_i$ in the positive direction. Note that every such cycle comes across exactly one removed edge. Consider the functions $\tilde\psi^i$ defined by

$$
\tilde{\psi}^i(x, k) = \begin{cases}
\pm \sin k(x - x_{2j-1}), & \text{provided } x \in \Delta_j \subset C_i; \\
0, & \text{otherwise};
\end{cases}
$$

where the sign depends on whether the path $C_i$ runs along $\Delta_j$ in the positive $(+)$ or in the negative $(-)$ direction. The function $\tilde\psi^i$ is not only continuous along the path $C_i$ but its first derivative is continuous as well.

Each function $\tilde\psi^i$ satisfies the eigenfunction equation, is continuous at all vertices (in fact equal to zero there) and the sum of normal derivatives at each vertex is zero (if the vertex is on the path $C_i$, then only two normal derivatives are different from zero but they cancel each other; if the vertex is not on the path, then all normal derivatives are zero).

It is clear that the functions $\tilde\psi^0, \tilde\psi^1, \dots, \tilde\psi^{\beta_1}$ are linearly independent, which implies that the multiplicity of the eigenvalue $\lambda = k^2$ is not less than $1 + \beta_1$.

Repeating the arguments used in the proof of Lemma 24.4, one shows that the multiplicity does not exceed $1 + \beta_1$. The proof for $\lambda = (2\pi m)^2$, $m = 2, 3, \dots$, follows the same lines.

Our studies allow us to characterise the spectrum of any equilateral metric graph.

**Theorem 24.6** *Let $L^{\rm st}(\Gamma)$ be the standard Laplace operator on a connected compact equilateral metric graph $\Gamma$ obtained from the discrete graph $G$ by assigning unit length to each edge. Then the spectrum $\{\lambda_n = k_n^2\}_{n=1}^{\infty}$ of the standard Laplacian $L^{\rm st}(\Gamma)$ has the following properties:*


*(4) The eigenvalues $\lambda = (2\pi m)^2$, $m = 1, 2, \dots$, have multiplicity*

$$
\beta_1 + 1 = 2 - \chi = N - M + 2.
$$

*(5) The number of eigenvalues inside each interval* 

$$(\left(m\pi\right)^2, \left((m+1)\pi\right)^2), \ m = 0, 1, \dots$$

*counted with multiplicities is equal to* 

$$\begin{cases} M - 2, & \text{if } G \text{ is bipartite}; \\ M - 1, & \text{otherwise}. \end{cases}$$

*(6) The eigenvalues* 

$$\lambda = \left(\pi(2m+1)\right)^2, \ m = 0, 1, \ldots$$

*have multiplicity* 

$$\begin{cases} \beta_1 + 1 = 2 - \chi = N - M + 2, & \text{if } G \text{ is bipartite}; \\ \beta_1 - 1 = -\chi = N - M, & \text{otherwise}. \end{cases} \tag{24.31}$$

*Proof* All statements of the Theorem are straightforward corollaries of Theorem 24.1 and Lemmas 24.2–24.5.


(5) By Lemmas 24.2 and 24.3, the number of eigenvalues of $L_N(G)$ lying strictly between 0 and 2 depends on whether $G$ is bipartite ($M-2$) or not ($M-1$). Then formula (24.21) implies that the standard Laplacian has the same number of eigenvalues inside each interval $\left((m\pi)^2, ((m+1)\pi)^2\right)$, $m = 0, 1, \dots$

(6) Lemma 24.4 describes the multiplicities of the eigenvalues $(\pi(2m+1))^2$.

Note that the theorem can be proved without any use of formula (24.21); this approach was developed in [333].
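The counting in Theorem 24.6 can be sanity-checked on an assumed example, the equilateral 4-cycle, for which the $k$-spectrum of the metric graph is known independently: a metric cycle of total length 4 has $k = 2\pi n/4$, simple for $n = 0$ and double for $n \ge 1$ (clockwise and anticlockwise waves).

```python
import math

# Assumed example: the equilateral 4-cycle, M = N = 4, bipartite.
M = N = 4
beta1 = N - M + 1                        # first Betti number of a connected graph

def mult(k):
    """Multiplicity of k in the spectrum of the metric 4-cycle."""
    if abs(k) < 1e-12:
        return 1
    r = k / (math.pi / 2)                # eigenvalues sit at multiples of pi/2
    return 2 if abs(r - round(r)) < 1e-12 else 0

# Theorem 24.6, bipartite case:
assert mult(math.pi / 2) == M - 2        # generic eigenvalues inside (0, pi)
assert mult(math.pi) == beta1 + 1        # k = (2m + 1)*pi
assert mult(2 * math.pi) == beta1 + 1    # k = 2*pi*m
```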

We are going to use these results to prove that an inequality between the standard and Dirichlet eigenvalues holds true for all equilateral bipartite graphs and for the majority of the eigenvalues if the equilateral graph is not bipartite.

**Equilateral Graphs and Dirichlet Eigenvalues** Consider the Laplace operator defined on the functions satisfying Dirichlet conditions at all vertices. This operator is equal to the orthogonal sum of Dirichlet Laplacians on $N$ independent intervals. Its spectrum will be denoted by $\lambda_m^D(\Gamma)$.

Let us discuss whether the inequality

$$
\lambda\_{n+1}^{\text{st}}(\Gamma) \le \lambda\_n^D(\Gamma), \tag{24.32}
$$

holds true, where $\lambda_n^{\rm st}(\Gamma)$ are the eigenvalues of the standard Laplacian on $\Gamma$.

**Lemma 24.7** *Let $\Gamma$ be an equilateral graph with $N$ edges of length one; then the spectrum of $L^D(\Gamma)$ is given by the eigenvalues $(\pi m)^2$, $m = 1, 2, \dots$, each with multiplicity $N$.*

The proof is straightforward since $L^D(\Gamma)$ is given as the orthogonal sum of $N$ Dirichlet Laplace operators on the intervals $[0, 1]$. Each such Laplacian has the spectrum $(\pi m)^2$, $m = 1, 2, \dots$

So the first $2N$ Dirichlet eigenvalues are as follows:

$$\begin{aligned} \lambda_1^D = \lambda_2^D = \dots = \lambda_N^D = \pi^2; \\ \lambda_{N+1}^D = \lambda_{N+2}^D = \dots = \lambda_{2N}^D = (2\pi)^2. \end{aligned} \tag{24.33}$$

If the graph is not bipartite, then the first 2*N* + 1 eigenvalues of the standard Laplacian satisfy the following inequalities (Theorem 24.6):

$$\begin{aligned} 0 = \lambda_1^{\rm st} < \lambda_2^{\rm st} \le \cdots \le \lambda_M^{\rm st} < \lambda_{M+1}^{\rm st} = \cdots = \lambda_N^{\rm st} = \pi^2; \\ \pi^2 < \lambda_{N+1}^{\rm st} \le \cdots \le \lambda_{N+M-1}^{\rm st} < \lambda_{N+M}^{\rm st} = \cdots = \lambda_{2N+1}^{\rm st} = (2\pi)^2. \end{aligned} \tag{24.34}$$

It follows that inequality (24.32) is satisfied for all *n* = 1*,* 2*,...,N* − 1*, N* + 1*,...,* 2*N.* Moreover, if *n* = *N*, then the inequality is violated.

Considering higher eigenvalues, we see that the structure repeats and inequality is violated only for *n* = *(*2*m* + 1*)N.* For all other eigenvalues the inequality holds.

Assume now that the graph *G* is bipartite, then the first 2*N* + 1 eigenvalues of the standard Laplacian satisfy:

$$\begin{aligned} 0 = \lambda_1^{\rm st} < \lambda_2^{\rm st} \le \cdots \le \lambda_{M-1}^{\rm st} < \lambda_M^{\rm st} = \cdots = \lambda_{N+1}^{\rm st} = \pi^2; \\ \pi^2 < \lambda_{N+2}^{\rm st} \le \cdots \le \lambda_{N+M-1}^{\rm st} < \lambda_{N+M}^{\rm st} = \cdots = \lambda_{2N+1}^{\rm st} = (2\pi)^2. \end{aligned} \tag{24.35}$$

We see that (24.32) holds for any $n = 1, 2, \dots, 2N$ and hence for any $n \in \mathbb{N}$. We have just proven the following theorem, which first appeared in [349].

**Theorem 24.8** *Let $\Gamma$ be a connected equilateral metric graph with $N$ edges. Then inequality* (24.32) *between the eigenvalues of the Dirichlet and standard Laplacians holds for any $n$ if and only if the corresponding discrete graph $G$ is bipartite. If $G$ is not bipartite, then* (24.32) *holds for any $n \ne (2m+1)N$, $m = 0, 1, \dots$; moreover,* (24.32) *is violated for $n = (2m+1)N$, $m = 0, 1, \dots$*

The theorem implies that for non-bipartite $G$ the portion of eigenvalues for which the inequality is violated is $1/(2N)$. Moreover, it is easy to see that the inequality turns into equality for $\lambda = (m\pi)^2$, $m = 1, 2, \dots$, only. Approximately $2(N - M)$ eigenvalues out of $2N$ lead to equality in (24.32) ($2(N - M) + 1$ if $G$ is not bipartite and $2(N - M) + 4$ if $G$ is bipartite).
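For a bipartite example, the inequality (24.32) can be checked numerically. The sketch below assumes the equilateral 4-cycle, assembling the Dirichlet spectrum from Lemma 24.7 and the standard spectrum from the known spectrum of a metric cycle of length 4.

```python
import math

# Assumed example: the equilateral 4-cycle (bipartite, N = 4 unit edges).
N = 4
# Dirichlet spectrum (Lemma 24.7): (pi*m)^2 with multiplicity N
lam_D = sorted((math.pi * m) ** 2 for m in range(1, 3) for _ in range(N))
# Standard spectrum of the metric cycle of length 4: 0 is simple and
# (pi*n/2)^2, n >= 1, are double (clockwise and anticlockwise waves)
lam_st = [0.0] + sorted((math.pi * n / 2) ** 2
                        for n in range(1, 5) for _ in range(2))

# inequality (24.32), with the 1-based index n shifted to 0-based lists
for n in range(2 * N):
    assert lam_st[n + 1] <= lam_D[n] + 1e-12
```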

## **24.4 Isospectrality of Normalised and Standard Laplacians**

Let us discuss the relation between the isospectrality of standard Laplacians on equilateral graphs and the isospectrality of normalised Laplacians on the corresponding discrete graphs. It is clear that the standard Laplacians are isospectral only if the generic eigenvalues of the normalised Laplacians coincide. On the other hand, isospectrality of standard Laplacians requires that the graphs have the same Euler characteristics, while normalised Laplacians could be isospectral on graphs with different numbers of cycles.

**Theorem 24.9** *Let $\Gamma_j$ and $G_j$, $j = 1, 2$, be certain equilateral connected metric graphs and their discrete counterparts, respectively. The standard Laplacians on $\Gamma_1$ and $\Gamma_2$ are isospectral if and only if the normalised Laplacians on $G_1$ and $G_2$ are isospectral and the Euler characteristics of $G_1$ and $G_2$ (equal to those of $\Gamma_1$ and $\Gamma_2$) are equal.*

*Proof* Assume that the standard Laplacians on $\Gamma_1$ and $\Gamma_2$ are isospectral. Theorem 24.1 states that the generic eigenvalues of $L_N(G_1)$ and $L_N(G_2)$ coincide. It remains to check the extremal eigenvalues of the normalised Laplacians. Theorem 24.6 implies that the Euler characteristics are equal, hence looking at the multiplicity of the eigenvalue $\lambda = ((2m+1)\pi)^2$ it is possible to determine whether the discrete graph is bipartite or not: multiplicity $\beta_1 + 1 = 2 - \chi$ implies that $G$ is bipartite, while multiplicity $\beta_1 - 1 = -\chi$ implies that $G$ is not bipartite. In the first case we have $\mu_M(L_N(G)) = 2$; in the second case $\mu_M(L_N(G)) < 2$ is generic. The multiplicities of $\lambda = 0$ are given by the number of connected components and are equal as well. We conclude that $L_N(G_1)$ and $L_N(G_2)$ are isospectral.

Assume now that the two normalised Laplacians are isospectral and the graphs have the same Euler characteristics. As before we conclude that all generic eigenvalues are equal and focus on the extremal eigenvalues. The Euler characteristic determines the multiplicity of $\lambda = (2m\pi)^2$. If $\mu = 2$ is an eigenvalue of the normalised Laplacians, then the graphs are bipartite and the multiplicity of $\lambda = ((2m+1)\pi)^2$ is $\beta\_1 + 1 = -\chi + 2$; otherwise the multiplicity is $\beta\_1 - 1 = -\chi$. The multiplicities of $\lambda = 0$ are determined by the number of connected components and are equal again. It follows that the standard Laplacians are isospectral.
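The correspondence between metric and discrete spectra can be illustrated numerically. The following Python sketch (not from the book; it assumes `numpy` is available) checks, for the equilateral cycle with six unit edges, that the eigenvalues of the normalised Laplacian are exactly $1 - \cos k$ with $k = 2\pi j/6$, i.e. with $k^2$ running through the eigenvalues of the standard Laplacian on the metric cycle:

```python
import numpy as np

# Discrete cycle C_E and its normalised Laplacian L_N = I - D^{-1/2} A D^{-1/2}.
E = 6
A = np.zeros((E, E))
for j in range(E):
    A[j, (j + 1) % E] = A[(j + 1) % E, j] = 1
D_inv_sqrt = np.diag(1 / np.sqrt(A.sum(axis=1)))
L_N = np.eye(E) - D_inv_sqrt @ A @ D_inv_sqrt

mu = np.sort(np.linalg.eigvalsh(L_N))

# For the equilateral metric cycle (unit edges, total length E) the standard
# Laplacian has eigenvalues k^2 with k = 2*pi*j/E; the relation used in this
# section predicts mu = 1 - cos(k) for the normalised Laplacian.
k = 2 * np.pi * np.arange(E) / E
predicted = np.sort(1 - np.cos(k))

assert np.allclose(mu, predicted)
```

Since $C\_6$ is bipartite, the largest eigenvalue $\mu = 2$ appears, in agreement with the discussion above.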

The theorem implies that to get equilateral isospectral metric graphs it is enough to check all families of discrete graphs with the same spectrum of the normalised Laplacian and then keep only graphs with the same Euler characteristic. The list of graphs leading to isospectral normalised Laplacians can be found *e.g.* in [116, 479] (see also [141, 342]).

Analysing the proof of the theorem, one may notice that the equality of Euler characteristics was used only to get equal multiplicities of the extremal eigenvalues. The reason is that for equilateral graphs formula (9.1) can be proven directly using the structure of the spectrum prescribed by Theorem 24.6; the particular values of the generic eigenvalues play no role.

*Proof of Formula* (9.1) *for Equilateral Graphs* Assume first that the graph is connected. Let us denote by $\omega\_j^2$, $j = 1, 2, \dots, J$, the eigenvalues of $L^{\mathrm{st}}(\Gamma)$ with $\omega\_j$ inside the interval $(0, 2\pi)$. Then the limit on the right hand side of (9.1) can be written as follows:

$$\begin{split} 2 - 2 \lim\_{t \to \infty} & \sum\_{k\_n \neq 0} \frac{1 - 2 \cos k\_n / t + \cos 2k\_n / t}{(k\_n / t)^2} \\ = 2 - 2(1 + \beta\_1) \lim\_{t \to \infty} \sum\_{m=1}^{\infty} \frac{1 - 2 \cos 2\pi m / t + \cos 4\pi m / t}{(2\pi m / t)^2} \\ & - 2 \lim\_{t \to \infty} \sum\_{m=0}^{\infty} \sum\_{j=1}^{J} \frac{1 - 2 \cos(\omega\_j + 2\pi m) / t + \cos 2(\omega\_j + 2\pi m) / t}{((\omega\_j + 2\pi m) / t)^2}, \end{split} \tag{24.36}$$

where we used that all points $k = 2\pi m$, $m = 1, 2, \dots$, have multiplicity $1 + \beta\_1$ and the point $k = 0$ has multiplicity $1$. The first limit can be calculated using formula (9.5).

To calculate the second limit let us use that the points $\omega\_j$ are situated symmetrically with respect to the center of the interval $(0, 2\pi)$:

$$\begin{split} &\sum\_{m=0}^{\infty} \sum\_{j=1}^{J} \frac{1 - 2\cos(\omega\_{j} + 2\pi m)/t + \cos 2(\omega\_{j} + 2\pi m)/t}{((\omega\_{j} + 2\pi m)/t)^{2}} \\ &= \frac{1}{2} \sum\_{m=0}^{\infty} \sum\_{j=1}^{J} \left\{ \frac{1 - 2\cos(m + \omega\_{j}/2\pi)/(t/2\pi) + \cos 2(m + \omega\_{j}/2\pi)/(t/2\pi)}{((m + \omega\_{j}/2\pi)/(t/2\pi))^{2}} \right. \\ &\left. + \frac{1 - 2\cos(m + (2\pi - \omega\_{j})/2\pi)/(t/2\pi) + \cos 2(m + (2\pi - \omega\_{j})/2\pi)/(t/2\pi)}{((m + (2\pi - \omega\_{j})/2\pi)/(t/2\pi))^{2}} \right\} \\ &= \frac{1}{2} \sum\_{m \in \mathbb{Z}} \sum\_{j=1}^{J} \frac{1 - 2\cos(m + \omega\_{j}/2\pi)/(t/2\pi) + \cos 2(m + \omega\_{j}/2\pi)/(t/2\pi)}{((m + \omega\_{j}/2\pi)/(t/2\pi))^{2}}. \end{split} \tag{24.37}$$

We are going to prove that the last sum is equal to zero using the formula

$$\sum\_{m \in \mathbb{Z}} \frac{e^{i(m+\alpha)x}}{(m+\alpha)^2} = \frac{2\pi e^{2\pi i \alpha}}{1 - e^{2\pi i \alpha}}\, x - \frac{(2\pi)^2 e^{2\pi i \alpha}}{(1 - e^{2\pi i \alpha})^2}, \quad \alpha \notin \mathbb{Z}. \tag{24.38}$$

To prove this formula one may exploit the following idea: find a linear function $f(x) = ax + b$, $0 \leq x \leq 2\pi$, such that the series on the left hand side of the formula is exactly the Fourier series for $f$ in the orthogonal basis $e^{i(m+\alpha)x}$. The function $f$ is represented by the following almost everywhere converging Fourier series

$$f(x) = \frac{1}{2\pi} \sum\_{m \in \mathbb{Z}} f\_m e^{i(m+\alpha)x},\tag{24.39}$$

where

$$f\_m = \int\_0^{2\pi} (ax + b)e^{-i(m+\alpha)x}\, dx. \tag{24.40}$$

The function *f* may be chosen equal to

$$f(x) = \frac{e^{2\pi i \alpha}}{1 - e^{2\pi i \alpha}}\, x - \frac{2\pi e^{2\pi i \alpha}}{(1 - e^{2\pi i \alpha})^2}.$$

Then the Fourier coefficients are given by

$$f\_m = \int\_0^{2\pi} \left( \frac{e^{2\pi i \alpha}}{1 - e^{2\pi i \alpha}}\, x - \frac{2\pi e^{2\pi i \alpha}}{(1 - e^{2\pi i \alpha})^2} \right) e^{-i(m+\alpha)x}\, dx = \frac{1}{(m+\alpha)^2}.$$

Thus formula (24.38) is proven and it implies in particular that

$$\sum\_{m \in \mathbb{Z}} \frac{1 - 2e^{i(m+\alpha)x} + e^{2i(m+\alpha)x}}{(m+\alpha)^2} = 0, \quad \text{provided } \alpha \notin \mathbb{Z}. \tag{24.41}$$

Indeed, using (24.38) we have:

$$\begin{split} &\sum\_{m\in\mathbb{Z}} \frac{1 - 2e^{i(m+\alpha)x} + e^{2i(m+\alpha)x}}{(m+\alpha)^2} \\ &= \sum\_{m\in\mathbb{Z}} \frac{1}{(m+\alpha)^2} - 2\sum\_{m\in\mathbb{Z}} \frac{e^{i(m+\alpha)x}}{(m+\alpha)^2} + \sum\_{m\in\mathbb{Z}} \frac{e^{2i(m+\alpha)x}}{(m+\alpha)^2} \\ &= -\frac{(2\pi)^2 e^{2\pi i\alpha}}{(1 - e^{2\pi i\alpha})^2} - 2\left(-\frac{(2\pi)^2 e^{2\pi i\alpha}}{(1 - e^{2\pi i\alpha})^2} + \frac{2\pi e^{2\pi i\alpha}}{1 - e^{2\pi i\alpha}}\, x\right) \\ &\quad - \frac{(2\pi)^2 e^{2\pi i\alpha}}{(1 - e^{2\pi i\alpha})^2} + \frac{2\pi e^{2\pi i\alpha}}{1 - e^{2\pi i\alpha}}\, 2x \\ &= 0, \end{split}$$

where to calculate the first sum we used that the series (24.39) at $x = 0$ converges to $\frac{1}{2}\big(f(+0) + e^{-2\pi i\alpha} f(2\pi - 0)\big)$. It turns out that

$$\sum\_{m \in \mathbb{Z}} \frac{1 - 2\cos(m+\alpha)x + \cos 2(m+\alpha)x}{(m+\alpha)^2} = 0, \quad \text{provided } \alpha \notin \mathbb{Z}, \tag{24.42}$$

and therefore the second sum in (24.36) (the sum (24.37)) is equal to zero. Finally we get
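Formulas (24.38) and (24.42) can be checked numerically with truncated sums. The following Python sketch (not part of the original text) uses $\alpha = 0.3$ and $x = 1$, a point where both $x$ and $2x$ lie inside $(0, 2\pi)$, so the Fourier representation of the linear function $f$ applies:

```python
import cmath
import math

# Partial-sum check of (24.38) and (24.42); alpha must not be an integer.
alpha, x = 0.3, 1.0
M = 200_000  # truncation; the absolute tail is O(1/M)

s_complex = 0j
s_real = 0.0
for m in range(-M, M + 1):
    d = (m + alpha) ** 2
    s_complex += cmath.exp(1j * (m + alpha) * x) / d
    s_real += (1 - 2 * math.cos((m + alpha) * x)
               + math.cos(2 * (m + alpha) * x)) / d

e = cmath.exp(2j * math.pi * alpha)
rhs = 2 * math.pi * e / (1 - e) * x - (2 * math.pi) ** 2 * e / (1 - e) ** 2

assert abs(s_complex - rhs) < 1e-3   # formula (24.38)
assert abs(s_real) < 1e-3            # formula (24.42)
```

The terms decay like $1/m^2$, so the symmetric partial sums converge absolutely and the chosen tolerance is well above the truncation error.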

$$2 - 2\lim\_{t \to \infty} \sum\_{k\_n \neq 0} \frac{1 - 2\cos k\_n/t + \cos 2k\_n/t}{(k\_n/t)^2} = 2 - 2(1 + \beta\_1)\frac{1}{2} + 0 = 1 - \beta\_1 = \chi.$$

Now it is straightforward to generalise this result to graphs that are not connected, yielding (9.1).

Thus we have proven the formula for the Euler characteristic for equilateral graphs without any use of the trace formula. It might be important to find a similar proof for arbitrary graphs. Such an alternative to the trace formula approach may provide new insight into the structure of the spectral asymptotics for $L^{\mathrm{st}}(\Gamma)$.

## **24.5 Spectral Gap for Discrete Laplacians**

The goal of this section is to derive elementary estimates for the spectral gaps of combinatorial and normalised Laplacians based on the methods developed for standard Laplacians on metric graphs (see Chaps. 12 and 13). Since the spectral theory of discrete Laplacians has a long history, one may say that we return and check how our ideas work in the discrete case. Our presentation here is limited, since we plan to discuss these questions in full detail in a forthcoming book with Delio Mugnolo and James Kennedy. We restrict ourselves to discussing what happens when edges are added and vertices are cut. It turns out that the answers have the simplest form for combinatorial and normalised Laplacians, respectively.

**Adding Edges: Combinatorial Laplacian** Our aim is to understand how the spectral gap, the difference between the two lowest eigenvalues $\mu\_2 - \mu\_1$, changes when the discrete graph grows. We are interested in "small" perturbations of the graph, like adding one edge between two existing vertices or adding one pendant edge. All results will be proved for the combinatorial Laplacian $L(G)$ given by (24.1); their generalisation to the normalised Laplacian is often straightforward.

The following statement is a direct analog of Theorem 12.11 for combinatorial graphs:

**Proposition 24.10** *Let $G$ be a connected discrete graph and let $G'$ be the discrete graph obtained from $G$ by adding one edge between the vertices $m\_1$ and $m\_2$. Let $L$ denote the combinatorial Laplacian defined by* (24.1)*. Then the following holds:*

*(1) The first excited eigenvalues satisfy the inequality:* 

$$
\mu\_2(L(G)) \le \mu\_2(L(G')).
$$

*(2) The equality $\mu\_2(L(G)) = \mu\_2(L(G'))$ holds if and only if the second eigenfunction $\psi\_2^G$ on the graph $G$ may be chosen attaining equal values at the vertices $m\_1$ and $m\_2$:*

$$
\psi\_2^G(m\_1) = \psi\_2^G(m\_2).
$$

*Proof* The first statement follows from the fact that

$$L(G') - L(G) = \begin{pmatrix} & \vdots & & \vdots & \\ \cdots & 1 & \cdots & -1 & \cdots \\ & \vdots & & \vdots & \\ \cdots & -1 & \cdots & 1 & \cdots \\ & \vdots & & \vdots & \end{pmatrix} \tag{24.43}$$

is a matrix with just four non-zero entries. It is easy to see that the matrix is positive semi-definite, since its eigenvalues are $0$ (with multiplicity $M-1$) and $2$ (a simple eigenvalue); therefore $L(G') - L(G) \geq 0$, which implies the first statement.

To prove the second assertion let us recall that $\mu\_2(L(G'))$ can be calculated using the Rayleigh quotient

$$\mu\_2(L(G)) = \min\_{\psi \perp \mathbf{1}} \frac{\langle \psi, L(G)\psi \rangle}{\langle \psi, \psi \rangle} \le \min\_{\psi \perp \mathbf{1}} \frac{\langle \psi, L(G')\psi \rangle}{\langle \psi, \psi \rangle} = \mu\_2(L(G')).$$

Here the trial function $\psi$ should be chosen orthogonal to the ground state, *i.e.* having mean value zero. We have equality in the last formula if and only if the $\psi$ minimising the first and the second quotients can be chosen such that $(L(G') - L(G))\psi = 0$, i.e. $\psi(m\_1) = \psi(m\_2)$. In other words, the vector $(\psi(m\_1), \psi(m\_2))$ can be chosen orthogonal to the eigenvector $(1, -1)$ corresponding to the non-zero eigenvalue of the matrix $\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$.
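A minimal numerical illustration of Proposition 24.10 (not from the book; it assumes `numpy`): adding the edge $(m\_1, m\_2) = (0, 2)$ to the path on three vertices produces the triangle $K\_3$, and the spectral gap indeed does not decrease:

```python
import numpy as np

def comb_laplacian(n_vertices, edges):
    """Combinatorial Laplacian L = D - A of a simple graph."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

# Path on 3 vertices; G' adds the edge (0, 2), turning G into the triangle K_3.
L_G = comb_laplacian(3, [(0, 1), (1, 2)])
L_Gp = comb_laplacian(3, [(0, 1), (1, 2), (0, 2)])

mu2_G = np.sort(np.linalg.eigvalsh(L_G))[1]    # first excited eigenvalue of G
mu2_Gp = np.sort(np.linalg.eigvalsh(L_Gp))[1]  # first excited eigenvalue of G'

assert mu2_G <= mu2_Gp + 1e-12
# The perturbation L(G') - L(G) is positive semi-definite: eigenvalues 0, 0, 2.
pert = np.sort(np.linalg.eigvalsh(L_Gp - L_G))
assert np.allclose(pert, [0, 0, 2])
```

Here the gap jumps from $1$ (path) to $3$ ($K\_3$); the strict increase is consistent with statement (2), since the second eigenfunction of the path cannot be chosen with equal values at the two endpoints.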

Next we are interested in what happens if we add a pendant edge, that is, an edge attached to the graph at one already existing vertex.

**Proposition 24.11** *Let $G$ be a connected discrete graph and let $G'$ be another graph obtained from $G$ by adding one vertex and one edge between the new vertex and the vertex $m\_1$. Then the following holds:*

*(1) The first excited eigenvalues of the combinatorial Laplacian satisfy the following inequality:* 

$$
\mu\_2(L(G)) \ge \mu\_2(L(G')).
$$

*(2) The equality $\mu\_2(L(G)) = \mu\_2(L(G'))$ holds if and only if every eigenfunction $\psi\_2^G$ corresponding to $\mu\_2(L(G))$ on $G$ is equal to zero at $m\_1$:*

$$
\psi\_2^G(m\_1) = 0.
$$

*Proof* Let us define the following vector on $G'$:

$$\varphi(n) := \begin{cases} \psi\_2^G(n) & \text{on } G, \\ \psi\_2^G(m\_1) & \text{on } G' \setminus G. \end{cases}$$

This vector is not necessarily orthogonal to the zero energy eigenfunction $\mathbf{1} \in \mathbb{C}^{M+1}$, where we keep the same notation $\mathbf{1}$ for the vector built of ones, now on $G'$. Therefore consider the following non-zero vector $\gamma$, obtained from $\varphi$ by adding a certain constant $c$:

$$\gamma(n) := \varphi(n) + c.$$

Here $c$ is chosen so that the orthogonality condition in $\ell\_2(G') = \mathbb{C}^{M+1}$ holds:

$$0 = \langle \gamma, \mathbf{1} \rangle\_{\ell\_2(G')} = \underbrace{\langle \psi\_2^G, \mathbf{1} \rangle\_{\ell\_2(G)}}\_{=0} + \psi\_2^G(m\_1) + cM',$$

where $M' = M + 1$ is the number of vertices in $G'$. This implies

$$c = -\frac{\psi\_2^G(m\_1)}{M'}.\tag{24.44}$$

Using this vector the following estimate on the first excited eigenvalue holds:

$$\begin{split} \mu\_{2}(L(G')) &\leq \frac{\langle L(G')\gamma, \gamma\rangle\_{\ell\_{2}(G')}}{\|\gamma\|\_{\ell\_{2}(G')}^{2}} \\ &= \frac{\langle L(G)\psi\_{2}^{G},\psi\_{2}^{G}\rangle\_{\ell\_{2}(G)}}{\|\psi\_{2}^{G}\|\_{\ell\_{2}(G)}^{2} + c^{2}M + |\psi\_{2}^{G}(m\_{1}) + c|^{2}} \leq \mu\_{2}(L(G)), \end{split} \tag{24.45}$$

where we took into account (24.10). The last inequality follows from the fact that

$$\langle L(G)\psi\_2^G, \psi\_2^G\rangle\_{l\_2(G)} = \mu\_2(L(G)) \|\psi\_2^G\|\_{l\_2(G)}^2,$$

and

$$\|\psi\_2^G\|\_{l\_2(G)}^2 + c^2 M + |\psi\_2^G(m\_1) + c|^2 \ge \|\psi\_2^G\|\_{l\_2(G)}^2.$$

Note that we have equality if and only if $c = 0$ and $|\psi\_2^G(m\_1) + c|^2 = 0$, which implies $\psi\_2^G(m\_1) = 0$ (even without assuming (24.44)). If there exists an eigenfunction $\psi\_2^G$ such that $\psi\_2^G(m\_1) \neq 0$, then the inequality in (24.45) is strict and we get

$$
\mu\_2(L(G)) > \mu\_2(L(G')).
$$
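The proposition can again be illustrated numerically (a sketch assuming `numpy`, not part of the book): attaching a pendant edge at an endpoint of the path on three vertices gives the path on four vertices, and the gap strictly decreases, since the second eigenfunction of the shorter path does not vanish at the contact vertex:

```python
import numpy as np

def comb_laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of a simple graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# G = path on 3 vertices; G' attaches a pendant edge at the endpoint m_1 = 2.
mu2_G = np.sort(np.linalg.eigvalsh(comb_laplacian(3, [(0, 1), (1, 2)])))[1]
mu2_Gp = np.sort(np.linalg.eigvalsh(
    comb_laplacian(4, [(0, 1), (1, 2), (2, 3)])))[1]

# Gap does not increase: mu_2 drops from 1 to 2 - sqrt(2).
assert mu2_Gp <= mu2_G + 1e-12
```

The eigenfunction $(1, 0, -1)/\sqrt{2}$ of the path does not vanish at the endpoint, so by statement (2) the inequality is strict, in agreement with the computed values.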

We see that the first excited eigenvalue has a tendency to decrease if a pendant edge is attached to a graph. It is clear from the proof that gluing any connected graph (instead of one edge) would lead to the same result, provided there is just one contact vertex. If the number of contact vertices is larger, then the spectral gap may increase, as shown in Proposition 24.10.

Note that a different proof of the first part of Proposition 24.10 may be found in [221], Corollary 3.2. In the same paper, a slightly weaker claim related to the first part of Proposition 24.11 is provided as Property 3.3.

**Splitting Vertices: Normalised Laplacians** The first question we would like to answer is what happens to the spectral gap when a vertex in a graph *G* is chopped into two vertices. The graph *G*ˆ obtained in this way has one vertex more than *G*. We shall answer this question for the normalised Laplacian *LN* . Recall that this question has already been addressed for standard Laplacians on metric graphs in Chap. 12.

The same question for discrete graphs is slightly more sophisticated since chopping a vertex into two increases the number of vertices and therefore changes the Hilbert space where the discrete Laplacian is defined.

**Proposition 24.12** *Let $G$ be a connected discrete graph and let $\hat{G}$ be another graph obtained from $G$ by chopping one vertex into two. Then the first excited eigenvalues of the normalised Laplacian satisfy the following inequality:*

$$
\mu\_2(L\_N(G)) \ge \mu\_2(L\_N(\hat{G})).\tag{24.46}
$$

*Proof* The theorem is an easy corollary of known results on discrete Laplacians; nevertheless, we present here a direct proof.

Let us denote by $V^0$ the vertex that is chopped and by $V^{0'}$ and $V^{0''}$ the two new vertices in $\hat{G}$, so that the following relation for the corresponding degrees holds: $d\_0 = d\_0' + d\_0''$. Consider the eigenvector $\psi\_2$ corresponding to $\mu\_2(L\_N(G))$. Let us introduce the following vector on $\hat{G}$:

$$\hat{u}(V^{m}) = \begin{cases} \psi\_2(V^{m}), & m \neq 0, \\ a', & V^{m} = V^{0'}, \\ a'', & V^{m} = V^{0''}. \end{cases} \tag{24.47}$$

The parameters $a'$ and $a''$ will be chosen so that the Rayleigh quotient does not change. It is natural to introduce the notation $\psi\_2(V^0) = a$. Consider first the difference between the quadratic forms given by (24.10) on $G$ and $\hat{G}$ respectively, assuming without loss of generality that $\psi\_2$ is real valued:

$$\begin{split} & \langle L\_{N}\hat{u}, \hat{u} \rangle\_{\ell\_{2}(\hat{G})} - \langle L\_{N}\psi\_{2}, \psi\_{2} \rangle\_{\ell\_{2}(G)} \\ & = \sum\_{V^{n} \sim\_{\hat{G}} V^{0'}} \left( \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}) - \frac{1}{\sqrt{d\_{0}'}} a' \right)^{2} + \sum\_{V^{n} \sim\_{\hat{G}} V^{0''}} \left( \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}) - \frac{1}{\sqrt{d\_{0}''}} a'' \right)^{2} \\ & \quad - \sum\_{V^{n} \sim\_{G} V^{0}} \left( \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}) - \frac{1}{\sqrt{d\_{0}}} a \right)^{2} \\ & = (a')^{2} + (a'')^{2} - a^{2} - 2 \frac{1}{\sqrt{d\_{0}'}} a' \sum\_{V^{n} \sim\_{\hat{G}} V^{0'}} \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}) - 2 \frac{1}{\sqrt{d\_{0}''}} a'' \sum\_{V^{n} \sim\_{\hat{G}} V^{0''}} \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}) \\ & \quad + 2 \frac{1}{\sqrt{d\_{0}}} a \sum\_{V^{n} \sim\_{G} V^{0}} \frac{1}{\sqrt{d\_{n}}} \psi\_{2}(V^{n}). \end{split}$$

We use now that $\psi\_2$ is an eigenvector; in particular the eigenvector equation holds for $m = 0$:

$$\begin{aligned} a &- \frac{1}{\sqrt{d\_0}} \sum\_{V^n \sim\_G V^0} \frac{1}{\sqrt{d\_n}} \psi\_2(V^n) = \mu\_2(L\_N(G))\, a \\ \Rightarrow & \sum\_{V^n \sim\_G V^0} \frac{1}{\sqrt{d\_n}} \psi\_2(V^n) = \sqrt{d\_0}\,(1 - \mu\_2(L\_N(G)))\, a. \end{aligned}$$

We continue with the difference between the quadratic forms

$$\begin{split} & \langle L\_N \hat{u}, \hat{u} \rangle\_{\ell\_2(\hat{G})} - \langle L\_N \psi\_2, \psi\_2 \rangle\_{\ell\_2(G)} \\ &= (a')^2 + (a'')^2 - a^2 + 2(1 - \mu\_2)a^2 - 2 \frac{\sqrt{d\_0}}{\sqrt{d\_0''}} (1 - \mu\_2)\, a a'' \\ &\quad + 2 \left( -\frac{a'}{\sqrt{d\_0'}} + \frac{a''}{\sqrt{d\_0''}} \right) \sum\_{V^n \sim\_{\hat{G}} V^{0'}} \frac{1}{\sqrt{d\_n}} \psi\_2(V^n). \end{split} \tag{24.48}$$

Since it is hard to control the sum $\sum\_{V^n \sim\_{\hat{G}} V^{0'}} \frac{1}{\sqrt{d\_n}} \psi\_2(V^n)$, let us assume that the coefficient in front of it vanishes:

$$-\frac{a'}{\sqrt{d\_0'}} + \frac{a''}{\sqrt{d\_0''}} = 0.\tag{24.49}$$

Under this condition the difference between the quadratic forms is given by

$$\langle L\_N \hat{u}, \hat{u} \rangle\_{\ell\_2(\hat{G})} - \langle L\_N \psi\_2, \psi\_2 \rangle\_{\ell\_2(G)} = (a')^2 + (a'')^2 - a^2 + 2(1 - \mu\_2)a^2 - 2 \frac{\sqrt{d\_0}}{\sqrt{d\_0''}}(1 - \mu\_2)\, a a''. \tag{24.50}$$

The second equation on $a'$, $a''$ (in addition to (24.49)) is obtained by requiring that $\hat{u}$ is orthogonal to the ground state eigenvector $\hat{\psi}\_1(V^n) = \sqrt{d\_n}$:

$$
\langle \hat{u}, \hat{\psi}\_1 \rangle\_{\ell\_2(\hat{G})} = \langle \psi\_2, \psi\_1 \rangle\_{\ell\_2(G)} - \sqrt{d\_0}\,a + \sqrt{d\_0'}\,a' + \sqrt{d\_0''}\,a'',\tag{24.51}
$$

where $\psi\_1$ is the ground state for $G$. Taking into account that $\langle \psi\_2, \psi\_1 \rangle\_{\ell\_2(G)} = 0$, we shall require that

$$
\sqrt{d\_0'}a' + \sqrt{d\_0''}a'' = \sqrt{d\_0}a.\tag{24.52}
$$

The system of linear equations (24.49) and (24.52) is easy to solve:

$$\begin{cases} a' = \frac{\sqrt{d\_0'}}{\sqrt{d\_0}} a, \\ a'' = \frac{\sqrt{d\_0''}}{\sqrt{d\_0}} a. \end{cases} \tag{24.53}$$

It follows in particular that

$$(a')^2 + (a'')^2 = a^2.\tag{24.54}$$

With these values of $a'$ and $a''$ the vector $\hat{u}$ is admissible and its Rayleigh quotient gives an upper estimate for $\mu\_2(L\_N(\hat{G}))$:

$$\mu\_2(L\_N(\hat{G})) \le \frac{\langle L\_N \hat{u}, \hat{u} \rangle\_{\ell\_2(\hat{G})}}{\|\hat{u}\|\_{\ell\_2(\hat{G})}^2}. \tag{24.55}$$

It turns out that with the chosen $a'$ and $a''$ both the quadratic form and the norm remain the same, thanks to (24.54):

$$\begin{cases} \langle L\_N \hat{u}, \hat{u} \rangle\_{\ell\_2(\hat{G})} = \langle L\_N \psi\_2, \psi\_2 \rangle\_{\ell\_2(G)},\\ \|\hat{u}\|\_{\ell\_2(\hat{G})}^2 = \|\psi\_2\|\_{\ell\_2(G)}^2. \end{cases}$$

Since $\psi\_2$ is an eigenfunction, it holds that $\langle L\_N \psi\_2, \psi\_2 \rangle\_{\ell\_2(G)} = \mu\_2(L\_N(G))\, \|\psi\_2\|^2\_{\ell\_2(G)}$, and (24.46) follows.
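As a numerical illustration of Proposition 24.12 (a sketch assuming `numpy`, not from the book): chopping one degree-two vertex of the cycle $C\_4$ into two degree-one vertices yields the path on five vertices, and the spectral gap of the normalised Laplacian does not increase:

```python
import numpy as np

def norm_laplacian(n, edges):
    """Normalised Laplacian L_N = I - D^{-1/2} A D^{-1/2} of a simple graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    Dinv = np.diag(1 / np.sqrt(A.sum(axis=1)))
    return np.eye(n) - Dinv @ A @ Dinv

# G = cycle C_4; chopping one vertex produces the path on 5 vertices.
mu2_G = np.sort(np.linalg.eigvalsh(
    norm_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))[1]
mu2_Ghat = np.sort(np.linalg.eigvalsh(
    norm_laplacian(5, [(0, 1), (1, 2), (2, 3), (3, 4)])))[1]

assert mu2_Ghat <= mu2_G + 1e-12   # 1 - cos(pi/4) <= 1
```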

**Remark** If $\psi\_2(V^0) = 0$, then the constructed vector $\hat{u}$ is an eigenvector for the Laplacian on $\hat{G}$ with the same eigenvalue. Most probably this statement can be generalised.

We are ready to prove the main result of this section.

**Theorem 24.13 (Fiedler)** *Let $G$ be a connected discrete graph. Then the spectral gap for the normalised Laplacian $L\_N(G)$ satisfies the following lower estimate:*

$$
\mu\_2(L\_N(G)) \ge 1 - \cos(\frac{\pi}{N}),\tag{24.56}
$$

*where N is the number of edges in G.*

*Proof* We start by doubling all edges in the original graph $G$. Let us denote the corresponding multigraph by $G\_2$. The new normalised Laplacian $L\_N(G\_2)$ just coincides with $L\_N(G)$ and therefore $\mu\_2(L\_N(G\_2)) = \mu\_2(L\_N(G))$.
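The invariance of the normalised Laplacian under edge doubling is immediate, since both the adjacency matrix and the degrees are multiplied by two. A one-line numerical check (assuming `numpy`, not part of the book), on the cycle $C\_4$ viewed as a multigraph:

```python
import numpy as np

# Adjacency matrix of the cycle C_4; doubling every edge replaces A by 2A
# and every degree by its double, leaving L_N = I - D^{-1/2} A D^{-1/2} fixed.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def ln(A):
    Dinv = np.diag(1 / np.sqrt(A.sum(axis=1)))
    return np.eye(len(A)) - Dinv @ A @ Dinv

assert np.allclose(ln(A), ln(2 * A))
```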

All vertices in $G\_2$ have even degrees and therefore there exists an Eulerian path $P$: a closed path going along each edge precisely once. This path can be seen as a loop obtained by chopping vertices in $G\_2$. As we have proven (Proposition 24.12), chopping vertices does not increase the spectral gap. Thus we have

$$
\mu\_2(L\_N(P)) \le \mu\_2(L\_N(G\_2)) = \mu\_2(L\_N(G)).\tag{24.57}
$$

To prove the theorem it remains to note that

$$
\mu\_2(L\_N(P)) = 1 - \cos\frac{\pi}{N}.\tag{24.58}
$$

It is clear that every eigenfunction of $L\_N(P)$ may be chosen quasi-invariant:

$$
\psi(V^n) = \psi(V^1) z^n,
$$

where $z$ is any $2N$-th root of $1$: $z\_j = \exp\{ i \frac{\pi}{N} j \}$, $j = 0, 1, \dots, 2N-1$. Substituting into the eigenfunction equation

$$(L\_N \psi)(V^n) = \psi(V^n) - \frac{1}{2} \left( z + 1/z \right) \psi(V^n) = \mu \psi(V^n)$$

gives the following values of *μ*

$$\mu(L\_N(P)) = 1 - \cos\frac{\pi j}{N}, \quad j = 0, 1, 2, \dots, N,$$

where all eigenvalues except the lowest and the largest, that is $\mu = 0, 2$, have multiplicity $2$ (altogether there are $2N$ eigenvalues). The two lowest eigenvalues are

$$
\mu\_1(L\_N(P)) = 0 \text{ and } \mu\_2(L\_N(P)) = 1 - \cos\frac{\pi}{N}
$$

and the spectral gap is given by (24.56).

Let us prove that the estimate is sharp. Consider the chain graph $G\_N$ formed by $N + 1$ vertices $V^0, V^1, \dots, V^N$ consecutively connected by $N$ edges (like a chain). Then the eigenfunction corresponding to the first excited eigenvalue of the normalised Laplacian is given by

$$\psi\_2(V^m) = \begin{cases} 1/\sqrt{2}, & m = 0, \\ \cos\frac{\pi}{N}m, & m = 1, 2, \dots, N - 1, \\ -1/\sqrt{2}, & m = N. \end{cases} \tag{24.59}$$

The corresponding eigenvalue is *μ*<sup>2</sup> = 1 − cos *π/N*, where *N* is the number of edges.
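Both the lower bound (24.56) and its sharpness on the chain graph can be verified numerically (a sketch assuming `numpy`, not from the book):

```python
import numpy as np

def norm_laplacian(n, edges):
    """Normalised Laplacian L_N = I - D^{-1/2} A D^{-1/2} of a simple graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    Dinv = np.diag(1 / np.sqrt(A.sum(axis=1)))
    return np.eye(n) - Dinv @ A @ Dinv

N = 7  # number of edges in the chain
chain = [(m, m + 1) for m in range(N)]
mu = np.sort(np.linalg.eigvalsh(norm_laplacian(N + 1, chain)))
# The bound is attained on the chain graph.
assert abs(mu[1] - (1 - np.cos(np.pi / N))) < 1e-9

# Any connected graph with N edges obeys the bound, e.g. K_4 with 6 edges:
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
mu2 = np.sort(np.linalg.eigvalsh(norm_laplacian(4, K4)))[1]
assert mu2 >= 1 - np.cos(np.pi / 6) - 1e-12
```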

**Problem 104** Calculate the spectrum of the complete graph $K\_3$ formed by three pairwise connected vertices. Use both combinatorial and normalised Laplacians. What is the connection between their spectra?

**Problem 105** Generalise the previous problem and calculate the spectrum of an arbitrary complete graph *KM.* What is the reason that the spectrum is highly degenerate? Both combinatorial and normalised Laplacians should be considered.

**Problem 106** Prove counterparts of Propositions 24.10 and 24.11, now for the normalised Laplacian.

**Problem 107** What is the analog of Theorem 24.13 for the combinatorial Laplacian?

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **More Metric Graphs**

Mathematical and physical literature devoted to differential operators on graphs is a rapidly growing cloud with hardly well-defined boundaries. Moreover, the day after the book is published the list will need to be extended. Therefore let me just mention here a few names and several directions that have not been covered by the book, but which are important in my very personal opinion.

First of all let me point out three important directions that have been discussed in particular in Chaps. 12, 13, and 24 without aiming to provide a comprehensive account of all results:

	- multiplicity of the eigenvalues [279, 360, 394, 428, 493, 494],
	- spectral asymptotics [20, 22, 101, 126, 131, 147, 157, 197, 298, 363],
	- non-dissipative eigenvalue dependent vertex conditions [490],
	- explicit estimates [30, 32, 87, 88, 107, 290–292, 294, 423, 448],
	- behaviour of the ground state [187, 193, 354, 371];
	- equilateral Laplacians [127, 129, 131, 141, 488],
	- equilateral graphs with potential [368, 418, 422],
	- spectra of Platonic solids [198];
	- Sunada construction [254, 478],
	- for normalised Laplacians [48, 108, 114–116, 146, 253, 377, 467],
	- for metric graphs [53, 55, 56, 96, 141, 216, 217, 252, 275, 342, 355, 415, 454, 464, 489].

The reason not to mention these topics in the book is two-fold: on the one hand we are planning to write an extended review and a book investigating two of these areas; on the other hand it seems that studies in these directions are far from being accomplished. Therefore the book focuses just on a few important ideas that can be used to pursue these studies.

There are also areas of research connected with spectral theory of metric graphs that have not been touched at all. These areas either have been already covered in the monographs mentioned in the introduction or are left for specialists to explain recent progress in these areas:

	- trace formula [59, 61, 93, 94, 230, 314, 396, 455, 484, 504, 505],
	- zeta-functions [145, 228, 262, 264, 265, 281, 370, 396, 481–483];
	- semigroups [121, 313, 315, 393, 495, 496],
	- heat kernels [95, 145, 154, 155],
	- Brownian motion [316–318];
	- in addition to already cited references see [122, 130, 141, 155, 178–180, 252, 392, 427, 453, 489],
	- using spectral mapping [509, 510],
	- inverse source problems [38, 39],
	- Stieltjes strings [430] and later references by the authors;
	- Neumann domains, nodal statistics [26, 27, 54, 57, 58, 60, 77, 78, 80, 229, 260, 261, 415],
	- chaos on metric graphs: starting with the pioneering work [320], see also [83–85, 287, 288, 321, 322, 361, 362, 366, 416, 432, 475, 480];
	- general vertex conditions [499],
	- anti-standard conditions [186, 497, 498],
	- spectral monotonicity [450],
	- with preferred orientation [62, 199, 207, 208, 215],
	- Krein-von Neumann extensions [395],
	- decorated graphs [112, 466],
	- approximations of general vertex conditions [119, 138–140, 190, 194, 200, 201, 203, 204, 209, 296];
	- fat graphs [194, 202, 296, 330, 435–438];
	- see collection [407] as well as [1–8, 102–104, 120, 167–174, 468];


	- random potential [10–13, 16–19, 211],
	- random edges [15, 17, 18, 300, 312, 417, 419],
	- random vertex couplings [299];
	- general [123, 125, 127, 128, 210, 214, 240, 251, 303, 304, 306–308, 491, 492, 500], many results are nicely summarised in the recent monograph [305],
	- discrete graphs [289],
	- infinite trees [161, 213, 449],
	- quasi-periodic [420, 421],
	- spectrum and eigenfunction expansion [111, 369],
	- fractals [28, 31, 439, 440, 477, 481–483] especially the books [297, 476];
	- graphene [64, 65, 158, 222, 329, 406],
	- blood flow [124],
	- nerve impulse transmission [398],
	- population dynamics [128],
	- solid state in magnetic field [65, 110, 113, 117, 118].

The lists above collect selected special topics in the spectral analysis of metric graphs.


The reference list above is not aimed at providing a complete account of all research articles devoted to the spectral theory of metric graphs; the aim is to give readers the possibility to navigate the literature, find interesting topics and learn the names of the researchers involved in these studies. I apologise in advance for all missing and incomplete references.

# **References**




Må detta vara nog. Och vill du nå ditt liv genom att läsa mer, fundera själv – och skriv! (Let this be enough. And if you wish to reach your life by reading more, think for yourself – and write!)

> Bo Setterlind (1923–1991)

Inspired by

Freund, es ist auch genug. Im Fall, du mehr willst lesen, so geh und werde selbst die Schrift und selbst das Wesen.

> Angelus Silesius (1624–1677)

Friend, let this be enough. If thou wouldst go on reading, go thyself and become the writing and the meaning.

# **Index**

#### **A**

Adding edge, 303
Algebraic multiplicity, 184
Ambartsumian theorem, 333, 356, 377–379
  geometric version, 336
  strong version, 343
Asymptotically isospectral, 272, 374

#### **B**

Balanced graph, 288
Bottleneck, 541
Boundary control, 467
Boundary Control method (BC-method), 464
  algorithm, 476
  for graphs, 478
Bunch, 496

#### **C**

Calderón problem, 515
Characteristic equation
  via the edge M-function, 118
  via the scattering matrix, 111
  via the secular polynomial, 125
  via the transfer matrix, 101
Cheeger estimate, 292
Classification of graphs, 129
Cleaning procedure, 509
Complement of a subgraph, 540
Connecting operator, 471, 476
Contact vertices/contact set, 402, 464
Contraction of graphs, 151
Control operator, 470
Core (of a graph), 518

Crum's procedure, 348
Crystalline measure, 239
Cutting edges, 310
Cutting vertices, 324

#### **D**

Davies theorem, 371
Degree (of a vertex), 11
Deleting edges, 311
Delone set, 250
Delta coupling, 55
Dependent subtrees, 525
Diameter, 495
Dirac comb, 239
  generalised, 239
Discrete set, 239
Dismantling, 524, 571
Dissolution of vertices, 531
Doubly connected graph, 288

#### **E**

Edge M-function, 116
Edge scattering matrix, 106
Energy curve, 405, 437
Euler characteristic, 12, 209, 216, 581
Eulerian path, 284
Extended normal derivative, 18
Extension of graphs, 158
Extremal eigenvalues, 583, 584, 586

#### **F**

Figure eight graph, 24


#### **G**

Generalised delta coupling, 49
Generalised Dirichlet condition, 261
Generalised Neumann condition, 267
Generalised Robin condition, 261
Generalised zero (of a matrix-valued function), 432
Generic eigenvalues, 584
Gluing vertices, 301, 326, 441
  general vertex conditions, 453
Goursat problem, 470
Graph scattering matrix, 457
Ground state, 80

#### **H**

Heat kernel, 357
Herglotz-Nevanlinna function, 115, 419, 430

#### **I**

Independent subtrees, 525
Infiltration domain, 519, 540, 547
Internal vertices, 402
Inverse problem
  for graphs with cycles, 517, 531, 571
  for graphs with one cycle, 565
  for the lasso, 563
  for the loop, 556
  for trees (algorithm), 511
Isospectral Laplacians, 592
Isospectral operators/graphs, 26

#### **L**

Laplace operator, 15
Laplacian
  averaging, 578
  combinatorial, 578
  normalised, 578
Lasso graph, 22
Laurent polynomial, 131
  reduced, 134
Leaf-peeling procedure, 487
Length
  discrete, 188
  geometric, 188
  total, 12

#### **M**

Magnetic Boundary Control method (MBC-method), 464, 531, 538, 544
Magnetic flux, 385, 557

Magnetic Schrödinger operator, 15, 53, 67
Matryoshka structure, 553
Metric graph, 10
M-function, 403, 453
  explicit formula, 416, 421
  hierarchy, 424
Modified spectrum, 234

#### **N**

Normal derivative, 13
Number of connected components, 11, 79, 181, 580

#### **P**

Pendant edge, 487, 495
Pendant free graph, 518
Pruning, 510

#### **Q**

Quadratic form, 54, 261

#### **R**

Rayleigh quotient, 82, 92
Reduced spectrum, 238
Reference Laplacian, 268
Response operator, 467, 476
Ring graph, 21

#### **S**

Schrödinger operator, 15
Secular polynomials, 124, 129
  reducibility, 173
  reduced, 133
Signed Schrödinger operator, 533
Singularity (of a matrix-valued function), 432
Skeleton, 545
Sobolev estimate, 262
Spectral estimate, 92, 318, 320, 321
  general vertex conditions, 281
  standard vertex conditions, 272
Spectral gap, 283, 303, 305, 307, 444, 446
  discrete Laplacians, 595
Spectral multiplicity, 184
Standard Laplacian, 123
Standard magnetic Schrödinger operator, 20
Stieltjes functions, 438
Symmetrisation, 286


#### **T**

Topological perturbations, 300, 324
Trace formula, 189, 206
Transfer matrix, 97, 558
Trimming, 499

#### **U**

Uniformly discrete set, 240

#### **V**

Vertex conditions
  admissible, 52
  asymptotically properly connecting, 268
  asymptotically standard, 268
  Hermitian, 32
  hyperplanar, 327
  properly connecting, 31
  scaling-invariant, 43
  standard, 19, 44
  via Hermitian matrices, 41, 62
  via linear relations, 33, 61
  via the vertex scattering matrix, 39, 63
Vertex phase, 385
Vertex scattering matrix, 36
von Below formula, 586

#### **W**

Wall (of infiltration domain), 542
Watermelon graph, 166
Weyl's asymptotics, 77
Wigner function, 430

#### **Z**

Zero set, 125
  reduced, 238