Operator Theory Advances and Applications 287

Christian Seifert Sascha Trostorff Marcus Waurick

# Evolutionary Equations

Picard's Theorem for Partial Differential Equations, and Applications

## **Operator Theory: Advances and Applications**

#### **Volume 287**

#### **Founded in 1979 by Israel Gohberg**

#### **Editors:**

Joseph A. Ball (Blacksburg, VA, USA)
Albrecht Böttcher (Chemnitz, Germany)
Harry Dym (Rehovot, Israel)
Heinz Langer (Wien, Austria)
Christiane Tretter (Bern, Switzerland)

#### **Associate Editors:**

Vadim Adamyan (Odessa, Ukraine)
Wolfgang Arendt (Ulm, Germany)
B. Malcolm Brown (Cardiff, UK)
Raul Curto (Iowa, IA, USA)
Kenneth R. Davidson (Waterloo, ON, Canada)
Fritz Gesztesy (Waco, TX, USA)
Pavel Kurasov (Stockholm, Sweden)
Vern Paulsen (Houston, TX, USA)
Mihai Putinar (Santa Barbara, CA, USA)
Ilya Spitkovsky (Abu Dhabi, UAE)

#### **Honorary and Advisory Editorial Board:**

Lewis A. Coburn (Buffalo, NY, USA)
J. William Helton (San Diego, CA, USA)
Marinus A. Kaashoek (Amsterdam, NL)
Thomas Kailath (Stanford, CA, USA)
Peter Lancaster (Calgary, Canada)
Peter D. Lax (New York, NY, USA)
Bernd Silbermann (Chemnitz, Germany)

#### **Subseries Linear Operators and Linear Systems**

*Subseries editors:* Daniel Alpay (Orange, CA, USA) Birgit Jacob (Wuppertal, Germany) André C.M. Ran (Amsterdam, The Netherlands)

#### **Subseries Advances in Partial Differential Equations**

*Subseries editors:* Bert-Wolfgang Schulze (Potsdam, Germany) Jerome A. Goldstein (Memphis, TN, USA) Nobuyuki Tose (Yokohama, Japan) Ingo Witt (Göttingen, Germany)

More information about this series at https://link.springer.com/bookseries/4850

Christian Seifert • Sascha Trostorff • Marcus Waurick

# Evolutionary Equations

Picard's Theorem for Partial Differential Equations, and Applications

Christian Seifert Institut für Mathematik Technische Universität Hamburg Hamburg, Germany

Marcus Waurick Institut für Angewandte Analysis TU Bergakademie Freiberg Freiberg, Germany

Sascha Trostorff Mathematisches Seminar Christian-Albrechts-Universität zu Kiel Kiel, Germany

ISSN 0255-0156 ISSN 2296-4878 (electronic)
Operator Theory: Advances and Applications
ISBN 978-3-030-89396-5 ISBN 978-3-030-89397-2 (eBook)
https://doi.org/10.1007/978-3-030-89397-2

© The Editor(s) (if applicable) and The Author(s) 2022. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG.

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

# **Preface**

The theory of evolutionary equations has its origins in the seminal paper [82] by Rainer Picard, working at the Technische Universität Dresden, Germany. All three of us were students at this university at the time. Thus, we were lucky enough to learn the theory of evolutionary equations from its early days on. We took and still take the opportunity to be part of the continuously growing group of people actively developing the theory further. In fact, both the PhD and the habilitation theses of S.T. and M.W. are concerned with generalisations of the initial theory as well as opening up new directions of research. It is also an aim of these lecture notes to present some of these latest results in a coherent text.

In general terms, the theory of evolutionary equations provides a Hilbert space method for understanding differential equations. It comprises a unified approach to solving both ordinary and partial differential equations as well as to proving general well-posedness results for both stationary and nonstationary, that is, time-dependent, problems. Besides well-posedness theorems for large classes of differential equations (including nonlinear problems), the theory addresses quantitative and qualitative questions related to exponential stability, homogenisation and regularity. This list is bound to get longer in the future. The general approach, furthermore, allows for either a comparison or a unification (depending on the context) of approaches initially tailored to particular types of equations, such as parabolic, hyperbolic or elliptic ones. In particular, mixed type equations can be considered and understood from the presented perspective. Thus, many fundamental equations of mathematical physics such as the heat equation, the wave equation, Maxwell's equations and the equations of elasticity theory can be treated with this method.

That the abovementioned equations all fit into one general solution theory was a surprising fact (at least for us). Even more so as the general problem class of evolutionary equations rests on four rather elementary observations, briefly summarised as follows:

• the (distributional) time derivative can be realised as a boundedly invertible, normal operator in exponentially weighted $L\_2$-spaces,


The last observation is particularly striking inasmuch as the monotonicity of the time derivative multiplied by the material law operator is rather easily verified in many applications. This provides a well-posedness criterion that is both elementary and general, often leading to generalisations of known solution criteria for particular situations. From an applied perspective, these criteria can often be verified without diving into the intricacies of more involved solution methods, and thus existing numerical methods for evolutionary equations can be used to solve the considered equation at hand numerically.

In the context of time-dependent equations and related topics, there is a well-established format for introducing various subjects to advanced master or diploma students as well as PhD students, namely the Internet Seminar on Evolution Equations. Since 1997, it has been organised by various groups from Germany, Hungary, Italy, the UK and the Netherlands, providing virtual lectures as well as supervised student projects. In the academic year 2019–2020, we organised the Internet Seminar, focussing on evolutionary equations. The present book is an extended version of the lecture notes for the virtual lectures. As such, it presents a thorough introduction to the theory of evolutionary equations and the corresponding solution theory, provides many different classes of examples and properties of solutions, and takes the reader from the very beginning of Picard's theorem to (almost) the state of the art in this theory.

As the text is based on weekly virtual lectures, each chapter of the book is intended to (roughly) comprise a selection of material covering 4 h of lectures and 2 h of exercise classes. Hence, this book contains material for one or two semesters. It is intended for master or diploma students as well as PhD students and researchers, and requires only basic knowledge of functional analysis, foundations of Hilbert space theory and complex analysis in one variable, at the level of introductory courses on these topics. Apart from these prerequisites, the material of the book is self-contained. At the end of each chapter we have appended seven exercises of varying difficulty, from easy to challenging, and comment on further reading and/or the wider context of the contents of the chapter.

We are indebted to Rainer Picard for introducing this theory to us more than a decade ago and for his past and ongoing support in many respects. We are very grateful to the participants of the 23rd Internet Seminar for reading the manuscript, working with the material and thus checking large parts of the present text. In particular, we cordially thank Jürgen Voigt, Hendrik Vogt and Michael Doherty for their valuable comments, which led to many improvements. M.W. thanks Jussi Behrndt for the invitation to a guest professorship at TU Graz at the end of 2020 and the beginning of 2021. This guest appointment led to the presentation of the course at TU Graz to many interested students; in particular, Julia Hauser, Peter Schlosser, Georg Stenzel and Raphael Watschinger studied the material and provided useful feedback that helped to profoundly improve the text. We thank the anonymous referees for their comments, which led to further improvements. All the remaining mistakes are our own.

We thank Christiane Tretter, Editor of the *Operator Theory* series, for her encouragement and guidance. Moreover, we thank Dorothy Mazlum for her support during the earlier stages of the manuscript (and its submission) as well as Daniel Jagadisan for the completion and final submission process. Last but not least, we thank the TU Bergakademie Freiberg for covering the open access costs for this manuscript, thus making the final version of these lecture notes freely available around the world.

Hamburg, Germany: Christian Seifert
Kiel, Germany: Sascha Trostorff
Freiberg, Germany: Marcus Waurick

August 2021


# **Chapter 1 Introduction**

This chapter is intended to give a brief introduction as well as a summary of the present text. We shall highlight some of the main ideas and methods behind the theory and will also aim to provide some background on the main concept in the manuscript: the notion of so-called

#### **Evolutionary Equations**

dating back to Picard in the seminal paper [82]; see also [84, Chapter 6].

Another expression used to describe the same thing (and in order to distinguish the concept from *evolution equations*) is that of *evo-systems*. Before going into detail on what we think of when using the term evolutionary equations, we provide some wider context to (some) solution methods of partial differential equations.

## **1.1 From ODEs to PDEs**

In order to study and understand partial differential equations (PDEs), people started out looking for ways to apply methods known from the theory of ordinary differential equations (ODEs) to PDEs. The process of passing from a PDE to some ODE is neither unique nor 'canonical'. That is to say, there may be more than one way of reformulating a PDE in a (generalised) ODE setting, if any.

The benefits of such a strategy, if it works, are obvious: since solution methods for ODEs are well known and well understood, some intuition from ODEs may be carried over to the solution process for PDEs. One way of directly applying ODE methods to PDEs concerns transport-type equations, where the method of characteristics uses the fact that, by the implicit function theorem, some solutions of PDEs correspond to solutions of ODEs. In this section we shall not delve into this direction of PDE theory but refer to standard literature such as [39] instead.

Another way of using ODE theory for PDEs is summarised by what might be called infinite-dimensional generalisations. In a nutshell, instead of solving a PDE directly, one solves (infinitely many) ODEs instead. For some equations this strategy can be applied via the separation of variables ansatz. Somewhat similarly, one can generalise linear ODEs to an infinite-dimensional setting; the umbrella term *evolution equation* signifies differential equations involving time. In order to provide some more detail on this strategy we briefly recall how to solve linear ODEs: Let us consider an $n \times n$ matrix $A$ with entries from the field $\mathbb{K}$ of complex or real numbers, $\mathbb{C}$ or $\mathbb{R}$, and address the system of ordinary differential equations

$$\begin{cases} u'(t) = Au(t), & t > 0, \\ u(0) = u\_0 \end{cases}$$

for some given initial datum $u\_0 \in \mathbb{K}^n$. The solution can be computed with the help of the matrix exponential

$$\mathbf{e}^{tA} = \sum\_{k=0}^{\infty} \frac{(tA)^k}{k!} \in \mathbb{K}^{n \times n}$$

in the form

$$u(t) = \mathrm{e}^{tA} u\_0.$$

As it turns out, this $u$ is continuously differentiable and satisfies the above equation. We note in particular that $\mathrm{e}^{tA}u\_0 \to \mathrm{e}^{0A}u\_0 = u\_0$ as $t \to 0+$ and that $\mathrm{e}^{(t+s)A} = \mathrm{e}^{tA}\mathrm{e}^{sA}$. In a way, to obtain the solution of the system of ordinary differential equations we need to construct $(\mathrm{e}^{tA})\_{t\geq 0}$, the so-called fundamental solution.
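For readers who wish to experiment, the following small Python sketch (our addition, with an illustrative $2 \times 2$ matrix) computes $u(t) = \mathrm{e}^{tA}u\_0$ via a truncated power series and checks the semigroup property; the truncated series is used only for illustration and is not a numerically robust method.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential e^A via a truncated power series (illustration only)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # generator of a rotation
u0 = np.array([1.0, 0.0])

t, s = 0.3, 0.5
u_t = expm(t * A) @ u0               # u(t) = e^{tA} u0 solves u' = Au, u(0) = u0
assert np.allclose(u_t, [np.cos(t), -np.sin(t)])

# semigroup property: e^{(t+s)A} = e^{tA} e^{sA}
assert np.allclose(expm((t + s) * A), expm(t * A) @ expm(s * A))

# strong continuity at 0: e^{tA} u0 -> u0 as t -> 0+
assert np.allclose(expm(1e-8 * A) @ u0, u0)
```

For this particular $A$ the flow is a clockwise rotation, so the asserted closed form $u(t) = (\cos t, -\sin t)$ can be checked by hand.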

In order to have a particular example of the infinite-dimensional generalisation in mind, let us have a look at the heat equation next. This is the prototypical example of an (infinite-dimensional) evolution equation: Let $\Omega \subseteq \mathbb{R}^d$ be open. Then consider

$$\begin{cases} \partial\_t \theta(t, \mathbf{x}) = \Delta \theta(t, \mathbf{x}), & (t, \mathbf{x}) \in (0, \infty) \times \Omega, \\ \theta(0, \mathbf{x}) = \theta\_0(\mathbf{x}), & \mathbf{x} \in \Omega, \end{cases}$$

where $\Delta = \sum\_{j=1}^{d} \partial\_j^2$ is the usual Laplacian taken with respect to the '$x$-variables' or 'spatial variables', $\theta\_0$ is a given initial heat distribution, and $\theta$ is the unknown (scalar-valued) heat distribution. The above heat equation is also accompanied by boundary conditions for $\theta(t, \mathbf{x})$, which are required to hold for all $t > 0$ and $\mathbf{x} \in \partial\Omega$. For definiteness, we consider homogeneous Dirichlet boundary conditions, that is, $\theta(t, \mathbf{x}) = 0$ for all $t > 0$ and $\mathbf{x} \in \partial\Omega$, in the following.


In order to mark the considered boundary conditions we shall write $\Delta\_D$ instead of just $\Delta$ and look at the heat equation in the form

$$
u' = \Delta\_D u, \quad u(0) = u\_0,
$$

with the understanding that $u$ is considered to be a vector-valued function assigning to each time $t \geq 0$ an element of a function space $X$ of functions $\Omega \to \mathbb{K}$; here we choose $X = L\_2(\Omega)$. If $\Omega$ is bounded, it is possible to diagonalise $\Delta\_D$, and the corresponding eigenvector expansion leads to infinitely many ODEs of the form

$$
u\_k' = \lambda\_k u\_k, \quad u\_k(0) = u\_{0,k}
$$

for suitable scalars $\lambda\_k$, $k \in \mathbb{N}$. The solution sequence $(u\_k)\_k$ of these ODEs is the sequence of coefficients in the eigenvector expansion of $u$.
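The eigenvector expansion can be made concrete for $\Omega = (0, \pi)$, where $\Delta\_D$ has eigenfunctions $\sin(k\,\cdot)$ and eigenvalues $\lambda\_k = -k^2$. The following Python sketch (our addition; the initial datum and the truncation at $K = 50$ modes are illustrative choices) solves the decoupled ODEs and sums the expansion:

```python
import numpy as np

# Heat equation on Omega = (0, pi) with homogeneous Dirichlet conditions.
# The Dirichlet Laplacian diagonalises with eigenfunctions sin(kx) and
# eigenvalues lambda_k = -k^2, so each coefficient solves u_k' = lambda_k u_k.
x = np.linspace(0.0, np.pi, 4001)
dx = x[1] - x[0]
theta0 = x * (np.pi - x)              # initial heat distribution, zero on the boundary

K = 50                                # truncation of the (infinite) expansion
# u_{0,k} = (2/pi) * integral of theta0(x) sin(kx) dx, via a Riemann sum
u0 = [2.0 / np.pi * np.sum(theta0 * np.sin(k * x)) * dx for k in range(1, K + 1)]

def theta(t):
    """Sum the decoupled ODE solutions u_k(t) = exp(-k^2 t) u_{0,k}."""
    return sum(c * np.exp(-k**2 * t) * np.sin(k * x)
               for k, c in zip(range(1, K + 1), u0))

assert np.max(np.abs(theta(0.0) - theta0)) < 1e-3     # expansion reproduces theta0
assert np.max(np.abs(theta(0.5))) < np.max(theta0)    # the heat distribution decays
```

Each mode decays at its own rate $\mathrm{e}^{-k^2 t}$, which is exactly the 'infinitely many ODEs' picture described above.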

A different infinite-dimensional generalisation of the finite-dimensional setting leads to a solution method valid for all $\Omega$.

This generalisation does not consist in replacing the PDE by many ODEs, but by a single one with an infinite-dimensional state space. The method is best described by looking at the fundamental solution in the ODE setting rather than at the equation itself. The idea is to find a fundamental solution with state space $X$; that is, we replace the family $(\mathrm{e}^{tA})\_{t\geq 0}$ of matrices acting on $\mathbb{K}^n$ by a family $(T(t))\_{t\geq 0}$ of linear operators on $X$. This leads to the notion of so-called $C\_0$-semigroups, and the fundamental solution of the heat equation is then the (appropriately interpreted) family $(\mathrm{e}^{t\Delta\_D})\_{t\geq 0}$; see [38, 48, 81] for some standard references. More precisely, for $X = L\_2(\Omega)$ and $\theta\_0 \in L\_2(\Omega)$, the function $\theta \colon t \mapsto \mathrm{e}^{t\Delta\_D}\theta\_0 \in L\_2(\Omega)$ satisfies the above heat equation in a certain *generalised* sense.

In general, for equations written in the form $u' = Au$ for appropriate $A$, a solution theory, that is, the proof of existence, uniqueness and continuous dependence on the data, is then contained in the construction of the fundamental solution (e.g., the $C\_0$-semigroup) in terms of the ingredients of the equation. This infinite-dimensional generalisation of the ODE case has proved versatile and has been applied to many different particular PDEs of the form $u' = Au$.

Albeit quite successful, the abovementioned theories also have some drawbacks. For particular PDEs the methods considered are either not applicable, or their application necessitates more or less involved workarounds.

In the next section, we describe a particular problem for which invoking, for instance, semigroup theory would seem unnatural and is not at all straightforward. It follows, however, the general scheme of looking at fundamental solutions in an infinite-dimensional context.

## **1.2 Time-independent Problems**

The construction of fundamental solutions is also a valuable method for obtaining a solution of time-independent problems; see, e.g., [39]. To see this, let us consider Poisson's equation in $\mathbb{R}^3$: given $f \in C\_c^\infty(\mathbb{R}^3)$ we want to find a function $u \colon \mathbb{R}^3 \to \mathbb{R}$ with the property that

$$-\Delta u(\mathbf{x}) = f(\mathbf{x}) \quad (\mathbf{x} \in \mathbb{R}^3).$$

It can be shown that *u* given by

$$u(\mathbf{x}) = \frac{1}{4\pi} \int\_{\mathbb{R}^3} \frac{1}{|\mathbf{x} - \mathbf{y}|} f(\mathbf{y}) \,\mathrm{d}\mathbf{y}$$

is well-defined, twice continuously differentiable and satisfies Poisson's equation; cf. Exercise 1.3. Note that $\mathbf{x} \mapsto \frac{1}{4\pi|\mathbf{x}|}$ is also referred to as the *fundamental solution* or *Green's function* for Poisson's equation. The formula presented for $u$ is the *convolution* with the fundamental solution. The formula used to define $u$ also works for $f$ being merely bounded and measurable with compact support. In this case, however, the pointwise formula of Poisson's equation cannot be expected to hold anymore, since changing $f$ on a set of measure $0$ does not influence the values of $u$. Thus, only a posteriori estimates under additional conditions on $f$ render $u$ twice continuously differentiable (say) with Poisson's equation holding for all $\mathbf{x} \in \mathbb{R}^3$. However, similar to the semigroup setting, it is possible to *generalise* the meaning of $-\Delta u = f$. Then, again, the fundamental solution can be used to construct a solution of Poisson's equation for more general $f$.
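As a small symbolic sanity check (our addition, using SymPy), one can verify that the fundamental solution is harmonic away from the origin:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = 1 / (4 * sp.pi * r)            # Green's function for -Laplace in R^3

# Laplacian of Phi vanishes away from the origin
laplacian = sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2) + sp.diff(Phi, z, 2)
assert sp.simplify(laplacian) == 0
```

The singularity at the origin is precisely what produces the Dirac delta in the distributional identity $-\Delta \frac{1}{4\pi|\mathbf{x}|} = \delta$, which is why convolving $f$ with this kernel inverts $-\Delta$.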

The situation becomes different when we consider a boundary value problem instead of the problem above. More precisely, let $\Omega \subseteq \mathbb{R}^3$ be an open set and let $f \in L\_2(\Omega)$. We then ask whether there exists $u \in L\_2(\Omega)$ such that

$$\begin{cases} -\Delta u = f, & \text{on } \Omega, \\\quad u = 0, & \text{on } \partial \Omega. \end{cases}$$

Notice that the mere task of (mathematically) formulating this equation, let alone establishing a solution theory, is something that needs to be addressed. Indeed, we emphasise that it is unclear what $\Delta u$ is supposed to mean if $u \in L\_2(\Omega)$ only. It turns out that the problem described is not well-posed in general. In particular, depending on the shape of $\Omega$ and the norms involved, it might, for instance, lack continuous dependence on the data $f$.

In any case, the solution formula that we used in the case $\Omega = \mathbb{R}^3$ does not work anymore. Indeed, only particular shapes of $\Omega$ permit an explicit construction of a fundamental solution; see [39, Section 2.2]. Despite this, when $\Omega$ is merely bounded, it is still possible to construct a solution $u$ of the above problem. There are two key ingredients for this approach. One is a clever application of Riesz's representation theorem for functionals on Hilbert spaces, and the other involves inventing 'suitable' interpretations of $\Delta u$ in $\Omega$ and of $u = 0$ on $\partial\Omega$. Thus, the method of 'solving' Poisson's equation amounts to posing the correct question, which can then be addressed *without* invoking the fundamental solution. With this in mind, one could argue that it is the *setting* that makes the problem solvable.

## **1.3 Evolution***ary* **Equations**

The central aim of evolutionary equations is to combine the rationales of both $C\_0$-semigroup theory and the time-independent case. That is to say, we wish to establish a setting that treats time-independent as well as time-dependent problems. At the same time we need to *generalise* solution concepts. We shall not aim to construct the fundamental solution in either the spatial or the temporal direction. The problem class will comprise problems that can be written in the form

$$\left(\partial\_t M(\partial\_t) + A\right)U = F$$

where $U$ is the unknown and $F$ the known right-hand side. Furthermore, $A$ is an (unbounded, skew-selfadjoint) operator acting in some Hilbert space that is thought of as modelling the spatial coordinates; $\partial\_t$ is a realisation of the (time-)derivative operator, and $M(\partial\_t)$ is an analytic, bounded, operator-valued function $M$ evaluated at the time derivative. In the course of the next chapters, we shall make these definitions precise and show how standard problems fit into this problem class. In particular, we will specify the Hilbert spaces modelling space-time in which the above equation is considered.

Before going into greater depth on this approach, we would like to emphasise the key differences and similarities which arise when compared to the derivation of more traditional solution theories that we outlined above.

Since the solution theory for evolutionary equations will also encapsulate time-independent problems, we predominantly focus on inhomogeneous problems. In fact, the choice of Hilbert spaces implies implicit homogeneous initial conditions at $t = -\infty$. However, inhomogeneous initial values at $t = 0$ will also be considered in this manuscript. In fact, it turns out that these initial value problems can be recast as problems of the above type.

In any case, as we do not want to require the existence of any fundamental solution, we will also need to introduce a *generalisation* of the concept of a solution. Moreover, we shall see that both $\partial\_t$ and $A$ are *unbounded* operators, whereas $M(\partial\_t)$ is a bounded operator. Thus, we need to make sense of the sum of the two unbounded operators $\partial\_t M(\partial\_t)$ and $A$, which, in general, cannot be realised as an operator that is onto, but only as one with dense range.

A post-processing procedure will then ensure that for more regular right-hand sides, *F*, the solution *U* will also be more regular. In some cases this will, for instance, amount to *U* being continuous in the time variable. We shall entirely confine ourselves within the Hilbert space case though. In this sense, the solution theory to be presented will be, in essence, an application of the projection theorem applied in a Hilbert space that combines both spatial and temporal variables.

The operator *M(∂t)* is thought of as carrying all the 'complexity' of the model. What we mean by complexity will become more apparent when we discuss some examples.

Finally, let us stress that *A* being 'skew-selfadjoint' is a way of implementing first order systems in our abstract setting. In fact, we shall focus on first order equations in *both* time *and* space. This is also another change in perspective when compared to classical approaches. As classical treatments might emphasise the importance of the Laplacian (and hence Poisson's equation) and variants thereof, evolutionary equations rather emphasise *Maxwell's equations* as the prototypical PDE. This change of point of view will be illustrated in the following section, where we address some classical examples.

## **1.4 Particular Examples and the Change of Perspective**

Here we will focus on three examples. These examples will also be the first to be readdressed when we discuss the solution theory of evolutionary equations in a later chapter. In order to simplify the current presentation we will not consider boundary value problems but solely concentrate on problems posed on $\Omega = \mathbb{R}^3$. Furthermore, we shall dispense with any initial conditions. For a more detailed account of the derivation of these equations, we refer to the appendix of this manuscript.

#### **Maxwell's Equations**

The prototypical evolutionary equation is the system provided by Maxwell's equations. Maxwell's equations consist of two equations describing an electromagnetic field $(E, H)$ subject to a given external current $j$:

$$
\partial\_t \varepsilon E + \sigma E - \operatorname{curl} H = j,
$$

$$
\partial\_t \mu H + \operatorname{curl} E = 0.
$$

We shall detail the properties of the material parameters $\varepsilon$, $\mu$, and $\sigma$ later on; for a definition of $\operatorname{curl}$ see Sect. 6.1. For the time being it is safe to assume that they are non-negative real numbers which additionally satisfy $\mu(\varepsilon + \sigma) > 0$. Now, in the setting of evolutionary equations, we gather the electromagnetic field into one column vector and obtain

$$
\left(\partial\_t \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl} & 0 \end{pmatrix}\right) \begin{pmatrix} E \\ H \end{pmatrix} = \begin{pmatrix} j \\ 0 \end{pmatrix}.
$$

We shall see later that we obtain an evolutionary equation by setting

$$M(\partial\_t) := \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \partial\_t^{-1} \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} \text{ and } A := \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl} & 0 \end{pmatrix}.$$

A formulation that fits well into an infinite-dimensional ODE-setting would be, for example,

$$
\partial\_t \begin{pmatrix} E \\ H \end{pmatrix} = \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix}^{-1} \begin{pmatrix} -\sigma & \operatorname{curl} \\ -\operatorname{curl} & 0 \end{pmatrix} \begin{pmatrix} E \\ H \end{pmatrix} + \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix}^{-1} \begin{pmatrix} j \\ 0 \end{pmatrix},
$$

provided that $\varepsilon > 0$. The inhomogeneous right-hand side $(\frac{1}{\varepsilon} j, 0)$ can then be dealt with by means of the variation of constants formula, which is the incarnation of the convolution of $(\frac{1}{\varepsilon} j, 0)$ with the fundamental solution in this time-dependent situation. Thus, in order to apply, for example, semigroup theory, the main task lies in showing that

$$
\widetilde{A} := \begin{pmatrix} -\frac{1}{\varepsilon}\sigma & \frac{1}{\varepsilon}\operatorname{curl} \\ -\frac{1}{\mu}\operatorname{curl} & 0 \end{pmatrix},
$$

gives rise to a suitable interpretation of $(\mathrm{e}^{t\widetilde{A}})\_{t\geq 0}$.
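The variation of constants formula mentioned above can be illustrated in a small finite-dimensional sketch (our addition; a diagonal matrix stands in for $\widetilde{A}$ so that the fundamental solution is explicit):

```python
import numpy as np

# u' = A u + f(t), u(0) = u0, with A = diag(-1, -2) so that the
# fundamental solution e^{tA} is explicit.
def fundamental(t):
    return np.diag([np.exp(-t), np.exp(-2.0 * t)])

u0 = np.array([1.0, 1.0])
f = lambda s: np.array([np.sin(s), 1.0])

# Variation of constants: u(t) = e^{tA} u0 + integral_0^t e^{(t-s)A} f(s) ds,
# i.e. the convolution of f with the fundamental solution.
t, N = 1.0, 4000
s = np.linspace(0.0, t, N, endpoint=False)
ds = t / N
u_t = fundamental(t) @ u0 + sum(fundamental(t - si) @ f(si) for si in s) * ds

# closed-form solution of this decoupled system for comparison
exact = np.array([1.5 * np.exp(-t) + (np.sin(t) - np.cos(t)) / 2.0,
                  0.5 + 0.5 * np.exp(-2.0 * t)])
assert np.allclose(u_t, exact, atol=1e-3)
```

The Riemann sum above is a crude quadrature; the point is only to display the structure 'free evolution plus convolution with the fundamental solution'.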

A different formulation needs to be put in place if $\varepsilon = 0$ everywhere. The situation becomes even more complicated if $\varepsilon$ and $\sigma$ are bounded, non-negative, measurable functions of the spatial variable such that $\varepsilon + \sigma \geq c$ for some $c > 0$. In the setting of evolutionary equations, this problem, however, *can* be dealt with. Note that then one cannot expect $E$ to be continuous with respect to the temporal variable unless $j$ is smooth enough.

#### **Wave Equation**

We shall discuss the scalar wave equation in a medium where the wave propagation speed is inhomogeneous in different directions of space. This is modelled by finding $u \colon \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ such that, given a suitable forcing term $f \colon \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ (again we skip initial values here), we have

$$
\partial\_t^2 u - \operatorname{div} a \operatorname{grad} u = f,
$$

where $a = a^\top \in \mathbb{R}^{3\times 3}$ is positive definite; that is, $\langle \xi, a\xi \rangle\_{\mathbb{R}^3} > 0$ for all $\xi \in \mathbb{R}^3 \setminus \{0\}$. In the context of evolutionary equations, we rewrite this as a first order problem in time *and* space. For this, we introduce $v := \partial\_t u$ and $q := -a \operatorname{grad} u$ and obtain that

$$
\left(\partial\_t \begin{pmatrix} 1 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} v \\ q \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
$$

Thus,

$$M(\partial\_t) := \begin{pmatrix} 1 & 0 \\ 0 & a^{-1} \end{pmatrix} \text{ and } A := \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}$$

render the wave equation as an evolutionary equation.
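Written out row by row, the first order system above reads (a short expansion added here to connect it back to the second order equation):

$$\partial\_t v + \operatorname{div} q = f, \qquad \partial\_t a^{-1} q + \operatorname{grad} v = 0.$$

The second equation is consistent with $v = \partial\_t u$ and $q = -a \operatorname{grad} u$; substituting these into the first equation recovers $\partial\_t^2 u - \operatorname{div} a \operatorname{grad} u = f$.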

Let us mention briefly that it is also possible to rewrite the wave equation as a first order system in time only. For this, a standard ODE trick is used: one simply introduces the additional variable $v = \partial\_t u$ and obtains that

$$
\partial\_t \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \operatorname{div} a \operatorname{grad} & 0 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} + \begin{pmatrix} 0 \\ f \end{pmatrix}.
$$

In this formulation the 'complexity' of the model is contained in the operator

$$
\begin{pmatrix} 0 & 1 \\ \operatorname{div} a \operatorname{grad} & 0 \end{pmatrix}.
$$

#### **Heat Equation**

We have already formulated classical approaches to the heat equation

$$
\partial\_t \theta - \operatorname{div} a \operatorname{grad} \theta = \mathcal{Q},
$$

in which we have added a heat source $\mathcal{Q}$ and a conductivity $a = a^\top \in \mathbb{R}^{3\times 3}$, again positive definite. Here, however, we reformulate the heat equation as a first order system in time and space and end up (again setting $q := -a \operatorname{grad} \theta$) with

$$
\left(\partial\_t \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} \mathcal{Q} \\ 0 \end{pmatrix}.
$$

In the context of evolutionary equations we then have that

$$M(\partial\_t) := \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \partial\_t^{-1} \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} \text{ and } A := \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}.$$
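Row by row, the first order system for the heat equation reads (an added expansion for comparison with the wave equation):

$$\partial\_t \theta + \operatorname{div} q = \mathcal{Q}, \qquad a^{-1} q + \operatorname{grad} \theta = 0,$$

so the second equation enforces $q = -a \operatorname{grad} \theta$, and the first then recovers $\partial\_t \theta - \operatorname{div} a \operatorname{grad} \theta = \mathcal{Q}$. In contrast to the wave equation, no time derivative acts on $q$.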

The advantage of this reformulation is that it becomes easily comparable to the first order formulation of the wave equation outlined above. For instance it is now possible to easily consider mixed type problems of the form

$$
\left(\partial\_t \begin{pmatrix} 1 & 0 \\ 0 & (1-s)a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & sa^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} \mathcal{Q} \\ 0 \end{pmatrix},
$$

with *s*: R<sup>3</sup> → [0*,* 1] being an arbitrary measurable function. In fact, in the solution theory for evolutionary equations, this does not amount to any additional complication of the problem. Models of this type are particularly interesting in the context of so-called solid–fluid interaction, where the interplay between a solid body and the fluid flow surrounding it is addressed.
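
To see how the parameter *s* interpolates between the two models, consider the constant extreme cases (this is our illustration, obtained by inserting *s* ≡ 1 and *s* ≡ 0 into the displayed system): for *s* ≡ 1 we recover the first order formulation of the heat equation above, whereas for *s* ≡ 0 we obtain

$$
\left(\partial_t \begin{pmatrix} 1 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} \mathcal{Q} \\ 0 \end{pmatrix},
$$

that is, *∂tq* = −*a* grad *θ*; differentiating the first equation in time then yields the wave-type equation *∂t*<sup>2</sup>*θ* − div *a* grad *θ* = *∂tQ*.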

## **1.5 A Brief Outline of the Course**

We now present an overview of the contents of the following chapters.

#### **Basics**

In order to properly set the stage, we shall begin with some background on operator theory in Banach and Hilbert spaces. We assume the reader to be acquainted with bounded linear operators, including results such as the uniform boundedness principle, and with basic concepts in the topology of metric spaces, such as density and closure. The most important new material will be the adjoint of an operator, which need not be bounded anymore. In order to deal with this notion, we will consider relations rather than operators as they provide the natural setting for *unbounded* operators. Having finished this brief detour on operator theory, we will turn to a generalisation of Lebesgue spaces. More precisely, we will survey ideas from Lebesgue's integration theory for functions attaining values in an infinite-dimensional Banach space.

#### **The Time Derivative**

Banach space-valued (or rather Hilbert space-valued) integration theory will play a fundamental role in defining the time derivative as an unbounded, continuously invertible operator in a suitable Hilbert space. In order to obtain continuous invertibility, we have to introduce an exponential weighting function, which is akin to the exponential weight introduced in the space of continuous functions for a proof of the Picard–Lindelöf theorem; that is, the unique existence theorem for solutions of ODEs. It is therefore natural to discuss the application of this operator to ODEs. Hence, in passing, we will present a Hilbert space solution theory for ordinary differential equations. Here, we will also have the opportunity to discuss ordinary differential equations with delay and memory. After this short detour, we will turn back to the time derivative operator and describe its spectrum. For this we introduce the so-called Fourier–Laplace transformation which transforms the time derivative into a multiplication operator. This unitary transformation will additionally serve to define (analytic and bounded) functions of the time derivative. This is absolutely essential for the formulation of evolutionary equations.

#### **Evolutionary Equations**

Having finished the necessary preliminary work, we will then be in a position to provide the proper justification of the formulation and solution theory for evolutionary equations. We will accompany this solution theory not only with the three leading examples from above, but also with some more sophisticated equations. Amazingly, the considered space-time setting will allow us to discuss (time-)fractional differential equations, partial differential equations with delay terms and even a class of integro-differential equations. Relaxing the focus on regularity with respect to the temporal variable, we are, en passant, able to generalise well-posedness conditions from the classical literature. However, we shall stick to the treatment of analytic operator-valued functions *M* only. Therefore, we will also include some arguments as to why this assumption seems to be *physically* meaningful. It will turn out that analyticity and causality are intimately related via both the so-called Paley–Wiener theorem and a representation theorem for time translation invariant causal operators.

#### **Initial Value Problems for Evolutionary Equations**

As has been outlined above, the focus of evolutionary equations is on inhomogeneous right-hand sides rather than on initial value problems. However, initial value problems can also be treated with the approach discussed here. For this, we need to introduce extrapolation spaces. This then enables us to formulate initial value problems as inhomogeneous equations. We have to make a concession on the structure of the problem, however. In fact, we will focus on the case when *M(∂t)* = *M*<sub>0</sub> + *∂*<sub>*t*</sub><sup>−1</sup>*M*<sub>1</sub> for some bounded linear operators *M*<sub>0</sub>*, M*<sub>1</sub> acting in the spatial variables alone. The initial condition will then read as *(M*<sub>0</sub>*U)(*0+*)* = *M*<sub>0</sub>*U*<sub>0</sub>. Hence, one might argue that the initial condition *U(*0+*)* = *U*<sub>0</sub> is only assumed in a rather generalised sense. This is due to the fact that *M*<sub>0</sub> might be zero. However, for the case *A* = 0 we will also discuss the initial condition *U(*0+*)* = *U*<sub>0</sub>, which amounts to a treatment of so-called differential-algebraic equations in both finite- and infinite-dimensional state spaces.

#### **Properties of Solutions and Inhomogeneous Boundary Value Problems**

Turning back to the case when *A* ≠ 0, we will discuss qualitative properties of solutions of evolutionary equations, one of which is exponential decay. We will identify a subclass of evolutionary equations where it is comparatively easy to show that if the right-hand side decays exponentially then so too does the solution. If the right-hand side is smooth enough we obtain that *U(t)*, the solution of the evolutionary equation at time *t*, decays exponentially as *t* → ∞. Furthermore, we will frame inhomogeneous boundary value problems in the setting of evolutionary equations. The method will require a bit more of the regularity theory for evolutionary equations and a definition of suitable boundary values. In particular, we shall present a way of formulating classical inhomogeneous boundary value problems for domains without any boundary regularity.

#### **Properties of the Solution Operator and Extensions**

In the final part, we shall have another look at the advantages of the problem formulation. In fact, we will have a look at the notion of homogenisation of differential equations. In the problem formulation presented here, we shall analyse the continuity properties of the solution operator with respect to weak operator topology convergence of the operator *M(∂t)*. We will address an example for ordinary differential equations (when *A* = 0) and one for partial differential equations (when *A* ≠ 0). It will turn out that the respective continuity properties are profoundly different from one another.

Furthermore, we take the occasion to address the notion of 'maximal regularity' in the context of evolutionary equations. Maximal regularity was initially coined for parabolic-type problems like the heat equation. It turns out that evolutionary equations have a property similar to maximal regularity if one assumes the block structure of *M(∂t)* and *A* to satisfy certain requirements. These requirements lead to a subclass of evolutionary equations containing classical parabolic-type equations. We conclude the body of the text with two extensions of Picard's theorem: the first addresses non-autonomous problems and the second non-linear evolutionary inclusions.

## **1.6 Comments**

The presentation here focuses on the main notions behind evolutionary equations, mostly in order to properly motivate the theory and to highlight the most striking differences in philosophy. There are other solution concepts (and corresponding general settings) developed for partial differential equations, either time-dependent or without involving time.

There is an abundance of examples and additional concepts for *C*<sub>0</sub>-semigroups, for which we refer to the aforementioned standard treatments again. There is also a generalisation to problems that are second order in time, e.g., *u*′′ = *Au*, where *u(*0*)* and *u*′*(*0*)* are given. This gives rise to cosine families of bounded linear operators, which is another way of generalising the fundamental solution concept, see, for example, [107].

The main focus of all of these equations is to address *initial value problems*, where the (first/second) time derivative of the unknown is explicit.

Another way of writing many PDEs from mathematical physics into a common form uses the notion of Friedrichs systems, see [43, 44]. However, the main focus of Friedrichs systems is on static, that is, time-independent partial differential equations. A time-dependent variant of constant coefficient Friedrichs systems are so-called symmetric-hyperbolic systems, see e.g. [12]. In these cases, whether the authors treat constant coefficients or not, the framework of evolutionary equations adds a profound amount of additional complexity by including the operator *M(∂t)*.

The treatment of time-dependent problems in space-time settings, addressing corresponding well-posedness properties of a sum of two unbounded operators, has also been considered in [26] with elaborate conditions on the operators involved. In their studies, the flexibility introduced by the operator *M(∂t)* in our setting is missing; thus, the time derivative operator is not thought of as having any variable coefficients attached to it.

## **Exercises**

**Exercise 1.1** Let *φ* ∈ *C(*R*,* R*)*. Assume that *φ(t* + *s)* = *φ(t)φ(s)* for all *t, s* ∈ R, and *φ(*0*)* = 1. Show that *φ(t)* = e<sup>*αt*</sup> (*t* ∈ R) for some *α* ∈ R.

**Exercise 1.2** Let *n* ∈ N and let *T* : R → R<sup>*n*×*n*</sup> be continuously differentiable such that *T(t* + *s)* = *T(t)T(s)* for all *t, s* ∈ R and *T(*0*)* = *I*. Show that there exists *A* ∈ R<sup>*n*×*n*</sup> with the property that *T(t)* = e<sup>*tA*</sup> (*t* ∈ R).

**Exercise 1.3** Show that

$$
u(x) = \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{1}{|x-y|} f(y) \,\mathrm{d}y
$$

satisfies Poisson's equation, given *f* ∈ *C*<sub>c</sub><sup>∞</sup>*(*R<sup>3</sup>*)*.

**Exercise 1.4** Let *f* ∈ *C*<sub>c</sub><sup>∞</sup>*(*R*)*. Define *u(t, x)* := *f(x* + *t)* for *x, t* ∈ R. Show that *u* satisfies the differential equation *∂tu* = *∂xu* and *u(*0*, x)* = *f(x)* for all *x* ∈ R.

**Exercise 1.5** Let *X, Y* be Banach spaces and *(T<sub>n</sub>)<sub>n∈N</sub>* a sequence in *L(X, Y)*, the set of bounded linear operators. If sup {‖*T<sub>n</sub>*‖ ; *n* ∈ N} = ∞, show that there exist *x* ∈ *X* and a strictly increasing sequence *(n<sub>k</sub>)<sub>k∈N</sub>* in N such that ‖*T<sub>n<sub>k</sub></sub>x*‖ → ∞.

**Exercise 1.6** Let *n* ∈ N. Denote by GL*(n*; K*)* the set of continuously invertible *n* × *n* matrices. Show that GL*(n*; K*)* ⊆ K<sup>*n*×*n*</sup> is open.

**Exercise 1.7** Let *n* ∈ N. Show that the mapping GL*(n*; K*)* ∋ *A* ↦ *A*<sup>−1</sup> ∈ K<sup>*n*×*n*</sup> is continuously differentiable, and compute its derivative.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 Unbounded Operators**

We will gather some information on operators in Banach and Hilbert spaces. Throughout this chapter let *X*0, *X*1*,* and *X*<sup>2</sup> be Banach spaces and *H*0, *H*1, and *<sup>H</sup>*<sup>2</sup> be Hilbert spaces over the field <sup>K</sup> ∈ {R*,* <sup>C</sup>}.

## **2.1 Operators in Banach Spaces**

We define the set of continuous linear operators

$$L(X_0, X_1) := \left\{ B \colon X_0 \to X_1 \ ; \ B \text{ linear}, \ \|B\| := \sup_{x \in X_0 \setminus \{0\}} \frac{\|Bx\|}{\|x\|} < \infty \right\}$$

with the usual abbreviation *L(X*0*)* := *L(X*0*, X*0*)*. In contrast to a bounded linear operator, a discontinuous or unbounded linear operator only needs to be defined on a proper albeit possibly dense subset of *X*0. In order to define unbounded linear operators, we will first take a more general point of view and introduce (linear) relations. This perspective will turn out to be the natural setting later on.

**Definition** A subset *A* ⊆ *X*<sup>0</sup> × *X*<sup>1</sup> is called a *relation in X*<sup>0</sup> *and X*1. We define the *domain*, *range* and *kernel of A* as follows

$$\text{dom}(A) := \{ \mathbf{x} \in X\_0 \colon \exists \mathbf{y} \in X\_1 \colon (\mathbf{x}, \mathbf{y}) \in A \},$$

$$\text{ran}(A) := \{ \mathbf{y} \in X\_1 \colon \exists \mathbf{x} \in X\_0 \colon (\mathbf{x}, \mathbf{y}) \in A \},$$

$$\text{ker}(A) := \{ \mathbf{x} \in X\_0 \colon (\mathbf{x}, \mathbf{0}) \in A \}.$$

The *image, A*[*M*]*, of a set M* ⊆ *X*<sup>0</sup> *under A* is given by

$$A[M] := \{ y \in X_1 \ ; \ \exists x \in M \colon (x, y) \in A \} \,.$$

A relation *A* is called *bounded* if for all bounded *M* ⊆ *X*<sup>0</sup> the set *A*[*M*] ⊆ *X*<sup>1</sup> is bounded. For a given relation *A* we define the *inverse relation*

$$A^{-1} := \{ (y, x) \in X_1 \times X_0 \colon (x, y) \in A \} \,.$$

A relation *A* is called *linear* if *A* ⊆ *X*<sup>0</sup> × *X*<sup>1</sup> is a linear subspace. A linear relation *A* is called *linear operator* or just *operator from X*<sup>0</sup> *to X*<sup>1</sup> if

$$A[\{0\}] = \{ y \in X_1 \ ; \ (0, y) \in A \} = \{ 0 \}.$$

In this case, we also write

$$A \colon \text{dom}(A) \subseteq X\_0 \to X\_1$$

to denote a linear operator from *X*<sup>0</sup> to *X*1. Moreover, we shall write *Ax* = *y* instead of *(x, y)* ∈ *A* in this case. A linear operator *A*, which is not bounded, is called *unbounded*.
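
A minimal example (ours, not taken from the text) of a linear relation that fails to be an operator is, for *X*<sub>1</sub> ≠ {0},

$$
A := \{0\} \times X_1 = \{ (0, y) \ ; \ y \in X_1 \} \subseteq X_0 \times X_1,
$$

for which *A*[{0}] = *X*<sub>1</sub> ≠ {0}. Note that *A* is precisely the inverse relation of the zero operator *X*<sub>1</sub> → *X*<sub>0</sub>; inverting non-injective operators is one of the main ways in which genuine relations arise.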

For completeness, we also define the sum, scalar multiples, and composition of relations.

**Definition** Let *<sup>A</sup>* <sup>⊆</sup> *<sup>X</sup>*<sup>0</sup> <sup>×</sup>*X*1, *<sup>B</sup>* <sup>⊆</sup> *<sup>X</sup>*<sup>0</sup> <sup>×</sup>*X*<sup>1</sup> and *<sup>C</sup>* <sup>⊆</sup> *<sup>X</sup>*<sup>1</sup> <sup>×</sup>*X*<sup>2</sup> be relations, *<sup>λ</sup>* <sup>∈</sup> <sup>K</sup>. Then we define

$$\begin{aligned} A + B &:= \{ (x, y + w) \in X_0 \times X_1 \ ; \ (x, y) \in A, (x, w) \in B \}, \\ \lambda A &:= \{ (x, \lambda y) \in X_0 \times X_1 \ ; \ (x, y) \in A \} \,, \\ CA &:= \{ (x, z) \in X_0 \times X_2 \ ; \ \exists y \in X_1 \colon (x, y) \in A, (y, z) \in C \} \,. \end{aligned}$$

For a relation *A* ⊆ *X*<sub>0</sub> × *X*<sub>1</sub> we will use the abbreviation −*A* := *(*−1*)A* (so that the minus sign only acts on the second component). We now proceed with topological notions for relations.

**Definition** Let *A* ⊆ *X*<sub>0</sub> × *X*<sub>1</sub> be a relation. *A* is called *densely defined* if dom*(A)* is dense in *X*<sub>0</sub>. We call *A closed* if *A* is a closed subset of the direct sum of the Banach spaces *X*<sub>0</sub> and *X*<sub>1</sub>. If *A* is a linear operator then we will call *A closable* whenever its closure $\overline{A}$ ⊆ *X*<sub>0</sub> × *X*<sub>1</sub> is a linear operator.
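
For illustration (a standard example, not from the text), a densely defined operator need not be closable. Take *X*<sub>0</sub> = *X*<sub>1</sub> = *L*<sub>2</sub>*(*0*,* 1*)*, dom*(A)* := *C*[0*,* 1] and *Af* := *f(*0*)*𝟙, where 𝟙 denotes the constant function 1. For *f<sub>n</sub>(x)* := *(*1 − *x)*<sup>*n*</sup> we have ‖*f<sub>n</sub>*‖<sup>2</sup> = 1*/(*2*n* + 1*)* → 0 while *Af<sub>n</sub>* = 𝟙, so *(*0*,* 𝟙*)* ∈ $\overline{A}$ and hence $\overline{A}$ is not an operator.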

**Proposition 2.1.1** *Let A* ⊆ *X*<sub>0</sub> × *X*<sub>1</sub> *be a relation, C* ∈ *L(X*<sub>2</sub>*, X*<sub>0</sub>*) and B* ∈ *L(X*<sub>0</sub>*, X*<sub>1</sub>*). Then the following statements hold.*

(a) *A is closed if and only if A*<sup>−1</sup> *is closed.*
(b) *A is closed if and only if A* + *B is closed.*
(c) *If A is closed, then AC is closed.*

*Proof* Statement (a) follows upon realising that *X*<sub>0</sub> × *X*<sub>1</sub> ∋ *(x, y)* ↦ *(y, x)* ∈ *X*<sub>1</sub> × *X*<sub>0</sub> is an isomorphism.

For statement (b), it suffices to show that the closedness of *A* implies the same for *A* + *B*. Let *((xn, yn))n* be a sequence in *A* + *B* convergent in *X*<sup>0</sup> × *X*<sup>1</sup> to some *(x, y)*. Since *B* ∈ *L(X*0*, X*1*)*, it follows that *((xn, yn* − *Bxn))n* in *A* is convergent to *(x, y* − *Bx)* in *X*<sup>0</sup> ×*X*1. Since *A* is closed, *(x, y* − *Bx)* ∈ *A*. Thus, *(x, y)* ∈ *A* + *B*.

For statement (c), let *((wn, yn))n* be a sequence in *AC* convergent in *X*<sup>2</sup> × *X*<sup>1</sup> to some *(w, y)*. Since *C* is continuous, *(Cwn)n* converges to *Cw*. Hence, *(Cwn, yn)* → *(Cw, y)* in *X*<sup>0</sup> × *X*<sup>1</sup> and since *(Cwn, yn)* ∈ *A* and *A* is closed, it follows that *(Cw, y)* ∈ *A*. Equivalently, *(w, y)* ∈ *AC*, which yields closedness of *AC*.

We shall gather some other elementary facts about closed operators in the following. We will make use of the following notion.

**Definition** Let *A*: dom*(A)* ⊆ *X*<sub>0</sub> → *X*<sub>1</sub> be a linear operator. Then the *graph norm* of *A* is defined by dom*(A)* ∋ *x* ↦ ‖*x*‖<sub>*A*</sub> := $\left( \|x\|^2 + \|Ax\|^2 \right)^{1/2}$.

**Lemma 2.1.2** *Let A*: dom*(A)* ⊆ *X*<sub>0</sub> → *X*<sub>1</sub> *be a linear operator. Then the following statements are equivalent:*

(i) *A is closed.*
(ii) dom*(A), endowed with the graph norm of A, is a Banach space.*
(iii) *For every sequence (x<sub>n</sub>)<sub>n</sub> in* dom*(A) with x<sub>n</sub>* → *x in X*<sub>0</sub> *and Ax<sub>n</sub>* → *y in X*<sub>1</sub>*, it follows that x* ∈ dom*(A) and Ax* = *y.*

*Proof* For the equivalence (i)⇔(ii), it suffices to observe that dom*(A)* ∋ *x* ↦ *(x, Ax)* ∈ *A*, where dom*(A)* is endowed with the graph norm, is an isomorphism. The equivalence (i)⇔(iii) is an easy reformulation of the definition of closedness of *A* ⊆ *X*<sub>0</sub> × *X*<sub>1</sub>.

Unless explicitly stated otherwise (e.g. in the form dom*(A)* ⊆ *X*0, where we regard dom*(A)* as a subspace of *X*0), for closed operators *A* we always consider dom*(A)* as a Banach space in its own right; that is, we shall regard it as being endowed with the graph norm.

**Lemma 2.1.3** *Let A*: dom*(A)* ⊆ *X*<sup>0</sup> → *X*<sup>1</sup> *be a closed linear operator. Then A is bounded if and only if* dom*(A)* ⊆ *X*<sup>0</sup> *is closed.*

*Proof* First of all note that boundedness of *A* is equivalent to the fact that the graph norm and the *X*0-norm on dom*(A)* are equivalent. Hence, the closedness and boundedness of *A* implies that dom*(A)* ⊆ *X*<sup>0</sup> is closed. On the other hand, the embedding

$$\iota\colon (\text{dom}(A), \|\cdot\|\_{A}) \hookrightarrow (\text{dom}(A), \|\cdot\|\_{X\_0})$$

is continuous and bijective. Since the range is closed, the open mapping theorem implies that *ι*<sup>−1</sup> is continuous. This yields the equivalence of the graph norm and the *X*<sub>0</sub>-norm and, thus, the boundedness of *A*.

For unbounded operators, obtaining a precise description of the domain may be difficult. However, there may be a subset of the domain which essentially (or approximately) describes the operator. This gives rise to the following notion of a core.

**Definition** Let *A* ⊆ *X*<sub>0</sub> × *X*<sub>1</sub>. A set *D* ⊆ dom*(A)* is called a *core for A* provided $\overline{A \cap (D \times X_1)} = \overline{A}$.

**Proposition 2.1.4** *Let A* ∈ *L(X*0*, X*1*), and D* ⊆ *X*<sup>0</sup> *a dense linear subspace. Then D is a core for A.*

**Corollary 2.1.5** *Let A*: dom*(A)* ⊆ *X*<sub>0</sub> → *X*<sub>1</sub> *be a densely defined, bounded linear operator. Then there exists a unique B* ∈ *L(X*<sub>0</sub>*, X*<sub>1</sub>*) with B* ⊇ *A. In particular, we have B* = $\overline{A}$ *and*

$$\|B\| = \sup_{x \in \text{dom}(A),\, x \neq 0} \frac{\|Ax\|}{\|x\|}.$$

The proofs of Proposition 2.1.4 and Corollary 2.1.5 are asked for in Exercise 2.2.

## **2.2 Operators in Hilbert Spaces**

Let us now focus on operators on Hilbert spaces. In this setting, we can additionally make use of scalar products ⟨·*,* ·⟩, which in this course are considered to be linear in the second argument (and anti-linear in the first, in the case when K = C).

For a linear operator *A*: dom*(A)* ⊆ *H*<sup>0</sup> → *H*<sup>1</sup> the graph norm of *A* is induced by the scalar product

$$(x, y) \mapsto \langle x, y \rangle + \langle Ax, Ay \rangle,$$

known as the *graph scalar product of A*. If *A* is closed then dom*(A)* (equipped with the graph norm) is a Hilbert space.

Of course, no presentation of operators in Hilbert spaces would be complete without the central notion of the adjoint operator. We wish to pose the adjoint within the relational framework just established. The definition is as follows.

**Definition** For a relation *A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> we define the *adjoint relation A*<sup>∗</sup> by

$$A^\* := -\left(\left(A^{-1}\right)^\perp\right) \subseteq H\_1 \times H\_0,$$

where the orthogonal complement is computed in the direct sum of the Hilbert spaces *H*<sub>1</sub> and *H*<sub>0</sub>; that is, the set *H*<sub>1</sub> × *H*<sub>0</sub> endowed with the scalar product *((x, y), (u, v))* ↦ ⟨*x, u*⟩<sub>*H*<sub>1</sub></sub> + ⟨*y, v*⟩<sub>*H*<sub>0</sub></sub>.

*Remark 2.2.1* Let *A* ⊆ *H*<sup>0</sup> × *H*1. Then we have

$$A^* = \left\{ (u, v) \in H_1 \times H_0 \colon \forall (x, y) \in A : \langle u, y \rangle_{H_1} = \langle v, x \rangle_{H_0} \right\}.$$

In particular, if *A* is a linear operator, we have

$$A^\* = \left\{ (u, v) \in H\_1 \times H\_0; \,\forall \mathbf{x} \in \text{dom}(A) : \langle u, A\mathbf{x} \rangle\_{H\_1} = \langle v, \mathbf{x} \rangle\_{H\_0} \right\}.$$

**Lemma 2.2.2** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a relation. Then A*<sup>∗</sup> *is a linear relation. Moreover, we have*

$$A^\* = -\left(\left(A^\perp\right)^{-1}\right) = \left((-A)^{-1}\right)^\perp = \left(-\left(A^{-1}\right)\right)^\perp = \left((-A)^\perp\right)^{-1} = \left(-\left(A^\perp\right)\right)^{-1}.$$

The proof of this lemma is left as Exercise 2.3.

*Remark 2.2.3* Let *A* ⊆ *H*<sub>0</sub> × *H*<sub>1</sub>. Since *A*<sup>∗</sup> is the orthogonal complement of −*A*<sup>−1</sup>, it follows immediately that *A*<sup>∗</sup> is closed. Moreover, *A*<sup>∗</sup> = $\overline{A}^*$ since *A*<sup>⊥</sup> = $\overline{A}^\perp$.

**Lemma 2.2.4** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a linear relation. Then*

$$A^{\*\*} := (A^\*)^\* = \overline{A}.$$

*Proof* We compute using Lemma 2.2.2

$$A^{\*\*} = \left(\left(-\left(A^\*\right)\right)^{-1}\right)^\perp = \left(\left(-\left(-\left(\left(A^\perp\right)^{-1}\right)\right)\right)^{-1}\right)^\perp = \left(A^\perp\right)^\perp = \overline{A}.\qquad\square$$

**Theorem 2.2.5** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a linear relation. Then*

$$\text{ran}(A)^\perp = \text{ker}(A^\*) \quad \text{and} \quad \overline{\text{ran}}(A^\*) = \text{ker}(\overline{A})^\perp.$$

*Proof* Let *u* ∈ ker*(A*∗*)* and let *y* ∈ ran*(A)*. Then we find *x* ∈ dom*(A)* such that *(x, y)* ∈ *A*. Moreover, note that *(u,* 0*)* ∈ *A*∗. Then, we compute

$$\langle u, y \rangle_{H_1} = \langle 0, x \rangle_{H_0} = 0.$$

This equality shows that ran*(A)*<sup>⊥</sup> ⊇ ker*(A*∗*)*. If on the other hand, *u* ∈ ran*(A)*<sup>⊥</sup> then for all *(x, y)* ∈ *A* we have that

$$0 = \langle u, y \rangle_{H_1},$$

which implies *(u,* 0*)* ∈ *A*<sup>∗</sup> and hence *u* ∈ ker*(A*∗*)*. The remaining equation follows from Lemma 2.2.4 together with the first equation applied to *A*∗.

The following decomposition result is immediate from the latter theorem and will be used frequently throughout the text.

**Corollary 2.2.6** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a closed linear relation. Then*

$$H_1 = \overline{\text{ran}}(A) \oplus \ker(A^*) \quad \text{and} \quad H_0 = \ker(A) \oplus \overline{\text{ran}}(A^*).$$

We will now turn to the case where the adjoint relation is actually a linear operator.

**Lemma 2.2.7** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a linear relation. Then A*<sup>∗</sup> *is a linear operator if and only if A is densely defined. If, in addition, A is a linear operator, then A is closable if and only if A*∗ *is densely defined.*

*Proof* For the first equivalence, it suffices to observe that

$$A^\*[\{0\}] = \text{dom}(A)^\perp. \tag{2.1}$$

Indeed, *A* being densely defined is equivalent to having dom*(A)*<sup>⊥</sup> = {0}. Moreover, *A*<sup>∗</sup> is an operator if and only if *A*<sup>∗</sup>[{0}] = {0}. Next, we show (2.1). For this, apply Theorem 2.2.5 to the linear relation *A*<sup>−1</sup>. One obtains $(\operatorname{ran} A^{-1})^\perp = \ker((A^{-1})^*)$. Hence, $(\operatorname{dom}(A))^\perp = \ker((A^*)^{-1}) = A^*[\{0\}]$, which is (2.1). For the remaining equivalence, we need to characterise $\overline{A}$ being an operator. Using Lemma 2.2.4 and the first equivalence, we deduce that $\overline{A} = (A^*)^*$ is a linear operator if and only if *A*<sup>∗</sup> is densely defined.

*Remark 2.2.8* Note that the statement "*A*∗ is an operator if *A* is densely defined" asserted in Lemma 2.2.7 is also true for *any* relation. For this, it suffices to observe that (2.1) is true for any relation *A* ⊆ *H*<sup>0</sup> × *H*1. Indeed, let *A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> be a relation; define *B* := lin *A*. Then dom*(B)* = lin dom*(A)*. Also, we have

$$A^\* = -(A^\perp)^{-1} = -(B^\perp)^{-1} = B^\*.$$

With these preparations, we can write

$$\text{dom}(A)^\perp = (\text{lin}\,\text{dom}(A))^\perp = \text{dom}(B)^\perp = B^\*[\{0\}] = A^\*[\{0\}],$$

where we used that (2.1) holds for linear relations.

**Lemma 2.2.9** *Let A* ⊆ *H*<sub>0</sub> × *H*<sub>1</sub> *be a linear relation. Then* $\overline{A}$ ∈ *L(H*<sub>0</sub>*, H*<sub>1</sub>*) if and only if A*<sup>∗</sup> ∈ *L(H*<sub>1</sub>*, H*<sub>0</sub>*). In either case,* $\|A^*\| = \|\overline{A}\|$*.*

*Proof* Note that $\overline{A}$ ∈ *L(H*<sub>0</sub>*, H*<sub>1</sub>*)* implies that *A* is closable and densely defined. Thus, by Lemma 2.2.7, *A*<sup>∗</sup> is a densely defined, closed linear operator. For *u* ∈ dom*(A*<sup>∗</sup>*)* we compute using Lemma 2.2.4

$$\left\| A^* u \right\| = \sup_{x \in H_0 \setminus \{0\}} \frac{|\langle A^* u, x \rangle|}{\|x\|} = \sup_{x \in H_0 \setminus \{0\}} \frac{|\langle u, \overline{A} x \rangle|}{\|x\|} \leqslant \left\| \overline{A} \right\| \left\| u \right\|\,,$$

yielding $\|A^*\| \leqslant \|\overline{A}\|$. On the one hand, this implies that *A*<sup>∗</sup> is bounded, and on the other, since *A*<sup>∗</sup> is densely defined we deduce *A*<sup>∗</sup> ∈ *L(H*<sub>1</sub>*, H*<sub>0</sub>*)* by Lemma 2.1.3. The other implication (and the other inequality) follows from the first one applied to *A*<sup>∗</sup> instead of *A* using *A*<sup>∗∗</sup> = $\overline{A}$.

We end this section by defining some special classes of relations and operators.

**Definition** Let *H* be a Hilbert space and *A* ⊆ *H* × *H* a linear relation. We call *A (skew-)Hermitian* if *A* ⊆ *A*<sup>∗</sup> (*A* ⊆ −*A*∗). We say that *A* is *(skew-)symmetric* if *A* is (skew-)Hermitian and densely defined (so that *A*∗ is a linear operator), and *A* is called *(skew-)selfadjoint* if *A* = *A*<sup>∗</sup> (*A* = −*A*∗). Additionally, if *A* is densely defined, then we say that *A* is *normal* if *AA*<sup>∗</sup> = *A*∗*A*.
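
A classical example (standard, not taken from the text) separating symmetric from selfadjoint: on *H* = *L*<sub>2</sub>*(*0*,* 1*)* over K = C, consider *A* with dom*(A)* := *C*<sub>c</sub><sup>∞</sup>*(*0*,* 1*)* and *Af* := i*f*′. Integration by parts gives

$$
\langle u, Af \rangle = \mathrm{i}\int_0^1 \overline{u}\, f' = -\mathrm{i}\int_0^1 \overline{u'}\, f = \int_0^1 \overline{\mathrm{i} u'}\, f = \langle Au, f \rangle \quad (u, f \in C_c^\infty(0,1)),
$$

so *A* is symmetric. However, dom*(A*<sup>∗</sup>*)* contains every continuously differentiable function on [0*,* 1] without any boundary condition, so *A* ⊊ *A*<sup>∗</sup> and *A* is not selfadjoint.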

## **2.3 Computing the Adjoint**

In general it is a very difficult task to compute the adjoint of a given (unbounded) operator. There are, however, cases where the adjoint of a sum or of a product can be computed more readily. We start with the most basic case of bounded linear operators.

**Proposition 2.3.1** *Let A,B* ∈ *L(H*0*, H*1*), C* ∈ *L(H*2*, H*0*). Then (A* + *B)* <sup>∗</sup> = *A*<sup>∗</sup> + *B*<sup>∗</sup> *and (AC)* <sup>∗</sup> = *C*∗*A*∗*.*
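
In the finite-dimensional case the adjoint of a matrix is its conjugate transpose, and Proposition 2.3.1 can be checked numerically. The following sketch (our illustration, assuming NumPy; the dimensions and random matrices are arbitrary choices, not from the text) verifies both identities:

```python
import numpy as np

# Finite-dimensional sanity check of Proposition 2.3.1:
# (A + B)* = A* + B*  and  (AC)* = C* A*,
# where the adjoint of a complex matrix is its conjugate transpose.
# Illustrative choices: H0 = C^2, H1 = C^3, H2 = C^4.
rng = np.random.default_rng(0)

def cmat(m, n):
    """Random complex m-by-n matrix."""
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

A = cmat(3, 2)   # A in L(H0, H1)
B = cmat(3, 2)   # B in L(H0, H1)
C = cmat(2, 4)   # C in L(H2, H0)

adj = lambda M: M.conj().T  # adjoint = conjugate transpose

sum_rule = bool(np.allclose(adj(A + B), adj(A) + adj(B)))
product_rule = bool(np.allclose(adj(A @ C), adj(C) @ adj(A)))
print(sum_rule, product_rule)
```

Both printed values are `True`; note the order reversal in the product rule, mirroring *(AC)*<sup>∗</sup> = *C*<sup>∗</sup>*A*<sup>∗</sup>.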

The latter results are special cases of more general statements to follow.

**Theorem 2.3.2** *Let A,B* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be relations. Then A*<sup>∗</sup> + *B*<sup>∗</sup> ⊆ *(A* + *B)*∗*. If, in addition, B* ∈ *L(H*0*, H*1*), then (A* + *B)*<sup>∗</sup> = *A*<sup>∗</sup> + *B*∗*.*

*Proof* In order to show the claimed inclusion, let *(u, r)* ∈ *A*<sup>∗</sup> + *B*∗. By definition of the sum of relations, we find *v, w* ∈ *H*0, *r* = *v* + *w*, with *(u, v)* ∈ *A*<sup>∗</sup> and *(u, w)* ∈ *B*∗. We compute for all *(x, s)* ∈ *A* + *B*, that is, *(x, y)* ∈ *A* and *(x, z)* ∈ *B* for some *y,z* ∈ *H*<sup>1</sup> with *s* = *y* + *z*

$$\begin{aligned} \langle x, r \rangle_{H_0} &= \langle x, v + w \rangle_{H_0} = \langle x, v \rangle_{H_0} + \langle x, w \rangle_{H_0} \\ &= \langle y, u \rangle_{H_1} + \langle z, u \rangle_{H_1} = \langle y + z, u \rangle_{H_1} = \langle s, u \rangle_{H_1}\,. \end{aligned}$$

This shows the desired inclusion. Next, we assume in addition that *B* ∈ *L(H*0*, H*1*)*. For the equality, it remains to show that *(A*+*B)*<sup>∗</sup> ⊆ *A*<sup>∗</sup> +*B*∗, which in conjunction with the above follows if dom*((A*+*B)*∗*)* ⊆ dom*(A*∗+*B*∗*)* = dom*(A*∗*)*∩dom*(B*∗*)*. By Lemma 2.2.9, we have dom*(B*∗*)* = *H*1. Hence, it suffices to show that dom*((A*+ *B)*∗*)* ⊆ dom*(A*∗*)*. For this, let *(u, v)* ∈ *(A* + *B)* <sup>∗</sup>. Then we compute for all *(x, y)* ∈ *A* using Lemma 2.2.9 again

$$\langle x, v \rangle_{H_0} = \langle y + Bx, u \rangle_{H_1} = \langle y, u \rangle_{H_1} + \langle x, B^* u \rangle_{H_0}.$$

Thus, ⟨*x, v* − *B*<sup>∗</sup>*u*⟩<sub>*H*<sub>0</sub></sub> = ⟨*y, u*⟩<sub>*H*<sub>1</sub></sub>, which yields *(u, v* − *B*<sup>∗</sup>*u)* ∈ *A*<sup>∗</sup>; whence, *u* ∈ dom*(A*<sup>∗</sup>*)* as desired.

**Corollary 2.3.3** *Let A* ⊆ *H*<sup>0</sup> × *H*1*, B* ∈ *L(H*0*, H*1*). If A is densely defined, then A*<sup>∗</sup> + *B*<sup>∗</sup> *is an operator and (A* + *B)*<sup>∗</sup> = *A*<sup>∗</sup> + *B*∗*.*

**Theorem 2.3.4** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *and C* ⊆ *H*<sup>2</sup> × *H*0*. Then C*∗*A*<sup>∗</sup> ⊆ *(AC)*∗*. If, in addition, A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *is closed and linear as well as C* ∈ *L(H*2*, H*0*), then (AC)* <sup>∗</sup> = *C*∗*A*∗*.*

*Proof* For the first inclusion, let *(u, w)* ∈ *C*∗*A*∗. Thus, we find *v* ∈ *H*<sup>0</sup> such that *(u, v)* ∈ *A*<sup>∗</sup> and *(v, w)* ∈ *C*∗. Next, let *(r, y)* ∈ *AC*. Then we find *x* ∈ *H*<sup>0</sup> such that *(r, x)* ∈ *C* and *(x, y)* ∈ *A*. We compute

$$\langle y, u \rangle_{H_1} = \langle x, v \rangle_{H_0} = \langle r, w \rangle_{H_2}\,.$$

Since *(r, y)* ∈ *AC* was chosen arbitrarily, we infer *C*<sup>∗</sup>*A*<sup>∗</sup> ⊆ *(AC)*<sup>∗</sup>. As every adjoint is closed, we even obtain $\overline{C^* A^*} \subseteq (AC)^*$.

Next, we assume that *A* is closed and linear as well as that *C* is bounded and linear. Then, by what we have just shown, we obtain *AC* ⊆ *(C*∗*A*∗*)* ∗. Next, let *(w, y)* ∈ *(C*∗*A*∗*)* <sup>∗</sup>. Then for all *(u, v)* ∈ *A*<sup>∗</sup> and *z* = *C*∗*v* we obtain

$$\langle \boldsymbol{u}, \boldsymbol{y} \rangle\_{H\_1} = \langle \boldsymbol{z}, \boldsymbol{w} \rangle\_{H\_2} = \langle \boldsymbol{C}^\* \boldsymbol{v}, \boldsymbol{w} \rangle\_{H\_2} = \langle \boldsymbol{v}, \boldsymbol{C} \boldsymbol{w} \rangle\_{H\_0}.$$

Thus, we obtain $(Cw, y) \in A^{**} = \overline{A} = A$, and therefore $(w, y) \in AC$. Hence,

$$AC = \left(C^\* A^\*\right)^\*,$$

which yields the assertion by adjoining this equation.

**Corollary 2.3.5** *Let* $A \subseteq H_0 \times H_1$ *be a linear relation and* $C \in L(H_2, H_0)$*. Then* $\left(\overline{A}C\right)^* = C^*A^*$*.*

*Proof* The result follows upon realising that $A^* = A^{***} = \left(\overline{A}\right)^*$.

**Corollary 2.3.6** *Let* $A \subseteq H_0 \times H_1$ *be a linear relation and* $C \in L(H_2, H_0)$*. If* $AC$ *is densely defined, then* $C^*A^*$ *is a closable linear operator with* $\overline{C^*A^*} = (AC)^*$*.*

*Remark 2.3.7* Let us comment on the equalities in the previous statements.


We have already seen that $A^* = \left(\overline{A}\right)^*$. We can even restrict *A* to a core and still obtain the same adjoint.

**Proposition 2.3.8** *Let A* ⊆ *H*<sup>0</sup> × *H*<sup>1</sup> *be a linear relation, D* ⊆ dom*(A) a linear subspace. Then D is a core for A if and only if (A* ∩ *(D* × *H*1*))* <sup>∗</sup> = *A*∗*.*

*Proof* We set *A*|*<sup>D</sup>* := *A* ∩ *(D* × *H*1*)*. Then

$$D \text{ is a core for } A \iff \overline{A|_D} = \overline{A} \iff \left(\overline{A|_D}\right)^{\perp} = \left(\overline{A}\right)^{\perp} \iff \left(A|_D\right)^{\perp} = A^{\perp} \iff \left(A|_D\right)^{*} = A^{*}.$$

## **2.4 The Spectrum and Resolvent Set**

In this section, we focus on operators acting on a single Banach space. As such, throughout this section let *X* be a Banach space over $\mathbb{K} \in \{\mathbb{R}, \mathbb{C}\}$ and let $A \colon \operatorname{dom}(A) \subseteq X \to X$ be a closed linear operator.

**Definition** The set

$$\rho(A) := \left\{ \lambda \in \mathbb{K} \; ; \; (\lambda - A)^{-1} \in L(X) \right\}$$

is called the *resolvent set* of *A*. We define

$$\sigma(A) := \mathbb{K} \backslash \rho(A)$$

to be the *spectrum* of *A*.

We state and prove some elementary properties of the spectrum and the resolvent set. We shall see natural examples for *A* which satisfy $\sigma(A) = \mathbb{K}$ or $\sigma(A) = \emptyset$ later on.

For a metric space *(X, d)*, we will write $B(x,r) = \{ y \in X \;;\; d(x,y) < r \}$ for the open ball around *x* of radius *r* and $B[x,r] = \{ y \in X \;;\; d(x,y) \leqslant r \}$ for the closed ball.

**Proposition 2.4.1** *If λ,μ* ∈ *ρ(A), then the* resolvent identity *holds. That is*

$$(\lambda - A)^{-1} - (\mu - A)^{-1} = (\mu - \lambda) \left(\lambda - A\right)^{-1} (\mu - A)^{-1}.$$

*Moreover, the set* $\rho(A)$ *is open. More precisely, if* $\lambda \in \rho(A)$ *then* $B\left(\lambda, 1/\left\|(\lambda - A)^{-1}\right\|\right) \subseteq \rho(A)$ *and for* $\mu \in B\left(\lambda, 1/\left\|(\lambda - A)^{-1}\right\|\right)$ *we have*

$$(\mu - A)^{-1} = \sum\_{k=0}^{\infty} (\lambda - \mu)^k \left( (\lambda - A)^{-1} \right)^{k+1}$$

*as well as*

$$\left\|(\mu - A)^{-1}\right\| \leqslant \frac{\left\|(\lambda - A)^{-1}\right\|}{1 - |\lambda - \mu| \left\|(\lambda - A)^{-1}\right\|}.$$

*The mapping* $\rho(A) \ni \lambda \mapsto (\lambda - A)^{-1} \in L(X)$ *is analytic.*

*Proof* For the first assertion, we let *λ,μ* ∈ *ρ(A)* and compute

$$(\lambda - A)^{-1} - (\mu - A)^{-1} = (\lambda - A)^{-1} \left( (\mu - A) - (\lambda - A) \right) (\mu - A)^{-1}$$

$$= (\lambda - A)^{-1} (\mu - \lambda) (\mu - A)^{-1}$$

$$= (\mu - \lambda) (\lambda - A)^{-1} (\mu - A)^{-1}.$$

Next, let $\lambda \in \rho(A)$ and $\mu \in B\left(\lambda, 1/\left\|(\lambda - A)^{-1}\right\|\right)$. Then

$$\left\|(\lambda - \mu)(\lambda - A)^{-1}\right\| < 1.$$

Hence, $1 - (\lambda - \mu)(\lambda - A)^{-1}$ admits an inverse in *L(X)* satisfying

$$\left(1 - (\lambda - \mu)(\lambda - A)^{-1}\right)^{-1} = \sum\_{k=0}^{\infty} \left((\lambda - \mu)(\lambda - A)^{-1}\right)^k. \tag{2.2}$$

We claim that *μ* ∈ *ρ(A)*. For this, we compute

$$
\mu - A = \lambda - A - (\lambda - \mu) = (\lambda - A) \left( 1 - (\lambda - \mu)(\lambda - A)^{-1} \right).
$$

Since $1 - (\lambda - \mu)(\lambda - A)^{-1}$ is an isomorphism in *L(X)*, we deduce that the right-hand side admits a continuous inverse if and only if the left-hand side does. As $\lambda \in \rho(A)$, we thus infer $\mu \in \rho(A)$. The estimate follows from (2.2). Indeed, we have

$$\begin{aligned} \left\|(\mu - A)^{-1}\right\| &\leqslant \left\|(\lambda - A)^{-1}\right\| \left\|\sum\_{k=0}^{\infty} \left((\lambda - \mu)(\lambda - A)^{-1}\right)^{k} \right\| \\ &\leqslant \left\|(\lambda - A)^{-1}\right\| \sum\_{k=0}^{\infty} \left\|(\lambda - \mu)(\lambda - A)^{-1}\right\|^{k} = \frac{\left\|(\lambda - A)^{-1}\right\|}{1 - \left\|(\lambda - \mu)(\lambda - A)^{-1}\right\|}. \end{aligned}$$

For the final claim of the present proposition, we observe that

$$(\mu - A)^{-1} = \left(1 - (\lambda - \mu)(\lambda - A)^{-1}\right)^{-1}(\lambda - A)^{-1}$$

$$= \sum\_{k=0}^{\infty} (\lambda - \mu)^k \left((\lambda - A)^{-1}\right)^{k+1},$$

which is an operator norm convergent power series expression for the resolvent at *μ* about *λ*. Thus, analyticity follows.
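The resolvent identity and the Neumann series expansion can be checked numerically in the finite-dimensional case, where every matrix defines a closed operator and the resolvent set is the complement of the eigenvalues. The following sketch (assuming NumPy; the matrix `A` and the points `lam`, `mu` are hypothetical choices, not from the text) verifies both formulas for a 2×2 matrix.

```python
import numpy as np

# A bounded operator on X = R^2, represented as a matrix.
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])   # eigenvalues 1 and 2
I = np.eye(2)

def resolvent(lam):
    """(lam - A)^{-1}, defined for lam in the resolvent set."""
    return np.linalg.inv(lam * I - A)

lam, mu = 5.0, 5.2   # both outside the spectrum {1, 2}

# Resolvent identity: R(lam) - R(mu) = (mu - lam) R(lam) R(mu).
lhs = resolvent(lam) - resolvent(mu)
rhs = (mu - lam) * resolvent(lam) @ resolvent(mu)
assert np.allclose(lhs, rhs)

# Neumann series: R(mu) = sum_k (lam - mu)^k R(lam)^{k+1},
# valid here since |lam - mu| * ||R(lam)|| < 1.
assert abs(lam - mu) * np.linalg.norm(resolvent(lam), 2) < 1
series = sum((lam - mu) ** k * np.linalg.matrix_power(resolvent(lam), k + 1)
             for k in range(60))
assert np.allclose(series, resolvent(mu))
```

The partial sums of the series converge geometrically with ratio $\left\|(\lambda-\mu)(\lambda-A)^{-1}\right\|$, exactly as in the estimate of the proposition.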

For a given measure space $(\Omega, \Sigma, \mu)$ we shall consider multiplication operators in $L_2(\mu)$ next. For a measurable function $V \colon \Omega \to \mathbb{R}$ we will use the notation $[V \leqslant c] := V^{-1}[(-\infty, c]]$ for a constant $c \in \mathbb{R}$ (and similarly for other relational symbols).

*Remark 2.4.2* Before we turn to more general multiplication operators, we motivate our notation for them by illustrating the example case of multiplication operators in $L_2(\mathbb{R})$. A multiplication operator that immediately comes to mind is the so-called multiplication-by-the-argument operator on $L_2(\mathbb{R})$, which we shall denote by m. Expressed differently, let

$$\mathrm{m} \colon \operatorname{dom}(\mathrm{m}) \subseteq L_2(\mathbb{R}) \to L_2(\mathbb{R}), \quad f \mapsto \left(x \mapsto x f(x)\right),$$

where $\operatorname{dom}(\mathrm{m})$ consists of all those $L_2(\mathbb{R})$-functions *f* such that $(x \mapsto x f(x)) \in L_2(\mathbb{R})$. Being a multiplication operator, m admits what is called a 'functional calculus': It is possible to define functions of m, which will turn out to be operators themselves. Thus, if $V \colon \mathbb{R} \to \mathbb{C}$ is measurable, we can define $V(\mathrm{m})$ to denote an operator in $L_2(\mathbb{R})$ acting as follows

$$(V(\mathbf{m})f)(\mathbf{x}) := V(\mathbf{x})f(\mathbf{x})$$

for suitable *f* . To apply *V* to m turns out to be the same as the operator of multiplication by *V* . This correspondence serves to justify the notation of multiplication operators acting on *L*2*(μ)* for some measure space *(,,μ)*. We will re-use the notation *V (*m*)* to denote the operator of multiplication-by-*V*, even in cases where there is no well-defined multiplication-by-argument-operator m in *L*2*(μ)*.
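On a finite set with counting measure, $L_2(\mu)$ is just $\mathbb{K}^n$ and the picture above becomes concrete: m is the diagonal matrix of the grid points and $V(\mathrm{m})$ the diagonal matrix with entries $V(x)$. A minimal sketch, assuming NumPy (the grid `x` and the function `V` are illustrative choices):

```python
import numpy as np

# Discrete sketch of the functional calculus V(m): on L2 of a finite
# grid with counting measure, m is the diagonal matrix of grid points
# and V(m) is the diagonal matrix with entries V(x).
x = np.linspace(-1.0, 1.0, 5)          # grid points (stand-in for the argument)
m = np.diag(x)                          # multiplication by the argument
V = lambda t: t ** 2 + 1.0              # a measurable function V

Vm = np.diag(V(x))                      # the operator V(m)
f = np.array([1.0, 2.0, 0.0, -1.0, 3.0])

# (V(m) f)(x) = V(x) f(x): applying V(m) is pointwise multiplication by V.
assert np.allclose(Vm @ f, V(x) * f)

# For a polynomial V, V(m) coincides with V applied to the matrix m.
assert np.allclose(Vm, m @ m + np.eye(5))
```

The last assertion is the point of the notation: applying *V* to m agrees with the operator of multiplication by *V*.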

**Theorem 2.4.3** *Let* $(\Omega, \Sigma, \mu)$ *be a measure space and* $V \colon \Omega \to \mathbb{K}$ *a measurable function. Then the operator*

$$V(\mathfrak{m}) \colon \operatorname{dom}(V(\mathfrak{m})) \subseteq L\_2(\mu) \to L\_2(\mu)$$

$$f \mapsto \left(\omega \mapsto V(\omega) f(\omega)\right),$$

*with* $\operatorname{dom}(V(\mathrm{m})) := \left\{ f \in L_2(\mu) \;;\; \left(\omega \mapsto V(\omega)f(\omega)\right) \in L_2(\mu) \right\}$ *satisfies the following properties:*


$$\frac{1}{V}(\omega) := \begin{cases} \frac{1}{V(\omega)}, & V(\omega) \neq 0, \\ 0, & V(\omega) = 0, \end{cases}$$

*for all ω* ∈ *.*

*Proof* For the whole proof we let $\Omega_n := [|V| \leqslant n]$ and put $\mathbb{1}_n := \mathbb{1}_{\Omega_n}$.

(a) We first show that $V(\mathrm{m})$ is densely defined. Let $f \in L_2(\mu)$. Then, we have for all $n \in \mathbb{N}$ that $\mathbb{1}_n f \in \operatorname{dom}(V(\mathrm{m}))$. From $\Omega = \bigcup_n \Omega_n$ and $\Omega_n \subseteq \Omega_{n+1}$ it follows that $\mathbb{1}_n f \to f$ in $L_2(\mu)$ as $n \to \infty$.

Next, we confirm that $V(\mathrm{m})$ is closed. Let $(f_k)_k$ in $\operatorname{dom}(V(\mathrm{m}))$ be convergent in $L_2(\mu)$ such that $(V(\mathrm{m})f_k)_k$ is convergent in $L_2(\mu)$ as well. Denote the respective limits by *f* and *g*. It is clear that for all $n \in \mathbb{N}$ we have $\mathbb{1}_n f_k \to \mathbb{1}_n f$ as $k \to \infty$. Also, we have

$$\mathbb{1}\_n\text{g} = \lim\_{k \to \infty} \mathbb{1}\_n V(\mathbf{m}) f\_k = \lim\_{k \to \infty} V(\mathbf{m}) (\mathbb{1}\_n f\_k) = V(\mathbf{m}) (\mathbb{1}\_n f) = \mathbb{1}\_n V f.$$

Hence, *g* = *Vf μ*-almost everywhere and since *g* ∈ *L*2*(μ)*, we have that *f* ∈ dom*(V (*m*))*.

(b) It is easy to see that $V^*(\mathrm{m}) \subseteq V(\mathrm{m})^*$. For the other inclusion, we let $u \in \operatorname{dom}(V(\mathrm{m})^*)$. Then, for all $f \in L_2(\mu)$ and $n \in \mathbb{N}$ we have $\mathbb{1}_n f \in \operatorname{dom}(V(\mathrm{m}))$ and, hence,

$$\begin{aligned} \langle f, \mathbb{1}\_n V^\* u \rangle &= \int\_{\Omega\_n} f^\* V^\* u \, \mathrm{d}\mu = \langle V(\mathrm{m})(\mathbb{1}\_n f), u \rangle = \langle \mathbb{1}\_n f, V(\mathrm{m})^\* u \rangle \\ &= \langle f, \mathbb{1}\_n V(\mathrm{m})^\* u \rangle \,. \end{aligned}$$

It follows that $\mathbb{1}_n V^* u = \mathbb{1}_n V(\mathrm{m})^* u$ for all $n \in \mathbb{N}$. Thus, $\Omega = \bigcup_n \Omega_n$ implies $V^* u = V(\mathrm{m})^* u$ and therefore $u \in \operatorname{dom}(V^*(\mathrm{m}))$ and $V^*(\mathrm{m})u = V(\mathrm{m})^* u$.


The spectrum of *V (*m*)* from the latter example can be computed once we consider a less general class of measure spaces. We provide a characterisation of these measure spaces first.

**Proposition 2.4.4** *Let* $(\Omega, \Sigma, \mu)$ *be a measure space. Then the following statements are equivalent:*
(i) $(\Omega, \Sigma, \mu)$ *is semi-finite; that is, for every* $A \in \Sigma$ *with* $\mu(A) = \infty$ *there exists a measurable* $B \subseteq A$ *with* $0 < \mu(B) < \infty$*.*
(ii) *For all bounded and measurable* $V \colon \Omega \to \mathbb{K}$ *we have* $\|V(\mathrm{m})\|_{L(L_2(\mu))} = \|V\|_{L_\infty(\mu)}$*.*


*Proof* (i)⇒(ii): Let $\varepsilon > 0$ and $A_\varepsilon := \left[|V| \geqslant \|V(\mathrm{m})\|_{L(L_2(\mu))} + \varepsilon\right]$. Assume that $\mu(A_\varepsilon) > 0$. Since $(\Omega, \Sigma, \mu)$ is semi-finite we find a measurable $B_\varepsilon \subseteq A_\varepsilon$ such that $0 < \mu(B_\varepsilon) < \infty$. Define $f := \mu(B_\varepsilon)^{-1/2}\mathbb{1}_{B_\varepsilon} \in L_2(\mu)$ with $\|f\|_{L_2(\mu)} = 1$. Consequently, we obtain

$$\|V(\mathrm{m})\|_{L(L_2(\mu))} \geqslant \|V(\mathrm{m})f\|_{L_2(\mu)} \geqslant \|V(\mathrm{m})\|_{L(L_2(\mu))} + \varepsilon,$$

which is a contradiction. Hence $\mu(A_\varepsilon) = 0$ for every $\varepsilon > 0$, so $\|V\|_{L_\infty(\mu)} \leqslant \|V(\mathrm{m})\|_{L(L_2(\mu))}$; since the reverse inequality always holds, (ii) follows.

(ii)⇒(i): Assume that $(\Omega, \Sigma, \mu)$ is not semi-finite. Then we find $A \in \Sigma$ with $\mu(A) = \infty$ such that for each measurable $B \subseteq A$ we have $\mu(B) \in \{0, \infty\}$. Then $V := \mathbb{1}_A$ is bounded and measurable with $\|V\|_{L_\infty(\mu)} = 1$. However, $V(\mathrm{m}) = 0$. Indeed, if $f \in L_2(\mu)$ then $[f \neq 0] = \bigcup_{n \in \mathbb{N}} [|f|^2 \geqslant n^{-1}]$. Thus,

$$[V(\mathrm{m})f \neq 0] = [f \neq 0] \cap A = \bigcup_{n \in \mathbb{N}} \left[|f|^2 \geqslant n^{-1}\right] \cap A.$$

Since $\mu\left(\left[|f|^2 \geqslant n^{-1}\right]\right) < \infty$ as $f \in L_2(\mu)$, we infer $\mu\left(\left[|f|^2 \geqslant n^{-1}\right] \cap A\right) = 0$ by the property assumed for *A*. Thus, $\mu([V(\mathrm{m})f \neq 0]) = 0$, implying $V(\mathrm{m}) = 0$. Hence, $\|V(\mathrm{m})\|_{L(L_2(\mu))} = 0 < 1 = \|V\|_{L_\infty(\mu)}$.

*Remark 2.4.5* Any *σ*-finite measure space is semi-finite. Indeed, let $(\Omega, \Sigma, \mu)$ be *σ*-finite and $A \in \Sigma$ with $\mu(A) = \infty$. We find a sequence $(G_n)_n$ of pairwise disjoint, measurable sets with finite measure satisfying $\bigcup_n G_n = \Omega$. Hence, $\mu(G_n \cap A) \leqslant \mu(G_n) < \infty$. If $\mu(G_n \cap A) = 0$ for all *n*, then $\mu(A) = 0$ by the *σ*-additivity of *μ*. Thus, as $\mu(A) \neq 0$, we find *n* such that $0 < \mu(G_n \cap A) < \infty$, and $(\Omega, \Sigma, \mu)$ is semi-finite.

A straightforward consequence of Theorem 2.4.3 (c) and Proposition 2.4.4 is the following.

**Proposition 2.4.6** *Let* $(\Omega, \Sigma, \mu)$ *be a semi-finite measure space and* $V \colon \Omega \to \mathbb{K}$ *measurable and bounded. Then* $\|V(\mathrm{m})\|_{L(L_2(\mu))} = \|V\|_{L_\infty(\mu)}$*.*

**Theorem 2.4.7** *Let* $(\Omega, \Sigma, \mu)$ *be a semi-finite measure space and let* $V \colon \Omega \to \mathbb{K}$ *be measurable. Then*

$$\sigma\left(V(\mathfrak{m})\right) = \text{ess-ran}\,V := \{\lambda \in \mathbb{K} \; ; \; \forall \varepsilon > 0 \colon \mu\left([|\lambda - V| < \varepsilon]\right) > 0\}\,.$$

*Proof* Let *<sup>λ</sup>* <sup>∈</sup> ess-ran *<sup>V</sup>* . For all *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> we find *Bn* <sup>∈</sup> with non-zero, but finite measure such that *Bn* ⊆ <sup>|</sup>*<sup>λ</sup>* <sup>−</sup> *<sup>V</sup>* <sup>|</sup> *<sup>&</sup>lt;* <sup>1</sup> *n .* We define *fn* := ! 1 *μ(Bn)*1*Bn* <sup>∈</sup> *<sup>L</sup>*2*(μ)*. Then *fn <sup>L</sup>*2*(μ)* = 1 and

$$|V(\omega)f_n(\omega)| \leqslant |V(\omega) - \lambda|\,|f_n(\omega)| + |\lambda|\,|f_n(\omega)| \leqslant \left(\frac{1}{n} + |\lambda|\right)|f_n(\omega)|$$

for $\omega \in \Omega$, which shows that $(f_n)_n$ is in $\operatorname{dom}(V(\mathrm{m}))$. A similar estimate, on the other hand, shows that

$$\|(V(\mathbf{m}) - \lambda) \, f\_n\|\_{L\_2(\mu)} \to 0 \quad (n \to \infty).$$

Thus, $(V(\mathrm{m}) - \lambda)^{-1}$ cannot be continuous as $\|f_n\|_{L_2(\mu)} = 1$ for all $n \in \mathbb{N}$; hence, $\lambda \in \sigma(V(\mathrm{m}))$.

Let now *<sup>λ</sup>* <sup>∈</sup> <sup>K</sup>\ess-ran *<sup>V</sup>* . Then there exists *ε >* 0 such that *<sup>N</sup>* := [|*<sup>λ</sup>* <sup>−</sup> *<sup>V</sup>* <sup>|</sup> *< ε*] is a *μ*-nullset. In particular, *λ* − *V* = 0 *μ*-a.e. Hence, *(λ* − *V (*m*))* <sup>−</sup><sup>1</sup> <sup>=</sup> <sup>1</sup> *<sup>λ</sup>*−*<sup>V</sup> (*m*)* is a linear operator. Since, 1 *λ*−*V* <sup>1</sup>*/ε μ*-almost everywhere, we deduce that *(λ* − *V (*m*))* <sup>−</sup><sup>1</sup> <sup>∈</sup> *L(L*2*(μ))* and hence, *<sup>λ</sup>* <sup>∈</sup> *ρ(V (*m*))*.

We conclude this chapter by sketching that multiplication operators as discussed in Theorem 2.4.3, Propositions 2.4.4 and 2.4.6, and Theorem 2.4.7 are *the* prototypical example for normal operators. In fact, it can be shown that normal operators are unitarily equivalent to multiplication operators on some $L_2(\mu)$. This fact is also known as the 'spectral theorem'. It is also important to note that, as we have seen in Theorem 2.4.3, a multiplication operator $V(\mathrm{m})$ in $L_2(\mu)$ is self-adjoint if and only if *V* attains real values *μ*-almost everywhere.
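In finite dimensions the spectral theorem reduces to diagonalisation: a self-adjoint matrix is unitarily equivalent to multiplication by its (real) eigenvalue sequence on $L_2$ of a counting measure. A sketch, assuming NumPy (the random test matrix is an illustrative choice):

```python
import numpy as np

# Finite-dimensional sketch of the spectral theorem: a self-adjoint
# matrix A is unitarily equivalent to a multiplication operator, namely
# multiplication by its eigenvalue sequence on L2 of counting measure.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                              # self-adjoint

eigvals, U = np.linalg.eigh(A)           # A = U diag(eigvals) U^*
Vm = np.diag(eigvals)                    # the multiplication operator V(m)
assert np.allclose(U @ Vm @ U.conj().T, A)

# Self-adjointness corresponds to V being real-valued.
assert np.allclose(eigvals.imag, 0.0)
```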

## **2.5 Comments**

The material presented in this chapter is basic textbook knowledge. We shall thus refer to the monographs [54, 139]. Note that spectral theory for self-adjoint operators is a classical topic in functional analysis. For a glimpse of further theory of linear relations we exemplarily refer to [7, 14, 25]. The restriction in Proposition 2.4.6 and Theorem 2.4.7 to semi-finite measure spaces is not very severe. In fact, if $(\Omega, \Sigma, \mu)$ is not semi-finite, it is possible to construct a semi-finite measure space $(\Omega_{\mathrm{loc}}, \Sigma_{\mathrm{loc}}, \mu_{\mathrm{loc}})$ such that $L_p(\mu)$ is isometrically isomorphic to $L_p(\mu_{\mathrm{loc}})$, see [129, Section 2].

## **Exercises**

**Exercise 2.1** Let *A* ⊆ *X*<sup>0</sup> × *X*<sup>1</sup> be an unbounded linear operator. Show that for every linear operator *B* ⊆ *X*<sup>0</sup> × *X*<sup>1</sup> with *B* ⊇ *A* and dom*(B)* = *X*0, we have that *B* is not closed.

**Exercise 2.2** Prove Proposition 2.1.4 and Corollary 2.1.5. Hint: One might use that bounded linear relations are always operators.

**Exercise 2.3** Prove Lemma 2.2.2.

**Exercise 2.4** Let $A \colon \operatorname{dom}(A) \subseteq H_0 \to H_0$ be a closed and densely defined linear operator. Show that for all $\lambda \in \mathbb{K}$ we have

$$
\lambda \in \rho(A) \iff \lambda^\* \in \rho(A^\*).
$$

**Exercise 2.5** Let $U \subseteq H_0 \times H_1$ satisfy $U^{-1} = U^*$. Show that $U \in L(H_0, H_1)$ and that *U* is *unitary*, that is, *U* is onto and for all $x \in H_0$ we have $\|Ux\|_{H_1} = \|x\|_{H_0}$.

**Exercise 2.6** Let $\delta \colon C[0,1] \subseteq L_2(0,1) \to \mathbb{K}$, $f \mapsto f(0)$, where $C[0,1]$ denotes the set of $\mathbb{K}$-valued continuous functions on $[0,1]$. Show that $\delta$ is not closable. Compute $\overline{\delta}$.

**Exercise 2.7** Let $C \subseteq \mathbb{C}$ be closed. Provide a Hilbert space *H* and a densely defined closed linear operator *A* on *H* such that $\sigma(A) = C$.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 3 The Time Derivative**

It is the aim of this chapter to define a derivative operator on a suitable *L*2-space, which will be used as the derivative with respect to the temporal variable in our applications. As we want to deal with Hilbert space-valued functions, we start by introducing the concept of Bochner–Lebesgue spaces, which generalises the classical scalar-valued *Lp*-spaces to the Banach space-valued case.

## **3.1 Bochner–Lebesgue Spaces**

Throughout, let $(\Omega, \Sigma, \mu)$ be a *σ*-finite measure space and *X* a Banach space over the field $\mathbb{K} \in \{\mathbb{R}, \mathbb{C}\}$. We are aiming to define the spaces $L_p(\mu; X)$ for $1 \leqslant p \leqslant \infty$. This is the space of (equivalence classes of) measurable functions attaining values in *X* which are *p*-integrable (if $p < \infty$), or essentially bounded (if $p = \infty$) with respect to the measure *μ*. We begin by defining the space of simple functions on $\Omega$ with values in *X* and the notion of Bochner-measurability.

**Definition** For a function $f \colon \Omega \to X$ and $x \in X$ we set

$$A_{f,x} := f^{-1}[\{x\}].$$

A function $f \colon \Omega \to X$ is called *simple* if $f[\Omega]$ is finite and for each $x \in X \setminus \{0\}$ the set $A_{f,x}$ belongs to $\Sigma$ and has finite measure. We denote the set of simple functions by $S(\mu; X)$. A function $f \colon \Omega \to X$ is called *Bochner-measurable* if there exists a sequence $(f_n)_{n \in \mathbb{N}}$ in $S(\mu; X)$ such that

$$f_n(\omega) \to f(\omega) \quad (n \to \infty)$$

for *μ*-a.e. $\omega \in \Omega$.

*Remark 3.1.1* Let us comment on the definition of Bochner-measurability.

(a) For a simple function *f* we have

$$f = \sum\_{x \in X} x \cdot \mathbf{1}\_{A\_{f,x}},$$

where the sum is actually finite, since $\mathbb{1}_{A_{f,x}} = 0$ for all $x \notin f[\Omega]$.

(b) If $X = \mathbb{K}$, then a function is Bochner-measurable if and only if it has a *μ*-measurable representative. Indeed, if *f* is Bochner-measurable, we find a sequence $(f_n)_n$ in $S(\mu; \mathbb{K})$ such that $f_n \to f$ pointwise *μ*-a.e. Hence, we find a *μ*-nullset $N \in \Sigma$ such that $g_n := \mathbb{1}_{\Omega \setminus N} f_n \to \mathbb{1}_{\Omega \setminus N} f =: g$ pointwise on all of $\Omega$. Since $g_n$ is *μ*-measurable and *μ*-measurable functions are stable under pointwise limits, *g* is *μ*-measurable itself. Since $f = g$ except for a *μ*-nullset, *f* has a *μ*-measurable representative. If, on the other hand, *f* has a *μ*-measurable representative, let *g* be this representative. Approximating real and imaginary parts separately, it suffices to treat the case $\mathbb{K} = \mathbb{R}$. Then consider for $n \in \mathbb{N}$

$$s\_n := \sum\_{k \in \mathbb{Z}} \frac{k+1}{n} \mathbf{1}\_{M\_n^k},$$

where $M_n^k := g^{-1}\left[\left(\frac{k}{n}, \frac{k+1}{n}\right]\right]$. It is easy to see that $\sup_{\omega \in \Omega} |s_n(\omega) - g(\omega)| \leqslant 1/n$ for all $n \in \mathbb{N}$. Hence,

$$\widetilde{s}_n := \sum_{k \in \mathbb{Z},\, |k| \leqslant 2^n} \frac{k+1}{n} \mathbb{1}_{M_n^k} \in S(\mu; \mathbb{R}),$$

converges pointwise everywhere to *g*. In consequence, *f* is Bochner-measurable.


(d) If *f* is Bochner-measurable, then $\|f(\cdot)\|_X$ is Bochner-measurable as well. Indeed, given

$$\|f(\cdot)\|_{X} = \lim_{n \to \infty} \|f_n(\cdot)\|_{X}$$

*μ*-a.e. for a sequence $(f_n)_{n \in \mathbb{N}}$ in $S(\mu; X)$, it suffices to show that $\|f_n(\cdot)\|_X$ is simple for all $n \in \mathbb{N}$. The latter follows since $A_{f_n,x} \cap A_{f_n,y} = \emptyset$ for $x \neq y$ and thus

$$\|f\_n(\cdot)\|\_X = \sum\_{\boldsymbol{\chi}\in f\_n[\Omega]} \|\boldsymbol{\chi}\|\_{\boldsymbol{X}} \cdot \mathbb{1}\_{A\_{f\_n,\boldsymbol{\chi}}}$$

is a real-valued simple function.

(e) If one deals with arbitrary measure spaces, the definition of simple functions has to be weakened by allowing the sets *Af,x* to have infinite measure. However, since in the applications to follow we only work with weighted Lebesgue measures, we restrict ourselves to *σ*-finite measure spaces.

**Definition (Bochner–Lebesgue Spaces)** For *p* ∈ [1*,*∞] we define

$$\mathcal{L}_p(\mu; X) := \left\{ f \colon \Omega \to X \;;\; f \text{ Bochner-measurable},\ \|f(\cdot)\|_X \in \mathcal{L}_p(\mu) \right\},$$

as well as

$$L_p(\mu; X) := \mathcal{L}_p(\mu; X) / \sim,$$

where ∼ denotes the usual equivalence relation of equality *μ*-almost everywhere. We equip *Lp(μ*; *X)* with the norm

$$\|f\|\_{p} := \begin{cases} \left(\int\_{\Omega} \|f(\omega)\|\_{X}^{p} \, \mathrm{d}\mu(\omega)\right)^{\frac{1}{p}}, & \text{if } p < \infty, \\ \mathrm{ess-sup}\_{\omega \in \Omega} \, \|f(\omega)\|\_{X}, & \text{if } p = \infty \end{cases} \quad (f \in L\_{p}(\mu; X)).$$

We first prove a density result.

**Lemma 3.1.2** *The space S(μ*; *X) is dense in Lp(μ*; *X) for p* ∈ [1*,*∞*).*

*Proof* Let $f \in L_p(\mu; X)$. Then there exists a sequence $(f_n)_{n \in \mathbb{N}}$ in $S(\mu; X)$ such that $f_n(\omega) \to f(\omega)$ for all $\omega \in \Omega \setminus N$ for some nullset $N \subseteq \Omega$. W.l.o.g. we may assume that $\|f_n(\cdot)\|_X$ and $\|f(\cdot)\|_X$ are *μ*-measurable on $\Omega \setminus N$ for each $n \in \mathbb{N}$. For $n \in \mathbb{N}$ we define the set

$$I_n := \left\{ \omega \in \Omega \setminus N \;;\; \|f_n(\omega)\|_X \leqslant 2\,\|f(\omega)\|_X \right\} \in \Sigma,$$

and set $\widetilde{f}_n := f_n \mathbb{1}_{I_n}$. Then $\widetilde{f}_n \in S(\mu; X)$ and we claim that $\widetilde{f}_n(\omega) \to f(\omega)$ for all $\omega \in \Omega \setminus N$. Indeed, if $f(\omega) = 0$ then $\widetilde{f}_n(\omega) = 0$ and the claim follows. If $f(\omega) \neq 0$, then there is some $n_0 \in \mathbb{N}$ such that $\|f_n(\omega)\|_X \leqslant 2\,\|f(\omega)\|_X$ for $n \geqslant n_0$, and hence $\omega \in \bigcap_{n \geqslant n_0} I_n$. Consequently, $\widetilde{f}_n(\omega) = f_n(\omega) \to f(\omega)$. By dominated convergence, it now follows that

$$\int\limits\_{\Omega} \left\| \widetilde{f}\_n(\omega) - f(\omega) \right\|\_{X}^p \mathrm{d}\mu(\omega) \to 0 \quad (n \to \infty),$$

which proves the claim.

As a consequence of the latter lemma, we can show that Bochner-measurability is preserved by pointwise convergence almost everywhere.

**Proposition 3.1.3** *Let* $f_n, f \colon \Omega \to X$ *for* $n \in \mathbb{N}$*. Moreover, assume that* $f_n$ *is Bochner-measurable for each* $n \in \mathbb{N}$ *and* $f_n(\omega) \to f(\omega)$ *as* $n \to \infty$ *for μ-almost every* $\omega \in \Omega$*. Then f is Bochner-measurable.*

*Proof* Since $f_n \to f$ almost everywhere, we have $[f \neq 0] \setminus N \subseteq \bigcup_{n \in \mathbb{N}} [f_n \neq 0] \setminus N$ for some nullset $N \subseteq \Omega$. Moreover, since $f_n$ is Bochner-measurable, the definition of simple functions yields that $\bigcup_{n \in \mathbb{N}} [f_n \neq 0] \subseteq \bigcup_{n \in \mathbb{N}} B_n$, where, for all $n \in \mathbb{N}$, $B_n$ is measurable with $\mu(B_n) < \infty$. The latter implies that there exists a sequence of measurable sets $(A_n)_{n \in \mathbb{N}}$ such that $A_n \subseteq A_{n+1}$, $\mu(A_n) < \infty$ for all $n \in \mathbb{N}$ and

$$[f \neq 0] \backslash N \subseteq \bigcup\_{n \in \mathbb{N}} A\_n.$$

For $n \in \mathbb{N}$ we set $g_n := \mathbb{1}_{A_n \cap [\widetilde{f}_n \leqslant n]} f_n$, where $\widetilde{f}_n \colon \Omega \to \mathbb{R}$ is measurable and equals $\|f_n(\cdot)\|_X$ *μ*-almost everywhere (cp. Remark 3.1.1(d) and (b)). In this way we obtain a sequence of Bochner-measurable functions with $g_n \to f$ *μ*-almost everywhere. Moreover, $g_n \in L_1(\mu; X)$ for each $n \in \mathbb{N}$ and thus, for each $n \in \mathbb{N}$ we find a simple function $h_n$ with $\|g_n - h_n\|_1 \leqslant 2^{-n}$ by Lemma 3.1.2. Then

$$\int\_{\Omega} \sum\_{n \in \mathbb{N}} \|g\_n(\omega) - h\_n(\omega)\|\_{X} \, \text{d}\mu(\omega) < \infty$$

and hence, $\sum_{n \in \mathbb{N}} \|g_n(\omega) - h_n(\omega)\|_X < \infty$ for *μ*-almost every $\omega \in \Omega$, which in particular implies $g_n - h_n \to 0$ *μ*-almost everywhere. Hence, $h_n \to f$ *μ*-almost everywhere, which shows the Bochner-measurability of *f*.

We can now prove that the spaces *Lp(μ*; *X)* are actually Banach spaces.

**Proposition 3.1.4** *Let p* ∈ [1*,*∞]*. Then (Lp(μ*; *X),* ·*p) is a Banach space and if X* = *H is a Hilbert space, then so too is L*2*(μ*; *H ) with the scalar product given by*

$$\langle f, g \rangle\_2 := \int\_{\Omega} \langle f(\omega), g(\omega) \rangle\_H \, \mathrm{d}\mu(\omega) \quad (f, g \in L\_2(\mu; H)).$$

*Proof* We just show the completeness of $L_p(\mu; X)$. Let $(f_n)_{n \in \mathbb{N}}$ be a sequence in $L_p(\mu; X)$ such that $\sum_{n=1}^{\infty} \|f_n\|_p < \infty$. We set

$$g_n(\omega) := \|f_n(\omega)\|_X \quad (n \in \mathbb{N},\ \omega \in \Omega).$$

Then $(g_n)_{n \in \mathbb{N}}$ is a sequence in $L_p(\mu)$ such that $\sum_{n=1}^{\infty} \|g_n\|_p < \infty$. By the completeness of $L_p(\mu)$ we infer that

$$\mathbf{g} := \sum\_{n=1}^{\infty} \mathbf{g}\_n$$

exists and is an element of $L_p(\mu)$. In particular, $g(\omega) < \infty$ for *μ*-a.e. $\omega \in \Omega$ and thus,

$$\sum\_{n=1}^{\infty} \|f\_n(\omega)\|\_{X} = \sum\_{n=1}^{\infty} g\_n(\omega) < \infty$$

for *μ*-a.e. $\omega \in \Omega$. By the completeness of *X* we can define

$$f(\omega) := \sum_{n=1}^{\infty} f_n(\omega)$$

for *μ*-a.e. $\omega \in \Omega$. Note that *f* is Bochner-measurable by Proposition 3.1.3. We need to prove that $f \in L_p(\mu; X)$ and that $\sum_{n=1}^{k} f_n \to f$ in $L_p(\mu; X)$ as $k \to \infty$. For this, it suffices to prove that

$$\sum\_{n=k}^{\infty} f\_n \in L\_p(\mu; X) \text{ and } \sum\_{n=k}^{\infty} f\_n \to 0 \text{ in } L\_p(\mu; X) \text{ as } k \to \infty. \tag{3.1}$$

Indeed, this would imply both $f - \sum_{n=1}^{k} f_n \in L_p(\mu; X)$ and the desired convergence result. We prove (3.1) for $p < \infty$ and $p = \infty$ separately.

First, let $p = \infty$. For each $n \in \mathbb{N}$ we have $f_n \in L_\infty(\mu; X)$ and thus $\|f_n(\omega)\|_X \leqslant \|f_n\|_\infty$ for all $\omega \in \Omega \setminus N_n$ and some nullset $N_n \subseteq \Omega$. We set $N := \bigcup_{n=1}^{\infty} N_n$, which is again a nullset. For $k \in \mathbb{N}$ and $\omega \in \Omega \setminus N$ we then estimate

$$\left\| \sum_{n=k}^{\infty} f_n(\omega) \right\|_X \leqslant \sum_{n=k}^{\infty} \|f_n(\omega)\|_X \leqslant \sum_{n=k}^{\infty} \|f_n\|_{\infty},$$

which yields (3.1).

Now, let $p < \infty$. For $k \in \mathbb{N}$ we estimate

$$\begin{aligned} \left(\int\_{\Omega} \left(\left\|\sum\_{n=k}^{\infty} f\_n(\omega)\right\|\_{X}\right)^p \, \mathrm{d}\mu(\omega)\right)^{\frac{1}{p}} &\leq \left(\int\_{\Omega} \left(\sum\_{n=k}^{\infty} \|f\_n(\omega)\|\_{X}\right)^p \, \mathrm{d}\mu(\omega)\right)^{\frac{1}{p}}\\ &= \left(\int\_{\Omega} \lim\_{m \to \infty} \left(\sum\_{n=k}^m \|f\_n(\omega)\|\_{X}\right)^p \, \mathrm{d}\mu(\omega)\right)^{\frac{1}{p}} \end{aligned}$$

$$\begin{aligned} &= \lim\_{m \to \infty} \left( \int\_{\Omega} \left( \sum\_{n=k}^{m} \|f\_n(\omega)\|\_{X} \right)^p \, \mathrm{d}\mu(\omega) \right)^{\frac{1}{p}} \\ &\le \lim\_{m \to \infty} \sum\_{n=k}^{m} \|f\_n\|\_{p} = \sum\_{n=k}^{\infty} \|f\_n\|\_{p} \,, \end{aligned}$$

where we have used monotone convergence in the third line. This estimate yields (3.1).

We now want to define an *X*-valued integral for functions in *L*1*(μ*; *X)*; the so-called Bochner-integral.

**Proposition 3.1.5** *The mapping*<sup>1</sup>

$$\int\_{\Omega} \text{d}\mu \colon S(\mu; X) \subseteq L\_1(\mu; X) \to X$$

$$f \mapsto \sum\_{\mathbf{x} \in X} \mathbf{x} \cdot \mu(A\_{f, \mathbf{x}})$$

*is linear and continuous, and thus has a unique continuous linear extension to L*1*(μ*; *X), called the* Bochner-integral*. Moreover,*

$$\left\| \int\_{\Omega} f \, \mathrm{d}\mu \right\|\_{X} \leqslant \| f \|\_{1} \quad (f \in L\_{1}(\mu; X)),$$

*and for A* ∈ *, f* ∈ *L*1*(μ*; *X) we set*

$$\int\_{A} f \operatorname{d} \mu := \int\_{\Omega} f \cdot \mathbf{1}\_{A} \operatorname{d} \mu.$$

*Proof* We first show linearity. Let $f, g \in S(\mu; X)$ and $\lambda \in \mathbb{K}$. Then, for $x \in X$ we have

$$A\_{\lambda f + \mathbf{g}, \mathbf{x}} = \left(\lambda f + \mathbf{g}\right)^{-1} [\{\mathbf{x}\}] = \bigcup\_{\mathbf{y} \in X} \left(f^{-1} [\{\mathbf{y}\}] \cap \mathbf{g}^{-1} [\{\mathbf{x} - \lambda \mathbf{y}\}]\right) = \bigcup\_{\mathbf{y} \in X} A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{x} - \lambda \mathbf{y}},$$

<sup>1</sup> Note that the sum is indeed finite and all summands are well-defined if we set 0*<sup>X</sup>* · ∞ := <sup>0</sup>*X.*

and therefore $\mu(A_{\lambda f + g, x}) = \sum_{y \in X} \mu(A_{f,y} \cap A_{g, x - \lambda y})$. Thus, we compute

$$\begin{split} \int\_{\Omega} (\lambda f + \mathbf{g}) \, \mathrm{d}\mu &= \sum\_{\mathbf{x} \in X} \mathbf{x} \cdot \mu(A\_{\lambda f + \mathbf{g}, \mathbf{x}}) = \sum\_{\mathbf{x} \in X} \sum\_{\mathbf{y} \in X} \mathbf{x} \cdot \mu(A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{x} - \lambda \mathbf{y}}) \\ &= \sum\_{\mathbf{y} \in X} \sum\_{\mathbf{x} \in X} \lambda \mathbf{y} \cdot \mu(A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{x} - \lambda \mathbf{y}}) \\ &+ \sum\_{\mathbf{y} \in X} \sum\_{\mathbf{x} \in X} (\mathbf{x} - \lambda \mathbf{y}) \cdot \mu(A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{x} - \lambda \mathbf{y}}) \\ &= \sum\_{\mathbf{y} \in X} \sum\_{\mathbf{x} \in X} \lambda \mathbf{y} \cdot \mu(A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{x} - \lambda \mathbf{y}}) + \sum\_{\mathbf{y} \in X} \sum\_{\mathbf{z} \in X} \mathbf{z} \cdot \mu(A\_{f, \mathbf{y}} \cap A\_{\mathbf{g}, \mathbf{z}}), \end{split}$$

where we interchanged the finite sums. Now,

$$\sum\_{\mathbf{x}\in X} \mu(A\_{f,\mathbf{y}} \cap A\_{\mathbf{g},\mathbf{x}-\mathbf{\lambda}\mathbf{y}}) = \mu\left(A\_{f,\mathbf{y}} \cap \bigcup\_{\mathbf{x}\in X} A\_{\mathbf{g},\mathbf{x}-\mathbf{\lambda}\mathbf{y}}\right) = \mu(A\_{f,\mathbf{y}})$$

as well as

$$\sum\_{\mathbf{y}\in X} \mu(A\_{f,\mathbf{y}} \cap A\_{\mathbf{g},\mathbb{Z}}) = \mu\left(\bigcup\_{\mathbf{y}\in X} A\_{f,\mathbf{y}} \cap A\_{\mathbf{g},\mathbb{Z}}\right) = \mu(A\_{\mathbf{g},\mathbb{Z}}),$$

and therefore we conclude

$$\int\_{\Omega} (\lambda f + \mathbf{g}) \, \mathrm{d}\mu = \lambda \sum\_{\mathbf{y} \in X} \mathbf{y} \cdot \mu(A\_{f, \mathbf{y}}) + \sum\_{\mathbf{z} \in X} \mathbf{z} \cdot \mu(A\_{\mathbf{g}, \mathbf{z}}) = \lambda \int\_{\Omega} f \, \mathrm{d}\mu + \int\_{\Omega} \mathbf{g} \, \mathrm{d}\mu.$$

In order to prove continuity, let *f* ∈ *S(μ*; *X)*. We estimate

$$\begin{aligned} \left\| \int\_{\Omega} f \, \mathrm{d}\mu \right\|\_{X} &= \left\| \sum\_{\boldsymbol{x} \in f[\Omega]} \boldsymbol{x} \cdot \mu(A\_{f,\boldsymbol{x}}) \right\|\_{X} \leqslant \sum\_{\boldsymbol{x} \in f[\Omega]} \left\| \boldsymbol{x} \right\|\_{X} \mu(A\_{f,\boldsymbol{x}}) \\ &= \int\_{\Omega} \sum\_{\boldsymbol{x} \in f[\Omega]} \left\| \boldsymbol{x} \right\|\_{X} \mathbbm{1}\_{A\_{f,\boldsymbol{x}}} \, \mathrm{d}\mu \\ &= \int\_{\Omega} \left\| f(\cdot) \right\|\_{X} \, \mathrm{d}\mu = \left\| f \right\|\_{1} \, . \end{aligned}$$

The remaining assertions now follow from Lemma 3.1.2 by continuous extension (see Corollary 2.1.5).
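To make the simple-function integral concrete, here is a small numerical sketch (hypothetical data, not from the text): a finite measure space with six atoms and a simple function with values in $\mathbb{R}^2$. The Bochner integral is then the finite sum $\sum_{x} x \cdot \mu(A_{f,x})$, and the bound $\left\|\int_\Omega f\,\mathrm{d}\mu\right\|_X \leqslant \|f\|_1$ can be checked directly.

```python
import numpy as np

# Hypothetical finite measure space: Omega = {0, ..., 5} with point weights mu({i}).
mu = np.array([0.5, 1.0, 0.25, 0.25, 2.0, 1.0])

# A simple function f: Omega -> R^2 (finitely many values, listed row-wise).
f = np.array([[ 1.0, 0.0],
              [ 1.0, 0.0],
              [ 0.0, 2.0],
              [ 0.0, 2.0],
              [-1.0, 1.0],
              [ 0.0, 0.0]])

# Bochner integral of the simple function: sum over values x of x * mu(A_{f,x}),
# which on a finite space is the mu-weighted sum of the rows.
integral = (mu[:, None] * f).sum(axis=0)

# ||f||_1 = integral of ||f(.)||_X over Omega.
norm_f_1 = (mu * np.linalg.norm(f, axis=1)).sum()

assert np.linalg.norm(integral) <= norm_f_1
```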

The next proposition tells us how the Bochner-integral of a function behaves if we compose the function with a bounded or closed linear operator first. In what follows, let $X' := L(X, \mathbb{K})$ denote the dual space of $X$.

**Proposition 3.1.6** *Let $f \in L_1(\mu; X)$ and let $Y$ be a Banach space.*

(a) *Let $B \in L(X, Y)$. Then $B \circ f \in L_1(\mu; Y)$ and*

$$\int\_{\Omega} B \circ f \,\mathrm{d}\mu = B \int\_{\Omega} f \,\mathrm{d}\mu.$$

(b) *Let $X_0 \subseteq X$ be a closed subspace with $f(\omega) \in X_0$ for $\mu$-a.e. $\omega \in \Omega$. Then $\int_\Omega f \,\mathrm{d}\mu \in X_0$.*

(c) *Let $A \colon \operatorname{dom}(A) \subseteq X \to Y$ be closed and linear. If $f(\omega) \in \operatorname{dom}(A)$ for $\mu$-a.e. $\omega \in \Omega$ and $A \circ f \in L_1(\mu; Y)$, then $\int_\Omega f \,\mathrm{d}\mu \in \operatorname{dom}(A)$ and*

$$A \int\_{\Omega} f \, \mathrm{d}\mu = \int\_{\Omega} A \circ f \, \mathrm{d}\mu.$$

*Proof*

(a) At first we observe that, if $f \in S(\mu; X)$, then

$$B \circ f = B \circ \sum\_{x \in X \backslash \{0\}} x \cdot \mathbb{1}\_{A\_{f,x}} = \sum\_{x \in X \backslash \{0\}} Bx \cdot \mathbb{1}\_{A\_{f,x}}.$$

Thus, $B \circ f \in S(\mu; Y)$ since $Bx \cdot \mathbb{1}_{A_{f,x}} \in S(\mu; Y)$, the sum is finite and $S(\mu; Y)$ is a vector space. Let now $f \in L_1(\mu; X)$. Then there is a sequence $(f_n)_{n \in \mathbb{N}}$ in $S(\mu; X)$ such that $f_n \to f$ $\mu$-a.e. Then $B \circ f_n \in S(\mu; Y)$ (see above) and, due to the continuity of $B$, we have $B \circ f_n \to B \circ f$ $\mu$-a.e.; hence $B \circ f$ is Bochner-measurable. Moreover, $\|B \circ f(\cdot)\|_Y \leqslant \|B\| \, \|f(\cdot)\|_X$, which yields $B \circ f \in L_1(\mu; Y)$. By continuity of both $B$ and $\int_\Omega \cdot \,\mathrm{d}\mu$, it suffices to check the interchanging property for $f \in S(\mu; X)$ alone. However, this is clear, since for a simple function $f$

$$B \circ f = B \left( \sum\_{x \in X} x \cdot \mathbb{1}\_{A\_{f,x}} \right) = \sum\_{x \in X} Bx \cdot \mathbb{1}\_{A\_{f,x}},$$

where the sum is actually finite and hence,

$$\int\_{\Omega} B \circ f \, \mathrm{d}\mu = \int\_{\Omega} \sum\_{x \in X} Bx \cdot \mathbb{1}\_{A\_{f,x}} \, \mathrm{d}\mu = \sum\_{x \in X} \int\_{\Omega} Bx \cdot \mathbb{1}\_{A\_{f,x}} \, \mathrm{d}\mu$$

$$= \sum\_{\boldsymbol{x} \in X} B \mathbf{x} \cdot \mu(A\_{f,\boldsymbol{x}}) = B \left( \sum\_{\boldsymbol{x} \in X} \boldsymbol{x} \cdot \mu(A\_{f,\boldsymbol{x}}) \right) = B \int\_{\Omega} f \, \mathrm{d}\mu,$$

where in the third equality we have used that $Bx \cdot \mathbb{1}_{A_{f,x}}$ is a simple function.

(b) Let $x' \in X'$ with $x'|_{X_0} = 0$. It follows from (a) that

$$x'\left(\int\_{\Omega} f \, \mathrm{d}\mu\right) = \int\_{\Omega} x' \circ f \, \mathrm{d}\mu = 0,$$

and since $x'$ was arbitrary, it follows from the Theorem of Hahn–Banach that $\int_\Omega f \,\mathrm{d}\mu \in X_0$.

(c) Consider the space *L*1*(μ*; *X* × *Y ).* By assumption, it follows that

$$(f, A \circ f) \in L\_1(\mu; X \times Y).$$

However, $(f, A \circ f)(\omega) = (f(\omega), (A \circ f)(\omega)) \in A \subseteq X \times Y$ for $\mu$-a.e. $\omega \in \Omega$, and since $A$ is closed we can use (b) to derive that

$$\int\_{\Omega} (f, A \circ f) \, \mathrm{d}\mu \in A. \tag{3.2}$$

Let $\pi_1, \pi_2$ be the projections from $X \times Y$ onto $X$ and $Y$, respectively. It then follows from part (a) that

$$\pi\_1\left(\int\_{\Omega} (f, A \circ f) \, \mathrm{d}\mu\right) = \int\_{\Omega} \pi\_1(f, A \circ f) \, \mathrm{d}\mu = \int\_{\Omega} f \, \mathrm{d}\mu,$$

and analogously for $\pi_2$. Using these equalities we derive from (3.2) that $\int_\Omega f \,\mathrm{d}\mu \in \operatorname{dom}(A)$ and that $A \int_\Omega f \,\mathrm{d}\mu = \int_\Omega A \circ f \,\mathrm{d}\mu$.

As a consequence of the latter proposition, we derive the fundamental theorem of calculus for Banach space-valued functions.

**Corollary 3.1.7 (Fundamental Theorem of Calculus)** *Let $a, b \in \mathbb{R}$, $a < b$, and consider the measure space $([a,b], \mathcal{B}([a,b]), \lambda)$, where $\mathcal{B}([a,b])$ denotes the Borel $\sigma$-algebra of $[a,b]$ and $\lambda$ is the Lebesgue measure. Let $f \colon [a,b] \to X$ be continuously differentiable.*<sup>2</sup> *Then*

$$f(b) - f(a) = \int\_{[a,b]} f' \, \mathrm{d}\lambda.$$

*Proof* Note first of all that continuous functions are Bochner-measurable (which can easily be seen using Theorem 3.1.10 below). Thus, the integral on the right-hand side is well-defined. Let $\varphi \in X'$. Then $\varphi \circ f \colon [a,b] \to \mathbb{K}$ is continuously differentiable with $(\varphi \circ f)'(t) = (\varphi \circ f')(t)$. Using Proposition 3.1.6(a) together

<sup>2</sup> By this we mean that $f$ is continuous on $[a,b]$, continuously differentiable on $(a,b)$, and $f'$ has a continuous extension to $[a,b]$.

with the fundamental theorem of calculus for the scalar-valued case we get

$$\varphi\left(\int\_{\left[a,b\right]} f' \, \mathrm{d}\lambda\right) = \int\_{\left[a,b\right]} \left(\varphi \circ f'\right) \, \mathrm{d}\lambda = \varphi\left(f(b)\right) - \varphi\left(f(a)\right) = \varphi\left(f(b) - f(a)\right).$$

Since this holds for all $\varphi \in X'$, the assertion follows from the Theorem of Hahn–Banach.
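As a numerical illustration (a sketch, not part of the text): for a concrete $\mathbb{R}^2$-valued $f$ the identity $f(b) - f(a) = \int_{[a,b]} f' \,\mathrm{d}\lambda$ can be verified componentwise with a quadrature rule.

```python
import numpy as np

# f: [0, 1] -> R^2, f(t) = (t^2, sin t); check f(1) - f(0) = integral of f'.
t = np.linspace(0.0, 1.0, 20001)
fp = np.stack([2.0 * t, np.cos(t)], axis=1)          # samples of f'(t)

# Componentwise trapezoidal rule for the Bochner integral of f' over [0, 1].
integral = ((fp[1:] + fp[:-1]) / 2 * np.diff(t)[:, None]).sum(axis=0)

endpoint_difference = np.array([1.0**2 - 0.0**2, np.sin(1.0) - np.sin(0.0)])
assert np.allclose(integral, endpoint_difference, atol=1e-8)
```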

Next we state a density result, which will be useful throughout the course.

**Lemma 3.1.8** *Let $1 \leqslant p < \infty$, let $D \subseteq L_p(\mu)$ be total in $L_p(\mu)$ and let $X$ be a Banach space. Then the set $\{\varphi(\cdot)x \;;\; x \in X,\ \varphi \in D\}$ is total in $L_p(\mu; X)$.*

*Proof* By Lemma 3.1.2, we know that $S(\mu; X)$ is dense in $L_p(\mu; X)$. Thus, it suffices to approximate $\mathbb{1}_A x$ for a measurable set $A$ with $\mu(A) < \infty$ and $x \in X$. For this, take a sequence $(\phi_n)_n$ in the linear hull of $D$ with $\phi_n \to \mathbb{1}_A$ in $L_p(\mu)$ as $n \to \infty$. Then

$$\left\| \mathbb{1}\_A x - \phi\_n x \right\|\_{L\_p(\mu; X)} = \|x\|\_X \, \|\mathbb{1}\_A - \phi\_n\|\_{L\_p(\mu)} \to 0 \quad (n \to \infty).$$

Thus, the claim follows.

The following application of Lemma 3.1.8 also deals with a dense subset of *X*.

**Lemma 3.1.9** *Let $1 \leqslant p < \infty$, $D \subseteq L_p(\mu)$ total in $L_p(\mu)$, $X$ a Banach space, and $D_0 \subseteq X$ total in $X$. Then $\{\varphi(\cdot)x \;;\; x \in D_0,\ \varphi \in D\}$ is total in $L_p(\mu; X)$.*

*Proof* The proof follows upon realising that the set {*ϕ(*·*)x* ; *x* ∈ *D*0*, ϕ* ∈ *D*} is total in the set {*ϕ(*·*)x* ; *x* ∈ *X, ϕ* ∈ *D*}. From here we just apply Lemma 3.1.8.

We conclude this section by stating and proving the celebrated Theorem of Pettis, which characterises Bochner-measurability in terms of weak measurability.

**Theorem 3.1.10 (Theorem of Pettis)** *Let $f \colon \Omega \to X$. Then $f$ is Bochner-measurable if and only if*

(a) *$f$ is weakly Bochner-measurable; that is, $x' \circ f$ is measurable for all $x' \in X'$, and*
(b) *$f$ is almost separably-valued; that is, there exists a $\mu$-nullset $N_0 \subseteq \Omega$ such that $f[\Omega \setminus N_0]$ is separable.*
*Proof* If *f* is Bochner-measurable, then clearly it is weakly Bochner-measurable. Further, as *f* is the almost everywhere limit of simple functions, it is almost separably-valued, since each simple function attains values in a finite-dimensional subspace of *X*.


Assume now conversely that $f$ satisfies (a) and (b). We define $Y := \overline{\operatorname{lin}}\, f[\Omega \setminus N_0]$, which is a separable Banach space by (b). Thus, there exists a sequence $(x'_n)_{n \in \mathbb{N}}$ in $X'$ such that

$$\|\mathbf{y}\| = \sup\_{n \in \mathbb{N}} |\mathbf{x}\_n'(\mathbf{y})| \quad (\mathbf{y} \in Y).$$

Since for each $n \in \mathbb{N}$ the function $g_n := |x'_n \circ f|$ is measurable by (a) and Remark 3.1.1(d), we find a $\mu$-nullset $N_n$ and a measurable function $\widetilde{g}_n \colon \Omega \to \mathbb{R}$ such that $\widetilde{g}_n = g_n$ on $\Omega \setminus N_n$ by Remark 3.1.1(b). Then $\sup_{n \in \mathbb{N}} \widetilde{g}_n(\cdot)$ is measurable and

$$\|f(\omega)\| = \sup\_{n \in \mathbb{N}} \widetilde{\mathfrak{g}}\_n(\omega) \quad (\omega \in \Omega \backslash N),$$

where $N := \bigcup_{n \in \mathbb{N}} N_n \cup N_0$, which shows that $\|f(\cdot)\|$ is measurable. Let $\varepsilon > 0$ and let $(y_n)_{n \in \mathbb{N}}$ be a dense sequence in $Y$. Applying the previous argument to the function $f_k(\cdot) := f(\cdot) - y_k$ for $k \in \mathbb{N}$, we infer that $\|f_k(\cdot)\|$ is measurable and hence there is a $\mu$-nullset $N'_k$ and a measurable function $\widetilde{f}_k \colon \Omega \to \mathbb{R}$ such that $\|f_k(\cdot)\| = \widetilde{f}_k$ on $\Omega \setminus N'_k$. Consequently, the sets

$$E\_k := \{ \widetilde{f}\_k < \varepsilon \} = \{ \omega \in \Omega \; ; \; \widetilde{f}\_k(\omega) < \varepsilon \} \quad (k \in \mathbb{N})$$

are measurable. Moreover, by the density of $\{y_n \;;\; n \in \mathbb{N}\}$ in $Y$, we get that $\Omega \setminus N' \subseteq \bigcup_{k \in \mathbb{N}} E_k$ with $N' := \bigcup_{k=1}^\infty N'_k \cup N_0$. Setting $F_1 := E_1$ and $F_{n+1} := E_{n+1} \setminus \bigcup_{k=1}^n F_k$ for $n \in \mathbb{N}$, we obtain a sequence $(F_n)_{n \in \mathbb{N}}$ of pairwise disjoint measurable sets with $\Omega \setminus N' \subseteq \bigcup_{n \in \mathbb{N}} F_n$. We set

$$g := \sum\_{k=1}^{\infty} y\_k \mathbb{1}\_{F\_k}$$

and obtain $\|f(\omega) - g(\omega)\| \leqslant \varepsilon$ for each $\omega \in \Omega \setminus N'$. Hence, if $g$ is Bochner-measurable, then so is $f$. Indeed, we then find a sequence of such functions converging to $f$ $\mu$-almost everywhere, and so Proposition 3.1.3 applies. For showing the Bochner-measurability of $g$, let $(\Omega_k)_{k \in \mathbb{N}}$ be a sequence of pairwise disjoint measurable sets such that $\bigcup_{k \in \mathbb{N}} \Omega_k = \Omega$ and $\mu(\Omega_k) < \infty$ for each $k \in \mathbb{N}$. For $n \in \mathbb{N}$ we set

$$g\_n := \sum\_{k,j=1}^n y\_k \mathbb{1}\_{F\_k \cap \Omega\_j}.$$

Then *(gn)n*∈<sup>N</sup> is a sequence of simple functions with *gn* → *g* pointwise as *n* → ∞ and thus, *g* is Bochner-measurable.

## **3.2 The Time Derivative as a Normal Operator**

Now let *<sup>H</sup>* be a Hilbert space over <sup>K</sup> ∈ {R*,* <sup>C</sup>}. For *<sup>ν</sup>* <sup>∈</sup> <sup>R</sup> and *<sup>p</sup>* <sup>∈</sup> [1*,*∞*)* we define the measure

$$\mu\_{p,\nu}(A) := \int\_A \mathrm{e}^{-p\nu t} \, \mathrm{d}\lambda(t)$$

for $A$ in the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ of $\mathbb{R}$. As our underlying Hilbert space for the time derivative we set

$$L\_{2,\boldsymbol{\nu}}(\mathbb{R};H) := L\_2(\mu\_{2,\boldsymbol{\nu}};H).$$

In the same way we define

$$L\_{p, \boldsymbol{\upsilon}}(\mathbb{R}; H) := L\_p(\mu\_{p, \boldsymbol{\upsilon}}; H),$$

for *<sup>p</sup>* <sup>∈</sup> [1*,*∞*)*. If *<sup>H</sup>* <sup>=</sup> <sup>K</sup> we abbreviate *Lp,ν (*R*)* := *Lp,ν(*R; <sup>K</sup>*)*. Thus, *<sup>f</sup>* <sup>∈</sup> *Lp,ν(*R; *H )* if and only if *<sup>f</sup>* is Bochner measurable and

$$\int\_{\mathbb{R}} \|f(t)\|\_{H}^{p} \, \mathrm{d}\mu\_{p,\boldsymbol{\nu}}(t) = \int\_{\mathbb{R}} \|f(t)\|\_{H}^{p} \, \mathrm{e}^{-p\boldsymbol{\nu}t} \, \mathrm{d}t < \infty.$$
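To illustrate the role of the weight (an added example, not from the original text): an exponentially growing function becomes square integrable once $\nu$ exceeds its growth rate. For $a \in \mathbb{R}$ and $f(t) := \mathrm{e}^{at} \mathbb{1}_{[0,\infty)}(t)$ one computes

$$\|f\|\_{L\_{2,\nu}(\mathbb{R})}^2 = \int\_0^\infty \mathrm{e}^{2at} \, \mathrm{e}^{-2\nu t} \, \mathrm{d}t = \frac{1}{2(\nu - a)} \quad \text{if } \nu > a,$$

while the integral diverges for $\nu \leqslant a$. This mechanism is what later allows exponentially growing solutions of differential equations to be treated within these spaces.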

Our aim is to define the time derivative on $L_{2,\nu}(\mathbb{R}; H)$. For this, we define a suitable anti-derivative as an operator, which for $\nu \neq 0$ turns out to be one-to-one and bounded. Then we introduce the time derivative as the inverse of this anti-derivative. The reason for proceeding this way is that the boundedness of the anti-derivative yields an easy formula for the adjoint of the time derivative.

We start our considerations with the definition of convolution operators in *<sup>L</sup>*2*,ν(*R; *H )*.

**Lemma 3.2.1** *Let <sup>k</sup>* <sup>∈</sup> *<sup>L</sup>*1*,ν(*R*). We define the convolution operator*

$$k\* \colon L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H) \to L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H)$$

*by*

$$(k \ast f)\left(t\right) := \int\_{\mathbb{R}} k(s)f(t-s)\,\mathrm{d}s,$$

*which exists for a.e. $t \in \mathbb{R}$. Then $k\ast$ is linear and bounded with $\|k\ast\| \leqslant \|k\|_{L_{1,\nu}(\mathbb{R})}$.*

*Proof* Let *<sup>f</sup>* <sup>∈</sup> *<sup>L</sup>*2*,ν (*R; *H )*. We first prove that *<sup>s</sup>* → *k(s)f (t* <sup>−</sup> *s)* <sup>∈</sup> *<sup>L</sup>*1*(*R; *H )* for a.e. *<sup>t</sup>* <sup>∈</sup> <sup>R</sup>. The Bochner-measurability is clear since *<sup>k</sup>* and *<sup>f</sup>* are both Bochnermeasurable. Moreover,

$$\begin{split} &\int\_{\mathbb{R}} \left( \int\_{\mathbb{R}} \|k(s)f(t-s)\|\_{H} \, \mathrm{d}s \right)^{2} \mathrm{e}^{-2\nu t} \, \mathrm{d}t \\ &= \int\_{\mathbb{R}} \left( \int\_{\mathbb{R}} |k(s)|^{\frac{1}{2}} \, \mathrm{e}^{-\frac{\nu}{2}s} \, |k(s)|^{\frac{1}{2}} \, \mathrm{e}^{-\frac{\nu}{2}s} \, \|f(t-s)\|\_{H} \, \mathrm{e}^{-\nu(t-s)} \, \mathrm{d}s \right)^{2} \mathrm{d}t \\ &\leqslant \int\_{\mathbb{R}} \left( \int\_{\mathbb{R}} |k(s)| \, \mathrm{e}^{-\nu s} \, \mathrm{d}s \right) \left( \int\_{\mathbb{R}} |k(s)| \, \mathrm{e}^{-\nu s} \, \|f(t-s)\|\_{H}^{2} \, \mathrm{e}^{-2\nu(t-s)} \, \mathrm{d}s \right) \mathrm{d}t \\ &= \|k\|\_{L\_{1,\nu}(\mathbb{R})} \int\_{\mathbb{R}} |k(s)| \int\_{\mathbb{R}} \|f(t-s)\|\_{H}^{2} \, \mathrm{e}^{-2\nu(t-s)} \, \mathrm{d}t \, \mathrm{e}^{-\nu s} \, \mathrm{d}s \\ &= \|k\|\_{L\_{1,\nu}(\mathbb{R})}^{2} \, \|f\|\_{L\_{2,\nu}(\mathbb{R};H)}^{2}, \end{split}$$

which on the one hand proves that

$$\int\_{\mathbb{R}} \|k(s)f(t-s)\|\_{H} \,\mathrm{d}s < \infty$$

for a.e. $t \in \mathbb{R}$ and on the other hand shows the norm estimate, once we have shown the Bochner-measurability of $k \ast f$. For proving the latter, we apply Theorem 3.1.10. Since $f$ is Bochner-measurable, we find a nullset $N$ such that $H_0 := \overline{\operatorname{lin}}\, f[\mathbb{R} \setminus N]$ is separable. Hence, for almost every $t \in \mathbb{R}$ we have

$$(k \ast f)(t) = \int\_{\mathbb{R}} k(s)f(t-s)\,\mathrm{d}s = \int\_{\mathbb{R}\backslash N} k(t-s)f(s)\,\mathrm{d}s \in H\_0$$

by Proposition 3.1.6(b). Thus, $k \ast f$ is almost separably-valued. Moreover, for $x' \in H'$ we have by Proposition 3.1.6(a)

$$x' \circ (k \ast f) = k \ast (x' \circ f)$$

almost everywhere, and thus the weak Bochner-measurability follows from the fact that the convolution of two measurable scalar-valued functions is measurable. Since the linearity of $k\ast$ is clear, the proof is done.
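The norm bound survives discretization: on a uniform grid the same Cauchy–Schwarz argument yields the discrete analogue of $\|k \ast f\|_{L_{2,\nu}} \leqslant \|k\|_{L_{1,\nu}} \|f\|_{L_{2,\nu}}$, which the following sketch (with arbitrary sample data, not part of the proof) tests numerically.

```python
import numpy as np

# Discrete sanity check of ||k * f||_{L2,nu} <= ||k||_{L1,nu} ||f||_{L2,nu}
# on a uniform grid with step h.
rng = np.random.default_rng(0)
nu, h = 0.7, 0.01
t = np.arange(-5.0, 5.0, h)

# Compactly supported "kernel" and "signal" (zero outside [-2, 2]).
k = np.where(np.abs(t) < 2.0, rng.standard_normal(t.size), 0.0)
f = np.where(np.abs(t) < 2.0, rng.standard_normal(t.size), 0.0)

conv = np.convolve(k, f) * h                      # samples of (k * f)
tc = 2 * t[0] + h * np.arange(conv.size)          # exact grid of the full convolution

lhs = np.sqrt((conv**2 * np.exp(-2 * nu * tc)).sum() * h)
k_L1nu = (np.abs(k) * np.exp(-nu * t)).sum() * h
f_L2nu = np.sqrt((f**2 * np.exp(-2 * nu * t)).sum() * h)

assert lhs <= k_L1nu * f_L2nu
```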

**Definition** For $\nu \neq 0$ we define the operator

$$I\_{\boldsymbol{\nu}} \colon L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H) \to L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H)$$

$$I\_{\nu} := \begin{cases} \mathbb{1}\_{[0,\infty)}\ast, & \text{if } \nu > 0, \\ -\mathbb{1}\_{(-\infty,0]}\ast, & \text{if } \nu < 0. \end{cases}$$

Note that, by Lemma 3.2.1, $I_\nu$ is bounded with $\|I_\nu\| \leqslant \frac{1}{|\nu|}$.

*Remark 3.2.2* For $\nu > 0$ and $f \in L_{2,\nu}(\mathbb{R}; H)$ we have

$$I\_{\nu}f(t) = \mathbb{1}\_{[0,\infty)} \* f(t) = \int\_0^{\infty} f(t-s) \, \mathrm{d}s = \int\_{-\infty}^t f(s) \, \mathrm{d}s \quad (\text{a.e.} \, t \in \mathbb{R}).$$

Analogously, for *ν <* 0, *<sup>f</sup>* <sup>∈</sup> *<sup>L</sup>*2*,ν(*R; *H )* we have

$$I\_{\nu}f(t) = -\int\_t^{\infty} f(s) \, \mathrm{d}s \quad (\text{a.e. } t \in \mathbb{R}).$$
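A small numerical sketch of the bound $\|I_\nu\| \leqslant \frac{1}{|\nu|}$ for $\nu > 0$ (an added illustration using an assumed grid discretization, not from the text): $I_\nu f$ is the cumulative integral of $f$, and the weighted norms can be compared directly.

```python
import numpy as np

# Discretized I_nu for nu > 0 (cumulative integral) and the bound
# ||I_nu f||_{L2,nu} <= (1/nu) ||f||_{L2,nu}, tested for f = 1_{[0,1]}.
nu, h = 0.5, 0.001
t = np.arange(-10.0, 30.0, h)
f = np.where((t >= 0.0) & (t <= 1.0), 1.0, 0.0)

I_f = np.cumsum(f) * h                   # (I_nu f)(t) = integral of f up to t

w = np.exp(-2 * nu * t)                  # weight of L_{2,nu}(R)
ratio = np.sqrt((I_f**2 * w).sum()) / np.sqrt((f**2 * w).sum())
assert ratio <= 1.0 / nu
```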

**Proposition 3.2.3** *Let $\nu \neq 0$. Then $I_\nu$ is one-to-one, and $C_\mathrm{c}^1(\mathbb{R}; H)$, the space of continuously differentiable, compactly supported functions on $\mathbb{R}$ with values in $H$, is contained in the range of $I_\nu$.*

*Proof* We just prove the assertion for the case $\nu > 0$. Let $f \in L_{2,\nu}(\mathbb{R}; H)$ satisfy $I_\nu f = 0$. In particular, we obtain $0 = I_\nu f(t) = \int_{-\infty}^t f(s) \,\mathrm{d}s$ for all $t \in \mathbb{R} \setminus N$ for some Lebesgue nullset $N \subseteq \mathbb{R}$. Then for $a, b \in \mathbb{R} \setminus N$ with $a < b$ and $x \in H$ we have that

$$\begin{aligned} \left\langle f, \mathbf{e}^{2\boldsymbol{\nu}(\cdot)} \mathbb{1}\_{[a,b]} \cdot \mathbf{x} \right\rangle\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H)} &= \int\_{\mathbb{R}} \left\langle f(t), \mathbf{e}^{2\boldsymbol{\nu}t} \mathbb{1}\_{[a,b]}(t) \cdot \mathbf{x} \right\rangle\_{H} \mathbf{e}^{-2\boldsymbol{\nu}t} \,\mathrm{d}t \\ &= \left\langle \int\_{a}^{b} f(t) \, \mathrm{d}t, \mathbf{x} \right\rangle\_{H} \\ &= \langle (I\_{\boldsymbol{\nu}} f) \, (b) - (I\_{\boldsymbol{\nu}} f) \, (a), \mathbf{x} \rangle\_{H} = 0. \end{aligned}$$

Thus $f = 0$. Indeed, since $\mathbb{R} \setminus N$ is dense in $\mathbb{R}$, the set $\{\mathrm{e}^{2\nu(\cdot)} \mathbb{1}_{[a,b]} \;;\; a, b \in \mathbb{R} \setminus N\}$ is total in $L_{2,\nu}(\mathbb{R})$. Hence, $\{\mathrm{e}^{2\nu(\cdot)} \mathbb{1}_{[a,b]} \cdot x \;;\; a, b \in \mathbb{R} \setminus N,\ x \in H\}$ is total in $L_{2,\nu}(\mathbb{R}; H)$ by Lemma 3.1.8. This proves the injectivity of $I_\nu$. Moreover, if $\varphi \in C_\mathrm{c}^1(\mathbb{R}; H)$, then by Corollary 3.1.7 we have

$$\varphi(t) = \int\_{-\infty}^{t} \varphi'(s) \, \mathrm{d}s = \left( I\_{\nu} \varphi' \right)(t) \quad (\text{a.e. } t \in \mathbb{R}).$$

**Definition** For $\nu \neq 0$ we define the *time derivative* $\partial_{t,\nu}$ on $L_{2,\nu}(\mathbb{R}; H)$ by

$$\partial\_{t,\nu} := I\_{\nu}^{-1}.$$

Note that by Lemma 3.2.1 and Proposition 3.2.3, $\partial_{t,\nu}$ is a closed linear operator with $C_\mathrm{c}^1(\mathbb{R}; H) \subseteq \operatorname{dom}(\partial_{t,\nu})$. Since

$$C\_\mathrm{c}^1(\mathbb{R}; H) \supseteq \operatorname{lin} \left\{ \varphi \cdot x \;;\; \varphi \in C\_\mathrm{c}^1(\mathbb{R}),\ x \in H \right\},$$

we infer that $\partial_{t,\nu}$ is densely defined by Lemma 3.1.8 and Exercise 3.2. Moreover, since $I_\nu \varphi' = \varphi$ for $\varphi \in C_\mathrm{c}^1(\mathbb{R}; H)$, we get that

$$\partial\_{t,\nu}\varphi = \varphi';$$

that is, $\partial_{t,\nu}$ extends the classical derivative of continuously differentiable functions. We shall discuss the actual domain of $\partial_{t,\nu}$ in the next chapter.

**Proposition 3.2.4** *Let $\nu \neq 0$. Then $D_H := \operatorname{lin} \left\{ \varphi \cdot x \;;\; \varphi \in C_\mathrm{c}^\infty(\mathbb{R}),\ x \in H \right\}$ is a core for $\partial_{t,\nu}$. Here, $C_\mathrm{c}^\infty(\mathbb{R})$ denotes the space of smooth functions on $\mathbb{R}$ with compact support.*

*Proof* We first prove that

$$\left\{\varphi' \;;\; \varphi \in C\_\mathrm{c}^{\infty}(\mathbb{R})\right\} \tag{3.3}$$

is dense in $L_{2,\nu}(\mathbb{R})$. As $C_\mathrm{c}^\infty(\mathbb{R})$ is dense in $L_{2,\nu}(\mathbb{R})$ (see Exercise 3.2), it suffices to approximate functions in $C_\mathrm{c}^\infty(\mathbb{R})$. For this, let $f \in C_\mathrm{c}^\infty(\mathbb{R})$. We now define

$$\varphi\_n(t) := \begin{cases} \int\_{-\infty}^t f(s) - f(s - n) \, \mathrm{d}s & \text{if } \nu > 0, \\ \int\_{-\infty}^t f(s) - f(s + n) \, \mathrm{d}s & \text{if } \nu < 0 \end{cases} \quad (t \in \mathbb{R}, n \in \mathbb{N}).$$

Then $\varphi_n \in C_\mathrm{c}^\infty(\mathbb{R})$ for each $n \in \mathbb{N}$ and

$$\varphi'\_n(t) = \begin{cases} f(t) - f(t - n) & \text{if } \nu > 0, \\ f(t) - f(t + n) & \text{if } \nu < 0 \end{cases} \quad (t \in \mathbb{R}, n \in \mathbb{N}).$$

Consequently,

$$\left\|\boldsymbol{\varphi}\_{n}^{\prime} - \boldsymbol{f}\right\|\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R})}^{2} = \begin{cases} \int\_{\mathbb{R}} |\boldsymbol{f}(t-n)|^{2} \mathbf{e}^{-2\boldsymbol{\nu}t} \,\mathrm{d}t & \text{if } \boldsymbol{\nu} > \boldsymbol{0}, \\ \int\_{\mathbb{R}} |\boldsymbol{f}(t+n)|^{2} \mathbf{e}^{-2\boldsymbol{\nu}t} \,\mathrm{d}t & \text{if } \boldsymbol{\nu} < \boldsymbol{0} \end{cases}$$

$$= \left\|\boldsymbol{f}\right\|\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R})}^{2} \mathbf{e}^{-2|\boldsymbol{\nu}|n} \to 0 \quad (n \to \infty),$$

which shows the density of (3.3) in *L*2*,ν(*R*)*. By Lemma 3.1.8 we have that

$$\left\{\varphi' \cdot x \; ; \; \varphi \in C\_c^{\infty}(\mathbb{R}), \; x \in H \right\}$$

is total in $L_{2,\nu}(\mathbb{R}; H)$, and so $\partial_{t,\nu}[D_H]$ is dense in $L_{2,\nu}(\mathbb{R}; H)$. Now let $f \in \operatorname{dom}(\partial_{t,\nu})$ and $\varepsilon > 0$. By what we have shown above, there exists some $\varphi \in D_H$ such that

$$\|\partial\_{t,\nu}\varphi - \partial\_{t,\nu}f\|\_{L\_{2,\nu}(\mathbb{R};H)} \leqslant \varepsilon.$$

Since $\partial_{t,\nu}^{-1} = I_\nu$ is bounded with $\|\partial_{t,\nu}^{-1}\| \leqslant \frac{1}{|\nu|}$, the latter implies that

$$\|\varphi - f\|\_{L\_{2,\nu}(\mathbb{R}; H)} \leqslant \frac{\varepsilon}{|\nu|},$$

and hence, *D<sup>H</sup>* is indeed a core for *∂t ,ν*.

**Corollary 3.2.5** *For <sup>ν</sup>* <sup>∈</sup> <sup>R</sup> *the mapping*

$$\begin{aligned} \exp(-\nu \mathrm{m}) \colon L\_{2,\nu}(\mathbb{R}; H) &\to L\_2(\mathbb{R}; H) \\ f &\mapsto (t \mapsto \mathrm{e}^{-\nu t} f(t)) \end{aligned}$$

*is unitary, and for $\nu, \mu \neq 0$ one has*

$$\exp(-\nu \mathbf{m})(\partial\_{\mathbf{l},\boldsymbol{\nu}}-\nu)\exp(-\nu \mathbf{m})^{-1} = \exp(-\mu \mathbf{m})(\partial\_{\mathbf{l},\boldsymbol{\mu}}-\mu)\exp(-\mu \mathbf{m})^{-1}.$$

*Proof* The proof is left as Exercise 3.5. For this we recall that the equality to be proven is an equality of relations and, in particular, includes the equality of the (natural) domains of the operators involved. Furthermore, note that it suffices to show equality on $C_\mathrm{c}^\infty(\mathbb{R}; H)$ and then to use an appropriate density result.
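For orientation (a formal sketch of the computation behind Exercise 3.5, carried out on smooth compactly supported functions and ignoring domain questions): for $\varphi \in C_\mathrm{c}^\infty(\mathbb{R}; H)$ the product rule gives

$$\left(\exp(-\nu \mathrm{m})\left(\partial\_{t,\nu} - \nu\right)\exp(-\nu \mathrm{m})^{-1}\varphi\right)(t) = \mathrm{e}^{-\nu t}\left(\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathrm{e}^{\nu t}\varphi(t)\right) - \nu\,\mathrm{e}^{\nu t}\varphi(t)\right) = \varphi'(t),$$

so the conjugated operator acts as the plain derivative, independently of $\nu$; the remaining work in Exercise 3.5 is the density argument identifying the natural domains.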

By Corollary 3.2.5 we can now define $\partial_{t,0}$. Let $\nu \neq 0$. Then

$$\partial\_{t,0} := \exp(-\nu \mathrm{m}) (\partial\_{t,\nu} - \nu) \exp(-\nu \mathrm{m})^{-1}.$$

Note that in view of Corollary 3.2.5, the assertion of Proposition 3.2.4 now also holds for *ν* = 0.

Finally, we want to compute the adjoint of *∂t ,ν*.

**Corollary 3.2.6** *Let <sup>ν</sup>* <sup>∈</sup> <sup>R</sup>*. The adjoint of ∂t ,ν is given by*

$$\partial\_{t,\nu}^\* = -\partial\_{t,\nu} + 2\nu.$$

*In particular, ∂t ,ν is a normal operator with* Re *∂t ,ν* := <sup>1</sup> 2 *∂t ,ν* + *∂*<sup>∗</sup> *t ,ν* = *ν, and ∂t ,*<sup>0</sup> *is skew-selfadjoint.*

*Proof* Let $\nu \neq 0$ first. Integrating by parts, one obtains

$$\begin{aligned} \int\_{\mathbb{R}} \left< \partial\_{t,\boldsymbol{\nu}} \varphi(t), \boldsymbol{\psi}(t) \right> \mathbf{e}^{-2\boldsymbol{\nu}t} \, \mathrm{d}t &= \int\_{\mathbb{R}} \left< \boldsymbol{\varphi}'(t), \boldsymbol{\psi}(t) \right> \mathbf{e}^{-2\boldsymbol{\nu}t} \, \mathrm{d}t \\ &= \int\_{\mathbb{R}} \left< \boldsymbol{\varphi}(t), -\boldsymbol{\psi}'(t) + 2\boldsymbol{\nu}\boldsymbol{\psi}(t) \right> \mathbf{e}^{-2\boldsymbol{\nu}t} \, \mathrm{d}t \end{aligned}$$

for $\varphi, \psi \in C_\mathrm{c}^\infty(\mathbb{R}; H)$. Since $C_\mathrm{c}^\infty(\mathbb{R}; H)$ is a core for $\partial_{t,\nu}$ by Proposition 3.2.4, the latter shows

$$\partial\_{t,\nu} \subseteq -\partial\_{t,\nu}^\* + 2\nu.$$

Since we know that $\partial_{t,\nu}$ is onto, it suffices to prove that $-\partial_{t,\nu}^* + 2\nu$ is one-to-one, since this would imply equality in the latter operator inclusion. To do so, we apply Theorem 2.2.5 to compute

$$\ker(-\partial\_{t,\nu}^\* + 2\nu) = \operatorname{ran}(-\partial\_{t,\nu} + 2\nu)^\perp.$$

Moreover, $-\partial_{t,\nu} + 2\nu$ is unitarily equivalent to $-\partial_{t,-\nu}$ by Corollary 3.2.5, and since $\partial_{t,-\nu}$ is onto, so is $-\partial_{t,\nu} + 2\nu$. Thus $\ker(-\partial_{t,\nu}^* + 2\nu) = L_{2,\nu}(\mathbb{R}; H)^\perp = \{0\}$, which yields the assertion.

The case *ν* = 0 follows directly from the definition of *∂t ,*0.
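The normality statement can be read off formally from the adjoint formula (a formal computation, suppressing domain questions):

$$\operatorname{Re} \partial\_{t,\nu} = \tfrac{1}{2}\left(\partial\_{t,\nu} + \partial\_{t,\nu}^{\*}\right) = \tfrac{1}{2}\left(\partial\_{t,\nu} - \partial\_{t,\nu} + 2\nu\right) = \nu,$$

$$\partial\_{t,\nu}\,\partial\_{t,\nu}^{\*} = \partial\_{t,\nu}\left(-\partial\_{t,\nu} + 2\nu\right) = -\partial\_{t,\nu}^{2} + 2\nu\,\partial\_{t,\nu} = \left(-\partial\_{t,\nu} + 2\nu\right)\partial\_{t,\nu} = \partial\_{t,\nu}^{\*}\,\partial\_{t,\nu}.$$

For $\nu = 0$ the adjoint formula reads $\partial_{t,0}^* = -\partial_{t,0}$; that is, $\partial_{t,0}$ is skew-selfadjoint.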

## **3.3 Comments**

Standard references for Bochner integration and related results are [6, 31].

Considering the derivative operator in an exponentially weighted space goes back (at least) to Morgenstern [67], where ordinary differential equations were considered in a classical setting. In fact, we shall return to this observation in the next chapter, where we study some implications of the concepts developed so far for ordinary and delay differential equations.

A first occurrence of the derivative operator in exponentially weighted *L*2-spaces can be found in [83], where a corresponding spectral theorem has been focussed on. We will prove in a later chapter that the spectral representation of the time derivative as a multiplication operator can be realised by a shifted variant of the Fourier transformation—the so-called Fourier–Laplace transformation.

In an applied context, the time derivative operator discussed here has been introduced in [82].

## **Exercises**

**Exercise 3.1** A sequence $(\varphi_n)_n$ in $C_\mathrm{c}^\infty(\mathbb{R}^d)$ is called a *$\delta$-sequence* if $\varphi_n \geqslant 0$ and $\int_{\mathbb{R}^d} \varphi_n = 1$ for all $n \in \mathbb{N}$, and $\operatorname{spt} \varphi_n \subseteq [-\frac{1}{n}, \frac{1}{n}]^d$ for all $n \in \mathbb{N}$.

Let $\varphi \in C_\mathrm{c}^\infty(\mathbb{R}^d)$ with $\operatorname{spt} \varphi \subseteq [-1,1]^d$, $\varphi \geqslant 0$ and $\int_{\mathbb{R}^d} \varphi = 1$. Prove that $(\varphi_n)_n$ given by $\varphi_n(x) := n^d \varphi(nx)$ for $x \in \mathbb{R}^d$, $n \in \mathbb{N}$, defines a $\delta$-sequence. Moreover, give an example of such a function $\varphi$.

**Exercise 3.2** It is well known that $\{\mathbb{1}_I \;;\; I\ \text{a}\ d\text{-dimensional bounded interval}\}$ is total in $L_2(\mathbb{R}^d)$.

(a) Let $\varphi \in C_\mathrm{c}^\infty(\mathbb{R}^d)$ and $f \in L_2(\mathbb{R}^d)$. Define as usual

$$f \ast \varphi := \left(\mathbf{x} \mapsto \int\_{\mathbb{R}^d} f(\mathbf{x} - \mathbf{y}) \varphi(\mathbf{y}) \, \mathrm{d}\mathbf{y}\right).$$

Prove that $f \ast \varphi \in C^\infty(\mathbb{R}^d)$ with $\partial^\alpha(f \ast \varphi) = f \ast \partial^\alpha \varphi$ for all $\alpha \in \mathbb{N}_0^d$, where $\partial^\alpha \varphi = \partial_1^{\alpha_1} \cdots \partial_d^{\alpha_d} \varphi$. Moreover, prove that $\operatorname{spt} f \ast \varphi \subseteq \operatorname{spt} f + \operatorname{spt} \varphi$.

(b) Let *(ϕn)n* be a *<sup>δ</sup>*-sequence and *<sup>f</sup>* <sup>∈</sup> *<sup>L</sup>*2*(*R*<sup>d</sup> )*. Show that *<sup>f</sup>* <sup>∗</sup> *ϕn* <sup>→</sup> *<sup>f</sup>* in *<sup>L</sup>*2*(*R*<sup>d</sup> )* as *n* → ∞.

*Hint*: Prove that $\mathbb{1}_I \ast \varphi_n \to \mathbb{1}_I$ in $L_2(\mathbb{R}^d)$ for all $d$-dimensional bounded intervals $I$ and use that $\|f \ast \varphi_n\|_2 \leqslant \|f\|_2$ (see also Lemma 3.2.1).

(c) Prove that *C*∞ <sup>c</sup> *(*R*<sup>d</sup> )* is dense in *<sup>L</sup>*2*(*R*<sup>d</sup> )*.

**Exercise 3.3** Let $a < b$, let $X_0, X_1, X_2$ be Banach spaces, $f \colon (a,b) \to X_0$ and $g \colon (a,b) \to X_1$ both continuously differentiable, and $\ell \colon X_0 \times X_1 \to X_2$ bilinear and continuous. Prove that $h \colon (a,b) \to X_2$ given by

$$h(t) := \ell(f(t), \mathbf{g}(t)) \quad (t \in (a, b))$$

is continuously differentiable with

$$h'(t) = \ell(f'(t), \mathbf{g}(t)) + \ell(f(t), \mathbf{g}'(t)) \quad (t \in (a, b)).$$

If $f, f', g, g'$ have continuous extensions to $[a,b]$, prove the integration by parts formula:

$$\int\_{a}^{b} \ell(f'(t), \mathbf{g}(t)) \, \mathrm{d}t = \ell(f(b), \mathbf{g}(b)) - \ell(f(a), \mathbf{g}(a)) - \int\_{a}^{b} \ell(f(t), \mathbf{g}'(t)) \, \mathrm{d}t.$$

**Exercise 3.4** For $\nu \neq 0$, show that $\|I_\nu\| = \frac{1}{|\nu|}$.

**Exercise 3.5** Prove Corollary 3.2.5.

**Exercise 3.6** Let $\nu \in \mathbb{R}$ and let $H$ be a complex Hilbert space. Prove that $\sigma(\partial_{t,\nu}) \subseteq \{\mathrm{i}t + \nu \;;\; t \in \mathbb{R}\}$, where $\partial_{t,0}$ is defined after Corollary 3.2.5. *Hint*: For $f \in \operatorname{dom}(\partial_{t,\nu})$ and $z \in \mathbb{C}$ compute $\operatorname{Re} \langle (z - \partial_{t,\nu})f, f \rangle_{L_{2,\nu}(\mathbb{R};H)}$ by using Corollary 3.2.6. For proving the surjectivity of $z - \partial_{t,\nu}$ for suitable $z$, use the formula

$$\overline{\operatorname{ran}}(z - \partial\_{t,\nu}) = \ker(z^\* - \partial\_{t,\nu}^\*)^\perp.$$

*Remark*: Later we will see that, actually, $\sigma(\partial_{t,\nu}) = \{\mathrm{i}t + \nu \;;\; t \in \mathbb{R}\}$.

**Exercise 3.7** Consider the differential equation

$$\left(\partial\_{t,\nu}^2 - 1\right)u = \mathbb{1}\_{[-1,1]}.$$

Since $\partial_{t,\nu}^2 - 1 = \left(\partial_{t,\nu} - 1\right)\left(\partial_{t,\nu} + 1\right)$, it follows by Exercise 3.6 that there is a unique $u \in L_{2,\nu}(\mathbb{R})$ solving this equation if $\nu \notin \{-1, 1\}$. Compute these solutions. *Hint*: For $u \in \operatorname{dom}(\partial_{t,\nu})$ use the fact that $u$ is necessarily continuous (which we shall establish in the next chapter).

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Ordinary Differential Equations**

In this chapter, we discuss a first application of the time derivative operator constructed in the previous chapter. More precisely, we analyse well-posedness of ordinary differential equations and will at the same time provide a Hilbert space proof of the classical Picard–Lindelöf theorem.<sup>1</sup> We shall furthermore see that the abstract theory developed here also allows for more general differential equations to be considered. In particular, we will have a look at so-called delay differential equations with finite or infinite delay; neutral differential equations are considered in the exercises section.

We start with some information on the time derivative and its domain.

## **4.1 The Domain of** *∂t,ν* **and the Sobolev Embedding Theorem**

Let $H$ be a Hilbert space. Readers familiar with the notion of Sobolev spaces might have already realised that the domain of $\partial_{t,\nu}$ can be described as the space of $L_{2,\nu}(\mathbb{R}; H)$-functions with distributional derivative lying in $L_{2,\nu}(\mathbb{R}; H)$. We shall also use

$$H^1_\nu(\mathbb{R}; H) := \operatorname{dom}(\partial_{t,\nu}) \subseteq L_{2,\nu}(\mathbb{R}; H),$$

if we want to emphasise the target Hilbert space of the $\operatorname{dom}(\partial_{t,\nu})$-functions. In order to stress the distributional character of the derivative introduced, we include the following result. Later on, we will have the opportunity to take a more detailed look at Sobolev spaces in more general contexts.

<sup>1</sup> There are different names for this theorem. It is also called the existence and uniqueness theorem for initial value problems for ordinary differential equations, as well as the Cauchy–Lipschitz theorem.

C. Seifert et al., *Evolutionary Equations*, Operator Theory: Advances and Applications 287, https://doi.org/10.1007/978-3-030-89397-2\_4

**Proposition 4.1.1** *Let $\nu \in \mathbb{R}$ and $f, g \in L_{2,\nu}(\mathbb{R}; H)$. Then the following conditions are equivalent:*

(i) $f \in \operatorname{dom}(\partial_{t,\nu})$ *and* $\partial_{t,\nu} f = g$;

(ii) *for all* $\varphi \in C_c^\infty(\mathbb{R})$ *we have*

$$-\int_{\mathbb{R}} \varphi' f = \int_{\mathbb{R}} \varphi g,$$

*where these integrals are Bochner integrals of the $H$-valued functions $t \mapsto \varphi'(t) f(t)$ and $t \mapsto \varphi(t) g(t), respectively.*

*Proof* Assume that $f \in \operatorname{dom}(\partial_{t,\nu})$. By Proposition 3.2.4 and Corollary 3.2.6, we have that $D_H = \operatorname{lin}\left\{ \varphi \cdot x \;;\; \varphi \in C_c^\infty(\mathbb{R}),\ x \in H \right\} \subseteq \operatorname{dom}(\partial_{t,\nu}^*)$ (which also holds for $\nu = 0$) and

$$\left\langle \partial_{t,\nu} f, \psi \cdot x \right\rangle_{L_{2,\nu}} = \left\langle f, \left(-\psi' + 2\nu\psi\right) \cdot x \right\rangle_{L_{2,\nu}}$$

for all $x \in H$ and $\psi \in C_c^\infty(\mathbb{R})$. Hence, we obtain for all $\psi \in C_c^\infty(\mathbb{R})$

$$\int_{\mathbb{R}} \left(-\psi' + 2\nu\psi\right) f \,\mathrm{e}^{-2\nu\cdot} = \int_{\mathbb{R}} \psi\, \partial_{t,\nu} f \,\mathrm{e}^{-2\nu\cdot};$$

putting $\varphi := \mathrm{e}^{-2\nu\cdot}\psi$ and using that multiplication by $\mathrm{e}^{-2\nu\cdot}$ is a bijection on $C_c^\infty(\mathbb{R})$, we deduce the claimed formula with $g = \partial_{t,\nu} f$.

On the other hand, the equation involving $g$ applied to $\varphi = \mathrm{e}^{-2\nu\cdot}\psi$ for $\psi \in C_c^\infty(\mathbb{R})$ implies that

$$\int_{\mathbb{R}} \left(-\psi' + 2\nu\psi\right) f \,\mathrm{e}^{-2\nu\cdot} = \int_{\mathbb{R}} \psi g \,\mathrm{e}^{-2\nu\cdot}.$$

Testing this equation with *x* ∈ *H* yields

$$\langle g, \psi \cdot x \rangle_{L_{2,\nu}} = \left\langle f, \left(-\psi' + 2\nu\psi\right) \cdot x \right\rangle_{L_{2,\nu}} = \left\langle f, -\partial_{t,\nu}(\psi \cdot x) + 2\nu\, \psi \cdot x \right\rangle_{L_{2,\nu}}.$$

Since $D_H$ is dense in $\operatorname{dom}(\partial_{t,\nu})$ by Proposition 3.2.4, we infer that

$$\langle g, h \rangle_{L_{2,\nu}} = \left\langle f, -\partial_{t,\nu} h + 2\nu h \right\rangle_{L_{2,\nu}}$$

for all $h \in \operatorname{dom}(\partial_{t,\nu})$. Now, Corollary 3.2.6 yields

$$\langle g, h \rangle_{L_{2,\nu}} = \left\langle f, \partial_{t,\nu}^* h \right\rangle_{L_{2,\nu}} \quad (h \in \operatorname{dom}(\partial_{t,\nu}^*)).$$

Thus, $f \in \operatorname{dom}(\partial_{t,\nu}^{**}) = \operatorname{dom}(\partial_{t,\nu})$ and $\partial_{t,\nu} f = g$.

The next result is a version of the Sobolev embedding theorem. In particular, it confirms that functions in the domain of $\partial_{t,\nu}$ are continuous. This result was announced in Exercise 3.7. Here, we make use of the explicit form of the domain of $\partial_{t,\nu}$ as the range space of the integral operator $I_\nu$. We define

$$C_\nu(\mathbb{R}; H) := \left\{ f\colon \mathbb{R} \to H \;\middle|\; f \text{ continuous},\ \|f\|_{\nu,\infty} := \sup_{t \in \mathbb{R}} \left\| \mathrm{e}^{-\nu t} f(t) \right\|_H < \infty \right\}$$

and regard it as being endowed with the obvious norm.

**Theorem 4.1.2 (Sobolev Embedding Theorem)** *Let $\nu \in \mathbb{R}$. Then every $f \in \operatorname{dom}(\partial_{t,\nu})$ has a continuous representative, and the mapping*

$$\operatorname{dom}(\partial_{t,\nu}) \ni f \mapsto f \in C_\nu(\mathbb{R}; H)$$

*is continuous.*

*Proof* We restrict ourselves to the case when $\nu > 0$; the remaining cases can be proved by invoking Corollary 3.2.5. Let $f \in \operatorname{dom}(\partial_{t,\nu})$. By definition, we find $g \in L_{2,\nu}(\mathbb{R}; H)$ such that $f = \partial_{t,\nu}^{-1} g = I_\nu g$. Then for all $t \in \mathbb{R}$ we compute

$$\int_{-\infty}^{t} \|g(\tau)\| \,\mathrm{d}\tau = \int_{-\infty}^{t} \|g(\tau)\|\, \mathrm{e}^{-\nu\tau} \mathrm{e}^{\nu\tau} \,\mathrm{d}\tau \leqslant \sqrt{\int_{-\infty}^{t} \|g(\tau)\|^2 \,\mathrm{e}^{-2\nu\tau} \,\mathrm{d}\tau} \sqrt{\int_{-\infty}^{t} \mathrm{e}^{2\nu\tau} \,\mathrm{d}\tau} \leqslant \|\partial_{t,\nu} f\|_{L_{2,\nu}} \sqrt{\frac{1}{2\nu}}\, \mathrm{e}^{\nu t}.$$

Thus, $g$ is integrable on $(-\infty, t]$ for all $t \in \mathbb{R}$, and dominated convergence implies that

$$f = \left( t \mapsto \int_{-\infty}^{t} g(s) \,\mathrm{d}s \right)$$

is continuous. Moreover, for $t \in \mathbb{R}$ we obtain

$$\|f(t)\| \leqslant \int_{-\infty}^{t} \|g(\tau)\| \,\mathrm{d}\tau \leqslant \|\partial_{t,\nu} f\|_{L_{2,\nu}} \sqrt{\frac{1}{2\nu}}\, \mathrm{e}^{\nu t},$$

which yields the claimed continuity.
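As a numerical aside (not part of the formal development): the pointwise bound $\mathrm{e}^{-\nu t}\|f(t)\| \leqslant \|\partial_{t,\nu} f\|_{L_{2,\nu}} \sqrt{1/(2\nu)}$ obtained in the proof can be checked on a concrete example. In the Python sketch below, the choices $g = \mathbb{1}_{[0,1]}$ and $\nu = 1$ are illustrative assumptions.

```python
import numpy as np

# Numerical sketch (assumptions: g = indicator of [0, 1], nu = 1).
# For f = I_nu g, the proof of Theorem 4.1.2 yields the uniform bound
#   e^{-nu t} ||f(t)|| <= ||g||_{L_{2,nu}} / sqrt(2 nu)   for all t.
nu = 1.0
t = np.linspace(-2.0, 12.0, 200001)
dt = t[1] - t[0]

g = ((t >= 0.0) & (t <= 1.0)).astype(float)   # g = 1_{[0,1]}
f = np.clip(t, 0.0, 1.0)                      # f(t) = int_{-inf}^t g(s) ds

# ||g||_{L_{2,nu}}^2 = int |g(t)|^2 e^{-2 nu t} dt  (Riemann sum)
norm_g = np.sqrt(np.sum(g**2 * np.exp(-2.0 * nu * t)) * dt)

weighted_sup = np.max(np.abs(f) * np.exp(-nu * t))  # ||f||_{nu,infty}
bound = norm_g / np.sqrt(2.0 * nu)

assert weighted_sup <= bound
```

Here $f = I_\nu g$ has the closed form $f(t) = \min\{\max\{t, 0\}, 1\}$, so the weighted supremum equals $\mathrm{e}^{-1} \approx 0.368$, comfortably below the bound $\approx 0.465$.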

**Corollary 4.1.3** *For all $f \in \operatorname{dom}(\partial_{t,\nu})$, we have that $\left\| \mathrm{e}^{-\nu t} f(t) \right\|_H \to 0$ as $t \to \pm\infty$.*

The proof is left as Exercise 4.2.

## **4.2 The Picard–Lindelöf Theorem**

The prototype of the Picard–Lindelöf theorem will be formulated for so-called uniformly Lipschitz continuous functions. We first need a preparation.

**Definition** Let *X* be a Banach space. Then we define

$$S_c(\mathbb{R}; X) := \left\{ f\colon \mathbb{R} \to X \;\middle|\; f \text{ simple},\ \operatorname{spt} f \text{ compact} \right\}$$

to be the set of *simple functions from* R *to X with compact support*.

**Lemma 4.2.1** *Let $X$ be a Banach space and $\nu, \eta \in \mathbb{R}$. Then $S_c(\mathbb{R}; X)$ is dense in $L_{2,\nu}(\mathbb{R}; X) \cap L_{2,\eta}(\mathbb{R}; X)$; that is, for all $f \in L_{2,\nu}(\mathbb{R}; X) \cap L_{2,\eta}(\mathbb{R}; X)$ there exists $(f_n)_n$ in $S_c(\mathbb{R}; X)$ such that $f_n \to f$ in both $L_{2,\nu}(\mathbb{R}; X)$ and $L_{2,\eta}(\mathbb{R}; X)$. In particular, $S_c(\mathbb{R}; X)$ is dense in $L_{2,\nu}(\mathbb{R}; X)$.*

*Proof* Let $f \in L_{2,\nu}(\mathbb{R}; X) \cap L_{2,\eta}(\mathbb{R}; X)$. Then for all $n \in \mathbb{N}$ we have that $\mathbb{1}_{[-n,n]} f \in L_{2,\nu}(\mathbb{R}; X) \cap L_{2,\eta}(\mathbb{R}; X)$ and $\mathbb{1}_{[-n,n]} f \to f$ in $L_{2,\nu}(\mathbb{R}; X)$ and in $L_{2,\eta}(\mathbb{R}; X)$ as $n \to \infty$. For $n \in \mathbb{N}$ let $(\tilde{f}_{n,k})_k$ be in $S(\mu_{2,\nu}; X)$ such that $\tilde{f}_{n,k} \to \mathbb{1}_{[-n,n]} f$ in $L_{2,\nu}(\mathbb{R}; X)$ as $k \to \infty$. We put $f_{n,k} := \mathbb{1}_{[-n,n]} \tilde{f}_{n,k} \in S_c(\mathbb{R}; X)$. Then $f_{n,k} \to \mathbb{1}_{[-n,n]} f$ in $L_{2,\nu}(\mathbb{R}; X)$ and, since the weights $\mathrm{e}^{-2\nu t}$ and $\mathrm{e}^{-2\eta t}$ are comparable on $[-n,n]$, also in $L_{2,\eta}(\mathbb{R}; X)$ as $k \to \infty$.

In order to define the notion of uniformly Lipschitz continuous functions, we first need the Lipschitz semi-norm.

**Definition** Let $X_0, X_1$ be normed spaces, and $F\colon X_0 \to X_1$ Lipschitz continuous. Then

$$\|F\|_{\mathrm{Lip}} := \sup_{\substack{x, y \in X_0 \\ x \neq y}} \frac{\|F(x) - F(y)\|}{\|x - y\|}$$

is the *Lipschitz semi-norm* of *F*.

**Definition** Let $H_0, H_1$ be Hilbert spaces, $\mu \in \mathbb{R}$. Then a function $F\colon S_c(\mathbb{R}; H_0) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H_1)$ is called *uniformly Lipschitz continuous* if for all $\nu \geqslant \mu$ we have that $F$, considered in $L_{2,\nu}(\mathbb{R}; H_0) \times L_{2,\nu}(\mathbb{R}; H_1)$, is Lipschitz continuous, and for the unique Lipschitz continuous extensions $F^\nu$, $\nu \geqslant \mu$, we have that

$$\sup_{\nu \geqslant \mu} \left\| F^\nu \right\|_{\mathrm{Lip}} < \infty.$$

*Remark 4.2.2* Another way to introduce uniformly Lipschitz continuous mappings is the following. Let $H_0, H_1$ be Hilbert spaces, $\mu \in \mathbb{R}$. Let $(F^\nu)_{\nu \geqslant \mu}$ be a family of Lipschitz continuous mappings $F^\nu\colon L_{2,\nu}(\mathbb{R}; H_0) \to L_{2,\nu}(\mathbb{R}; H_1)$ such that

$$\sup_{\nu \geqslant \mu} \left\| F^\nu \right\|_{\mathrm{Lip}} < \infty$$

and the mappings are consistent in the sense that for all $\nu, \eta \geqslant \mu$ and $f \in L_{2,\nu}(\mathbb{R}; H_0) \cap L_{2,\eta}(\mathbb{R}; H_0)$ we have

$$F^\nu(f) = F^\eta(f).$$

Then, for $\nu \geqslant \mu$ and $f \in S_c(\mathbb{R}; H_0)$ we have $F^\nu(f) \in \bigcap_{\eta \geqslant \mu} L_{2,\eta}(\mathbb{R}; H_1)$, and $F^\nu|_{S_c(\mathbb{R}; H_0)}$ is uniformly Lipschitz continuous.

**Theorem 4.2.3 (Picard–Lindelöf—Hilbert Space Version)** *Let $H$ be a Hilbert space, $\mu \in \mathbb{R}$ and $F\colon S_c(\mathbb{R}; H) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H)$ uniformly Lipschitz continuous with $L := \sup_{\nu \geqslant \mu} \|F^\nu\|_{\mathrm{Lip}}$. Then for all $\nu > \max\{L, \mu\}$ the equation*

$$\partial_{t,\nu} u_\nu = F^\nu(u_\nu)$$

*admits a unique solution $u_\nu \in \operatorname{dom}(\partial_{t,\nu})$. Furthermore, for all $\nu > \max\{L, \mu\}$ the following properties hold:*

(a) *If $F^\nu(u_\nu)$ is continuous, then $u_\nu$ is differentiable and $u_\nu'(t) = F^\nu(u_\nu)(t)$ for all $t \in \mathbb{R}$.*

(b) *For all $a \in \mathbb{R}$, the function $v := \mathbb{1}_{(-\infty,a]} u_\nu$ satisfies*

$$v = \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} F^\nu(v).$$

(c) *For all $\eta > \max\{L, \mu\}$ we have $u_\nu = u_\eta$.*

(d) *For all $f \in L_{2,\nu}(\mathbb{R}; H)$ the equation*

$$\partial_{t,\nu} v = F^\nu(v) + f$$

*admits a unique solution $v_{\nu,f} \in \operatorname{dom}(\partial_{t,\nu})$, and if $f, g \in L_{2,\nu}(\mathbb{R}; H)$ satisfy $f = g$ on $(-\infty, a]$ for some $a \in \mathbb{R}$, then $v_{\nu,f} = v_{\nu,g}$ on $(-\infty, a]$.*

*Proof of Theorem 4.2.3—First Part* Define $\Phi\colon L_{2,\nu}(\mathbb{R}; H) \to L_{2,\nu}(\mathbb{R}; H)$ by

$$
\Phi(u) = \partial\_{t,\boldsymbol{\nu}}^{-1} F^{\boldsymbol{\nu}}(u).
$$

Since $\left\|\partial_{t,\nu}^{-1}\right\| \leqslant \frac{1}{\nu}$ and $\nu > L$, it follows that $\Phi$ is a contraction and thus admits a unique fixed point $u_\nu$, which by definition solves the equation in question. Moreover, we have that $u_\nu = \Phi(u_\nu) = \partial_{t,\nu}^{-1} F^\nu(u_\nu) \in \operatorname{dom}(\partial_{t,\nu})$.

Differentiability of $u_\nu$ as in (a) follows from Exercise 4.1 and the continuity of $F^\nu(u_\nu)$.

For the unique existence asserted in (d), note that the unique existence of $v_{\nu,f}$ follows from the above considerations after realising that $\tilde{\Phi}(v) := \partial_{t,\nu}^{-1} F^\nu(v) + \partial_{t,\nu}^{-1} f$ defines a contraction in $L_{2,\nu}(\mathbb{R}; H)$. For the remaining statements in (d) and the statements in (b) and (c), we need some prerequisites.

**Definition** Let $H_0, H_1$ be Hilbert spaces, $\nu \in \mathbb{R}$ and $F\colon L_{2,\nu}(\mathbb{R}; H_0) \to L_{2,\nu}(\mathbb{R}; H_1)$. Then $F$ is called *causal* if for all $a \in \mathbb{R}$ and all $f, g \in L_{2,\nu}(\mathbb{R}; H_0)$ with $f = g$ on $(-\infty, a]$, we have that $F(f) = F(g)$ on $(-\infty, a]$.

*Remark 4.2.4* Let $\nu \in \mathbb{R}$, $a \in \mathbb{R}$. If $f \in L_{2,\nu}(\mathbb{R}; H)$ with $\operatorname{spt} f \subseteq (-\infty, a]$ then $f \in \bigcap_{\eta \leqslant \nu} L_{2,\eta}(\mathbb{R}; H)$ and

$$\|f\|_{L_{2,\eta}(\mathbb{R};H)} \leqslant \mathrm{e}^{(\nu-\eta)a} \|f\|_{L_{2,\nu}(\mathbb{R};H)} \quad (\eta \leqslant \nu).$$

Likewise, if $\operatorname{spt} f \subseteq [a, \infty)$, we get $f \in \bigcap_{\rho \geqslant \nu} L_{2,\rho}(\mathbb{R}; H)$ with

$$\|f\|_{L_{2,\rho}(\mathbb{R};H)} \leqslant \mathrm{e}^{(\nu-\rho)a} \|f\|_{L_{2,\nu}(\mathbb{R};H)} \quad (\rho \geqslant \nu).$$
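As a numerical aside (not part of the formal development), the first inequality of the remark can be checked on a concrete example; the choices $f = \mathbb{1}_{[-3,1]}$, $a = 1$, $\nu = 2$ and $\eta = \frac{1}{2}$ in the Python sketch below are illustrative assumptions.

```python
import numpy as np

# Sketch: check ||f||_{L_{2,eta}} <= e^{(nu - eta) a} ||f||_{L_{2,nu}}
# for eta <= nu and spt f contained in (-inf, a]
# (assumed example: f = 1_{[-3, 1]}, a = 1, nu = 2, eta = 0.5).
nu, eta, a = 2.0, 0.5, 1.0
t = np.linspace(-5.0, 2.0, 70001)
dt = t[1] - t[0]
f = ((t >= -3.0) & (t <= a)).astype(float)    # spt f = [-3, 1]

def weighted_norm(g, rho):
    # ||g||_{L_{2,rho}}^2 = int |g(t)|^2 e^{-2 rho t} dt  (Riemann sum)
    return np.sqrt(np.sum(g**2 * np.exp(-2.0 * rho * t)) * dt)

lhs = weighted_norm(f, eta)
rhs = np.exp((nu - eta) * a) * weighted_norm(f, nu)
assert lhs <= rhs
```

The inequality simply exploits that $\mathrm{e}^{-2\eta t} \leqslant \mathrm{e}^{2(\nu-\eta)a}\, \mathrm{e}^{-2\nu t}$ for $t \leqslant a$.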

**Lemma 4.2.5** *Let $H_0, H_1$ be Hilbert spaces, $\mu \in \mathbb{R}$, $F\colon S_c(\mathbb{R}; H_0) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H_1)$ uniformly Lipschitz continuous. Then the following statements hold:*

(a) *For all $\nu \geqslant \mu$ the mapping $F^\nu$ is causal.*

(b) *For all $\nu \geqslant \mu$ with $\nu > 0$ the mapping $\partial_{t,\nu}^{-1} F^\nu$ is causal.*

(c) *For all $\nu, \eta \geqslant \mu$ and $f \in L_{2,\nu}(\mathbb{R}; H_0) \cap L_{2,\eta}(\mathbb{R}; H_0)$ we have $F^\nu(f) = F^\eta(f)$.*
*Proof* (a) We divide the proof into three steps.

(i) By Lemma 4.2.1 and the uniform Lipschitz continuity of $F$, it suffices to prove the causality statement for arguments in $S_c(\mathbb{R}; H_0)$.

(ii) Let $a \in \mathbb{R}$, $f \in S_c(\mathbb{R}; H_0)$ with $f = 0$ on $(-\infty, a]$, and let $g \in \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H_1)$ with $\|g\|_{L_{2,\nu}(\mathbb{R};H_1)} \leqslant c \|f\|_{L_{2,\nu}(\mathbb{R};H_0)}$ for some $c \geqslant 0$ and all $\nu \geqslant \mu$. Then

$$\int_{-\infty}^{a} \|g(t)\|_{H_1}^2 \,\mathrm{e}^{2\nu(a-t)} \,\mathrm{d}t \leqslant \int_{\mathbb{R}} \|g(t)\|_{H_1}^2 \,\mathrm{e}^{2\nu(a-t)} \,\mathrm{d}t \leqslant c^2 \int_{a}^{\infty} \|f(t)\|_{H_0}^2 \,\mathrm{e}^{2\nu(a-t)} \,\mathrm{d}t \to 0$$

as $\nu \to \infty$. Since $\mathrm{e}^{2\nu(a-t)} \to \infty$ as $\nu \to \infty$ for all $t < a$, the monotone convergence theorem implies $g = 0$ on $(-\infty, a]$.

(iii) Let $f, g \in S_c(\mathbb{R}; H_0)$ such that $f = g$ on $(-\infty, a]$ for some $a \in \mathbb{R}$. Then $f - g = 0$ on $(-\infty, a]$. Since $F$ is uniformly Lipschitz continuous, with $L := \sup_{\nu \geqslant \mu} \|F^\nu\|_{\mathrm{Lip}}$ we obtain $\|F^\nu(f) - F^\nu(g)\|_{L_{2,\nu}(\mathbb{R};H_1)} \leqslant L \|f - g\|_{L_{2,\nu}(\mathbb{R};H_0)}$ for all $\nu \geqslant \mu$. By (ii) we conclude $F^\nu(f) = F^\nu(g)$ on $(-\infty, a]$ for all $\nu \geqslant \mu$, which by (i) yields the assertion.

The statement in (b) directly follows from (a) and the causality of $\partial_{t,\nu}^{-1}$; note that $\partial_{t,\nu}^{-1} F^\nu$ is uniformly Lipschitz continuous only for $\nu > 0$. Let us prove (c). Since $F^\nu(f) = F(f) = F^\eta(f)$ for $f \in S_c(\mathbb{R}; H_0)$, the set $S_c(\mathbb{R}; H_0)$ is dense in $L_{2,\nu}(\mathbb{R}; H_0) \cap L_{2,\eta}(\mathbb{R}; H_0)$ by Lemma 4.2.1, and $F^\nu$ and $F^\eta$ are Lipschitz continuous, we obtain the assertion.

*Proof of Theorem 4.2.3—Second Part* The remaining part in (d): Let $f, g \in L_{2,\nu}(\mathbb{R}; H)$ with $f = g$ on $(-\infty, a]$. Since $\nu > L \geqslant 0$, we compute, using Lemma 4.2.5(b) and causality of $\partial_{t,\nu}^{-1}$, that

$$\begin{split} \mathbb{1}\_{(-\infty,a]}v\_{\boldsymbol{\nu},f} &= \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}F^{\boldsymbol{\nu}}\left(v\_{\boldsymbol{\nu},f}\right) + \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}f \\ &= \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}F^{\boldsymbol{\nu}}\left(\mathbb{1}\_{(-\infty,a]}v\_{\boldsymbol{\nu},f}\right) + \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}\mathbb{1}\_{(-\infty,a]}f \\ &= \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}F^{\boldsymbol{\nu}}\left(\mathbb{1}\_{(-\infty,a]}v\_{\boldsymbol{\nu},f}\right) + \mathbb{1}\_{(-\infty,a]}\partial\_{t,\boldsymbol{\nu}}^{-1}\mathbb{1}\_{(-\infty,a]}g. \end{split}$$

The same computation also yields that

$$\mathbb{1}_{(-\infty,a]} v_{\nu,g} = \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} F^\nu \left( \mathbb{1}_{(-\infty,a]} v_{\nu,g} \right) + \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} \mathbb{1}_{(-\infty,a]} g.$$

It is easy to see that $u \mapsto \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} F^\nu(u) + \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} \mathbb{1}_{(-\infty,a]} g$ defines a contraction in $L_{2,\nu}(\mathbb{R}; H)$. Hence, the contraction mapping principle implies that $\mathbb{1}_{(-\infty,a]} v_{\nu,f} = \mathbb{1}_{(-\infty,a]} v_{\nu,g}$.

The statement in (b) follows from the fact that $u \mapsto \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} F^\nu(u)$ defines a contraction and Lemma 4.2.5(b).

For the proof of (c), we observe that for all $n \in \mathbb{N}$ we have $\mathbb{1}_{(-\infty,n]} u_\eta \in L_{2,\nu}(\mathbb{R}; H) \cap L_{2,\eta}(\mathbb{R}; H)$. Hence, by (b) and Lemma 4.2.5(c), it follows that

$$\mathbb{1}_{(-\infty,n]} u_\eta = \mathbb{1}_{(-\infty,n]} \partial_{t,\eta}^{-1} F^\eta \left( \mathbb{1}_{(-\infty,n]} u_\eta \right) = \mathbb{1}_{(-\infty,n]} \partial_{t,\nu}^{-1} F^\nu \left( \mathbb{1}_{(-\infty,n]} u_\eta \right).$$

As $\mathbb{1}_{(-\infty,n]} u_\nu$ satisfies the same fixed point equation, we deduce $\mathbb{1}_{(-\infty,n]} u_\eta = \mathbb{1}_{(-\infty,n]} u_\nu$ for all $n \in \mathbb{N}$, which yields the assertion.
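As a numerical aside (not part of the formal development): the contraction $\Phi = \partial_{t,\nu}^{-1} F^\nu$ from the first part of the proof can be iterated on a grid, and the distances between successive iterates, measured in the $L_{2,\nu}$ norm, should decay geometrically with ratio at most $L/\nu$. The nonlinearity $F(u)(t) = \sin(2u(t)) + \mathbb{1}_{[0,1]}(t)$ (so $L = 2$) and the choice $\nu = 4$ in the Python sketch below are illustrative assumptions.

```python
import numpy as np

# Sketch of the fixed-point iteration u_{n+1} = Phi(u_n) = I_nu F(u_n)
# (assumed example: F(u)(t) = sin(2 u(t)) + 1_{[0,1]}(t), so Lip(F) <= L = 2,
#  and nu = 4 > L, giving a contraction constant <= L/nu = 0.5).
nu, L = 4.0, 2.0
t = np.linspace(-1.0, 6.0, 70001)
dt = t[1] - t[0]
w = np.exp(-2.0 * nu * t)                     # L_{2,nu} weight

def F(u):
    return np.sin(2.0 * u) + ((t >= 0.0) & (t <= 1.0))

def Phi(u):                                   # u |-> int_{-inf}^t F(u)(s) ds
    return np.cumsum(F(u)) * dt

def dist(u, v):                               # distance in L_{2,nu}(R)
    return np.sqrt(np.sum((u - v)**2 * w) * dt)

u = np.zeros_like(t)
gaps = []
for _ in range(8):
    u_next = Phi(u)
    gaps.append(dist(u_next, u))
    u = u_next

ratios = [gaps[k + 1] / gaps[k] for k in range(len(gaps) - 1)]
assert all(r <= L / nu + 0.05 for r in ratios)   # geometric decay ~ L/nu
```

The ratio bound reflects $\|\partial_{t,\nu}^{-1}\| \leqslant 1/\nu$ combined with the Lipschitz constant of $F$; larger $\nu$ makes the iteration converge faster.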

As a first application of Theorem 4.2.3 we state and prove the classical version of the Theorem of Picard–Lindelöf.

**Theorem 4.2.6 (Picard–Lindelöf—Classical Version)** *Let $H$ be a Hilbert space, $\Omega \subseteq \mathbb{R} \times H$ open, $f\colon \Omega \to H$ continuous, $(t_0, x_0) \in \Omega$. Assume there exists $L \geqslant 0$ such that for all $(t, x), (t, y) \in \Omega$ we have*

$$\|f(t, x) - f(t, y)\| \leqslant L \|x - y\|.$$

*Then, there exists δ >* 0 *such that the initial value problem*

$$\begin{cases} u'(t) = f(t, u(t)) & (t \in (t\_0, t\_0 + \delta)), \\ u(t\_0) = x\_0, \end{cases} \tag{4.1}$$

*admits a unique continuously differentiable solution, $u\colon [t_0, t_0 + \delta] \to H$, which satisfies $(t, u(t)) \in \Omega$ for all $t \in [t_0, t_0 + \delta]$.*

*Proof* First of all we observe that we may assume, without loss of generality, that *x*<sup>0</sup> = 0. Indeed, to solve the initial value problem

$$\begin{cases} v'(t) = f(t, v(t) + x_0) & (t \in (t_0, t_0 + \delta)), \\ v(t_0) = 0, \end{cases}$$

for a continuously differentiable *v* : [*t*0*, t*<sup>0</sup> + *δ*] → *H* is equivalent to solving the problem in Theorem 4.2.6 for *<sup>u</sup>* by setting *<sup>u</sup>* <sup>=</sup> *<sup>v</sup>* <sup>+</sup> <sup>1</sup>[*t*0*,t*0+*δ*]*x*0. Appropriately shifting the time coordinate, we may also assume that *t*<sup>0</sup> = 0.

Thus, let $(0, 0) \in \Omega$. Then $[0, \delta'] \times B[0, \varepsilon] \subseteq \Omega$ for some $\delta', \varepsilon > 0$. Denote by $P\colon H \to H$ the projection onto $B[0, \varepsilon]$; that is, for $x \in H$, $Px \in B[0, \varepsilon]$ is the unique element satisfying

$$\|x - Px\|_H = \inf_{y \in B[0,\varepsilon]} \|x - y\|_H.$$

By Exercise 4.4, *P* is Lipschitz continuous with Lipschitz semi-norm bounded by 1. We then define

$$F\colon S_c(\mathbb{R}; H) \to \bigcap_{\nu \geqslant 0} L_{2,\nu}(\mathbb{R}; H),$$

$$g \mapsto \left( t \mapsto \mathbb{1}_{[0,\delta')}(t)\, f(t, P(g(t))) \right)$$

and will prove that $F$ is well-defined and uniformly Lipschitz continuous. Since the mapping $t \mapsto \mathbb{1}_{[0,\delta')}(t) f(t, 0)$ is supported on $[0, \delta']$, we obtain for $\nu \geqslant 0$ that $F(0) \in L_{2,\nu}(\mathbb{R}; H)$. Moreover, for $\nu \geqslant 0$ and $g, h \in S_c(\mathbb{R}; H)$ we estimate

$$\begin{split} & \|F(g) - F(h)\|_{L_{2,\nu}(\mathbb{R};H)}^2 \\ &= \int_{\mathbb{R}} \|F(g)(t) - F(h)(t)\|^2 \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t = \int_0^{\delta'} \|f(t, P(g(t))) - f(t, P(h(t)))\|^2 \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t \\ &\leqslant L^2 \int_0^{\delta'} \|P(g(t)) - P(h(t))\|^2 \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t \leqslant L^2 \int_0^{\delta'} \|g(t) - h(t)\|^2 \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t \\ &\leqslant L^2 \|g - h\|_{L_{2,\nu}(\mathbb{R};H)}^2, \end{split}$$

which shows that *F* is well-defined and uniformly Lipschitz continuous.

By Theorem 4.2.3, there exists *v* ∈ dom*(∂t ,ν)* with *ν>L* such that

$$\partial_{t,\nu} v = F^\nu(v).$$

We read off from $v = \partial_{t,\nu}^{-1} F^\nu(v)$ that $v = 0$ on $(-\infty, 0]$, and that $v$ is continuous by Theorem 4.1.2. Moreover, we obtain that

$$v(t) = \int\_{-\infty}^{t} \mathbb{1}\_{[0,\delta')}(\tau) f(\tau, P(v(\tau))) \,\mathrm{d}\tau = \int\_{0}^{\min\{t,\delta'\}} f(\tau, P(v(\tau))) \,\mathrm{d}\tau,$$

from which we read off that $v$ is continuously differentiable on $(0, \delta')$ since $f$ and $P$ are also continuous. The same equality implies for $0 < t \leqslant \delta := \min\left\{\frac{\varepsilon}{M}, \delta'\right\}$, where $M := \sup_{(t,x) \in [0,\delta'] \times B[0,\varepsilon]} \|f(t, x)\|$, that

$$\|v(t)\| \leqslant \int_0^t \|f(\tau, P(v(\tau)))\| \,\mathrm{d}\tau \leqslant M\delta \leqslant \varepsilon.$$

Thus, $(t, v(t)) \in [0, \delta'] \times B[0, \varepsilon] \subseteq \Omega$ for all $0 \leqslant t \leqslant \delta$, and so $P(v(t)) = v(t)$ for $0 \leqslant t \leqslant \delta$. Thus, $u := v|_{[0,\delta]}$ satisfies (4.1).

Finally, concerning uniqueness, let $\tilde{u}\colon [0, \delta] \to H$ be a continuously differentiable solution of (4.1). Let $\tilde{v}$ be the extension of $\tilde{u}$ by $0$ to the whole of $\mathbb{R}$. Then we get that

$$\begin{split} \mathbb{1}\_{\left( -\infty,\delta \right]} \widetilde{\boldsymbol{v}} &= \mathbb{1}\_{\left( -\infty,\delta \right]} \int\_{0}^{\cdot} \mathbb{1}\_{\left[ 0,\delta' \right)} (\boldsymbol{\tau}) f \left( \boldsymbol{\tau}, \widetilde{\boldsymbol{v}} (\boldsymbol{\tau}) \right) d \boldsymbol{\tau} \\ &= \mathbb{1}\_{\left( -\infty,\delta \right]} \int\_{-\infty}^{\cdot} \mathbb{1}\_{\left[ 0,\delta' \right]} (\boldsymbol{\tau}) f \left( \boldsymbol{\tau}, P(\widetilde{\boldsymbol{v}} (\boldsymbol{\tau})) \right) d \boldsymbol{\tau} \\ &= \mathbb{1}\_{\left( -\infty,\delta \right]} \partial\_{\boldsymbol{t},\boldsymbol{v}}^{-1} F^{\boldsymbol{v}} (\mathbb{1}\_{\left( -\infty,\delta \right]} \widetilde{\boldsymbol{v}}). \end{split}$$

Since $\mathbb{1}_{(-\infty,\delta]} v$ is the unique solution of the equation $w = \mathbb{1}_{(-\infty,\delta]} \partial_{t,\nu}^{-1} F^\nu(w)$, we obtain that $\mathbb{1}_{(-\infty,\delta]} v = \mathbb{1}_{(-\infty,\delta]} \tilde{v}$, which yields $u = \tilde{u}$.
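As a numerical aside (not part of the formal development): the contraction argument behind Theorem 4.2.6 is the classical Picard iteration $u_{n+1}(t) = x_0 + \int_0^t f(s, u_n(s))\,\mathrm{d}s$. The Python sketch below runs this iteration for the illustrative example $f(t, u) = u$, $x_0 = 1$ on $[0, 1]$, where the iterates are the partial sums of the exponential series.

```python
import numpy as np

# Sketch of the classical Picard iteration for u'(t) = f(t, u(t)), u(0) = x0
# (assumed example: f(t, u) = u, x0 = 1 on [0, 1]; exact solution exp).
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
x0 = 1.0

def cumtrapz(y):                      # cumulative trapezoidal integral from 0
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) * 0.5 * dt)])

u = np.full_like(t, x0)               # u_0 == x0
for _ in range(10):
    u = x0 + cumtrapz(u)              # u_{n+1}(t) = x0 + int_0^t u_n(s) ds

err = np.max(np.abs(u - np.exp(t)))
assert err < 1e-4
```

After $n$ iterations the iterate is (up to quadrature error) the Taylor polynomial $\sum_{k=0}^{n} t^k/k!$, so the error decays factorially in $n$, which is the quantitative content of the contraction estimate.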

*Remark 4.2.7* The reason for the proof of the classical Picard–Lindelöf theorem being seemingly complicated is two-fold. First of all, the Hilbert space solution theory is for *L*2-functions rather than continuous (or continuously differentiable) functions. The second, maybe more important point is that the Hilbert space Picard–Lindelöf asserts a solution theory, which provides *global* existence in the time variable. The main body of the proof of the classical Picard–Lindelöf theorem presented here is therefore devoted to 'localisation' of the abstract theorem. Furthermore, note that the method of proof for obtaining uniqueness and the admittance of the initial value rests on causality. This effect will resurface when we discuss partial differential equations.

## **4.3 Delay Differential Equations**

In this section, our study will not be as in-depth as it was for the local Picard–Lindelöf theorem. Of course, the solution theory would not be a very good one if it was only applicable to, arguably, the easiest case of ordinary differential equations. We shall see next that the developed theory applies to more elaborate examples.

In what follows, let *H* be a Hilbert space over K. We start out with a delay differential equation with so-called 'discrete delay'. For this, we introduce, for *h* ∈ R, the *time-shift operator*

$$\tau_h\colon S_c(\mathbb{R}; H) \to \bigcap_{\nu \in \mathbb{R}} L_{2,\nu}(\mathbb{R}; H),$$

$$f \mapsto f(\cdot + h).$$

**Lemma 4.3.1** *Let $\mu \in \mathbb{R}$. The mapping $\tau_h\colon S_c(\mathbb{R}; H) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H)$ is uniformly Lipschitz continuous if and only if $h \leqslant 0$. More precisely, for $\nu \in \mathbb{R}$ we have*

$$\|\tau_h\|_{L(L_{2,\nu}(\mathbb{R};H))} = \mathrm{e}^{h\nu}.$$

*Proof* Let $f \in S_c(\mathbb{R}; H)$. Then for $\nu \in \mathbb{R}$ we compute

$$\|\tau_h f\|_{L_{2,\nu}(\mathbb{R};H)}^2 = \int_{\mathbb{R}} \|f(t+h)\|^2 \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t = \int_{\mathbb{R}} \|f(t)\|^2 \,\mathrm{e}^{-2\nu(t-h)} \,\mathrm{d}t = \|f\|_{L_{2,\nu}(\mathbb{R};H)}^2 \,\mathrm{e}^{2\nu h}.$$

Since $\sup_{\nu \geqslant \mu} \mathrm{e}^{2\nu h} < \infty$ if and only if $h \leqslant 0$, we obtain the equivalence. Moreover, the above equality also yields the norm of $\tau_h$ on $L_{2,\nu}(\mathbb{R}; H)$.
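As a numerical aside (not part of the formal development), the norm identity can be checked on a concrete example; the choices $f = \mathbb{1}_{[0,1]}$, $h = -1$ and $\nu = \frac{3}{2}$ in the Python sketch below are illustrative assumptions.

```python
import numpy as np

# Sketch: verify ||tau_h f||_{L_{2,nu}} = e^{h nu} ||f||_{L_{2,nu}}
# (assumed example: f = 1_{[0,1]}, h = -1, nu = 1.5).
nu, h = 1.5, -1.0
t = np.linspace(-4.0, 6.0, 100001)
dt = t[1] - t[0]

def indicator(lo, hi, s):
    return ((s >= lo) & (s <= hi)).astype(float)

f = indicator(0.0, 1.0, t)
tau_h_f = indicator(0.0, 1.0, t + h)          # (tau_h f)(t) = f(t + h)

def norm(g):
    # Riemann sum for the L_{2,nu} norm
    return np.sqrt(np.sum(g**2 * np.exp(-2.0 * nu * t)) * dt)

ratio = norm(tau_h_f) / norm(f)
assert abs(ratio - np.exp(h * nu)) < 1e-3
```

With $h = -1$ the shifted function lives on $[1, 2]$, where the weight $\mathrm{e}^{-2\nu t}$ is smaller by the exact factor $\mathrm{e}^{-2\nu}$, which is what the lemma records.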

We will re-use $\tau_h$ for the Lipschitz continuous extensions to $L_{2,\nu}(\mathbb{R}; H)$. The well-posedness theorem for delay equations with discrete delay is contained in the next theorem. We note here that we only formulate the respective result for right-hand sides that are globally Lipschitz continuous. With a localisation technique, as has already been carried out for the classical Picard–Lindelöf theorem, it is also possible to obtain local results.

**Theorem 4.3.2** *Let $H$ be a Hilbert space, $\mu \in \mathbb{R}$, $N \in \mathbb{N}$, $h_1, \ldots, h_N \in (-\infty, 0]$, and*

$$G\colon S_c(\mathbb{R}; H^N) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H)$$

*uniformly Lipschitz continuous. Then there exists an $\eta \in \mathbb{R}$ such that for all $\nu \geqslant \eta$ the equation*

$$\partial_{t,\nu} u = G^\nu \left( \tau_{h_1} u, \ldots, \tau_{h_N} u \right)$$

*admits a solution $u \in \operatorname{dom}(\partial_{t,\nu})$ which is unique in $\bigcup_{\nu \geqslant \eta} L_{2,\nu}(\mathbb{R}; H)$. Moreover, for all $a \in \mathbb{R}$ the function $u_a := \mathbb{1}_{(-\infty,a]} u$ satisfies*

$$u_a = \mathbb{1}_{(-\infty,a]} \partial_{t,\nu}^{-1} G^\nu \left( \tau_{h_1} u_a, \ldots, \tau_{h_N} u_a \right).$$

*Proof* The assertion follows from Theorem 4.2.3 applied to $F := G \circ (\tau_{h_1}, \ldots, \tau_{h_N})$ in conjunction with Lemma 4.3.1.

Next, we formulate an initial value problem for a subclass of the latter type of equations.

**Theorem 4.3.3** *Let $h > 0$, $f\colon \mathbb{R}_{\geqslant 0} \times H \times H \to H$ continuous, and $f(\cdot, 0, 0) \in L_{2,\mu}(\mathbb{R}; H)$ for some $\mu > 0$. Assume that there exists $L \geqslant 0$ with*

$$\|f(t, x, y) - f(t, u, v)\| \leqslant L \|(x, y) - (u, v)\| \quad \left( (t, x, y), (t, u, v) \in \mathbb{R}_{\geqslant 0} \times H \times H \right).$$

*Let u*<sup>0</sup> ∈ *C (*[−*h,* 0]; *H). Then the initial value problem*

$$\begin{cases} u'(t) = f(t, u(t), u(t-h)) & (t > 0), \\ u(\tau) = u\_0(\tau) & (\tau \in [-h, 0]) \end{cases} \tag{4.2}$$

*admits a unique continuous solution u*: [−*h,*∞*)* → *H, continuously differentiable on (*0*,*∞*).*

*Proof* For $t < 0$ let $f(t, \cdot, \cdot) := 0$. We define $F\colon S_c(\mathbb{R}; H) \to \bigcap_{\nu \geqslant \mu} L_{2,\nu}(\mathbb{R}; H)$ by

$$F(\phi)(t) := f\left(t,\ \phi(t) + \mathbb{1}_{[0,\infty)}(t) u_0(0),\ \phi(t-h) + \mathbb{1}_{[0,\infty)}(t-h) u_0(0) + \mathbb{1}_{[0,h)}(t) u_0(t-h)\right)$$

for all $t \in \mathbb{R}$. It is easy to see that $F$ is uniformly Lipschitz continuous. Thus, by Theorem 4.2.3, we find $\eta \geqslant \mu$ such that for all $\nu \geqslant \eta$ the equation

$$\partial_{t,\nu} v = F^\nu(v)$$

admits a solution $v \in \bigcap_{\nu \geqslant \eta} \operatorname{dom}(\partial_{t,\nu})$ which is unique in $\bigcup_{\nu \geqslant \eta} L_{2,\nu}(\mathbb{R}; H)$. Note that $\operatorname{spt} F^\nu(v) \subseteq [0, \infty)$. Hence, $v = 0$ on $(-\infty, 0]$. By Theorem 4.1.2, we obtain that $v(0) = 0$. We claim that $u := v + \mathbb{1}_{[0,\infty)}(\cdot) u_0(0) + \mathbb{1}_{[-h,0)} u_0$ is a solution of (4.2). First of all note that $u$ is continuous on $[-h, \infty)$. Next, for $0 < t < h$ we have that $t - h < 0$ and thus $v(t-h) = 0$, and so we see that

$$F^\nu(v)(t) = f\left(t,\ v(t) + \mathbb{1}_{[0,\infty)}(t) u_0(0),\ v(t-h) + \mathbb{1}_{[0,\infty)}(t-h) u_0(0) + \mathbb{1}_{[0,h)}(t) u_0(t-h)\right) = f(t, u(t), u_0(t-h)).$$

Similarly, for $t \geqslant h$ we obtain

$$F^\nu(v)(t) = f(t, u(t), u(t-h))$$

and thus, by continuity of *f* , *u*<sup>0</sup> and *u*, it follows that *v* is continuously differentiable on *(*0*,*∞*)* and

$$u'(t) = v'(t) = \partial_{t,\nu} v(t) = f(t, u(t), u(t-h)).$$

It remains to show uniqueness. For this, let *w*: [−*h,*∞*)* → *H* be a solution of (4.2). Then

$$w(t) = u\_0(0) + \int\_0^t f(s, w(s), w(s - h)) \, \text{d}s \quad (t \ge 0)$$

and $w(t) = u_0(t)$ if $t\in[-h,0]$. Extend $w$ by $0$ on $(-\infty,-h)$ and set $\widetilde{v} := w - \mathbb{1}_{[0,\infty)}(\cdot)u_0(0) - \mathbb{1}_{[-h,0)}u_0$. We infer

$$\begin{aligned} \widetilde{\boldsymbol{v}}(t) &= \int\_0^t f(s, \boldsymbol{w}(s), \boldsymbol{w}(s-h)) \, \mathrm{d}s \\ &= \int\_{-\infty}^t f\left(s, \widetilde{\boldsymbol{v}}(s) + \mathbb{1}\_{[0,\infty)}(s)\boldsymbol{u}\_0(0), \\ &\qquad \qquad \qquad \qquad \qquad \widetilde{\boldsymbol{v}}(s-h) + \mathbb{1}\_{[0,\infty)}(s-h)\boldsymbol{u}\_0(0) + \mathbb{1}\_{[0,h)}(s)\boldsymbol{u}\_0(s-h) \right) \, \mathrm{d}s \end{aligned}$$

for all $t\in\mathbb{R}$. For $a\in\mathbb{R}$ we set $\widetilde{v}_a := \mathbb{1}_{(-\infty,a]}\widetilde{v}\in\bigcap_{\nu\in\mathbb{R}}L_{2,\nu}(\mathbb{R};H)$ and obtain, using the above formula for $\widetilde{v}$,

$$
\widetilde{v}_a = \mathbb{1}_{(-\infty,a]}\,\partial_{t,\nu}^{-1}F^{\nu}(\widetilde{v}_a).
$$

By uniqueness of the solution of

$$\mathbb{1}_{(-\infty,a]} v = \mathbb{1}_{(-\infty,a]}\,\partial_{t,\nu}^{-1}F^{\nu}\left(\mathbb{1}_{(-\infty,a]} v\right),$$

it follows that $\widetilde{v}_a = \mathbb{1}_{(-\infty,a]}v$ for all $a\in\mathbb{R}$ and, thus, $u = w$.
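The existence argument above has a practical counterpart: on each interval of length $h$ the delayed argument is already known, so one can integrate interval by interval (the classical "method of steps"). A minimal numerical sketch, with a right-hand side $f(t,x,y)=-y$, a constant history and an explicit Euler discretisation that are our own illustrative choices:

```python
import numpy as np

def solve_delay_euler(f, u0, h, T, dt=1e-3):
    """Method of steps with explicit Euler for u'(t) = f(t, u(t), u(t-h)),
    u(t) = u0(t) on [-h, 0]. Returns the grid and the values on [-h, T]."""
    n_hist = int(round(h / dt))
    t_grid = np.arange(-n_hist, int(round(T / dt)) + 1) * dt
    u = np.empty_like(t_grid)
    u[: n_hist + 1] = [u0(t) for t in t_grid[: n_hist + 1]]  # prescribed history
    for i in range(n_hist, len(t_grid) - 1):
        # the delayed value u(t - h) is already known: index i - n_hist
        u[i + 1] = u[i] + dt * f(t_grid[i], u[i], u[i - n_hist])
    return t_grid, u

# Example: u'(t) = -u(t-1), u = 1 on [-1, 0]; exactly u(t) = 1 - t on [0, 1]
# and u(t) = t**2/2 - 2*t + 3/2 on [1, 2].
t, u = solve_delay_euler(lambda t, x, y: -y, lambda t: 1.0, h=1.0, T=2.0)
u1 = u[np.argmin(np.abs(t - 1.0))]
u2 = u[np.argmin(np.abs(t - 2.0))]
print(u1, u2)   # approximately 0.0 and -0.5
```

The piecewise-polynomial structure of the exact solution mirrors the case distinction $0<t<h$ versus $t\geqslant h$ in the proof.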

The equation to come involves the whole history of the unknown; that is, the unknown evaluated on $(-\infty,0]$. For a mapping $u\colon\mathbb{R}\to H$ and $t\in\mathbb{R}$ we define the 'history' of $u$ up to time $t$ as $u_t\colon\mathbb{R}_{\leqslant 0}\to H$, $u_t(\theta) := u(t+\theta)$ for all $\theta\in\mathbb{R}_{\leqslant 0}$. Moreover, we define the mapping

$$
u_{(\cdot)}\colon\mathbb{R}\ni t\mapsto u_t,
$$

which maps each $t\in\mathbb{R}$ to the history of $u$ up to time $t$.
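On a uniform grid the history map has a concrete discretisation: $u_t$ is simply a sliding window of samples ending at time $t$. A small illustration (the grid, window length and the sample trajectory $\sin$ are our own choices):

```python
import numpy as np

dt = 0.01
h = 1.0
t_grid = np.arange(-2.0, 3.0 + dt, dt)
u = np.sin(t_grid)                       # sample trajectory

def history(u, t_grid, t, h, dt):
    """Samples of the history u_t(theta) = u(t + theta) for theta in [-h, 0]."""
    i = np.argmin(np.abs(t_grid - t))
    n = int(round(h / dt))
    return u[i - n : i + 1]

seg = history(u, t_grid, 1.5, h, dt)
# u_{1.5}(-1) = u(0.5) and u_{1.5}(0) = u(1.5):
print(seg[0], np.sin(0.5), seg[-1], np.sin(1.5))
```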

**Lemma 4.3.4** *Let $\mu>0$. Then*

$$\Theta\colon S_{\mathrm{c}}(\mathbb{R};H)\to\bigcap_{\nu\geqslant\mu}L_{2,\nu}\big(\mathbb{R};L_2(\mathbb{R}_{\leqslant 0};H)\big),\quad u\mapsto u_{(\cdot)}$$

*is uniformly Lipschitz continuous. More precisely, for all ν >* 0 *we have*

$$\left\|\Theta^{\nu}\right\| = \frac{1}{\sqrt{2\nu}}.$$

*Proof* Let $u\in S_{\mathrm{c}}(\mathbb{R};H)$. Then $(\Theta u)(t) = u_t\in L_2(\mathbb{R}_{\leqslant 0};H)$ for all $t\in\mathbb{R}$ and we compute

$$\begin{split} \left\| \left\| \Theta u \right\| \right\|\_{L\_{2,\nu}\left(\mathbb{R}; L\_{2}(\mathbb{R}\_{\leq 0}; H)\right)}^{2} &= \int\_{\mathbb{R}} \int\_{\mathbb{R}\_{\leq 0}} \left\| u(t+\theta) \right\|^{2} \,\mathrm{d}\theta \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t \\ &= \int\_{\mathbb{R}} \int\_{\mathbb{R}\_{\leq 0}} \left\| u(t) \right\|^{2} \,\mathrm{e}^{-2\nu(t-\theta)} \,\mathrm{d}\theta \,\mathrm{d}t \\ &= \frac{1}{2\nu} \int\_{\mathbb{R}} \left\| u(t) \right\|^{2} \,\mathrm{e}^{-2\nu t} \,\mathrm{d}t. \end{split}$$
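The norm identity in the proof can be checked numerically by approximating both sides of the equality with Riemann sums. A sketch for the (illustrative, our own) choices $u(t)=\mathrm{e}^{-t^2}$ and $\nu=1$:

```python
import numpy as np

# Check: int_R int_{theta <= 0} |u(t+theta)|^2 dtheta e^{-2 nu t} dt
#        = (1/(2 nu)) int_R |u(t)|^2 e^{-2 nu t} dt   for u(t) = exp(-t^2).
nu = 1.0
dt = 0.02
t = np.arange(-12.0, 12.0, dt)
theta = np.arange(-24.0, 0.0, dt) + dt / 2        # midpoint rule in theta

u2 = lambda x: np.exp(-2.0 * x**2)                # |u|^2 for u(t) = exp(-t^2)
lhs = np.sum(u2(t[None, :] + theta[:, None])
             * np.exp(-2.0 * nu * t[None, :])) * dt * dt
rhs = np.sum(u2(t) * np.exp(-2.0 * nu * t)) * dt / (2.0 * nu)
print(lhs, rhs)   # agree to several digits
```

The factor $\frac{1}{2\nu}$ is exactly the square of the Lipschitz constant $\frac{1}{\sqrt{2\nu}}$ in the lemma.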

**Theorem 4.3.5** *Let $H$ be a Hilbert space, $\mu\in\mathbb{R}$ and let $\Phi\colon S_{\mathrm{c}}\big(\mathbb{R};L_2(\mathbb{R}_{\leqslant 0};H)\big)\to\bigcap_{\nu\geqslant\mu}L_{2,\nu}(\mathbb{R};H)$ be uniformly Lipschitz continuous. Then, there exists $\eta>0$ such that for all $\nu\geqslant\eta$ the equation*

$$
\partial_{t,\nu}u = \Phi^{\nu}\big(u_{(\cdot)}\big)
$$

*admits a solution $u\in\bigcap_{\nu\geqslant\eta}\operatorname{dom}(\partial_{t,\nu})$ which is unique in $\bigcap_{\nu\geqslant\eta}L_{2,\nu}(\mathbb{R};H)$.*

*Proof* This is another application of Theorem 4.2.3.

## **4.4 Comments**

In a way, the proof of Theorem 4.2.6 is standard PDE-theory in a nutshell; a solution theory for *Lp*-spaces is used to deduce existence and uniqueness of solutions and a posteriori regularity theory provides more information on the properties of the solution.

Note that—of course—other proofs are available for the Picard–Lindelöf theorem. We chose, however, to present this proof here in order to provide a perspective on classical results. Furthermore, we mention that in order to obtain unique existence for the solution, it suffices to assume that *f* satisfies a uniform Lipschitz condition with respect to the second variable and that *f* is measurable. Continuity of *f* is needed in order to obtain *C*1-solutions.

A more detailed exposition and more examples of the theory applied to delay differential equations can be found in [52] and—in a Banach space setting—[85].

There is also a way of dealing with delay differential equations by expanding the state space the problem is formulated in. In this case, it is possible to make use of the rich theory of *C*0-semigroups. We refer to [10] for this.

Causality is one of the main concepts for evolutionary equations. We have provided this notion for mappings defined on $L_{2,\nu}$-type spaces only. The situation becomes different if one considers merely densely defined mappings. Then it is a priori unclear whether the continuous extension of a Lipschitz continuous mapping is also causal. For this we refer to Exercise 4.7 below and to [51, 131], and [138, Chapter 2] as well as to references mentioned there.

## **Exercises**

#### **Exercise 4.1**

(a) Let *X* be a Banach space, *u*: [*a, b*] → *X* continuous. Show that *v* : *(a, b)* → *X* given by

$$v(t) = \int\_{a}^{t} u(\tau) \,\mathrm{d}\tau$$

is continuously differentiable with $v'(t) = u(t)$ for all $t\in(a,b)$.

(b) Let $H$ be a Hilbert space, and $\nu\in\mathbb{R}$. Let $u\in\operatorname{dom}(\partial_{t,\nu})$ with $\partial_{t,\nu}u$ continuous. Show that $u$ is continuously differentiable and $u' = \partial_{t,\nu}u$.

**Exercise 4.2** Prove Corollary 4.1.3.

**Exercise 4.3** Let *H* be a Hilbert space. Show that

$$\operatorname{dom}(\partial_{t,\nu})\hookrightarrow\mathcal{C}^{1/2}_{\nu}(\mathbb{R};H) := \left\{f\in\mathcal{C}_{\nu}(\mathbb{R};H)\ ;\ \mathrm{e}^{-\nu\cdot}f\text{ is }\tfrac{1}{2}\text{-H\"older continuous}\right\},$$

where a function $g\colon\mathbb{R}\to H$ is said to be $\tfrac{1}{2}$*-Hölder continuous* if

$$\sup\_{\substack{s,t \in \mathbb{R} \\ t \neq s}} \frac{\|g(t) - g(s)\|}{|t - s|^{1/2}} < \infty.$$

**Exercise 4.4** Let *H* be a Hilbert space, *C* ⊆ *H* non-empty, closed and convex. Show that the projection, *P*, of *H* onto *C* defines a Lipschitz continuous mapping with Lipschitz semi-norm bounded by 1, where for *x* ∈ *H*, *P x* ∈ *C* is the unique element satisfying

$$\|x - Px\|_{H} = \inf_{y\in C}\|x - y\|_{H}.$$

**Exercise 4.5** Let $h\colon\mathbb{R}\times\mathbb{R}_{\leqslant 0}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ be continuous satisfying

$$\|h(t,s,x) - h(t,s,y)\| \leqslant L\,\|x-y\| \quad (t\in\mathbb{R},\ s\in\mathbb{R}_{\leqslant 0},\ x,y\in\mathbb{R}^{n})$$

for some $L\geqslant 0$, and $h(\cdot,\cdot,0) = 0$. Let $R>0$ and $u_0\in C(\mathbb{R}_{\leqslant 0};\mathbb{R}^{n})$ have compact support. Show that the initial value problem

$$\begin{cases} u'(t) = \displaystyle\int_{-R}^{0} h\big(t,s,u_t(s)\big)\,\mathrm{d}s & (t>0),\\ u(t) = u_0(t) & (t\leqslant 0)\end{cases}$$

admits a unique continuous solution $u\colon\mathbb{R}\to\mathbb{R}^{n}$, which is continuously differentiable on $\mathbb{R}_{>0}$.

*Hint:* Modify the mapping $\Theta$ from Lemma 4.3.4.

**Exercise 4.6** Let $H$ be a Hilbert space. Show that for a uniformly Lipschitz continuous $\Phi\colon S_{\mathrm{c}}\big(\mathbb{R};L_2(\mathbb{R}_{\leqslant 0};H)^2\big)\to\bigcap_{\nu\geqslant\mu}L_{2,\nu}(\mathbb{R};H)$ the equation

$$
\partial_{t,\nu}u = \Phi^{\nu}\left(u_{(\cdot)},\,(\partial_{t,\nu}u)_{(\cdot)}\right),
$$

admits a unique solution *u* ∈ dom*(∂t ,ν)* for *ν* large enough.

**Exercise 4.7** Let *<sup>D</sup>* <sup>⊆</sup> *<sup>L</sup>*2*(*R*)* be dense and suppose that *<sup>F</sup>* : *<sup>D</sup>* <sup>⊆</sup> *<sup>L</sup>*2*(*R*)* <sup>→</sup> *<sup>L</sup>*2*(*R*)* admits a Lipschitz continuous extension *F*0.

(a) Show that $F_0$ is causal if and only if for all $\phi\in S_{\mathrm{c}}(\mathbb{R};\mathbb{R})$ and all $a\in\mathbb{R}$ there exists $L\geqslant 0$ such that

$$\left| \left\langle \mathbb{1}\_{\left( -\infty, a\right]} \cdot \left( F(f) - F(\mathbf{g}) \right), \phi \right\rangle\_{L\_2(\mathbb{R})} \right| \leqslant L \left\| \mathbb{1}\_{\left( -\infty, a\right]} \cdot \left( f - \mathbf{g} \right) \right\|\_{L\_2(\mathbb{R})} $$

for all *f, g* ∈ *D*; that is, the mapping

$$\left(D,\ \left\|\mathbb{1}_{(-\infty,a]}\cdot(\cdot-\cdot)\right\|_{L_2(\mathbb{R})}\right)\ni f\mapsto F(f)\in\left(L_2(\mathbb{R}),\ \left|\left\langle\mathbb{1}_{(-\infty,a]}\cdot(\cdot-\cdot),\phi\right\rangle_{L_2(\mathbb{R})}\right|\right)$$

is Lipschitz continuous.




**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 5 The Fourier–Laplace Transformation and Material Law Operators**

In this chapter we introduce the Fourier–Laplace transformation and use it to define operator-valued functions of $\partial_{t,\nu}$; the so-called material law operators. These operators will play a crucial role when we deal with partial differential equations. In the equations of classical mathematical physics, like the heat equation, wave equation or Maxwell's equations, the involved material parameters, such as heat conductivity or permeability of the underlying medium, are incorporated within these operators. Hence, these operators are called "material law operators". We start our chapter by defining the Fourier transformation and proving Plancherel's theorem in the Hilbert space-valued case, which states that the Fourier transformation defines a unitary operator on $L_2(\mathbb{R};H)$.

Throughout, let *H* be a complex Hilbert space.

## **5.1 The Fourier Transformation**

We start by defining the Fourier transformation on *<sup>L</sup>*1*(*R; *H )*.

**Definition** For $f\in L_1(\mathbb{R};H)$ we define the *Fourier transform* $\widehat{f}$ *of* $f$ by

$$\widehat{f}(s) := \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e}^{-\mathrm{i}st}f(t)\,\mathrm{d}t\quad(s\in\mathbb{R}).$$

We also introduce

$$C\_b(\mathbb{R}; H) := \{ f \colon \mathbb{R} \to H \text{ } \text{; } f \text{ continuous, bounded} \}$$

endowed with the sup-norm $\left\|\cdot\right\|_{\infty}$.

**Lemma 5.1.1 (Riemann–Lebesgue)** *Let $f\in L_1(\mathbb{R};H)$. Then $\widehat{f}\in C_{\mathrm{b}}(\mathbb{R};H)$ and $\lim_{|t|\to\infty}\widehat{f}(t) = 0$. Moreover,*

$$\|\widehat{f}\|\_{\infty} \le \frac{1}{\sqrt{2\pi}} \|f\|\_1.$$

*Proof* First, note that $\widehat{f}$ is continuous by dominated convergence and bounded with

$$\|\widehat{f}\|\_{\infty} \leqslant \frac{1}{\sqrt{2\pi}} \|f\|\_1.$$

This shows that the mapping

$$L\_1(\mathbb{R}; H) \to C\_b(\mathbb{R}; H), \quad f \mapsto \widehat{f} \tag{5.1}$$

defines a bounded linear operator. Moreover, for $\varphi\in C^{1}_{\mathrm{c}}(\mathbb{R};H)$ we compute

$$\widehat{\varphi}(\mathbf{s}) = \frac{1}{\sqrt{2\pi}} \int\_{\mathbb{R}} \mathbf{e}^{-\mathrm{i}\mathbf{s}t} \varphi(t) \,\mathrm{d}t = \frac{1}{\sqrt{2\pi}} \frac{1}{\mathrm{i}\mathbf{s}} \int\_{\mathbb{R}} \mathbf{e}^{-\mathrm{i}\mathbf{s}t} \varphi'(t) \,\mathrm{d}t$$

for $s\neq 0$ and thus,

$$\limsup_{|s|\to\infty}\left\|\widehat{\varphi}(s)\right\| \leqslant \limsup_{|s|\to\infty}\frac{1}{|s|}\frac{1}{\sqrt{2\pi}}\left\|\varphi'\right\|_{1} = 0,$$

which shows that $\lim_{|s|\to\infty}\widehat{\varphi}(s) = 0$. By the facts that $C^{1}_{\mathrm{c}}(\mathbb{R};H)$ is dense in $L_1(\mathbb{R};H)$ (see Lemma 3.1.8), that $\big\{f\in C_{\mathrm{b}}(\mathbb{R};H)\ ;\ \lim_{|t|\to\infty}f(t) = 0\big\}$ is a closed subspace of $C_{\mathrm{b}}(\mathbb{R};H)$, and that the operator in (5.1) is bounded, the assertion follows.
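The decay can be seen explicitly for a concrete $L_1$ function. For $f=\mathbb{1}_{[0,1]}$ (our own example, not from the text) the transform is available in closed form, and a numerical sample confirms both the bound $\|\widehat{f}\|_\infty\leqslant\frac{1}{\sqrt{2\pi}}\|f\|_1$ and the vanishing at infinity:

```python
import numpy as np

# Fourier transform of the indicator of [0, 1], in closed form:
# fhat(s) = (1/sqrt(2*pi)) * (1 - exp(-i*s)) / (i*s)   for s != 0.
def fhat(s):
    s = np.asarray(s, dtype=float)
    return (1 - np.exp(-1j * s)) / (1j * s) / np.sqrt(2 * np.pi)

s = np.linspace(0.1, 1000.0, 100000)
vals = np.abs(fhat(s))

# sup-norm bound with ||f||_1 = 1:
print(vals.max(), 1 / np.sqrt(2 * np.pi))
# Riemann-Lebesgue: |fhat(s)| = O(1/|s|) here
print(vals[s > 900].max())
```

The $O(1/|s|)$ decay rate reflects the jump discontinuities of $f$; smoother functions decay faster, as the integration-by-parts step in the proof suggests.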

It is our main goal to extend the definition of the Fourier transformation to functions in *<sup>L</sup>*2*(*R; *H )*. For doing so, we make use of the Schwartz space of rapidly decreasing functions.

#### **Definition** We define

$$\mathcal{S}(\mathbb{R}; H) := \left\{ f \in C^{\infty}(\mathbb{R}; H) \; ; \; \forall n, k \in \mathbb{N}\_0 : \; \left( t \mapsto t^k f^{(n)}(t) \right) \in C\_{\mathbf{b}}(\mathbb{R}; H) \right\}$$

to be the *Schwartz space* of rapidly decreasing functions on R with values in *H*.

As usual we abbreviate *<sup>S</sup>(*R*)* := *<sup>S</sup>(*R; <sup>K</sup>*)*.

*Remark 5.1.2 <sup>S</sup>(*R; *H )* is a Fréchet space with respect to the seminorms

$$\mathcal{S}(\mathbb{R}; H) \ni f \mapsto \sup\_{t \in \mathbb{R}} \left\| t^k f^{(n)}(t) \right\| \quad (n, k \in \mathbb{N}\_0).$$

Moreover, $S(\mathbb{R};H)\subseteq\bigcap_{p\in[1,\infty]}L_p(\mathbb{R};H)$. Indeed, $S(\mathbb{R};H)\subseteq L_\infty(\mathbb{R};H)$ by definition, and for $f\in S(\mathbb{R};H)$ and $1\leqslant p<\infty$ we have that

$$\begin{aligned} \int\_{\mathbb{R}} \|f(t)\|^p \, \mathrm{d}t &= \int\_{\mathbb{R}} \frac{1}{(1+|t|)^{2p}} \left\|(1+|t|)^{2}f(t)\right\|^p \, \mathrm{d}t\\ &\leqslant \sup\_{t \in \mathbb{R}} \left\|(1+|t|)^{2}f(t)\right\|^p \int\_{\mathbb{R}} \frac{1}{(1+|t|)^{2p}} \, \mathrm{d}t < \infty. \end{aligned}$$

**Proposition 5.1.3** *For $f\in S(\mathbb{R};H)$ we have $\widehat{f}\in S(\mathbb{R};H)$ and the mapping*

$$\mathcal{S}(\mathbb{R}; H) \to \mathcal{S}(\mathbb{R}; H), \quad f \mapsto \widehat{f}$$

*is bijective. Moreover, for f, g* <sup>∈</sup> *<sup>L</sup>*1*(*R; *H ) we have that*

$$\int\_{\mathbb{R}} \left< \widehat{f}(t), g(t) \right> \, \mathrm{d}t = \int\_{\mathbb{R}} \left< f(t), \widehat{g}(-t) \right> \, \mathrm{d}t. \tag{5.2}$$

*Additionally, if $f,\widehat{f}\in L_1(\mathbb{R};H)$ then*

$$f(t) = \widehat{\widehat{f}\,}(-t)\quad(t\in\mathbb{R}).\tag{5.3}$$

*Proof* Let $f\in S(\mathbb{R};H)$. By Exercise 5.1 we have

$$\widehat{f}\,'(s) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}(-\mathrm{i}t)\,\mathrm{e}^{-\mathrm{i}st}f(t)\,\mathrm{d}t = -\mathrm{i}\,\widehat{\big(t\mapsto tf(t)\big)}(s)\quad(s\in\mathbb{R})\tag{5.4}$$

and

$$s\widehat{f}(s) = \frac{\mathrm{i}}{\sqrt{2\pi}}\int_{\mathbb{R}}(-\mathrm{i}s)\,\mathrm{e}^{-\mathrm{i}st}f(t)\,\mathrm{d}t = -\mathrm{i}\,\widehat{f'}(s)\quad(s\in\mathbb{R}).\tag{5.5}$$

Using these formulas, one can show that $\widehat{f}\in S(\mathbb{R};H)$. Since the bijectivity of the Fourier transformation on $S(\mathbb{R};H)$ would follow from (5.3), it suffices to prove the formulas (5.2) and (5.3). Let $f,g\in L_1(\mathbb{R};H)$. Then we compute using Proposition 3.1.6 and Fubini's theorem

$$\begin{aligned}\int_{\mathbb{R}}\left\langle\widehat{f}(t),g(t)\right\rangle\mathrm{d}t &= \int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\left\langle\int_{\mathbb{R}}\mathrm{e}^{-\mathrm{i}st}f(s)\,\mathrm{d}s,\,g(t)\right\rangle\mathrm{d}t\\ &= \int_{\mathbb{R}}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{\mathrm{i}st}\left\langle f(s),g(t)\right\rangle\mathrm{d}s\,\mathrm{d}t\\ &= \int_{\mathbb{R}}\left\langle f(s),\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e}^{\mathrm{i}st}g(t)\,\mathrm{d}t\right\rangle\mathrm{d}s\\ &= \int_{\mathbb{R}}\left\langle f(s),\widehat{g}(-s)\right\rangle\mathrm{d}s,\end{aligned}$$

which yields (5.2). For proving formula (5.3), we consider the function $\gamma$ defined by $\gamma(t) := \mathrm{e}^{-\frac{t^2}{2}}$ for $t\in\mathbb{R}$. Clearly, $\gamma\in S(\mathbb{R})$. We claim that $\widehat{\gamma} = \gamma$. Indeed, we observe that $\gamma$ solves the initial value problem $y' + ty = 0$ subject to $y(0) = 1$; if we can show that $\widehat{\gamma}$ solves the same initial value problem, then their equality would follow from the uniqueness of the solution. First, we observe that $\widehat{\gamma}(0) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e}^{-\frac{t^2}{2}}\,\mathrm{d}t = 1$. Second, we compute using the formulas (5.4) and (5.5) that

$$\widehat{\gamma}\,'(s) = -\mathrm{i}\,\widehat{\big(t\mapsto t\gamma(t)\big)}(s) = \mathrm{i}\,\widehat{\gamma'}(s) = -s\widehat{\gamma}(s)\quad(s\in\mathbb{R}).$$

Altogether, we have shown that $\widehat{\gamma}$ solves the same initial value problem as $\gamma$ and hence, $\widehat{\gamma} = \gamma$. Let now $f\in L_1(\mathbb{R};H)$ with $\widehat{f}\in L_1(\mathbb{R};H)$, $a>0$ and $x\in H$. Then we compute using (5.2)

$$\begin{aligned}\left\langle\int_{\mathbb{R}}\widehat{f}(t)\gamma(at)\mathrm{e}^{\mathrm{i}st}\,\mathrm{d}t,\,x\right\rangle &= \int_{\mathbb{R}}\left\langle\widehat{f}(t),\gamma(at)x\mathrm{e}^{-\mathrm{i}st}\right\rangle\mathrm{d}t = \int_{\mathbb{R}}\left\langle f(t),\widehat{\big(\gamma(a\cdot)x\mathrm{e}^{-\mathrm{i}s\cdot}\big)}(-t)\right\rangle\mathrm{d}t\\ &= \int_{\mathbb{R}}\left\langle f(t),\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\gamma(ar)x\mathrm{e}^{-\mathrm{i}sr}\mathrm{e}^{\mathrm{i}rt}\,\mathrm{d}r\right\rangle\mathrm{d}t\\ &= \frac{1}{a}\int_{\mathbb{R}}\left\langle f(t),\widehat{\gamma}\left(\frac{s-t}{a}\right)x\right\rangle\mathrm{d}t = \frac{1}{a}\int_{\mathbb{R}}\left\langle f(t),\gamma\left(\frac{s-t}{a}\right)x\right\rangle\mathrm{d}t\\ &= \int_{\mathbb{R}}\left\langle f(s-at),\gamma(t)x\right\rangle\mathrm{d}t = \left\langle\int_{\mathbb{R}}f(s-at)\gamma(t)\,\mathrm{d}t,\,x\right\rangle\end{aligned}$$

for each *<sup>s</sup>* <sup>∈</sup> <sup>R</sup>. Since this holds for all *<sup>x</sup>* <sup>∈</sup> *<sup>H</sup>* we get

$$\int_{\mathbb{R}}\widehat{f}(t)\gamma(at)\mathrm{e}^{\mathrm{i}st}\,\mathrm{d}t = \int_{\mathbb{R}}f(s-at)\gamma(t)\,\mathrm{d}t\quad(s\in\mathbb{R}).$$

Letting *a* → 0 in the latter equality, we obtain

$$\int_{\mathbb{R}}\widehat{f}(t)\mathrm{e}^{\mathrm{i}st}\,\mathrm{d}t = \lim_{a\to 0}\int_{\mathbb{R}}f(s-at)\gamma(t)\,\mathrm{d}t\quad(s\in\mathbb{R}),\tag{5.6}$$

where we have used dominated convergence for the term on the left-hand side. In order to compute the limit on the right-hand side, we first observe that

$$\int_{\mathbb{R}}\left\|\int_{\mathbb{R}}f(s-at)\gamma(t)\,\mathrm{d}t\right\|\mathrm{d}s \leqslant \int_{\mathbb{R}}\int_{\mathbb{R}}\|f(s-at)\|\,\mathrm{d}s\,\gamma(t)\,\mathrm{d}t = \|f\|_{1}\,\|\gamma\|_{1},$$

and hence, for each *a >* 0 the operator

$$S_a\colon L_1(\mathbb{R};H)\to L_1(\mathbb{R};H),\quad f\mapsto\left(s\mapsto\int_{\mathbb{R}}f(s-at)\gamma(t)\,\mathrm{d}t\right),$$

is bounded with $\|S_a\|\leqslant\|\gamma\|_1$. Moreover, since $S_a\psi\to\psi\,\|\gamma\|_1$ in $L_1(\mathbb{R};H)$ as $a\to 0$ for $\psi\in C_{\mathrm{c}}(\mathbb{R};H)$, we infer that

$$S_af\to f\,\|\gamma\|_1\quad(a\to 0),$$

for each *<sup>f</sup>* <sup>∈</sup> *<sup>L</sup>*1*(*R; *H )*. Hence, passing to a suitable sequence *(an)n* in <sup>R</sup>*>*<sup>0</sup> tending to 0, we get

$$\left(S_{a_n}f\right)(s)\to f(s)\,\|\gamma\|_1\quad(\text{a.e. }s\in\mathbb{R}).$$

Using this identity for the right-hand side of (5.6), we get

$$\int_{\mathbb{R}}\widehat{f}(t)\mathrm{e}^{\mathrm{i}st}\,\mathrm{d}t = f(s)\,\|\gamma\|_1\quad(\text{a.e. }s\in\mathbb{R}),$$

and since $\|\gamma\|_1 = \sqrt{2\pi}$, we derive (5.3).
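The key step of the proof, the fixed-point property $\widehat{\gamma}=\gamma$ of the Gaussian, can also be confirmed by direct quadrature. A sketch (window size, grid and sample points are our own choices; the Riemann sum is extremely accurate here since $\gamma$ and all its derivatives vanish rapidly at the truncation boundary):

```python
import numpy as np

# Check gamma_hat(s) = gamma(s) for gamma(t) = exp(-t^2/2) by a Riemann sum
# on [-20, 20]; the truncated tail contributes only about exp(-200).
t = np.linspace(-20.0, 20.0, 40001)
dt = t[1] - t[0]
gamma = np.exp(-t**2 / 2)

err = max(
    abs(np.sum(np.exp(-1j * s * t) * gamma) * dt / np.sqrt(2 * np.pi)
        - np.exp(-s**2 / 2))
    for s in (0.0, 0.7, 1.3, 2.5)
)
print(err)   # near machine precision
```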

With these preparations at hand, we are now able to prove the main theorem of this section.

**Theorem 5.1.4 (Plancherel)** *The mapping*

$$\mathcal{F} \colon \mathcal{S}(\mathbb{R}; H) \subseteq L\_2(\mathbb{R}; H) \to L\_2(\mathbb{R}; H), \ f \mapsto \widehat{f} \ $$

*extends to a unitary operator on $L_2(\mathbb{R};H)$, again denoted by $\mathcal{F}$, the* Fourier transformation. *Moreover, $\mathcal{F}^* = \mathcal{F}^{-1}$ is given by $f\mapsto\widehat{f}(-\cdot)$.*

*Proof* Using (5.2) and (5.3) we obtain that

$$\left\langle\widehat{f},\widehat{g}\right\rangle_2 = \int_{\mathbb{R}}\left\langle\widehat{f}(t),\widehat{g}(t)\right\rangle\mathrm{d}t = \int_{\mathbb{R}}\left\langle f(t),\widehat{\widehat{g}\,}(-t)\right\rangle\mathrm{d}t = \int_{\mathbb{R}}\left\langle f(t),g(t)\right\rangle\mathrm{d}t = \langle f,g\rangle_2$$

for all *f, g* <sup>∈</sup> *<sup>S</sup>(*R; *H )* and thus, in particular,

$$\|f\|\_{2} = \|\mathcal{F}f\|\_{2} \,. \tag{5.7}$$

Moreover, $\operatorname{dom}(\mathcal{F}) = \operatorname{ran}(\mathcal{F}) = S(\mathbb{R};H)$ is dense in $L_2(\mathbb{R};H)$ and hence, the first assertion follows by Exercise 5.2. As $\mathcal{F}$ is unitary, we have $\mathcal{F}^* = \mathcal{F}^{-1}$, thus, by (5.2) applied to $f,g\in S(\mathbb{R};H)$, we read off (using Proposition 2.3.8) that $\mathcal{F}^{-1} = \big(f\mapsto\widehat{f}(-\cdot)\big)$, which yields all the claims of the theorem at hand.
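Plancherel's identity has a discrete counterpart that the FFT satisfies exactly, which makes a quick numerical check possible. With grid spacing $\mathrm{d}t$ and $N$ points, $(\mathrm{d}t/\sqrt{2\pi})\,\mathrm{DFT}$ approximates $\mathcal{F}$ on frequencies spaced $\mathrm{d}s = 2\pi/(N\,\mathrm{d}t)$; the sample function and grid below are our own choices:

```python
import numpy as np

N, dt = 2**14, 0.01
t = (np.arange(N) - N // 2) * dt
f = np.exp(-t**2 / 2) * np.cos(3 * t)        # sample function

fhat = dt / np.sqrt(2 * np.pi) * np.fft.fft(f)
ds = 2 * np.pi / (N * dt)

norm_f = np.sqrt(dt * np.sum(np.abs(f) ** 2))
norm_fhat = np.sqrt(ds * np.sum(np.abs(fhat) ** 2))
print(norm_f, norm_fhat)    # equal up to machine precision
```

The agreement is exact (not merely approximate) because the scaling above turns the statement into the discrete Parseval identity $\sum_j|F_j|^2 = N\sum_k|f_k|^2$.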

*Remark 5.1.5* We emphasise that for $f\in L_2(\mathbb{R};H)$ the Fourier transform $\mathcal{F}f$ is not given by the integral expression for $L_1$-functions, simply because the integral does not need to exist. However, by dominated convergence

$$\mathcal{F}f = \lim_{R\to\infty}\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}\mathrm{e}^{-\mathrm{i}(\cdot)t}f(t)\,\mathrm{d}t,$$

where the limit is taken in *<sup>L</sup>*2*(*R; *H ).*

## **5.2 The Fourier–Laplace Transformation and Its Relation to the Time Derivative**

We now use the Fourier transformation to define an analogous transformation on our exponentially weighted *L*2-type spaces; the so-called Fourier–Laplace transformation. We recall from Corollary 3.2.5 that for *<sup>ν</sup>* <sup>∈</sup> <sup>R</sup> the mapping

$$\exp(-\nu\mathrm{m})\colon L_{2,\nu}(\mathbb{R};H)\to L_2(\mathbb{R};H),\ f\mapsto\big(t\mapsto\mathrm{e}^{-\nu t}f(t)\big)$$

is unitary. In a similar fashion, we obtain that

$$\exp(-\nu\mathrm{m})\colon L_{1,\nu}(\mathbb{R};H)\to L_1(\mathbb{R};H),\ f\mapsto\big(t\mapsto\mathrm{e}^{-\nu t}f(t)\big)$$

defines an isometry.

**Definition** Let *<sup>ν</sup>* <sup>∈</sup> <sup>R</sup>. We define the *Fourier–Laplace transformation* as

$$\mathcal{L}\_{\nu} \colon L\_{2,\nu}(\mathbb{R}; H) \to L\_2(\mathbb{R}; H), \ f \mapsto \mathcal{F} \exp(-\nu \mathbf{m}) f.$$

We can also consider the Fourier–Laplace transformation as a mapping from *<sup>L</sup>*1*,ν(*R; *H )* to *<sup>C</sup>*b*(*R; *H )*; that is,

$$\mathcal{L}\_{\nu} \colon L\_{1,\nu}(\mathbb{R}; H) \to C\_{\mathbb{b}}(\mathbb{R}; H), \ f \mapsto \mathcal{F} \exp(-\nu \mathbf{m}) f.$$

*Remark 5.2.1* Note that *L<sup>ν</sup>* = *F* exp*(*−*ν*m*)* is unitary as an operator from *<sup>L</sup>*2*,ν(*R; *H )* to *<sup>L</sup>*2*(*R; *H )* since it is the composition of two unitary operators. For *ϕ* ∈ *C*<sup>∞</sup> <sup>c</sup> *(*R; *H )*, we have the expression

$$\left(\mathcal{L}_{\nu}\varphi\right)(t) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e}^{-(\mathrm{i}t+\nu)s}\varphi(s)\,\mathrm{d}s\quad(t\in\mathbb{R}),$$

which shows that *L<sup>ν</sup>* can be interpreted as a shifted variant of the Fourier transformation, where the real part in the exponent equals *ν* instead of zero.
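For a concrete causal function the shifted-transform formula is computable in closed form and easy to verify. For $f(s) = \mathrm{e}^{-s}\mathbb{1}_{[0,\infty)}(s)$ and $\nu>-1$ one gets $(\mathcal{L}_\nu f)(t) = \frac{1}{\sqrt{2\pi}}\frac{1}{1+\nu+\mathrm{i}t}$; a quadrature sketch (the kernel and parameter values are our own choices):

```python
import numpy as np

# Fourier-Laplace transform of f(s) = exp(-s) on [0, inf):
# (L_nu f)(t) = (1/sqrt(2*pi)) / (1 + nu + i*t)   for nu > -1.
nu = 0.5
s = np.linspace(0.0, 60.0, 600001)
ds = s[1] - s[0]
f = np.exp(-s)

errs = []
for tt in (-2.0, 0.0, 1.0, 5.0):
    quad = np.sum(np.exp(-(1j * tt + nu) * s) * f) * ds / np.sqrt(2 * np.pi)
    exact = 1 / (1 + nu + 1j * tt) / np.sqrt(2 * np.pi)
    errs.append(abs(quad - exact))
print(max(errs))
```

The exponential weight $\mathrm{e}^{-\nu s}$ is what makes the integral converge for the merely exponentially bounded causal functions appearing later.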

Our next goal is to show that the Fourier–Laplace transformation provides a spectral representation of our time derivative, *∂t ,ν*.

**Definition** Let *<sup>V</sup>* : <sup>R</sup> <sup>→</sup> <sup>K</sup> be measurable. We define the *multiplication-by-V operator* as

$$V(\mathrm{m})\colon\operatorname{dom}(V(\mathrm{m}))\subseteq L_2(\mathbb{R};H)\to L_2(\mathbb{R};H),\ f\mapsto\big(t\mapsto V(t)f(t)\big)$$

with

$$\text{dom}(V(\mathfrak{m})) := \left\{ f \in L\_2(\mathbb{R}; H) \; ; \; \left( t \mapsto V(t)f(t) \right) \in L\_2(\mathbb{R}; H) \right\}.$$

In particular, if *V* is the identity on R we will just write m instead of id*(*m*)* and call it the *multiplication-by-the-argument operator*.

*Remark 5.2.2* Note that the multiplication-by-$V$ operator is a vector-valued analogue of the multiplication operator seen in Theorems 2.4.3 and 2.4.7. The statements in these theorems generalise (easily) to the vector-valued situation at hand. Thus, as in Theorem 2.4.3, one shows that m is selfadjoint. Moreover, when $H\neq\{0\}$, in a similar fashion to the arguments carried out in Theorem 2.4.7 one shows that

$$
\sigma \left( \mathbf{m} \right) = \mathbb{R}.
$$

In order to avoid trivial cases, we shall assume throughout that $H\neq\{0\}$.

**Theorem 5.2.3** *Let <sup>ν</sup>* <sup>∈</sup> <sup>R</sup>*. Then*

$$
\partial_{t,\nu} = \mathcal{L}_{\nu}^{*}(\mathrm{i}\mathrm{m}+\nu)\mathcal{L}_{\nu}.
$$

*In particular,*

$$
\sigma\left(\partial_{t,\nu}\right) = \left\{\mathrm{i}t+\nu\ ;\ t\in\mathbb{R}\right\}.
$$

*Proof* We first prove the assertion for $\nu\neq 0$ and show that

$$I\_{\boldsymbol{\nu}} = \mathcal{L}\_{\boldsymbol{\nu}}^{\*} \left( \frac{1}{\mathrm{im} + \boldsymbol{\nu}} \right) \mathcal{L}\_{\boldsymbol{\nu}}.$$

The assertion will then follow by Theorem 2.4.3(d). Note that $\frac{1}{\mathrm{i}\mathrm{m}+\nu}\in L(L_2(\mathbb{R};H))$ by Proposition 2.4.6, and hence, both operators $I_{\nu}$ and $\mathcal{L}_{\nu}^{*}\big(\frac{1}{\mathrm{i}\mathrm{m}+\nu}\big)\mathcal{L}_{\nu}$ are bounded and defined on the whole of $L_{2,\nu}(\mathbb{R};H)$. Thus, it suffices to prove the equality on a dense subset of $L_{2,\nu}(\mathbb{R};H)$, like $C_{\mathrm{c}}(\mathbb{R};H)$. We will just do the computation for the case when $\nu>0$. So, let $\varphi\in C_{\mathrm{c}}(\mathbb{R};H)$ and compute

$$\begin{aligned}\left(\mathcal{L}_{\nu}I_{\nu}\varphi\right)(t) &= \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathrm{e}^{-(\mathrm{i}t+\nu)s}\int_{-\infty}^{s}\varphi(r)\,\mathrm{d}r\,\mathrm{d}s = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\int_{r}^{\infty}\mathrm{e}^{-(\mathrm{i}t+\nu)s}\,\mathrm{d}s\,\varphi(r)\,\mathrm{d}r\\ &= \frac{1}{\sqrt{2\pi}}\frac{1}{\mathrm{i}t+\nu}\int_{\mathbb{R}}\mathrm{e}^{-(\mathrm{i}t+\nu)r}\varphi(r)\,\mathrm{d}r = \frac{1}{\mathrm{i}t+\nu}\left(\mathcal{L}_{\nu}\varphi\right)(t)\end{aligned}$$

for $t\in\mathbb{R}$. For $\nu<0$ the computation is analogous. In the case when $\nu=0$ we observe that

$$\begin{split} \partial\_{t,0} &= \exp(-\nu \text{m}) (\partial\_{t,\nu} - \nu) \exp(-\nu \text{m})^{-1} = \exp(-\nu \text{m}) \mathcal{L}\_{\nu}^{\*} (\text{im} + \nu - \nu) \mathcal{L}\_{\nu} \exp(-\nu \text{m})^{-1} \\ &= \mathcal{L}\_{0}^{\*} (\text{im}) \mathcal{L}\_{0} . \end{split}$$
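The spectral representation of Theorem 5.2.3 can be tried out numerically, with the FFT standing in for $\mathcal{L}_\nu$: differentiate by weighting with $\mathrm{e}^{-\nu t}$, multiplying by $\mathrm{i}\xi+\nu$ in frequency, and weighting back. A sketch (the test function, grid and $\nu$ are our own choices):

```python
import numpy as np

# d/dt u  =  e^{nu t} * IFFT( (i*xi + nu) * FFT( e^{-nu t} * u ) )
nu = 0.3
N, dt = 2**12, 0.005
t = (np.arange(N) - N // 2) * dt
xi = 2 * np.pi * np.fft.fftfreq(N, d=dt)     # angular frequencies

u = np.exp(-t**2)
w = np.exp(-nu * t) * u                      # pass to L_2(R) via exp(-nu m)
du = np.exp(nu * t) * np.fft.ifft((1j * xi + nu) * np.fft.fft(w)).real

err = np.max(np.abs(du - (-2 * t * np.exp(-t**2))))
print(err)   # spectral accuracy for this smooth, rapidly decaying u
```

The algebra behind this is exactly the one in the proof: $\frac{\mathrm{d}}{\mathrm{d}t}\big(\mathrm{e}^{-\nu t}u\big) + \nu\,\mathrm{e}^{-\nu t}u = \mathrm{e}^{-\nu t}u'$.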

## **5.3 Material Law Operators**

Using the multiplication operator representation of *∂t ,ν* via the Fourier–Laplace transformation, we can assign a functional calculus to this operator. We will do this in the following and define operator-valued functions of *∂t ,ν*. The class of functions used for this calculus are the so-called material laws. We begin by defining this function class.

**Definition** A mapping $M\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H)$ is called a *material law* if $\operatorname{dom}(M)$ is open, $M$ is holomorphic and there exists some $\nu\in\mathbb{R}$ such that $\mathbb{C}_{\mathrm{Re}>\nu}\subseteq\operatorname{dom}(M)$ and

$$\|M\|_{\infty,\mathbb{C}_{\mathrm{Re}>\nu}} := \sup_{z\in\mathbb{C}_{\mathrm{Re}>\nu}}\|M(z)\| < \infty.$$

Moreover, we set

$$\mathrm{s}_{\mathrm{b}}(M) := \inf\left\{\nu\in\mathbb{R}\ ;\ \mathbb{C}_{\mathrm{Re}>\nu}\subseteq\operatorname{dom}(M)\text{ and }\|M\|_{\infty,\mathbb{C}_{\mathrm{Re}>\nu}}<\infty\right\}$$

to be the *abscissa of boundedness* of *M*.

*Example 5.3.1* Let us state various examples of material laws.

(a) Polynomials in $z^{-1}$: Let $n\in\mathbb{N}_0$, $M_0,\dots,M_n\in L(H)$. Then

$$M(z) := \sum\_{k=0}^{n} z^{-k} M\_k \quad (z \in \mathbb{C} \backslash \{0\})$$


defines a material law with

$$\mathrm{s}_{\mathrm{b}}(M) = \begin{cases}-\infty & \text{if } M_1=\dots=M_n=0,\\ 0 & \text{otherwise.}\end{cases}$$

(b) Series in $z^{-1}$: Let $(M_k)_{k\in\mathbb{N}_0}$ in $L(H)$ be such that $\sum_{k=0}^{\infty}\|M_k\|\,r^{-k}<\infty$ for some $r>0$. Then

$$M(z) := \sum_{k=0}^{\infty}z^{-k}M_k\quad(z\in\mathbb{C},\ |z|\geqslant r)$$

defines a material law with $\mathrm{s}_{\mathrm{b}}(M)\leqslant r$.

(c) Exponentials: Let $h\in\mathbb{R}$, $M_0\in L(H)$ where $M_0\neq 0$ and set

$$M(z) := M\_0 \mathbf{e}^{zh} \quad (z \in \mathbb{C}).$$

Then $M$ is a material law if and only if $h\leqslant 0$. In this case, $\mathrm{s}_{\mathrm{b}}(M) = -\infty$.

(d) Laplace transforms: Let $\nu\in\mathbb{R}$ and $k\in L_{1,\nu}(\mathbb{R})$ with $\operatorname{spt}k\subseteq\mathbb{R}_{\geqslant 0}$. Then

$$M(z) := \sqrt{2\pi}\,(\mathcal{L}k)(z) := \int_{0}^{\infty}\mathrm{e}^{-zt}k(t)\,\mathrm{d}t\quad(z\in\mathbb{C}_{\mathrm{Re}>\nu})$$

defines a material law with $\mathrm{s}_{\mathrm{b}}(M)\leqslant\nu$.

(e) Fractional powers: Let $M_0\in L(H)$, $M_0\neq 0$, $\alpha\in\mathbb{R}$ and set

$$M(z) := M\_0 z^{-\alpha} \quad (z \in \mathbb{C} \backslash \mathbb{R}\_{\leq 0}),$$

where we set

$$\left(r\mathbf{e}^{\mathrm{i}\theta}\right)^{-\alpha} := r^{-\alpha}\mathbf{e}^{-\mathrm{i}\alpha\theta} \quad (r > 0, \theta \in (-\pi, \pi)).$$

Then $M$ is a material law if and only if $\alpha \geqslant 0$, and

$$\text{s}\_{\text{b}}(M) = \begin{cases} -\infty & \text{if } \alpha = 0, \\ 0 & \text{otherwise.} \end{cases}$$

For material laws $M$ we now define the corresponding material law operators in terms of the functional calculus induced by the spectral representation of $\partial\_{t,\nu}$.

**Proposition 5.3.2** *Let $M \colon \mathrm{dom}(M) \subseteq \mathbb{C} \to L(H)$ be a material law. Then, for $\nu > \mathrm{s}\_{\mathrm{b}}(M)$, the operator*

$$M(\text{im} + \nu) \colon L\_2(\mathbb{R}; H) \to L\_2(\mathbb{R}; H), \ f \mapsto \left(t \mapsto M(\text{it} + \nu)f(t)\right)$$

*is bounded. Moreover, we define the* material law operator

$$M(\partial\_{t,\nu}) := \mathcal{L}\_{\nu}^{\*} M(\mathrm{im} + \nu) \mathcal{L}\_{\nu} \in L(L\_{2,\nu}(\mathbb{R}; H)),$$

*and obtain*

$$\left\|M(\partial\_{t,\nu})\right\| \leqslant \left\|M\right\|\_{\infty,\mathbb{C}\_{\mathrm{Re}\geqslant\nu}}.$$

*Proof* The proof is clear.

*Remark 5.3.3* The set of material laws is an algebra, and the mapping assigning to a material law its corresponding material law operator is an algebra homomorphism in the following sense. For $j \in \{1,2\}$ let $M\_j \colon \mathrm{dom}(M\_j) \subseteq \mathbb{C} \to L(H)$ be material laws and $\lambda \in \mathbb{C}$. Then $M\_1 + M\_2$ (with domain $\mathrm{dom}(M\_1) \cap \mathrm{dom}(M\_2)$), $\lambda M\_1$ and $M\_1 \cdot M\_2$ (with domain $\mathrm{dom}(M\_1) \cap \mathrm{dom}(M\_2)$) are material laws as well. Moreover, $\mathrm{s}\_{\mathrm{b}}(M\_1 + M\_2), \mathrm{s}\_{\mathrm{b}}(M\_1 \cdot M\_2) \leqslant \max\{\mathrm{s}\_{\mathrm{b}}(M\_1), \mathrm{s}\_{\mathrm{b}}(M\_2)\}$. Furthermore, if $M\_2(z)$ is a scalar for all $z \in \mathrm{dom}(M\_2)$, then for $\nu > \max\{\mathrm{s}\_{\mathrm{b}}(M\_1), \mathrm{s}\_{\mathrm{b}}(M\_2)\}$ we have $(M\_1 M\_2)(\partial\_{t,\nu}) = M\_1(\partial\_{t,\nu}) M\_2(\partial\_{t,\nu}) = M\_2(\partial\_{t,\nu}) M\_1(\partial\_{t,\nu}) = (M\_2 M\_1)(\partial\_{t,\nu})$.

*Example 5.3.4* We now revisit the material laws presented in Example 5.3.1 and compute their corresponding operators $M(\partial\_{t,\nu})$.

(a) Let $n \in \mathbb{N}\_0$, $M\_0,\dots,M\_n \in L(H)$ and

$$M(z) := \sum\_{k=0}^{n} z^{-k} M\_k \quad (z \in \mathbb{C} \backslash \{0\}).$$

Then, for *ν >* 0, one obviously has

$$M(\partial\_{t,\nu}) = \sum\_{k=0}^{n} \partial\_{t,\nu}^{-k} M\_k,$$

due to Theorem 5.2.3.

(b) Let $(M\_k)\_{k\in\mathbb{N}\_0}$ in $L(H)$ be such that $\sum\_{k=0}^{\infty} \|M\_k\| r^{-k} < \infty$ for some $r > 0$ and

$$M(z) := \sum\_{k=0}^{\infty} z^{-k} M\_k \quad (z \in \mathbb{C} \backslash \{0\}).$$

Then, for $\nu > r$, one has

$$M(\partial\_{t,\nu}) = \sum\_{k=0}^{\infty} \partial\_{t,\nu}^{-k} M\_k,$$

again on account of Theorem 5.2.3.


(c) Let $h \leqslant 0$, $M\_0 \in L(H)$ and

$$M(z) := M\_0 \mathbf{e}^{zh} \quad (z \in \mathbb{C}).$$

Then, for $\nu \in \mathbb{R}$, we have

$$M(\partial\_{t,\nu}) = M\_0 \tau\_h,$$

where

$$\tau\_h \colon L\_{2,\nu}(\mathbb{R}; H) \to L\_{2,\nu}(\mathbb{R}; H), \ f \mapsto \left(t \mapsto f(t+h)\right).$$

Indeed, for $\varphi \in C\_{\mathrm{c}}(\mathbb{R}; H)$ we compute

$$\begin{split} \left( \mathcal{L}\_{\nu} M\_{0} \tau\_{h} \varphi \right) (t) &= \frac{1}{\sqrt{2\pi}} \int\_{\mathbb{R}} \mathbf{e}^{-(\mathrm{i}t+\nu)s} M\_{0} \varphi (s+h) \, \mathrm{d}s \\ &= M\_{0} \frac{1}{\sqrt{2\pi}} \int\_{\mathbb{R}} \mathbf{e}^{-(\mathrm{i}t+\nu)(s-h)} \varphi(s) \, \mathrm{d}s = M(\mathrm{i}t+\nu) \left( \mathcal{L}\_{\nu} \varphi \right) (t) \end{split}$$

for all $t \in \mathbb{R}$, where we have used Proposition 3.1.6 in the second line. Hence,

$$M\_0 \tau\_h \varphi = \mathcal{L}\_{\nu}^{\*} M(\mathrm{im} + \nu) \mathcal{L}\_{\nu} \varphi = M(\partial\_{t,\nu}) \varphi,$$

and since $C\_{\mathrm{c}}(\mathbb{R}; H)$ is dense in $L\_{2,\nu}(\mathbb{R}; H)$, the assertion follows.
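The computation above can be checked numerically for a scalar $M\_0$. The following sketch (not from the book; plain Python with midpoint Riemann sums, all names ours) takes $\varphi = \mathbb{1}\_{[0,1]}$ and verifies that the Fourier–Laplace transform of $M\_0\tau\_h\varphi$ agrees with $M(\mathrm{i}t+\nu)(\mathcal{L}\_\nu\varphi)(t)$, where $M(z) = M\_0\mathbf{e}^{zh}$:

```python
import cmath
import math

def fourier_laplace(f, a, b, t, nu, n=20_000):
    # midpoint Riemann sum for (L_nu f)(t) = (2 pi)^(-1/2) \int_a^b e^{-(i t + nu) s} f(s) ds,
    # for f supported in [a, b]
    step = (b - a) / n
    acc = 0j
    for k in range(n):
        s = a + (k + 0.5) * step
        acc += cmath.exp(-(1j * t + nu) * s) * f(s)
    return acc * step / math.sqrt(2 * math.pi)

phi = lambda s: 1.0                 # phi = indicator of [0, 1], evaluated on its support
M0, h, nu, t = 2.0, -0.5, 1.0, 0.7  # scalar M_0, shift h <= 0, weight nu, frequency t

# left-hand side: (L_nu M_0 tau_h phi)(t); tau_h phi is the indicator of [-h, 1 - h]
lhs = M0 * fourier_laplace(phi, -h, 1.0 - h, t, nu)
# right-hand side: M(it + nu) (L_nu phi)(t) with M(z) = M_0 e^{z h}
rhs = M0 * cmath.exp((1j * t + nu) * h) * fourier_laplace(phi, 0.0, 1.0, t, nu)
```

The two quadrature grids are translates of one another, so the discrete identity reproduces the substitution $s \mapsto s - h$ from the proof exactly, up to floating-point roundoff.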

(d) Let $\nu \in \mathbb{R}$ and $k \in L\_{1,\nu}(\mathbb{R})$ with $\operatorname{spt} k \subseteq \mathbb{R}\_{\geqslant 0}$ and

$$M(z) := \sqrt{2\pi} (\mathcal{L}k)(z) \quad (z \in \mathbb{C}\_{\text{Re} > \nu})\,.$$

Then, by Exercise 5.4,

$$M(\partial\_{t,\mu}) = k \ast$$

for each $\mu > \nu$.

(e) Let $M\_0 \in L(H)$, $\alpha > 0$ and

$$M(z) := M\_0 z^{-\alpha} \quad (z \in \mathbb{C} \backslash \mathbb{R}\_{\leq 0}).$$

Then for *ν >* 0 we have

$$\left(M(\partial\_{t,\nu})f\right)(t) = M\_0 \int\_{-\infty}^{t} \frac{1}{\Gamma(\alpha)} (t-s)^{\alpha-1} f(s) \, \mathrm{d}s \quad (\text{a.e. } t \in \mathbb{R}) \tag{5.8}$$

for each $f \in L\_{2,\nu}(\mathbb{R}; H)$; see Exercise 5.5. This formula gives rise to the definition

$$\left(\partial\_{t,\nu}^{-\alpha}f\right)(t) := \int\_{-\infty}^{t} \frac{1}{\Gamma(\alpha)} (t-s)^{\alpha-1} f(s) \, \mathrm{d}s \quad (t \in \mathbb{R}),$$

which is known as the *(Riemann–Liouville) fractional integral of order α*.
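For $f = \mathbb{1}\_{[0,\infty)}$ the fractional integral can be evaluated in closed form, namely $(\partial\_{t,\nu}^{-\alpha}f)(t) = t^\alpha/\Gamma(\alpha+1)$ for $t \geqslant 0$, which gives a quick numerical sanity check. A minimal sketch (not from the book; a midpoint Riemann sum, names ours):

```python
import math

def frac_integral(f, alpha, t, lower=0.0, n=200_000):
    # midpoint Riemann sum for the Riemann-Liouville fractional integral
    # (1/Gamma(alpha)) \int_lower^t (t - s)^(alpha - 1) f(s) ds
    step = (t - lower) / n
    acc = 0.0
    for k in range(n):
        s = lower + (k + 0.5) * step
        acc += (t - s) ** (alpha - 1.0) * f(s)
    return acc * step / math.gamma(alpha)

one = lambda s: 1.0  # f = indicator of [0, infinity), evaluated on [0, t]
t = 2.0

# for this f the fractional integral of order alpha equals t^alpha / Gamma(alpha + 1)
approx_half = frac_integral(one, 0.5, t)   # order 1/2: integrable singularity at s = t
exact_half = t ** 0.5 / math.gamma(1.5)
approx_two = frac_integral(one, 2.0, t)    # order 2: the double antiderivative t^2 / 2
exact_two = t ** 2 / 2.0
```

For $\alpha \geqslant 1$ the integrand is smooth and the midpoint rule is very accurate; for $\alpha = 1/2$ the singularity at $s = t$ slows convergence, so only a loose tolerance is appropriate.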

Throughout the previous examples, the operator $M(\partial\_{t,\nu})$ did not depend on the actual value of $\nu$. Indeed, this is true for all material laws. In order to see this, we need the following lemma.

**Lemma 5.3.5** *Let $\mu, \nu \in \mathbb{R}$ with $\mu < \nu$, and set $U := \{z \in \mathbb{C} \; ; \; \mathrm{Re}\, z \in (\mu, \nu)\}$. Let $g \colon \overline{U} \to H$ be continuous and holomorphic on $U$ such that $g(\mathrm{i}\cdot+\nu), g(\mathrm{i}\cdot+\mu) \in L\_2(\mathbb{R}; H)$, and assume there exists a sequence $(R\_n)\_{n\in\mathbb{N}}$ in $\mathbb{R}\_{\geqslant 0}$ such that $R\_n \to \infty$ and*

$$\int\_{\mu}^{\nu} \|g(\pm \mathrm{i}R\_n + \rho)\| \, \mathrm{d}\rho \to 0 \quad (n \to \infty). \tag{5.9}$$

*Then*

$$
\mathcal{L}\_{\mu}^\* \mathbf{g} (\mathbf{i} \cdot + \mu) = \mathcal{L}\_{\nu}^\* \mathbf{g} (\mathbf{i} \cdot + \nu).
$$

*Proof* Let $t \in \mathbb{R}$. By Cauchy's integral theorem, we have that

$$\int\_{\gamma\_{R\_n}} g(z) \mathbf{e}^{zt} \, \mathrm{d}z = 0,$$

where $\gamma\_{R\_n}$ is the rectangular closed path with corners $\pm\mathrm{i}R\_n + \mu$, $\pm\mathrm{i}R\_n + \nu$ (see Fig. 5.1). Thus, we have that

**Fig. 5.1** Curve $\gamma\_{R\_n}$

$$\begin{split} & \mathrm{i} \int\_{-R\_n}^{R\_n} g(\mathrm{i}s + \nu) \mathbf{e}^{(\mathrm{i}s + \nu)t} \, \mathrm{d}s - \mathrm{i} \int\_{-R\_n}^{R\_n} g(\mathrm{i}s + \mu) \mathbf{e}^{(\mathrm{i}s + \mu)t} \, \mathrm{d}s \\ & \quad = - \int\_{\mu}^{\nu} g(-\mathrm{i}R\_n + \rho) \mathbf{e}^{(-\mathrm{i}R\_n + \rho)t} \, \mathrm{d}\rho + \int\_{\mu}^{\nu} g(\mathrm{i}R\_n + \rho) \mathbf{e}^{(\mathrm{i}R\_n + \rho)t} \, \mathrm{d}\rho. \end{split} \tag{5.10}$$

Note that with the help of the formula for the inverse Fourier transformation (see Theorem 5.1.4) and $\mathcal{L}\_{\nu}^{\*} = (\mathcal{F} \exp(-\nu\mathrm{m}))^{\*} = \exp(-\nu\mathrm{m})^{-1}\mathcal{F}^{\*}$, the left-hand side of (5.10) is nothing but

$$\sqrt{2\pi}\,\mathrm{i}\left(\left(\mathcal{L}\_{\nu}^{\*}\mathbb{1}\_{[-R\_n,R\_n]}\, g(\mathrm{i}\cdot+\nu)\right)(t) - \left(\mathcal{L}\_{\mu}^{\*}\mathbb{1}\_{[-R\_n,R\_n]}\, g(\mathrm{i}\cdot+\mu)\right)(t)\right),$$

and hence, there is a subsequence of *(Rn)n* (which we do not relabel) such that the left-hand side of (5.10) tends to

$$\sqrt{2\pi}\mathrm{i}\left(\left(\mathcal{L}\_{\nu}^{\*}\mathrm{g}(\mathbf{i}\cdot+\boldsymbol{\nu})\right)(t)-\left(\mathcal{L}\_{\mu}^{\*}\mathrm{g}(\mathbf{i}\cdot+\boldsymbol{\mu})\right)(t)\right)$$

for almost every $t \in \mathbb{R}$ as $n \to \infty$. As such, all we need to show is that the right-hand side of (5.10) tends to $0$ as $n \to \infty$, which follows immediately from (5.9).

**Theorem 5.3.6** *Let $M \colon \mathrm{dom}(M) \subseteq \mathbb{C} \to L(H)$ be a material law. Then, for $\mu, \nu > \mathrm{s}\_{\mathrm{b}}(M)$ and $f \in L\_{2,\nu}(\mathbb{R}; H) \cap L\_{2,\mu}(\mathbb{R}; H)$, we have*

$$M(\partial\_{t,\nu})f = M(\partial\_{t,\mu})f.$$

*Moreover, $M(\partial\_{t,\nu})$ is causal for all $\nu > \mathrm{s}\_{\mathrm{b}}(M)$.*

*Proof* Let $\mu < \nu$. We prove the assertion first for $f = \mathbb{1}\_{[a,b]} \cdot x$ with $a < b$ and $x \in H$. For $\rho \in \mathbb{R}$ we compute

$$\left(\mathcal{L}\_{\rho}f\right)(t) = \frac{1}{\sqrt{2\pi}} \int\_{a}^{b} x \mathbf{e}^{-(\mathrm{i}t+\rho)s} \,\mathrm{d}s = \frac{1}{\sqrt{2\pi}} \frac{1}{\mathrm{i}t+\rho} \left(\mathbf{e}^{-(\mathrm{i}t+\rho)a} - \mathbf{e}^{-(\mathrm{i}t+\rho)b}\right) x$$

for all $t \in \mathbb{R} \backslash \{0\}$. Moreover, we define

$$g(z) := \frac{1}{\sqrt{2\pi}}\, M(z) \frac{1}{z} \left( \mathbf{e}^{-za} - \mathbf{e}^{-zb} \right) x \quad (z \in \mathbb{C}\_{\mathrm{Re} \geqslant \mu} \backslash \{0\}),$$

and prove that $g$ satisfies the assumptions of Lemma 5.3.5. First, we note that $g$ is bounded on $\{z \in \mathbb{C} \; ; \; \mu \leqslant \mathrm{Re}\, z \leqslant \nu\} \backslash \{0\}$. Indeed, we only need to prove that it is bounded near $0$, provided that $\mu \leqslant 0$. To that end, we observe

$$\frac{1}{z}\left(\mathbf{e}^{-za} - \mathbf{e}^{-zb}\right) = \mathbf{e}^{-za}\, \frac{1 - \mathbf{e}^{-z(b-a)}}{z} \to b - a \quad (z \to 0).$$

Thus, $g$ is bounded near $0$. In particular, $z = 0$ is a removable singularity and, hence, $g$ can be extended holomorphically to $\mathbb{C}\_{\mathrm{Re} \geqslant \mu}$. Moreover, for $\rho \geqslant \mu$ we have that

$$\int\_{\mathbb{R}} \left\| g(\mathrm{i}t + \rho) \right\|^2 \, \mathrm{d}t = \int\_{-1}^{1} \left\| g(\mathrm{i}t + \rho) \right\|^2 \, \mathrm{d}t + \int\_{|t| > 1} \left\| g(\mathrm{i}t + \rho) \right\|^2 \, \mathrm{d}t.$$

The first term on the right-hand side is finite since $g$ is bounded, while the second term can be estimated by

$$\int\_{|t|>1} \left\| g(\mathrm{i}t + \rho) \right\|^2 \, \mathrm{d}t \leqslant \left\| M \right\|\_{\infty, \mathbb{C}\_{\mathrm{Re} \geqslant \mu}}^2 \left\| x \right\|^2 \frac{(\mathbf{e}^{-\rho a} + \mathbf{e}^{-\rho b})^2}{2\pi} \int\_{|t| > 1} \frac{1}{t^2 + \rho^2} \, \mathrm{d}t < \infty.$$

This proves that $g(\mathrm{i}\cdot+\rho) \in L\_2(\mathbb{R}; H)$ for each $\rho \geqslant \mu$ and hence, in particular, for $\rho = \mu$ and $\rho = \nu$. Finally, for $\rho \geqslant \mu$ we have that

$$\|g(\mathrm{i}t+\rho)\| \leqslant \frac{1}{\sqrt{2\pi}} \|M\|\_{\infty,\mathbb{C}\_{\mathrm{Re}\geqslant\mu}} \|x\| \frac{1}{\sqrt{t^2+\rho^2}} \left(\mathbf{e}^{-\rho a} + \mathbf{e}^{-\rho b}\right) \to 0 \quad (|t| \to \infty),$$

which together with the boundedness of *g* yields (5.9) by dominated convergence. This shows that *g* satisfies the assumptions of Lemma 5.3.5 and thus

$$M(\partial\_{t,\nu})f = \mathcal{L}\_{\nu}^{\*} g(\mathrm{i}\cdot+\nu) = \mathcal{L}\_{\mu}^{\*} g(\mathrm{i}\cdot+\mu) = M(\partial\_{t,\mu})f.$$

By linearity, this equality extends to $S\_{\mathrm{c}}(\mathbb{R}; H)$ and so,

$$F \colon S\_{\mathrm{c}}(\mathbb{R}; H) \to \bigcap\_{\nu \geqslant \mu} L\_{2,\nu}(\mathbb{R}; H), \ f \mapsto M(\partial\_{t,\nu})f$$

is well-defined. Moreover, $F$ is uniformly Lipschitz continuous (observe that $\sup\_{\nu \geqslant \mu} \|F\|\_{\nu} \leqslant \|M\|\_{\infty,\mathbb{C}\_{\mathrm{Re}\geqslant\mu}}$) and hence, the assertions follow from Lemma 4.2.5.

## **5.4 Comments**

The Fourier and the Fourier–Laplace transformation introduced in this chapter are used to define an operator-valued functional calculus for the time derivative $\partial\_{t,\nu}$. This functional calculus can be defined since the Fourier–Laplace transformation provides the unitary transformation yielding the spectral representation of the time derivative as a multiplication operator. This fact was already noticed in [83], which eventually led to evolutionary equations in [82].

We emphasise that we have used the fundamental property that both *F* and *L<sup>ν</sup>* are unitary. It is noteworthy that the Fourier transformation is an isometric isomorphism on *<sup>L</sup>*2*(*R; *X)* if and only if *<sup>X</sup>* is a Hilbert space, see [58]. In the Banach space-valued case one has to further restrict the class of functions used to define a functional calculus. For the topic of functional calculus we refer to the 21st Internet Seminar [46] by Markus Haase and to his monograph, [47].


Material laws and the corresponding material law operators were also considered in [82, Section 3], including a physical motivation. Note that the definition in [82] is slightly different compared to the one presented here.

## **Exercises**

**Exercise 5.1** Let $(\Omega, \Sigma, \mu)$ be a $\sigma$-finite measure space, $X$ a Banach space and $I \subseteq \mathbb{R}$ an open interval. Let $g \colon I \times \Omega \to X$ be such that $g(t,\cdot) \in L\_1(\mu; X)$ for each $t \in I$, and define

$$h \colon I \to X, \ t \mapsto \int\_{\Omega} \operatorname{g}(t, \omega) \operatorname{d}\mu(\omega).$$

(a) Assume that $g(\cdot, \omega)$ is continuous for $\mu$-almost every $\omega \in \Omega$ and let $f \in L\_1(\mu)$ be such that

$$\|g(t,\omega)\| \leqslant f(\omega) \quad (t \in I, \ \omega \in \Omega).$$

Prove that *h* is continuous.

(b) Assume that $g(\cdot, \omega)$ is differentiable for $\mu$-almost every $\omega \in \Omega$ and let $f \in L\_1(\mu)$ be such that

$$\|\partial\_t g(t, \omega)\| \leqslant f(\omega) \quad (t \in I, \ \mu\text{-a.a. } \omega \in \Omega).$$

Prove that *h* is differentiable with

$$h'(t) = \int\_{\Omega} \partial\_t \mathbf{g}(t, \boldsymbol{\omega}) \, \mathrm{d}\mu(\boldsymbol{\omega}).$$

**Exercise 5.2** Let $H\_0, H\_1$ be two Hilbert spaces and $U \colon \mathrm{dom}(U) \subseteq H\_0 \to H\_1$ be linear such that $\mathrm{dom}(U) \subseteq H\_0$ and $\operatorname{ran}(U) \subseteq H\_1$ are dense and

$$\|Ux\|\_{H\_1} = \|x\|\_{H\_0} \quad (x \in \mathrm{dom}(U)).$$

Show that $U$ can be uniquely extended to a unitary operator between $H\_0$ and $H\_1$.

**Exercise 5.3** Let $\Omega \subseteq \mathbb{C}$ be open, $X$ a complex Banach space and $f \colon \Omega \to X$. Prove that the following statements are equivalent:

(i) *f* is holomorphic.

(ii) For all $x' \in X'$ the mapping $x' \circ f \colon \Omega \to \mathbb{C}$ is holomorphic.

(iii) $f$ is locally bounded and $x' \circ f \colon \Omega \to \mathbb{C}$ is holomorphic for all $x' \in D$, where $D \subseteq X'$ is a norming set<sup>1</sup> for $X$.

<sup>1</sup> $D \subseteq X'$ is called a norming set for $X$ if $\|x\| = \sup\_{x' \in D \backslash \{0\}} \frac{1}{\|x'\|} |x'(x)|$ for each $x \in X$. Note that $X'$ is norming for $X$ by the Hahn–Banach theorem.

(iv) $f$ is analytic, i.e. for each $z\_0 \in \Omega$ there exist $r > 0$ and $(a\_n)\_n$ in $X$ with $B(z\_0, r) \subseteq \Omega$ and

$$f(z) = \sum\_{n=0}^{\infty} a\_n \, (z - z\_0)^n \quad (z \in \mathcal{B} \,(z\_0, r)).$$

Assume now that $X = L(X\_1, X\_2)$ for two complex Banach spaces $X\_1, X\_2$, let $D\_1 \subseteq X\_1$ be dense and $D\_2 \subseteq X\_2'$ norming for $X\_2$. Prove that the statements (i) to (iv) are equivalent to

(v) $f$ is locally bounded and $z \mapsto x\_2'(f(z)(x\_1)) \in \mathbb{C}$ is holomorphic for all $x\_1 \in D\_1$ and $x\_2' \in D\_2$.

*Hint:* For the difficult implications one might also consult [6, Appendix A]. In the same source one can find that in part (iii) it is enough for *D* to be separating.

**Exercise 5.4** Let $\nu \in \mathbb{R}$ and $k \in L\_{1,\nu}(\mathbb{R})$. Prove that

$$
\mathcal{L}\_{\boldsymbol{\nu}}\left(k\*f\right) = \sqrt{2\pi} \left(\mathcal{L}\_{\boldsymbol{\nu}}k\right) \cdot \left(\mathcal{L}\_{\boldsymbol{\nu}}f\right),
$$

for $f \in L\_{2,\nu}(\mathbb{R}; H)$.
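The stated identity can be checked numerically for $k = f = \mathbb{1}\_{[0,1]}$, so that $k \ast f$ is the triangular hat function on $[0,2]$. A small sketch, not from the book (midpoint Riemann sums, all names ours):

```python
import cmath
import math

def fourier_laplace(f, a, b, t, nu, n=50_000):
    # midpoint Riemann sum for (L_nu f)(t) = (2 pi)^(-1/2) \int_a^b e^{-(i t + nu) s} f(s) ds
    step = (b - a) / n
    acc = 0j
    for k in range(n):
        s = a + (k + 0.5) * step
        acc += cmath.exp(-(1j * t + nu) * s) * f(s)
    return acc * step / math.sqrt(2 * math.pi)

nu, t = 1.0, 0.4
box = lambda s: 1.0                           # k = f = indicator of [0, 1]
hat = lambda s: s if s <= 1.0 else 2.0 - s    # (k * f)(s) on its support [0, 2]

lhs = fourier_laplace(hat, 0.0, 2.0, t, nu)                                # L_nu(k * f)(t)
rhs = math.sqrt(2 * math.pi) * fourier_laplace(box, 0.0, 1.0, t, nu) ** 2  # sqrt(2 pi) (L_nu k)(L_nu f)
```

Both sides agree to quadrature accuracy, reflecting that the exponential weight turns the convolution into a product of transforms.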

**Exercise 5.5** Let $\alpha > 0$ and define $g\_\alpha(t) := \mathbb{1}\_{[0,\infty)}(t)\, t^{\alpha-1}$ for $t \in \mathbb{R}$. Show that $g\_\alpha \in L\_{1,\nu}(\mathbb{R})$ for each $\nu > 0$ and that

$$(\mathcal{L}\_{\nu} g\_\alpha)(t) = \frac{1}{\sqrt{2\pi}}\Gamma(\alpha)(\mathrm{i}t + \nu)^{-\alpha}.$$

Use this formula and Exercise 5.4 to derive (5.8).

*Hint:* To compute the Fourier–Laplace transform of *gα*, derive that *Lνgα* solves a first order ordinary differential equation and use separation of variables to solve this equation.

**Exercise 5.6** Let $\mu, \nu \in \mathbb{R}$ with $\mu < \nu$ and $f \in L\_{2,\nu}(\mathbb{R}; H) \cap L\_{2,\mu}(\mathbb{R}; H)$. Moreover, set $U := \{z \in \mathbb{C} \; ; \; \mu < \mathrm{Re}\, z < \nu\}$. Show that $f \in \bigcap\_{\mu<\rho<\nu} L\_{2,\rho}(\mathbb{R}; H) \cap L\_{1,\rho}(\mathbb{R}; H)$ and that

$$U \ni z \mapsto (\mathcal{L}\_{\text{Re } z} f) \left( \operatorname{Im} z \right)$$

is holomorphic.

**Exercise 5.7** Let $H\_0, H\_1$ be Hilbert spaces and $T \colon L\_{2,\nu}(\mathbb{R}; H\_0) \to L\_{2,\nu}(\mathbb{R}; H\_1)$ linear and bounded. We call $T$ *autonomous* if $T\tau\_h = \tau\_h T$ for each $h \in \mathbb{R}$ ($\tau\_h$ denotes the translation operator defined in Example 5.3.4). Prove that for autonomous $T$, the following statements are equivalent:


Moreover, prove that for a material law $M$, the operator $M(\partial\_{t,\nu})$ is autonomous for each $\nu > \mathrm{s}\_{\mathrm{b}}(M)$.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 Solution Theory for Evolutionary Equations**

In this chapter, we shall discuss and present the first major result of the manuscript: Picard's theorem on the solution theory for evolutionary equations which is the main result of [82]. In order to stress the applicability of this theorem, we shall deal with applications first and provide a proof of the actual result afterwards. With an initial interest in applications in mind, we start off with the introduction of some operators related to vector calculus.

## **6.1 First Order Sobolev Spaces**

Throughout this section let $\Omega \subseteq \mathbb{R}^d$ be an open set.

**Definition** We define

$$\operatorname{grad}\_{\mathrm{c}} \colon C\_{\mathrm{c}}^{\infty}(\Omega) \subseteq L\_{2}(\Omega) \to L\_{2}(\Omega)^{d}$$

$$\phi \mapsto \left(\partial\_{j}\phi\right)\_{j \in \{1, \ldots, d\}},$$

$$\operatorname{div}\_{\mathrm{c}} \colon C\_{\mathrm{c}}^{\infty}(\Omega)^{d} \subseteq L\_{2}(\Omega)^{d} \to L\_{2}(\Omega)$$

$$\left(\phi\_{j}\right)\_{j \in \{1, \ldots, d\}} \mapsto \sum\_{j \in \{1, \ldots, d\}} \partial\_{j}\phi\_{j},$$

and if *d* = 3,

$$\operatorname{curl}\_{\mathrm{c}} \colon C\_{\mathrm{c}}^{\infty}(\Omega)^{3} \subseteq L\_2(\Omega)^{3} \to L\_2(\Omega)^{3}$$

$$\left(\phi\_j\right)\_{j \in \{1, 2, 3\}} \mapsto \begin{pmatrix} \partial\_2 \phi\_3 - \partial\_3 \phi\_2 \\ \partial\_3 \phi\_1 - \partial\_1 \phi\_3 \\ \partial\_1 \phi\_2 - \partial\_2 \phi\_1 \end{pmatrix}.$$

Furthermore, we put

$$\operatorname{div} := -\operatorname{grad}\_{\mathrm{c}}^{\*}, \quad \operatorname{grad} := -\operatorname{div}\_{\mathrm{c}}^{\*}, \quad \operatorname{curl} := \operatorname{curl}\_{\mathrm{c}}^{\*}$$

and

$$\text{div}\_0 := -\,\text{grad}^\*, \quad \text{grad}\_0 := -\,\text{div}^\*, \quad \text{curl}\_0 := \,\text{curl}^\*.$$

**Proposition 6.1.1** *The relations $\operatorname{div}$, $\mathrm{div}\_0$, $\operatorname{grad}$, $\mathrm{grad}\_0$, $\operatorname{curl}$ and $\mathrm{curl}\_0$ are all densely defined, closed linear operators.*

*Proof* The operators $\mathrm{grad}\_{\mathrm{c}}$, $\mathrm{div}\_{\mathrm{c}}$ and $\mathrm{curl}\_{\mathrm{c}}$ are densely defined by Exercise 6.3. Thus, $\operatorname{div}$, $\operatorname{grad}$ and $\operatorname{curl}$ are closed linear operators by Lemma 2.2.7. Moreover, it follows from integration by parts that $\mathrm{grad}\_{\mathrm{c}} \subseteq \operatorname{grad}$, $\mathrm{div}\_{\mathrm{c}} \subseteq \operatorname{div}$ and $\mathrm{curl}\_{\mathrm{c}} \subseteq \operatorname{curl}$. Thus, $\operatorname{div}$, $\operatorname{grad}$ and $\operatorname{curl}$ are also densely defined. This, in turn, implies that $\mathrm{grad}\_{\mathrm{c}}$, $\mathrm{div}\_{\mathrm{c}}$ and $\mathrm{curl}\_{\mathrm{c}}$ are closable by Lemma 2.2.7, with respective closures $\mathrm{grad}\_0$, $\mathrm{div}\_0$ and $\mathrm{curl}\_0$ by Lemma 2.2.4.

We shall describe the domains of these operators in more detail in the next theorem.

**Theorem 6.1.2** *If $f \in L\_2(\Omega)$ and $g = (g\_j)\_{j\in\{1,\dots,d\}} \in L\_2(\Omega)^d$, then the following statements hold:*

(a) *f* ∈ dom*(*grad*) and g* = grad *f if and only if*

$$\forall \phi \in C\_{\mathrm{c}}^{\infty}(\Omega), j \in \{1, \dots, d\} \colon -\int\_{\Omega} f \, \partial\_j \phi = \int\_{\Omega} g\_j \phi.$$

(b) *$f \in \mathrm{dom}(\mathrm{grad}\_0)$ and $g = \mathrm{grad}\_0 f$ if and only if there exists $(f\_k)\_k$ in $C\_{\mathrm{c}}^{\infty}(\Omega)$ such that $f\_k \to f$ in $L\_2(\Omega)$ and $\operatorname{grad} f\_k \to g$ in $L\_2(\Omega)^d$ as $k \to \infty$.*

(c) *$g \in \mathrm{dom}(\operatorname{div})$ and $f = \operatorname{div} g$ if and only if*
$$\forall \phi \in \mathcal{C}\_{\mathbf{c}}^{\infty}(\Omega) \colon -\int\_{\Omega} \mathbf{g} \cdot \operatorname{grad} \phi = \int\_{\Omega} f \phi.$$

(d) *$g \in \mathrm{dom}(\mathrm{div}\_0)$ and $f = \mathrm{div}\_0\, g$ if and only if there exists $(g\_k)\_k$ in $C\_{\mathrm{c}}^{\infty}(\Omega)^d$ such that $g\_k \to g$ in $L\_2(\Omega)^d$ and $\operatorname{div} g\_k \to f$ in $L\_2(\Omega)$ as $k \to \infty$.*

*If $d = 3$ and $f, g \in L\_2(\Omega)^3$, then the following statements hold:*

(e) *$f \in \mathrm{dom}(\operatorname{curl})$ and $g = \operatorname{curl} f$ if and only if*

$$\forall \phi \in C\_{\mathrm{c}}^{\infty}(\Omega)^{3} \colon \int\_{\Omega} f \cdot \operatorname{curl} \phi = \int\_{\Omega} g \cdot \phi \,.$$

(f) *$f \in \mathrm{dom}(\mathrm{curl}\_0)$ and $g = \mathrm{curl}\_0 f$ if and only if there exists $(f\_k)\_k$ in $C\_{\mathrm{c}}^{\infty}(\Omega)^3$ such that $f\_k \to f$ in $L\_2(\Omega)^3$ and $\operatorname{curl} f\_k \to g$ in $L\_2(\Omega)^3$ as $k \to \infty$.*

All the statements in Theorem 6.1.2 are elementary consequences of the integration by parts formula, the definitions of the adjoint and Lemma 2.2.4. We ask the reader to prove these statements in Exercise 6.4.

We introduce the following notation:

$$H^1(\Omega) := \text{dom}(\text{grad}),$$

$$H^1\_0(\Omega) := \text{dom}(\text{grad}\_0),$$

$$H(\text{div}, \Omega) := \text{dom}(\text{div}),$$

$$H(\text{curl}, \Omega) := \text{dom}(\text{curl}).$$

Following the rationale of appending zero as an index for $H\_0^1(\Omega)$, we shall also use

$$H\_0(\text{div}, \Omega) := \text{dom}(\text{div}\_0),$$

$$H\_0(\text{curl}, \Omega) := \text{dom}(\text{curl}\_0).$$

We caution the reader that other authors also use $H\_0(\operatorname{div}, \Omega)$ and $H\_0(\operatorname{curl}, \Omega)$ to denote the kernels of $\operatorname{div}$ and $\operatorname{curl}$.

All the spaces just defined are so-called Sobolev spaces. We note that for $d = 3$ we clearly have $H^1(\Omega)^3 \subseteq H(\operatorname{div}, \Omega) \cap H(\operatorname{curl}, \Omega)$. On the other hand, note that $H(\operatorname{div}, \Omega)$ is neither a sub- nor a superset of $H(\operatorname{curl}, \Omega)$.

*Remark 6.1.3* We emphasise that $H\_0^1(\Omega) = \overline{C\_{\mathrm{c}}^{\infty}(\Omega)}^{H^1(\Omega)} \subseteq H^1(\Omega)$ is a proper inclusion for many open $\Omega$. The '0' in the index is a reminder of '0'-boundary conditions. In fact, the only difference between these two spaces lies in the behaviour of their elements at the boundary of $\Omega$. The space $H\_0^1(\Omega)$ consists of all $H^1$-functions vanishing at $\partial\Omega$ in a generalised sense. The corresponding statements are true for the inclusions $H\_0(\operatorname{div}, \Omega) \subseteq H(\operatorname{div}, \Omega)$ and $H\_0(\operatorname{curl}, \Omega) \subseteq H(\operatorname{curl}, \Omega)$. The space $H\_0(\operatorname{div}, \Omega)$ describes $H(\operatorname{div}, \Omega)$-vector fields with vanishing normal component, and membership in $H\_0(\operatorname{curl}, \Omega)$ provides a handy generalisation of a vanishing tangential component. We will anticipate these abstractions when we apply the solution theory of evolutionary equations to particular cases. In a later chapter we will come back to this issue when we discuss inhomogeneous boundary value problems.

For later use, we record the following relationships between the vector-analytical operators introduced above.

**Proposition 6.1.4** *Let d* = 3*. We have the following inclusions:*

$$
\overline{\text{ran}}(\text{curl}\_0) \subseteq \text{ker}(\text{div}\_0),
$$

$$
\overline{\text{ran}}(\text{grad}\_0) \subseteq \text{ker}(\text{curl}\_0),
$$

$$
\overline{\text{ran}}(\text{curl}) \subseteq \text{ker}(\text{div}),
$$

$$
\overline{\text{ran}}(\text{grad}) \subseteq \text{ker}(\text{curl}).
$$

*Proof* It is elementary to show that for given $\psi \in C\_{\mathrm{c}}^{\infty}(\Omega)^3$ and $\phi \in C\_{\mathrm{c}}^{\infty}(\Omega)$ we have $\mathrm{div}\_0\, \mathrm{curl}\_0\, \psi = 0$ as well as $\mathrm{curl}\_0\, \mathrm{grad}\_0\, \phi = 0$. Thus, we obtain $\operatorname{ran}(\mathrm{curl}\_{\mathrm{c}}) \subseteq \ker(\mathrm{div}\_0)$ and $\operatorname{ran}(\mathrm{grad}\_{\mathrm{c}}) \subseteq \ker(\mathrm{curl}\_0)$. Since $\ker(\mathrm{div}\_0)$ and $\ker(\mathrm{curl}\_0)$ are closed, and $C\_{\mathrm{c}}^{\infty}(\Omega)^3$ and $C\_{\mathrm{c}}^{\infty}(\Omega)$ are cores for $\mathrm{curl}\_0$ and $\mathrm{grad}\_0$, respectively, we obtain the first two inclusions. The last two inclusions follow from the first two by taking into account the orthogonal decompositions

$$L\_2(\Omega)^3 = \overline{\operatorname{ran}}(\operatorname{grad}) \oplus \ker(\mathrm{div}\_0) = \ker(\operatorname{curl}) \oplus \overline{\operatorname{ran}}(\mathrm{curl}\_0)$$

and

$$L\_2(\Omega)^3 = \overline{\text{ran}}(\text{grad}\_0) \oplus \text{ker}(\text{div}) = \text{ker}(\text{curl}\_0) \oplus \overline{\text{ran}}(\text{curl})$$

which follow from Corollary 2.2.6.
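The pointwise identities $\operatorname{div} \operatorname{curl} \psi = 0$ and $\operatorname{curl} \operatorname{grad} \phi = 0$ underlying the proof can be illustrated numerically: central finite-difference operators commute exactly, so the discrete analogues of these compositions vanish up to floating-point roundoff. A sketch in Python (not from the book; all names ours):

```python
import math

def partial_fd(f, j, h=1e-2):
    # central finite-difference approximation of the j-th partial derivative of f
    def df(p):
        q_plus = list(p)
        q_plus[j] += h
        q_minus = list(p)
        q_minus[j] -= h
        return (f(q_plus) - f(q_minus)) / (2 * h)
    return df

def grad_fd(f):
    return [partial_fd(f, j) for j in range(3)]

def curl_fd(F):
    F1, F2, F3 = F
    return [
        lambda p: partial_fd(F3, 1)(p) - partial_fd(F2, 2)(p),
        lambda p: partial_fd(F1, 2)(p) - partial_fd(F3, 0)(p),
        lambda p: partial_fd(F2, 0)(p) - partial_fd(F1, 1)(p),
    ]

def div_fd(F):
    return lambda p: sum(partial_fd(F[j], j)(p) for j in range(3))

phi = lambda p: math.sin(p[0]) * math.exp(p[1]) * p[2] ** 2                     # smooth scalar field
psi = [lambda p: p[1] * p[2], lambda p: math.cos(p[0]), lambda p: p[0] * p[1]]  # smooth vector field

pt = [0.3, -0.7, 1.2]
curl_grad = [component(pt) for component in curl_fd(grad_fd(phi))]  # discrete curl grad phi
div_curl = div_fd(curl_fd(psi))(pt)                                 # discrete div curl psi
```

Since the shift operators underlying the central differences commute, the mixed discrete derivatives cancel exactly, mirroring the symmetry of second partial derivatives used in the proof.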

## **6.2 Well-Posedness of Evolutionary Equations and Applications**

The solution theory of evolutionary equations is contained in the next result, Picard's theorem. This result is central for all the derivations to come. In fact, with the notation of Theorem 6.2.1, we shall prove that for all (well-behaved) *F* there is a unique solution of

$$\left(\partial\_{t,\nu}M(\partial\_{t,\nu}) + A\right)U = F.$$

The solution *U* depends continuously and causally on the choice of *F*.

In order to formulate the result, for a Hilbert space $H$, $\nu \in \mathbb{R}$ and a given closed operator $A \colon \mathrm{dom}(A) \subseteq H \to H$, we define its extended operator in $L\_{2,\nu}(\mathbb{R}; H)$, again denoted by $A$, by

$$\begin{aligned} L\_{2,\nu}(\mathbb{R}; \operatorname{dom}(A)) \subseteq L\_{2,\nu}(\mathbb{R}; H) &\to L\_{2,\nu}(\mathbb{R}; H) \\ f &\mapsto \left( t \mapsto Af(t) \right). \end{aligned}$$

We have collected some properties of extended operators in Exercises 6.1 and 6.2.

**Theorem 6.2.1 (Picard)** *Let $\nu\_0 \in \mathbb{R}$ and let $H$ be a Hilbert space. Let $M \colon \mathrm{dom}(M) \subseteq \mathbb{C} \to L(H)$ be a material law with $\mathrm{s}\_{\mathrm{b}}(M) < \nu\_0$ and let $A \colon \mathrm{dom}(A) \subseteq H \to H$ be skew-selfadjoint. Assume that*

$$\operatorname{Re}\left\langle \phi, zM(z)\phi \right\rangle\_H \geqslant c \left\| \phi \right\|\_H^2 \quad (\phi \in H, \ z \in \mathbb{C}\_{\operatorname{Re} \geqslant \nu\_0})$$

*for some $c > 0$. Then for all $\nu \geqslant \nu\_0$ the operator $\partial\_{t,\nu}M(\partial\_{t,\nu}) + A$ is closable and*

$$S\_{\nu} := \left( \overline{\partial\_{t,\nu}M(\partial\_{t,\nu}) + A} \right)^{-1} \in L(L\_{2,\nu}(\mathbb{R}; H)).$$

*Furthermore, $S\_\nu$ is causal and satisfies $\|S\_\nu\|\_{L(L\_{2,\nu})} \leqslant 1/c$, and for all $F \in \mathrm{dom}(\partial\_{t,\nu})$ we have*

$$S\_\nu F \in \mathrm{dom}(\partial\_{t,\nu}) \cap \mathrm{dom}(A).$$

*Furthermore, for $\eta, \nu \geqslant \nu\_0$ and $F \in L\_{2,\nu}(\mathbb{R}; H) \cap L\_{2,\eta}(\mathbb{R}; H)$ we have $S\_\nu F = S\_\eta F$.*

The property that $S\_\nu F = S\_\eta F$ for all $F \in L\_{2,\nu}(\mathbb{R}; H) \cap L\_{2,\eta}(\mathbb{R}; H)$, where $\eta, \nu \geqslant \nu\_0$ for some $\nu\_0 \in \mathbb{R}$, will be referred to as $S\_\nu$ being *eventually independent of $\nu$* in what follows.

*Remark 6.2.2* If $F \in \mathrm{dom}(\partial\_{t,\nu})$, then $U := S\_\nu F \in \mathrm{dom}(\partial\_{t,\nu}) \cap \mathrm{dom}(A)$ by Theorem 6.2.1. Since $M(\partial\_{t,\nu})$ leaves the space $\mathrm{dom}(\partial\_{t,\nu})$ invariant, this gives that $M(\partial\_{t,\nu})U \in \mathrm{dom}(\partial\_{t,\nu})$ and thus, $U$ solves the evolutionary equation literally; that is,

$$(\partial\_{t,\nu}M(\partial\_{t,\nu}) + A)U = F,$$

while for $F \in L\_{2,\nu}(\mathbb{R}; H)$, in general, we just have

$$(\overline{\partial\_{t,\nu}M(\partial\_{t,\nu}) + A})U = F.$$

**Definition** Let $H$ be a Hilbert space and $T \in L(H)$. If $T$ is selfadjoint, we write $T \geqslant c$ for some $c \in \mathbb{R}$ if

$$\forall \mathbf{x} \in H \; : \; \langle \mathbf{x}, T\mathbf{x} \rangle\_H \geqslant c \parallel \mathbf{x} \parallel\_H^2 \; .$$

Moreover, we define the *real part of $T$* by $\operatorname{Re} T := \frac{1}{2}(T + T^{\*})$.

Note that if *H* is a Hilbert space and *T* ∈ *L(H )* then Re *T* is selfadjoint. Moreover,

$$\langle x, (\operatorname{Re} T) x \rangle\_H = \operatorname{Re} \langle x, Tx \rangle\_H \quad (x \in H).$$

Hence, in Theorem 6.2.1 the assumption on the material law can be rephrased as

$$\operatorname{Re} zM(z) \geqslant c \quad (z \in \mathbb{C}\_{\operatorname{Re}\geqslant \nu\_{0}})\,.$$

The following operators will be prototypical examples needed for the applications of the previous theorem.

**Proposition 6.2.3** *Let H*0*, H*<sup>1</sup> *be Hilbert spaces.*

(a) *Let B* : dom*(B)* ⊆ *H*<sup>0</sup> → *H*1*, C*: dom*(C)* ⊆ *H*<sup>1</sup> → *H*<sup>0</sup> *be densely defined linear operators. Then*

$$\begin{aligned} \begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix} : \text{dom}(B) \times \text{dom}(C) \subseteq H\_0 \times H\_1 \to H\_0 \times H\_1 \\\\ (\phi, \psi) &\mapsto (C\psi, B\phi) \end{aligned}$$

*is densely defined, and we have*

$$
\begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix}^\* = \begin{pmatrix} 0 & B^\* \\ C^\* & 0 \end{pmatrix}.
$$

(b) *Let $a \in L(H_0)$ and $c > 0$. Assume $\operatorname{Re} a \geqslant c$. Then $a^{-1} \in L(H_0)$ with $\|a^{-1}\| \leqslant \frac{1}{c}$ and $\operatorname{Re} a^{-1} \geqslant c \|a\|^{-2}$.*

*Proof* The proof of the first statement can be done in two steps. First, notice that the inclusion $\begin{pmatrix} 0 & B^* \\ C^* & 0 \end{pmatrix} \subseteq \begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix}^*$ follows immediately. If, on the other hand, $\begin{pmatrix} \phi \\ \psi \end{pmatrix} \in \operatorname{dom}\left(\begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix}^*\right)$ with $\begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix}^* \begin{pmatrix} \phi \\ \psi \end{pmatrix} = \begin{pmatrix} \xi \\ \zeta \end{pmatrix}$, we get for all $x \in \operatorname{dom}(B)$ that

$$
\begin{aligned}
\langle Bx, \psi \rangle_{H_1} &= \left\langle \begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix} \begin{pmatrix} x \\ 0 \end{pmatrix}, \begin{pmatrix} \phi \\ \psi \end{pmatrix} \right\rangle_{H_0 \times H_1} = \left\langle \begin{pmatrix} x \\ 0 \end{pmatrix}, \begin{pmatrix} 0 & C \\ B & 0 \end{pmatrix}^* \begin{pmatrix} \phi \\ \psi \end{pmatrix} \right\rangle_{H_0 \times H_1} \\
&= \left\langle \begin{pmatrix} x \\ 0 \end{pmatrix}, \begin{pmatrix} \xi \\ \zeta \end{pmatrix} \right\rangle_{H_0 \times H_1} = \langle x, \xi \rangle_{H_0}.
\end{aligned}
$$

Hence, *ψ* ∈ dom*(B*∗*)* and *B*∗*ψ* = *ξ*. Similarly, we obtain *φ* ∈ dom*(C*∗*)* and *C*∗*φ* = *ζ* .

For the second statement, we compute for all *φ* ∈ *H*<sup>0</sup> using the Cauchy–Schwarz inequality

$$\|\phi\|_{H_0} \|a\phi\|_{H_0} \geqslant \left| \langle \phi, a\phi \rangle_{H_0} \right| \geqslant \operatorname{Re} \langle \phi, a\phi \rangle_{H_0} \geqslant c \langle \phi, \phi \rangle_{H_0} = c \|\phi\|_{H_0}^2.$$

Thus, *a* is one-to-one. Since Re *a* = Re *a*<sup>∗</sup> it follows that *a*<sup>∗</sup> is one-to-one, as well. Thus, we get that *a* has dense range by Theorem 2.2.5. The inequality

$$\|a\phi\|\_{H\_0} \geqslant c \|\phi\|\_{H\_0}$$

implies that $a^{-1}$ is bounded with $\|a^{-1}\| \leqslant \frac{1}{c}$. Hence, as $a^{-1}$ is closed, $\operatorname{dom}(a^{-1}) = \operatorname{ran}(a)$ is closed by Lemma 2.1.3 and hence $\operatorname{dom}(a^{-1}) = H_0$; that is, $a^{-1} \in L(H_0)$. To conclude, let $\psi \in H_0$ and put $\phi := a^{-1}\psi$. Then $\|\psi\|_{H_0} = \|aa^{-1}\psi\|_{H_0} \leqslant \|a\| \|a^{-1}\psi\|_{H_0}$ and so

$$
\operatorname{Re} \left\langle \psi, a^{-1}\psi \right\rangle_{H_0} = \operatorname{Re} \left\langle a\phi, \phi \right\rangle_{H_0} = \operatorname{Re} \left\langle \phi, a\phi \right\rangle_{H_0} \geqslant c \left\langle \phi, \phi \right\rangle_{H_0} = c \left\langle a^{-1}\psi, a^{-1}\psi \right\rangle_{H_0} \geqslant c \frac{1}{\|a\|^2} \|\psi\|_{H_0}^2. \qquad \square
$$
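For finite-dimensional $H_0$, $H_1$ both parts of Proposition 6.2.3 can be sanity-checked numerically. The following sketch (ours, not from the text) uses random complex matrices as stand-ins for $B$, $C$ and $a$; NumPy and the chosen dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def re(T):
    """Real part of an operator: Re T = (T + T*)/2."""
    return (T + T.conj().T) / 2

# Part (a): adjoint of the block operator matrix [[0, C], [B, 0]].
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # B: H0 -> H1
C = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # C: H1 -> H0
M = np.block([[np.zeros((2, 2)), C], [B, np.zeros((3, 3))]])
M_adj_expected = np.block([[np.zeros((2, 2)), B.conj().T],
                           [C.conj().T, np.zeros((3, 3))]])
assert np.allclose(M.conj().T, M_adj_expected)

# Part (b): Re a >= c implies ||a^{-1}|| <= 1/c and Re a^{-1} >= c ||a||^{-2}.
n, c = 5, 1.0
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = b + (c - np.linalg.eigvalsh(re(b)).min()) * np.eye(n)  # shift so Re a >= c
a_inv = np.linalg.inv(a)
norm = lambda T: np.linalg.norm(T, 2)  # operator (spectral) norm

assert norm(a_inv) <= 1 / c + 1e-10
assert np.linalg.eigvalsh(re(a_inv)).min() >= c / norm(a) ** 2 - 1e-10
```

The shift in part (b) is chosen so that the smallest eigenvalue of $\operatorname{Re} a$ equals $c$ exactly; both asserted bounds are then the ones proved above.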

#### **The Heat Equation**

The first example we will consider is the heat equation in an open subset $\Omega \subseteq \mathbb{R}^d$. Under a heat source, $Q\colon \mathbb{R} \times \Omega \to \mathbb{R}$, the heat distribution, $\theta\colon \mathbb{R} \times \Omega \to \mathbb{R}$, satisfies the so-called heat flux balance

$$
\partial\_t \theta + \text{div} \, q = \mathcal{Q}.
$$

Here, $q\colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ is the heat flux which is connected to $\theta$ via Fourier's law

$$q = -a \operatorname{grad} \theta,$$

where $a\colon \Omega \to \mathbb{R}^{d \times d}$ is the heat conductivity, which is measurable, bounded and uniformly strictly positive in the sense that

$$\operatorname{Re} a(x) \geqslant c$$

for all $x \in \Omega$ and some $c > 0$ in the sense of positive definiteness. Moreover, we assume that $\Omega$ is thermally isolated, which is modelled by requiring that the normal component of $q$ vanishes at $\partial\Omega$; that is, $q \in \operatorname{dom}(\operatorname{div}_0)$ (see Remark 6.1.3). Written as a block matrix and incorporating the boundary condition, we obtain

$$
\left(\partial_t \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} Q \\ 0 \end{pmatrix}.
$$
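Read row by row (a brief consistency check of ours): the first row of the block system is exactly the heat flux balance, $\partial_t \theta + \operatorname{div}_0 q = Q$, while the second row reads

$$a^{-1} q + \operatorname{grad} \theta = 0,$$

which, since $a^{-1} \in L(L_2(\Omega)^d)$ by Proposition 6.2.3(b), is equivalent to Fourier's law $q = -a \operatorname{grad} \theta$.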

**Theorem 6.2.4** *For all ν >* 0*, the operator*

$$
\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix},
$$

*is densely defined and closable in $L_{2,\nu}\left(\mathbb{R}; L_2(\Omega) \times L_2(\Omega)^d\right)$. The respective closure is continuously invertible with causal inverse being eventually independent of $\nu$.*

*Proof* The assertion follows from Theorem 6.2.1 applied to

$$M(z) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + z^{-1} \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} \quad \text{and} \quad A = \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}.$$

Note that $M$ is a material law with $\operatorname{s_b}(M) = 0$ by Example 5.3.1. Moreover, for $(x, y) \in L_2(\Omega) \times L_2(\Omega)^d$ and $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}$ with $\nu > 0$ we estimate

$$
\begin{aligned}
\operatorname{Re} \langle (x, y), zM(z)(x, y) \rangle_{L_2(\Omega) \times L_2(\Omega)^d} &\geqslant \operatorname{Re} z \, \|x\|_{L_2(\Omega)}^2 + c \|a\|^{-2} \|y\|_{L_2(\Omega)^d}^2 \\
&\geqslant \min\{\nu, c\|a\|^{-2}\} \|(x, y)\|_{L_2(\Omega) \times L_2(\Omega)^d}^2,
\end{aligned}
$$

where we have used Proposition 6.2.3(b) in the first inequality. Moreover, $A$ is skew-selfadjoint by Proposition 6.2.3(a).

*Remark 6.2.5* Assume that *Q* ∈ dom*(∂t ,ν)*. It then follows from Theorem 6.2.1 that

$$
\begin{pmatrix} \theta \\ q \end{pmatrix} := \overline{\left(\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right)}^{\,-1} \begin{pmatrix} Q \\ 0 \end{pmatrix} \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}\left(\begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right). \tag{6.1}
$$

Then, as in Remark 6.2.2, it follows that *θ* and *q* satisfy the heat flux balance and Fourier's law in the sense that *θ* ∈ dom*(∂t ,ν)* ∩ dom*(*grad*)* and *q* ∈ dom*(*div0*)* and

$$\begin{aligned} \partial_{t,\nu} \theta + \operatorname{div}_0 q &= Q, \\ q &= -a \operatorname{grad} \theta. \end{aligned}$$

This regularity result is true even for $Q \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$; see [88] and Chap. 15, Theorem 15.2.3.

#### **The Scalar Wave Equation**

The classical scalar wave equation in a medium $\Omega \subseteq \mathbb{R}^d$ (think, for instance, of a vibrating string ($d = 1$) or membrane ($d = 2$)) consists of the equation of the balance of momentum, where the acceleration of the (vertical) displacement, $u\colon \mathbb{R} \times \Omega \to \mathbb{R}$, is balanced by external forces, $f\colon \mathbb{R} \times \Omega \to \mathbb{R}$, and the divergence of the stress, $\sigma\colon \mathbb{R} \times \Omega \to \mathbb{R}^d$, in such a way that

$$
\partial\_t^2 u - \text{div}\,\sigma = f.
$$

The stress is related to *u* via the following so-called stress-strain relation (here Hooke's law)

$$\sigma = T \operatorname{grad} u,$$

where the so-called elasticity tensor, $T\colon \Omega \to \mathbb{R}^{d \times d}$, is bounded, measurable, and satisfies

$$T(\mathbf{x}) = T(\mathbf{x})^\* \geqslant c$$

for some $c > 0$ uniformly in $x \in \Omega$. The quantity $\operatorname{grad} u$ is referred to as the strain. We think of $u$ as being fixed at $\partial\Omega$ ("clamped boundary condition"). This is modelled by $u \in \operatorname{dom}(\operatorname{grad}_0)$.

Using $v := \partial_t u$ as an unknown, we can rewrite the balance of momentum and Hooke's law as a $2 \times 2$ block operator matrix equation

$$
\left(\partial_t \begin{pmatrix} 1 & 0 \\ 0 & T^{-1} \end{pmatrix} - \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}\right) \begin{pmatrix} v \\ \sigma \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
$$

The solution theory of evolutionary equations for the wave equation now reads as follows:

**Theorem 6.2.6** *Let $\Omega \subseteq \mathbb{R}^d$ be open, and $T$ as indicated above. Then, for all $\nu > 0$,*

$$
\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & T^{-1} \end{pmatrix} - \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}
$$

*is densely defined and closable in $L_{2,\nu}\left(\mathbb{R}; L_2(\Omega) \times L_2(\Omega)^d\right)$. The respective closure is continuously invertible with causal inverse being eventually independent of $\nu$.*

*Proof* We apply Theorem 6.2.1 to $A = -\begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}$, which is skew-selfadjoint by Proposition 6.2.3(a), and $M(z) = \begin{pmatrix} 1 & 0 \\ 0 & T^{-1} \end{pmatrix}$, which defines a material law with $\operatorname{s_b}(M) = -\infty$. The positive definiteness constraint needed in Theorem 6.2.1 is satisfied by Proposition 6.2.3(b) on account of the selfadjointness of $T$, which implies the same for $T^{-1}$. Indeed, for $\nu_0 > 0$ and $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_0}$ we estimate

$$
\begin{aligned}
\operatorname{Re} \langle (x, y), zM(z)(x, y) \rangle_{L_2(\Omega) \times L_2(\Omega)^d} &= \operatorname{Re} \langle x, zx \rangle_{L_2(\Omega)} + \operatorname{Re} \langle y, zT^{-1}y \rangle_{L_2(\Omega)^d} \\
&\geqslant \nu_0 \|x\|_{L_2(\Omega)}^2 + \nu_0 \frac{c}{\|T\|^2} \|y\|_{L_2(\Omega)^d}^2 \\
&\geqslant \nu_0 \min\{1, c/\|T\|^2\} \|(x, y)\|_{L_2(\Omega) \times L_2(\Omega)^d}^2
\end{aligned}
$$

for each *(x, y)* <sup>∈</sup> *<sup>L</sup>*2*()*×*L*2*()<sup>d</sup>* , where we used the selfadjointness of *<sup>T</sup>* <sup>−</sup><sup>1</sup> in the second line.

*Remark 6.2.7* Let $f \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$, $\nu > 0$, and define

$$
\begin{pmatrix} u \\ \widetilde{\sigma} \end{pmatrix} = \overline{\left(\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & T^{-1} \end{pmatrix} - \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}\right)}^{\,-1} \begin{pmatrix} \partial_{t,\nu}^{-1} f \\ 0 \end{pmatrix}.
$$

By Theorem 6.2.1, we obtain $\begin{pmatrix} u \\ \widetilde{\sigma} \end{pmatrix} \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}\left(\begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}\right)$. Hence, we have

$$
\begin{aligned}
\partial_{t,\nu} u - \operatorname{div} \widetilde{\sigma} &= \partial_{t,\nu}^{-1} f, \\
\partial_{t,\nu} T^{-1} \widetilde{\sigma} &= \operatorname{grad}_0 u,
\end{aligned}
$$

or

$$
\begin{aligned}
\partial_{t,\nu} u - \operatorname{div} \widetilde{\sigma} &= \partial_{t,\nu}^{-1} f, \\
\widetilde{\sigma} &= T \partial_{t,\nu}^{-1} \operatorname{grad}_0 u.
\end{aligned}
$$

Thus, formally, after another time-differentiation and setting $\sigma = \partial_{t,\nu} \widetilde{\sigma}$, we obtain a solution of the wave equation, $(u, \sigma)$. Notice, however, that differentiating $\operatorname{div} \widetilde{\sigma}$ cannot be done without any additional knowledge of the regularity of $\widetilde{\sigma}$. In fact, in order to arrive at the balance of momentum equation, one would need to have $\operatorname{div} \widetilde{\sigma} \in \operatorname{dom}(\partial_{t,\nu})$. However, one only has $\widetilde{\sigma} \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(\operatorname{div})$. It is an elementary argument, see [110, Lemma 4.6], that we in fact have $\operatorname{div} \partial_{t,\nu}^{-1} = \overline{\partial_{t,\nu}^{-1} \operatorname{div}}$, which suggests that, in general, $\operatorname{div} \widetilde{\sigma} \notin \operatorname{dom}(\partial_{t,\nu})$; see Exercise 6.6.
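As a companion to this remark, here is a minimal numerical sketch (ours, not from the text) of the first-order system $(v, \sigma)$ in one space dimension with clamped boundary, using a staggered grid and a symplectic Euler time step; the scalar $T$, grid sizes and step counts are illustrative assumptions. The skew-selfadjoint structure of the spatial operator shows up as (approximate) conservation of the discrete energy.

```python
import numpy as np

# 1-D first-order wave system on (0,1):
#   v_t = sigma_x,  sigma_t = T v_x,  with clamped boundary v(0) = v(1) = 0.
# v lives on the grid points, sigma on the midpoints (staggered grid).
n, T_coeff = 200, 1.0
h = 1.0 / n
dt = 0.5 * h / np.sqrt(T_coeff)          # CFL condition

xv = np.linspace(0, 1, n + 1)
v = np.sin(np.pi * xv)                   # initial velocity, zero at the boundary
v[0] = v[-1] = 0.0
sigma = np.zeros(n)                      # initial stress on midpoints

def energy(v, sigma):
    # discrete energy  (1/2)∫ v^2 + (1/2)∫ T^{-1} sigma^2
    return 0.5 * h * (np.sum(v ** 2) + np.sum(sigma ** 2) / T_coeff)

E0 = energy(v, sigma)
for _ in range(500):
    sigma += dt * T_coeff * np.diff(v) / h   # sigma_t = T v_x
    v[1:-1] += dt * np.diff(sigma) / h       # v_t = sigma_x (interior points only)

assert abs(energy(v, sigma) - E0) < 0.05 * E0   # energy stays (nearly) conserved
```

Updating $\sigma$ first and then $v$ with the new $\sigma$ makes the scheme symplectic, so the energy only oscillates slightly instead of drifting.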

#### **Maxwell's Equations**

The final example in this chapter forms the archetypical evolutionary equation—Maxwell's equations in a medium $\Omega \subseteq \mathbb{R}^3$. In order to identify the particular choices of $M(\partial_{t,\nu})$ and $A$ in the present situation (and to finally conclude the $2 \times 2$ block matrix formulation historically due to the work of [59, 64, 102]), we start out with Faraday's law of induction, which relates the unknown electric field, $E\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$, to the magnetic induction, $B\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$, via

$$
\partial\_t B + \operatorname{curl} E = 0.
$$

We assume that the medium is contained in a perfect conductor, which is reflected in the so-called electric boundary condition which asks for the vanishing of the tangential component of *E* at the boundary. This is modelled by *E* ∈ dom*(*curl0*)*. The next constituent of Maxwell's equations is Ampère's law

$$
\partial\_t D + j\_c - \operatorname{curl} H = j\_0,
$$

which relates the unknown electric displacement, $D\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$, (free) current (density), $j_c\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$, and magnetic field, $H\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$, to the (given) external currents, $j_0\colon \mathbb{R} \times \Omega \to \mathbb{R}^3$. Maxwell's equations are completed by constitutive relations specific to each material at hand. Indeed, the (bounded, measurable) dielectricity, $\varepsilon\colon \Omega \to \mathbb{R}^{3 \times 3}$, and the (bounded, measurable) magnetic permeability, $\mu\colon \Omega \to \mathbb{R}^{3 \times 3}$, are symmetric matrix-valued functions which couple the electric displacement to the electric field and the magnetic field to the magnetic induction via

$$D = \varepsilon E, \text{ and } B = \mu H.$$

Finally, Ohm's law relates the current to the electric field via the (bounded, measurable) electric conductivity, $\sigma\colon \Omega \to \mathbb{R}^{3 \times 3}$, as

$$j\_c = \sigma E.$$

All in all, in terms of *(E, H )*, Maxwell's equations read

$$
\left(\partial_t \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}\right) \begin{pmatrix} E \\ H \end{pmatrix} = \begin{pmatrix} j_0 \\ 0 \end{pmatrix}.
$$

For the time being, we shall assume that there exist $c > 0$ and $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ we have

$$
\nu \varepsilon(x) + \operatorname{Re} \sigma(x) \geqslant c, \quad \mu(x) \geqslant c \quad (x \in \Omega),
$$

in the sense of positive definiteness. Note that the latter condition in particular allows for $\varepsilon = 0$ on certain regions, provided $\operatorname{Re} \sigma$ compensates for this. Approximating small $\varepsilon$ by $0$ in these regions is referred to as the eddy current approximation. With the above preparations at hand, we may now formulate the well-posedness result concerning Maxwell's equations.

**Theorem 6.2.8** *Let $\Omega \subseteq \mathbb{R}^3$ be open and $\nu \geqslant \nu_0$. Then*

$$
\partial_{t,\nu} \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}
$$

*is densely defined and closable in $L_{2,\nu}\left(\mathbb{R}; L_2(\Omega)^3 \times L_2(\Omega)^3\right)$. The respective closure is continuously invertible with causal inverse being eventually independent of $\nu$.*

*Proof* The assertion follows from Theorem 6.2.1 applied to the material law

$$M(z) = \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + z^{-1} \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix}$$

and the skew-selfadjoint operator

$$A = \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}.$$
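The positive definiteness assumption entering this proof can be probed numerically at a single spatial point (a sketch of ours, not from the text; NumPy, the sizes and the random sampling of $z$ are illustrative assumptions). We take $\varepsilon = 0$ — the eddy current regime — together with $\operatorname{Re} \sigma \geqslant c$ and $\mu \geqslant c$, and verify that $\operatorname{Re} zM(z)$ stays uniformly positive definite on the half-plane $\operatorname{Re} z \geqslant \nu_0$.

```python
import numpy as np

rng = np.random.default_rng(2)

def re(T):
    """Real part of an operator: Re T = (T + T*)/2."""
    return (T + T.conj().T) / 2

c, nu0 = 1.0, 1.0
eps = np.zeros((3, 3))                        # eddy current regime: eps = 0
sigma = 0.3 * rng.standard_normal((3, 3)) + 0.3j * rng.standard_normal((3, 3))
sigma += (c - np.linalg.eigvalsh(re(sigma)).min()) * np.eye(3)  # Re sigma >= c
mu = 2.0 * np.eye(3)                          # mu >= c

def zM(z):
    """z M(z) = z diag(eps, mu) + diag(sigma, 0) at a fixed point x."""
    Z = np.zeros((3, 3))
    return np.block([[z * eps + sigma, Z], [Z, z * mu]])

# Re z M(z) is uniformly positive definite on the half-plane Re z >= nu0,
# even though eps vanishes: Re sigma carries the electric block.
for _ in range(100):
    z = nu0 + rng.exponential() + 10j * rng.standard_normal()
    assert np.linalg.eigvalsh(re(zM(z))).min() >= c - 1e-9
```

The electric block of $\operatorname{Re} zM(z)$ is $\operatorname{Re} \sigma$ here, independently of $z$, which is precisely how $\operatorname{Re} \sigma$ compensates for a vanishing $\varepsilon$.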

*Remark 6.2.9* In the physics literature (see e.g. [40, Chapter 18]), Maxwell's equations are usually complemented by Gauss' law,

$$\text{div}\_0 \, B = 0,$$

as well as the introduction of the charge density, *ρ* = div *εE*, and the current, *j* = *j*<sup>0</sup> − *jc*, by the continuity equation

$$
\partial\_t \rho = \text{div} \, j.
$$

We shall argue in the following that these equations are *automatically* satisfied if $(E, H)$ is a solution to Maxwell's equations. Indeed, assume $j_0 \in \operatorname{dom}(\partial_{t,\nu})$. Then, as a consequence of Theorem 6.2.1, for

$$
\begin{pmatrix} E \\ H \end{pmatrix} = \overline{\left(\partial_{t,\nu} \begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}\right)}^{\,-1} \begin{pmatrix} j_0 \\ 0 \end{pmatrix}
$$

we observe $\begin{pmatrix} E \\ H \end{pmatrix} \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}\left(\begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}\right)$. Reformulating the latter equation yields

$$
\begin{aligned}
B &= \mu H = -\partial_{t,\nu}^{-1} \operatorname{curl}_0 E, \\
\varepsilon E &= \partial_{t,\nu}^{-1} \left(-\sigma E + j_0 + \operatorname{curl} H\right) = \partial_{t,\nu}^{-1} j + \partial_{t,\nu}^{-1} \operatorname{curl} H.
\end{aligned}
$$

Since $\operatorname{curl}_0 E \in \operatorname{ran}(\operatorname{curl}_0)$, we have by Proposition 3.1.6(b) that $\partial_{t,\nu}^{-1} \operatorname{curl}_0 E \in \operatorname{ran}(\operatorname{curl}_0)$. Thus, by Proposition 6.1.4, we obtain

$$\operatorname{div}\_0 B = \operatorname{div}\_0 \left( -\partial\_{\mathfrak{r},\boldsymbol{\nu}}^{-1} \operatorname{curl}\_0 E \right) = 0.$$

Similarly, we deduce that

$$\rho = \operatorname{div} \varepsilon E = \operatorname{div} \partial\_{t, \upsilon}^{-1} j.$$

If, in addition, we have that *j* ∈ dom*(*div*)*, we recover the continuity equation. In general, the continuity equation is satisfied in the integrated sense just derived.

We shall leave the list of examples at that for now. In the course of this book, we will see more (involved) examples. Furthermore, we will study the boundary conditions more deeply and shall relate the conditions introduced abstractly here to more classical formulations involving trace spaces.

## **6.3 Proof of Picard's Theorem**

In this section we shall prove the well-posedness theorem. For this, we recall an elementary result from functional analysis, reminiscent of the Lax–Milgram lemma.

**Proposition 6.3.1** *Let H be a Hilbert space and B* : dom*(B)* ⊆ *H* → *H densely defined and closed. Assume there exists c >* 0 *such that*

$$\operatorname{Re}\left\langle \phi, B\phi \right\rangle\_H \ge c \left\| \phi \right\|\_H^2 \quad (\phi \in \operatorname{dom}(B)),$$

$$\operatorname{Re}\left\langle \psi, B^\*\psi \right\rangle\_H \ge c \left\| \psi \right\|\_H^2 \quad (\psi \in \operatorname{dom}(B^\*)).$$

*Then $B^{-1} \in L(H)$ and $\|B^{-1}\| \leqslant 1/c$.*

*Proof* Since *B* is not necessarily bounded here, the present argument requires a refinement of the one in Proposition 6.2.3. In fact, the first assumed inequality implies closedness of the range of *B* as well as continuous invertibility with *<sup>B</sup>*−<sup>1</sup> : ran*(B)* <sup>→</sup> *<sup>H</sup>*. The fact that ran*(B)* is dense in *<sup>H</sup>* follows from the second inequality.

*Remark 6.3.2* In the proof of Theorem 6.2.1, we will apply Proposition 6.3.1 in a situation where $\operatorname{dom}(B^*) \subseteq \operatorname{dom}(B)$. In this case, the condition

$$\operatorname{Re}\left\langle \phi, B\phi \right\rangle\_H \geqslant c \left\| \phi \right\|\_H^2 \quad (\phi \in \operatorname{dom}(B))$$

readily implies

$$\operatorname{Re} \left\langle \psi, B^\* \psi \right\rangle\_H \geqslant c \parallel \psi \parallel\_H^2 \quad (\psi \in \operatorname{dom}(B^\*)).$$
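The implication can be spelled out in one line: for $\psi \in \operatorname{dom}(B^*) \subseteq \operatorname{dom}(B)$, the definition of the adjoint gives

$$\operatorname{Re} \left\langle \psi, B^* \psi \right\rangle_H = \operatorname{Re} \left\langle B\psi, \psi \right\rangle_H = \operatorname{Re} \left\langle \psi, B\psi \right\rangle_H \geqslant c \|\psi\|_H^2.$$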

Next, we turn to the proof of Picard's theorem. For this, we recall that we do not notationally distinguish between the operator $A$ defined on $H$ and its extension to $H$-valued functions. Which realisation of $A$ is considered will always be obvious from the context; see also Exercises 6.1 and 6.2.

*Proof of Theorem 6.2.1* Let $\nu \geqslant \nu_0$ and $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}$. Define $B(z) := zM(z) + A$. Since $M(z) \in L(H)$ it follows from Theorem 2.3.2 that $B(z)^* = (zM(z))^* - A$ and $\operatorname{dom}(B(z)) = \operatorname{dom}(B(z)^*) = \operatorname{dom}(A)$. Moreover, for all $\phi \in \operatorname{dom}(A)$ we have

$$\operatorname{Re}\left\langle \phi, B(z)\phi \right\rangle\_H = \operatorname{Re}\left\langle \phi, \left(zM(z) + A\right)\phi \right\rangle\_H = \operatorname{Re}\left\langle \phi, zM(z)\phi \right\rangle\_H \geqslant c \left\| \phi \right\|\_H^2,$$

due to the skew-selfadjointness of *A*. Thus, by Proposition 6.3.1 (see also Remark 6.3.2) applied to *B(z)* instead of *B*, we deduce that

$$S \colon \mathbb{C}\_{\mathbf{Re} \geqslant \nu} \ni z \mapsto B(z)^{-1}$$

is bounded and assumes values in $L(H)$ with norm bounded by $1/c$. By Exercise 6.5, we have that $S$ is holomorphic. Thus, $S$ is a material law and $\|S(\partial_{t,\nu})\| \leqslant 1/c$ by Proposition 5.3.2. Moreover, Theorem 5.3.6 implies that $S(\partial_{t,\nu})$ is independent of $\nu$ and causal.

Next, if *<sup>f</sup>* <sup>∈</sup> dom*(∂t ,ν),* it follows that *(*im <sup>+</sup> *ν)Lνf* <sup>∈</sup> *<sup>L</sup>*2*(*R; *H )*. Hence, for all *<sup>t</sup>* <sup>∈</sup> <sup>R</sup> we obtain

$$\begin{aligned} AS(\mathrm{i}t+\nu)\mathcal{L}\_{\upsilon}f(t) &= A\left((\mathrm{i}t+\nu)\,M(\mathrm{i}t+\nu)+A\right)^{-1}\mathcal{L}\_{\upsilon}f(t) \\ &= \mathcal{L}\_{\upsilon}f(t) - (\mathrm{i}t+\nu)\,M(\mathrm{i}t+\nu)S(\mathrm{i}t+\nu)\mathcal{L}\_{\upsilon}f(t). \end{aligned}$$

Thus, by the boundedness of *<sup>M</sup>* and *<sup>S</sup>*, we deduce *S(*<sup>i</sup> · +*ν)Lνf* <sup>∈</sup> *<sup>L</sup>*2*(*R; dom*(A))*. This implies *S(∂t ,ν )f* <sup>∈</sup> *<sup>L</sup>*2*,ν(*R; dom*(A))* by Exercise 6.2. Similarly, but more easily, it follows that *(*<sup>i</sup> · +*ν) S(*<sup>i</sup> · +*ν)Lνf* <sup>∈</sup> *<sup>L</sup>*2*(*R; *H )* also, which shows *S(∂t ,ν)f* ∈ dom*(∂t ,ν)*.

We now define the operator *B(*im + *ν)* by

$$\text{dom}(B(\text{im} + \nu)) := \left\{ f \in L\_2(\mathbb{R}; H) : f(t) \in \text{dom}(A) \text{ for a.e.} \ t \in \mathbb{R}, \right.$$

$$(t \mapsto B(\text{it} + \nu)f(t)) \in L\_2(\mathbb{R}; H) \right\}$$

and

$$B(\mathrm{im} + \nu)f := (t \mapsto B(\mathrm{it} + \nu)f(t)) \quad (f \in \mathrm{dom}(B(\mathrm{im} + \nu))).$$

Then one easily sees that *B(*im <sup>+</sup> *ν)* <sup>=</sup> *S(*im <sup>+</sup> *ν)*−<sup>1</sup> and since *S(*im <sup>+</sup> *ν)* is closed, it follows that *B(*im + *ν)* is closed as well. Moreover

$$(\text{im} + \nu)M(\text{im} + \nu) + A \subseteq B(\text{im} + \nu)$$

and hence, the operator *(*im + *ν)M(*im + *ν)* + *A* is closable, which also yields the closability of *∂t ,νM(∂t ,ν)* + *A* by unitary equivalence. To complete the proof, we have to show that

$$\overline{(\mathrm{im} + \nu)M(\mathrm{im} + \nu) + A} = B(\mathrm{im} + \nu),$$

as this equality implies $S(\partial_{t,\nu}) = \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1}$ by unitary equivalence. For showing the asserted equality, let $f \in \operatorname{dom}(B(\mathrm{im} + \nu))$. For $n \in \mathbb{N}$ we define $f_n := \mathbb{1}_{[-n,n]} f$. Then $f_n \in \operatorname{dom}(\mathrm{im} + \nu) \cap \operatorname{dom}(A) \subseteq \operatorname{dom}\left(\overline{(\mathrm{im} + \nu)M(\mathrm{im} + \nu) + A}\right)$ for each $n \in \mathbb{N}$ and, by dominated convergence, we have that $f_n \to f$ as $n \to \infty$ as well as

$$\left( (\text{im} + \nu)M(\text{im} + \nu) + A \right) f\_{\boldsymbol{n}} = B(\text{im} + \nu)f\_{\boldsymbol{n}}$$

$$= \mathbb{1}\_{[-n, n]} B(\text{im} + \nu)f \to B(\text{im} + \nu)f$$

as $n \to \infty$. This shows that $f \in \operatorname{dom}\left(\overline{(\mathrm{im} + \nu)M(\mathrm{im} + \nu) + A}\right)$ and hence, the assertion follows.

*Remark 6.3.3* Note that Theorem 6.2.1 can partly be generalised in the following way (with the same proof). Let $M\colon \mathbb{C}_{\operatorname{Re} > \nu_0} \to L(H)$ be holomorphic and $A$ a closed, densely defined operator in $H$ such that $zM(z) + A$ is boundedly invertible for all $z \in \mathbb{C}_{\operatorname{Re} > \nu_0}$ and that $\sup_{z \in \mathbb{C}_{\operatorname{Re} > \nu_0}} \|(zM(z) + A)^{-1}\|_{L(H)} < \infty$. Then $S_\nu \in L(L_{2,\nu}(\mathbb{R}; H))$ is causal and eventually independent of $\nu$.

*Remark 6.3.4* As the proof of Theorem 6.2.1 shows, for $\nu \geqslant \nu_0$ we have that $S\colon \mathbb{C}_{\operatorname{Re} \geqslant \nu} \ni z \mapsto (zM(z) + A)^{-1} \in L(H)$ is a material law and $S_\nu = S(\partial_{t,\nu})$. Thus, the solution operator is a material law operator, and by Remark 5.3.3 applied to $S$ and $z \mapsto \frac{1}{z} 1_H$ we obtain

$$
S_\nu \partial_{t,\nu} \subseteq \partial_{t,\nu} S_\nu.
$$

## **6.4 Comments**

The proof of Theorem 6.2.1 given here is rather close to the strategy originally employed in [82], at least where existence and uniqueness are concerned. The causality part is a consequence of some observations detailed in [52, 131]. The original proof of causality used the Theorem of Paley and Wiener, which we shall discuss later on.

The eddy current approximation has enjoyed great interest in the mathematical and physical community, in particular for the case when $\varepsilon = 0$ everywhere, the reason being that Maxwell's equations are then merely of parabolic type. We refer to [79] and the references therein for an extensive discussion.

Both Proposition 6.3.1 and the Lax–Milgram lemma have been put into a general perspective in [89].

## **Exercises**

**Exercise 6.1** Let $(\Omega, \Sigma, \mu)$ be a $\sigma$-finite measure space and let $H_0, H_1$ be Hilbert spaces. Let $A\colon \operatorname{dom}(A) \subseteq H_0 \to H_1$ be densely defined and closed. Show that the operator

$$\begin{aligned} A\_{\mu} \colon L\_2(\mu; \text{dom}(A)) \subseteq L\_2(\mu; H\_0) \to L\_2(\mu; H\_1) \\ f \mapsto \left(\omega \mapsto Af(\omega)\right), \end{aligned}$$

is densely defined and closed. Moreover, show that $\left(A_\mu\right)^* = (A^*)_\mu$.

**Exercise 6.2** In the situation of Exercise 6.1, if $(\Omega_1, \Sigma_1, \mu_1)$ is another $\sigma$-finite measure space and $F\colon L_2(\mu) \to L_2(\mu_1)$ is unitary, show that for $j \in \{0, 1\}$ there exists a unique unitary operator $F_{H_j}\colon L_2(\mu; H_j) \to L_2(\mu_1; H_j)$ such that

$$F_{H_j}(\phi x) = (F\phi)x \quad (\phi \in L_2(\mu),\ x \in H_j).$$

Furthermore, prove that

$$
\mathcal{F}\_{H\_1} A\_{\mu} \mathcal{F}\_{H\_0}^\* = A\_{\mu\_1}.
$$

**Exercise 6.3** Show that for $\Omega \subseteq \mathbb{R}^d$ open, the set $C_c^\infty(\Omega) \subseteq L_2(\Omega)$ is dense.

**Exercise 6.4** Prove Theorem 6.1.2.

**Exercise 6.5** Let *H* be a Hilbert space, *A*: dom*(A)* ⊆ *H* → *H* skew-selfadjoint, and *c >* 0. Moreover, let *<sup>M</sup>* : dom*(M)* <sup>⊆</sup> <sup>C</sup> <sup>→</sup> *L(H )* be holomorphic with

$$\operatorname{Re} M(z) \geqslant c \quad (z \in \operatorname{dom}(M)).$$

Show that $\operatorname{dom}(M) \ni z \mapsto (M(z) + A)^{-1}$ is holomorphic.

**Exercise 6.6** Let *C* : dom*(C)* ⊆ *H*<sup>0</sup> → *H*<sup>1</sup> be a densely defined and closed linear operator acting in Hilbert spaces *H*<sup>0</sup> and *H*1. For *ν >* 0 show that

$$
\overline{\partial_{t,\nu}^{-1} C} = C \partial_{t,\nu}^{-1}.
$$

Hint: Apply Exercise 6.2 and show $\overline{(\mathrm{im} + \nu)^{-1} C} = C (\mathrm{im} + \nu)^{-1}$ with a suitable approximation argument.

**Exercise 6.7** Let $\Omega \subseteq \mathbb{R}^d$ be open.


$$D := \left\{ \phi \in H^1(\Omega) \; ; \; \text{grad}\,\phi \in \text{dom}(\text{div}), \phi = \text{div}\,\text{grad}\,\phi \right\} \subseteq C^\infty(\Omega).$$

and show that $C^\infty(\Omega) \cap H^1(\Omega) \subseteq H^1(\Omega)$ is dense.

*Remark* The regularity assumption in (b) always holds and is known as Weyl's Lemma, see e.g. [45, Corollary 8.11], where the more general situation of an elliptic operator with smooth coefficients is treated. See also [32, p.127], where the regularity is shown for harmonic distributions.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 7 Examples of Evolutionary Equations**

This chapter is devoted to a small tour through a variety of evolutionary equations. More precisely, we shall look into the equations of poro-elastic media, (time-)fractional elasticity, thermodynamic media with delay as well as visco-elastic media. The discussion of these examples will be similar to that of the examples in the previous chapter in the sense that we shall present the equations first, reformulate them suitably and then apply the solution theory to them. The study of visco-elastic media within the framework of partial integro-differential equations will be carried out in the exercises section.

## **7.1 Poro-Elastic Deformations**

In this section we will discuss the equations of poro-elasticity, which form a coupled system of equations. More precisely, the equations of (linearised) elasticity are coupled with the diffusion equation. Before properly writing these equations we introduce the following notation and differential operators.

**Definition** Let $\mathbb{K}^{d \times d}_{\mathrm{sym}} := \left\{ A \in \mathbb{K}^{d \times d} \;;\; A = A^\top \right\} \subseteq \mathbb{K}^{d \times d}$ be the (closed) subspace of symmetric $d \times d$ matrices. Let $\Omega \subseteq \mathbb{R}^d$ be open. Then define

$$\begin{aligned} L\_2(\mathfrak{Q})^{d \times d}\_{\text{sym}} &:= L\_2(\mathfrak{Q}; \mathbb{K}^{d \times d}\_{\text{sym}}) \\ &= \left\{ (\Phi\_{jk})\_{j,k \in \{1, \ldots, d\}} \in L\_2(\mathfrak{Q})^{d \times d} \; ; \; \forall j, k \in \{1, \ldots, d\} \colon \Phi\_{jk} = \Phi\_{kj} \right\}. \end{aligned}$$

Analogously, we set $C_{\mathrm{c}}^\infty(\Omega)^{d\times d}_{\mathrm{sym}} := C_{\mathrm{c}}^\infty(\Omega; \mathbb{K}^{d\times d}_{\mathrm{sym}})$.

Note that the symmetry of a $d \times d$ matrix here means that the matrix elements are symmetric with respect to the main diagonal. For $\mathbb{K} = \mathbb{C}$, this does not correspond to the symmetry of the associated linear operator (which would rather be selfadjointness).

**Definition** Let $\Omega \subseteq \mathbb{R}^d$ be open. Then we define

$$\operatorname{Grad}_{\mathrm{c}} \colon C_{\mathrm{c}}^{\infty}(\Omega)^{d} \subseteq L_{2}(\Omega)^{d} \to L_{2}(\Omega)^{d \times d}_{\text{sym}}$$

$$\left(\phi\_{j}\right)\_{j \in \{1, \ldots, d\}} \mapsto \frac{1}{2} \left(\partial\_{k}\phi\_{j} + \partial\_{j}\phi\_{k}\right)\_{j, k \in \{1, \ldots, d\}},$$

and

$$\operatorname{Div}_{\mathrm{c}} \colon C_{\mathrm{c}}^{\infty}(\Omega)^{d \times d}_{\text{sym}} \subseteq L_2(\Omega)^{d \times d}_{\text{sym}} \to L_2(\Omega)^d$$

$$\left(\Phi\_{jk}\right)\_{j,k \in \{1, \ldots, d\}} \mapsto \left(\sum\_{k=1}^d \partial\_k \Phi\_{jk}\right)\_{j \in \{1, \ldots, d\}}$$

Similarly to the definitions in the previous chapter, we put $\operatorname{Grad} := -\operatorname{Div}_{\mathrm{c}}^*$, $\operatorname{Div} := -\operatorname{Grad}_{\mathrm{c}}^*$ and $\operatorname{Grad}_0 := -\operatorname{Div}^*$, $\operatorname{Div}_0 := -\operatorname{Grad}^*$, where (analogously to the scalar-valued case) we observe that $\operatorname{Grad}_{\mathrm{c}} \subseteq -\operatorname{Div}_{\mathrm{c}}^*$, motivating the notation $\operatorname{Grad}$ and $\operatorname{Grad}_0$.

*Remark 7.1.1* Note that in the literature Grad *u* is also denoted by *ε(u)* and is called the *strain tensor*. Due to the (obvious) similarity to the scalar case, we refrain from using *ε* in this context and prefer Grad instead. Again, the index 0 in the operators refers to generalised Dirichlet (for Grad0) or Neumann (for Div0) boundary conditions.

We are now properly equipped to formulate the equations of poro-elasticity; see also [69] and below for further details. In an elastic body $\Omega \subseteq \mathbb{R}^d$, the displacement field, $u \colon \mathbb{R} \times \Omega \to \mathbb{R}^d$, and the pressure field, $p \colon \mathbb{R} \times \Omega \to \mathbb{R}$, of a fluid diffusing through $\Omega$ satisfy the following two balance equations

$$\begin{aligned} \partial\_t \rho \, \partial\_t u - \operatorname{grad} \partial\_t \lambda \operatorname{div} u - \operatorname{Div} C \operatorname{Grad} u + \operatorname{grad} \alpha^\* p &= f, \\ \partial\_t (c\_0 p + \alpha \operatorname{div} u) - \operatorname{div} k \operatorname{grad} p &= g. \end{aligned}$$

The right-hand sides $f \colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ and $g \colon \mathbb{R} \times \Omega \to \mathbb{R}$ describe some given external forcing. We assume homogeneous Neumann boundary conditions for the diffusing fluid as well as homogeneous Dirichlet (i.e., clamped) boundary conditions for the elastic body. The operator $\rho \in L(L_2(\Omega)^d)$ describes the density of the medium (usually realised as a multiplication operator by a bounded, measurable, scalar function). The bounded linear operators $C \in L(L_2(\Omega)^{d\times d}_{\mathrm{sym}})$ and $k \in L(L_2(\Omega)^d)$ are the elasticity tensor and the hydraulic conductivity of the medium, whereas $c_0, \lambda \in L(L_2(\Omega))$ are the porosity of the medium and the compressibility of the fluid, respectively. The operator $\alpha \in L(L_2(\Omega))$ is the so-called Biot–Willis constant. Note that in many applications $\rho, c_0, \lambda$ and $\alpha$ are just positive real numbers, and $C$ and $k$ are strictly positive definite tensors or matrices.

The reformulation of the equations for poro-elasticity involves several 'tricks'. One of these is to introduce the matrix trace as the operator

$$\text{trace}: L\_2(\Omega)^{d \times d}\_{\text{sym}} \to L\_2(\Omega)$$

$$(\Phi\_{jk})\_{j,k \in \{1, \dots, d\}} \mapsto \sum\_{j=1}^d \Phi\_{jj}.$$

Note that the adjoint is given by $\operatorname{trace}^* f = \operatorname{diag}(f, \ldots, f) \in L_2(\Omega)^{d\times d}_{\mathrm{sym}}$. It is then elementary to obtain $\operatorname{trace} \operatorname{Grad} \subseteq \operatorname{div}$ as well as $\operatorname{grad} = \operatorname{Div} \operatorname{trace}^*$. Hence, we formally get

$$\begin{aligned} \partial_t \rho \partial_t u - \operatorname{Div}\left( \left( \partial_t \operatorname{trace}^* \lambda \operatorname{trace} + C \right) \operatorname{Grad} u - \operatorname{trace}^* \alpha^* p \right) &= f, \\ \partial_t \left( c_0 p + \alpha \operatorname{trace} \operatorname{Grad} u \right) - \operatorname{div} k \operatorname{grad} p &= g. \end{aligned}$$
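The adjoint identity $\operatorname{trace}^* f = \operatorname{diag}(f, \ldots, f)$ used above can be sanity-checked at the matrix level. The following Python sketch is purely illustrative (the dimension $d$ and the random samples are arbitrary choices, not part of the text); it verifies the adjoint relation $\langle \operatorname{trace}\Phi, f\rangle = \langle \Phi, \operatorname{trace}^* f\rangle$ with the Frobenius inner product:

```python
import random

# purely illustrative finite-dimensional check of trace* f = diag(f, ..., f);
# d and the random samples below are arbitrary choices
random.seed(0)
d = 4

A = [[random.random() for _ in range(d)] for _ in range(d)]
Phi = [[(A[j][k] + A[k][j]) / 2 for k in range(d)] for j in range(d)]  # symmetric sample
f = random.random()                                                    # scalar sample

trace_Phi = sum(Phi[j][j] for j in range(d))
trace_star_f = [[f if j == k else 0.0 for k in range(d)] for j in range(d)]

# <trace(Phi), f> = <Phi, trace*(f)> with the Frobenius inner product
lhs = trace_Phi * f
rhs = sum(Phi[j][k] * trace_star_f[j][k] for j in range(d) for k in range(d))
assert abs(lhs - rhs) < 1e-12
```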

Next, we introduce a new set of unknowns

$$\begin{aligned} v &:= \partial\_t u, \\ T &:= C \operatorname{Grad} u, \\ \omega &:= \lambda \operatorname{trace} \operatorname{Grad} v - \alpha^\* p, \\ q &:= -k \operatorname{grad} p. \end{aligned}$$

Here, $v$ is the velocity, $T$ is the stress tensor and $q$ is the flux of the diffusing fluid. The quantity $\omega$ is an additional variable, which helps to rewrite the system into the form of evolutionary equations.

In order to finalise the reformulation we shall assume some additional properties on the coefficients involved. Throughout the rest of this section, we assume that

$$\rho = \rho^\* \ge c,$$

$$c\_0 = c\_0^\* \ge c,$$

$$\text{Re}\,\lambda \ge c,$$

$$\text{Re}\,k \ge c,\text{ and}$$

$$C = C^\* \ge c$$

for some *c >* 0, where all inequalities are thought of in the sense of positive definiteness (compare Chap. 6). As a consequence, we obtain

$$\operatorname{trace} \operatorname{Grad} v = \lambda^{-1} \omega + \lambda^{-1} \alpha^* p.$$

Rewriting the defining equations for *T , ω,* and *q* together with the two equations we started out with, we obtain the system

$$\begin{aligned} \partial\_t \rho v - \text{Div} \left( T + \text{trace}^\* \, \omega \right) &= f, \\ \partial\_t c\_0 p + \alpha \lambda^{-1} \omega + \alpha \lambda^{-1} \alpha^\* p + \text{div} \, q &= g, \\ \lambda^{-1} \omega + \lambda^{-1} \alpha^\* p - \text{trace} \, \text{Grad} \, v &= 0, \\ \partial\_t C^{-1} T - \text{Grad} \, v &= 0, \\ k^{-1} q + \text{grad} \, p &= 0. \end{aligned}$$

Note that at this stage of modelling we assumed that we can freely interchange the order of differentiation, so that Grad *∂tu* = *∂t* Grad *u*. Introducing

$$M_0 := \begin{pmatrix} \rho & 0 & 0 & 0 & 0 \\ 0 & c_0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & C^{-1} & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \quad M_1 := \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & \alpha\lambda^{-1}\alpha^* & \alpha\lambda^{-1} & 0 & 0 \\ 0 & \lambda^{-1}\alpha^* & \lambda^{-1} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & k^{-1} \end{pmatrix},\tag{7.1}$$

$$V := \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & \operatorname{trace} & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad A := \begin{pmatrix} 0 & 0 & 0 & -\operatorname{Div} & 0 \\ 0 & 0 & 0 & 0 & \operatorname{div}_0 \\ 0 & 0 & 0 & 0 & 0 \\ -\operatorname{Grad}_0 & 0 & 0 & 0 & 0 \\ 0 & \operatorname{grad} & 0 & 0 & 0 \end{pmatrix},\tag{7.2}$$

we obtain

$$\left( \partial_t M_0 + M_1 + VAV^* \right) \begin{pmatrix} v \\ p \\ \omega \\ T \\ q \end{pmatrix} = \begin{pmatrix} f \\ g \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$

This perspective enables us to prove well-posedness for the equations of poro-elasticity by applying Theorem 6.2.1.

**Theorem 7.1.2** *Put $H := L_2(\Omega)^d \times L_2(\Omega) \times L_2(\Omega) \times L_2(\Omega)^{d\times d}_{\mathrm{sym}} \times L_2(\Omega)^d$ and let $M_0, M_1, V \in L(H)$ and $A$ be given as in (7.1) and (7.2). Then there exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ the operator $\partial_{t,\nu} M_0 + M_1 + VAV^*$ is continuously invertible on $L_{2,\nu}(\mathbb{R}; H)$. The inverse $S_\nu$ of this operator is causal and eventually independent of $\nu$. Moreover, $\sup_{\nu \geqslant \nu_0} \|S_\nu\| < \infty$ and $F \in \operatorname{dom}(\partial_{t,\nu})$ implies $S_\nu F \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(VAV^*)$.*

We will provide two prerequisites for the proof. We ask for the details of the proof of Theorem 7.1.2 in Exercise 7.1.

**Proposition 7.1.3** *Let $H_0$, $H_1$ be Hilbert spaces, $B \colon \operatorname{dom}(B) \subseteq H_0 \to H_0$ skew-selfadjoint, $V \in L(H_0, H_1)$ bijective. Then $(VBV^*)^* = -VBV^*$.*

The proof of Proposition 7.1.3 is left as (part of) Exercise 7.1.

**Proposition 7.1.4** *Let $H$ be a Hilbert space, $N_0, N_1 \in L(H)$ with $N_0 = N_0^*$. Assume there exist $c_0, c_1 > 0$ such that $\langle x, N_0 x\rangle \geqslant c_0 \|x\|^2$ for all $x \in \operatorname{ran}(N_0)$ and $\operatorname{Re}\langle y, N_1 y\rangle \geqslant c_1 \|y\|^2$ for all $y \in \ker(N_0)$. Then for all $0 < c_1' < c_1$ there exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ we have*

$$\nu N\_0 + \text{Re } N\_1 \gg c'\_1.$$

*Proof* Note that by the selfadjointness of *N*<sup>0</sup> we can decompose *H* = ran*(N*0*)* ⊕ ker*(N*0*)*, see Corollary 2.2.6. Let *z* ∈ *H*, and *x* ∈ ran*(N*0*)*, *y* ∈ ker*(N*0*)* such that *z* = *x* + *y*. For *ε, ν >* 0 we estimate

$$\begin{split} &\nu\left<\mathbf{x}+\mathbf{y}, N\_{\mathbf{0}}(\mathbf{x}+\mathbf{y})\right> + \operatorname{Re}\left<\mathbf{x}+\mathbf{y}, N\_{\mathbf{1}}(\mathbf{x}+\mathbf{y})\right> \\ &= \nu\left<\mathbf{x}, N\_{\mathbf{0}}\mathbf{x}\right> + \operatorname{Re}\left<\mathbf{y}, N\_{\mathbf{1}}\mathbf{y}\right> + \operatorname{Re}\left<\mathbf{x}, N\_{\mathbf{1}}\mathbf{x}\right> + \operatorname{Re}\left<\mathbf{x}, N\_{\mathbf{1}}\mathbf{y}\right> + \operatorname{Re}\left<\mathbf{y}, N\_{\mathbf{1}}\mathbf{x}\right> \\ &\geqslant \nu c\_{0} \left\|\mathbf{x}\right\|^{2} + c\_{1} \left\|\mathbf{y}\right\|^{2} - \left\|N\_{\mathbf{1}}\right\| \left\|\mathbf{x}\right\|^{2} - 2\left\|N\_{\mathbf{1}}\right\| \left\|\mathbf{x}\right\| \left\|\mathbf{y}\right\| \\ &\geqslant \nu c\_{0} \left\|\mathbf{x}\right\|^{2} + c\_{1} \left\|\mathbf{y}\right\|^{2} - \left\|N\_{\mathbf{1}}\right\| \left\|\mathbf{x}\right\|^{2} - \frac{1}{\varepsilon} \left\|N\_{\mathbf{1}}\right\|^{2} \left\|\mathbf{x}\right\|^{2} - \varepsilon \left\|\mathbf{y}\right\|^{2} \\ &= \left(\nu c\_{0} - \frac{1}{\varepsilon} \left\|N\_{\mathbf{1}}\right\|^{2} - \left\|N\_{\mathbf{1}}\right\|\right) \left\|\mathbf{x}\right\|^{2} + (c\_{1} - \varepsilon) \left\|\mathbf{y}\right\|^{2}, \end{split}$$

where we have used the Peter–Paul inequality (i.e., Young's inequality for products of non-negative numbers). For $0 < c_1' < c_1$ we find $\varepsilon > 0$ such that $c_1 - \varepsilon > c_1'$. Then we choose $\nu_0 > \frac{1}{c_0}\left(c_1' + \frac{1}{\varepsilon}\|N_1\|^2 + \|N_1\|\right)$. With this choice of $\nu_0$ we deduce for all $\nu \geqslant \nu_0$ that

$$
\nu \left\langle z, N_0 z \right\rangle + \operatorname{Re} \left\langle z, N_1 z \right\rangle \geqslant c_1' \left( \|x\|^2 + \|y\|^2 \right) = c_1' \|z\|^2,
$$

which yields the assertion.
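A concrete finite-dimensional instance of Proposition 7.1.4 can be checked numerically. In the sketch below, $N_0 = \operatorname{diag}(1, 0)$ and the $2\times 2$ matrix $N_1$ are illustrative choices (not taken from the text), and the Frobenius norm is used as a convenient upper bound for $\|N_1\|$, which only enlarges the admissible $\nu_0$:

```python
import math

# 2x2 real illustration of Proposition 7.1.4 (all concrete values are assumptions):
# N0 = diag(1, 0): selfadjoint, <x, N0 x> >= c0 |x|^2 on ran(N0) with c0 = 1;
# N1 satisfies Re <y, N1 y> >= c1 |y|^2 on ker(N0) = span{e2} with c1 = 2.
N1 = [[0.0, 3.0], [-1.0, 2.0]]
c0, c1 = 1.0, 2.0

norm_N1 = math.sqrt(sum(N1[j][k] ** 2 for j in range(2) for k in range(2)))  # Frobenius bound

c1p = 1.0          # any 0 < c1' < c1
eps = 0.5          # then c1 - eps = 1.5 > c1'
nu0 = (c1p + norm_N1 ** 2 / eps + norm_N1) / c0   # threshold from the proof

def min_eig_sym2(a, b, c):
    """Smallest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    return (a + c) / 2 - math.sqrt(((a - c) / 2) ** 2 + b ** 2)

for nu in (nu0, 2 * nu0, 10 * nu0):
    # nu*N0 + Re N1 with Re N1 = (N1 + N1^T)/2
    aa = nu + N1[0][0]
    bb = (N1[0][1] + N1[1][0]) / 2
    cc = N1[1][1]
    assert min_eig_sym2(aa, bb, cc) >= c1p
```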

## **7.2 Fractional Elasticity**

Let $\Omega \subseteq \mathbb{R}^d$ be open. In order to better fit the experimental data of visco-elastic solids (i.e., to incorporate solids that 'memorise' previous forces applied to them) the equations of linearised elasticity need to be extended in some way. The balance law for the momentum, however, is still satisfied; that is, for the displacement $u \colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ we still have that

$$
\partial_t \rho \partial_t u - \operatorname{Div} T = f,
$$

where $\rho \in L(L_2(\Omega)^d)$ models the density and $f \colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ is a given external forcing term. The stress tensor, $T \colon \mathbb{R} \times \Omega \to \mathbb{R}^{d\times d}_{\mathrm{sym}}$, does *not* follow the classical Hooke's law, which would read

$$T = C \operatorname{Grad} u$$

for $C \in L(L_2(\Omega)^{d\times d}_{\mathrm{sym}})$. Instead it is amended by another material-dependent coefficient $D \in L(L_2(\Omega)^{d\times d}_{\mathrm{sym}})$ and a fractional time derivative; that is,

$$T = C \operatorname{Grad} u + D \partial_t^{\alpha} \operatorname{Grad} u,$$

for some $\alpha \in [0, 1]$, where $\partial_t^\alpha := \partial_t \partial_t^{\alpha - 1}$, see Example 5.3.1(e). We shall simplify the present consideration slightly and refer to Exercise 7.2 instead for a more involved example. Throughout this section, we shall assume that

$$C = 0, \quad D = D^* \geqslant c, \quad \text{and} \quad \rho = \rho^* \geqslant c$$

for some *c >* 0. Thus, putting *v* := *∂tu* and assuming the clamped boundary conditions again, we study well-posedness of

$$
\partial\_t \rho v - \text{Div}\, T = f,\tag{7.3}
$$

$$T = D\partial\_t^{\alpha} \operatorname{Grad}\_0 u. \tag{7.4}$$

In order to do that, we first rewrite the second equation. We will make use of the following proposition, which serves to show the bounded invertibility of $\partial_t^\alpha$ (in the space $L_{2,\nu}$), and which will also be employed to obtain well-posedness.

**Proposition 7.2.1** *Let $\nu > 0$, $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}$, $\alpha \in [0, 1]$. Then*

$$
\operatorname{Re} z^{\alpha} \geqslant (\operatorname{Re} z)^{\alpha} \geqslant \nu^{\alpha}.
$$

*Proof* Let us prove the first inequality. Note that without loss of generality, we may assume that $\operatorname{Re} z = 1$. Let $\varphi := \arg z \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Since $\ln \circ \cos$ is concave on $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ (as $(\ln \circ \cos)' = -\tan$ is decreasing) and $(\ln \circ \cos)(0) = 0$, we obtain

$$\ln\cos(\alpha\varphi) = \ln\cos\bigl(\alpha\varphi + (1-\alpha)0\bigr) \geqslant \alpha\ln\cos(\varphi) + (1-\alpha)\ln\cos(0) = \ln\bigl(\cos(\varphi)^\alpha\bigr),$$

and therefore $\cos(\alpha\varphi) \geqslant \cos(\varphi)^\alpha$. Since $\operatorname{Re} z = 1$ implies $|z| = \frac{1}{\cos(\varphi)}$, we obtain

$$\operatorname{Re} z^{\alpha} = \frac{\cos(\alpha \varphi)}{(\cos \varphi)^{\alpha}} \geqslant 1 = (\operatorname{Re} z)^{\alpha}.$$

The second inequality follows from the monotonicity of $x \mapsto x^\alpha$ on $\mathbb{R}_{\geqslant 0}$.
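Proposition 7.2.1 can also be probed numerically. The sketch below uses the principal branch of $z^\alpha = \exp(\alpha \operatorname{Log} z)$ (the branch relevant for the fractional derivative) and illustrative sample values of $\nu$, $\alpha$ and $z$:

```python
import cmath
import itertools

# numerical sanity check of Re z^alpha >= (Re z)^alpha >= nu^alpha on sample points
nu = 0.3
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    for re, im in itertools.product((nu, 1.0, 5.0), (-40.0, -1.0, 0.0, 1.0, 40.0)):
        z = complex(re, im)
        # principal branch z^alpha = exp(alpha * Log z)
        z_alpha = cmath.exp(alpha * cmath.log(z))
        assert z_alpha.real >= re ** alpha - 1e-12 >= nu ** alpha - 1e-12
```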

Applying Proposition 7.2.1 and noting that *D* is boundedly invertible we can reformulate (7.4) as

$$
\partial_{t,\nu}^{-\alpha} D^{-1} T - \operatorname{Grad}_0 u = 0,
$$

so that (7.4) and (7.3) read

$$
\left( \partial_{t,\nu} \begin{pmatrix} \rho & 0 \\ 0 & \partial_{t,\nu}^{-\alpha} D^{-1} \end{pmatrix} - \begin{pmatrix} 0 & \operatorname{Div} \\ \operatorname{Grad}_0 & 0 \end{pmatrix} \right) \begin{pmatrix} v \\ T \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
$$

A solution theory for the latter equation, thus, reads as follows, where again *v* := *∂t ,νu*.

**Theorem 7.2.2** *Put $H := L_2(\Omega)^d \times L_2(\Omega)^{d\times d}_{\mathrm{sym}}$. Then for all $\nu > 0$ the operator*

$$
\partial_{t,\nu} \begin{pmatrix} \rho & 0 \\ 0 & \partial_{t,\nu}^{-\alpha} D^{-1} \end{pmatrix} - \begin{pmatrix} 0 & \operatorname{Div} \\ \operatorname{Grad}_0 & 0 \end{pmatrix},
$$

*is densely defined and closable in <sup>L</sup>*2*,ν(*R; *H ). The inverse of the closure is continuous, causal and eventually independent of ν.*

*Proof* The proof rests on Theorem 6.2.1. Since $\begin{pmatrix} 0 & \operatorname{Div} \\ \operatorname{Grad}_0 & 0 \end{pmatrix}$ is skew-selfadjoint by Proposition 6.2.3(a), it suffices to confirm the positive definiteness condition for the material law. For this let $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}$ and compute for $x \in L_2(\Omega)^{d\times d}_{\mathrm{sym}}$, using Proposition 7.2.1 and Proposition 6.2.3(b),

$$\operatorname{Re}\left\langle x, z z^{-\alpha} D^{-1} x \right\rangle = \operatorname{Re}\left\langle x, z^{1-\alpha} D^{-1} x \right\rangle \geqslant \nu^{1-\alpha} \left\langle x, D^{-1} x \right\rangle \geqslant \nu^{1-\alpha} \frac{c}{\left\| D \right\|^2} \left\| x \right\|^2.$$

This yields the assertion.

## **7.3 The Heat Equation with Delay**

Let $\Omega \subseteq \mathbb{R}^d$ be open. In this section we concentrate on a generalisation of the heat equation discussed in the previous chapter. Although we keep the heat flux balance in the sense that

$$
\partial\_t \theta + \text{div} \, q = \mathcal{Q},
$$

with $q \colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ being the heat flux and $\theta \colon \mathbb{R} \times \Omega \to \mathbb{R}$ being the heat, we shall now modify Fourier's law to the extent that

$$q = -a \operatorname{grad} \theta - b \tau\_{-h} \operatorname{grad} \theta$$

for some $a, b \in L(L_2(\Omega)^d)$ with $\operatorname{Re} a \geqslant c$ for some $c > 0$, and $h > 0$. We shall again assume homogeneous Neumann boundary conditions for $q$. Written in the now standard block operator matrix form, this modified heat equation reads

$$
\left( \partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & (a + b\tau_{-h})^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} Q \\ 0 \end{pmatrix}.
$$

In order to actually justify the existence of the operator $(a + b\tau_{-h})^{-1}$ as a bounded linear operator, we provide the following lemma.

**Lemma 7.3.1** *Let $h > 0$. Then the following statements hold (with $a$, $b$ and $c$ as above).*

(a) *There exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ the operator $a + b\tau_{-h}$ is continuously invertible in $L(L_{2,\nu}(\mathbb{R}; L_2(\Omega)^d))$.*

(b) *For all $0 < c' < c/\|a\|^2$ there exists $\nu_1 \geqslant \nu_0$ such that $\operatorname{Re}\left(a + b\mathrm{e}^{-zh}\right)^{-1} \geqslant c'$ for all $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_1}$.*
*Proof* Note that $a$ is invertible with $\|a^{-1}\| \leqslant \frac{1}{c}$ and $\operatorname{Re} a^{-1} \geqslant \frac{c}{\|a\|^2}$ by Proposition 6.2.3(b).

(a) By Example 5.3.4(c), for all *ν >* 0 we obtain

$$\|b\tau_{-h}\|_{L(L_{2,\nu})} \leqslant \|b\|_{L(L_2(\Omega)^d)} \sup_{t \in \mathbb{R}} \left| \mathrm{e}^{-(\mathrm{i}t+\nu)h} \right| = \|b\|_{L(L_2(\Omega)^d)} \mathrm{e}^{-h\nu}.$$

Thus, we find $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ we obtain $\|b\tau_{-h}a^{-1}\|_{L(L_{2,\nu})} \leqslant \frac{1}{c}\|b\tau_{-h}\|_{L(L_{2,\nu})} < 1$. Thus,

$$a + b\tau_{-h} = \left(1 + b\tau_{-h}a^{-1}\right)a$$

is continuously invertible by a Neumann series argument.

(b) Let $0 < c' < c/\|a\|^2$, and set $d(z) := -b\mathrm{e}^{-zh}a^{-1}$. Moreover, we choose $\nu_1 \geqslant \nu_0$ such that $\|d(z)\|_{L(L_2(\Omega)^d)} \leqslant \min\{\tfrac{1}{2}, \varepsilon\}$ for all $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_1}$, where $0 < \varepsilon \leqslant \tfrac{c}{2}\left(\tfrac{c}{\|a\|^2} - c'\right)$. For $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_1}$ we compute

$$\begin{split} \mathrm{Re}\left(a+b\mathrm{e}^{-z\hbar}\right)^{-1} &= \mathrm{Re}\,a^{-1}\left(1-d(z)\right)^{-1} = \mathrm{Re}\left(a^{-1}\sum\_{k=0}^{\infty}d(z)^{k}\right) \\ &= \mathrm{Re}\left(a^{-1}+\sum\_{k=1}^{\infty}a^{-1}d(z)^{k}\right) \\ &\geqslant \frac{c}{\|\|a\|\|^{2}}-\left\|\sum\_{k=1}^{\infty}a^{-1}d(z)^{k}\right\| \geqslant \frac{c}{\|\|a\|\|^{2}}-\frac{1}{c}\sum\_{k=1}^{\infty}\|d(z)\|^{k} \\ &= \frac{c}{\|\|a\|\|^{2}}-\frac{1}{c}\frac{\|d(z)\|}{1-\|d(z)\|}\geqslant \frac{c}{\|a\|^{2}}-\frac{1}{c}2\varepsilon\geqslant c'. \end{split}$$

With this lemma we are in the position to provide the well-posedness for the modified heat equation.

**Theorem 7.3.2** *Let $H = L_2(\Omega) \times L_2(\Omega)^d$. There exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ the operator*

$$
\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & (a + b\tau_{-h})^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix},
$$

*is densely defined and closable with continuously invertible closure on <sup>L</sup>*2*,ν(*R; *H ). The inverse of the closure is causal and eventually independent of ν.*

*Proof* The proof rests on Theorem 6.2.1 and Lemma 7.3.1.

## **7.4 Dual Phase Lag Heat Conduction**

The last example is concerned with a different modification of Fourier's law. The heat flux balance

$$
\partial\_t \theta + \text{div}\, q = \mathcal{Q} \tag{7.5}
$$

is accompanied by the modified Fourier's law

$$(1 + s\_q \partial\_t + \frac{1}{2} s\_q^2 \partial\_t^2) q = -(1 + s\_\theta \partial\_t) \operatorname{grad} \theta,\tag{7.6}$$

where $s_q \in \mathbb{R}$, $s_\theta > 0$ are given numbers, which are called 'phases'.

*Remark 7.4.1* The modified Fourier's law in (7.6) is an attempt to resolve the problem of infinite propagation speed which stems from a truncated Taylor series expansion of a model given by

$$
\tau_{s_q} q = -\tau_{s_\theta} \operatorname{grad} \theta.
$$

Note that it can be shown that such a model would even be ill-posed, see [34].

Let us turn back to the system (7.5) and (7.6). Notice that, since $s_\theta > 0$ and the time derivative has strictly positive real part in our functional analytic setting, the operator $(1 + s_\theta \partial_{t,\nu})$ is continuously invertible for $\nu > 0$. Thus, we obtain

$$
\partial_{t,\nu} \left(\partial_{t,\nu}^{-1} + s_q + \tfrac{1}{2} s_q^2 \partial_{t,\nu}\right) (1 + s_\theta \partial_{t,\nu})^{-1} q = -\operatorname{grad} \theta.
$$

The block operator matrix formulation of the dual phase lag heat conduction model is thus

$$
\left( \partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & \left(\partial_{t,\nu}^{-1} + s_q + \frac{1}{2}s_q^2 \partial_{t,\nu}\right)(1 + s_\theta \partial_{t,\nu})^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} Q \\ 0 \end{pmatrix}.
$$

**Theorem 7.4.2** *Let $H = L_2(\Omega) \times L_2(\Omega)^d$. Assume $s_q \in \mathbb{R} \setminus \{0\}$, $s_\theta > 0$. Then there exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ the operator*

$$\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & \left( \partial_{t,\nu}^{-1} + s_q + \frac{1}{2} s_q^2 \partial_{t,\nu} \right) (1 + s_\theta \partial_{t,\nu})^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}$$

*is densely defined and closable with continuously invertible closure on <sup>L</sup>*2*,ν(*R; *H ). The inverse of the closure is causal and eventually independent of ν.*

The proof of Theorem 7.4.2 is again based on Theorem 6.2.1. Thus, we shall only record the decisive observation in the next result. For this, we define

$$M(z) := \frac{z^{-1} + s_q + \frac{1}{2}s_q^2 z}{1 + s_\theta z} \in \mathbb{C} \quad \left(z \in \mathbb{C} \setminus \left\{0, -\tfrac{1}{s_\theta}\right\}\right).$$

**Lemma 7.4.3** *Let $s_q \in \mathbb{R} \setminus \{0\}$, $s_\theta > 0$. Then there exist $\nu_0 \in \mathbb{R}$ and $c > 0$ such that for all $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_0}$ we have*

$$\operatorname{Re} z M(z) \geqslant c.$$

*Proof* We put $\sigma := \frac{s_q}{s_\theta}$. Let $z \in \mathbb{C} \setminus \{0, -\frac{1}{s_\theta}\}$. We compute

$$zM(z) = \frac{1 + s\_q z + \frac{1}{2} s\_q^2 z^2}{1 + s\_\theta z} = \frac{1}{2} s\_q z \sigma + \sigma \left( 1 - \frac{1}{2} \sigma \right) + \frac{1 - \sigma \left( 1 - \frac{1}{2} \sigma \right)}{1 + s\_\theta z}$$


and therefore

$$\operatorname{Re} z \mathcal{M}(z) = \frac{1}{2} s\_q \sigma \operatorname{Re} z + \sigma \left( 1 - \frac{1}{2} \sigma \right) + \frac{\left( 1 - \sigma \left( 1 - \frac{1}{2} \sigma \right) \right) \left( 1 + s\_\theta \operatorname{Re} z \right)}{\left| 1 + s\_\theta z \right|^2}.$$

By assumption

$$0 < \frac{s\_q^2}{s\_\theta} = s\_q \sigma,$$

and since

$$\frac{\left(1-\sigma\left(1-\frac{1}{2}\sigma\right)\right)\left(1+s\_{\theta}\operatorname{Re}z\right)}{\left|1+s\_{\theta}z\right|^{2}} \to 0$$

as Re *z* → ∞, we obtain

$$\operatorname{Re} zM(z) \ge \frac{1}{2} \mathfrak{s}\_q \sigma \operatorname{Re} z - \delta$$

for some $\delta > 0$ and all $z \in \mathbb{C}$ with $\operatorname{Re} z$ sufficiently large; since $\frac{1}{2} s_q \sigma > 0$, the assertion follows.
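Both the partial fraction decomposition of $zM(z)$ and the resulting positivity of $\operatorname{Re} zM(z)$ can be verified numerically for sample phases. The values of $s_q$, $s_\theta$, the lower bound and the sample points below are illustrative choices only:

```python
# numerical sketch of Lemma 7.4.3 for illustrative sample phases
s_q, s_t = -0.7, 1.3          # s_q in R \ {0} (sign may be negative), s_theta > 0
sigma = s_q / s_t

def zM(z):
    # z M(z) = (1 + s_q z + (1/2) s_q^2 z^2) / (1 + s_theta z)
    return (1 + s_q * z + 0.5 * s_q**2 * z**2) / (1 + s_t * z)

def zM_split(z):
    # the partial-fraction decomposition used in the proof
    kappa = sigma * (1 - 0.5 * sigma)
    return 0.5 * s_q * sigma * z + kappa + (1 - kappa) / (1 + s_t * z)

for re in (200.0, 400.0, 2000.0):
    for im in (-1e4, -10.0, 0.0, 10.0, 1e4):
        z = complex(re, im)
        assert abs(zM(z) - zM_split(z)) < 1e-8 * abs(zM(z))
        # since s_q * sigma = s_q^2 / s_theta > 0, Re zM(z) grows linearly in Re z
        assert zM(z).real >= 1.0
```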

## **7.5 Comments**

The equations of poro-elasticity have been proposed in [69] and were mathematically studied in [63, 103].

Equations of fractional elasticity are discussed in [20, 73, 87, 134]. The well-posedness conditions stated here and in Exercise 7.2 can be generalised, as outlined in [87], to the case where both $C$ and $D$ are non-negative, selfadjoint operators such that $C$ and $D$ satisfy the conditions imposed on $N_1$ and $N_0$ in Proposition 7.1.4. We refrained from presenting this argument here, as it seemed too technical for the time being. Note however that the proof is neither fundamentally different nor considerably less elementary.

The heat equation with delay has also been studied in [55] with an entirely different strategy; the dual phase lag models have been dealt with in [68, 127].

Other ideas to rectify infinite propagation speed of the heat equation can be found in [3], where nonlinear models for heat conduction are being discussed.

The visco-elastic equations discussed in Exercise 7.6 are studied in [119] with convolution operators more general than the ones below; see also [19, 27, 95, 116].

## **Exercises**

#### **Exercise 7.1 (Solutions to the Equations of Poro-Elasticity)**

Prove Proposition 7.1.3 and work out the details of the proof of Theorem 7.1.2. Use these results to establish well-posedness of the system
$$\begin{aligned} \partial_{t,\nu}\rho\,\partial_{t,\nu}u - \operatorname{grad}\lambda\operatorname{div}\partial_{t,\nu}u - \operatorname{Div} C\operatorname{Grad}_0 u + \operatorname{grad}\alpha^* p &= f, \\ \partial_{t,\nu}c_0 p + \alpha\operatorname{div}\partial_{t,\nu}u - \operatorname{div}_0 k\operatorname{grad} p &= g. \end{aligned}$$

**Exercise 7.2** Let $\Omega \subseteq \mathbb{R}^d$ be open, $C, D \in L(L_2(\Omega)^{d\times d}_{\mathrm{sym}})$, $D = D^* \geqslant c$ for some $c > 0$ and $\alpha \in [\frac{1}{2}, 1]$. Show that there exists $\nu_0 > 0$ such that for all $\nu \geqslant \nu_0$ the system

$$\begin{aligned} \partial_{t,\nu} \rho v - \operatorname{Div} T &= f, \\ T &= \left( C + D \partial_{t,\nu}^{\alpha} \right) \operatorname{Grad}_0 u, \end{aligned}$$

where $v = \partial_{t,\nu} u$, admits a unique solution $(v, T) \in L_{2,\nu}(\mathbb{R}; L_2(\Omega)^d \times L_2(\Omega)^{d\times d}_{\mathrm{sym}})$ for all $f \in H^1_\nu(\mathbb{R}; L_2(\Omega)^d)$.

The following exercises are devoted to showing the well-posedness of certain equations in visco-elasticity, where the 'viscous part' is modelled by convolution with certain integral kernels. The proof of the positive definiteness property requires some preliminary results. We assume the reader to be equipped with the basics from the theory of functions of one complex variable.

For $U \subseteq \mathbb{C}$ open we identify $U$ with $\{(x, y) \in \mathbb{R}^2 \,;\, x + \mathrm{i}y \in U\}$, and for $u \colon U \to \mathbb{C}$ holomorphic we define $f_{\operatorname{Re} u} \colon U \to \mathbb{R}$ by $f_{\operatorname{Re} u}(x, y) := \operatorname{Re} u(x + \mathrm{i}y)$ for $(x, y) \in U$. We put

$$H_{\text{Re}}(U) := \{ f_{\operatorname{Re} u} \,;\, u \colon U \to \mathbb{C} \text{ holomorphic} \}.$$

**Exercise 7.3** Let $U \subseteq \mathbb{C}$ be open.

(a) Let $f \in H_{\text{Re}}(U)$. Show that $f$ satisfies the *mean value property*; that is, for all $(x, y) \in U$ and $r > 0$ with $\overline{B((x, y), r)} \subseteq U$ we have

$$f(x, y) = \frac{1}{2\pi} \int_0^{2\pi} f(x + r\cos\theta, y + r\sin\theta) \,\mathrm{d}\theta.$$


(b) Let $U := \mathbb{C}_{\operatorname{Im} > 0}$ and $f \in H_{\text{Re}}(U) \cap C(\mathbb{R} \times \mathbb{R}_{\geqslant 0})$. Moreover, assume that $f(x, 0) = 0$ for each $x \in \mathbb{R}$ and $f(x, y) \to 0$ as $|(x, y)| \to \infty$. Show that $f = 0$ on $\mathbb{R} \times \mathbb{R}_{\geqslant 0}$.
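For readers who want to experiment, the mean value property of part (a) can be observed numerically for a concrete element of $H_{\text{Re}}(\mathbb{C})$. The sample function, centre and radius below are arbitrary choices; the equidistant sum is exact here (up to rounding) because the integrand is a trigonometric polynomial:

```python
import math

# illustration of the mean value property for f = Re(z^2) = x^2 - y^2,
# which lies in H_Re(C); centre, radius and N are arbitrary choices
def f(x, y):
    return x * x - y * y

x0, y0, r, N = 1.3, -0.4, 2.0, 1000
avg = sum(
    f(x0 + r * math.cos(2 * math.pi * k / N), y0 + r * math.sin(2 * math.pi * k / N))
    for k in range(N)
) / N
assert abs(avg - f(x0, y0)) < 1e-9
```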

**Exercise 7.4** In this exercise we show a version of *Poisson's formula*. Let $U := \mathbb{C}_{\operatorname{Im} > 0}$ and $f \in H_{\text{Re}}(U) \cap C(\mathbb{R} \times \mathbb{R}_{\geqslant 0})$ with $f(x, y) \to 0$ as $|(x, y)| \to \infty$. Show that


$$f(x, y) = \frac{1}{\pi} \int_{\mathbb{R}} \frac{y}{(x - x')^2 + y^2} f(x', 0) \, \mathrm{d}x' \quad \left((x, y) \in \mathbb{R} \times \mathbb{R}_{>0}\right).$$

*Hint*: Apply Exercise 7.3(b).

**Exercise 7.5** Let $\nu_0 \in \mathbb{R}$ and $k \in L_{1,\nu_0}(\mathbb{R}; \mathbb{R})$ with $\operatorname{spt} k \subseteq \mathbb{R}_{\geqslant 0}$.

(a) Show that for all $(x, \nu) \in \mathbb{R} \times \mathbb{R}_{>\nu_0}$ we have

$$\operatorname{Im}(\mathcal{L}k)(\mathbf{ix}+\boldsymbol{\nu}) = \frac{1}{\pi} \int\_{\mathbb{R}} \frac{\boldsymbol{\nu}-\boldsymbol{\nu}\_{0}}{(\mathbf{x}-\boldsymbol{x}')^{2}+(\boldsymbol{\nu}-\boldsymbol{\nu}\_{0})^{2}} \operatorname{Im}(\mathcal{L}k)(\mathbf{ix}'+\boldsymbol{\nu}\_{0}) \,\mathrm{d}\mathbf{x}'.$$

*Hint*: Approximate $k$ by functions in $C_{\mathrm{c}}^\infty(\mathbb{R}_{\geqslant 0}; \mathbb{R})$ and use Poisson's formula (see Exercise 7.4).

(b) Assume there exists $d \geqslant 0$ such that for all $x \in \mathbb{R}$

$$x \operatorname{Im}(\mathcal{L}k)(\mathrm{i}x + \nu_0) \leqslant d.$$

Show that for all *ν <sup>ν</sup>*<sup>0</sup> and *<sup>x</sup>* <sup>∈</sup> <sup>R</sup> we have

$$x \operatorname{Im}(\mathcal{L}k)(\mathrm{i}x + \nu) \leqslant 4d.$$

*Hint*: Use the formula in (a) and split the integral into the positive and negative parts of $\mathbb{R}$; use the symmetry of $\mathcal{L}k$ under conjugation due to the realness of $k$.

**Exercise 7.6** Let $\Omega \subseteq \mathbb{R}^d$ be open, $\nu_0 \in \mathbb{R}$ and $k \in L_{1,\nu_0}(\mathbb{R}; \mathbb{R})$ with $\operatorname{spt} k \subseteq \mathbb{R}_{\geqslant 0}$. Assume there exists $d \geqslant 0$ such that

$$
\propto \operatorname{Im}(\mathcal{L}k)(\mathrm{i}x + \nu\_0) \lesssim d \quad (x \in \mathbb{R}).
$$

Show that there exists $\nu_1>\nu_0$ such that for all $\nu>\nu_1$ the operator

$$\partial_{t,\nu}\begin{pmatrix} 1 & 0 \\ 0 & (1-k\ast)^{-1}\end{pmatrix} + \begin{pmatrix} 0 & \operatorname{Div} \\ \operatorname{Grad} & 0\end{pmatrix}$$

is well-defined, densely defined and closable in $L_{2,\nu}(\mathbb{R};H)$ with $H = L_2(\Omega)^{d}\times L_2(\Omega)^{d\times d}_{\mathrm{sym}}$. Further, show that its closure is continuously invertible, and that the corresponding inverse is causal and eventually independent of $\nu$.

**Exercise 7.7** Let $\nu_0\in\mathbb{R}$ and $k\in L_{1,\nu_0}(\mathbb{R};\mathbb{R})$ with $\operatorname{spt}k\subseteq\mathbb{R}_{\geqslant 0}$.

(a) Assume that $k$ is absolutely continuous with $k'\in L_{1,\nu_0}(\mathbb{R};\mathbb{R})$. Show that there exist $\nu_1>\nu_0$ and $d\geqslant 0$ with

$$x\operatorname{Im}(\mathcal{L}k)(\mathrm{i}x+\nu_1)\leqslant d\quad (x\in\mathbb{R}).$$

(b) Assume that $k(t)\geqslant 0$ for all $t\in\mathbb{R}$ and that $k(t)\leqslant k(s)$ whenever $s\leqslant t$. Show that there exists $\nu_1>\nu_0$ with

$$x\operatorname{Im}(\mathcal{L}k)(\mathrm{i}x+\nu_1)\leqslant 0\quad (x\in\mathbb{R}).$$

*Hint*: For part (b) use the explicit formula for $\operatorname{Im}(\mathcal{L}k)$ as an integral and the periodicity of $\sin$.

*Remark*: The condition in (a) is a standard assumption for convolution kernels in the framework of visco-elastic equations; the condition in (b) is from [95].
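The sign condition in (b) can be observed numerically. A small sketch (assuming NumPy) for the nonnegative, nonincreasing kernel $k(t) = \mathrm{e}^{-t}\mathbb{1}_{\mathbb{R}_{\geqslant 0}}(t)$, for which $(\mathcal{L}k)(z) = (2\pi)^{-1/2}/(z+1)$ and hence $x\operatorname{Im}(\mathcal{L}k)(\mathrm{i}x+\nu) = -x^2/\big(\sqrt{2\pi}\,((1+\nu)^2+x^2)\big)\leqslant 0$:

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", np.trapz)  # NumPy 2.x / 1.x compatibility

def laplace(k, t, z):
    """Numerical Laplace transform (Lk)(z) = (2*pi)**(-1/2) * int k(t) e^{-zt} dt."""
    return trapezoid(k * np.exp(-z * t), t) / np.sqrt(2*np.pi)

t = np.linspace(0.0, 60.0, 600_001)
k = np.exp(-t)                    # nonnegative, nonincreasing, spt k in R_{>=0}
nu = 0.5                          # any nu > nu_0 = 0 works here
vals = [x * laplace(k, t, 1j*x + nu).imag for x in (0.25, 1.0, 4.0, 20.0)]
print(vals)                       # all entries are <= 0, as claimed in (b)
assert all(v <= 1e-10 for v in vals)
```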


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 8 Causality and a Theorem of Paley and Wiener**

In this chapter we turn our focus back to causal operators. In Chap. 5 we found that material laws provide a class of causal and autonomous bounded operators. Here we present another proof of this fact, which rests on a result characterising those functions in $L_2(\mathbb{R};H)$ with support contained in the nonnegative reals: the celebrated Theorem of Paley and Wiener. With the help of this theorem, which is interesting in its own right, the proof of causality for material laws becomes very easy. At first glance, holomorphy of a material law seems a rather strong assumption. In the second part of this chapter, however, we shall see that in designing autonomous and causal solution operators there is no way of circumventing holomorphy.

In the following, let $H$ be a Hilbert space, and we consider $L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$ as the subspace of those functions in $L_{2,\nu}(\mathbb{R};H)$ vanishing on $(-\infty,0)$.

## **8.1 A Theorem of Paley and Wiener**

We start with the following lemma, for which we need the notion of locally integrable functions. We define

$$L_{1,\mathrm{loc}}(\mathbb{R};H) := \{ f \;;\; \forall K\subseteq\mathbb{R}\ \text{compact}:\ \mathbb{1}_K f\in L_1(\mathbb{R};H)\} = \{ f \;;\; \forall\varphi\in C_{\mathrm{c}}^{\infty}(\mathbb{R}):\ \varphi f\in L_1(\mathbb{R};H)\}.$$

**Lemma 8.1.1** *Let $f\in L_{1,\mathrm{loc}}(\mathbb{R};H)$. Then $f\in L_2(\mathbb{R}_{\geqslant 0};H)$ if and only if $f\in\bigcap_{\nu>0}L_{2,\nu}(\mathbb{R};H)$ with $\sup_{\nu>0}\|f\|_{L_{2,\nu}(\mathbb{R};H)}<\infty$. In the latter case we have that*

$$\|f\|\_{L\_2(\mathbb{R}\_{\geqslant 0};H)} = \lim\_{\nu \to 0+} \|f\|\_{L\_{2,\nu}(\mathbb{R};H)} = \sup\_{\nu > 0} \|f\|\_{L\_{2,\nu}(\mathbb{R};H)}.$$

© The Author(s) 2022 C. Seifert et al., *Evolutionary Equations*, Operator Theory: Advances and Applications 287, https://doi.org/10.1007/978-3-030-89397-2\_8


*Proof* Let $f\in L_2(\mathbb{R}_{\geqslant 0};H)$ and $\nu>0$. Then we estimate

$$\int_{\mathbb{R}}\|f(t)\|_{H}^{2}\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t = \int_{\mathbb{R}_{\geqslant 0}}\|f(t)\|_{H}^{2}\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t \leqslant \int_{\mathbb{R}_{\geqslant 0}}\|f(t)\|_{H}^{2}\,\mathrm{d}t = \|f\|_{L_2(\mathbb{R}_{\geqslant 0};H)}^{2},$$

which proves that $f\in L_{2,\nu}(\mathbb{R};H)$ with $\|f\|_{L_{2,\nu}(\mathbb{R};H)}\leqslant\|f\|_{L_2(\mathbb{R}_{\geqslant 0};H)}$ for each $\nu>0$. Moreover, $\|f\|_{L_{2,\nu}(\mathbb{R};H)}\to\|f\|_{L_2(\mathbb{R}_{\geqslant 0};H)}$ as $\nu\to 0$ by monotone convergence, and since clearly $\|f\|_{L_{2,\nu}(\mathbb{R};H)}\leqslant\|f\|_{L_{2,\mu}(\mathbb{R};H)}$ for $0<\mu\leqslant\nu$, we obtain

$$\|f\|\_{L\_2(\mathbb{R}\_{\geqslant 0};H)} = \lim\_{\nu \to 0+} \|f\|\_{L\_{2,\nu}(\mathbb{R};H)} = \sup\_{\nu > 0} \|f\|\_{L\_{2,\nu}(\mathbb{R};H)}.$$

Assume now that $f\in\bigcap_{\nu>0}L_{2,\nu}(\mathbb{R};H)$ with $C := \sup_{\nu>0}\|f\|_{L_{2,\nu}(\mathbb{R};H)}<\infty$. This inequality yields

$$\sup_{\nu\in(0,\infty)}\int_{(-\infty,0)}\|f(t)\|^{2}\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t\leqslant C^{2}.$$

Hence, the monotone convergence theorem yields that $g(t) := \lim_{\nu\to\infty}\|f(t)\|^{2}\,\mathrm{e}^{-2\nu t}$ for $t\in(-\infty,0)$ defines a function $g\in L_1((-\infty,0))$. Thus, $[g=\infty]$ is a set of measure zero, and hence $[f=0]\cap(-\infty,0) = (-\infty,0)\setminus[g=\infty]$ has full measure in $(-\infty,0)$, implying that $\operatorname{spt}f\subseteq\mathbb{R}_{\geqslant 0}$.

Finally, from

$$\sup_{\nu\in(0,\infty)}\int_{(0,\infty)}\|f(t)\|^{2}\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t\leqslant C^{2}$$

we infer, again by the monotone convergence theorem, that $t\mapsto\lim_{\nu\to 0}\|f(t)\|^{2}\,\mathrm{e}^{-2\nu t} = \|f(t)\|^{2}$ defines a function in $L_1((0,\infty))$, showing the remaining assertion.
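For a concrete instance of the limit in Lemma 8.1.1 (our illustration, assuming NumPy), take $f = \mathbb{1}_{[0,1]}$; then $\|f\|^{2}_{L_{2,\nu}} = (1-\mathrm{e}^{-2\nu})/(2\nu)$, which increases towards $\|f\|^{2}_{L_2(\mathbb{R}_{\geqslant 0})} = 1$ as $\nu\to 0+$:

```python
import numpy as np

# For f = 1_{[0,1]} one has ||f||_{L_{2,nu}}^2 = int_0^1 e^{-2 nu t} dt
#                                             = (1 - e^{-2 nu}) / (2 nu).
def norm_sq(nu):
    return (1.0 - np.exp(-2.0 * nu)) / (2.0 * nu)

nus = np.array([4.0, 2.0, 1.0, 0.5, 0.1, 0.01, 1e-4])   # decreasing nu
vals = norm_sq(nus)
print(vals)                        # increases towards ||f||^2_{L_2(R>=0)} = 1
assert np.all(np.diff(vals) > 0)   # monotone along decreasing nu
assert abs(vals[-1] - 1.0) < 1e-3
```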

For the proof of the Paley–Wiener theorem we need a suitable space of holomorphic functions on the right half-plane, the so-called *Hardy space* $\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)$, which we introduce in the following.

**Definition** For $\nu\in\mathbb{R}$ we define the *Hardy space*

$$\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H) := \Big\{ g:\mathbb{C}_{\operatorname{Re}>\nu}\to H \;;\; g\ \text{holomorphic},\ \sup_{\rho>\nu}\int_{\mathbb{R}}\|g(\mathrm{i}t+\rho)\|_{H}^{2}\,\mathrm{d}t<\infty\Big\}$$

and equip it with the norm $\|\cdot\|_{\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)}$ defined by

$$\|g\|_{\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)} := \sup_{\rho>\nu}\Big(\int_{\mathbb{R}}\|g(\mathrm{i}t+\rho)\|_{H}^{2}\,\mathrm{d}t\Big)^{\frac{1}{2}}.$$

We motivate the Theorem of Paley–Wiener first. For this, let $f\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$ and define its *Laplace transform* as

$$\mathbb{C}_{\operatorname{Re}>\nu}\ni z\mapsto \mathcal{L}f(z) := \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}f(t)\,\mathrm{e}^{-zt}\,\mathrm{d}t. \tag{8.1}$$

Note that $\mathcal{L}f(z) = \mathcal{L}_{\operatorname{Re}z}f(\operatorname{Im}z)$ for all $z\in\mathbb{C}_{\operatorname{Re}>\nu}$ due to the support constraint on $f$. Moreover, it is not difficult to see that the integral on the right-hand side of (8.1) exists, as $t\mapsto\mathrm{e}^{-\rho t}f(t)\in L_1(\mathbb{R}_{\geqslant 0};H)\cap L_2(\mathbb{R}_{\geqslant 0};H)$ for all $\rho>\nu$. Hence, $\mathcal{L}f:\mathbb{C}_{\operatorname{Re}>\nu}\to H$ is holomorphic (cf. Exercise 5.6). Moreover, by Lemma 8.1.1

$$\begin{aligned} \sup_{\rho>\nu}\|\mathcal{L}f(\mathrm{i}\cdot+\rho)\|_{L_2(\mathbb{R};H)} &= \sup_{\rho>\nu}\|\mathcal{L}_{\rho}f\|_{L_2(\mathbb{R};H)} = \sup_{\rho>\nu}\|f\|_{L_{2,\rho}(\mathbb{R};H)}\\ &= \sup_{\rho>0}\|\mathrm{e}^{-\nu\mathrm{m}}f\|_{L_{2,\rho}(\mathbb{R};H)}\\ &= \|\mathrm{e}^{-\nu\mathrm{m}}f\|_{L_2(\mathbb{R};H)} = \|f\|_{L_{2,\nu}(\mathbb{R};H)}, \end{aligned}$$

which proves that

$$\begin{aligned} \mathcal{L} \colon L\_{2, \boldsymbol{\nu}}(\mathbb{R}\_{\geqslant 0}; H) &\to \mathcal{H}\_2(\mathbb{C}\_{\mathbf{Re} > \boldsymbol{\nu}}; H) \\ f &\mapsto \left( z \mapsto (\mathcal{L}\_{\mathbf{Re} \, z} f) \, (\operatorname{Im} z) \right) \end{aligned}$$

is well-defined and isometric. It turns out that *L* is actually surjective, see Corollary 8.1.3 below. The surjectivity statement is contained in the following Theorem of Paley–Wiener, [78]. We mainly follow the proof given in [101, 19.2 Theorem].
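The isometry can be tested numerically. A sketch (our example, assuming NumPy) for $f(t) = \mathbb{1}_{[0,\infty)}(t)\,\mathrm{e}^{-t}$, for which $\mathcal{L}f(z) = (2\pi)^{-1/2}/(z+1)$ and both sides equal $1/(2(1+\rho))$:

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", np.trapz)  # NumPy 2.x / 1.x compatibility

# f(t) = 1_{[0,inf)}(t) e^{-t} gives Lf(z) = (2*pi)**(-1/2) / (z + 1), and
# ||Lf(i. + rho)||^2_{L_2} should equal ||f||^2_{L_{2,rho}} = 1/(2(1+rho)).
t = np.linspace(-2000.0, 2000.0, 2_000_001)
for rho in (0.0, 0.5, 2.0):
    lhs = trapezoid(np.abs(1.0/np.sqrt(2*np.pi)/(1j*t + rho + 1.0))**2, t)
    rhs = 1.0 / (2.0 * (1.0 + rho))
    print(rho, lhs, rhs)
    assert abs(lhs - rhs) < 1e-3
```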

**Theorem 8.1.2 (Paley–Wiener)** *Let $g\in\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>0};H)$. Then there exists an $f\in L_2(\mathbb{R}_{\geqslant 0};H)$ such that*

$$
\mathcal{L}\_{\boldsymbol{\nu}} f = \operatorname{g}(\mathbf{i} \cdot + \boldsymbol{\nu}) \quad (\boldsymbol{\nu} > 0) .
$$

*Proof* For $\nu>0$ we set $g_{\nu} := g(\mathrm{i}\cdot+\nu)\in L_2(\mathbb{R};H)$ and $f_{\nu} := \mathcal{F}^{*}g_{\nu}\in L_2(\mathbb{R};H)$. Moreover, we set $f := \mathrm{e}^{(\cdot)}f_{1}$. We first prove that $f\in\bigcap_{\nu>0}L_{2,\nu}(\mathbb{R};H)$ with $\sup_{\nu>0}\|f\|_{L_{2,\nu}(\mathbb{R};H)}<\infty$. For doing so, let $a>0$, $\rho>0$ and $x\in\mathbb{R}$. Applying

#### **Fig. 8.1** Curve *γ*

Cauchy's integral theorem to the function $z\mapsto\mathrm{e}^{zx}g(z)$ and the curve $\gamma$, as indicated in Fig. 8.1, we obtain

$$\begin{aligned} 0 &= \mathrm{i}\int_{-a}^{a}\mathrm{e}^{(\mathrm{i}t+1)x}g(\mathrm{i}t+1)\,\mathrm{d}t - \int_{\rho}^{1}\mathrm{e}^{(\mathrm{i}a+\kappa)x}g(\mathrm{i}a+\kappa)\,\mathrm{d}\kappa\\ &\quad - \mathrm{i}\int_{-a}^{a}\mathrm{e}^{(\mathrm{i}t+\rho)x}g(\mathrm{i}t+\rho)\,\mathrm{d}t + \int_{\rho}^{1}\mathrm{e}^{(-\mathrm{i}a+\kappa)x}g(-\mathrm{i}a+\kappa)\,\mathrm{d}\kappa. \end{aligned} \tag{8.2}$$

Moreover, since

$$\begin{aligned} \int_{\mathbb{R}}\bigg\|\int_{\rho}^{1}\mathrm{e}^{(\pm\mathrm{i}a+\kappa)x}g(\pm\mathrm{i}a+\kappa)\,\mathrm{d}\kappa\bigg\|_{H}^{2}\,\mathrm{d}a &\leqslant \int_{\mathbb{R}}\bigg|\int_{\rho}^{1}\big|\mathrm{e}^{(\pm\mathrm{i}a+\kappa)x}\big|^{2}\,\mathrm{d}\kappa\bigg|\,\bigg|\int_{\rho}^{1}\|g(\pm\mathrm{i}a+\kappa)\|_{H}^{2}\,\mathrm{d}\kappa\bigg|\,\mathrm{d}a\\ &\leqslant \bigg|\int_{\rho}^{1}\mathrm{e}^{2\kappa x}\,\mathrm{d}\kappa\bigg|\,\bigg|\int_{\rho}^{1}\int_{\mathbb{R}}\|g(\pm\mathrm{i}a+\kappa)\|_{H}^{2}\,\mathrm{d}a\,\mathrm{d}\kappa\bigg|\\ &\leqslant \bigg|\int_{\rho}^{1}\mathrm{e}^{2\kappa x}\,\mathrm{d}\kappa\bigg|\,|1-\rho|\,\|g\|_{\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>0};H)}^{2}<\infty, \end{aligned}$$

we infer that $a\mapsto\int_{\rho}^{1}\mathrm{e}^{(\pm\mathrm{i}a+\kappa)x}g(\pm\mathrm{i}a+\kappa)\,\mathrm{d}\kappa\in L_2(\mathbb{R};H)$ and thus, we find a sequence $(a_n)_{n\in\mathbb{N}}$ in $\mathbb{R}_{>0}$ such that $a_n\to\infty$ and

$$\int_{\rho}^{1}\mathrm{e}^{(\pm\mathrm{i}a_n+\kappa)x}g(\pm\mathrm{i}a_n+\kappa)\,\mathrm{d}\kappa\to 0$$

as *n* → ∞. Hence, using (8.2) with *a* replaced by *an* and letting *n* tend to infinity, we derive that

$$\int_{-a_n}^{a_n}\mathrm{e}^{(\mathrm{i}t+1)x}g(\mathrm{i}t+1)\,\mathrm{d}t - \int_{-a_n}^{a_n}\mathrm{e}^{(\mathrm{i}t+\rho)x}g(\mathrm{i}t+\rho)\,\mathrm{d}t\to 0\quad(n\to\infty).$$

Noting that for each *μ >* 0 we have

$$\int_{-a_n}^{a_n}\mathrm{e}^{(\mathrm{i}t+\mu)x}g(\mathrm{i}t+\mu)\,\mathrm{d}t = \sqrt{2\pi}\,\mathrm{e}^{\mu x}\,\mathcal{F}^{*}\big(\mathbb{1}_{[-a_n,a_n]}g_{\mu}\big)(x)\quad(x\in\mathbb{R})$$

and that $\mathbb{1}_{[-a_n,a_n]}g_{\mu}\to g_{\mu}$ in $L_2(\mathbb{R};H)$ as $n\to\infty$, we may choose a subsequence (again denoted by $(a_n)_n$) such that

$$\begin{aligned} 0 &= \lim_{n\to\infty}\bigg(\int_{-a_n}^{a_n}\mathrm{e}^{(\mathrm{i}t+1)x}g(\mathrm{i}t+1)\,\mathrm{d}t - \int_{-a_n}^{a_n}\mathrm{e}^{(\mathrm{i}t+\rho)x}g(\mathrm{i}t+\rho)\,\mathrm{d}t\bigg)\\ &= \lim_{n\to\infty}\Big(\sqrt{2\pi}\,\mathrm{e}^{x}\mathcal{F}^{*}\big(\mathbb{1}_{[-a_n,a_n]}g_{1}\big)(x) - \sqrt{2\pi}\,\mathrm{e}^{\rho x}\mathcal{F}^{*}\big(\mathbb{1}_{[-a_n,a_n]}g_{\rho}\big)(x)\Big)\\ &= \sqrt{2\pi}\Big(\mathrm{e}^{x}f_{1}(x) - \mathrm{e}^{\rho x}f_{\rho}(x)\Big) \end{aligned}$$

for almost every $x\in\mathbb{R}$. Hence, $f = \mathrm{e}^{(\cdot)}f_{1} = \exp(\rho\mathrm{m})f_{\rho}$ for each $\rho>0$ and thus,

$$\int\_{\mathbb{R}} \|f(t)\|\_{H}^{2} \operatorname{e}^{-2\rho t} \operatorname{d}t = \int\_{\mathbb{R}} \|f\_{\rho}(t)\|\_{H}^{2} \operatorname{d}t < \infty$$

which shows $f\in\bigcap_{\rho>0}L_{2,\rho}(\mathbb{R};H)$ with

$$\sup_{\rho>0}\|f\|_{L_{2,\rho}(\mathbb{R};H)} = \sup_{\rho>0}\|f_{\rho}\|_{L_2(\mathbb{R};H)} = \sup_{\rho>0}\|g_{\rho}\|_{L_2(\mathbb{R};H)} = \|g\|_{\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>0};H)}.$$

Thus, $f\in L_2(\mathbb{R}_{\geqslant 0};H)$ with $\|f\|_{L_2(\mathbb{R}_{\geqslant 0};H)} = \|g\|_{\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>0};H)}$ by Lemma 8.1.1. Moreover,

$$\mathcal{L}_{\nu}f = \mathcal{F}\exp(-\nu\mathrm{m})f = \mathcal{F}\exp(-\nu\mathrm{m})\exp(\nu\mathrm{m})f_{\nu} = \mathcal{F}f_{\nu} = g_{\nu} = g(\mathrm{i}\cdot+\nu)$$

for each *ν >* 0, which shows the representation formula for *g*.

Summarising the results of Theorem 8.1.2 and the arguments carried out just before Theorem 8.1.2, we obtain the following statement.

**Corollary 8.1.3** *Let $\nu\in\mathbb{R}$. Then the mapping*

$$\begin{aligned} \mathcal{L} \colon L\_{2,\boldsymbol{\nu}}(\mathbb{R}\_{\geqslant 0}; H) &\to \mathcal{H}\_2(\mathbb{C}\_{\mathbf{Re} > \boldsymbol{\nu}}; H) \\ f &\mapsto \left(z \mapsto (\mathcal{L}\_{\mathbf{Re} \, z} f) \, (\operatorname{Im} z)\right), \end{aligned}$$

*is an isometric isomorphism. In particular, $\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)$ is a Hilbert space.*

*Proof* We have argued already that $\mathcal{L}$ is well-defined and isometric. Thus, we show next that $\mathcal{L}$ is onto. For this, let $g\in\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)$ and define $\widetilde g(z) := g(z+\nu)$ for $z\in\mathbb{C}_{\operatorname{Re}>0}$. Then $\widetilde g\in\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>0};H)$ and thus, Theorem 8.1.2 yields the existence of $\widetilde f\in L_2(\mathbb{R}_{\geqslant 0};H)$ with

$$g(\mathrm{i}\cdot+\rho) = \widetilde g(\mathrm{i}\cdot+\rho-\nu) = \mathcal{L}_{\rho-\nu}\widetilde f = \mathcal{L}_{\rho}\big(\mathrm{e}^{\nu\cdot}\widetilde f\big)\quad(\rho>\nu).$$

Hence, setting $f := \mathrm{e}^{\nu\cdot}\widetilde f\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$, we obtain $\mathcal{L}f = g$.

We can now provide an alternative proof of Theorem 5.3.6 by proving causality with the help of the Theorem of Paley–Wiener.

**Proposition 8.1.4** *Let $M:\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H)$ be a material law. Then for $\nu>s_{\mathrm{b}}(M)$ we have $M(\partial_{t,\nu})\in L(L_{2,\nu}(\mathbb{R};H))$ and $M(\partial_{t,\nu})$ is causal and autonomous (see Exercise 5.7).*

*Proof* Let $\nu>s_{\mathrm{b}}(M)$. Then $M:\mathbb{C}_{\operatorname{Re}\geqslant\nu}\to L(H)$ is bounded and holomorphic on $\mathbb{C}_{\operatorname{Re}>\nu}$. Hence, by unitary equivalence, $M(\partial_{t,\nu})\in L(L_{2,\nu}(\mathbb{R};H))$. Moreover, $M(\partial_{t,\nu})$ is autonomous by Exercise 5.7. Thus, for causality it suffices to check that $\operatorname{spt}M(\partial_{t,\nu})f\subseteq\mathbb{R}_{\geqslant 0}$ whenever $f\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$. So let $f\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$. Then $\mathcal{L}f\in\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H)$ by Corollary 8.1.3, and since $M$ is bounded and holomorphic on $\mathbb{C}_{\operatorname{Re}>\nu}$, we infer also that

$$\big(z\mapsto M(z)(\mathcal{L}f)(z)\big)\in\mathcal{H}_2(\mathbb{C}_{\operatorname{Re}>\nu};H).$$

Again by Corollary 8.1.3 there exists $g\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$ such that

$$
\mathcal{L}g(z) = M(z) \left( \mathcal{L}f \right)(z) \quad (z \in \mathbb{C}\_{\text{Re} > \nu})\,.
$$

Thus, in particular

$$
\mathcal{L}\_{\rho} \mathbf{g} = M(\mathbf{im} + \rho) \mathcal{L}\_{\rho} f \quad (\rho > \nu).
$$

Since $f,g\in L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)$, we infer that $\mathcal{L}_{\rho}g\to\mathcal{L}_{\nu}g$ and $\mathcal{L}_{\rho}f\to\mathcal{L}_{\nu}f$ in $L_2(\mathbb{R};H)$ as $\rho\to\nu$ by dominated convergence. Moreover, $M(\mathrm{im}+\rho)\to M(\mathrm{im}+\nu)$ strongly on $L_2(\mathbb{R};H)$ as $\rho\to\nu$ (cf. Exercise 8.2). Hence, we derive

$$
\mathcal{L}_{\nu}g = M(\mathrm{im}+\nu)\mathcal{L}_{\nu}f,
$$

and thus, $g = M(\partial_{t,\nu})f$, which shows causality.

## **8.2 A Representation Result**

In this section we argue that our solution theory needs holomorphy as a central property of the material law. There are two key properties for rendering $T\in L(L_{2,\nu_0}(\mathbb{R};H))$ a material law operator. The first one is causality (i.e., $\mathbb{1}_{(-\infty,a]}(\mathrm{m})T\mathbb{1}_{(-\infty,a]}(\mathrm{m}) = \mathbb{1}_{(-\infty,a]}(\mathrm{m})T$ for all $a\in\mathbb{R}$) and, secondly, $T$ needs to be autonomous (i.e., $\tau_h T = T\tau_h$ for all $h\in\mathbb{R}$, where $\tau_h f = f(\cdot+h)$). The main theorem of this section reads as follows:

**Theorem 8.2.1** *Let $\nu_0\in\mathbb{R}$ and let $T\in L(L_{2,\nu_0}(\mathbb{R};H))$ be causal and autonomous. Then $T|_{L_{2,\nu_0}\cap L_{2,\nu}}$ has a unique extension $T_{\nu}\in L(L_{2,\nu}(\mathbb{R};H))$ for each $\nu>\nu_0$, and there exists a unique $M:\mathbb{C}_{\operatorname{Re}>\nu_0}\to L(H)$ holomorphic and bounded such that $T_{\nu} = M(\partial_{t,\nu})$ for each $\nu>\nu_0$.*

We consider the following (shifted) variant of Theorem 8.2.1 first.

**Theorem 8.2.2** *Let $T\in L(L_2(\mathbb{R};H))$ be causal and autonomous. Then there exists $M:\mathbb{C}_{\operatorname{Re}>0}\to L(H)$, a material law (i.e., holomorphic and bounded), such that*

$$(\mathcal{L}Tf)(z) = M(z)(\mathcal{L}f)(z)\quad (f\in L_2(\mathbb{R}_{\geqslant 0};H),\ z\in\mathbb{C}_{\operatorname{Re}>0}).$$

*Proof* For $s>0$ and $x\in H$ we define $f_{x,s} := \mathbb{1}_{(0,s)}x$ and compute

$$\mathcal{L}f_{x,s}(z) = \frac{1}{\sqrt{2\pi}}\int_{0}^{s}\mathrm{e}^{-zt}x\,\mathrm{d}t = \frac{1}{\sqrt{2\pi}}\,\frac{1-\mathrm{e}^{-zs}}{z}\,x\quad(z\in\mathbb{C}_{\operatorname{Re}>0}).\tag{8.3}$$

We define $M:\mathbb{C}_{\operatorname{Re}>0}\to L(H)$ via

$$M(z)x := \frac{\sqrt{2\pi}\,z}{1-\mathrm{e}^{-z}}\,\mathcal{L}Tf_{x,1}(z),$$

which is well-defined since $\operatorname{spt}Tf_{x,1}\subseteq[0,\infty)$ (use causality of $T$); $M(z)\in L(H)$, since $T$ is bounded. Also, $M(\cdot)x$ is evidently holomorphic for every $x\in H$ as a product of two holomorphic mappings, and thus by Exercise 5.3, $M$ is holomorphic itself. Next, we show that for all $z\in\mathbb{C}_{\operatorname{Re}>0}$ and $f\in L_2(\mathbb{R}_{\geqslant 0};H)$, we have

$$(\mathcal{L}Tf)(z) = M(z)(\mathcal{L}f)(z).\tag{8.4}$$

By definition of $M$, the equality is true for $f$ replaced by $f_{x,1}$, $x\in H$. Next, observe that $\operatorname{lin}\{\mathbb{1}_{(a,a+1/n)}x \;;\; a\geqslant 0,\ n\in\mathbb{N},\ x\in H\}$ is dense in $L_2(\mathbb{R}_{\geqslant 0};H)$. Hence, for (8.4), it suffices to show

$$\big(\mathcal{L}T\mathbb{1}_{(a,a+1/n)}x\big)(z) = M(z)\big(\mathcal{L}\mathbb{1}_{(a,a+1/n)}x\big)(z)\tag{8.5}$$

for all $a\geqslant 0$, $n\in\mathbb{N}$, $x\in H$, and $z\in\mathbb{C}_{\operatorname{Re}>0}$. Next, using that $T$ is autonomous in the situation of (8.5), we see $T\mathbb{1}_{(a,a+1/n)}x = T\tau_{-a}\mathbb{1}_{(0,1/n)}x = \tau_{-a}T\mathbb{1}_{(0,1/n)}x$ and, by a straightforward computation, $(\mathcal{L}\tau_{-a}f)(z) = \mathrm{e}^{-za}\mathcal{L}f(z)$ for all $f\in L_2(\mathbb{R}_{\geqslant 0};H)$. Thus,

$$\big(\mathcal{L}T\mathbb{1}_{(a,a+1/n)}x\big)(z) = \mathrm{e}^{-za}\big(\mathcal{L}T\mathbb{1}_{(0,1/n)}x\big)(z),$$

which yields that it suffices to show (8.5) for $a = 0$ only, that is, for $f = f_{x,1/n}$. Furthermore, we compute for $n\in\mathbb{N}$ and $z\in\mathbb{C}_{\operatorname{Re}>0}$

$$\begin{aligned}\mathcal{L}Tf_{x,1}(z) &= \sum_{k=0}^{n-1}\big(\mathcal{L}T\mathbb{1}_{(k/n,(k+1)/n)}x\big)(z) = \sum_{k=0}^{n-1}\mathrm{e}^{-zk/n}\big(\mathcal{L}T\mathbb{1}_{(0,1/n)}x\big)(z)\\ &= \frac{1-\mathrm{e}^{-z}}{1-\mathrm{e}^{-z/n}}\big(\mathcal{L}Tf_{x,1/n}\big)(z).\end{aligned}$$

Thus, using (8.3) for $s = 1/n$, we deduce from the definition of $M$,

$$\mathcal{L}Tf_{x,1/n}(z) = \frac{1-\mathrm{e}^{-z/n}}{\sqrt{2\pi}\,z}\,\frac{\sqrt{2\pi}\,z}{1-\mathrm{e}^{-z}}\,\mathcal{L}Tf_{x,1}(z) = \frac{1-\mathrm{e}^{-z/n}}{\sqrt{2\pi}\,z}\,M(z)x = M(z)\mathcal{L}f_{x,1/n}(z).$$

Hence, (8.4) holds for all $f\in L_2(\mathbb{R}_{\geqslant 0};H)$. It remains to show boundedness of $M$. For this, let $z\in\mathbb{C}_{\operatorname{Re}>0}$ and $x\in H$. Set $f := \mathbb{1}_{[0,\infty)}\mathrm{e}^{-z^{*}(\cdot)}x$ as well as $c := 2\operatorname{Re}z\,\sqrt{2\pi}$. Then

$$\mathcal{L}f(z) = \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}\mathrm{e}^{-zt-z^{*}t}x\,\mathrm{d}t = \frac{x}{c}.$$

By virtue of (8.4), we get $\mathcal{L}Tf(z) = M(z)\mathcal{L}f(z)$ and thus $M(z)x = c\,\mathcal{L}Tf(z)$. This leads to

$$\begin{aligned}\|M(z)x\| &\leqslant \frac{c}{\sqrt{2\pi}}\int_{0}^{\infty}\big\|\mathrm{e}^{-zt}\,Tf(t)\big\|\,\mathrm{d}t \leqslant \frac{c}{\sqrt{2\pi}}\,\big\|\mathbb{1}_{[0,\infty)}\mathrm{e}^{-z(\cdot)}\big\|_{L_2(\mathbb{R})}\,\|Tf\|_{L_2(\mathbb{R};H)}\\ &\leqslant \frac{c}{\sqrt{2\pi}}\,\big\|\mathbb{1}_{[0,\infty)}\mathrm{e}^{-z(\cdot)}\big\|_{L_2(\mathbb{R})}^{2}\,\|T\|_{L(L_2(\mathbb{R};H))}\,\|x\|_{H} = \|T\|_{L(L_2(\mathbb{R};H))}\,\|x\|_{H},\end{aligned}$$

where we used that $\|f\|_{L_2(\mathbb{R};H)} = \big\|\mathbb{1}_{[0,\infty)}\mathrm{e}^{-z(\cdot)}\big\|_{L_2(\mathbb{R})}\,\|x\|_{H}$. Thus, $\|M(z)\|\leqslant\|T\|$, which yields boundedness of $M$ and the assertion of the theorem.

We can now prove our main result of this section.

*Proof of Theorem 8.2.1* We just prove the existence of a function *M*. The proof of its uniqueness is left as Exercise 8.3.

We first prove the assertion for $\nu_0 = 0$. So, let $T\in L(L_2(\mathbb{R};H))$ be causal and autonomous. According to Theorem 8.2.2 we find $M:\mathbb{C}_{\operatorname{Re}>0}\to L(H)$ holomorphic and bounded such that

$$(\mathcal{L}Tf)\ (z) = M(z)\ (\mathcal{L}f)\ (z) \quad (f \in L\_2(\mathbb{R}\_{\geqslant 0}; H), z \in \mathbb{C}\_{\text{Re} > 0})\ .$$

Let now $\varphi\in C_{\mathrm{c}}^{\infty}(\mathbb{R};H)$ and set $a := \inf\operatorname{spt}\varphi$. Then $\tau_{a}\varphi\in L_2(\mathbb{R}_{\geqslant 0};H)$, and for $\nu>0$ we compute

$$\mathcal{L}_{\nu}T\varphi = \mathcal{L}_{\nu}\tau_{-a}T\tau_{a}\varphi = \mathrm{e}^{-(\mathrm{im}+\nu)a}\mathcal{L}_{\nu}T\tau_{a}\varphi = \mathrm{e}^{-(\mathrm{im}+\nu)a}M(\mathrm{im}+\nu)\mathcal{L}_{\nu}\tau_{a}\varphi = M(\mathrm{im}+\nu)\mathcal{L}_{\nu}\varphi.\tag{8.6}$$

The latter implies

$$\|T\varphi\|_{L_{2,\nu}(\mathbb{R};H)} = \|\mathcal{L}_{\nu}T\varphi\|_{L_2(\mathbb{R};H)} = \|M(\mathrm{im}+\nu)\mathcal{L}_{\nu}\varphi\|_{L_2(\mathbb{R};H)} \leqslant \|M\|_{\infty,\mathbb{C}_{\operatorname{Re}>0}}\,\|\varphi\|_{L_{2,\nu}(\mathbb{R};H)},$$

and hence, $T|_{C_{\mathrm{c}}^{\infty}(\mathbb{R};H)}$ has a unique continuous extension $T_{\nu}\in L(L_{2,\nu}(\mathbb{R};H))$. Using (8.6) we obtain

$$T_{\nu} = \mathcal{L}_{\nu}^{*}M(\mathrm{im}+\nu)\mathcal{L}_{\nu} = M(\partial_{t,\nu})$$

by approximation.

Let now $\nu_0\in\mathbb{R}$. Then the operator

$$\widetilde T := \mathrm{e}^{-\nu_0\mathrm{m}}\,T\,\mathrm{e}^{\nu_0\mathrm{m}}\in L(L_2(\mathbb{R};H))$$

is causal and autonomous as well. Thus, $\widetilde T|_{C_{\mathrm{c}}^{\infty}(\mathbb{R};H)}$ has continuous extensions $\widetilde T_{\rho}\in L(L_{2,\rho}(\mathbb{R};H))$ for each $\rho>0$, and there is $\widetilde M:\mathbb{C}_{\operatorname{Re}>0}\to L(H)$ holomorphic and bounded such that $\widetilde T_{\rho} = \widetilde M(\partial_{t,\rho})$ for each $\rho>0$. Using $T|_{C_{\mathrm{c}}^{\infty}(\mathbb{R};H)} = \mathrm{e}^{\nu_0\mathrm{m}}\widetilde T|_{C_{\mathrm{c}}^{\infty}(\mathbb{R};H)}\mathrm{e}^{-\nu_0\mathrm{m}}$, we derive that $T|_{C_{\mathrm{c}}^{\infty}(\mathbb{R};H)}$ has the unique continuous extension $T_{\nu} = \mathrm{e}^{\nu_0\mathrm{m}}\widetilde T_{\nu-\nu_0}\mathrm{e}^{-\nu_0\mathrm{m}}\in L(L_{2,\nu}(\mathbb{R};H))$ for each $\nu>\nu_0$ and

$$\begin{aligned}\mathcal{L}_{\nu}T_{\nu} &= \mathcal{L}_{\nu}\mathrm{e}^{\nu_0\mathrm{m}}\widetilde T_{\nu-\nu_0}\mathrm{e}^{-\nu_0\mathrm{m}} = \mathcal{L}_{\nu-\nu_0}\widetilde T_{\nu-\nu_0}\mathrm{e}^{-\nu_0\mathrm{m}} = \widetilde M(\mathrm{im}+\nu-\nu_0)\mathcal{L}_{\nu-\nu_0}\mathrm{e}^{-\nu_0\mathrm{m}}\\ &= \widetilde M(\mathrm{im}+\nu-\nu_0)\mathcal{L}_{\nu}.\end{aligned}$$

Hence,

$$T_{\nu} = M(\partial_{t,\nu})$$

for the holomorphic and bounded function $M$ given by $M(z) := \widetilde M(z-\nu_0)$ for $z\in\mathbb{C}_{\operatorname{Re}>\nu_0}$.
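As a concrete instance of Theorem 8.2.2 (our example, assuming NumPy, not from the text): $T = $ convolution with $\mathbb{1}_{(0,1)}$ is causal and autonomous on $L_2(\mathbb{R};\mathbb{C})$, its material law is $M(z) = \sqrt{2\pi}\,(\mathcal{L}\mathbb{1}_{(0,1)})(z) = (1-\mathrm{e}^{-z})/z$, and the formula from the proof recovers $M$ from $Tf_{1,1} = \mathbb{1}_{(0,1)}\ast\mathbb{1}_{(0,1)}$, the triangle function on $[0,2]$:

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", np.trapz)  # NumPy 2.x / 1.x compatibility

# Recover M(z) from M(z)x = sqrt(2*pi) z / (1 - e^{-z}) * (L T f_{x,1})(z)
# with x = 1 and T f_{1,1} = 1_{(0,1)} * 1_{(0,1)}, the triangle on [0, 2].
z = 0.8 + 1.3j
t = np.linspace(0.0, 2.0, 400_001)
tri = np.minimum(t, 2.0 - t)                               # (1_{(0,1)} * 1_{(0,1)})(t)
LTf = trapezoid(tri * np.exp(-z*t), t) / np.sqrt(2*np.pi)  # (L T f_{1,1})(z)
M = np.sqrt(2*np.pi) * z / (1.0 - np.exp(-z)) * LTf
expected = (1.0 - np.exp(-z)) / z
print(M, expected)
assert abs(M - expected) < 1e-8
```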

## **8.3 Comments**

The stated Theorem of Paley and Wiener is of course not the only theorem characterising properties of the support of $L_2$-functions in terms of their Fourier or Laplace transform. For instance, a similar result holds for functions having compact support, see e.g. [101, 19.3 Theorem] and Exercise 8.7. These theorems provide a nice connection between $L_2$-functions and spaces of holomorphic functions in the form of Hardy spaces. In this chapter we just introduced the Hardy space $\mathcal{H}_2$, and it is not surprising that there are also Hardy spaces $\mathcal{H}_p$ for $1\leqslant p\leqslant\infty$. We refer to [35] for this topic.

The representation result presented in the second part of this chapter was originally proved by Fourès and Segal in 1955, [41]. In this article the authors prove an analogous representation result for causal operators on $L_2(\mathbb{R}^{d};H)$, where causality is defined with respect to a closed and convex cone in $\mathbb{R}^{d}$. The quite elementary proof of Theorem 8.2.2 for $d = 1$ presented here was kindly communicated to us by Hendrik Vogt.

## **Exercises**

**Exercise 8.1** Let $\Lambda\subseteq\mathbb{R}_{>0}$ be a set with an accumulation point in $\mathbb{R}_{>0}$. Prove that $\{x\mapsto\mathrm{e}^{-\lambda x} \;;\; \lambda\in\Lambda\}$ is a total set in $L_1(\mathbb{R}_{\geqslant 0})$. *Hint:* Use that the set is total if and only if

$$\forall f\in L_{\infty}(\mathbb{R}_{\geqslant 0}):\ \Big(\forall\lambda\in\Lambda:\ \int_{\mathbb{R}_{\geqslant 0}}\mathrm{e}^{-\lambda x}f(x)\,\mathrm{d}x = 0\Big)\Rightarrow f = 0.$$

**Exercise 8.2** Let $M:\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H)$ be a material law. Moreover, let $\nu>s_{\mathrm{b}}(M)$. Show that $\lim_{\rho\to\nu+}M(\mathrm{im}+\rho) = M(\mathrm{im}+\nu)$, where the limit is meant in the strong operator topology on $L_2(\mathbb{R};H)$.

**Exercise 8.3** Prove the uniqueness statement in Theorem 8.2.1.

**Exercise 8.4** Give an example of a continuous and bounded function $M:\mathbb{C}_{\operatorname{Re}>0}\to L(H)$ such that the corresponding operator $M(\partial_{t,\nu})$ is not causal for any $\nu>0$.

**Exercise 8.5** Prove the following distributional variant of the Paley–Wiener theorem: Let *<sup>ν</sup>*<sup>0</sup> *<sup>&</sup>gt;* 0, *<sup>k</sup>* <sup>∈</sup> <sup>N</sup>, *<sup>f</sup>* : <sup>C</sup>Re*>ν*<sup>0</sup> <sup>→</sup> <sup>C</sup>, and set *h(z)* := <sup>1</sup> *<sup>z</sup><sup>k</sup> f (z)* for *<sup>z</sup>* <sup>∈</sup> <sup>C</sup>Re*>ν*<sup>0</sup> . We assume that *<sup>h</sup>* <sup>∈</sup> *<sup>H</sup>*2*(*CRe*>ν*<sup>0</sup> ; <sup>C</sup>*)*. For *ν>ν*<sup>0</sup> we define the distribution *u*: *C*<sup>∞</sup> <sup>c</sup> *(*R*)* <sup>→</sup> <sup>C</sup> by

$$u(\psi) := \left\langle \mathcal{L}_\nu^*\,h(\mathrm{i}\,\cdot+\nu),\ (\partial_{t,\nu}^*)^k\psi\right\rangle_{L_{2,\nu}(\mathbb{R};\mathbb{C})}\quad(\psi\in C_{\mathrm c}^\infty(\mathbb{R};\mathbb{C})).$$

Prove that $\operatorname{spt} u\subseteq\mathbb{R}_{\geqslant 0}$, where

$$\operatorname{spt} u := \mathbb{R}\setminus\bigcup\left\{U\subseteq\mathbb{R}\text{ open}\;;\;\forall\,\psi\in C_{\mathrm c}^\infty(U;\mathbb{C}):u(\psi)=0\right\}.$$

What is $u$ if $f=\mathbb{1}_{\mathbb{C}_{\operatorname{Re}>\nu_0}}$?

**Exercise 8.6** Let $g\in L_2(\mathbb{R})$, $a>0$ such that $\operatorname{spt} g\subseteq[-a,a]$. Show that $f:=\mathcal{F}g$ extends to a holomorphic function $f:\mathbb{C}\to\mathbb{C}$ with $f(\mathrm{i}t)=(\mathcal{F}g)(t)$ for each $t\in\mathbb{R}$ such that

$$\exists C\geqslant 0\ \forall z\in\mathbb{C}:\ |f(z)|\leqslant C\mathrm{e}^{a|\operatorname{Re} z|}.$$

**Exercise 8.7** Let $f:\mathbb{C}\to\mathbb{C}$ be holomorphic such that

(a) $\exists C\geqslant 0,\ a>0\ \forall z\in\mathbb{C}:\ |f(z)|\leqslant C\mathrm{e}^{a|\operatorname{Re} z|}$,
(b) $f(\mathrm{i}\,\cdot)\in L_2(\mathbb{R})$.

Prove that $g:=\mathcal{F}^*f(\mathrm{i}\,\cdot)$ satisfies $\operatorname{spt} g\subseteq[-a,a]$. *Hint:* Apply Theorem 8.1.2 to the function $h:\mathbb{C}_{\operatorname{Re}>0}\to\mathbb{C}$ given by

$$h(z) := \mathbf{e}^{-za} \frac{f(z)}{z+1} \quad (z \in \mathbb{C}\_{\text{Re}>0})$$

to derive that $\operatorname{spt} g\subseteq\mathbb{R}_{\geqslant -a}$.

*Remark:* The assertion even holds true if one replaces condition (a) by

$$\exists C\geqslant 0,\ a>0\ \forall z\in\mathbb{C}:\ |f(z)|\leqslant C\mathrm{e}^{a|z|}.$$

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 9 Initial Value Problems and Extrapolation Spaces**

Up until now we have dealt with evolutionary equations of the form

$$\overline{\left(\partial_{t,\nu}M(\partial_{t,\nu})+A\right)}\,U = F$$

for some given $F\in L_{2,\nu}(\mathbb{R};H)$ for some Hilbert space $H$, a skew-selfadjoint operator $A$ in $H$ and a material law $M$ defined on a suitable half-plane satisfying an appropriate positive definiteness condition, with $\nu\in\mathbb{R}$ chosen suitably large. Under these conditions, we established that the solution operator, $S_\nu := \big(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\big)^{-1}\in L(L_{2,\nu}(\mathbb{R};H))$, is eventually independent of $\nu$ and causal; that is, if $F=0$ on $(-\infty,a]$ for some $a\in\mathbb{R}$, then so too is $U$.

Solving for $U\in L_{2,\nu}(\mathbb{R};H)$ for some non-negative $\nu$ penalises $U$ having support on $\mathbb{R}_{\leqslant 0}$. This might be interpreted as an implicit *initial condition at* $-\infty$. In this chapter, we shall study how to obtain a solution for initial value problems with an initial condition at $0$, based on the solution theory developed in the previous chapters.

## **9.1 What are Initial Values?**

This section is devoted to the motivation of the framework to follow in the subsequent section. Let us consider the following, arguably easiest but not entirely trivial, initial value problem: find a 'causal' $u:\mathbb{R}\to\mathbb{R}$ such that for $u_0\in\mathbb{R}$ we have

$$\begin{cases} u'(t) = 0 & (t>0),\\ u(0) = u_0. \end{cases}\tag{9.1}$$


First of all note that there is no condition for $u$ on $(-\infty,0)$. Since there is no source term or right-hand side supported on $(-\infty,0)$, causality would imply that $u=0$ on $(-\infty,0)$. Moreover, $u=c$ for some constant $c\in\mathbb{R}$ on $(0,\infty)$. Thus, in order to match the initial condition,

$$
u(t) = u_0\,\mathbb{1}_{[0,\infty)}(t)\quad(t\in\mathbb{R}).
$$

Notice also that $u$ is not continuous. Hence, by the Sobolev embedding theorem (Theorem 4.1.2), $u\notin\bigcup_{\nu>0}\operatorname{dom}(\partial_{t,\nu})$.

**Proposition 9.1.1** *Let $H$ be a Hilbert space, $u_0\in H$. Define*

$$\delta_0 u_0\colon C_{\mathrm c}^\infty(\mathbb{R};H)\to\mathbb{K},\qquad f\mapsto\langle u_0, f(0)\rangle_H.$$

*Then, for all $\nu\in\mathbb{R}_{>0}$, $\delta_0 u_0$ extends to a continuous linear functional on $\operatorname{dom}(\partial_{t,\nu})$. Re-using the notation for this extension, for all $f\in\operatorname{dom}(\partial_{t,\nu})$ we have*

$$(\delta_0 u_0)(f) = -\left\langle\mathbb{1}_{[0,\infty)}u_0,\ (\partial_{t,\nu}-2\nu)f\right\rangle_{L_{2,\nu}(\mathbb{R};H)}.\tag{9.2}$$

*Proof* The equality (9.2) is obvious for $f\in C_{\mathrm c}^\infty(\mathbb{R};H)$ as it is a direct consequence of the fundamental theorem of calculus (look at the right-hand side first). The continuity of $\delta_0 u_0$ follows from the Cauchy–Schwarz inequality applied to the right-hand side of (9.2). Note that $\mathbb{1}_{[0,\infty)}u_0\in L_{2,\nu}(\mathbb{R};H)$.
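For concreteness, the fundamental theorem of calculus argument behind (9.2) can be spelled out for $f\in C_{\mathrm c}^\infty(\mathbb{R};H)$ as follows (this merely expands the proof above and is not part of the original numbered material):

$$-\left\langle\mathbb{1}_{[0,\infty)}u_0,(\partial_{t,\nu}-2\nu)f\right\rangle_{L_{2,\nu}(\mathbb{R};H)} = -\int_0^\infty\big\langle u_0, f'(t)-2\nu f(t)\big\rangle_H\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t = -\int_0^\infty\frac{\mathrm{d}}{\mathrm{d}t}\big\langle u_0,\mathrm{e}^{-2\nu t}f(t)\big\rangle_H\,\mathrm{d}t = \langle u_0,f(0)\rangle_H,$$

where the boundary term at $\infty$ vanishes since $f$ has compact support.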

Recall from Corollary 3.2.6 that

$$
\partial_{t,\nu}^* = -\partial_{t,\nu}+2\nu.
$$

Hence, if we *formally* apply this formula to (9.2), we obtain

$$\left\langle\partial_{t,\nu}\mathbb{1}_{[0,\infty)}u_0, f\right\rangle = \left\langle\mathbb{1}_{[0,\infty)}u_0,\ \partial_{t,\nu}^* f\right\rangle_{L_{2,\nu}(\mathbb{R};H)} = (\delta_0 u_0)(f).$$

Therefore, in order to use the introduced time derivative operator for the above initial value problem, we need to extend the time derivative to a broader class of functions than just $\operatorname{dom}(\partial_{t,\nu})$. To utilise the adjoint operator in this way will be central to the construction to follow. It will turn out that indeed

$$
\partial_{t,\nu}\mathbb{1}_{[0,\infty)}u_0 = \delta_0 u_0.
$$

Moreover, we shall show below that

$$
\partial_{t,\nu}u = \delta_0 u_0
$$

considered on the full time-line R is one possible replacement of the initial value problem (9.1).

## **9.2 Extrapolating Operators**

Since we are dealing with functionals, let us recall the definition of the dual space. Throughout this section let $H, H_0, H_1$ be Hilbert spaces.

**Definition** The space

$$H' := \{\varphi:H\to\mathbb{K}\;;\;\varphi\text{ linear and bounded}\}$$

is called the *dual space of $H$*. We equip $H'$ with the linear structure

$$(\lambda\odot\varphi+\psi)(x) := \lambda^*\varphi(x)+\psi(x)\quad(\lambda\in\mathbb{K},\ \varphi,\psi\in H',\ x\in H).$$

*Remark 9.2.1* Note that $H'$ is a Hilbert space itself, since by the Riesz representation theorem for each $\varphi\in H'$ we find a unique element $R_H\varphi\in H$ such that

$$\forall x\in H:\ \varphi(x) = \langle R_H\varphi, x\rangle.$$

Due to the linear structure on $H'$, the induced mapping $R_H:H'\to H$ (which is one-to-one and onto) becomes linear and

$$H' \times H' \ni (\varphi, \psi) \mapsto \langle R\_H \varphi, R\_H \psi \rangle$$

defines an inner product on $H'$, which induces the usual norm on functionals.
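Indeed, that the induced norm is the usual one is a quick consequence of the Cauchy–Schwarz inequality (a standard verification, spelled out here for convenience):

$$\sqrt{\langle R_H\varphi,R_H\varphi\rangle} = \|R_H\varphi\|_H = \sup_{\|x\|_H\leqslant 1}\left|\langle R_H\varphi,x\rangle\right| = \sup_{\|x\|_H\leqslant 1}|\varphi(x)| = \|\varphi\|_{H'},$$

where the supremum is attained (for $\varphi\neq 0$) at $x = R_H\varphi/\|R_H\varphi\|_H$.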

From now on we will identify elements $x\in H$ with their representatives in $H'$; that is, we identify $x$ with $R_H^{-1}x$.

Let $C:\operatorname{dom}(C)\subseteq H_0\to H_1$ be linear, densely defined and closed. We recall that in this case $\operatorname{dom}(C)$ endowed with the graph inner product

$$(u,v)\mapsto\langle u,v\rangle_{H_0}+\langle Cu,Cv\rangle_{H_1}$$

becomes a Hilbert space. Clearly, $\operatorname{dom}(C)\hookrightarrow H_0$ is continuous with dense range. Moreover, we see that $\operatorname{dom}(C)\ni x\mapsto Cx\in H_1$ is continuous. We define

$$\begin{aligned} C^\diamond\colon H_1 &\to \operatorname{dom}(C)' =: H^{-1}(C),\\ (C^\diamond\varphi)(x) &:= \langle\varphi, Cx\rangle_{H_1}\quad(\varphi\in H_1,\ x\in\operatorname{dom}(C)). \end{aligned}$$

Note that $C^\diamond$ is related to the dual operator $C'$ of $C$, considered as a bounded operator from $\operatorname{dom}(C)$ to $H_1$, by

$$C^\diamond = C'R_{H_1}^{-1}.$$
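This identity can be checked in one line. Recall that the dual operator $C'$ acts on functionals $\psi\in H_1'$ via $(C'\psi)(x)=\psi(Cx)$ for $x\in\operatorname{dom}(C)$. Hence, for $\varphi\in H_1$,

$$\big(C'R_{H_1}^{-1}\varphi\big)(x) = \big(R_{H_1}^{-1}\varphi\big)(Cx) = \langle\varphi, Cx\rangle_{H_1} = (C^\diamond\varphi)(x)\quad(x\in\operatorname{dom}(C)).$$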

**Proposition 9.2.2** *With the notions and definitions from this section, the following statements hold:*


#### *Proof*

(a) Let $\varphi,\psi\in H_1$, $\lambda\in\mathbb{K}$. Then

$$C^\diamond(\lambda\varphi+\psi)(x) = \lambda^*(C^\diamond\varphi)(x)+(C^\diamond\psi)(x) = (\lambda\odot C^\diamond\varphi+C^\diamond\psi)(x)\quad(x\in\operatorname{dom}(C)).$$

To show continuity, let $\varphi\in H_1$ and $x\in\operatorname{dom}(C)$. Then

$$|C^\diamond(\varphi)(x)| = \left|\langle\varphi,Cx\rangle_{H_1}\right| \leqslant \|\varphi\|_{H_1}\|Cx\|_{H_1} \leqslant \|\varphi\|_{H_1}\|x\|_{\operatorname{dom}(C)}.$$

Hence, $\|C^\diamond\| = \sup_{\varphi\in H_1,\,\|\varphi\|_{H_1}\leqslant 1}\|C^\diamond\varphi\|_{\operatorname{dom}(C)'}\leqslant 1$.

(b) Let $\varphi\in\operatorname{dom}(C^*)$. Then we have for all $x\in\operatorname{dom}(C)$

$$(C^\diamond\varphi)(x) = \langle\varphi,Cx\rangle_{H_1} = \langle C^*\varphi, x\rangle_{H_0} = (C^*\varphi)(x).$$

We obtain $C^\diamond\varphi = C^*\varphi$ (note that a functional on $H_0$ is uniquely determined by its values on $\operatorname{dom}(C)$).

(c) Using (b), we are left with showing $\ker(C^\diamond)\subseteq\ker(C^*)$. So, let $\varphi\in\ker(C^\diamond)$. Then for all $x\in\operatorname{dom}(C)$ we have

$$0 = (C^\diamond\varphi)(x) = \langle\varphi, Cx\rangle_{H_1},$$

which leads to $\varphi\in\operatorname{dom}(C^*)$ and $\varphi\in\ker(C^*)$.


We will also write $C_{-1} := (C^*)^\diamond$ for the so-called *extrapolated operator* of $C$. Then $(C^*)_{-1} = C^\diamond$. We will record the index $-1$ at the beginning, but in order to avoid too much clutter in the notation we will drop this index again, bearing in mind that $C_{-1}\supseteq C$ and $(C^*)_{-1}\supseteq C^*$.

*Example 9.2.3* We have shown that for all $\nu\in\mathbb{R}$ the operator $\partial_{t,\nu}$ is densely defined and closed. Then for $f\in L_{2,\nu}(\mathbb{R})$ we have for all $\varphi\in C_{\mathrm c}^\infty(\mathbb{R})$

$$\left((\partial_{t,\nu})_{-1}f\right)(\varphi) = \left\langle f,\partial_{t,\nu}^*\varphi\right\rangle_{L_{2,\nu}} = \left\langle f,(-\partial_{t,\nu}+2\nu)\varphi\right\rangle_{L_{2,\nu}} = -\int_{\mathbb{R}}\left\langle f,\left(\mathrm{e}^{-2\nu\cdot}\varphi\right)'\right\rangle_{\mathbb{C}}.$$

Hence, *(∂t ,ν)*−1*f* acts as the 'usual' distributional derivative taking into account the exponential weight in the scalar product.

With this observation we deduce that for *ν >* 0 we have

$$(\partial_{t,\nu})_{-1}\mathbb{1}_{[0,\infty)} = \partial_{t,\nu}\mathbb{1}_{[0,\infty)} = \delta_0.$$

Hence, the initial value problem from the beginning reads: find *u* such that

$$(\partial_{t,\nu})_{-1}u = \delta_0 u_0.$$
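With the formula from Example 9.2.3, one can check directly that the causal solution $u = u_0\mathbb{1}_{[0,\infty)}$ found in Section 9.1 satisfies this equation (a quick verification, not part of the original numbered material): for $\varphi\in C_{\mathrm c}^\infty(\mathbb{R})$,

$$\big((\partial_{t,\nu})_{-1}\,u_0\mathbb{1}_{[0,\infty)}\big)(\varphi) = -\int_0^\infty\Big\langle u_0,\big(\mathrm{e}^{-2\nu\cdot}\varphi\big)'(t)\Big\rangle_{\mathbb{C}}\,\mathrm{d}t = \langle u_0,\varphi(0)\rangle_{\mathbb{C}} = (\delta_0 u_0)(\varphi),$$

where the integral telescopes by the fundamental theorem of calculus since $\varphi$ has compact support.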

*Example 9.2.4* Let $\Omega\subseteq\mathbb{R}^d$ be open. Consider $\operatorname{grad}_0:H_0^1(\Omega)\subseteq L_2(\Omega)\to L_2(\Omega)^d$. We compute $\operatorname{div}_{-1}:L_2(\Omega)^d\to H^{-1}(\Omega)$ with $H^{-1}(\Omega) := H_0^1(\Omega)'$. For $q\in L_2(\Omega)^d$ we obtain for all $\varphi\in H_0^1(\Omega)$

$$(\operatorname{div}\_{-1} q) \left( \phi \right) = \langle q, \operatorname{div}^\* \phi \rangle\_{L\_2(\Omega)^d} = - \langle q, \operatorname{grad}\_0 \phi \rangle\_{L\_2(\Omega)^d}.$$

Also, with similar arguments, we see that

$$\left(\text{grad}\_{-1} f\right)(q) = -\left\langle f, \text{div}\_0 q \right\rangle\_{L\_2(\Omega)}$$

for all $f\in L_2(\Omega)$ and $q\in H_0(\operatorname{div},\Omega)$.
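As a simple one-dimensional sanity check (an illustration added here, not one of the numbered examples): take $\Omega = (0,1)$, $d = 1$ and $q = \mathbb{1}$ the constant function with value one. Then for every $\varphi\in H_0^1(0,1)$,

$$(\operatorname{div}_{-1}\mathbb{1})(\varphi) = -\langle\mathbb{1},\operatorname{grad}_0\varphi\rangle_{L_2(0,1)} = -\int_0^1\varphi'(t)\,\mathrm{d}t = 0,$$

so the constant function lies in the kernel of $\operatorname{div}_{-1}$: the boundary values that a classical integration by parts would pick up are invisible to test functions vanishing on $\partial\Omega$.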

We consider a case of particular interest within the framework of evolutionary equations.

**Proposition 9.2.5** *Let $A:\operatorname{dom}(C)\times\operatorname{dom}(C^*)\subseteq H_0\times H_1\to H_0\times H_1$ be given by*

$$A\begin{pmatrix} \phi \\ \psi \end{pmatrix} = \begin{pmatrix} 0 & C^\* \\ -C & 0 \end{pmatrix} \begin{pmatrix} \phi \\ \psi \end{pmatrix} = \begin{pmatrix} C^\* \psi \\ -C \phi \end{pmatrix}.$$

*Then $A_{-1}:H_0\times H_1\to H^{-1}(C)\times H^{-1}(C^*)$ acts as*

$$A\_{-1} \begin{pmatrix} \phi \\ \psi \end{pmatrix} = \begin{pmatrix} 0 & (C^\*)\_{-1} \\ -C\_{-1} & 0 \end{pmatrix} \begin{pmatrix} \phi \\ \psi \end{pmatrix} = \begin{pmatrix} (C^\*)\_{-1} \psi \\ -C\_{-1} \phi \end{pmatrix}.$$

Next, we will look at the solution theory when carried over to distributional right-hand sides.

An immediate consequence of the introduction of extrapolated operators, however, is that we are now in the position to omit the closure bar for the operator sum in an evolutionary equation, which we will see in an abstract version in Theorem 9.2.6 and for evolutionary equations in Theorem 9.3.2. The main advantage is that it is much easier to compute an operator sum than its closure. The price we have to pay is that we have to work in a larger space $H^{-1}$ rather than in the original Hilbert space $L_{2,\nu}(\mathbb{R};H)$. Put differently, this provides another notion of "solutions" for evolutionary equations. For this, we need to introduce the set

$$\operatorname{Fun}(H) := \{\varphi\colon\operatorname{dom}(\varphi)\subseteq H\to\mathbb{K}\;;\;\varphi\text{ linear}\}$$

of not necessarily everywhere defined linear functionals on $H$. Any $u\in H$ is thus identified with an element in $\operatorname{Fun}(H)$ via $\psi\mapsto\langle u,\psi\rangle_H$. Note that we can add and scalarly multiply elements in $\operatorname{Fun}(H)$ with respect to the same addition and scalar multiplication defined on $H'$ and with their natural domains. As usual, we will use the $\subseteq$-sign for extension/restriction of mappings.

**Theorem 9.2.6** *Let $A:\operatorname{dom}(A)\subseteq H\to H$, $B:\operatorname{dom}(B)\subseteq H\to H$ be densely defined and closed such that $A+B$ is closable, and assume that there exists $(T_n)_{n\in\mathbb{N}}$ in $L(H)$ such that $T_n\to 1_H$ in the strong operator topology with $\operatorname{ran}(T_n)\subseteq\operatorname{dom}(B)$ and*

$$T\_n A \subseteq A T\_n, \quad \ T\_n B \subseteq B T\_n \text{ for all } n \in \mathbb{N}.$$

*Then $T_n^*A^*\subseteq A^*T_n^*$ and $T_n^*B^*\subseteq B^*T_n^*$ for each $n\in\mathbb{N}$ and $\operatorname{ran}(T_n^*)\subseteq\operatorname{dom}(B^*)$. Moreover, for $x,f\in H$ the following conditions are equivalent:*

(i) *$x\in\operatorname{dom}(\overline{A+B})$ and $\overline{A+B}\,x = f$;*
(ii) *$A_{-1}x+B_{-1}x\subseteq f$ in $\operatorname{Fun}(H)$; that is, $(A_{-1}x+B_{-1}x)(y) = \langle f,y\rangle$ for all $y\in\operatorname{dom}(A^*)\cap\operatorname{dom}(B^*)$.*


*Proof* Let $n\in\mathbb{N}$. Taking adjoints in the inclusion $T_nA\subseteq AT_n$, we derive $(AT_n)^*\subseteq(T_nA)^*$. By Theorem 2.3.4 and Remark 2.3.7 we obtain

$$T\_n^\* A^\* \subseteq \overline{T\_n^\* A^\*} = (A T\_n)^\* \subseteq (T\_n A)^\* = A^\* T\_n^\*.$$

The same argument shows the claim for $B^*$. Moreover, since $BT_n$ is a closed linear operator defined on the whole space $H$, it follows that $BT_n\in L(H)$ by the closed graph theorem. Hence, $(BT_n)^*$ is bounded by Lemma 2.2.9 and since $(BT_n)^*\subseteq(T_nB)^* = B^*T_n^*$, we derive that $\operatorname{dom}(B^*T_n^*) = H$, showing that $\operatorname{ran}(T_n^*)\subseteq\operatorname{dom}(B^*)$.

We now prove the asserted equivalence.

(i)⇒(ii): By definition, there exists $(x_n)_n$ in $\operatorname{dom}(A)\cap\operatorname{dom}(B)$ such that $x_n\to x$ in $H$ and $Ax_n+Bx_n\to f$. By continuity, we obtain $A_{-1}x_n\to A_{-1}x$ and $B_{-1}x_n\to B_{-1}x$ in $H^{-1}(A^*)$ and $H^{-1}(B^*)$, respectively. Thus, we have

$$(A_{-1}x+B_{-1}x)(y) = \lim_{n\to\infty}(A_{-1}x_n+B_{-1}x_n)(y) = \lim_{n\to\infty}\langle Ax_n+Bx_n, y\rangle = \langle f, y\rangle$$

for each *y* ∈ dom*(A*∗*)* ∩ dom*(B*∗*)*, which shows the asserted inclusion.

(ii)⇒(i): For $n\in\mathbb{N}$ we put $x_n := T_nx$. Then $x_n\in\operatorname{dom}(B)$ and for all $y\in\operatorname{dom}(A^*)\cap\operatorname{dom}(B^*)$, we obtain

$$
\begin{aligned}
\langle T_nf - Bx_n, y\rangle &= \langle T_nf, y\rangle - \langle T_nx, B^*y\rangle = \langle f, T_n^*y\rangle - \langle x, T_n^*B^*y\rangle\\
&= \langle f, T_n^*y\rangle - \langle x, B^*T_n^*y\rangle = f(T_n^*y) - (B_{-1}x)(T_n^*y)\\
&= (A_{-1}x)(T_n^*y) = \langle x, A^*T_n^*y\rangle = \langle x, T_n^*A^*y\rangle = \langle x_n, A^*y\rangle,
\end{aligned}
$$

where we have used that $T_n^*y\in\operatorname{dom}(A^*)\cap\operatorname{dom}(B^*)$. Let now $y\in\operatorname{dom}(A^*)$. Then $T_k^*y\in\operatorname{dom}(A^*)\cap\operatorname{dom}(B^*)$ for each $k\in\mathbb{N}$ and thus, by what we have shown above,

$$
\begin{aligned}
\langle T\_k(T\_n f - B\mathbf{x}\_n), \mathbf{y} \rangle &= \left\langle T\_n f - B\mathbf{x}\_n, T\_k^\* \mathbf{y} \right\rangle = \left\langle \mathbf{x}\_n, A^\* T\_k^\* \mathbf{y} \right\rangle \\ &= \left\langle \mathbf{x}\_n, T\_k^\* A^\* \mathbf{y} \right\rangle = \left\langle T\_k \mathbf{x}\_n, A^\* \mathbf{y} \right\rangle
\end{aligned}
$$

for each $k\in\mathbb{N}$. Letting $k$ tend to infinity, we derive

$$
\langle T_nf - Bx_n, y\rangle = \langle x_n, A^*y\rangle.
$$

Since this holds for each $y\in\operatorname{dom}(A^*)$, it implies that $x_n\in\operatorname{dom}(A)$ and $Ax_n+Bx_n = T_nf$. Letting $n\to\infty$, we deduce $x_n\to x$ and $Ax_n+Bx_n\to f$; that is, (i).

**Lemma 9.2.7** *Let $T:\operatorname{dom}(T)\subseteq H\to H$ be densely defined and closed with $0\in\rho(T)$. Then $T_{-1}:H\to H^{-1}(T^*)$ is an isomorphism. In particular, the norms $\|(T_{-1})^{-1}\cdot\|_H$ and $\|\cdot\|_{H^{-1}(T^*)}$ are equivalent.*

*Proof* Note that since $0\in\rho(T)$ we obtain $\{0\} = \ker(T) = \ker((T^*)^\diamond) = \ker(T_{-1})$, see Proposition 9.2.2(c). Thus, $T_{-1}$ is one-to-one. Next, let $f\in H^{-1}(T^*)$. Since $0\in\rho(T)$, we obtain $0\in\rho(T^*)$ by Exercise 2.4, which implies that $\langle T^*\cdot, T^*\cdot\rangle$ defines an equivalent scalar product on $\operatorname{dom}(T^*)$. Thus, by the Riesz representation theorem, we find $\varphi\in\operatorname{dom}(T^*)$ such that for all $\psi\in\operatorname{dom}(T^*)$ we have

$$f(\psi) = \left< T^\*\phi, T^\*\psi \right> = \left( \left( T^\* \right)^\diamond \left( T^\*\phi \right) \right)(\psi).$$

Hence, $f\in\operatorname{ran}((T^*)^\diamond) = \operatorname{ran}(T_{-1})$, thus proving that $T_{-1}$ is onto.

The following alternative description of $H^{-1}(T^*)$ is the content of Exercise 9.5.

**Proposition 9.2.8** *Let T* : dom*(T )* ⊆ *H* → *H be densely defined and closed with* 0 ∈ *ρ(T ). Then*

$$H^{-1}(T^*)\cong\widetilde{\left(H,\left\|T^{-1}\cdot\right\|_H\right)},$$

*where $\cong$ means isomorphic as Banach spaces and $\widetilde{(\,\cdot\,)}$ denotes the completion.*

**Proposition 9.2.9** *Let $B\in L(H)$. Assume that $T:\operatorname{dom}(T)\subseteq H\to H$ is densely defined and closed with $0\in\rho(T)$ and $T^{-1}B = BT^{-1}$. Then $B$ admits a unique continuous extension in $L(H^{-1}(T^*))$.*

*Proof* By Proposition 9.2.2(e), $\operatorname{dom}(B) = H$ is dense in $H^{-1}(T^*)$. Thus, it suffices to show that $B:H\subseteq H^{-1}(T^*)\to H^{-1}(T^*)$ is continuous. For this, let $\varphi\in H$ and compute for all $q\in\operatorname{dom}(T^*)$

$$\begin{aligned}
|(B\varphi)(q)| &= \left|\langle B\varphi, q\rangle\right| = \left|\langle TT^{-1}B\varphi, q\rangle\right| = \left|\langle T^{-1}B\varphi, T^*q\rangle\right|\\
&= \left|\langle BT^{-1}\varphi, T^*q\rangle\right| \leqslant \|B\|\left\|T^{-1}\varphi\right\|\|q\|_{\operatorname{dom}(T^*)}.
\end{aligned}$$

The statement now follows upon invoking Lemma 9.2.7.

The abstract notions and concepts just developed will be applied to evolutionary equations next.

## **9.3 Evolutionary Equations in Distribution Spaces**

In this section, we will specialise the results from the previous section and provide an extension of the solution theory in $L_{2,\nu}(\mathbb{R};H)$. For this, and throughout this whole section, we let $H$ be a Hilbert space, $\mu\in\mathbb{R}$ and $M:\mathbb{C}_{\operatorname{Re}>\mu}\to L(H)$ be a material law. Furthermore, let $\nu>\max\{s_{\mathrm b}(M),0\}$ and $A:\operatorname{dom}(A)\subseteq H\to H$ be skew-selfadjoint. In order to keep track of the Hilbert spaces involved, we shall put

$$\begin{aligned} H^1_\nu(\mathbb{R};H) &:= \operatorname{dom}(\partial_{t,\nu}),\\ H^{-1}_\nu(\mathbb{R};H) &:= \operatorname{dom}(\partial_{t,\nu})'\cong\operatorname{dom}(\partial_{t,\nu}^*)'. \end{aligned}$$

**Proposition 9.3.1** *Let $D:\operatorname{dom}(D)\subseteq H\to H$ be densely defined and closed and $B\in L(H)$. Assume that $DB$ is densely defined. Then for all $\varphi\in H$, $(DB)_{-1}\varphi = D_{-1}B\varphi$ on $\operatorname{dom}(D^*)$.*

*Proof* First of all, note that $(DB)^* = \overline{B^*D^*}$ by Theorem 2.3.4. Next, let $\varphi\in H$ and $x\in\operatorname{dom}(D^*)$. Then

$$((DB)_{-1}\varphi)(x) = \left\langle\varphi,(DB)^*x\right\rangle = \left\langle\varphi,\overline{B^*D^*}\,x\right\rangle = \left\langle\varphi,B^*D^*x\right\rangle = \left\langle B\varphi,D^*x\right\rangle = (D_{-1}B\varphi)(x).\qquad\square$$

The first application of the theory developed in the previous section reads as follows.

**Theorem 9.3.2** *Let $U,F\in L_{2,\nu}(\mathbb{R};H)$. Then the following statements are equivalent:*

(i) *$U\in\operatorname{dom}\big(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\big)$ and $\big(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\big)U = F$.*
(ii) *$\partial_{t,\nu}M(\partial_{t,\nu})U + AU\subseteq F$, where the left-hand side is considered as an element of $H^{-1}_\nu(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};H^{-1}(A))\subseteq\operatorname{Fun}(L_{2,\nu}(\mathbb{R};H))$.*

Before we come to the proof, we state the following lemma, the proof of which is left as Exercise 9.7.

**Lemma 9.3.3** *Let H be a Hilbert space.*

(a) *Let $B:\operatorname{dom}(B)\subseteq H\to H$ and $C:\operatorname{dom}(C)\subseteq H\to H$ be densely defined closed linear operators. Moreover, let $\lambda,\mu\in\rho(C)$ be in the same connected component of $\rho(C)$ and*

$$(\mu - C)^{-1}B \subseteq B(\mu - C)^{-1}.$$

*Then $(\lambda-C)^{-1}B\subseteq B(\lambda-C)^{-1}$.*

(b) *For $\nu>0$ we have $(1+\varepsilon\partial_{t,\nu})^{-1}\to 1_{L_{2,\nu}(\mathbb{R};H)}$ and $(1+\varepsilon\partial_{t,\nu}^*)^{-1}\to 1_{L_{2,\nu}(\mathbb{R};H)}$ strongly as $\varepsilon\to 0+$.*

*Proof of Theorem 9.3.2* At first, we want to apply Theorem 9.2.6 from above to the case of $L_{2,\nu}(\mathbb{R};H)$ being the Hilbert space, $A$ the operator in $L_{2,\nu}(\mathbb{R};H)$, $B = \partial_{t,\nu}M(\partial_{t,\nu})$, and $T_n := \big(1+\frac{1}{n}\partial_{t,\nu}\big)^{-1}$, $n\in\mathbb{N}$. The operators $A$ and $B$ are densely defined. Indeed, $A$ is skew-selfadjoint and $\operatorname{dom}(B)\supseteq\operatorname{dom}(\partial_{t,\nu})$. Next, by Theorems 2.3.2 and 2.3.4,

$$(B+A)^*\supseteq B^*+A^* = (\partial_{t,\nu}M(\partial_{t,\nu}))^*-A\supseteq M(\partial_{t,\nu})^*\partial_{t,\nu}^*-A.$$

In consequence, dom*((A* + *B)*∗*)* ⊇ dom*(∂t ,ν)* ∩ dom*(A)* is dense. Thus, *B* + *A* is closable by Lemma 2.2.7.

By Lemma 9.3.3 we obtain $T_n, T_n^*\to 1_{L_{2,\nu}(\mathbb{R};H)}$ strongly in $L_{2,\nu}(\mathbb{R};H)$ as $n\to\infty$. Moreover, by Hille's theorem (see Proposition 3.1.6) we have $\partial_{t,\nu}^{-1}A\subseteq A\partial_{t,\nu}^{-1}$ and thus, $T_nA\subseteq AT_n$ for each $n\in\mathbb{N}$ by Lemma 9.3.3, which also yields $T_n^*A\subseteq AT_n^*$ for each $n\in\mathbb{N}$ by Theorem 9.2.6. The latter, together with the strong convergence of $(T_n)_n$ and $(T_n^*)_n$, yields that $T_n, T_n^*\to 1_{L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))}$ strongly in $L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$ as $n\to\infty$.

Next, we infer ran*(Tn)* ⊆ dom*(∂t ,ν)* ⊆ dom*(B)* and

$$T\_n B \subseteq B T\_n$$

for all *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> by using the Fourier–Laplace transformation, see also Theorem 5.2.3. Hence, by Theorem 9.2.6, condition (i) is equivalent to

$$(\partial_{t,\nu}M(\partial_{t,\nu}))_{-1}U + A_{-1}U\subseteq F.\tag{9.3}$$

It remains to show that (9.3) is equivalent to (ii): We apply Proposition 9.3.1 to the case $D = \partial_{t,\nu}$, $B = M(\partial_{t,\nu})$. For this, assume that (9.3) holds. By Proposition 9.3.1, we deduce that for all $\varphi\in\operatorname{dom}(\partial_{t,\nu}^*)\cap\operatorname{dom}(A)$ we have (use $\operatorname{dom}(A) = \operatorname{dom}(A^*)$)

$$((\partial\_{\mathfrak{l},\boldsymbol{\nu}}M(\partial\_{\mathfrak{l},\boldsymbol{\nu}}))\_{-1}U + A\_{-1}U)(\boldsymbol{\varphi}) = ((\partial\_{\mathfrak{l},\boldsymbol{\nu}})\_{-1}M(\partial\_{\mathfrak{l},\boldsymbol{\nu}})U + A\_{-1}U)(\boldsymbol{\varphi})$$

Thus, (9.3) implies (ii).

Now, assume that (ii) holds. Let $\varphi\in\operatorname{dom}((\partial_{t,\nu}M(\partial_{t,\nu}))^*)\cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$. Then, for $n\in\mathbb{N}$, $\varphi_n := T_n^*\varphi\to\varphi$ as $n\to\infty$ in $L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$ and

$$(\partial_{t,\nu}M(\partial_{t,\nu}))^*\varphi_n = T_n^*(\partial_{t,\nu}M(\partial_{t,\nu}))^*\varphi\to(\partial_{t,\nu}M(\partial_{t,\nu}))^*\varphi\quad(n\to\infty)$$

in *<sup>L</sup>*2*,ν (*R; *H )*. By (ii) we obtain

$$( (\partial\_{l,\boldsymbol{\nu}})\_{-1} M(\partial\_{l,\boldsymbol{\nu}}) U + A\_{-1} U ) (\phi\_n) = F(\phi\_n).$$

Using Proposition 9.3.1, we infer

$$((\partial\_{l,\boldsymbol{\upsilon}}M(\partial\_{l,\boldsymbol{\upsilon}}))\_{-1}U + A\_{-1}U)(\phi\_n) = F(\phi\_n).$$

Letting *n* → ∞, we deduce (9.3).

Assume now that there exists *c >* 0 such that

$$\operatorname{Re}\, zM(z) \geqslant c \quad (z \in \mathbb{C}\_{\operatorname{Re}\geqslant v})\,.$$

We recall from Theorem 6.2.1 that the operator $\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}$ is continuously invertible in $L_{2,\nu}(\mathbb{R};H)$.

**Theorem 9.3.4** *The operator $S_\nu := \big(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\big)^{-1}\in L(L_{2,\nu}(\mathbb{R};H))$ admits a continuous extension to $L(H^{-1}_\nu(\mathbb{R};H))$.*

*Proof* We apply Proposition 9.2.9 to the case of $L_{2,\nu}(\mathbb{R};H)$ being the Hilbert space, $T = \partial_{t,\nu}$ and $B = S_\nu$. For this, it remains to prove that $T^{-1}S_\nu = S_\nu T^{-1}$. This, however, follows from the fact that $z\mapsto S(z) := (zM(z)+A)^{-1}$ is a material law and $S(\partial_{t,\nu}) = S_\nu$.

## **9.4 Initial Value Problems for Evolutionary Equations**

Let $H$ be a Hilbert space, $\mu\in\mathbb{R}$, $M:\mathbb{C}_{\operatorname{Re}>\mu}\to L(H)$ a material law, $\nu>\max\{s_{\mathrm b}(M),0\}$ and $A:\operatorname{dom}(A)\subseteq H\to H$ skew-selfadjoint. In this section we shall focus on the implementation of initial value problems for evolutionary equations. A priori there is no explicit initial condition implemented in the theory established in $L_{2,\nu}(\mathbb{R};H)$. Indeed, choosing $\nu>0$ we have only an implicit exponential decay condition at $-\infty$. For initial values at $0$, we would rather want to solve the following type of equation. In the situation of the previous section, for a given initial value $U_0\in H$ we seek to solve the initial value problem

$$\begin{cases} \left( \partial\_{t,\boldsymbol{\nu}} M(\partial\_{t,\boldsymbol{\nu}}) + A \right) U = 0 \quad \text{on } (0,\infty) \,, \\ U(0+) = U\_0. \end{cases} \tag{9.4}$$

In this generality the initial value problem cannot be solved. Indeed, for $U\in L_{2,\nu}(\mathbb{R};H)$ evaluation at $0$ is not well defined. A way to overcome this difficulty is to weaken the attainment of the initial value. For this, we specialise to the case when

$$M(\partial_{t,\nu}) = M_0 + \partial_{t,\nu}^{-1}M_1$$

with *M*0*, M*<sup>1</sup> ∈ *L(H )*.

We start with two lemmas, the second of which will also be useful in the next chapter.

**Lemma 9.4.1** *Let $H_0, H_1$ be Hilbert spaces and assume that $H_1\hookrightarrow H_0$ continuously and densely. Then $C_{\mathrm c}^\infty(\mathbb{R};H_1)\subseteq L_{2,\nu}(\mathbb{R};H_1)\cap H^1_\nu(\mathbb{R};H_0)$ is dense.*

*Proof* By Proposition 3.2.4, $C_{\mathrm c}^\infty(\mathbb{R};H_1)\subseteq H^1_\nu(\mathbb{R};H_1)$ is dense. Since the embedding $H^1_\nu(\mathbb{R};H_1)\hookrightarrow L_{2,\nu}(\mathbb{R};H_1)\cap H^1_\nu(\mathbb{R};H_0)$ is continuous, it thus suffices to show that this embedding also has dense range. For this, let $f\in L_{2,\nu}(\mathbb{R};H_1)\cap H^1_\nu(\mathbb{R};H_0)$. For $\varepsilon>0$ small enough, we define

$$f\_{\varepsilon} := (1 + \varepsilon \partial\_{\mathfrak{l}, \mathbb{U}})^{-1} f \in H\_{\mathbb{V}}^{\mathbb{L}}(\mathbb{R}; H\_{\mathbb{L}}) .$$

By Lemma 9.3.3(b), $f_\varepsilon \to f$ in $L_{2,\nu}(\mathbb{R};H_1)$ as $\varepsilon \to 0$. It remains to show that $\partial_{t,\nu} f_\varepsilon \to \partial_{t,\nu} f$ in $L_{2,\nu}(\mathbb{R};H_0)$ as $\varepsilon \to 0$. For this, by definition of $H_\nu^1(\mathbb{R};H_0)$, we find $g \in L_{2,\nu}(\mathbb{R};H_0)$ such that $f = \partial_{t,\nu}^{-1} g$. Using again Lemma 9.3.3(b), we infer

$$\partial_{t,\nu} f_\varepsilon = \partial_{t,\nu} (1 + \varepsilon \partial_{t,\nu})^{-1} f = (1 + \varepsilon \partial_{t,\nu})^{-1} g \to g = \partial_{t,\nu} f$$

in $L_{2,\nu}(\mathbb{R};H_0)$ as $\varepsilon \to 0$. This concludes the proof.

**Lemma 9.4.2** *Let $U_0 \in \operatorname{dom}(A)$, $U \in L_{2,\nu}(\mathbb{R};H)$ such that $M_0 U - \mathbb{1}_{[0,\infty)} M_0 U_0 \colon \mathbb{R} \to H^{-1}(A)$ is continuous, $\operatorname{spt} U \subseteq [0,\infty)$ and*

$$\begin{cases} \partial_{t,\nu} M_0 U + M_1 U + A U = 0 & \text{on } (0,\infty), \\ (M_0 U)(0+) = M_0 U_0 & \text{in } H^{-1}(A), \end{cases}$$

*where the first equality is meant in the sense that for all $\varphi \in H_\nu^1(\mathbb{R};H) \cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$ with $\operatorname{spt} \varphi \subseteq [0,\infty)$*

$$(\partial_{t,\nu} M_0 U + M_1 U + A U)(\varphi) = 0.$$

*Then $U - \mathbb{1}_{[0,\infty)} U_0 \in \operatorname{dom}\bigl(\overline{\partial_{t,\nu} M_0 + M_1 + A}\bigr)$ and*

$$\bigl(\overline{\partial_{t,\nu} M_0 + M_1 + A}\bigr)(U - \mathbb{1}_{[0,\infty)} U_0) = -(M_1 + A) U_0 \mathbb{1}_{[0,\infty)}.$$

*Proof* We apply Theorem 9.3.2 for showing the claim; that is, we show that

$$\bigl((\partial_{t,\nu} M_0 + M_1)(U - \mathbb{1}_{[0,\infty)} U_0) + A(U - \mathbb{1}_{[0,\infty)} U_0)\bigr)(\psi) = \bigl(-(M_1 + A) U_0 \mathbb{1}_{[0,\infty)}\bigr)(\psi)$$

for each $\psi \in H_\nu^1(\mathbb{R};H) \cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$. Note that by continuity (use Lemma 9.4.1 with $H_0 = H$ and $H_1 = \operatorname{dom}(A)$), it suffices to show the equality for $\psi \in C_c^\infty(\mathbb{R};\operatorname{dom}(A))$. So, let $\psi \in C_c^\infty(\mathbb{R};\operatorname{dom}(A))$ and for $n \in \mathbb{N}$ define the function $\varphi_n \in H_\nu^1(\mathbb{R})$ by

$$\varphi_n(t) := \begin{cases} 0 & \text{if } t \leqslant 0, \\ nt & \text{if } t \in (0,1/n), \\ 1 & \text{if } t \geqslant 1/n. \end{cases}$$

Note that $\varphi_n \psi \in H_\nu^1(\mathbb{R};H) \cap L_{2,\nu}(\mathbb{R};\operatorname{dom}(A))$ and $\operatorname{spt}(\varphi_n \psi) \subseteq [0,\infty)$ for each $n \in \mathbb{N}$. Thus, we obtain

$$\begin{aligned}
&\bigl((\partial_{t,\nu} M_0 + M_1 + A)(U - \mathbb{1}_{[0,\infty)} U_0)\bigr)(\psi) \\
&= \bigl((\partial_{t,\nu} M_0 + M_1 + A) U\bigr)(\psi) - \bigl((\partial_{t,\nu} M_0 + M_1 + A)(\mathbb{1}_{[0,\infty)} U_0)\bigr)(\psi) \\
&= \bigl((\partial_{t,\nu} M_0 + M_1 + A) U\bigr)(\varphi_n \psi) + \bigl((\partial_{t,\nu} M_0 + M_1 + A) U\bigr)((1-\varphi_n)\psi) \\
&\quad - \bigl((\partial_{t,\nu} M_0 + M_1 + A)(\mathbb{1}_{[0,\infty)} U_0)\bigr)(\psi) \\
&= \bigl((\partial_{t,\nu} M_0 + M_1 + A) U\bigr)((1-\varphi_n)\psi) - (\delta_0 M_0 U_0)(\psi) - \bigl((M_1 + A)(\mathbb{1}_{[0,\infty)} U_0)\bigr)(\psi)
\end{aligned}$$

for each $n \in \mathbb{N}$. Thus, the claim follows if we can show that

$$\bigl((\partial_{t,\nu} M_0 + M_1 + A) U\bigr)((1-\varphi_n)\psi) - (\delta_0 M_0 U_0)(\psi) \to 0 \quad (n \to \infty).$$

For doing so, we first observe that for all $n \in \mathbb{N}$ we have

$$(\delta_0 M_0 U_0)(\psi) = (\delta_0 M_0 U_0)((1-\varphi_n)\psi) = (\partial_{t,\nu} M_0 \mathbb{1}_{[0,\infty)} U_0)((1-\varphi_n)\psi),$$

since $\varphi_n(0) = 0$. Moreover,

$$\bigl((M_1 + A) U\bigr)((1-\varphi_n)\psi) = \bigl\langle U, (1-\varphi_n)(M_1^* + A^*)\psi \bigr\rangle_{L_{2,\nu}} \to 0 \quad (n \to \infty),$$

since $1 - \varphi_n(\mathrm{m}) \to \mathbb{1}_{(-\infty,0]}(\mathrm{m})$ strongly in $L_{2,\nu}(\mathbb{R};H)$ and $\operatorname{spt} U \subseteq [0,\infty)$. Thus, it remains to show that

$$(\partial_{t,\nu} M_0 (U - \mathbb{1}_{[0,\infty)} U_0))((1-\varphi_n)\psi) \to 0 \quad (n \to \infty).$$

We compute

$$\begin{aligned}
&\bigl(\partial_{t,\nu} M_0 (U - \mathbb{1}_{[0,\infty)} U_0)\bigr)((1-\varphi_n)\psi) \\
&= \bigl\langle M_0(U - \mathbb{1}_{[0,\infty)} U_0), \partial_{t,\nu}^{*}((1-\varphi_n)\psi) \bigr\rangle_{L_{2,\nu}} \\
&= \bigl\langle M_0(U - \mathbb{1}_{[0,\infty)} U_0), n \mathbb{1}_{[0,1/n]}\psi \bigr\rangle_{L_{2,\nu}} - \bigl\langle M_0(U - \mathbb{1}_{[0,\infty)} U_0), (1-\varphi_n)\partial_{t,\nu}\psi \bigr\rangle_{L_{2,\nu}} \\
&\quad + 2\nu \bigl\langle M_0(U - \mathbb{1}_{[0,\infty)} U_0), (1-\varphi_n)\psi \bigr\rangle_{L_{2,\nu}}.
\end{aligned}$$

Note that the last two terms on the right-hand side tend to $0$ as $n \to \infty$ since, as above, $1 - \varphi_n(\mathrm{m}) \to \mathbb{1}_{(-\infty,0]}(\mathrm{m})$ strongly in $L_{2,\nu}(\mathbb{R};H)$ and $\operatorname{spt} U \subseteq [0,\infty)$. For the first term, we observe that

$$\begin{aligned}
&\bigl|\langle M_0(U - \mathbb{1}_{[0,\infty)} U_0), n \mathbb{1}_{[0,1/n]}\psi \rangle_{L_{2,\nu}}\bigr| \\
&\leqslant n \int_0^{1/n} \bigl|\langle M_0(U(t) - U_0), \psi(t) \rangle_H\bigr| \, \mathrm{e}^{-2\nu t} \, \mathrm{d}t \\
&\leqslant n \int_0^{1/n} \bigl\| M_0(U(t) - U_0) \bigr\|_{H^{-1}(A)} \, \bigl\|\psi(t)\bigr\|_{\operatorname{dom}(A)} \, \mathrm{e}^{-2\nu t} \, \mathrm{d}t \to 0 \quad (n \to \infty),
\end{aligned}$$

by the fundamental theorem of calculus, since $(M_0 U)(t) \to M_0 U_0$ in $H^{-1}(A)$ as $t \to 0+$.

Assume now additionally that there exists *c >* 0 such that

$$
\operatorname{Re}(z M_0 + M_1) \geqslant c \quad (z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}).
$$

Then we can actually prove a stronger result than in the previous lemma.

**Theorem 9.4.3** *Let $U_0 \in \operatorname{dom}(A)$, $U \in L_{2,\nu}(\mathbb{R};H)$. Then the following statements are equivalent:*

(i) $M_0 U - \mathbb{1}_{[0,\infty)} M_0 U_0 \colon \mathbb{R} \to H^{-1}(A)$ *is continuous,* $\operatorname{spt} U \subseteq [0,\infty)$ *and*

$$\begin{cases} \partial_{t,\nu} M_0 U + M_1 U + A U = 0 & \text{on } (0,\infty), \\ M_0 U(0+) = M_0 U_0 & \text{in } H^{-1}(A), \end{cases}$$

*where the first equality is meant as in Lemma 9.4.2.*

(ii) $U - \mathbb{1}_{[0,\infty)} U_0 \in \operatorname{dom}\bigl(\overline{\partial_{t,\nu} M_0 + M_1 + A}\bigr)$ *and we have*

$$\bigl(\overline{\partial_{t,\nu} M_0 + M_1 + A}\bigr)(U - \mathbb{1}_{[0,\infty)} U_0) = -(M_1 + A) \mathbb{1}_{[0,\infty)} U_0.$$

(iii) $U = S_\nu \delta_0 M_0 U_0$, *with* $S_\nu \in L(H_\nu^{-1}(\mathbb{R};H))$ *as in Theorem 9.3.4.*

*Moreover, in either case we have $M_0 U - \mathbb{1}_{[0,\infty)} M_0 U_0 \in H_\nu^1(\mathbb{R};H^{-1}(A))$.*

*Proof* (i)⇒(ii): This was shown in Lemma 9.4.2. (ii)⇒(iii): We have that

$$U - \mathbb{1}_{[0,\infty)} U_0 = -S_\nu\bigl((M_1 + A) \mathbb{1}_{[0,\infty)} U_0\bigr).$$

Applying $\partial_{t,\nu}^{-1}$ to both sides of this equality we infer that

$$\begin{aligned}
\partial_{t,\nu}^{-1}(U - \mathbb{1}_{[0,\infty)} U_0) &= -S_\nu\bigl((M_1 + A) \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr) \\
&= -\partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0 + S_\nu\bigl(\partial_{t,\nu} M_0 \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr),
\end{aligned}$$

which gives

$$
\partial_{t,\nu}^{-1} U = S_\nu\bigl(\partial_{t,\nu} M_0 \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr) = S_\nu\bigl(M_0 \mathbb{1}_{[0,\infty)} U_0\bigr).
$$

Applying $\partial_{t,\nu}$ to both sides and taking into account Theorem 9.3.4, we derive the claim.

(iii)⇒(ii): We do the argument in the proof of (ii)⇒(iii) backwards. First, we apply $\partial_{t,\nu}^{-1}$ to $U = S_\nu(\delta_0 M_0 U_0)$, which yields

$$
\partial_{t,\nu}^{-1} U = \partial_{t,\nu}^{-1} S_\nu(\delta_0 M_0 U_0) = S_\nu\bigl(M_0 \mathbb{1}_{[0,\infty)} U_0\bigr) = S_\nu\bigl(\partial_{t,\nu} M_0 \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr).
$$

Thus,

$$\begin{aligned}
\partial_{t,\nu}^{-1}(U - \mathbb{1}_{[0,\infty)} U_0) &= S_\nu\bigl(\partial_{t,\nu} M_0 \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr) - \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0 \\
&= -S_\nu\bigl((M_1 + A) \partial_{t,\nu}^{-1} \mathbb{1}_{[0,\infty)} U_0\bigr).
\end{aligned}$$

An application of $\partial_{t,\nu}$ yields the claim.

(ii),(iii)⇒(i): Since $U = S_\nu(\delta_0 M_0 U_0)$, we derive that

$$(\partial_{t,\nu} M_0 + M_1 + A) U = \delta_0 M_0 U_0,$$

which in particular yields $(\partial_{t,\nu} M_0 + M_1 + A) U = 0$ on $(0,\infty)$. By (ii) we infer

$$U - \mathbb{1}\_{[0,\infty)} U\_0 = -S\_\nu((M\_1 + A)\mathbb{1}\_{[0,\infty)} U\_0),$$

which shows that $\operatorname{spt}(U - \mathbb{1}_{[0,\infty)} U_0) \subseteq [0,\infty)$ due to causality and hence $\operatorname{spt} U \subseteq [0,\infty)$. It remains to show that $M_0(U - \mathbb{1}_{[0,\infty)} U_0) \in H_\nu^1(\mathbb{R};H^{-1}(A))$, since this would imply the continuity of $M_0(U - \mathbb{1}_{[0,\infty)} U_0)$ with values in $H^{-1}(A)$ by Theorem 4.1.2 and thus

$$M_0(U - \mathbb{1}_{[0,\infty)} U_0)(0+) = M_0(U - \mathbb{1}_{[0,\infty)} U_0)(0-) = 0 \quad \text{in } H^{-1}(A),$$

since the function is supported on $[0,\infty)$ only. We compute

$$\begin{aligned}
&M_0(U - \mathbb{1}_{[0,\infty)} U_0) \\
&= -M_0 S_\nu\bigl((M_1 + A) \mathbb{1}_{[0,\infty)} U_0\bigr) \\
&= -\partial_{t,\nu} M_0 S_\nu\bigl(\partial_{t,\nu}^{-1}(M_1 + A) \mathbb{1}_{[0,\infty)} U_0\bigr) \\
&= -\partial_{t,\nu}^{-1}(M_1 + A) \mathbb{1}_{[0,\infty)} U_0 + (M_1 + A) S_\nu\bigl(\partial_{t,\nu}^{-1}(M_1 + A) \mathbb{1}_{[0,\infty)} U_0\bigr),
\end{aligned}$$

and since the right-hand side belongs to $H_\nu^1(\mathbb{R};H^{-1}(A))$, the assertion follows.

*Remark 9.4.4* By Theorem 9.3.4, we always have $U = S_\nu \delta_0 M_0 U_0 \in H_\nu^{-1}(\mathbb{R};H)$. This then serves as our generalisation of the initial value problem even if $U_0 \notin \operatorname{dom}(A)$.

The upshot of Theorem 9.4.3(ii) is that, provided *U*<sup>0</sup> ∈ dom*(A)*, we can reformulate initial value problems with the help of our theory as evolutionary equations with *L*2*,ν*-right-hand sides. Thus, we do not need the detour to extrapolation spaces for being able to solve the initial value problem (9.4) (with an adapted initial condition as in (i)) in this situation.

Also note that it may seem that $U$ depends on the 'full information' of $U_0$, as indicated in (ii). In fact, $U$ only depends on the part of $U_0$ orthogonal to the kernel of $M_0$, as seen in (iii). We conclude this chapter with two examples; the first one is the heat equation, the second considers Maxwell's equations.

*Example 9.4.5 (Initial Value Problems for the Heat Equation)* We recall the setting for the heat equation outlined in Theorem 6.2.4. This time, we will use homogeneous Dirichlet boundary conditions for the heat distribution $\theta$. Let $\Omega \subseteq \mathbb{R}^d$ be open and bounded, $a \in L_\infty(\Omega)^{d\times d}$ with $\operatorname{Re} a(x) \geqslant c > 0$ for a.e. $x \in \Omega$ for some $c > 0$. In this case, we have

$$M_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix}, \quad A = \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}_0 & 0 \end{pmatrix}.$$

For the unknown heat distribution $\theta$ we ask it to have the initial value $\theta_0 \in \operatorname{dom}(\operatorname{grad}_0)$. Let $\nu > 0$ and let $V \in L_{2,\nu}\bigl(\mathbb{R}; L_2(\Omega) \times L_2(\Omega)^d\bigr)$ be the unique solution of

$$\bigl(\overline{\partial_{t,\nu} M_0 + M_1 + A}\bigr) V = -(M_1 + A) \mathbb{1}_{[0,\infty)} \begin{pmatrix} \theta_0 \\ 0 \end{pmatrix} = -\mathbb{1}_{[0,\infty)} \begin{pmatrix} 0 \\ \operatorname{grad}_0 \theta_0 \end{pmatrix}.$$

Then $(\theta, q) := U := V + \mathbb{1}_{[0,\infty)} \binom{\theta_0}{0} \in L_{2,\nu}\bigl(\mathbb{R}; L_2(\Omega) \times L_2(\Omega)^d\bigr)$ satisfies (ii) from Theorem 9.4.3. Hence, on $(0,\infty)$ we have

$$
\begin{pmatrix} \partial_{t,\nu} \theta \\ a^{-1} q \end{pmatrix} + \begin{pmatrix} \operatorname{div} q \\ \operatorname{grad}_0 \theta \end{pmatrix} = 0
$$

and the initial value is attained in the sense that

$$\bigl(M_0(\theta, q)\bigr)(0+) = \begin{pmatrix} \theta(0+) \\ 0 \end{pmatrix} = \begin{pmatrix} \theta_0 \\ 0 \end{pmatrix} \quad \text{in} \quad H^{-1}(A) = H^{-1}(\operatorname{grad}_0) \times H^{-1}(\operatorname{div}),$$

which follows from Proposition 9.2.5 where we computed $H^{-1}(A)$. Let us have a closer look at the attainment of the initial value. As a particular consequence of strong convergence in $H^{-1}(\operatorname{grad}_0)$, we obtain for all $\phi \in \operatorname{dom}(\operatorname{div})$

$$\langle \theta(t), \operatorname{div} \phi \rangle \to \langle \theta\_0, \operatorname{div} \phi \rangle$$

as $t \to 0+$. Since $\operatorname{grad}_0$ is one-to-one and has closed range (see Corollary 11.3.2), we see that $\operatorname{div}$ has dense and closed range. Hence $\operatorname{div}$ is onto. This implies that for all $\psi \in L_2(\Omega)$

$$
\langle \theta(t), \psi \rangle \to \langle \theta\_0, \psi \rangle \quad (t \to 0+).
$$

We deduce that the initial value is attained weakly. This might seem a bit unsatisfactory; however, we shall see stronger assertions for more particular cases in the next chapter.

Next, we have a look at Maxwell's equations.

*Example 9.4.6 (Initial Value Problems for Maxwell's Equations)* We briefly recall the situation of Maxwell's equations from Theorem 6.2.8. Let $\varepsilon, \mu, \sigma \colon \Omega \to \mathbb{R}^{3\times3}$ satisfy the assumptions in Theorem 6.2.8 and let $(E_0, H_0) \in \operatorname{dom}(\operatorname{curl}_0) \times \operatorname{dom}(\operatorname{curl})$. Let $(\widehat{E}, \widehat{H}) \in L_{2,\nu}(\mathbb{R}; L_2(\Omega)^6)$ satisfy

$$\begin{aligned}
&\left(\overline{\partial_{t,\nu}\begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix} + \begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}}\right) \begin{pmatrix} \widehat{E} \\ \widehat{H} \end{pmatrix} \\
&= -\left(\begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl}_0 & 0 \end{pmatrix}\right) \mathbb{1}_{[0,\infty)} \begin{pmatrix} E_0 \\ H_0 \end{pmatrix} = \mathbb{1}_{[0,\infty)} \begin{pmatrix} -\sigma E_0 + \operatorname{curl} H_0 \\ -\operatorname{curl}_0 E_0 \end{pmatrix}.
\end{aligned}$$

Then, as we have argued for the heat equation,

$$
\begin{pmatrix} E \\ H \end{pmatrix} := \begin{pmatrix} \widehat{E} \\ \widehat{H} \end{pmatrix} + \mathbb{1}\_{[0,\infty)} \begin{pmatrix} E\_0 \\ H\_0 \end{pmatrix},
$$

satisfies a corresponding initial value problem. We note here that although often the second component in the right-hand side is set to 0, as there are 'no magnetic monopoles', in the theory of evolutionary equations the second component of the right-hand side does appear as an initial value in disguise.

## **9.5 Comments**

There are many ways to define spaces generalising the action of an operator to a bigger class of elements, both in concrete settings and in abstract situations; see e.g. [22, 38]. Simultaneous extrapolation spaces for commuting operators have also been considered; see e.g. [77, 93].

These spaces are particularly useful for formulating initial value problems as was exemplified above; see also the concluding chapter of [84] for more insight. Yet there is more to it as one can in fact generalise the equation under consideration or even force the attainment of the initial value in a stronger sense. These issues, however, imply that either the initial value is attained in a much weaker sense, or that there are other structural assumptions needed to be imposed on the material law *M* (as well as on the operator *A*).

In fact, quite recently, it was established that a particular proper subclass of evolutionary equations can be put into the framework of *C*0-semigroups. The conditions required to allow for statements in this direction are, on the other hand, rather hard to check in practice; see [116, 120].

## **Exercises**

**Exercise 9.1** Let $H_0$ be a Hilbert space, $T \in L(H_0)$. Compute $H^{-1}(T)$ and $H^{-1}(T^*)$.

**Exercise 9.2** Let $H_0, H_1$ be Hilbert spaces such that $H_0 \hookrightarrow H_1$ is dense and continuous. Prove that the dual embedding $H_1^* \hookrightarrow H_0^*$ is dense and continuous as well.

**Exercise 9.3** Prove the following statement, which generalises Proposition 9.2.9 from above: Let $H_0$ be a Hilbert space, $A \in L(H_0)$. Assume that $T \colon \operatorname{dom}(T) \subseteq H_0 \to H_0$ is densely defined and closed with $0 \in \rho(T)$ and $T^{-1}A = AT^{-1} + T^{-1}BT^{-1}$ for some $B \in L(H_0)$. Then $A$ admits a unique continuous extension $A \in L(H^{-1}(T^*))$.

**Exercise 9.4** Let $H_0$ be a Hilbert space, $N \colon \operatorname{dom}(N) \subseteq H_0 \to H_0$ a *normal* operator; that is, $N$ is densely defined and closed and $NN^* = N^*N$. Show that $H^{-1}(N) \cong H^{-1}(N^*)$ and deduce $H^{-1}(\partial_{t,\nu}) \cong H^{-1}(\partial_{t,\nu}^*)$.

**Exercise 9.5** Prove Proposition 9.2.8.

**Exercise 9.6** Let $H_0$ be a Hilbert space, $n \in \mathbb{N}$ and $T \colon \operatorname{dom}(T) \subseteq H_0 \to H_0$ a densely defined, closed linear operator with $0 \in \rho(T)$. We define $H^n(T) := \operatorname{dom}(T^n)$ and $H^{-n}(T) := H^{-1}(T^n)$. Show that for all $k \in \mathbb{N}$ and $\ell \in \mathbb{Z}$ we have that $H^{k+\ell}(T) \hookrightarrow H^{\ell}(T)$ continuously and densely. Also show that $D := \bigcap_{n \in \mathbb{N}} \operatorname{dom}(T^n)$ is dense in $H^{\ell}(T)$ and dense in $H^{-\ell}(T^*)$ for all $\ell \in \mathbb{N}$, and that $T|_D$ can be continuously extended to a topological isomorphism $H^{\ell}(T) \to H^{\ell-1}(T)$ and to an isomorphism $H^{-\ell+1}(T^*) \to H^{-\ell}(T^*)$ for each $\ell \in \mathbb{N}$.

**Exercise 9.7** Prove Lemma 9.3.3.

Hint: Prove a similar equality with $\partial_{t,\nu}^{-1}$ formally replaced by $z \in \partial B(r,r) \subseteq \mathbb{C}$ and deduce the assertion with the help of Theorem 5.2.3.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 10 Differential Algebraic Equations**

Let $H$ be a Hilbert space and $\nu \in \mathbb{R}$. We saw in the previous chapter how initial value problems can be formulated within the framework of evolutionary equations. More precisely, we have studied problems of the form

$$\begin{cases} \left(\partial\_{t,\nu}M\_0 + M\_1 + A\right)U = 0 \quad \text{on } (0,\infty),\\ M\_0U(0+) = M\_0U\_0 \end{cases} \tag{10.1}$$

for $U_0 \in H$, $M_0, M_1 \in L(H)$ and $A \colon \operatorname{dom}(A) \subseteq H \to H$ skew-selfadjoint; that is, we have considered material laws of the form

$$M(z) := M_0 + z^{-1} M_1 \quad (z \in \mathbb{C} \setminus \{0\}).$$

Here, the initial value is attained in a weak sense as an equality in the extrapolation space $H^{-1}(A)$. The first line is also meant in a weak sense since the left-hand side turned out to be a functional in $H_\nu^{-1}(\mathbb{R};H) \cap L_{2,\nu}(\mathbb{R};H^{-1}(A))$. In Theorem 9.4.3 it was shown that the latter problem can be rewritten as

$$(\partial_{t,\nu} M_0 + M_1 + A) U = \delta_0 M_0 U_0.$$

In this chapter we aim to inspect initial value problems a little closer but in the particularly simple case when *A* = 0. However, we want to impose the initial condition for *U* and not just *M*0*U*. Thus, we want to deal with the problem

$$\begin{cases} (\partial_{t,\nu} M_0 + M_1) U = 0 \quad \text{on } (0,\infty), \\ U(0+) = U_0 \end{cases} \tag{10.2}$$


for two bounded operators $M_0, M_1$ and an initial value $U_0 \in H$. This class of differential equations is known as *differential algebraic equations* since the operator $M_0$ is allowed to have a non-trivial kernel. Thus, (10.2) is a coupled problem of a differential equation (on $(\ker M_0)^\perp$) and an algebraic equation (on $\ker M_0$). We begin by treating these equations in the finite-dimensional case; that is, $H = \mathbb{C}^n$ and $M_0, M_1 \in \mathbb{C}^{n \times n}$ for some $n \in \mathbb{N}$.
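In finite dimensions the splitting along $\ker M_0$ and $(\ker M_0)^\perp$ can be computed directly. The following small sketch (using numpy; the matrix $M_0$ is the one from Example 10.1.3 below) obtains orthonormal bases of both subspaces from the singular value decomposition of $M_0$:

```python
import numpy as np

M0 = np.array([[1.0, 1.0], [0.0, 0.0]])

# SVD: the right singular vectors belonging to zero singular values
# span ker(M0); the remaining ones span (ker M0)^perp.
U_, s, Vh = np.linalg.svd(M0)
rank = int(np.sum(s > 1e-12))
coker_basis = Vh[:rank].T      # columns span (ker M0)^perp
ker_basis = Vh[rank:].T        # columns span ker(M0)

print(ker_basis.ravel())       # proportional to (1, -1), up to sign
assert np.allclose(M0 @ ker_basis, 0)
```

On $(\ker M_0)^\perp$ the equation is differential, on $\ker M_0$ it is purely algebraic.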

## **10.1 The Finite-Dimensional Case**

Throughout this section let $n \in \mathbb{N}$ and $M_0, M_1 \in \mathbb{C}^{n \times n}$.

**Definition** We define the *spectrum of the matrix pair $(M_0, M_1)$* by

$$\sigma(M_0, M_1) := \{ z \in \mathbb{C} \; ; \; \det(zM_0 + M_1) = 0 \},$$

and the *resolvent set of the matrix pair (M*0*, M*1*)* by

$$
\rho(M\_0, M\_1) := \mathbb{C} \backslash \sigma(M\_0, M\_1).
$$

*Remark 10.1.1*


In contrast to the case of the spectrum of a single matrix, it may happen that $\sigma(M_0, M_1) = \mathbb{C}$ (for example, we can choose $M_0 = 0$ and $M_1$ singular). More precisely, we have the following result.

**Lemma 10.1.2** *The set $\sigma(M_0, M_1)$ is either finite or equals the whole complex plane $\mathbb{C}$. If $\sigma(M_0, M_1)$ is finite then $\operatorname{card}(\sigma(M_0, M_1)) \leqslant n$.*

*Proof* The function $z \mapsto \det(zM_0 + M_1)$ is a polynomial of degree at most $n$. If it is identically zero, then $\sigma(M_0, M_1) = \mathbb{C}$; otherwise $\operatorname{card}(\sigma(M_0, M_1)) \leqslant n$.

**Definition** The matrix pair $(M_0, M_1)$ is called *regular* if $\sigma(M_0, M_1) \neq \mathbb{C}$.
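For a concrete pair, $\sigma(M_0, M_1)$ can be computed numerically: since $\det(zM_0 + M_1)$ is a polynomial of degree at most $n$, sampling it at $n+1$ points determines its coefficients. A small sketch (numpy; the helper `pair_spectrum` is ours, not from the text, and uses the pair of Example 10.1.3 below):

```python
import numpy as np

def pair_spectrum(M0, M1, tol=1e-9):
    """Spectrum of the matrix pair (M0, M1): the zeros of det(z*M0 + M1).

    Returns (regular, sigma); sigma is None when the pair is not regular,
    i.e. when det(z*M0 + M1) vanishes identically.
    """
    n = M0.shape[0]
    # det(z*M0 + M1) has degree <= n, so n + 1 samples determine it.
    zs = np.arange(n + 1, dtype=float)
    dets = np.array([np.linalg.det(z * M0 + M1) for z in zs])
    coeffs = np.polyfit(zs, dets, n)          # highest power first
    if np.allclose(coeffs, 0, atol=tol):
        return False, None                    # sigma(M0, M1) = C
    k = int(np.argmax(np.abs(coeffs) > tol))  # drop numerically zero leading coefficients
    return True, np.roots(coeffs[k:])

# Pair from Example 10.1.3: det(z*M0 + M1) = z + 1, hence sigma = {-1}.
M0 = np.array([[1.0, 1.0], [0.0, 0.0]])
M1 = np.eye(2)
regular, sigma = pair_spectrum(M0, M1)
print(regular, sigma)

# M0 = 0 with M1 singular: not regular (cf. Remark 10.1.1).
print(pair_spectrum(np.zeros((2, 2)), np.diag([1.0, 0.0]))[0])
```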

The main problem in solving an initial value problem of the form (10.2) is that one cannot expect a solution for each initial value $U_0 \in \mathbb{C}^n$, as the following simple example shows.

*Example 10.1.3* Let $M_0 = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$, $M_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ and let $U_0 \in \mathbb{C}^2$. We assume that there exists a solution $U \colon \mathbb{R}_{\geqslant 0} \to \mathbb{C}^2$ satisfying (10.2); that is,

$$U\_1'(t) + U\_2'(t) + U\_1(t) = 0 \quad (t > 0),$$

$$U\_2(t) = 0 \quad (t > 0),$$

$$U(0+) = U\_0.$$

The second and third equations yield that the second coordinate of $U_0$ has to be zero. Then, for $U_0 = (x, 0) \in \mathbb{C}^2$ the unique solution of the above problem is given by

$$U(t) = \bigl(U_1(t), U_2(t)\bigr) = (x\,\mathrm{e}^{-t}, 0) \quad (t \geqslant 0).$$

**Definition** We call an initial value $U_0 \in \mathbb{C}^n$ *consistent* for (10.2) if there exist $\nu > 0$ and $U \in C(\mathbb{R}_{\geqslant 0}; \mathbb{C}^n) \cap L_{2,\nu}(\mathbb{R}_{\geqslant 0}; \mathbb{C}^n)$ such that (10.2) holds. We denote the set of all consistent initial values for (10.2) by

$$\text{IV}(M\_0, M\_1) := \left\{ U\_0 \in \mathbb{C}^n \; ; \; U\_0 \text{ consistent} \right\} \; .$$

*Remark 10.1.4* It is obvious that $\mathrm{IV}(M_0, M_1)$ is a subspace of $\mathbb{C}^n$. In particular, $0 \in \mathrm{IV}(M_0, M_1)$.

It is now our goal to determine the space $\mathrm{IV}(M_0, M_1)$. One possibility for doing so uses the so-called *quasi-Weierstraß normal form*.

**Proposition 10.1.5 (Quasi-Weierstraß Normal Form)** *Assume that $(M_0, M_1)$ is regular. Then there exist invertible matrices $P, Q \in \mathbb{C}^{n \times n}$ such that*

$$PM_0Q = \begin{pmatrix} 1 & 0 \\ 0 & N \end{pmatrix}, \quad PM_1Q = \begin{pmatrix} C & 0 \\ 0 & 1 \end{pmatrix},$$

*where $C \in \mathbb{C}^{k \times k}$ and $N \in \mathbb{C}^{(n-k) \times (n-k)}$ for some $k \in \{0, \dots, n\}$. Moreover, the matrix $N$ is nilpotent; that is, there exists $\ell \in \mathbb{N}$ such that $N^\ell = 0$.*

*Proof* Since $(M_0, M_1)$ is regular we find $\lambda \in \mathbb{C}$ such that $\lambda M_0 + M_1$ is invertible. We set $P_1 := (\lambda M_0 + M_1)^{-1}$ and obtain

$$\begin{aligned} M\_{0,1} &:= P\_1 M\_0 = (\lambda M\_0 + M\_1)^{-1} M\_0, \\ M\_{1,1} &:= P\_1 M\_1 = \left(\lambda M\_0 + M\_1\right)^{-1} M\_1 = 1 - \lambda M\_{0,1}. \end{aligned}$$

Let now $P_2 \in \mathbb{C}^{n \times n}$ be such that

$$M_{0,2} := P_2 M_{0,1} P_2^{-1} = \begin{pmatrix} J & 0 \\ 0 & \widetilde{N} \end{pmatrix}$$

for some invertible matrix $J \in \mathbb{C}^{k \times k}$ and a nilpotent matrix $\widetilde{N} \in \mathbb{C}^{(n-k) \times (n-k)}$ (use the Jordan normal form of $M_{0,1}$ here). Then

$$M\_{1,2} := P\_2 M\_{1,1} P\_2^{-1} = \begin{pmatrix} 1 - \lambda J & 0 \\ 0 & 1 - \lambda \widetilde{N} \end{pmatrix}.$$

Now, by the nilpotency of $\widetilde{N}$, the matrix $1 - \lambda \widetilde{N}$ is invertible by the Neumann series. We set

$$P\_3 := \begin{pmatrix} J^{-1} & 0 \\ 0 & (1 - \lambda \widetilde{N})^{-1} \end{pmatrix}$$

and obtain

$$P_3 M_{0,2} = \begin{pmatrix} 1 & 0 \\ 0 & (1 - \lambda \widetilde{N})^{-1} \widetilde{N} \end{pmatrix}, \quad P_3 M_{1,2} = \begin{pmatrix} J^{-1} - \lambda & 0 \\ 0 & 1 \end{pmatrix}.$$

Note that $(1 - \lambda \widetilde{N})^{-1} \widetilde{N}$ is nilpotent, since the matrices commute and $\widetilde{N}$ is nilpotent. Thus, the assertion follows with $N := (1 - \lambda \widetilde{N})^{-1} \widetilde{N}$, $C := J^{-1} - \lambda$, $P := P_3 P_2 P_1$, and $Q := P_2^{-1}$.
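The first step of this construction is easy to check numerically. A sketch (our own, numpy; pair from Example 10.1.3 with $\lambda = 1$) verifying the identity $M_{1,1} = 1 - \lambda M_{0,1}$ from the proof:

```python
import numpy as np

# Pair from Example 10.1.3; lambda = 1 makes lambda*M0 + M1 invertible.
M0 = np.array([[1.0, 1.0], [0.0, 0.0]])
M1 = np.eye(2)
lam = 1.0
P1 = np.linalg.inv(lam * M0 + M1)      # P1 = (lambda*M0 + M1)^{-1}

M01 = P1 @ M0                          # M_{0,1} in the proof
M11 = P1 @ M1                          # M_{1,1} in the proof

# Key identity used in the proof: M_{1,1} = 1 - lambda * M_{0,1}.
assert np.allclose(M11, np.eye(2) - lam * M01)
print(M01)
```

The remaining steps (Jordan splitting of $M_{0,1}$ into an invertible block $J$ and a nilpotent block $\widetilde{N}$) proceed as in the proof.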

It is clear that the matrices *P*, *Q*, *C* and *N* in the previous proposition are not uniquely determined by *M*<sup>0</sup> and *M*1. However, the size of *N* and *C* as well as the degree of nilpotency of *N* are determined by *M*<sup>0</sup> and *M*<sup>1</sup> as the following proposition shows.

**Proposition 10.1.6** *Let $P, Q \in \mathbb{C}^{n \times n}$ be invertible such that*

$$PM_0Q = \begin{pmatrix} 1 & 0 \\ 0 & N \end{pmatrix}, \quad PM_1Q = \begin{pmatrix} C & 0 \\ 0 & 1 \end{pmatrix},$$

*where $C \in \mathbb{C}^{k \times k}$, $N \in \mathbb{C}^{(n-k) \times (n-k)}$ for some $k \in \{0, \dots, n\}$, and $N$ is nilpotent. Then $(M_0, M_1)$ is regular and*

(a) *k is the degree of the polynomial z* → det*(zM*<sup>0</sup> + *M*1*).* (b) *<sup>N</sup>* <sup>=</sup> <sup>0</sup> *if and only if*

$$\sup\_{|z| \ge r} \left\| z^{-\ell+1} (zM\_0 + M\_1)^{-1} \right\| < \infty$$

*for one (or equivalently all) r >* 0 *such that B (*0*, r)* ⊇ *σ (M*0*, M*1*).*

*Proof* First, note that

$$\det(zM_0 + M_1) = \frac{1}{\det P \det Q} \det\begin{pmatrix} z + C & 0 \\ 0 & zN + 1 \end{pmatrix} = \frac{1}{\det P \det Q} \det(z + C)$$

for all $z \in \mathbb{C}$, where we used that $\det(zN + 1) = 1$ by the nilpotency of $N$. Hence, $(M_0, M_1)$ is regular and

$$k = \deg \det((\cdot) + C) = \deg \det((\cdot)M_0 + M_1),$$

which shows (a). Moreover, we have $\rho(M_0, M_1) = \rho(-C)$ and

$$(zM_0 + M_1)^{-1} = Q \begin{pmatrix} (z+C)^{-1} & 0 \\ 0 & (zN+1)^{-1} \end{pmatrix} P \quad (z \in \rho(M_0, M_1)),$$

and hence, for $r > 0$ with $B(0,r) \supseteq \sigma(M_0, M_1)$ we have

$$\left\|(zM_0 + M_1)^{-1}\right\| \leqslant K_1 \left\|(zN + 1)^{-1}\right\| \quad (|z| \geqslant r),$$

for some $K_1 \geqslant 0$, since $\sup_{|z| \geqslant r} \left\|(z + C)^{-1}\right\| < \infty$. Now let $\ell \in \mathbb{N}$ be such that $N^{\ell} = 0$. Then

$$\left\| \left( zN + 1 \right)^{-1} \right\| = \left\| \sum\_{k=0}^{\ell - 1} (-1)^k z^k N^k \right\| \leqslant K\_2 \left| z \right|^{\ell - 1} \quad (|z| \geqslant r),$$

for some constant $K_2 \geqslant 0$ and thus,

$$\left\| \left( zM\_0 + M\_1 \right)^{-1} \right\| \leqslant K\_1 K\_2 \left| z \right|^{\ell - 1} \quad (|z| \geqslant r).$$

Assume on the other hand that

$$\sup\_{|z| \ge r} \left\| z^{-\ell+1} (zM\_0 + M\_1)^{-1} \right\| < \infty$$

for some $\ell \in \mathbb{N}$ and $r > 0$ with $\sigma(M_0, M_1) \subseteq B(0,r)$. Then there exist $\widetilde{K}_1, \widetilde{K}_2 \geqslant 0$ such that

$$\left\|(zN+1)^{-1}\right\| \leqslant \left\|\begin{pmatrix} (z+C)^{-1} & 0\\ 0 & (zN+1)^{-1} \end{pmatrix}\right\| \leqslant \widetilde{K}_1 \left\|(zM_0+M_1)^{-1}\right\| \leqslant \widetilde{K}_2 \left|z\right|^{\ell-1}$$

for all $z \in \mathbb{C}$ with $|z| \geqslant r$. Now, let $p \in \mathbb{N}$ be minimal such that $N^{p} = 0$. We show that $p \leqslant \ell$ by contradiction. Assume $p > \ell$. Then we compute

$$\begin{aligned} 0 &= \lim\_{n \to \infty} \frac{1}{n^{\ell}} (nN + 1)^{-1} N^{p - \ell - 1} = \lim\_{n \to \infty} \sum\_{k=0}^{p-1} (-1)^{k} n^{k - \ell} N^{k + p - \ell - 1} \\ &= \lim\_{n \to \infty} \sum\_{k=0}^{\ell - 1} (-1)^{k} n^{k - \ell} N^{k + p - \ell - 1} + (-1)^{\ell} N^{p - 1} \\ &= (-1)^{\ell} N^{p - 1}, \end{aligned}$$

which contradicts the minimality of *p*.

**Theorem 10.1.7** *Let $(M_0, M_1)$ be regular and let $P, Q \in \mathbb{C}^{n\times n}$ be chosen according to Proposition 10.1.5. Let $k = \deg \det((\cdot)M_0 + M_1)$. Then*

$$\mathrm{IV}(M_0, M_1) = \left\{ U_0 \in \mathbb{C}^n \;;\; Q^{-1} U_0 \in \mathbb{C}^k \times \{0\} \right\}.$$

*Moreover, for each $U_0 \in \mathrm{IV}(M_0, M_1)$ the solution $U$ of (10.2) is unique and satisfies $U \in C(\mathbb{R}_{\geqslant 0}; \mathbb{C}^n) \cap C^1(\mathbb{R}_{>0}; \mathbb{C}^n)$ as well as*

$$M\_0 U'(t) + M\_1 U(t) = 0 \quad (t > 0),$$

$$U(0+) = U\_0.$$

*Proof* Let $C \in \mathbb{C}^{k\times k}$ and the nilpotent $N \in \mathbb{C}^{(n-k)\times(n-k)}$ be as in Proposition 10.1.5. Obviously, $U$ is a solution of (10.2) if and only if $V := Q^{-1}U$ is continuous on $\mathbb{R}_{\geqslant 0}$ and solves

$$\left(\partial_{t,\nu}\begin{pmatrix} 1 & 0\\ 0 & N \end{pmatrix} + \begin{pmatrix} C & 0\\ 0 & 1 \end{pmatrix}\right)V = 0 \quad \text{on } (0,\infty),\tag{10.3}$$

$$V(0+) = Q^{-1}U_0 =: V_0.$$

Clearly, if $Q^{-1}U_0 = (x, 0) \in \mathbb{C}^k \times \{0\}$ then $V$ given by $V(t) := (\mathrm{e}^{-tC}x, 0)$ for $t \geqslant 0$ is a solution of (10.3) for $\nu > 0$ large enough. On the other hand, if $V$ given by $V(t) = (V_1(t), V_2(t)) \in \mathbb{C}^k \times \mathbb{C}^{n-k}$ ($t \geqslant 0$) is a solution of (10.3) then we have

$$\partial_{t,\nu} N V_2 + V_2 = 0 \quad \text{on } (0,\infty).$$

Since $N$ is nilpotent, there exists $\ell \in \mathbb{N}$ with $N^{\ell} = 0$. Hence,

$$N^{\ell-1}V_2(t) = -N^{\ell-1}\partial_{t,\nu}NV_2(t) = -\partial_{t,\nu}N^{\ell}V_2(t) = 0 \quad (t > 0),$$

which in turn implies $\partial_{t,\nu}N^{\ell-1}V_2 = 0$ on $(0,\infty)$. Using again the differential equation, we infer $N^{\ell-2}V_2(t) = 0$ for $t > 0$. Inductively, we deduce $V_2(t) = 0$ for $t > 0$ and by continuity $V_2(0+) = 0$, which yields $V_0 = Q^{-1}U_0 \in \mathbb{C}^k \times \{0\}$. The uniqueness follows from Proposition 10.2.7 below.
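The decoupled solution formula above can be checked numerically. The following sketch (plain NumPy; the helper `expm_diag` and the concrete blocks are our choices for illustration) takes a pair that is already in quasi-Weierstraß form, so $P = Q = 1$, with a diagonalisable block $C$ and the nilpotent block $N = 0$ of size $1$, and verifies that $U(t) = (\mathrm{e}^{-tC}x, 0)$ solves $M_0U'(t) + M_1U(t) = 0$:

```python
import numpy as np

def expm_diag(A):
    # matrix exponential via eigendecomposition (valid here since A is diagonalisable)
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

# pair in quasi-Weierstrass form: M0 = diag(1, N), M1 = diag(C, 1), with N = 0 in C^{1x1}
C = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M0 = np.zeros((3, 3)); M0[:2, :2] = np.eye(2)
M1 = np.eye(3);        M1[:2, :2] = C

x0 = np.array([1.0, -1.0])                                           # free part of the initial value
U  = lambda t: np.concatenate([expm_diag(-t * C) @ x0, [0.0]])
dU = lambda t: np.concatenate([-C @ expm_diag(-t * C) @ x0, [0.0]])  # exact derivative of U

residual = max(np.linalg.norm(M0 @ dU(t) + M1 @ U(t)) for t in np.linspace(0.0, 3.0, 7))
print(residual < 1e-10)   # True: U solves the DAE with U(0+) = (x0, 0)
```

An initial value with a nonzero last component is not consistent here: the third row of the system reads $U_3(t) = 0$ for $t > 0$, so continuity at $0$ forces $U_3(0+) = 0$, matching the description of $\mathrm{IV}(M_0, M_1)$ in the theorem.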

## **10.2 The Infinite-Dimensional Case**

Let now *M*0*, M*<sup>1</sup> ∈ *L(H )*. Again, it is our aim to determine the space of consistent initial values for the problem

$$\begin{cases} \left(\partial_{t,\nu}M_0 + M_1\right)U = 0 & \text{on } (0,\infty),\\ U(0+) = U_0. \end{cases} \tag{10.4}$$

Here, consistent initial values are defined as in the finite-dimensional setting:

**Definition** We call an initial value $U_0 \in H$ *consistent* for (10.4) if there exist $\nu > 0$ and $U \in C(\mathbb{R}_{\geqslant 0}; H) \cap L_{2,\nu}(\mathbb{R}_{\geqslant 0}; H)$ such that (10.4) holds. We denote the set of all consistent initial values for (10.4) by

$$\mathrm{IV}(M_0, M_1) := \{U_0 \in H \;;\; U_0 \text{ consistent}\}.$$

Before we try to determine IV*(M*0*, M*1*)* we prove a regularity result for solutions of (10.4).

**Proposition 10.2.1** *Let $\nu > 0$, $U_0 \in H$ and let $U \in C(\mathbb{R}_{\geqslant 0}; H) \cap L_{2,\nu}(\mathbb{R}_{\geqslant 0}; H)$ be a solution of (10.4). Then $M_0(U - \mathbb{1}_{[0,\infty)}U_0) \in H^1_{\nu}(\mathbb{R}; H)$ and*

$$\partial_{t,\nu}M_0\left(U - \mathbb{1}_{[0,\infty)}U_0\right) + M_1 U = 0.$$

*Proof* We extend $U$ to $\mathbb{R}$ by $0$. First, observe that $M_0(U - \mathbb{1}_{[0,\infty)}U_0)\colon \mathbb{R} \to H$ is continuous, since $U$ is continuous and $U(0+) = U_0$. By Lemma 9.4.2 (with $A = 0$), we obtain

$$U - \mathbb{1}\_{[0,\infty)} U\_0 \in \text{dom}\left(\overline{\partial\_{t,\boldsymbol{\nu}}M\_0 + M\_1}\right) \text{ and } \left(\overline{\partial\_{t,\boldsymbol{\nu}}M\_0 + M\_1}\right)(U - \mathbb{1}\_{[0,\infty)}U\_0) = -M\_1 U\_0 \mathbb{1}\_{[0,\infty)}.$$

Since $\partial_{t,\nu}$ is closed and $M_0$ is bounded, $\partial_{t,\nu}M_0$ is closed as well. Since $M_1$ is bounded, $\partial_{t,\nu}M_0 + M_1$ is therefore also closed. Thus, $U - \mathbb{1}_{[0,\infty)}U_0 \in \mathrm{dom}(\partial_{t,\nu}M_0 + M_1) = \mathrm{dom}(\partial_{t,\nu}M_0)$ and therefore $M_0(U - \mathbb{1}_{[0,\infty)}U_0) \in \mathrm{dom}(\partial_{t,\nu})$, and

$$\partial_{t,\nu}M_0(U - \mathbb{1}_{[0,\infty)}U_0) + M_1 U = 0.$$

We now come back to the space IV*(M*0*, M*1*)*. Since we are now dealing with an infinite-dimensional setting, we cannot use normal forms to determine IV*(M*0*, M*1*)* without dramatically restricting the class of operators. Thus, we follow a different approach using so-called Wong sequences.

**Definition** We set

$$\mathbf{IV}\_0 := H$$

and for *<sup>k</sup>* <sup>∈</sup> <sup>N</sup><sup>0</sup> we set

$$\mathrm{IV}_{k+1} := M_1^{-1}\left[M_0\left[\mathrm{IV}_k\right]\right].$$

The sequence *(*IV*k)k*∈N<sup>0</sup> is called the *Wong sequence* associated with *(M*0*, M*1*)*.

*Remark 10.2.2* By induction, we infer IV*k*+<sup>1</sup> <sup>⊆</sup> IV*<sup>k</sup>* for each *<sup>k</sup>* <sup>∈</sup> <sup>N</sup>0.
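In finite dimensions the Wong sequence can be computed directly with linear algebra. The sketch below (NumPy; the helper names `col_basis`, `preimage` and `wong` are ours) represents each $\mathrm{IV}_k$ by an orthonormal basis of columns and iterates $\mathrm{IV}_{k+1} = M_1^{-1}[M_0[\mathrm{IV}_k]]$:

```python
import numpy as np

def col_basis(A, tol=1e-10):
    # orthonormal basis of the column space of A (via SVD)
    if A.shape[1] == 0:
        return np.zeros((A.shape[0], 0))
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > tol]

def preimage(M, B, tol=1e-10):
    # basis of {x : M x in col(B)} = ker((I - P) M), with P the projector onto col(B)
    P = B @ B.conj().T
    K = (np.eye(M.shape[0]) - P) @ M
    _, s, Vh = np.linalg.svd(K)
    s = np.concatenate([s, np.zeros(M.shape[1] - s.size)])
    return Vh.conj().T[:, s <= tol]

def wong(M0, M1, steps):
    IV = [np.eye(M0.shape[0])]            # IV_0 = C^n
    for _ in range(steps):
        IV.append(preimage(M1, col_basis(M0 @ IV[-1])))
    return IV

M0 = np.array([[1.0, 0.0], [0.0, 0.0]])
M1 = np.array([[0.0, 0.0], [0.0, 1.0]])
print([B.shape[1] for B in wong(M0, M1, 3)])   # dimensions stabilise: [2, 1, 1, 1]
```

For this pair, $\det(zM_0 + M_1) = z$, so $k = 1$, matching the dimension at which the sequence stabilises; compare Exercise 10.1.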

As in the matrix case, we denote by

$$\rho(M\_0, M\_1) := \left\{ z \in \mathbb{C} \; ; \; (zM\_0 + M\_1)^{-1} \in L(H) \right\}$$

the *resolvent set of (M*0*, M*1*)*.

**Lemma 10.2.3** *Let $k \in \mathbb{N}_0$. Then:*

(a) *$\mathrm{IV}_k$ is a linear subspace of $H$.*
(b) *For each $z \in \rho(M_0, M_1)$ we have $(zM_0 + M_1)^{-1}M_0\left[\mathrm{IV}_k\right] \subseteq \mathrm{IV}_{k+1}$.*
(c) *For each $x \in \mathrm{IV}_k$ and $z \in \rho(M_0, M_1)$ there exist $x_1, \ldots, x_{k+1} \in H$ such that*

$$(zM_0 + M_1)^{-1}M_0 x = \frac{1}{z}x + \sum_{\ell=1}^k \frac{1}{z^{\ell+1}}x_\ell + \frac{1}{z^{k+1}}(zM_0 + M_1)^{-1}x_{k+1}.$$

(d) *If $\rho(M_0, M_1) \neq \emptyset$ then $M_1^{-1}\left[M_0\left[\overline{\mathrm{IV}_k}\right]\right] \subseteq \overline{\mathrm{IV}_{k+1}}$.*

*Proof* The proofs of the statements (a) to (c) are left as Exercise 10.6. We now prove (d). If $k = 0$ there is nothing to show. So assume that the statement holds for some $k \in \mathbb{N}_0$ and let $x \in M_1^{-1}\left[M_0\left[\overline{\mathrm{IV}_{k+1}}\right]\right]$. Since $\overline{\mathrm{IV}_{k+1}} \subseteq \overline{\mathrm{IV}_k}$, we infer $x \in M_1^{-1}\left[M_0\left[\overline{\mathrm{IV}_k}\right]\right] \subseteq \overline{\mathrm{IV}_{k+1}}$ by the induction hypothesis. Hence, we find a sequence $(w_n)_{n\in\mathbb{N}}$ in $\mathrm{IV}_{k+1}$ with $w_n \to x$. Let now $z \in \rho(M_0, M_1)$. Then, by (b), we have $(zM_0 + M_1)^{-1}M_0 w_n \in \mathrm{IV}_{k+2}$ for each $n \in \mathbb{N}$ and hence, $(zM_0 + M_1)^{-1}M_0 x \in \overline{\mathrm{IV}_{k+2}}$. Moreover, since $M_1 x \in M_0\left[\overline{\mathrm{IV}_{k+1}}\right]$, we find a sequence $(y_n)_{n\in\mathbb{N}}$ in $\mathrm{IV}_{k+1}$ with $M_0 y_n \to M_1 x$. Setting now

$$\mathbf{x}\_n := (zM\_0 + M\_1)^{-1} zM\_0 \mathbf{x} + (zM\_0 + M\_1)^{-1} M\_0 \mathbf{y}\_n \in \overline{\mathbf{IV}\_{k+2}}$$

(where, again, we have used (b)) for *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>, we derive

$$\mathbf{x}\_n = (zM\_0 + M\_1)^{-1} zM\_0 \mathbf{x} + (zM\_0 + M\_1)^{-1} M\_0 \mathbf{y}\_n$$

$$= \mathbf{x} - (zM\_0 + M\_1)^{-1} \left(M\_1 \mathbf{x} - M\_0 \mathbf{y}\_n\right) \to \mathbf{x}$$

as $n \to \infty$ and thus, $x \in \overline{\mathrm{IV}_{k+2}}$.

The importance of the Wong sequence becomes apparent if we consider solutions of (10.4).

**Lemma 10.2.4** *Assume that $\rho(M_0, M_1) \neq \emptyset$. Let $\nu > 0$ and let $U \in L_{2,\nu}(\mathbb{R}_{\geqslant 0}; H) \cap C(\mathbb{R}_{\geqslant 0}; H)$ be a solution of (10.4). Then $U(t) \in \bigcap_{k\in\mathbb{N}_0} \overline{\mathrm{IV}_k}$ for each $t \geqslant 0$.*

*Proof* We prove the claim, $U(t) \in \overline{\mathrm{IV}_k}$ for all $t \geqslant 0$ and $k \in \mathbb{N}_0$, by induction. For $k = 0$ there is nothing to show. Assume now that $U(t) \in \overline{\mathrm{IV}_k}$ for each $t \geqslant 0$ and some $k \in \mathbb{N}_0$. By Proposition 10.2.1 we know that

$$\partial_{t,\nu}M_0(U - \mathbb{1}_{[0,\infty)}U_0) + M_1 U = 0$$

and thus, in particular,

$$M\_0 U(t) - M\_0 U\_0 + \int\_0^t M\_1 U(s) \, \mathrm{d}s = 0 \quad (t \ge 0).$$

Let now $t \geqslant 0$ and $h > 0$. Then we infer

$$M_0 U(t+h) - M_0 U(t) + M_1 \int_t^{t+h} U(s)\,\mathrm{d}s = 0$$

and hence,

$$\int_t^{t+h} U(s)\,\mathrm{d}s \in M_1^{-1}\left[M_0\left[\overline{\mathrm{IV}_k}\right]\right] \subseteq \overline{\mathrm{IV}_{k+1}}$$

by Lemma 10.2.3(d). Since $U$ is continuous, the fundamental theorem of calculus implies $U(t) \in \overline{\mathrm{IV}_{k+1}}$, which yields the assertion.

In particular, the space of consistent initial values has to be a subspace of $\bigcap_{k\in\mathbb{N}_0} \overline{\mathrm{IV}_k}$. We now impose an additional constraint on the operator pair $(M_0, M_1)$, which is equivalent to being regular in the finite-dimensional setting (cf. Proposition 10.1.6).

**Definition** We call the operator pair $(M_0, M_1)$ *regular* if there exists $\nu_0 \geqslant 0$ such that

(a) $\mathbb{C}_{\mathrm{Re}>\nu_0} \subseteq \rho(M_0, M_1)$, and
(b) there exist $\ell \in \mathbb{N}$ and $C \geqslant 0$ such that

$$\left\|(zM_0 + M_1)^{-1}\right\| \leqslant C|z|^{\ell-1} \quad (z \in \mathbb{C}_{\mathrm{Re}>\nu_0}).$$

Moreover, we call the smallest $\ell \in \mathbb{N}$ satisfying (b) the *index of $(M_0, M_1)$*, which is denoted by $\mathrm{ind}(M_0, M_1)$.

*Remark 10.2.5* Note that for matrices *M*<sup>0</sup> and *M*<sup>1</sup> the index equals the degree of nilpotency of *N* in the quasi-Weierstraß normal form by Proposition 10.1.6.

From now on, we will require that *(M*0*, M*1*)* is regular. First, we prove an important result on the Wong sequence in this case.

**Proposition 10.2.6** *Let $(M_0, M_1)$ be regular, $k \in \mathbb{N}_0$, and $k \geqslant \mathrm{ind}(M_0, M_1)$. Then*

$$\overline{\mathrm{IV}_k} = \overline{\mathrm{IV}_{\mathrm{ind}(M_0, M_1)}}.$$

*Proof* We show that $\overline{\mathrm{IV}_k} = \overline{\mathrm{IV}_{k+1}}$ for each $k \geqslant \mathrm{ind}(M_0, M_1)$. Since the inclusion "$\supseteq$" holds trivially, it suffices to show $\mathrm{IV}_k \subseteq \overline{\mathrm{IV}_{k+1}}$. For doing so, let $k \geqslant \mathrm{ind}(M_0, M_1)$ and $x \in \mathrm{IV}_k$. By Lemma 10.2.3(c) we find $x_1, \ldots, x_{k+1} \in H$ such that

$$(zM\_0 + M\_1)^{-1}M\_0\mathbf{x} = \frac{1}{z}\mathbf{x} + \sum\_{\ell=1}^k \frac{1}{z^{\ell+1}}\mathbf{x}\_\ell + \frac{1}{z^{k+1}}(zM\_0 + M\_1)^{-1}\mathbf{x}\_{k+1}$$

for each $z \in \mathbb{C}_{\mathrm{Re}>\nu_0}$. Since $k \geqslant \mathrm{ind}(M_0, M_1)$, we derive

$$z(zM\_0 + M\_1)^{-1}M\_0x \to x \quad (\text{Re } z \to \infty),$$

and since the elements on the left-hand side belong to $\mathrm{IV}_{k+1}$ by Lemma 10.2.3(b), the assertion immediately follows.

We now prove that in case of a regular operator pair *(M*0*, M*1*)* the solution of (10.4) for a consistent initial value *U*<sup>0</sup> is uniquely determined.

**Proposition 10.2.7** *Let $(M_0, M_1)$ be regular, $U_0 \in \mathrm{IV}(M_0, M_1)$, and $\nu > 0$ such that a solution $U \in C(\mathbb{R}_{\geqslant 0}; H) \cap L_{2,\nu}(\mathbb{R}_{\geqslant 0}; H)$ of (10.4) exists. Then this solution is unique. In particular,*

$$(\mathcal{L}\_{\rho}U)(t) = \frac{1}{\sqrt{2\pi}} \left( (\text{it} + \rho)M\_0 + M\_1 \right)^{-1} M\_0 U\_0 \quad (a.e. \ t \in \mathbb{R})$$

*for each ρ >* max{*ν, ν*0}*.*

*Proof* By Proposition 10.2.1 we have $M_0(U - \mathbb{1}_{[0,\infty)}U_0) \in H^1_{\nu}(\mathbb{R}; H)$ and

$$\partial_{t,\nu}M_0(U - \mathbb{1}_{[0,\infty)}U_0) + M_1 U = 0.$$

Applying the Fourier–Laplace transformation $\mathcal{L}_\rho$ for $\rho > \max\{\nu, \nu_0\}$, we deduce

$$(\mathrm{i}t + \rho)M_0\left((\mathcal{L}_\rho U)(t) - \frac{1}{\sqrt{2\pi}}\frac{1}{\mathrm{i}t + \rho}U_0\right) + M_1 (\mathcal{L}_\rho U)(t) = 0 \quad (\text{a.e. } t \in \mathbb{R}),$$

which in turn yields

$$(\mathcal{L}_\rho U)(t) = \frac{1}{\sqrt{2\pi}}\left((\mathrm{i}t + \rho)M_0 + M_1\right)^{-1}M_0 U_0 \quad (\text{a.e. } t \in \mathbb{R})$$

and, in particular, proves the uniqueness of the solution.

*Remark 10.2.8* Let $U$ be a solution of (10.4) for a consistent initial value $U_0$. Then the formula in Proposition 10.2.7 shows that $U \in \bigcap_{\rho>\nu_0} L_{2,\rho}(\mathbb{R}; H)$ and hence, we also have $M_0(U - \mathbb{1}_{[0,\infty)}U_0) \in \bigcap_{\rho>\nu_0} H^1_{\rho}(\mathbb{R}; H)$. If $\nu_0 > 0$ then we even obtain $U \in L_{2,\nu_0}(\mathbb{R}; H)$ since $\sup_{\rho>\nu_0} \|U\|_{L_{2,\rho}(\mathbb{R};H)} = \sup_{\rho>\nu_0} \|\mathcal{L}_\rho U\|_{L_2(\mathbb{R};H)} < \infty$ (cp. Lemma 8.1.1), and therefore also $M_0(U - \mathbb{1}_{[0,\infty)}U_0) \in H^1_{\nu_0}(\mathbb{R}; H)$.

One interesting consequence of the latter proposition is the following.

**Corollary 10.2.9** *Let (M*0*, M*1*) be regular. Then the operator M*<sup>0</sup> : IV*(M*0*, M*1*)* → *H is injective.*

*Proof* Let *U*<sup>0</sup> ∈ IV*(M*0*, M*1*)* with *M*0*U*<sup>0</sup> = 0. By Proposition 10.2.7, the solution *U* of (10.4) with *U (*0+*)* = *U*<sup>0</sup> satisfies

$$\mathcal{L}\_{\rho}U(t) = \frac{1}{\sqrt{2\pi}} \left( (\mathrm{i}t + \rho)M\_0 + M\_1 \right)^{-1} M\_0 U\_0 = 0$$

and hence, *U* = 0, which in turn implies *U*<sup>0</sup> = *U (*0+*)* = 0.

We now want to determine the space IV*(M*0*, M*1*)* in terms of the Wong sequence.

**Proposition 10.2.10** *Let (M*0*, M*1*) be regular. Then*

$$\mathrm{IV}_{\mathrm{ind}(M_0, M_1)} \subseteq \mathrm{IV}(M_0, M_1) \subseteq \overline{\mathrm{IV}_{\mathrm{ind}(M_0, M_1)}}.$$

*Proof* The second inclusion follows from Lemma 10.2.4 and Proposition 10.2.6. Let now *U*<sup>0</sup> ∈ IVind*(M*0*,M*1*)* and set

$$V(z) := \frac{1}{\sqrt{2\pi}}(zM_0 + M_1)^{-1}M_0 U_0 \quad (z \in \mathbb{C}_{\mathrm{Re}>\nu_0}).$$

Let *k* := ind*(M*0*, M*1*)*. By Lemma 10.2.3(c) we find *x*1*,...,xk*+<sup>1</sup> ∈ *H* such that

$$V(z) = \frac{1}{\sqrt{2\pi}} \left( \frac{1}{z} U\_0 + \sum\_{\ell=1}^k \frac{1}{z^{\ell+1}} x\_\ell + \frac{1}{z^{k+1}} \left( z M\_0 + M\_1 \right)^{-1} x\_{k+1} \right) \quad (z \in \mathbb{C}\_{\text{Re} > \nu\_0}).$$

In particular, we read off that $V \in \mathcal{H}_2(\mathbb{C}_{\mathrm{Re}>\nu}; H)$ for all $\nu > \nu_0$. Now, let $\nu > \nu_0$. By the Theorem of Paley–Wiener (more precisely, by Corollary 8.1.3) there exists $U \in L_{2,\nu}(\mathbb{R}_{\geqslant 0}; H)$ such that

$$\left(\mathcal{L}\_{\rho}U\right)(t) = V(\mathfrak{i}t + \rho) \quad \text{(a.e.} \, t \in \mathbb{R}, \, \rho > \nu\text{)}.$$

Moreover,

$$zV(z) - \frac{1}{\sqrt{2\pi}}U\_0 = \frac{1}{\sqrt{2\pi}} \left( \sum\_{\ell=1}^k \frac{1}{z^\ell} \mathbf{x}\_\ell + \frac{1}{z^k} \left( zM\_0 + M\_1 \right)^{-1} \mathbf{x}\_{k+1} \right) \quad (z \in \mathbb{C}\_{\text{Re} > \nu})$$

and hence $z \mapsto zV(z) - \frac{1}{\sqrt{2\pi}}U_0$ belongs to $\mathcal{H}_2(\mathbb{C}_{\mathrm{Re}>\nu}; H)$ as well. Since

$$\begin{aligned} \left(\mathcal{L}\_{\rho}\partial\_{t,\rho}(U-\mathbb{1}\_{[0,\infty)}U\_0)\right)(t) &= (\text{it}+\rho)\left(\mathcal{L}\_{\rho}U\right)(t) - \frac{1}{\sqrt{2\pi}}U\_0\\ &= (\text{it}+\rho)V(\text{it}+\rho) - \frac{1}{\sqrt{2\pi}}U\_0 \quad (\text{a.e.}\ t \in \mathbb{R}, \rho > \nu), \end{aligned}$$

we infer $U - \mathbb{1}_{[0,\infty)}U_0 \in H^1_{\nu}(\mathbb{R}; H)$ and, thus, $U - \mathbb{1}_{[0,\infty)}U_0$ is continuous by Theorem 4.1.2. Hence, $U \in C(\mathbb{R}_{\geqslant 0}; H)$ and since $\operatorname{spt} U \subseteq \mathbb{R}_{\geqslant 0}$ we derive $U(0+) = U_0$. Finally, by the definition of $V$,

$$M\_0\left(zV(z) - \frac{1}{\sqrt{2\pi}}U\_0\right) = -\frac{1}{\sqrt{2\pi}}M\_1(zM\_0 + M\_1)^{-1}M\_0U\_0 = -M\_1V(z)$$

for all $z \in \mathbb{C}_{\mathrm{Re}>\nu}$. Hence,

$$\partial_{t,\nu}M_0(U - \mathbb{1}_{[0,\infty)}U_0) + M_1 U = 0,$$

from which we see that *U* solves (10.4).

Finally, we treat the case when IV*(M*0*, M*1*)* is closed.

**Theorem 10.2.11** *Let $(M_0, M_1)$ be regular and $\mathrm{IV}(M_0, M_1)$ closed. Then the operator $S\colon \mathrm{IV}(M_0, M_1) \to C(\mathbb{R}_{\geqslant 0}; H)$, which assigns to each initial state $U_0 \in \mathrm{IV}(M_0, M_1)$ its corresponding solution $U \in C(\mathbb{R}_{\geqslant 0}; H)$ of (10.4), is bounded in the sense that*

$$S_n\colon \mathrm{IV}(M_0, M_1) \to C([0,n]; H), \quad U_0 \mapsto (SU_0)|_{[0,n]},$$

*is bounded for each <sup>n</sup>* <sup>∈</sup> <sup>N</sup>*.*

*Proof* By Proposition 10.2.10 we infer that $\mathrm{IV}(M_0, M_1) = \overline{\mathrm{IV}_k}$ with $k := \mathrm{ind}(M_0, M_1)$. Let $\nu > \nu_0 \geqslant 0$. By Proposition 10.2.7 and Corollary 8.1.3, there exists $C \geqslant 0$ such that

$$\sqrt{2\pi}\left\|\partial_{t,\nu}^{-k}SU_0\right\|_{L_{2,\nu}(\mathbb{R}_{\geqslant 0};H)} = \left\|z \mapsto z^{-k}(zM_0 + M_1)^{-1}M_0 U_0\right\|_{\mathcal{H}_2(\mathbb{C}_{\mathrm{Re}>\nu};H)} \leqslant C\sqrt{\frac{\pi}{\nu}}\left\|M_0 U_0\right\|_H$$

for each $U_0 \in \mathrm{IV}(M_0, M_1)$, where we have used the regularity of $(M_0, M_1)$ and

$$\left\|z \mapsto z^{-1}M_0 U_0\right\|_{\mathcal{H}_2(\mathbb{C}_{\mathrm{Re}>\nu};H)} = \sqrt{\frac{\pi}{\nu}}\left\|M_0 U_0\right\|_H.$$

In particular, $S\colon \mathrm{IV}(M_0, M_1) \to H^{-1}(\partial_{t,\nu}^{k})$ is bounded. Since $L_{2,\nu_0}(\mathbb{R}_{\geqslant 0}; H) \hookrightarrow H^{-1}(\partial_{t,\nu}^{k})$ continuously, we infer that $S\colon \mathrm{IV}(M_0, M_1) \to L_{2,\nu_0}(\mathbb{R}_{\geqslant 0}; H)$ is bounded by the closed graph theorem. Hence, also

$$S\_n \colon \operatorname{IV}(M\_0, M\_1) \to L\_2([0, n]; H), \quad U\_0 \mapsto SU\_0|\_{[0, n]}$$

is bounded for each $n \in \mathbb{N}$, and since $C([0,n]; H) \hookrightarrow L_2([0,n]; H)$ continuously, we infer that $S_n$ is bounded with values in $C([0,n]; H)$, again by the closed graph theorem.

*Remark 10.2.12* The variant of the closed graph theorem used in the proof above is the following: Let *X, Y* be Banach spaces and *Z* a Hausdorff topological vector space (e.g. a Banach space) such that *Y* → *Z* continuously. Let *T* : *X* → *Z* be linear and continuous with *T* [*X*] ⊆ *Y* . Then *T* ∈ *L(X, Y )*. Indeed, by the closed graph theorem it suffices to show that *T* : *X* → *Y* is closed. For doing so, let *(xn)n* be a sequence in *X* with *xn* → *x* and *T xn* → *y* for some *x* ∈ *X, y* ∈ *Y* . Then *T xn* → *T x* in *Z* by the continuity of *T* and *T xn* → *y* in *Z* by the continuous embedding. Hence, *y* = *T x* and thus, *T* is closed.

## **10.3 Comments**

The theory of differential algebraic equations in finite dimensions is a very active field. The main motivation for studying these equations comes from the modelling of electrical circuits and from control theory (see e.g. [28] and Exercise 10.5). The main reference for the statements presented in the first part of this chapter is the book by Kunkel and Mehrmann [57]. Of course, also in the finite-dimensional case Wong sequences can be used to determine the consistent initial values, see Exercise 10.1. For instance, in [13] the connection between Wong sequences and the quasi-Weierstraß normal form for matrix pairs is studied. Of course, the theory is not restricted to linear and homogeneous problems. Indeed, in the non-homogeneous case it turns out that the set of consistent initial values also depends on the given right-hand side.

The theory of differential algebraic equations in infinite dimensions is less well studied than the finite-dimensional case. We refer to [114], where the theory of *C*0-semigroups is used to deal with such equations. Moreover, we refer to [97, 98], where sequences of projectors are used to decouple the system. Moreover, there exist several references in the Russian literature, where the equations are called Sobolev type equations (see e.g. [111]). The results on infinite-dimensional problems presented here are based on [121, 124, 125]. In [124] the focus was on systems with index 0 with an emphasis on exponential stability and dichotomy.

We also add the following remark concerning the result in Theorem 10.2.11. By Corollary 10.2.9 we know that *M*<sup>0</sup> : IV*(M*0*, M*1*)* → *H* is injective. If IV*(M*0*, M*1*)* is closed, it follows that the operator *C*: dom*(C)* ⊆ IV*(M*0*, M*1*)* → IV*(M*0*, M*1*)* given by

$$\text{dom}(C) := \left\{ U\_0 \in \text{IV}(M\_0, M\_1) \; ; \; M\_1 U\_0 \in M\_0 \left[ \text{IV}(M\_0, M\_1) \right] \right\},$$

$$CU\_0 := M\_0^{-1} M\_1 U\_0 \quad (U\_0 \in \text{dom}(C))$$

is well-defined and closed. Using this operator, *C*, Theorem 10.2.11 states that if IV*(M*0*, M*1*)* is closed then −*C* generates a *C*0-semigroup on IV*(M*0*, M*1*)*. The precise statement can be found in [121, Theorem 5.7]. Moreover, *C* is bounded if IVind*(M*0*,M*1*)* is closed (cf. Exercise 10.7).

## **Exercises**

**Exercise 10.1** Let $M_0, M_1 \in \mathbb{C}^{n\times n}$ be such that $(M_0, M_1)$ is regular and define the Wong sequence $(\mathrm{IV}_j)_{j\in\mathbb{N}_0}$ associated with $(M_0, M_1)$. Moreover, let $P, Q \in \mathbb{C}^{n\times n}$, $C \in \mathbb{C}^{k\times k}$, and $N \in \mathbb{C}^{(n-k)\times(n-k)}$ be as in the quasi-Weierstraß normal form for $(M_0, M_1)$ with $N$ nilpotent (cf. Proposition 10.1.5). We decompose a vector $x \in \mathbb{C}^n$ into $\hat{x} \in \mathbb{C}^k$ and $\check{x} \in \mathbb{C}^{n-k}$ such that $x = (\hat{x}, \check{x})$. Prove that

$$x \in \mathrm{IV}_j \iff \check{(Q^{-1}x)} \in \operatorname{ran} N^{j} \quad (j \in \mathbb{N}_0).$$

Moreover, show that for each *z* ∈ *ρ(M*0*, M*1*)* we have

$$\mathrm{IV}\_j = \mathrm{ran}\left( (zM\_0 + M\_1)^{-1}M\_0 \right)^j \quad (j \in \mathbb{N}\_0).$$

**Exercise 10.2** Let $E \in \mathbb{C}^{n\times n}$. We set $k := \mathrm{ind}(E, 1)$, where $1$ denotes the identity matrix in $\mathbb{C}^{n\times n}$. A matrix $X \in \mathbb{C}^{n\times n}$ is called a *Drazin inverse of $E$* if the following properties hold:

(a) $XEX = X$,
(b) $XE = EX$,
(c) $XE^{k+1} = E^{k}$.

Prove that each matrix $E \in \mathbb{C}^{n\times n}$ has a unique Drazin inverse. *Hint:* For the existence consider the quasi-Weierstraß form for $(E, 1)$.
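A Drazin inverse can also be computed numerically. One classical route (different from the hint above) uses the identity $E^{\mathrm{D}} = E^{k}\,(E^{2k+1})^{+}\,E^{k}$ with the Moore–Penrose pseudoinverse; the sketch below (NumPy; the helper name `drazin` is ours) first determines $k = \mathrm{ind}(E, 1)$ as the smallest $k$ with $\operatorname{rank} E^{k} = \operatorname{rank} E^{k+1}$:

```python
import numpy as np

def drazin(E, tol=1e-10):
    n = E.shape[0]
    rank = lambda A: np.linalg.matrix_rank(A, tol=tol)
    k, P = 0, np.eye(n)
    while rank(P) != rank(P @ E):     # index: smallest k with rank E^k = rank E^{k+1}
        P, k = P @ E, k + 1
    Ek = np.linalg.matrix_power(E, k)
    # classical formula E^D = E^k (E^{2k+1})^+ E^k
    return Ek @ np.linalg.pinv(np.linalg.matrix_power(E, 2 * k + 1)) @ Ek

E = np.array([[2.0, 0.0],
              [0.0, 0.0]])
X = drazin(E)
print(X)   # diag(1/2, 0)
```

One can then verify the three defining properties numerically: $XEX = X$, $XE = EX$ and $XE^{k+1} = E^{k}$.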

**Exercise 10.3** Let $M_0, M_1 \in \mathbb{C}^{n\times n}$ with $(M_0, M_1)$ regular and $M_0 M_1 = M_1 M_0$. Denote by $M_0^{\mathrm{D}}$ the Drazin inverse of $M_0$ (see Exercise 10.2). Prove:

$$U(t) = \mathrm{e}^{-tM_0^{\mathrm{D}}M_1}U_0 \quad (t \geqslant 0).$$

**Exercise 10.4** Let $M_0, M_1 \in \mathbb{C}^{n\times n}$ with $(M_0, M_1)$ regular. Prove that there exist two matrices $E, A \in \mathbb{C}^{n\times n}$ with $(E, A)$ regular and $EA = AE$ such that


**Exercise 10.5** We consider the following electrical circuit (see Fig. 10.1) with a resistor with resistance $R > 0$, an inductor with inductance $L > 0$ and a capacitor with capacitance $C > 0$. We denote the respective voltage drops by $v_R$, $v_L$ and $v_C$. Moreover, the current is denoted by $i$. The constitutive relations for the resistor, inductor and capacitor are given by

$$Ri = v\_R,$$

$$Li' = v\_L,$$

$$Cv\_C' = i,$$

respectively. Moreover, by Kirchhoff's second law we have

$$v\_{\mathcal{R}} + v\_{\mathcal{C}} + v\_{\mathcal{L}} = 0.$$

Write these equations as a differential algebraic equation and compute the index and the space of consistent initial values. Moreover, compute the solution for each consistent initial value for *R* = 2 and *C* = *L* = 1.

**Fig. 10.1** Electrical circuit
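One possible matrix formulation of the circuit equations (the ordering of the unknowns and of the rows is our choice) collects $U = (i, v_L, v_C, v_R)$ and checks numerically that the pencil is regular with $\deg \det(zM_0 + M_1) = 2$; for $R = 2$ and $C = L = 1$ the determinant is $-(z+1)^2$:

```python
import numpy as np

R, L, C = 2.0, 1.0, 1.0
# U = (i, v_L, v_C, v_R); rows: L i' = v_L, C v_C' = i, R i = v_R, Kirchhoff's law
M0 = np.array([[L, 0, 0, 0],
               [0, 0, C, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
M1 = np.array([[ 0, -1, 0,  0],
               [-1,  0, 0,  0],
               [ R,  0, 0, -1],
               [ 0,  1, 1,  1]])

# sample det(z M0 + M1) at five points and fit a quartic: only degrees <= 2 survive
zs = np.array([0.0, 1.0, -2.0, 3.0, -4.0])
dets = np.array([np.linalg.det(z * M0 + M1) for z in zs])
coeffs = np.polyfit(zs, dets, 4)
print(np.round(coeffs, 6))   # leading two coefficients vanish; the rest give -(z+1)^2
```

The two algebraic rows force $v_R = Ri$ and $v_L = -v_R - v_C$ along any solution, so the consistent initial values form a two-dimensional subspace, in line with $k = 2$.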

**Exercise 10.6** Prove the assertions (a) to (c) in Lemma 10.2.3.

**Exercise 10.7** Let *M*0*, M*<sup>1</sup> ∈ *L(H )*.


$$M\_0|\_{\mathrm{IV}\_{\mathrm{ind}(M\_0, M\_1)}} \colon \mathrm{IV}\_{\mathrm{ind}(M\_0, M\_1)} \to M\_0 \left[ \mathrm{IV}\_{\mathrm{ind}(M\_0, M\_1)-1} \right],$$

is an isomorphism.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 11 Exponential Stability of Evolutionary Equations**

In this chapter we study the exponential stability of evolutionary equations. Roughly speaking, exponential stability of a well-posed evolutionary equation

$$\left(\partial_{t,\nu}M(\partial_{t,\nu}) + A\right)U = F$$

means that exponentially decaying right-hand sides $F$ lead to exponentially decaying solutions $U$. The main problem in defining the notion of exponential decay for a solution of an evolutionary equation is the lack of continuity with respect to time, so a pointwise definition would not make sense in this framework. Instead, we will use our exponentially weighted spaces $L_{2,\nu}(\mathbb{R}; H)$, but this time for negative $\nu$, and define exponential stability by the invariance of these spaces under the solution operator associated with the evolutionary equation under consideration.

## **11.1 The Notion of Exponential Stability**

Throughout this section, let $H$ be a Hilbert space, $M\colon \mathrm{dom}(M) \subseteq \mathbb{C} \to L(H)$ a material law and $A\colon \mathrm{dom}(A) \subseteq H \to H$ a skew-selfadjoint operator. Moreover, we assume that there exist $\nu_0 > \mathrm{s_b}(M)$ and $c > 0$ such that

$$\operatorname{Re} zM(z) \geqslant c \quad (z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu_0}).$$

By Picard's theorem (Theorem 6.2.1) we know that for $\nu \geqslant \nu_0$ the operator

$$S_\nu := \left(\overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A}\right)^{-1} \in L(L_{2,\nu}(\mathbb{R}; H)),$$

is causal and independent of the particular choice of *ν*. We now define the notion of exponential stability.

**Definition** We call the solution operators $(S_\nu)_{\nu\geqslant\nu_0}$ *exponentially stable with decay rate* $\rho_0 > 0$ if for all $\rho \in [0, \rho_0)$ and $\nu \geqslant \nu_0$ we have

$$S\_{\boldsymbol{\nu}}F \in L\_{2,-\rho}(\mathbb{R}; H) \quad (F \in L\_{2,\boldsymbol{\nu}}(\mathbb{R}; H) \cap L\_{2,-\rho}(\mathbb{R}; H)).$$

*Remark 11.1.1* We emphasise that the definition of exponential stability does not mean that the evolutionary equation is just solvable for some negative weights. Indeed, if we consider *<sup>H</sup>* <sup>=</sup> <sup>C</sup>, *<sup>A</sup>* <sup>=</sup> 0 and *M(z)* <sup>=</sup> 1 for *<sup>z</sup>* <sup>∈</sup> <sup>C</sup> we obtain that the corresponding evolutionary equation

$$
\partial\_{l,\nu}U = F \tag{11.1}
$$

is well-posed for each *ν* = 0. However, we also place a demand for causality on our solution operator. Thus, we only have to consider parameters *ν >* 0. We obtain the solution *U* by

$$U(t) = \int\_{-\infty}^{t} F(s) \, \mathrm{d}s.$$

As it turns out, the problem (11.1) is not exponentially stable. Indeed, for $F:=\mathbb{1}_{[0,1]}\in\bigcap_{\nu\in\mathbb{R}}L_{2,\nu}(\mathbb{R})$ the solution $U$ is given by

$$U(t) = \begin{cases} 0 & \text{if } t < 0, \\ t & \text{if } 0 \le t \le 1, \\ 1 & \text{if } t > 1, \end{cases}$$

which does not belong to the space $L_{2,-\rho}(\mathbb{R})$ for any $\rho>0$.
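This failure of membership in $L_{2,-\rho}(\mathbb{R})$ can be illustrated numerically. The sketch below (illustrative weight $\rho=0.1$, not taken from the text) approximates the truncated weighted norms $\int_0^T|U(t)|^2\mathrm{e}^{2\rho t}\,\mathrm{d}t$ and observes that they grow without bound as $T$ increases.

```python
import numpy as np

# The solution above: U(t) = 0 for t < 0, U(t) = t on [0, 1], U(t) = 1 for t > 1.
def U(t):
    return np.clip(t, 0.0, 1.0)

rho = 0.1  # any rho > 0 would exhibit the same blow-up

# squared L_{2,-rho} norm of U restricted to [0, T]:
# int_0^T |U(t)|^2 e^{2 rho t} dt, approximated by a Riemann sum
def truncated_norm_sq(T, n=200_000):
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    return np.sum(U(t) ** 2 * np.exp(2 * rho * t)) * dt

norms = [truncated_norm_sq(T) for T in (10.0, 20.0, 40.0)]
# the truncated norms blow up as T grows, so U is not in L_{2,-rho}(R)
assert norms[0] < norms[1] < norms[2]
```

Since $U(t)=1$ for $t>1$, the integrand behaves like $\mathrm{e}^{2\rho t}$ for large $t$, so the truncated norms in fact grow exponentially in $T$.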

We first show that the aforementioned notion of exponential stability also yields a pointwise exponential decay of solutions if we assume more regularity for our source term *F*.

**Proposition 11.1.2** *Let $(S_\nu)_{\nu\geqslant\nu_0}$ be exponentially stable with decay rate $\rho_0>0$, $\nu\geqslant\nu_0$, $\rho\in[0,\rho_0)$ and $F\in\operatorname{dom}(\partial_{t,\nu})\cap\operatorname{dom}(\partial_{t,-\rho})$. Then $U:=S_\nu F$ is continuous and satisfies*

$$U(t)\mathbf{e}^{\rho t} \to 0 \quad (t \to \infty).$$

*Proof* We first note that $\partial_{t,\nu}F=\partial_{t,-\rho}F$ by Exercise 11.1. Moreover, since $S_\nu$ is a material law operator (i.e., $S_\nu=S(\partial_{t,\nu})$ for some material law $S$; see Remark 6.3.4) we have

$$S_\nu\partial_{t,\nu}\subseteq\partial_{t,\nu}S_\nu.$$

Thus, in particular, we have

$$S_\nu\partial_{t,\nu}F=\partial_{t,\nu}S_\nu F=\partial_{t,\nu}U;$$

that is, $U\in\operatorname{dom}(\partial_{t,\nu})$. Moreover, since $\partial_{t,\nu}F=\partial_{t,-\rho}F\in L_{2,-\rho}(\mathbb{R};H)$, we infer also $U,\partial_{t,\nu}U\in L_{2,-\rho}(\mathbb{R};H)$ by exponential stability. By Exercise 11.1 this yields $U\in\operatorname{dom}(\partial_{t,-\rho})$ with $\partial_{t,-\rho}U=\partial_{t,\nu}U$. The assertion now follows from the Sobolev embedding theorem (Theorem 4.1.2 and Corollary 4.1.3).

## **11.2 A Criterion for Exponential Stability of Parabolic-Type Equations**

In this section we will prove a useful criterion for exponential stability of a certain class of evolutionary equations. The easiest example we have in mind is the heat equation with homogeneous Dirichlet boundary conditions, which can be written as an evolutionary equation of the form (cf. Theorem 6.2.4)

$$
\left(\partial_{t,\nu}\begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&a^{-1}\end{pmatrix}+\begin{pmatrix}0&\operatorname{div}\\\operatorname{grad}_0&0\end{pmatrix}\right)\begin{pmatrix}\theta\\q\end{pmatrix}=\begin{pmatrix}Q\\0\end{pmatrix}
$$

in $L_{2,\nu}(\mathbb{R};H)$, where $H=L_2(\Omega)\oplus L_2(\Omega)^d$ with $\Omega\subseteq\mathbb{R}^d$ open, and $a\in L(L_2(\Omega)^d)$ with

$$\operatorname{Re}a\geqslant c$$

for some $c>0$, which models the heat conductivity, and $\nu>0$.

**Theorem 11.2.1** *Let $H_0,H_1$ be Hilbert spaces and $C\colon\operatorname{dom}(C)\subseteq H_0\to H_1$ a densely defined closed linear operator which is boundedly invertible. Moreover, let $M_0\in L(H_0)$ be selfadjoint with*

$$M_0\geqslant c_0$$

*for some $c_0>0$ and $M_1\colon\operatorname{dom}(M_1)\subseteq\mathbb{C}\to L(H_1)$ be a material law satisfying $s_b(M_1)<-\rho_1$ for some $\rho_1>0$ and*

$$\exists\,c_1>0\ \forall z\in\mathbb{C}_{\operatorname{Re}\geqslant-\rho_1}:\ \operatorname{Re}M_1(z)\geqslant c_1.$$

*Then*

$$S_\nu:=\overline{\left(\partial_{t,\nu}\begin{pmatrix}M_0&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&M_1(\partial_{t,\nu})\end{pmatrix}+\begin{pmatrix}0&-C^*\\C&0\end{pmatrix}\right)}^{\,-1}\in L\left(L_{2,\nu}(\mathbb{R};H_0\oplus H_1)\right)$$

*for each $\nu>0$. Moreover, for all $\nu_0>0$ the family $(S_\nu)_{\nu\geqslant\nu_0}$ is exponentially stable with decay rate*

$$\rho_0:=\min\Big\{\rho_1,\ \frac{c_1}{\|M_1\|_{\infty,\mathbb{C}_{\operatorname{Re}>-\rho_1}}^2\|M_0\|\|C^{-1}\|^2}\Big\}.$$

In order to prove this theorem we need a preparatory result.

**Lemma 11.2.2** *Assume the hypotheses of Theorem 11.2.1. Then for each $z\in\mathbb{C}_{\operatorname{Re}>-\rho_0}$ the operator*

$$T(z):=\begin{pmatrix}zM_0&0\\0&M_1(z)\end{pmatrix}+\begin{pmatrix}0&-C^*\\C&0\end{pmatrix}\colon\operatorname{dom}(C)\times\operatorname{dom}(C^*)\subseteq H_0\oplus H_1\to H_0\oplus H_1$$

*is boundedly invertible. Moreover,*

$$\sup_{z\in\mathbb{C}_{\operatorname{Re}\geqslant-\rho}}\left\|T(z)^{-1}\right\|<\infty$$

*for each $\rho<\rho_0$.*

*Proof* Let $z\in\mathbb{C}_{\operatorname{Re}\geqslant-\rho}$ for some $\rho<\rho_0$. We note that $M_1(z)$ is boundedly invertible with $\|M_1(z)^{-1}\|\leqslant1/c_1$ (see Proposition 6.2.3(b)) and $(C^*)^{-1}=(C^{-1})^*\in L(H_0,H_1)$ (see Lemmas 2.2.2 and 2.2.9). The beginning of the proof deals with a reformulation of $T(z)$. For this, let $u,f\in H_0$, $v,g\in H_1$. Then, by definition, $(u,v)\in\operatorname{dom}(T(z))=\operatorname{dom}(C)\times\operatorname{dom}(C^*)$ and $T(z)(u,v)=(f,g)$ if and only if $v\in\operatorname{dom}(C^*)$ and $u\in\operatorname{dom}(C)$ together with

$$zM_0u-C^*v=f,$$

$$Cu+M_1(z)v=g.$$

Since both $C^*$ and $M_1(z)$ are continuously invertible, we obtain equivalently $u\in\operatorname{dom}(C)$ together with

$$z(C^*)^{-1}M_0u-v=(C^*)^{-1}f,$$

$$M_1(z)^{-1}Cu+v=M_1(z)^{-1}g.$$

Adding the latter two equations and retaining the first equation, we obtain the following equivalent system subject to the condition $u\in\operatorname{dom}(C)$:

$$\begin{aligned}v&=(C^*)^{-1}(zM_0u-f)\in\operatorname{dom}(C^*),\\\big(z(C^*)^{-1}M_0C^{-1}+M_1(z)^{-1}\big)Cu&=M_1(z)^{-1}g+(C^*)^{-1}f.\end{aligned}$$

We now inspect the operator $S(z):=z(C^{-1})^*M_0C^{-1}+M_1(z)^{-1}\in L(H_1)$. By Proposition 6.2.3, for $x\in H_1$ we estimate

$$\begin{aligned}\operatorname{Re}\langle x,S(z)x\rangle&=\operatorname{Re}\langle C^{-1}x,zM_0C^{-1}x\rangle+\operatorname{Re}\langle x,M_1(z)^{-1}x\rangle\\&\geqslant-\rho\|M_0\|\|C^{-1}\|^2\|x\|^2+\frac{c_1}{\|M_1(z)\|^2}\|x\|^2\\&\geqslant\Big(\underbrace{\frac{c_1}{\|M_1\|_{\infty,\mathbb{C}_{\operatorname{Re}>-\rho_1}}^2}-\rho\|M_0\|\|C^{-1}\|^2}_{=:\mu}\Big)\|x\|^2.\end{aligned}$$

Since $\rho<\rho_0$ and by the definition of $\rho_0$ we infer that $\mu>0$. Hence, $S(z)$ is boundedly invertible with

$$\left\|S(z)^{-1}\right\|\leqslant\frac{1}{\mu}.$$

We now set

$$\begin{aligned}u&:=C^{-1}S(z)^{-1}\big((C^*)^{-1}f+M_1(z)^{-1}g\big)\in\operatorname{dom}(C),\\v&:=(C^*)^{-1}(zM_0u-f)\in\operatorname{dom}(C^*).\end{aligned}$$

By the first part of the proof we have that $(u,v)$ is the unique solution of $T(z)(u,v)=(f,g)$. Moreover, we can estimate

$$\begin{aligned}\|u\|&\leqslant\|C^{-1}\|\frac{1}{\mu}\Big(\big\|(C^*)^{-1}\big\|\|f\|+\frac{1}{c_1}\|g\|\Big),\quad\text{and}\\\|v\|&\leqslant\frac{1}{c_1}\big(\|g\|+\|Cu\|\big)\leqslant\frac{1}{c_1}\Big(\|g\|+\frac{1}{\mu}\Big(\big\|(C^*)^{-1}\big\|\|f\|+\frac{1}{c_1}\|g\|\Big)\Big),\end{aligned}$$

which proves that $T(z)$ is boundedly invertible with

$$\sup_{z\in\mathbb{C}_{\operatorname{Re}\geqslant-\rho}}\left\|T(z)^{-1}\right\|<\infty.$$

*Proof of Theorem 11.2.1* Let $H:=H_0\oplus H_1$. We set

$$M(z):=\begin{pmatrix}M_0&0\\0&z^{-1}M_1(z)\end{pmatrix}\quad(z\in\operatorname{dom}(M_1)\setminus\{0\}).$$

Let $\nu>0$. Then

$$\forall z\in\mathbb{C}_{\operatorname{Re}\geqslant\nu}:\ \operatorname{Re}zM(z)\geqslant\min\{\nu c_0,c_1\}$$

and hence, the first assertion of the theorem follows from Theorem 6.2.1.

Next, we focus on exponential stability. For $\nu>0$, we have that

$$S_\nu=T(\partial_{t,\nu})^{-1},$$

where $T$ is defined in Lemma 11.2.2. Moreover, by Lemma 11.2.2, the mapping $T^{-1}\colon\mathbb{C}_{\operatorname{Re}>-\rho_0}\to L(H)$ with $T^{-1}(z)=T(z)^{-1}$ defines a material law with $s_b(T^{-1})\leqslant-\rho_0$ (the holomorphy of $T$ is obvious and hence, $T^{-1}$ is also holomorphic). Thus, we may apply Theorem 5.3.6 to obtain (note that $T^{-1}(\partial_{t,\nu})=T(\partial_{t,\nu})^{-1}$)

$$S_\nu f=T(\partial_{t,\nu})^{-1}f=T(\partial_{t,\rho})^{-1}f\in L_{2,\rho}(\mathbb{R};H)$$

for each $f\in L_{2,\nu}(\mathbb{R};H)\cap L_{2,\rho}(\mathbb{R};H)$ with $\rho>-\rho_0$, which shows exponential stability.

## **11.3 Three Exponentially Stable Models for Heat Conduction**

#### **The Classical Heat Equation**

We recall the classical heat equation (cf. Theorem 6.2.4) on an open subset $\Omega\subseteq\mathbb{R}^d$, consisting of two equations, the heat flux balance

$$
\partial\_t \theta + \text{div} \, q = f
$$

and Fourier's law

$$q = -a \operatorname{grad} \theta,$$

where $f$ is a given source term and $a\in L(L_2(\Omega)^d)$ is an operator modelling the heat conductivity of the underlying medium. We will impose Dirichlet boundary conditions, which will be incorporated in our equation by replacing the operator $\operatorname{grad}$ by $\operatorname{grad}_0$ in Fourier's law (cf. Sect. 6.1).

In order to apply Theorem 11.2.1 we need that $\operatorname{grad}_0$ is boundedly invertible in some sense. This can be shown using *Poincaré's inequality*.

**Proposition 11.3.1 (Poincaré Inequality)** *Let $\Omega\subseteq\mathbb{R}^d$ be open and contained in a slab; that is, there exist $e\in\mathbb{R}^d$ with $\|e\|=1$ and $a,b\in\mathbb{R}$, $a<b$ such that*

$$\Omega\subseteq\left\{x\in\mathbb{R}^d\;;\;a<\langle e,x\rangle<b\right\}.$$

*Then for each $u\in\operatorname{dom}(\operatorname{grad}_0)$ we have*

$$\|u\|_{L_2(\Omega)}\leqslant(b-a)\left\|\operatorname{grad}_0u\right\|_{L_2(\Omega)^d}.$$

*Proof* Without loss of generality, let $e=(1,0,\ldots,0)$. Recall that, by definition, $C_c^\infty(\Omega)$ is a core for $\operatorname{grad}_0$. Thus, it suffices to prove the assertion for functions in $C_c^\infty(\Omega)$. Let $\varphi\in C_c^\infty(\Omega)$. We identify $\varphi$ with its extension by $0$ to the whole of $\mathbb{R}^d$. By the fundamental theorem of calculus, we may compute

$$\varphi(x)=\int_a^{x_1}\partial_1\varphi(s,x_2,\ldots,x_d)\,\mathrm{d}s\quad(x\in\Omega).$$

Hence, by the Cauchy–Schwarz inequality and Tonelli's theorem

$$\begin{aligned}\int_\Omega|\varphi(x)|^2\,\mathrm{d}x&=\int_\Omega\Big|\int_a^{x_1}\partial_1\varphi(s,x_2,\ldots,x_d)\,\mathrm{d}s\Big|^2\,\mathrm{d}x\\&\leqslant\int_\Omega(b-a)\int_a^b|\partial_1\varphi(s,x_2,\ldots,x_d)|^2\,\mathrm{d}s\,\mathrm{d}x=(b-a)^2\int_\Omega|\partial_1\varphi(x)|^2\,\mathrm{d}x\\&\leqslant(b-a)^2\left\|\operatorname{grad}_0\varphi\right\|_{L_2(\Omega)^d}^2,\end{aligned}$$

which shows the assertion.
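The inequality is easy to observe numerically in one dimension. The sketch below (illustrative: $\Omega=(0,1)$, test function $u(x)=\sin(\pi x)$, finite-difference gradient; none of this is part of the text) checks $\|u\|_{L_2(\Omega)}\leqslant(b-a)\|u'\|_{L_2(\Omega)}$ for a function vanishing at the boundary.

```python
import numpy as np

# Finite-difference check of the Poincare inequality on Omega = (0, 1):
# ||u||_{L2} <= (b - a) ||u'||_{L2} for u vanishing on the boundary.
a, b, n = 0.0, 1.0, 10_000
x = np.linspace(a, b, n)
u = np.sin(np.pi * x)      # smooth and zero at x = a and x = b
du = np.gradient(u, x)     # finite-difference approximation of u'

dx = x[1] - x[0]
norm_u = np.sqrt(np.sum(u ** 2) * dx)    # approx. 1/sqrt(2)
norm_du = np.sqrt(np.sum(du ** 2) * dx)  # approx. pi/sqrt(2)

assert norm_u <= (b - a) * norm_du
```

For this particular $u$, the two sides differ by the factor $\pi$, consistent with the constant $(b-a)$ not being optimal in general.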

**Corollary 11.3.2** *Under the assumptions of Proposition 11.3.1 the operator $\operatorname{grad}_0$ is one-to-one and $\operatorname{ran}(\operatorname{grad}_0)$ is closed.*

*Proof* The injectivity follows immediately from Poincaré's inequality. To prove the closedness of $\operatorname{ran}(\operatorname{grad}_0)$, let $(u_k)_{k\in\mathbb{N}}$ in $\operatorname{dom}(\operatorname{grad}_0)$ with $\operatorname{grad}_0u_k\to v$ in $L_2(\Omega)^d$ for some $v\in L_2(\Omega)^d$. By Poincaré's inequality, we infer that $(u_k)_{k\in\mathbb{N}}$ is a Cauchy sequence in $L_2(\Omega)$ and hence convergent to some $u\in L_2(\Omega)$. By the closedness of $\operatorname{grad}_0$ we obtain $u\in\operatorname{dom}(\operatorname{grad}_0)$ and $v=\operatorname{grad}_0u\in\operatorname{ran}(\operatorname{grad}_0)$.

We need another auxiliary result which is interesting in its own right.

**Lemma 11.3.3** *Let $H$ be a Hilbert space and $V\subseteq H$ a closed subspace. We denote by*

$$\iota_V\colon V\to H,\quad x\mapsto x$$

*the canonical embedding of $V$ into $H$. Then $\iota_V\iota_V^*\colon H\to H$ is the orthogonal projection onto $V$ and $\iota_V^*\iota_V\colon V\to V$ is the identity on $V$.*

*Proof* The proof is left as Exercise 11.2.
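Both identities of the lemma are transparent in finite dimensions. In the sketch below (hypothetical example: $H=\mathbb{R}^5$ and a two-dimensional subspace $V$; not from the text), $\iota_V$ is represented by a matrix $Q$ with orthonormal columns, so $\iota_V^*$ corresponds to $Q^\top$.

```python
import numpy as np

# V = span of two orthonormal columns of Q; Q represents iota_V
# (coordinates in V -> vectors in H = R^5), and Q.T represents iota_V^*.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # 5x2, orthonormal columns

P = Q @ Q.T        # iota_V iota_V^*: orthogonal projection onto V
I_V = Q.T @ Q      # iota_V^* iota_V: identity on V

assert np.allclose(P @ P, P)        # idempotent
assert np.allclose(P, P.T)          # selfadjoint
assert np.allclose(I_V, np.eye(2))  # identity on V
```

The two asserted matrix identities are exactly the defining properties of an orthogonal projection together with the identity on $V$.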

We now come to the exponential stability of the heat equation. First, we need to formulate both the heat flux balance and Fourier's law as a suitable evolutionary equation. For doing so, we assume that $\Omega\subseteq\mathbb{R}^d$ is open and contained in a slab. Then $\operatorname{ran}(\operatorname{grad}_0)$ is closed by Corollary 11.3.2. It is clear that we can write Fourier's law as

$$q=-a\operatorname{grad}_0\theta=-a\,\iota_{\operatorname{ran}(\operatorname{grad}_0)}\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*\operatorname{grad}_0\theta.$$

Hence, defining $\widetilde q:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*q$ and $\widetilde a:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*a\,\iota_{\operatorname{ran}(\operatorname{grad}_0)}\in L(\operatorname{ran}(\operatorname{grad}_0))$, we arrive at

$$\widetilde q=-\widetilde a\,\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*\operatorname{grad}_0\theta.$$

Moreover, since $\operatorname{ran}(\operatorname{grad}_0)^\perp=\ker(\operatorname{div})$, we derive from the heat flux balance

$$f=\partial_t\theta+\operatorname{div}q=\partial_t\theta+\operatorname{div}\iota_{\operatorname{ran}(\operatorname{grad}_0)}\widetilde q$$

and hence, assuming that $\widetilde a$ is invertible, we may write both equations with the unknowns $(\theta,\widetilde q)$ as an evolutionary equation in $L_{2,\nu}(\mathbb{R};H)$ for $\nu>0$, where $H:=L_2(\Omega)\oplus\operatorname{ran}(\operatorname{grad}_0)$. This yields

$$\left(\partial_{t,\nu}\begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&\widetilde a^{-1}\end{pmatrix}+\begin{pmatrix}0&\operatorname{div}\iota_{\operatorname{ran}(\operatorname{grad}_0)}\\\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*\operatorname{grad}_0&0\end{pmatrix}\right)\begin{pmatrix}\theta\\\widetilde q\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix}.\tag{11.2}$$

For notational convenience, we set

$$C:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*\operatorname{grad}_0\colon\operatorname{dom}(\operatorname{grad}_0)\subseteq L_2(\Omega)\to\operatorname{ran}(\operatorname{grad}_0).\tag{11.3}$$

**Lemma 11.3.4** *Let $\Omega\subseteq\mathbb{R}^d$ be open and contained in a slab and $C$ as above. Then $C$ is densely defined, closed and boundedly invertible. Moreover,*

$$C^*=-\operatorname{div}\iota_{\operatorname{ran}(\operatorname{grad}_0)}.$$

*Proof* The proof is left as Exercise 11.3.

**Proposition 11.3.5** *Let $\Omega\subseteq\mathbb{R}^d$ be open and contained in a slab, $a\in L(L_2(\Omega)^d)$, and $c_1>0$ such that*

$$\operatorname{Re}a\geqslant c_1.$$

*Then $\widetilde a:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*a\,\iota_{\operatorname{ran}(\operatorname{grad}_0)}$ is boundedly invertible and the solution operators associated with (11.2) are exponentially stable.*

*Proof* For $x\in\operatorname{ran}(\operatorname{grad}_0)$ we have

$$\begin{aligned}\operatorname{Re}\langle x,\widetilde ax\rangle_{\operatorname{ran}(\operatorname{grad}_0)}&=\operatorname{Re}\langle\iota_{\operatorname{ran}(\operatorname{grad}_0)}x,a\iota_{\operatorname{ran}(\operatorname{grad}_0)}x\rangle_{L_2(\Omega)^d}\\&\geqslant c_1\left\|\iota_{\operatorname{ran}(\operatorname{grad}_0)}x\right\|_{L_2(\Omega)^d}^2=c_1\|x\|_{\operatorname{ran}(\operatorname{grad}_0)}^2,\end{aligned}$$

and thus, $\widetilde a$ is boundedly invertible. Hence, (11.2) is an evolutionary equation of the form considered in Theorem 11.2.1 with $M_0:=1$, $M_1(z):=\widetilde a^{-1}$ for $z\in\mathbb{C}$ and $C$ given by (11.3). Since $\operatorname{Re}\widetilde a^{-1}\geqslant\frac{c_1}{\|a\|^2}$, Theorem 11.2.1 is applicable and we derive the exponential stability.

#### **The Heat Equation with Additional Delay**

Again we consider the heat equation, but now we replace Fourier's law by

$$q=-a_1\operatorname{grad}_0\theta-a_2\tau_{-h}\operatorname{grad}_0\theta$$

for some operators $a_1,a_2\in L(L_2(\Omega)^d)$ and $h>0$. As above, we assume that $\Omega\subseteq\mathbb{R}^d$ is open and contained in a slab. We may introduce $\widetilde q:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*q$ and $\widetilde a_j:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*a_j\iota_{\operatorname{ran}(\operatorname{grad}_0)}\in L(\operatorname{ran}(\operatorname{grad}_0))$ for $j\in\{1,2\}$. Moreover, we assume that there exists $c>0$ such that

$$\operatorname{Re}a_1\geqslant c.$$

By Lemma 7.3.1 there exists $\nu_0>0$ such that the operator $\widetilde a_1+\widetilde a_2\tau_{-h}$ is boundedly invertible in $L_{2,\nu}(\mathbb{R};\operatorname{ran}(\operatorname{grad}_0))$ and its inverse is uniformly strictly positive definite for each $\nu\geqslant\nu_0$. Hence, we may write the heat equation with additional delay as an evolutionary equation of the form

$$
\left(\partial_{t,\nu}\begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&(\widetilde a_1+\widetilde a_2\tau_{-h})^{-1}\end{pmatrix}+\begin{pmatrix}0&-C^*\\C&0\end{pmatrix}\right)\begin{pmatrix}\theta\\\widetilde q\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix}\tag{11.4}
$$

with *C* given by (11.3).

**Proposition 11.3.6** *Let $\Omega\subseteq\mathbb{R}^d$ be open and contained in a slab, $h>0$, $a_1,a_2\in L(L_2(\Omega)^d)$, and $c>0$ such that*

$$\operatorname{Re}a_1\geqslant c$$

*and $\|a_2\|<c$. Then the solution operators $(S_\nu)_{\nu\geqslant\nu_0}$ associated with (11.4) are exponentially stable.*

*Proof* Note that $\|\widetilde a_2\|\leqslant\|a_2\|<c$. We choose

$$0<\rho_1<\frac{1}{h}\log\frac{c}{\|\widetilde a_2\|}.$$

Then we estimate for $z\in\mathbb{C}_{\operatorname{Re}>-\rho_1}$ and $x\in\operatorname{ran}(\operatorname{grad}_0)$

$$\operatorname{Re}\big\langle x,\big(\widetilde a_1+\widetilde a_2\mathrm{e}^{-zh}\big)x\big\rangle_{\operatorname{ran}(\operatorname{grad}_0)}\geqslant\big(c-\|\widetilde a_2\|\mathrm{e}^{\rho_1h}\big)\|x\|_{\operatorname{ran}(\operatorname{grad}_0)}^2.$$

By the choice of $\rho_1$, we infer $\widetilde c:=c-\|\widetilde a_2\|\mathrm{e}^{\rho_1h}>0$. Hence,

$$M_1(z):=\big(\widetilde a_1+\widetilde a_2\mathrm{e}^{-hz}\big)^{-1}\quad(z\in\mathbb{C}_{\operatorname{Re}>-\rho_1})$$

is well-defined and satisfies

$$\operatorname{Re}M_1(z)\geqslant c_1\quad(z\in\mathbb{C}_{\operatorname{Re}>-\rho_1})$$

for some $c_1>0$ by Proposition 6.2.3. Thus, Theorem 11.2.1 is applicable and yields the exponential stability of (11.4).
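For scalar coefficients the positivity estimate behind this proof can be sampled numerically. The sketch below uses purely illustrative values ($a_1=2$, $a_2=0.5$, $h=1$, $c=2$; none appear in the text) and checks that $\operatorname{Re}\big(a_1+a_2\mathrm{e}^{-zh}\big)$ stays above the constant $c-|a_2|\mathrm{e}^{\rho_1h}$ on the half-plane $\operatorname{Re}z>-\rho_1$.

```python
import numpy as np

# Illustrative scalar data: a1 = 2, a2 = 0.5, h = 1, with Re a1 >= c = 2.
a1, a2, h, c = 2.0, 0.5, 1.0, 2.0
# pick rho1 strictly below the admissible bound (1/h) log(c/|a2|)
rho1 = 0.5 * (1 / h) * np.log(c / abs(a2))

rng = np.random.default_rng(1)
# sample z with Re z >= -rho1
z = (-rho1 + rng.uniform(0, 5, 1000)) + 1j * rng.uniform(-50, 50, 1000)
lower = c - abs(a2) * np.exp(rho1 * h)  # the constant from the proof

assert lower > 0
assert np.all(np.real(a1 + a2 * np.exp(-z * h)) >= lower)
```

The first assertion mirrors the choice of $\rho_1$; the second mirrors the displayed estimate, since $|\mathrm{e}^{-zh}|=\mathrm{e}^{-h\operatorname{Re}z}\leqslant\mathrm{e}^{\rho_1h}$ on the sampled half-plane.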

#### **A Dual Phase Lag Model**

In this last variant of heat conduction, we replace Fourier's law by

$$(1+s_q\partial_t)q=-(1+s_\theta\partial_t)\operatorname{grad}_0\theta,$$

where $s_q,s_\theta>0$ are the so-called "phases" (cf. Sect. 7.4, where a different type of dual phase lag model is studied). The latter equation can be reformulated as

$$(1+s_q\partial_{t,\nu})(1+s_\theta\partial_{t,\nu})^{-1}q=-\operatorname{grad}_0\theta$$

for $\nu>0$. Assuming that $\Omega\subseteq\mathbb{R}^d$ is open and contained in a slab, and defining $\widetilde q:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*q$, the dual phase lag model may be written as

$$
\left(\partial_{t,\nu}\begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&(1+s_q\partial_{t,\nu})(1+s_\theta\partial_{t,\nu})^{-1}\end{pmatrix}+\begin{pmatrix}0&-C^*\\C&0\end{pmatrix}\right)\begin{pmatrix}\theta\\\widetilde q\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix}\tag{11.5}
$$

with *C* given by (11.3).

**Proposition 11.3.7** *Let $\Omega\subseteq\mathbb{R}^d$ be open and contained in a slab, $\nu_0>0$. Moreover, let $s_\theta>s_q>0$. Then the solution operators $(S_\nu)_{\nu\geqslant\nu_0}$ associated with (11.5) are exponentially stable.*

*Proof* Again, we note that (11.5) is of the form considered in Theorem 11.2.1 with $M_0:=1$ and

$$M_1(z):=\frac{1+s_qz}{1+s_\theta z}\quad(z\in\mathbb{C}\setminus\{-s_\theta^{-1}\}).$$

Setting $\mu:=\frac{s_q}{s_\theta}<1$ we compute

$$\operatorname{Re}M_1(z)=\operatorname{Re}\Big(\mu+\frac{1-\mu}{1+s_\theta z}\Big)=\mu+(1-\mu)\frac{1+s_\theta\operatorname{Re}z}{|1+s_\theta z|^2}\geqslant\mu\quad(z\in\mathbb{C}_{\operatorname{Re}>-s_\theta^{-1}}).$$

Thus, Theorem 11.2.1 is applicable and hence, the claim follows.
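The lower bound $\operatorname{Re}M_1(z)\geqslant\mu$ can be sampled numerically. The phases in the sketch below ($s_q=1$, $s_\theta=2$) are illustrative and not taken from the text.

```python
import numpy as np

# Check Re M1(z) >= mu = s_q/s_theta on Re z > -1/s_theta
# for illustrative phases s_q = 1, s_theta = 2.
s_q, s_th = 1.0, 2.0
mu = s_q / s_th

rng = np.random.default_rng(2)
# sample z strictly to the right of the pole at -1/s_theta
z = (-1 / s_th + rng.uniform(1e-6, 10, 2000)) + 1j * rng.uniform(-100, 100, 2000)
M1 = (1 + s_q * z) / (1 + s_th * z)

assert np.all(np.real(M1) >= mu - 1e-12)
```

The margin $10^{-12}$ only guards against floating-point rounding; analytically the bound is exact on the open half-plane.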

## **11.4 Exponential Stability for Hyperbolic-Type Equations**

Important examples of exponentially stable equations do not fit in the class of parabolic-like equations studied in Sect. 11.2. As a motivating example we consider the damped wave equation, which can be written as a second-order equation of the form

$$
\partial_{t,\nu}^2M_0u+\partial_{t,\nu}M_1u-\operatorname{div}\operatorname{grad}_0u=f,\tag{11.6}
$$

where $M_0,M_1\in L(L_2(\Omega))$, $M_0$ is selfadjoint and $M_0,\operatorname{Re}M_1\geqslant c>0$, with $\Omega\subseteq\mathbb{R}^d$ modelling the underlying medium. It is well known that this equation is exponentially stable if $\Omega$ is bounded. However, if we write this equation as an evolutionary problem in the canonical way, that is, we introduce $v:=\partial_{t,\nu}u$ and $q:=-\operatorname{grad}_0u$ as new unknowns, we end up with an equation of the form

$$
\left(\partial_{t,\nu}\begin{pmatrix}M_0&0\\0&1\end{pmatrix}+\begin{pmatrix}M_1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&\operatorname{div}\\\operatorname{grad}_0&0\end{pmatrix}\right)\begin{pmatrix}v\\q\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix},\tag{11.7}
$$

which is not of the form discussed in Sect. 11.2. However, another formulation of (11.6) as an evolutionary equation allows us to show exponential stability in a similar way as for parabolic-type equations. More precisely, we aim for a formulation such that the second block operator matrix in (11.7) has non-vanishing diagonal entries. This leads to a damping effect for both unknowns.

We start by providing a general reformulation scheme for second-order equations as suitable evolutionary equations and afterwards discuss their exponential stability.

#### **An Alternative Reformulation for Hyperbolic-Type Equations**

Throughout we assume that $C\colon\operatorname{dom}(C)\subseteq H_0\to H_1$ is a densely defined closed linear operator between two Hilbert spaces $H_0$ and $H_1$, which is additionally assumed to be boundedly invertible. Furthermore, let $M\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H_0)$ be a material law of the form

$$M(z)=M_0(z)+z^{-1}M_1(z)\quad(z\in\operatorname{dom}(M)),$$

where $M_0,M_1\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H_0)$ are material laws themselves. We consider second-order problems of the form

$$\left(\partial_{t,\nu}^2M(\partial_{t,\nu})+C^*C\right)u=f,\tag{11.8}$$

for a given right-hand side $f\in L_{2,\nu}(\mathbb{R};H_0)$ and aim for conditions on $M$ to ensure exponential stability in a suitable sense.

*Example 11.4.1* The wave equation (11.6) on a bounded domain $\Omega\subseteq\mathbb{R}^d$ is indeed of the form (11.8). We set $C:=\iota_{\operatorname{ran}(\operatorname{grad}_0)}^*\operatorname{grad}_0\colon\operatorname{dom}(\operatorname{grad}_0)\subseteq L_2(\Omega)\to\operatorname{ran}(\operatorname{grad}_0)$, which is boundedly invertible by Poincaré's inequality (see Proposition 11.3.1 and Lemma 11.3.4), and

$$M(z)=M_0+z^{-1}M_1\quad(z\in\mathbb{C}\setminus\{0\})$$

for $M_0,M_1\in L(L_2(\Omega))$.

We now introduce two new unknowns to rewrite (11.8) as an evolutionary equation. For this let $d>0$ and set $v_d:=\partial_{t,\nu}u+du$ and $q:=-Cu$. Then we formally get

$$
\partial_{t,\nu}q=-C\partial_{t,\nu}u=-C(v_d-du)=-Cv_d+dCu=-Cv_d-dq
$$

and

$$\begin{aligned}\partial_{t,\nu}M(\partial_{t,\nu})v_d&=\partial_{t,\nu}^2M(\partial_{t,\nu})u+d\partial_{t,\nu}M(\partial_{t,\nu})u\\&=f-C^*Cu+d\partial_{t,\nu}M_0(\partial_{t,\nu})u+dM_1(\partial_{t,\nu})u\\&=f+C^*q+dM_0(\partial_{t,\nu})(v_d-du)+dM_1(\partial_{t,\nu})u\\&=f+C^*q+dM_0(\partial_{t,\nu})v_d-d\big(M_1(\partial_{t,\nu})-dM_0(\partial_{t,\nu})\big)C^{-1}q.\end{aligned}$$

Thus, the new unknowns, *vd* and *q*, satisfy an evolutionary equation of the form

$$\left(\partial_{t,\nu}\begin{pmatrix}M(\partial_{t,\nu})&0\\0&1\end{pmatrix}+d\begin{pmatrix}-M_0(\partial_{t,\nu})&\big(M_1(\partial_{t,\nu})-dM_0(\partial_{t,\nu})\big)C^{-1}\\0&1\end{pmatrix}+\begin{pmatrix}0&-C^*\\C&0\end{pmatrix}\right)\begin{pmatrix}v_d\\q\end{pmatrix}=\begin{pmatrix}f\\0\end{pmatrix},\tag{11.9}$$

with a new material law $M_d\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H_0\oplus H_1)$ given by

$$M_d(z):=\begin{pmatrix}M(z)&0\\0&1\end{pmatrix}+z^{-1}d\begin{pmatrix}-M_0(z)&\big(M_1(z)-dM_0(z)\big)C^{-1}\\0&1\end{pmatrix}.$$

*Remark 11.4.2* We remark that the above formal computation can be done rigorously (both forward and backwards), so that indeed (11.8) and (11.9) are equivalent problems in the sense that the solutions *u* and *(vd ,q)* are linked via

$$v_d=\partial_{t,\nu}u+du,\quad q=-Cu.$$

## **11.5 A Criterion for Exponential Stability of Hyperbolic-Type Equations**

In this section we provide sufficient conditions on the material law $M$ in order to obtain a well-posed and exponentially stable problem (11.9) for a suitable $d>0$. The assumptions of the previous section remain in effect.

*Remark 11.5.1* Assume that (11.9) is exponentially stable with decay rate $\rho_0>0$; that is, $v_d\in L_{2,-\rho}(\mathbb{R};H_0)$, $q\in L_{2,-\rho}(\mathbb{R};H_1)$ if $f\in L_{2,-\rho}(\mathbb{R};H_0)\cap L_{2,\nu}(\mathbb{R};H_0)$ for all $\rho\in[0,\rho_0)$ and $\nu>0$ large enough. Then $u,\partial_{t,\nu}u\in L_{2,-\rho}(\mathbb{R};H_0)$ as well. Indeed, since

$$
u=-C^{-1}q\in L_{2,-\rho}(\mathbb{R};H_0),
$$

we derive

$$
\partial_{t,\nu}u=v_d-du\in L_{2,-\rho}(\mathbb{R};H_0).
$$

Employing Exercise 11.1, we even infer $u\in\operatorname{dom}(\partial_{t,-\rho})$ and hence, $u\in C_{-\rho}(\mathbb{R};H_0)$ by Sobolev's embedding theorem (see Theorem 4.1.2). Thus, we also obtain the exponential stability of (11.8) in this case.

In order to prove the exponential stability of (11.9), we have to show how a positive definiteness assumption on *M* allows for positive definiteness of *Md* for some *d >* 0. We start with the following observation.

**Lemma 11.5.2** *Let $z\in\operatorname{dom}(M)$, $c>0$. Assume*

$$\operatorname{Re}\langle u,zM(z)u\rangle_{H_0}\geqslant c\|u\|_{H_0}^2\quad(u\in H_0).$$

*Then for $d>0$ and $(v,q)\in H_0\oplus H_1$ it follows that*

$$\operatorname{Re}\langle(v,q),zM_d(z)(v,q)\rangle_{H_0\oplus H_1}\geqslant\min\Big\{c-dK(d),\tfrac{3}{4}d+\operatorname{Re}z\Big\}\|(v,q)\|_{H_0\oplus H_1}^2,$$

*where $K(d):=m_0+(dm_0+m_1)^2\|C^{-1}\|^2$ and $m_j:=\|M_j\|_\infty$ for $j\in\{0,1\}$.*

*Proof* Let $v\in H_0$ and $q\in H_1$. Then we estimate

$$\begin{aligned}&\operatorname{Re}\langle(v,q),zM_d(z)(v,q)\rangle_{H_0\oplus H_1}\\&\quad=\operatorname{Re}\langle v,zM(z)v-dM_0(z)v+d(M_1(z)-dM_0(z))C^{-1}q\rangle_{H_0}+\operatorname{Re}\langle q,zq+dq\rangle_{H_1}\\&\quad\geqslant(c-dm_0)\|v\|_{H_0}^2-d(m_1+dm_0)\|C^{-1}\|\|q\|_{H_1}\|v\|_{H_0}+(\operatorname{Re}z+d)\|q\|_{H_1}^2\\&\quad\geqslant\Big(c-dm_0-\frac{1}{4\varepsilon}d^2(m_1+dm_0)^2\|C^{-1}\|^2\Big)\|v\|_{H_0}^2+(\operatorname{Re}z+d-\varepsilon)\|q\|_{H_1}^2\end{aligned}$$

for each $\varepsilon>0$, where we have used the Peter–Paul inequality. Choosing $\varepsilon=\frac{d}{4}$, we obtain the assertion.

This estimate allows us to derive the positive definiteness of *Md* for a suitable choice of *d >* 0.

**Proposition 11.5.3** *Let $c>0$ and assume that*

$$\operatorname{Re}\langle u,zM(z)u\rangle_{H_0}\geqslant c\|u\|_{H_0}^2\quad(u\in H_0,\ z\in\operatorname{dom}(M)).$$

*Then there exist $\widetilde c,d,\rho_0>0$ such that*

$$\operatorname{Re}\langle(v,q),zM_d(z)(v,q)\rangle_{H_0\oplus H_1}\geqslant\widetilde c\|(v,q)\|_{H_0\oplus H_1}^2$$

*for all $z\in\operatorname{dom}(M)\cap\mathbb{C}_{\operatorname{Re}>-\rho_0}$ and $(v,q)\in H_0\oplus H_1$.*

*Proof* We note that *dK(d)* → 0 as *d* → 0*,* where *K(d)* is given as in Lemma 11.5.2. Hence, we find *d >* 0 such that *dK(d) < c.* Choosing *ρ*<sup>0</sup> *<* <sup>3</sup> 4 *d* and using Lemma 11.5.2, we estimate for each *<sup>z</sup>* <sup>∈</sup> dom*(M)* <sup>∩</sup> <sup>C</sup>Re*>*−*ρ*<sup>0</sup> and *(v, q)* ∈ *H*<sup>0</sup> ⊕ *H*<sup>1</sup>

$$\operatorname{Re}\left\langle(v,q),zM\_d(z)(v,q)\right\rangle\_{H\_0\oplus H\_1} \geqslant \widetilde{c} \left\|(v,q)\right\|\_{H\_0\oplus H\_1}^2,$$

where $\widetilde{c} := \min\left\{c - dK(d), \frac{3}{4}d - \rho_0\right\} > 0$, showing the assertion.

We are now in the position to state the main result for exponential stability of hyperbolic-type equations.

**Theorem 11.5.4** *Let* $C \colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ *be a densely defined, closed, linear and boundedly invertible operator between two Hilbert spaces* $H_0$ *and* $H_1$*. Furthermore, let* $M \colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(H_0)$ *be a material law of the form*

$$M(z) = M\_0(z) + z^{-1} M\_1(z) \quad (z \in \text{dom}(M)),$$

*where* $M_0, M_1 \colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(H_0)$ *are bounded analytic functions. Assume that there exist* $c, \nu_0 > 0$ *such that* $\mathbb{C}_{\operatorname{Re}>-\nu_0} \setminus \operatorname{dom}(M)$ *is discrete and*

$$\operatorname{Re}\left\langle u, zM(z)u \right\rangle\_{H\_0} \geqslant c \left\| u \right\|\_{H\_0}^2$$

*for each* $u \in H_0$, $z \in \operatorname{dom}(M)$*. Then there exists some* $d > 0$ *such that problem (11.9) is well-posed and exponentially stable.*

*Proof* We first note that by Proposition 11.5.3 there exist $\rho_0, d, \widetilde{c} > 0$ such that

$$\operatorname{Re}\left\langle(v,q),zM\_d(z)(v,q)\right\rangle\_{H\_0\oplus H\_1} \geqslant \widetilde{c} \left\|(v,q)\right\|\_{H\_0\oplus H\_1}^2$$

for all $z \in \operatorname{dom}(M) \cap \mathbb{C}_{\operatorname{Re}>-\rho_0}$ and $(v, q) \in H_0 \oplus H_1$. Since $M$ is a material law, so is $M_d$ and thus, well-posedness of (11.9) follows from Picard's theorem (see Theorem 6.2.1). Since

$$
\begin{pmatrix} 0 & -C^\* \\ C & 0 \end{pmatrix}
$$

is skew-selfadjoint, the above estimate yields that $zM_d(z) + \begin{pmatrix} 0 & -C^* \\ C & 0 \end{pmatrix}$ is boundedly invertible for each $z \in \operatorname{dom}(M) \cap \mathbb{C}_{\operatorname{Re}>-\rho_0}$ with

$$\sup\_{z \in \operatorname{dom}(M) \cap \mathbb{C}\_{\operatorname{Re}>-\rho\_0}} \|T\_d(z)\| \leqslant \frac{1}{\widetilde{c}},$$

where

$$T\_d(z) := \left(zM\_d(z) + \begin{pmatrix} 0 & -C^\* \\ C & 0 \end{pmatrix}\right)^{-1}.$$

Setting $\mu := \min\{\nu_0, \rho_0\}$, we infer that $T_d$ is defined on the whole of $\mathbb{C}_{\operatorname{Re}>-\mu}$ except for a discrete set. Since $T_d$ is holomorphic and bounded, Riemann's theorem on removable singularities implies that $T_d$ can be extended to a holomorphic and bounded function on $\mathbb{C}_{\operatorname{Re}>-\mu}$. We denote this extension again by $T_d$. In particular, $T_d$ is a material law with $s_b(T_d) \leqslant -\mu$. Let now $\rho \in [0, \mu)$ and $(f, g) \in L_{2,\nu}(\mathbb{R}; H_0 \oplus H_1) \cap L_{2,-\rho}(\mathbb{R}; H_0 \oplus H_1)$, where $\nu > 0$ is large enough to ensure well-posedness. By Theorem 5.3.6 we derive

$$T\_d(\partial\_{t,\nu})(f,g) = T\_d(\partial\_{t,-\rho})(f,g) \in L\_{2,-\rho}(\mathbb{R}; H\_0 \oplus H\_1),$$

and since $T_d(\partial_{t,\nu})(f, g)$ is nothing but the solution of (11.9) with the right-hand side replaced by $(f, g)$, exponential stability follows.

**Definition** We call the equation

$$\left(\partial\_{t,\nu}^2 M(\partial\_{t,\nu}) + C^\* C\right) u = f$$

*exponentially stable* if there exists some $d > 0$ such that the equation

$$\left(\partial\_{t,\nu} M\_d(\partial\_{t,\nu}) + \begin{pmatrix} 0 & -C^\* \\ C & 0 \end{pmatrix}\right)v = g$$

is exponentially stable.

## **11.6 Examples of Exponentially Stable Hyperbolic Problems**

We will illustrate our findings by providing two concrete examples. Firstly, we discuss the damped wave equation in an abstract form and, secondly, we consider the dual phase lag model, as it was introduced in Sect. 7.4.

#### **The Damped Wave Equation**

We start by formulating an immediate corollary of our main stability theorem.

**Corollary 11.6.1** *Let* $C \colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ *be a densely defined, closed, linear and boundedly invertible operator between two Hilbert spaces* $H_0$ *and* $H_1$ *and let* $M_0, M_1 \in L(H_0)$ *such that* $M_0$ *is selfadjoint and* $M_0 \geqslant 0$, $\operatorname{Re} M_1 \geqslant c > 0$*. Then the second order problem*

$$\left(\partial\_{t,\nu}^2 M\_0 + \partial\_{t,\nu} M\_1 + C^\* C\right) u = f$$

*is exponentially stable.*

*Proof* We have to prove that the material law

$$M(z) := M\_0 + z^{-1}M\_1 \quad (z \in \mathbb{C} \backslash \{0\})$$

satisfies the assumptions of Theorem 11.5.4. For $\operatorname{Re} z \geqslant 0$ we have

$$\operatorname{Re}\left\langle u, zM(z)u \right\rangle\_{H\_0} \geqslant c \left\| u \right\|\_{H\_0}^2 \quad (u \in H\_0),$$

since $\operatorname{Re} zM_0 \geqslant 0$. Moreover, for $\operatorname{Re} z \in [-\rho_0, 0]$ with $\rho_0 < \frac{c}{\|M_0\|}$ (we set $\frac{c}{0} := \infty$) we have that

$$\operatorname{Re} \left\langle u, zM(z)u \right\rangle\_{H\_0} \geqslant (-\rho\_0 \|M\_0\| + c) \left\| u \right\|\_{H\_0}^2 \quad (u \in H\_0).$$

Since $\mathbb{C}_{\operatorname{Re}>-\rho_0} \setminus \operatorname{dom}(M) = \{0\}$, we can apply Theorem 11.5.4.

We now come to a concrete realisation of the operator $C$. Let $\Omega \subseteq \mathbb{R}^d$ be open and contained in a slab. According to Corollary 11.3.2 the space $\operatorname{ran}(\operatorname{grad}_0)$ is closed and by Lemma 11.3.4 the operator

$$C := \iota^\*\_{\text{ran}(\text{grad}\_0)} \text{ grad}\_0 \colon \text{dom}(\text{grad}\_0) \subseteq L\_2(\Omega) \to \text{ran}(\text{grad}\_0)$$

is densely defined, closed and boundedly invertible, and its adjoint is given by

$$C^\* = -\operatorname{div} \iota\_{\operatorname{ran}(\operatorname{grad}\_0)}.$$

Thus, we have that

$$C^\*C = -\operatorname{div} \iota\_{\text{ran}(\text{grad}\_0)} \iota\_{\text{ran}(\text{grad}\_0)}^\* \operatorname{grad}\_0 = -\operatorname{div} \operatorname{grad}\_0.$$

Let now $M_0, M_1 \in L(L_2(\Omega))$ with $M_0$ selfadjoint and $M_0 \geqslant 0$, $\operatorname{Re} M_1 \geqslant c > 0$. By Corollary 11.6.1 the equation

$$\left(\partial\_{t,\nu}^2 M\_0 + \partial\_{t,\nu} M\_1 - \operatorname{div} \operatorname{grad}\_0\right) u = f \tag{11.10}$$

is exponentially stable.

*Remark 11.6.2* We emphasise that this result yields the classical exponential stability for the damped wave equation; i.e., the situation where $M_0 = 1$. However, Corollary 11.6.1 is also applicable in the situation where $M_0 = \mathbb{1}_{\Omega_0}$ for some $\Omega_0 \subseteq \Omega$ and $\operatorname{Re} M_1 \geqslant c$. In this case, Eq. (11.10) is a coupled system of the damped wave equation inside $\Omega_0$ and of the heat equation outside $\Omega_0$.

#### **Dual Phase Lag Heat Conduction**

We recall the setting of Sect. 7.4, where we have discussed the equations of dual phase lag heat conduction on an open and bounded subset $\Omega \subseteq \mathbb{R}^d$ within the framework of evolutionary equations. The equations under consideration consist of the heat flux balance

$$
\partial\_{t,\nu}\theta + \operatorname{div} q = Q,
$$

and a modified Fourier's law

$$(1 + s\_q \partial\_{t,\nu} + \frac{1}{2} s\_q^2 \partial\_{t,\nu}^2) q = -(1 + s\_\theta \partial\_{t,\nu}) \operatorname{grad} \theta,\tag{11.11}$$

where $s_q \in \mathbb{R}$, $s_\theta > 0$ are given. Note that $(1 + s_\theta \partial_{t,\nu})$ is boundedly invertible for $\nu > -\frac{1}{s_\theta}$ and hence, (11.11) yields

$$-\operatorname{grad}\theta = \partial\_{t,\nu}\left(\partial\_{t,\nu}^{-1} + s\_q + \frac{1}{2}s\_q^2 \partial\_{t,\nu}\right)(1 + s\_\theta \partial\_{t,\nu})^{-1}q.$$

Applying the operator $\partial_{t,\nu}(\partial_{t,\nu}^{-1} + s_q + \frac{1}{2}s_q^2 \partial_{t,\nu})(1 + s_\theta \partial_{t,\nu})^{-1}$ to the heat flux balance equation (and assuming that $Q \in \operatorname{dom}(\partial_{t,\nu})$) we obtain the following second order problem

$$
\partial\_{t,\nu}^2 \left( \partial\_{t,\nu}^{-1} + s\_q + \frac{1}{2} s\_q^2 \partial\_{t,\nu} \right) (1 + s\_\theta \partial\_{t,\nu})^{-1} \theta - \operatorname{div} \operatorname{grad} \theta = \widetilde{Q}, \tag{11.12}
$$

for a suitable source term $\widetilde{Q}$. Assuming Dirichlet boundary conditions for $\theta$, the equation takes the form

$$\left(\partial\_{t,\nu}^2 M(\partial\_{t,\nu}) + C^\*C\right)\theta = \widetilde{Q}$$

with $C := \iota_{\operatorname{ran}(\operatorname{grad}_0)}^* \operatorname{grad}_0 \colon \operatorname{dom}(\operatorname{grad}_0) \subseteq L_2(\Omega) \to \operatorname{ran}(\operatorname{grad}_0)$ and

$$M(z) = \frac{z^{-1} + s\_q + \frac{1}{2}s\_q^2 z}{1 + s\_\theta z} \quad \left( z \in \mathbb{C} \setminus \left\{ 0, -\frac{1}{s\_\theta} \right\} \right).$$

Note that

$$M(z) = \frac{s\_{q} + \frac{1}{2}s\_{q}^{2}z}{1 + s\_{\theta}z} + z^{-1}\frac{1}{1 + s\_{\theta}z}$$

and hence, *M* is indeed of the form considered in Sect. 11.5 with

$$M\_0(z) = \frac{s\_q + \frac{1}{2}s\_q^2 z}{1 + s\_\theta z}, \quad M\_1(z) = \frac{1}{1 + s\_\theta z},$$

which are both bounded if we restrict the domain of $M$ to a right half-plane $\mathbb{C}_{\operatorname{Re}>-\frac{1}{s_\theta}+\varepsilon}$ for some $\varepsilon > 0$.
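The decomposition can be verified by a one-line computation over the common denominator:

```latex
\frac{s_q + \tfrac{1}{2}s_q^{2}z}{1+s_\theta z} + z^{-1}\,\frac{1}{1+s_\theta z}
= z^{-1}\,\frac{1 + s_q z + \tfrac{1}{2}s_q^{2}z^{2}}{1+s_\theta z}
= \frac{z^{-1} + s_q + \tfrac{1}{2}s_q^{2}z}{1+s_\theta z}
= M(z).
```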

**Proposition 11.6.3** *If* $0 < \frac{s_q}{s_\theta} < 2$ *then the dual phase lag model (11.12) is exponentially stable.*

*Proof* We apply Theorem 11.5.4. For this we need to show that there exists *c >* 0 such that

$$\operatorname{Re}\left\langle u, zM(z)u \right\rangle\_{L\_2(\Omega)} \geqslant c \left\| u \right\|\_{L\_2(\Omega)}^2$$

for each $u \in L_2(\Omega)$ and $z \in \mathbb{C}_{\operatorname{Re}>-\nu_0} \cap \operatorname{dom}(M)$ for some $0 < \nu_0 < \frac{1}{s_\theta}$. Indeed, this is sufficient for exponential stability, since $\mathbb{C}_{\operatorname{Re}>-\nu_0} \setminus \operatorname{dom}(M) = \{0\}$ is discrete and $C = \iota_{\operatorname{ran}(\operatorname{grad}_0)}^* \operatorname{grad}_0$ is boundedly invertible. Similar to the proof of Lemma 7.4.3 we set $\sigma := \frac{s_q}{s_\theta}$ and obtain

$$zM(z) = \frac{1}{2}s\_q z \sigma + \sigma \left(1 - \frac{1}{2}\sigma \right) + \frac{1 - \sigma \left(1 - \frac{1}{2}\sigma \right)}{1 + s\_\theta z}$$

for each $z \in \operatorname{dom}(M)$. Since $0 < \sigma < 2$ we obtain $0 < \sigma\left(1 - \frac{1}{2}\sigma\right) \leqslant \frac{1}{2}$ and hence,

$$\begin{aligned} \operatorname{Re} z M(z) &= \frac{1}{2} s\_q \operatorname{Re} z\, \sigma + \sigma \left( 1 - \frac{1}{2} \sigma \right) + \frac{\left( 1 - \sigma \left( 1 - \frac{1}{2} \sigma \right) \right) \left( 1 + s\_\theta \operatorname{Re} z \right)}{\left| 1 + s\_\theta z \right|^2} \\ &\geqslant -\frac{1}{2} s\_q \nu\_0 \sigma + \sigma \left( 1 - \frac{1}{2} \sigma \right) =: c\_{\nu\_0} \end{aligned}$$

for each $z \in \mathbb{C}_{\operatorname{Re}>-\nu_0} \cap \operatorname{dom}(M)$ with $0 < \nu_0 < \frac{1}{s_\theta}$. Choosing now $0 < \nu_0 < \min\left\{\frac{1}{s_\theta}, \frac{2-\sigma}{s_q}\right\}$, we obtain $c_{\nu_0} > 0$ and thus, Theorem 11.5.4 is applicable, which yields the assertion.
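As a quick plausibility check (not part of the proof), the algebra behind the proof can be verified numerically; the parameter values below are illustrative choices with $s_q/s_\theta \in (0,2)$:

```python
import numpy as np

# Check that the partial-fraction form of z*M(z) used in the proof of
# Proposition 11.6.3 agrees with the definition of M, and that the lower
# bound c_{nu_0} is positive for an admissible nu_0.
s_q, s_theta = 0.5, 1.0
sigma = s_q / s_theta

def zM(z):
    # z*M(z) with M(z) = (1/z + s_q + s_q**2 * z / 2) / (1 + s_theta * z)
    return (1 + s_q * z + 0.5 * s_q**2 * z**2) / (1 + s_theta * z)

def zM_split(z):
    # the decomposition from the proof, with a = sigma * (1 - sigma/2)
    a = sigma * (1 - 0.5 * sigma)
    return 0.5 * s_q * sigma * z + a + (1 - a) / (1 + s_theta * z)

zs = np.array([1 + 1j, 2 - 0.5j, 0.3 + 3j, 5.0 + 0j])
assert np.allclose(zM(zs), zM_split(zs))

# c_{nu_0} > 0 for any 0 < nu_0 < min(1/s_theta, (2 - sigma)/s_q)
nu0 = 0.9 * min(1 / s_theta, (2 - sigma) / s_q)
c_nu0 = -0.5 * s_q * nu0 * sigma + sigma * (1 - 0.5 * sigma)
assert c_nu0 > 0
```
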

## **11.7 Comments**

The results of this chapter are based on the results obtained in [116, Section 2]. There, Laplace transform techniques are used to characterise the exponential stability of evolutionary equations in a slightly more general setting. In particular, further criteria for exponential stability of parabolic- and hyperbolic-type equations are given, which also allow for the treatment of integro-differential equations.

In general, whether or not a given partial differential equation is (exponentially) stable is both an important and a classical question in the area of equations depending on time. The understanding of this question, for instance, contributes to the study of equilibria of non-linear equations. In the linear case, in particular in the framework of $C_0$-semigroups, stability has been studied intensively, resulting in an abundance of criteria. Due to the strong continuity of the semigroup and, thus, of the considered solutions, (exponential) stability is defined via pointwise estimates. As an example criterion we mention Datko's theorem [29] (see also [6, Theorem 5.1.2]), which states that a $C_0$-semigroup is exponentially stable if and only if the solution operator associated with the equation

$$\left(\partial\_{t,\nu} + A\right)U = F$$

leaves $L_p(\mathbb{R}_{\geqslant 0}; H)$ invariant for some (or equivalently all) $p \in [1,\infty)$. As it turns out, the latter is equivalent to the invariance of $L_{2,-\rho}(\mathbb{R}; H)$ for some $\rho > 0$ and thus, our notion of exponential stability coincides with the usual one used in the theory of $C_0$-semigroups. Another important theorem on the exponential stability of $C_0$-semigroups on Hilbert spaces is the Theorem of Gearhart–Prüß [96] (see also [38, Chapter 5, Theorem 1.11]), where the exponential stability of a $C_0$-semigroup is characterised in terms of the resolvent of its generator.

The wave equation without damping is not exponentially stable. In fact, one can even show that energy is preserved during the evolution. Hence, it is a natural question whether it is possible to introduce suitable 'dampers' (i.e., lower order coefficients) leading to an exponentially stable equation. The criterion in Corollary 11.6.1 shows that if the damper $M_1$ is 'global' in the sense that it is induced by a multiplication operator $a(\mathrm{m})$ for a strictly positive function $a$, the resulting damped wave equation is exponentially stable.

A less general, more detailed analysis of the actual wave equation shows that it is possible to obtain an exponentially stable damped wave equation even if the damper is only local or introduced via boundary conditions. Indeed, in [9] the authors proved exponential stability of the damped equation if the damping area $[a > 0] := \{x \in \Omega \,;\, a(x) > 0\}$ satisfies the geometric optics condition. This is, for instance, the case if $[a > 0]$ contains a neighbourhood of the boundary $\partial\Omega$.

Besides exponential stability, which is the only type of stability studied so far within the current framework of evolutionary equations, different kinds of asymptotic behaviours were addressed and characterised for *C*0-semigroups. We just mention the celebrated Arendt–Batty–Lyubich–Vu theorem [4, 61] on strong stability of *C*0-semigroups or the Theorem of Borichev–Tomilov [15] on the polynomial stability of *C*0-semigroups on Hilbert spaces.

## **Exercises**

**Exercise 11.1** Let $H$ be a Hilbert space, $\nu, \rho \in \mathbb{R}$ and $u \in L_{1,\mathrm{loc}}(\mathbb{R}; H)$. Prove the following statements:

(a) If $u \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(\partial_{t,\rho})$ then $\partial_{t,\nu}u = \partial_{t,\rho}u$.
(b) If $u \in \operatorname{dom}(\partial_{t,\nu})$ such that $u, \partial_{t,\nu}u \in L_{2,\rho}(\mathbb{R}; H)$ then $u \in \operatorname{dom}(\partial_{t,\rho})$.

**Exercise 11.2** Prove Lemma 11.3.3.

**Exercise 11.3** Let $H_0, H_1$ be Hilbert spaces and $A \colon \operatorname{dom}(A) \subseteq H_0 \to H_1$ a densely defined closed linear operator. Moreover, we assume that $A$ has closed range. Show that the adjoint of the operator $\iota_{\operatorname{ran}(A)}^* A \colon \operatorname{dom}(A) \subseteq H_0 \to \operatorname{ran}(A)$ is given by $A^*\iota_{\operatorname{ran}(A)}$. If additionally $A$ is one-to-one, show that $\iota_{\operatorname{ran}(A)}^* A$ is boundedly invertible.

**Exercise 11.4** Let $\Omega \subseteq \mathbb{R}^d$ be open and contained in a slab. We consider heat conduction with a memory term given by the equations

$$\begin{aligned} \partial\_{t,\nu}\theta + \operatorname{div} q &= f, \\ q &= -(1 - k\*)\operatorname{grad}\_0\theta, \end{aligned} \tag{11.13}$$

where $k \in L_{1,-\rho_1}(\mathbb{R}_{\geqslant 0}; \mathbb{R})$ for some $\rho_1 > 0$ with

$$\int\_0^\infty |k(t)| \,\mathrm{d}t < 1.$$

Write (11.13) as a suitable evolutionary equation and prove that this equation is exponentially stable.

**Exercise 11.5** Let $A \in \mathbb{C}^{n\times n}$ for some $n \in \mathbb{N}$ and consider the evolutionary equation

$$(\partial\_{t,\nu} + A)U = F.$$

Prove that the solution operators associated with this problem are exponentially stable if and only if *A* has only eigenvalues with strictly positive real part.
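The spectral criterion of this exercise can be illustrated numerically: the solution of the homogeneous problem is $U(t) = \mathrm{e}^{-tA}U(0)$, which decays exponentially precisely when all eigenvalues of $A$ have strictly positive real part. The matrix below is an illustrative choice, not taken from the text:

```python
import numpy as np

# A toy matrix whose eigenvalues (1 and 2) have strictly positive real part.
A = np.array([[1.0, 5.0],
              [0.0, 2.0]])

def mat_exp(M):
    """Matrix exponential via eigendecomposition (M assumed diagonalisable)."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

assert np.all(np.linalg.eigvals(A).real > 0)

# The solution norm t -> ||exp(-tA) U0|| decays along increasing times.
U0 = np.array([1.0, 1.0])
norms = [np.linalg.norm(mat_exp(-t * A) @ U0) for t in (0.0, 5.0, 10.0)]
assert norms[0] > norms[1] > norms[2]
```
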

**Exercise 11.6** Let $\Omega \subseteq \mathbb{R}^d$ be open.

(a) Let $\varphi \in C_c^\infty(\Omega)^d$. Prove *Korn's inequality*

$$\left\|\operatorname{Grad}\varphi\right\|\_{L\_2(\Omega)^{d\times d}\_{\mathrm{sym}}}^2 \geqslant \frac{1}{2} \sum\_{j=1}^d \left\|\operatorname{grad}\varphi\_j\right\|\_{L\_2(\Omega)^d}^2.$$

(b) Use Korn's inequality to prove that for $u \in L_2(\Omega)^d$ we have

$$u \in \operatorname{dom}(\operatorname{Grad}\_0) \iff \forall j \in \{1,\dots,d\} \colon u\_j \in \operatorname{dom}(\operatorname{grad}\_0).$$

Moreover, show that in either case

$$\frac{1}{2} \sum\_{j=1}^d \left\| \operatorname{grad}\_0 \boldsymbol{u}\_j \right\|\_{L\_2(\Omega)^d}^2 \leqslant \left\| \operatorname{Grad}\_0 \boldsymbol{u} \right\|\_{L\_2(\Omega)^{d \times d}\_{\text{sym}}}^2 \leqslant \sum\_{j=1}^d \left\| \operatorname{grad}\_0 \boldsymbol{u}\_j \right\|\_{L\_2(\Omega)^d}^2.$$

(c) Let now $\Omega$ be contained in a slab. Prove that $\operatorname{Grad}_0$ is one-to-one and has closed range.

**Exercise 11.7** Let $\Omega \subseteq \mathbb{R}^d$ be open and $a \in L(L_2(\Omega)^d)$ with $\operatorname{Re} a \geqslant c > 0$.

(a) Let $\nu > 0$ and $f \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$. Moreover, assume that $\Omega$ is contained in a slab and define $\widetilde{a} := \iota_{\operatorname{ran}(\operatorname{grad}_0)}^* a\, \iota_{\operatorname{ran}(\operatorname{grad}_0)}$. Let $\theta \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$, $q \in L_{2,\nu}(\mathbb{R}; L_2(\Omega)^d)$ satisfy

$$
\left(\partial\_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad}\_0 & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}
$$

and $\widetilde{\theta} \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$, $\widetilde{q} \in L_{2,\nu}(\mathbb{R}; \operatorname{ran}(\operatorname{grad}_0))$ satisfy

$$
\left(\partial\_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & \widetilde{a}^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}\iota\_{\operatorname{ran}(\operatorname{grad}\_0)} \\ \iota\_{\operatorname{ran}(\operatorname{grad}\_0)}^\* \operatorname{grad}\_0 & 0 \end{pmatrix}\right) \begin{pmatrix} \widetilde{\theta} \\ \widetilde{q} \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
$$

Show that $(\theta, \iota_{\operatorname{ran}(\operatorname{grad}_0)}^* q) = (\widetilde{\theta}, \widetilde{q})$.

(b) Let $\Omega$ be bounded and consider the evolutionary equation

$$
\left(\partial\_{t,\nu}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
$$

Show that the associated solution operators are not exponentially stable.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 12 Boundary Value Problems and Boundary Value Spaces**

This chapter is devoted to the study of inhomogeneous boundary value problems. For this, we shall reformulate the boundary value problem again into a form which fits within the general framework of evolutionary equations. In order to have an idea of the type of boundary values which make sense to study, we start off with a section that deals with the boundary values of functions in the domain of the gradient operator defined on a half-space in $\mathbb{R}^d$ (for $d = 1$ we have $L_2(\mathbb{R}^{d-1}) = \mathbb{K}$).

# **12.1 The Boundary Values of $H^1(\mathbb{R}^{d-1} \times \mathbb{R}_{>0})$**

In this section we let $\Omega := \mathbb{R}^{d-1}\times\mathbb{R}_{>0}$ and $f \in H^1(\Omega)$; our aim is to make sense of the function $\mathbb{R}^{d-1} \ni \check{x} \mapsto f(\check{x}, 0)$. Note that this makes no sense if we only assume $f \in L_2(\Omega)$ since $\mathbb{R}^{d-1} \times \{0\} = \partial\Omega$ is a set of ($d$-dimensional) Lebesgue measure zero. However, if we assume $f$ to be weakly differentiable, something more can be said and the boundary values can be defined by means of a continuous extension of the so-called trace map. In order to properly formulate this, we need the following density result.

**Theorem 12.1.1** *The set* $D := \left\{\varphi \colon \Omega \to \mathbb{K} \,;\, \exists \psi \in C_c^\infty(\mathbb{R}^d) \colon \psi|_\Omega = \varphi\right\}$ *is dense in the space* $H^1(\Omega)$*.*

We will need a density result for $H^1(\mathbb{R}^d)$ first.

**Lemma 12.1.2** $C_c^\infty(\mathbb{R}^d)$ *is dense in* $H^1(\mathbb{R}^d)$*.*

*Proof* Let $f \in H^1(\mathbb{R}^d)$. We first show that $f$ can be approximated by functions with compact support. For this let $\phi \in C_c^\infty(\mathbb{R}^d)$ with the properties $0 \leqslant \phi \leqslant 1$, $\phi = 1$ on $B(0, 1/2)$ and $\phi = 0$ on $\mathbb{R}^d \setminus B(0, 1)$. For all $k \in \mathbb{N}$ we put $\phi_k := \phi(\cdot/k)$ and $f_k := \phi_k f \in L_2(\mathbb{R}^d)$. Then $f_k$ has support contained in $B[0, k]$. The dominated convergence theorem implies that $f_k \to f$ in $L_2(\mathbb{R}^d)$ as $k \to \infty$. Next, let $\psi \in C_c^\infty(\mathbb{R}^d)^d$ and compute for all $k \in \mathbb{N}$

$$\begin{aligned} -\langle f\_k, \operatorname{div} \psi \rangle &= -\langle \phi\_k f, \operatorname{div} \psi \rangle = -\langle f, \phi\_k \operatorname{div} \psi \rangle = -\langle f, \operatorname{div} (\phi\_k \psi) - (\operatorname{grad} \phi\_k) \cdot \psi \rangle \\ &= -\langle f, \operatorname{div} (\phi\_k \psi) \rangle + \langle f \operatorname{grad} \phi\_k, \psi \rangle \\ &= \left\langle (\operatorname{grad} f) \phi\_k + \frac{1}{k} f(\operatorname{grad} \phi)(\cdot/k), \psi \right\rangle, \end{aligned}$$

which shows that $f_k \in \operatorname{dom}(\operatorname{grad}) = H^1(\mathbb{R}^d)$ and

$$\operatorname{grad} f\_k = (\operatorname{grad} f) \phi\_k + \frac{1}{k} f \left( \operatorname{grad} \phi \right) (\cdot/k).$$

From this expression for $\operatorname{grad} f_k$ we observe $\operatorname{grad} f_k \to \operatorname{grad} f$ in $L_2(\mathbb{R}^d)^d$ by dominated convergence. Hence, $f_k \to f$ in $\operatorname{dom}(\operatorname{grad}) = H^1(\mathbb{R}^d)$.

To conclude the proof of this lemma it suffices to revisit Exercise 3.2. For this, let $(\psi_k)_k$ in $C_c^\infty(\mathbb{R}^d)$ be a $\delta$-sequence. Then, by Exercise 3.2, we infer $\psi_k * f \to f$ in $L_2(\mathbb{R}^d)$ as $k \to \infty$ and hence, by Exercise 12.1, it follows also that $\operatorname{grad}(\psi_k * f) = \psi_k * \operatorname{grad} f \to \operatorname{grad} f$ (note the component-wise definition of the convolution). A combination of the first part of this proof together with an estimate for the support of the convolution (see again Exercise 3.2) yields the assertion.
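The $\delta$-sequence argument can be illustrated numerically in one dimension: convolving with a normalised, shrinking bump recovers the function in the $L_2$-sense. The grid, the Gaussian $f$ and the bump $\psi$ below are illustrative choices, not from the text:

```python
import numpy as np

# psi_k(x) = k * psi(k x) for a smooth bump psi with integral 1; we check
# that the L2-error || psi_k * f - f || decreases along k = 2, 4, 8.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)

def bump(y):
    out = np.zeros_like(y)
    m = np.abs(y) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - y[m]**2))
    return out

def mollify(f, k):
    psi_k = bump(k * x)                  # scaled bump, support [-1/k, 1/k]
    psi_k /= np.sum(psi_k) * dx          # normalise: integral of psi_k = 1
    return np.convolve(f, psi_k, mode="same") * dx

l2_err = [np.sqrt(np.sum((mollify(f, k) - f) ** 2) * dx) for k in (2, 4, 8)]
assert l2_err[0] > l2_err[1] > l2_err[2]   # error decreases as k grows
```
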

*Proof of Theorem 12.1.1* Let $f \in H^1(\Omega)$. The approximation of $f$ by functions in $D$ is done in two steps. First, we shift $f$ in the negative $e_d$-direction to avoid the boundary, and then we convolve the shifted $f$ to obtain smooth approximants in $D$.

Let $\widetilde{f} \in L_2(\mathbb{R}^d)$ be the extension of $f$ by zero. Put $e_d := (\delta_{jd})_{j\in\{1,\dots,d\}}$, the $d$-th unit vector. Then for all $\tau > 0$ we have $\Omega + \tau e_d \subseteq \Omega$ and, thus, by Exercise 12.2, we deduce $f_\tau := \widetilde{f}(\cdot + \tau e_d)|_\Omega \to f$ in $H^1(\Omega)$ as $\tau \to 0$. Thus, it suffices to approximate $f_\tau$ for $\tau > 0$.

Let $\tau > 0$ and let $(\psi_k)_k$ in $C_c^\infty(\mathbb{R}^d)$ be a $\delta$-sequence. Then $\psi_k * \widetilde{f}(\cdot + \tau e_d) \in H^1(\mathbb{R}^d)$, by Exercise 12.1. Define $f_{k,\tau} := \left(\psi_k * \widetilde{f}(\cdot + \tau e_d)\right)|_\Omega$. Then we obtain that $f_{k,\tau} \to f_\tau$ in $H^1(\Omega)$ as $k \to \infty$. Indeed, the only thing left to prove is that $\operatorname{grad} f_{k,\tau} \to \operatorname{grad} f_\tau$ in $L_2(\Omega)^d$ as $k \to \infty$. For this, we denote by $\widetilde{g}$ the extension of $\operatorname{grad} f$ by $0$. Since $\widetilde{g} \in L_2(\mathbb{R}^d)^d$ it suffices to show that $\operatorname{grad} f_{k,\tau} = \psi_k * \widetilde{g}_\tau$ on $\Omega$ for all large enough $k \in \mathbb{N}$, where $\widetilde{g}_\tau = \widetilde{g}(\cdot + \tau e_d)$. Let $k > \frac{1}{\tau}$. Then for all $x \in \Omega$ and $y \in \operatorname{spt} \psi_k \subseteq [-1/k, 1/k]^d$ we infer $x - y + \tau e_d \in \Omega$. In particular, $\widetilde{f}(\cdot - y + \tau e_d) \in H^1(\Omega)$ and $\operatorname{grad} \widetilde{f}(\cdot - y + \tau e_d) = \widetilde{g}(\cdot - y + \tau e_d)$. Take $\eta \in C_c^\infty(\Omega)^d$ and compute

$$\begin{aligned} -\left\langle f\_{k,\tau}, \operatorname{div} \eta \right\rangle\_{L\_2(\Omega)} &= -\int\_{\Omega} \int\_{\mathbb{R}^d} \psi\_k(\mathbf{x} - \mathbf{y}) \widetilde{f}(\mathbf{y} + \tau e\_d)^\* \operatorname{dy} \operatorname{div} \eta(\mathbf{x}) \, \mathrm{d}x \\ &= -\int\_{\Omega} \int\_{\mathbb{R}^d} \psi\_k(\mathbf{y}) \widetilde{f}(\mathbf{x} - \mathbf{y} + \tau e\_d)^\* \operatorname{dy} \operatorname{div} \eta(\mathbf{x}) \, \mathrm{d}x \\ &= -\int\_{\Omega} \int\_{[-1/k, 1/k]^d} \psi\_k(\mathbf{y}) f(\mathbf{x} - \mathbf{y} + \tau e\_d)^\* \operatorname{dy} \operatorname{div} \eta(\mathbf{x}) \, \mathrm{d}x \end{aligned}$$

$$\begin{aligned} &= -\int\_{\left[-1/k, 1/k\right]^d} \psi\_k(y) \langle f(\cdot - y + \tau e\_d), \operatorname{div}\eta\rangle\_{L\_2(\Omega)} \,\mathrm{d}y \\ &= \int\_{\left[-1/k, 1/k\right]^d} \psi\_k(y) \langle \widetilde{g}(\cdot - y + \tau e\_d), \eta\rangle\_{L\_2(\Omega)^d} \,\mathrm{d}y \\ &= \langle \psi\_k \* \widetilde{g}\_\tau, \eta\rangle\_{L\_2(\Omega)^d}. \end{aligned}$$

As $\psi_k * \widetilde{f}(\cdot + \tau e_d) \in H^1(\mathbb{R}^d)$, we conclude the proof using Lemma 12.1.2.

With these preparations at hand, we can define the boundary trace for $H^1(\Omega)$.

**Theorem 12.1.3** *The operator*

$$\begin{aligned} \gamma \colon D \subseteq H^1(\Omega) &\to L\_2(\mathbb{R}^{d-1}) \\ f &\mapsto \left( \mathbb{R}^{d-1} \ni \check{x} \mapsto f(\check{x}, 0) \right) \end{aligned}$$

*is continuous, densely defined and, thus, admits a unique continuous extension to* $H^1(\Omega)$*, again denoted by* $\gamma$*. Moreover, we have*

$$\|\gamma f\|\_{L\_2(\mathbb{R}^{d-1})} \leqslant \left(2\|f\|\_{L\_2(\Omega)}\|\operatorname{grad} f\|\_{L\_2(\Omega)^d}\right)^{\frac{1}{2}} \leqslant \|f\|\_{H^1(\Omega)} \quad (f \in H^1(\Omega)).$$

*Proof* Note that $\gamma$ is densely defined by Theorem 12.1.1. Let $f \in C_c^\infty(\mathbb{R}^d)$ and let $R > 0$ be such that $\operatorname{spt} f \subseteq B(0, R)$. Then

$$\begin{aligned} \int\_{\mathbb{R}^{d-1}} \left| f(\check{x}, 0) \right|^{2} \mathrm{d}\check{x} &= -\int\_{\mathbb{R}^{d-1}} \int\_{0}^{R} \partial\_{d} \left| f(\check{x}, \hat{x}) \right|^{2} \mathrm{d}\hat{x} \,\mathrm{d}\check{x} \\ &= -\int\_{\Omega} \left( f(x)^{\*} \partial\_{d} f(x) + \partial\_{d} f(x)^{\*} f(x) \right) \mathrm{d}x \\ &\leqslant 2 \left\| f \right\|\_{L\_{2}(\Omega)} \left\| \operatorname{grad} f \right\|\_{L\_{2}(\Omega)^{d}}. \end{aligned}$$

The remaining inequality follows from $2ab \leqslant a^2 + b^2$ for all $a, b \in \mathbb{R}$.
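A one-dimensional numerical illustration of the trace estimate (here $\Omega = \mathbb{R}_{>0}$, so the trace is the value at $0$): $|f(0)|^2 \leqslant 2\|f\|_{L_2}\|f'\|_{L_2}$. The test function $f(x) = \mathrm{e}^{-x}$ is an illustrative choice; it even attains equality, since $\|f\|_{L_2} = \|f'\|_{L_2}$:

```python
import numpy as np

# Discretise (0, 40) and check |f(0)|^2 <= 2 ||f|| ||f'|| for f(x) = exp(-x).
x = np.linspace(0.0, 40.0, 400001)
dx = x[1] - x[0]
f = np.exp(-x)
df = -np.exp(-x)                          # derivative computed analytically

l2 = lambda g: np.sqrt(np.sum(np.abs(g) ** 2) * dx)   # Riemann-sum L2 norm
trace_sq = abs(f[0]) ** 2                 # |f(0)|^2 = 1
bound = 2.0 * l2(f) * l2(df)              # = 2 * sqrt(1/2) * sqrt(1/2) = 1

assert trace_sq <= bound + 1e-6
assert abs(bound - 1.0) < 1e-3            # equality case up to grid error
```
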

Except for one spatial dimension, where the boundary trace can be obtained by point evaluation, the boundary trace *γ* does not map onto the whole of *L*2*(*R*d*−1*)*. Hence, in order to define the space of all possible boundary values for a function in *H*<sup>1</sup> one uses a quotient construction: we set

$$H^{1/2}(\mathbb{R}^{d-1}) := \left\{ \gamma f \; ; \; f \in H^1(\Omega) \right\}$$

and endow *H*1*/*2*(*R*d*−1*)* with the norm

$$\|\gamma f\|\_{H^{1/2}(\mathbb{R}^{d-1})} := \inf \left\{ \|g\|\_{H^1(\Omega)} \; ; \; g \in H^1(\Omega),\, \gamma g = \gamma f \right\}.$$

It is not difficult to see that $H^{1/2}(\mathbb{R}^{d-1})$ is unitarily equivalent to $(\ker \gamma)^\perp$, where the orthogonal complement is computed with respect to the scalar product in $H^1(\Omega)$. Thus, $H^{1/2}(\mathbb{R}^{d-1})$ is a Hilbert space.

*Remark 12.1.4* The norm defined on the space *H*1*/*2*(*R*d*−1*)* given above is not the standard norm defined on this space. Indeed, following [72, Section 2.3.8] the usual norm is given by

$$\left(\|u\|^{2}\_{L\_{2}(\mathbb{R}^{d-1})} + \int\_{\mathbb{R}^{d-1}} \int\_{\mathbb{R}^{d-1}} \frac{|u(x) - u(y)|^{2}}{|x - y|^{d}} \,\mathrm{d}x\,\mathrm{d}y\right)^{1/2}$$

for $u \in H^{1/2}(\mathbb{R}^{d-1})$. However, this norm turns out to be equivalent to the norm given above, see e.g. [115, Section 4].

As the notation of this space suggests, it can also be defined as an interpolation space between $H^1(\mathbb{R}^{d-1})$ and $L\_2(\mathbb{R}^{d-1})$, see [60, Theorem 15.1].

# **12.2 The Boundary Values of $H(\operatorname{div}, \mathbb{R}^{d-1} \times \mathbb{R}\_{>0})$**

Let $\Omega := \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. There is also a space of corresponding boundary traces for the divergence operator. Similarly to the boundary values for the domain of the gradient operator, $H^1(\Omega)$, the construction of the boundary trace for $H(\operatorname{div})$-vector fields rests on a density result. The proof can be done along the lines of Theorem 12.1.1 and will be addressed in Exercise 12.3.

**Theorem 12.2.1** *$\mathcal{D}^d$ is dense in $H(\operatorname{div}, \Omega)$, where $\mathcal{D}$ is defined as in Theorem 12.1.1.*

Equipped with this result, we can describe all possible boundary values of $H(\operatorname{div}, \Omega)$. It will turn out that vector fields in $H(\operatorname{div}, \Omega)$ have a well-defined *normal* trace, which for $\Omega = \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$ is just the negative of the last coordinate of the vector field.

**Theorem 12.2.2** *The operator*

$$\begin{aligned} \gamma\_{\mathrm{n}} \colon \mathcal{D}^d \subseteq H(\operatorname{div}, \Omega) &\to \left( H^{1/2}(\mathbb{R}^{d-1}) \right)' =: H^{-1/2}(\mathbb{R}^{d-1}),\\ q &\mapsto \left( \mathbb{R}^{d-1} \ni \check{x} \mapsto -q\_d(\check{x}, 0) \right), \end{aligned}$$

*is densely defined, continuous with norm bounded by* 1 *and has dense range. Thus $\gamma\_{\mathrm{n}}$ admits a unique continuous extension to $H(\operatorname{div}, \Omega)$, again denoted by $\gamma\_{\mathrm{n}}$. Here, $-q\_d$ is the negative of the $d$-th component of $q$ pointing into the outward normal direction of $\Omega$, and $-q\_d$ is identified with the linear functional*

$$H^{1/2}(\mathbb{R}^{d-1}) \ni \gamma f \mapsto \langle -q\_d(\cdot, 0), \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})}.$$

*Moreover, for all f* ∈ dom*(*grad*) and q* ∈ dom*(*div*) we have*

$$
\langle \operatorname{div} q, f \rangle + \langle q, \operatorname{grad} f \rangle = (\gamma\_{\mathrm{n}} q)(\gamma f). \tag{12.1}
$$

*Proof* Let $f \in \mathcal{D}$ and $q \in \mathcal{D}^d$. Then integration by parts yields

$$\begin{aligned} \langle \operatorname{div} q, f \rangle + \langle q, \operatorname{grad} f \rangle &= \int\_{\Omega} \operatorname{div}(q^{\*} f) = -\int\_{\mathbb{R}^{d-1}} q\_d(\check{x}, 0)^{\*} f(\check{x}, 0) \, \mathrm{d}\check{x} \\ &= \langle -\gamma q\_d, \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})} = (\gamma\_{\mathrm{n}} q)(\gamma f). \end{aligned}$$

Hence,

$$\left| \langle \gamma\_{\mathrm{n}} q, \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})} \right| \leqslant \| q \|\_{H(\operatorname{div})} \| f \|\_{H^1}.$$

Since $\mathcal{D}$ is dense in $H^1(\Omega)$, the inequality remains true for all $f \in H^1(\Omega)$. Thus,

$$\left| \langle \gamma\_{\mathrm{n}} q, \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})} \right| \leqslant \| q \|\_{H(\operatorname{div})} \| f \|\_{H^1} \quad (f \in H^1(\Omega)).$$

Computing the infimum over all $g \in H^1(\Omega)$ with $\gamma g = \gamma f$, we deduce

$$\left| \langle \gamma\_{\mathrm{n}} q, \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})} \right| \leqslant \| q \|\_{H(\operatorname{div})} \| \gamma f \|\_{H^{1/2}(\mathbb{R}^{d-1})} \quad (f \in H^1(\Omega)).$$

Therefore $\gamma\_{\mathrm{n}} q \in H^{-1/2}(\mathbb{R}^{d-1})$ and $\| \gamma\_{\mathrm{n}} q \|\_{H^{-1/2}} \leqslant \| q \|\_{H(\operatorname{div})}$, which shows the continuity of $\gamma\_{\mathrm{n}}$. It is left to show that $\gamma\_{\mathrm{n}}$ has dense range. For this, take $\gamma f \in H^{1/2}(\mathbb{R}^{d-1})$ for some $f \in H^1(\Omega)$ such that

$$\langle \gamma\_{\mathrm{n}} g, \gamma f \rangle\_{L\_2(\mathbb{R}^{d-1})} = 0$$

for all $g \in \mathcal{D}^d$. Next, take $\widetilde{g} \in C\_c^{\infty}(\mathbb{R}^{d-1})$ and $\psi \in C\_c^{\infty}(\mathbb{R})$ with $\psi(0) = 1$. Then we set $g \colon (\check{x}, \hat{x}) \mapsto -e\_d \widetilde{g}(\check{x}) \psi(\hat{x}) \in \mathcal{D}^d$ and note that $\gamma\_{\mathrm{n}} g = \widetilde{g}$. Hence

$$\langle \gamma f, \widetilde{g} \rangle\_{L\_2(\mathbb{R}^{d-1})} = 0 \quad (\widetilde{g} \in C\_c^{\infty}(\mathbb{R}^{d-1})).$$

Thus, $\gamma f = 0$, which implies that the range of $\gamma\_{\mathrm{n}}$ is dense, as $H^{-1/2}(\mathbb{R}^{d-1})$ is a Hilbert space. The remaining formula (12.1) follows by continuously extending both the left- and right-hand side of the integration by parts formula from the beginning of the proof. Note that for this, we have used both Theorems 12.1.1 and 12.2.1. $\square$

**Corollary 12.2.3** *Let $f \in H^1(\Omega)$, $q \in H(\operatorname{div}, \Omega)$. Then $f \in \operatorname{dom}(\operatorname{grad}\_0)$ if and only if $\gamma f = 0$, and $q \in \operatorname{dom}(\operatorname{div}\_0)$ if and only if $\gamma\_{\mathrm{n}} q = 0$.*

*Proof* We only show the statement for $q$; the proof for $f$ is analogous. If $q \in \operatorname{dom}(\operatorname{div}\_0)$, then there exists a sequence $(\psi\_n)\_n$ in $C\_c^{\infty}(\Omega)^d$ such that $\psi\_n \to q$ in $H(\operatorname{div}, \Omega)$ as $n \to \infty$. Thus, by continuity of $\gamma\_{\mathrm{n}}$, we infer $0 = \gamma\_{\mathrm{n}} \psi\_n \to \gamma\_{\mathrm{n}} q$. Assume on the other hand that $q \in \operatorname{dom}(\operatorname{div})$ with $\gamma\_{\mathrm{n}} q = 0$. Using (12.1), we obtain for all $f \in \operatorname{dom}(\operatorname{grad})$

$$
\langle \text{div}\,q, f \rangle + \langle q, \text{grad}\,f \rangle = 0.
$$

This equality implies that $q \in \operatorname{dom}(\operatorname{grad}^{\*}) = \operatorname{dom}(\operatorname{div}\_0)$, which shows the remaining assertion. $\square$
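For $\Omega = \mathbb{R}\_{>0}$ ($d = 1$) the integration by parts formula (12.1) reads $\langle q', f \rangle + \langle q, f' \rangle = -q(0)f(0)$, since $\gamma\_{\mathrm{n}} q = -q(0)$. The following sketch checks this numerically for the hypothetical sample choices $q(x) = \mathrm{e}^{-x}$ and $f(x) = \mathrm{e}^{-2x}$ (truncating the half-line at $x = 40$ is an approximation).

```python
import numpy as np

# Formula (12.1) in one dimension, Omega = (0, inf):
# <div q, f> + <q, grad f> = (gamma_n q)(gamma f) = -q(0) * f(0).
x = np.linspace(0.0, 40.0, 400_001)
dx = x[1] - x[0]

def integrate(g):
    """Composite trapezoidal rule on the grid x."""
    return dx * (np.sum(g) - 0.5 * (g[0] + g[-1]))

q, dq = np.exp(-x), -np.exp(-x)              # q and div q = q'
f, df = np.exp(-2 * x), -2 * np.exp(-2 * x)  # f and grad f = f'

lhs = integrate(dq * f) + integrate(q * df)  # interior pairing
rhs = -q[0] * f[0]                           # normal trace paired with gamma f

assert abs(lhs - rhs) < 1e-6
```

Here both sides equal $-1$: the left-hand side is $\int\_0^\infty (qf)' = -q(0)f(0)$.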

The remaining part of this section is devoted to showing that the continuous extension of $\gamma\_{\mathrm{n}}$ maps onto $H^{-1/2}(\mathbb{R}^{d-1})$. For this we require the following observation, which will also be needed later on.

**Proposition 12.2.4** *Let $U \subseteq \mathbb{R}^d$ be open. Then*

$$H\_0(\text{div}, U)^{\perp\_{H(\text{div}, U)}} = \left\{ q \in H(\text{div}, U) \; ; \; \text{div} \, q \in H^1(U), q = \text{grad} \, \text{div} \, q \right\}.$$

*Proof* Let $q \in H(\operatorname{div}, U)$. Then $q \in H\_0(\operatorname{div}, U)^{\perp\_{H(\operatorname{div}, U)}}$ if and only if for all $r \in H\_0(\operatorname{div}, U)$ we have

$$\begin{aligned} 0 &= \langle r, q \rangle\_{H(\text{div}, U)} = \langle r, q \rangle\_{L\_2(U)^d} + \langle \text{div} \, r, \text{div} \, q \rangle\_{L\_2(U)} \\ &= \langle r, q \rangle\_{L\_2(U)^d} + \langle \text{div}\_0 \, r, \text{div} \, q \rangle\_{L\_2(U)}. \end{aligned}$$

The latter, in turn, is equivalent to $\operatorname{div} q \in \operatorname{dom}(\operatorname{div}\_0^{\*}) = \operatorname{dom}(\operatorname{grad}) = H^1(U)$ and $-\operatorname{grad} \operatorname{div} q = \operatorname{div}\_0^{\*} \operatorname{div} q = -q$. $\square$
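In one dimension with $U = (0, \infty)$, Proposition 12.2.4 identifies $H\_0(\operatorname{div}, U)^{\perp}$ with the solutions of $q'' = q$ in $L\_2$, i.e. the span of $q(x) = \mathrm{e}^{-x}$. The following sketch (sample bump field and grid are assumptions for illustration) verifies numerically that this $q$ is $H(\operatorname{div})$-orthogonal to a compactly supported field.

```python
import numpy as np

# Proposition 12.2.4 for U = (0, inf): q(x) = exp(-x) satisfies q = grad div q,
# so <r, q>_{H(div)} = int (r q + r' q') should vanish for every compactly
# supported r.  Sample bump (hypothetical choice): supported in (1, 3).
x = np.linspace(0.0, 40.0, 400_001)
dx = x[1] - x[0]

def integrate(g):
    """Composite trapezoidal rule on the grid x."""
    return dx * (np.sum(g) - 0.5 * (g[0] + g[-1]))

q, dq = np.exp(-x), -np.exp(-x)  # q and div q = q'; note (div q)' = q

t = x - 2.0
inside = np.abs(t) < 1.0
r = np.zeros_like(x)
r[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))  # smooth bump function
dr = np.zeros_like(x)
dr[inside] = r[inside] * (-2.0 * t[inside]) / (1.0 - t[inside] ** 2) ** 2

inner = integrate(r * q) + integrate(dr * dq)  # <r, q>_{H(div, U)}
assert abs(inner) < 1e-5
```

Analytically the inner product is exactly zero: integrating $r'\mathrm{e}^{-x}$ by parts cancels $r\mathrm{e}^{-x}$ because the bump vanishes at the boundary.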

**Theorem 12.2.5** *$\gamma\_{\mathrm{n}}$ maps onto $H^{-1/2}(\mathbb{R}^{d-1})$. In particular, we have*

$$\|q\|\_{H(\operatorname{div},\Omega)} \leqslant \|\gamma\_{\mathrm{n}} q\|\_{H^{-1/2}(\mathbb{R}^{d-1})}$$

*for all $q \in H\_0(\operatorname{div}, \Omega)^{\perp\_{H(\operatorname{div}, \Omega)}}$.*

*Proof* By Theorem 12.2.2 it suffices to show that $\gamma\_{\mathrm{n}}$ has closed range. For this, it suffices to show that there exists $c > 0$ such that

$$\|q\|\_{H(\operatorname{div},\Omega)} \leqslant c \|\gamma\_{\mathrm{n}} q\|\_{H^{-1/2}(\mathbb{R}^{d-1})}$$

for all $q \in \ker(\gamma\_{\mathrm{n}})^{\perp\_{H(\operatorname{div},\Omega)}}$. By Corollary 12.2.3, we obtain $\ker(\gamma\_{\mathrm{n}}) = H\_0(\operatorname{div}, \Omega)$. Hence, by Proposition 12.2.4, we deduce that $q \in \ker(\gamma\_{\mathrm{n}})^{\perp\_{H(\operatorname{div},\Omega)}}$ if and only if $q \in \operatorname{dom}(\operatorname{grad} \operatorname{div})$ and $q = \operatorname{grad} \operatorname{div} q$. So, assume that $q \in \operatorname{dom}(\operatorname{grad} \operatorname{div})$ with $q = \operatorname{grad} \operatorname{div} q$. Then (12.1) applied to $q \in \operatorname{dom}(\operatorname{div})$ and $f = \operatorname{div} q \in \operatorname{dom}(\operatorname{grad})$ yields

$$(\gamma\_{\mathrm{n}} q)(\gamma \operatorname{div} q) = \langle \operatorname{div} q, \operatorname{div} q \rangle + \langle q, \operatorname{grad} \operatorname{div} q \rangle = \langle \operatorname{div} q, \operatorname{div} q \rangle + \langle q, q \rangle = \|q\|\_{H(\operatorname{div}, \Omega)}^2,$$

where we used grad div *q* = *q*. Hence

$$\|q\|\_{H(\operatorname{div},\Omega)}^2 \leqslant \|\gamma \operatorname{div} q\|\_{H^{1/2}} \|\gamma\_{\mathrm{n}} q\|\_{H^{-1/2}} \leqslant \|\operatorname{div} q\|\_{H^1(\Omega)} \|\gamma\_{\mathrm{n}} q\|\_{H^{-1/2}} = \|q\|\_{H(\operatorname{div},\Omega)} \|\gamma\_{\mathrm{n}} q\|\_{H^{-1/2}},$$

where we again used that $\operatorname{grad} \operatorname{div} q = q$. This yields the assertion. $\square$

## **12.3 Inhomogeneous Boundary Value Problems**

Let $\Omega := \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. With the notion of traces we now have a tool at hand that allows us to formulate inhomogeneous boundary value problems. Here we focus on the scalar wave type equation for given Neumann data $\widetilde{g} \in H^{-1/2}(\mathbb{R}^{d-1})$. We shall address other boundary value problems in the exercises. Let $M \colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(L\_2(\Omega) \times L\_2(\Omega)^d)$ be a material law with $s\_{\mathrm{b}}(M) < \nu\_0$ for some $\nu\_0 \in \mathbb{R}$. We assume that $M$ satisfies the positive definiteness condition in Theorem 6.2.1; that is, we assume there exists $c > 0$ such that for all $z \in \mathbb{C}\_{\operatorname{Re} > \nu\_0}$ we have $\operatorname{Re} z M(z) \geqslant c$. For $\nu \geqslant \nu\_0$ we want to solve

$$\begin{cases} \left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} v \\ q \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} & \text{on } \Omega, \\ \gamma\_{\mathrm{n}} q(t, \cdot) = \widetilde{g} & \text{on } \partial\Omega \text{ for all } t > 0. \end{cases}$$

Let us reformulate this problem. Let $\varphi \in C^{\infty}(\mathbb{R})$ be such that $0 \leqslant \varphi \leqslant 1$ with $\varphi = 1$ on $[0, \infty)$ and $\varphi = 0$ on $(-\infty, -1]$. We define the function

$$g := \left( t \mapsto \varphi(t)\widetilde{g} \in H^{-1/2}(\mathbb{R}^{d-1}) \right) \in \bigcap\_{\nu > 0} L\_{2,\nu}(\mathbb{R}; H^{-1/2}(\mathbb{R}^{d-1}))$$

and consider

$$\begin{cases} \left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} v \\ q \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} & \text{on } \Omega, \\ \gamma\_{\mathrm{n}} q(t) = g(t) & \text{for all } t > 0 \end{cases} \tag{12.2}$$

instead.

**Theorem 12.3.1** *Let $\nu \geqslant \max\{\nu\_0, 0\}$, $\nu \neq 0$. Then (12.2) admits a unique solution*

$$\begin{pmatrix} v \\ q \end{pmatrix} \in H^1\_{\nu}\left( \mathbb{R}; \operatorname{dom}\left( \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \right) \right).$$

*Proof* We start with the existence part. By Theorem 12.2.5, we find $\widetilde{G} \in H(\operatorname{div}, \Omega)$ such that $\gamma\_{\mathrm{n}} \widetilde{G} = \widetilde{g}$; set $G := \varphi(\cdot)\widetilde{G} \in H^3\_{\nu}(\mathbb{R}; H(\operatorname{div}, \Omega))$. Consider the following evolutionary equation

$$
\left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} u \\ r \end{pmatrix} = \partial\_{t,\nu} M(\partial\_{t,\nu}) \begin{pmatrix} 0 \\ -G \end{pmatrix} + \begin{pmatrix} -\operatorname{div} G \\ 0 \end{pmatrix}.
$$

Note that the right-hand side is in $H^2\_{\nu}(\mathbb{R}; L\_2(\Omega) \times L\_2(\Omega)^d)$. By Theorem 6.2.1, we obtain

$$\begin{split} \binom{u}{r} &= \left(\partial\_{t,\boldsymbol{\nu}}M(\partial\_{t,\boldsymbol{\nu}}) + \begin{pmatrix} 0 & \operatorname{div}\_{0} \\ \operatorname{grad} & 0 \end{pmatrix}\right)^{-1} \left(\partial\_{t,\boldsymbol{\nu}}M(\partial\_{t,\boldsymbol{\nu}}) \begin{pmatrix} 0 \\ -G \end{pmatrix} + \begin{pmatrix} -\operatorname{div} G \\ 0 \end{pmatrix}\right) \\ &\in H^{1}\_{\boldsymbol{\nu}}(\mathbb{R}; L\_{2}(\Omega) \times L\_{2}(\Omega)^{d}) \cap L\_{2,\boldsymbol{\nu}}\Big(\mathbb{R}; \operatorname{dom}\left(\begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}\right)\Big). \end{split}$$

Indeed, since the solution operator commutes with $\partial\_{t,\nu}$ and the right-hand side lies in $H^2\_{\nu}$, it even follows that $(u, r) \in H^2\_{\nu}(\mathbb{R}; L\_2(\Omega) \times L\_2(\Omega)^d)$. From the equality

$$
\left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} u \\ r \end{pmatrix} = \partial\_{t,\nu} M(\partial\_{t,\nu}) \begin{pmatrix} 0 \\ -G \end{pmatrix} + \begin{pmatrix} -\operatorname{div} G \\ 0 \end{pmatrix}
$$

it follows that

$$\begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix} \begin{pmatrix} u \\ r \end{pmatrix} \in H^1\_\nu(\mathbb{R}; L\_2(\Omega) \times L\_2(\Omega)^d).$$

Hence,

$$\begin{aligned} \binom{u}{r} &\in \left(1 + \begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right)^{-1} [H\_\nu^1(\mathbb{R}; L\_2(\Omega) \times L\_2(\Omega)^d)] \\ &\subseteq H\_\nu^1(\mathbb{R}; \operatorname{dom}\left(\begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix}\right)), \end{aligned}$$

where the resolvent is well-defined since $\begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix}$ is skew-selfadjoint. Also, we deduce that

$$
\left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \right) \begin{pmatrix} u \\ r + G \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

Since $r \in H^1\_{\nu}(\mathbb{R}; \operatorname{dom}(\operatorname{div}\_0))$, by Corollary 12.2.3 and Theorem 4.1.2 we obtain

$$
\gamma\_{\mathrm{n}}((r + G)(t)) = \gamma\_{\mathrm{n}} G(t) = g(t) \quad (t \in \mathbb{R}).
$$

Hence, $(u, r + G)$ solves (12.2).

Next we address the uniqueness result. For this we note that a straightforward computation shows

$$
\begin{pmatrix} v \\ q - G \end{pmatrix} = \overline{\left( \partial\_{t,\nu} M(\partial\_{t,\nu}) + \begin{pmatrix} 0 & \operatorname{div}\_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right)}^{-1} \left( \partial\_{t,\nu} M(\partial\_{t,\nu}) \begin{pmatrix} 0 \\ -G \end{pmatrix} + \begin{pmatrix} -\operatorname{div} G \\ 0 \end{pmatrix} \right),
$$

which coincides with the formula for $(u, r + G)$. $\square$

The upshot of the rationale exemplified in the proof is that inhomogeneous boundary value problems can be reduced to an evolutionary equation of the standard form with non-vanishing right-hand side. The treatment of inhomogeneous Dirichlet data works along similar lines.

## **12.4 Abstract Boundary Data Spaces**

Of course inhomogeneous boundary value problems can be addressed for domains other than the half-space $\mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. Classically, some more specific properties need to be imposed on the description of the boundary $\partial\Omega$. In this section, however, we deviate from the classical perspective in as much as we want to consider *arbitrary* open sets $\Omega \subseteq \mathbb{R}^d$. For this we introduce

$$\text{BD}(\text{div}) = \{q \in H(\text{div}, \Omega) \, ; \, \text{div} \, q \in \text{dom}(\text{grad}), \, \text{grad} \, \text{div} \, q = q\}\,.$$

$$\text{BD}(\text{grad}) = \left\{ u \in H^1(\Omega) \, ; \, \text{grad} \, u \in \text{dom}(\text{div}), \, \text{div} \, \text{grad} \, u = u \right\}\,.$$

By Proposition 12.2.4 and Exercise 6.7, these spaces are closed subspaces of $H(\operatorname{div}, \Omega)$ and $H^1(\Omega)$, respectively, and therefore Hilbert spaces. Indeed,

$$\text{BD}(\text{div}) = H\_0(\text{div}, \Omega)^{\perp\_{H(\text{div}, \Omega)}}$$

and

$$\text{BD}(\text{grad}) = H\_0^1(\Omega)^{\perp\_{H^1(\Omega)}}.$$

Now we are in a position to solve inhomogeneous boundary value problems where the trace mappings $\gamma$ and $\gamma\_{\mathrm{n}}$ are replaced by the canonical orthogonal projections $\pi\_{\mathrm{BD}(\operatorname{grad})}$ and $\pi\_{\mathrm{BD}(\operatorname{div})}$, respectively; see Exercise 12.4. We devote the rest of this section to describing the relationship between the classical trace spaces introduced before and the BD-spaces. In the perspective outlined here, there is not much of a difference between Neumann boundary values and Dirichlet boundary values. The next result is an incarnation of this.

**Proposition 12.4.1** *We have*

$$\operatorname{grad}[\mathrm{BD}(\operatorname{grad})] \subseteq \mathrm{BD}(\operatorname{div}) \quad \text{and} \quad \operatorname{div}[\mathrm{BD}(\operatorname{div})] \subseteq \mathrm{BD}(\operatorname{grad}).$$

*Moreover, the mappings*

$$\begin{aligned} \operatorname{grad}\_{\mathrm{BD}} \colon \mathrm{BD}(\operatorname{grad}) &\to \mathrm{BD}(\operatorname{div}),\\ u &\mapsto \operatorname{grad} u \end{aligned}$$

*and*

$$\begin{aligned} \text{div}\_{\text{BD}} \colon \text{BD}(\text{div}) &\to \text{BD}(\text{grad}), \\ q &\mapsto \text{div} \, q \end{aligned}$$

*are unitary, and* $\operatorname{grad}\_{\mathrm{BD}}^{\*} = \operatorname{div}\_{\mathrm{BD}}$*.*

*Proof* Let $\phi \in \mathrm{BD}(\operatorname{grad})$. Then $\operatorname{grad} \phi \in H(\operatorname{div}, \Omega)$ and $\operatorname{div} \operatorname{grad} \phi = \phi$. This implies $\operatorname{div} \operatorname{grad} \phi \in \operatorname{dom}(\operatorname{grad})$ and $\operatorname{grad} \operatorname{div} \operatorname{grad} \phi = \operatorname{grad} \phi$, which yields $\operatorname{grad} \phi \in \mathrm{BD}(\operatorname{div})$. Thus, $\operatorname{grad}\_{\mathrm{BD}}$ is defined everywhere; interchanging the roles of $\operatorname{grad}$ and $\operatorname{div}$, we obtain that $\operatorname{div}\_{\mathrm{BD}}$ is also defined everywhere. We infer $\operatorname{div}\_{\mathrm{BD}} \operatorname{grad}\_{\mathrm{BD}} = 1\_{\mathrm{BD}(\operatorname{grad})}$ and $\operatorname{grad}\_{\mathrm{BD}} \operatorname{div}\_{\mathrm{BD}} = 1\_{\mathrm{BD}(\operatorname{div})}$, and thus $\operatorname{grad}\_{\mathrm{BD}}$ is bijective with $\operatorname{grad}\_{\mathrm{BD}}^{-1} = \operatorname{div}\_{\mathrm{BD}}$. It remains to show that $\operatorname{grad}\_{\mathrm{BD}}$ preserves the norm. For this we compute

$$
\begin{aligned}
\langle \operatorname{grad}\_{\mathrm{BD}} \phi, \operatorname{grad}\_{\mathrm{BD}} \phi \rangle\_{\mathrm{BD}(\mathrm{div})} &= \langle \operatorname{grad} \phi, \operatorname{grad} \phi \rangle\_{H(\mathrm{div})} \\ &= \langle \operatorname{grad} \phi, \operatorname{grad} \phi \rangle\_{L\_2(\Omega)^d} + \langle \operatorname{div} \operatorname{grad} \phi, \operatorname{div} \operatorname{grad} \phi \rangle\_{L\_2(\Omega)} \\ &= \langle \operatorname{grad} \phi, \operatorname{grad} \phi \rangle\_{L\_2(\Omega)^d} + \langle \phi, \phi \rangle\_{L\_2(\Omega)} \\ &= \langle \phi, \phi \rangle\_{\operatorname{dom}(\operatorname{grad})} = \langle \phi, \phi \rangle\_{\mathrm{BD}(\mathrm{grad})},
\end{aligned}
$$

which implies that $\operatorname{grad}\_{\mathrm{BD}}$ is unitary. Hence, $\operatorname{div}\_{\mathrm{BD}} = \operatorname{grad}\_{\mathrm{BD}}^{-1} = \operatorname{grad}\_{\mathrm{BD}}^{\*}$. $\square$

It is also possible to show an 'integration by parts' formula analogous to (12.1) for the abstract situation:

**Proposition 12.4.2** *Let $u \in H^1(\Omega)$ and $q \in H(\operatorname{div}, \Omega)$. Then*

$$\begin{aligned} \langle \operatorname{div} q, u \rangle\_{L\_2(\Omega)} + \langle q, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} &= \langle \operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q, \pi\_{\mathrm{BD}(\operatorname{grad})} u \rangle\_{\mathrm{BD}(\operatorname{grad})} \\ &= \langle \pi\_{\mathrm{BD}(\operatorname{div})} q, \operatorname{grad}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{grad})} u \rangle\_{\mathrm{BD}(\operatorname{div})}. \end{aligned}$$

*Proof* We decompose $u = u\_0 + u\_1$ and $q = q\_0 + q\_1$ with $u\_0 \in H^1\_0(\Omega)$, $q\_0 \in H\_0(\operatorname{div}, \Omega)$, $u\_1 = \pi\_{\mathrm{BD}(\operatorname{grad})} u$ and $q\_1 = \pi\_{\mathrm{BD}(\operatorname{div})} q$. Then we obtain

$$\begin{aligned} \langle \operatorname{div} q, u \rangle\_{L\_2(\Omega)} + \langle q, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} &= \langle \operatorname{div}\_0 q\_0, u \rangle\_{L\_2(\Omega)} + \langle \operatorname{div} q\_1, u \rangle\_{L\_2(\Omega)} \\ &\quad + \langle q\_0, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} + \langle q\_1, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} \\ &= \langle q\_0, -\operatorname{grad} u \rangle\_{L\_2(\Omega)^d} + \langle \operatorname{div} q\_1, u \rangle\_{L\_2(\Omega)} \\ &\quad + \langle q\_0, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} + \langle q\_1, \operatorname{grad} u \rangle\_{L\_2(\Omega)^d} \\ &= \langle \operatorname{div} q\_1, u\_0 \rangle\_{L\_2(\Omega)} + \langle \operatorname{div} q\_1, u\_1 \rangle\_{L\_2(\Omega)} \\ &\quad + \langle q\_1, \operatorname{grad} u\_0 \rangle\_{L\_2(\Omega)^d} + \langle q\_1, \operatorname{grad} u\_1 \rangle\_{L\_2(\Omega)^d} \\ &= \langle q\_1, -\operatorname{grad}\_0 u\_0 \rangle\_{L\_2(\Omega)^d} + \langle \operatorname{div} q\_1, u\_1 \rangle\_{L\_2(\Omega)} \\ &\quad + \langle q\_1, \operatorname{grad}\_0 u\_0 \rangle\_{L\_2(\Omega)^d} + \langle q\_1, \operatorname{grad} u\_1 \rangle\_{L\_2(\Omega)^d} \\ &= \langle \operatorname{div} q\_1, u\_1 \rangle\_{L\_2(\Omega)} + \langle q\_1, \operatorname{grad} u\_1 \rangle\_{L\_2(\Omega)^d} \\ &= \langle \operatorname{div} q\_1, u\_1 \rangle\_{L\_2(\Omega)} + \langle \operatorname{grad} \operatorname{div} q\_1, \operatorname{grad} u\_1 \rangle\_{L\_2(\Omega)^d} \\ &= \langle \operatorname{div} q\_1, u\_1 \rangle\_{\mathrm{BD}(\operatorname{grad})}. \end{aligned}$$

The remaining equality follows from $\operatorname{div}\_{\mathrm{BD}}^{\*} = \operatorname{grad}\_{\mathrm{BD}}$ by Proposition 12.4.1. $\square$
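The one-dimensional case gives a concrete feel for Proposition 12.4.1. For $\Omega = (0, \infty)$, $\mathrm{BD}(\operatorname{grad})$ consists of the $H^1$ solutions of $\operatorname{div} \operatorname{grad} u = u$, i.e. $u'' = u$, and is spanned by $u(x) = \mathrm{e}^{-x}$. The sketch below (a numerical illustration on a truncated grid, not a proof) checks that $\operatorname{grad}\_{\mathrm{BD}} u = u'$ lands in $\mathrm{BD}(\operatorname{div})$ with the same norm.

```python
import numpy as np

# Unitarity of grad_BD in one dimension, Omega = (0, inf): BD(grad) is spanned
# by u(x) = exp(-x) (which solves div grad u = u), and grad_BD u = u' lies in
# BD(div).  The H^1 norm of u must equal the H(div) norm of grad u.
x = np.linspace(0.0, 40.0, 400_001)
dx = x[1] - x[0]

def integrate(g):
    """Composite trapezoidal rule on the grid x."""
    return dx * (np.sum(g) - 0.5 * (g[0] + g[-1]))

u = np.exp(-x)
du = -np.exp(-x)   # grad u
ddu = np.exp(-x)   # div grad u = u

norm_bd_grad = np.sqrt(integrate(u ** 2) + integrate(du ** 2))   # ||u||_{H^1}
norm_bd_div = np.sqrt(integrate(du ** 2) + integrate(ddu ** 2))  # ||u'||_{H(div)}

assert abs(norm_bd_grad - norm_bd_div) < 1e-9
```

Both norms evaluate to $1$ here, since $\|u\|^2 = \|u'\|^2 = \tfrac12$.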

In view of Proposition 12.4.2 the proper replacement of $\gamma\_{\mathrm{n}}$ appears to be $\operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})}$ instead of just $\pi\_{\mathrm{BD}(\operatorname{div})}$. Next, we show the equivalence of the trace spaces for the half-space and the abstract ones introduced in this section.

**Theorem 12.4.3** *Let $\Omega := \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. Then $\gamma|\_{\mathrm{BD}(\operatorname{grad})} \colon \mathrm{BD}(\operatorname{grad}) \to H^{1/2}(\mathbb{R}^{d-1})$ and $\gamma\_{\mathrm{n}}|\_{\mathrm{BD}(\operatorname{div})} \colon \mathrm{BD}(\operatorname{div}) \to H^{-1/2}(\mathbb{R}^{d-1})$ are unitary mappings.*

*Proof* We begin with $\gamma\_{\mathrm{n}}$. We have shown in Theorem 12.2.2 that $\gamma\_{\mathrm{n}}|\_{\mathrm{BD}(\operatorname{div})}$ is continuous, and in Theorem 12.2.5 it has been shown that $(\gamma\_{\mathrm{n}}|\_{\mathrm{BD}(\operatorname{div})})^{-1}$ is continuous. Also the two norm inequalities have been established.

The injectivity of $\gamma|\_{\mathrm{BD}(\operatorname{grad})}$ follows from $\ker \gamma = H^1\_0(\Omega)$ by Corollary 12.2.3. All that remains relies upon recalling that $H^{1/2}(\mathbb{R}^{d-1})$ is isomorphic to $(\ker \gamma)^{\perp}$ with the orthogonal complement computed in $H^1(\Omega)$. $\square$

## **12.5 Robin Boundary Conditions**

The classical Robin boundary conditions involve both traces, the Dirichlet trace $\gamma$ and the Neumann trace $\gamma\_{\mathrm{n}}$. To motivate things, let us again have a look at the case $\Omega = \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. We consider, for given $q \in H(\operatorname{div}, \Omega)$ and $u \in H^1(\Omega)$, the boundary condition

$$
\gamma\_{\mathrm{n}} q + \mathrm{i} \gamma u = 0,
$$

in the sense that

$$(\gamma\_{\mathrm{n}} q)(v) = \langle -\mathrm{i} \gamma u, v \rangle\_{L\_2(\mathbb{R}^{d-1})} \quad (v \in H^{1/2}(\mathbb{R}^{d-1})).$$

Note that this is an implicit regularity statement as $\gamma\_{\mathrm{n}} q \in H^{-1/2}(\mathbb{R}^{d-1})$ is representable as an $L\_2(\mathbb{R}^{d-1})$ function. The next result asserts that an evolutionary equation with a spatial operator of the type $\begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}$ subject to the above Robin boundary condition fits into the setting rendered by Theorem 6.2.1. In other words:

**Theorem 12.5.1** *Let $\Omega = \mathbb{R}^{d-1} \times \mathbb{R}\_{>0}$. Then the operator $A \colon \operatorname{dom}(A) \subseteq L\_2(\Omega)^{d+1} \to L\_2(\Omega)^{d+1}$ with $A \subseteq \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix}$ and domain*

$$\operatorname{dom}(A) = \left\{ (u, q) \in H^1(\Omega) \times H(\operatorname{div}, \Omega) \; ; \; \gamma\_{\mathrm{n}} q + \mathrm{i} \gamma u = 0 \right\}$$

*is skew-selfadjoint.*

*Proof* Let $(u, q), (v, r) \in H^1(\Omega) \times H(\operatorname{div}, \Omega)$. Then, by (12.1) we obtain

$$\begin{aligned} & \left\langle \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \begin{pmatrix} u \\ q \end{pmatrix}, \begin{pmatrix} v \\ r \end{pmatrix} \right\rangle + \left\langle \begin{pmatrix} u \\ q \end{pmatrix}, \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \begin{pmatrix} v \\ r \end{pmatrix} \right\rangle \\ &= \langle \operatorname{div} q, v \rangle + \langle \operatorname{grad} u, r \rangle + \langle u, \operatorname{div} r \rangle + \langle q, \operatorname{grad} v \rangle = (\gamma\_{\mathrm{n}} q)(\gamma v) + ((\gamma\_{\mathrm{n}} r)(\gamma u))^{\*}. \end{aligned}$$

If, in addition, $(u, q) \in \operatorname{dom}(A)$, we obtain

$$\begin{aligned} & \left\langle A \begin{pmatrix} u \\ q \end{pmatrix}, \begin{pmatrix} v \\ r \end{pmatrix} \right\rangle + \left\langle \begin{pmatrix} u \\ q \end{pmatrix}, \begin{pmatrix} 0 & \operatorname{div} \\ \operatorname{grad} & 0 \end{pmatrix} \begin{pmatrix} v \\ r \end{pmatrix} \right\rangle \\ &= (\gamma\_{\mathrm{n}} q)(\gamma v) + ((\gamma\_{\mathrm{n}} r)(\gamma u))^{\*} = \langle -\mathrm{i} \gamma u, \gamma v \rangle\_{L\_2(\mathbb{R}^{d-1})} + ((\gamma\_{\mathrm{n}} r)(\gamma u))^{\*} \\ &= \langle \gamma u, \mathrm{i} \gamma v \rangle\_{L\_2(\mathbb{R}^{d-1})} + ((\gamma\_{\mathrm{n}} r)(\gamma u))^{\*} = ((\mathrm{i} \gamma v + \gamma\_{\mathrm{n}} r)(\gamma u))^{\*}. \end{aligned}$$

Since for every $u \in \mathcal{D}$ we find $q \in \mathcal{D}^d$ such that $(u, q) \in \operatorname{dom}(A)$,

$$\gamma[\mathcal{D}] \subseteq \left\{ \gamma u \; ; \; \exists q \in H(\operatorname{div}, \Omega) \colon (u, q) \in \operatorname{dom}(A) \right\}.$$

Thus, the set on the right-hand side is dense in $H^{1/2}(\mathbb{R}^{d-1})$. This in turn implies that $(v, r) \in \operatorname{dom}(A^{\*})$ if and only if $\mathrm{i} \gamma v + \gamma\_{\mathrm{n}} r = 0$, and in this case we have $A^{\*}(v, r) = -A(v, r)$. This implies that $A$ is skew-selfadjoint. $\square$

*Remark 12.5.2* The factor $\mathrm{i}$ in front of $\gamma u$ is chosen as a mere convenience in order to render the corresponding operator $A$ in Theorem 12.5.1 skew-selfadjoint. It is also possible to choose $\beta \in L(H^{1/2}(\partial\Omega))$ with $-\operatorname{Re} \beta \geqslant 0$ instead of $\mathrm{i}$. Then one obtains for all $U \in \operatorname{dom}(A)$ and $V \in \operatorname{dom}(A^{\*})$ the estimates $\operatorname{Re} \langle U, AU \rangle \geqslant 0$ and $\operatorname{Re} \langle V, A^{\*} V \rangle \geqslant 0$. Appealing to Remark 6.3.3, it can be shown that the corresponding evolutionary equation

$$(\partial\_{t,\nu} M(\partial\_{t,\nu}) + A) U = F$$

for a suitable material law *M* as in Theorem 6.2.1 is well-posed.
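The skew-symmetry mechanism behind Theorem 12.5.1 can be observed numerically in one dimension, where $\Omega = (0, \infty)$, $A(u, q) = (q', u')$ and $\gamma\_{\mathrm{n}} q = -q(0)$, so the Robin condition reads $q(0) = \mathrm{i}\, u(0)$. The following sketch (sample fields and the truncated grid are illustrative assumptions) checks $\langle AU, V \rangle = -\langle U, AV \rangle$ for two admissible pairs.

```python
import numpy as np

# Robin condition gamma_n q + i*gamma u = 0 in one dimension, i.e. q(0) = i*u(0).
# For U, V in dom(A) with A(u, q) = (q', u'), the boundary terms cancel and
# <AU, V> + <U, AV> = 0, with <f, g> = int conj(f) g.
x = np.linspace(0.0, 40.0, 400_001)
dx = x[1] - x[0]

def integrate(g):
    """Composite trapezoidal rule on the grid x."""
    return dx * (np.sum(g) - 0.5 * (g[0] + g[-1]))

u, du = np.exp(-x), -np.exp(-x)                     # U = (u, q)
q, dq = 1j * np.exp(-2 * x), -2j * np.exp(-2 * x)   # q(0) = i = i*u(0)
v, dv = 2.0 * np.exp(-x), -2.0 * np.exp(-x)         # V = (v, r)
r, dr = 2j * np.exp(-3 * x), -6j * np.exp(-3 * x)   # r(0) = 2i = i*v(0)

s = (integrate(np.conj(dq) * v) + integrate(np.conj(du) * r)
     + integrate(np.conj(u) * dr) + integrate(np.conj(q) * dv))

assert abs(s) < 1e-6
```

Integrating by parts, `s` equals $-\overline{q(0)}v(0) - \overline{u(0)}r(0) = 2\mathrm{i} - 2\mathrm{i} = 0$, which is exactly where the factor $\mathrm{i}$ earns its keep.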

Next, one could argue that in the case of arbitrary $\Omega$, the condition

$$\mathrm{i} \pi\_{\mathrm{BD}(\operatorname{grad})} u + \operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q = 0 \tag{12.3}$$

amounts to a generalisation of the Robin boundary condition just considered. However, this is not true as the following proposition shows.

**Proposition 12.5.3** *Let $u \in H^1(\Omega)$ and $q \in H(\operatorname{div}, \Omega)$. Moreover, we set $\kappa \colon \mathrm{BD}(\operatorname{grad}) \to L\_2(\mathbb{R}^{d-1})$ with $\kappa v = \gamma v$ for $v \in \mathrm{BD}(\operatorname{grad})$. Then $\gamma\_{\mathrm{n}} q + \mathrm{i} \gamma u = 0$ if and only if*

$$\operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q + \mathrm{i} \kappa^{\*} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u = 0.$$

*Proof* We first observe that $\kappa \pi\_{\mathrm{BD}(\operatorname{grad})} w = \gamma w$ for each $w \in H^1(\Omega)$. Assume now that $\gamma\_{\mathrm{n}} q + \mathrm{i} \gamma u = 0$ and let $v \in \mathrm{BD}(\operatorname{grad})$. Then we compute, using Proposition 12.4.2 and (12.1),

$$\begin{aligned} \langle \mathrm{i} \kappa^{\*} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u, v \rangle\_{\mathrm{BD}(\operatorname{grad})} &= \langle \mathrm{i} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u, \kappa v \rangle\_{L\_2(\mathbb{R}^{d-1})} = \langle \mathrm{i} \gamma u, \gamma v \rangle\_{L\_2(\mathbb{R}^{d-1})} \\ &= -(\gamma\_{\mathrm{n}} q)(\gamma v) = \langle -\operatorname{div} q, v \rangle\_{L\_2(\Omega)} + \langle -q, \operatorname{grad} v \rangle\_{L\_2(\Omega)^d} \\ &= \langle -\operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q, v \rangle\_{\mathrm{BD}(\operatorname{grad})}, \end{aligned}$$

which proves one of the asserted implications.

Assume on the other hand that $\operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q + \mathrm{i} \kappa^{\*} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u = 0$ and let $v \in H^{1/2}(\mathbb{R}^{d-1})$. We take $w \in H^1(\Omega)$ with $\gamma w = v$ and compute

$$\begin{aligned} (\gamma\_{\mathrm{n}} q)(v) &= \langle \operatorname{div} q, w \rangle\_{L\_2(\Omega)} + \langle q, \operatorname{grad} w \rangle\_{L\_2(\Omega)^d} \\ &= \langle \operatorname{div}\_{\mathrm{BD}} \pi\_{\mathrm{BD}(\operatorname{div})} q, \pi\_{\mathrm{BD}(\operatorname{grad})} w \rangle\_{\mathrm{BD}(\operatorname{grad})} \\ &= \langle -\mathrm{i} \kappa^{\*} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u, \pi\_{\mathrm{BD}(\operatorname{grad})} w \rangle\_{\mathrm{BD}(\operatorname{grad})} \\ &= \langle -\mathrm{i} \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} u, \kappa \pi\_{\mathrm{BD}(\operatorname{grad})} w \rangle\_{L\_2(\mathbb{R}^{d-1})} \\ &= \langle -\mathrm{i} \gamma u, v \rangle\_{L\_2(\mathbb{R}^{d-1})}, \end{aligned}$$

which shows the remaining implication. $\square$

## **12.6 Comments**

The concept of abstract trace spaces has been introduced in [86] in order to study a multi-dimensional analogue for port-Hamiltonian systems. Also concerning differential equations at the boundary (so-called impedance type boundary conditions), the concept of abstract boundary value spaces has been employed, see [91].

A comparison between abstract and classical trace spaces has been provided in [37, 115], particularly concerning $H^{-1/2}(\mathbb{R}^{d-1})$. A good introduction to trace mappings for more complicated geometries can be found e.g. in [5]. The trace operator can also be suitably established for $H(\operatorname{curl}, \Omega)$-regular vector fields given that $\Omega$ is a so-called Lipschitz domain, see [18].

## **Exercises**

**Exercise 12.1** Let $\phi \in C\_c^{\infty}(\mathbb{R}^d)$, $f \in L\_2(\mathbb{R}^d)$. Show that

$$\phi \ast f \colon x \mapsto \int\_{\mathbb{R}^d} \phi(x - y) f(y) \, \mathrm{d}y$$

belongs to $H^1(\mathbb{R}^d)$ and that $\operatorname{grad}(\phi \ast f) = (\operatorname{grad} \phi) \ast f$. If, in addition, $f \in H^1(\mathbb{R}^d) = \operatorname{dom}(\operatorname{grad})$, then $\operatorname{grad}(\phi \ast f) = \phi \ast \operatorname{grad} f$, where the convolution is always taken componentwise.

**Exercise 12.2** Let $\Omega \subseteq \mathbb{R}^d$ be open. Let $f \in L\_2(\Omega)$ and denote by $\widetilde{f} \in L\_2(\mathbb{R}^d)$ the extension of $f$ by zero. Let $v \in \mathbb{R}^d$, $\tau > 0$ and define $f\_\tau := \widetilde{f}(\, \cdot + \tau v)|\_{\Omega}$.


**Exercise 12.3** Prove Theorem 12.2.1.

**Exercise 12.4** Let $\Omega \subseteq \mathbb{R}^d$ be open, $M \colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(L\_2(\Omega) \times L\_2(\Omega)^d)$ a material law with $s\_{\mathrm{b}}(M) < \nu\_0$ for some $\nu\_0 \in \mathbb{R}$, and $c > 0$ such that for all $z \in \mathbb{C}\_{\operatorname{Re} > \nu\_0}$ we have $\operatorname{Re} z M(z) \geqslant c$. Let $\nu \geqslant \max\{\nu\_0, 0\}$ and $\nu \neq 0$. Show that there exists a unique

$$
\begin{pmatrix} v \\ q \end{pmatrix} \in H\_v^1 \left( \mathbb{R} ; \text{dom} \left( \begin{pmatrix} 0 & \text{div} \\ \text{grad } & 0 \end{pmatrix} \right) \right),
$$

satisfying

$$\begin{cases} \left(\partial\_{t,\boldsymbol{\upsilon}}M(\partial\_{t,\boldsymbol{\upsilon}}) + \begin{pmatrix} 0 & \text{div} \\ \text{grad } \boldsymbol{0} \end{pmatrix} \right) \begin{pmatrix} \boldsymbol{\upsilon} \\ \boldsymbol{q} \end{pmatrix} = \begin{pmatrix} 0 \\ \boldsymbol{0} \end{pmatrix} & \text{on } \Omega, \\\pi\_{\text{BD}(\text{grad})}\boldsymbol{\upsilon}(t) = \boldsymbol{\phi}(t)\boldsymbol{f} & \text{for all } t \in \mathbb{R}, \end{cases}$$

for some bounded *<sup>φ</sup>* <sup>∈</sup> *<sup>C</sup>*∞*(*R*)* with inf spt *φ >* −∞ and *<sup>f</sup>* <sup>∈</sup> BD*(*grad*)*.

**Exercise 12.5** Let $\Omega = \mathbb{R}^{d-1} \times \mathbb{R}_{>0}$. Show that there exists a continuous linear operator $E \colon H^1(\Omega) \to H^1(\mathbb{R}^d)$ such that $E(\varphi)|_\Omega = \varphi$ for each $\varphi \in H^1(\Omega)$.

**Exercise 12.6 (Korn's Second Inequality)** Let $\Omega = \mathbb{R}^{d-1} \times \mathbb{R}_{>0}$. Using Exercise 12.5 show that there exists $c > 0$ such that for all $\varphi \in H^1(\Omega)^d$ we have

$$\|\varphi\|_{H^1(\Omega)^d} \leqslant c\left(\|\varphi\|_{L_2(\Omega)^d} + \|\operatorname{Grad}\varphi\|_{L_2(\Omega)^{d\times d}}\right).$$

Thus, describe the space of boundary values of dom*(*Grad*)*.

*Hint:* Prove a corresponding result for $\Omega = \mathbb{R}^d$ first, after having shown that $C_c^\infty(\mathbb{R}^d)^d$ forms a dense subset of both $H^1(\Omega)^d$ and $\operatorname{dom}(\operatorname{Grad})$.

**Exercise 12.7** Let $\Omega \subseteq \mathbb{R}^3$ be open. Compute $\mathrm{BD}(\operatorname{curl}) := H_0(\operatorname{curl},\Omega)^{\perp_{H(\operatorname{curl},\Omega)}}$ and show that $\operatorname{curl} \colon \mathrm{BD}(\operatorname{curl}) \to \mathrm{BD}(\operatorname{curl})$ is well-defined, unitary and skew-selfadjoint.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 13 Continuous Dependence on the Coefficients I**

The power of the functional analytic framework for evolutionary equations lies in its variety. In fact, as we have outlined in earlier chapters, it is possible to formulate many differential equations in the form

$$\left(\partial_t M(\partial_t) + A\right)U = F.$$

In this chapter we want to use this versatility and address continuity of the above expression (or more precisely of the solution operator) in *M(∂t)*. To see this more clearly, fix *F* and take a sequence of material laws *(Mn)n*. We will address the following question: what are the conditions or notions of convergence of *(Mn)n* to some *M* in order that *(Un)n* with *Un* given as the solution of

$$\left(\partial_t M_n(\partial_t) + A\right)U_n = F$$

converges to *U*, which satisfies

$$\left(\partial_t M(\partial_t) + A\right)U = F?$$

In the first of two chapters on this subject, we shall specialise to *A* = 0; that is, we will discuss ordinary differential equations with infinite-dimensional state space. To begin with, we address the convergence of material laws pointwise in the Fourier– Laplace transformed domain and its relation to the convergence of material laws evaluated at the time derivative.

## **13.1 Convergence of Material Laws**

Throughout, let *H* be a Hilbert space. We briefly recall that a sequence *(Tn)n* in *L(H )* converges in the *strong operator topology* to some *T* ∈ *L(H )* if for all *x* ∈ *H* we have

$$T_n x \to Tx \quad (n\to\infty).$$

*(Tn)n* is said to converge in the *weak operator topology* to *T* ∈ *L(H )* if for all *x,y* ∈ *H* we have

$$\langle y, T_n x\rangle \to \langle y, Tx\rangle \quad (n\to\infty).$$

We denote the set of material laws on *H* with abscissa of boundedness less than or equal to *<sup>ν</sup>*<sup>0</sup> <sup>∈</sup> <sup>R</sup> by

$$\mathcal{M}(H,\nu_0) := \bigl\{ M \colon \operatorname{dom}(M) \to L(H)\ ;\ M \text{ material law},\ s_{\mathrm{b}}(M) \leqslant \nu_0 \bigr\}.$$

*Remark 13.1.1* Let $\nu_0 \in \mathbb{R}$, $\nu > \nu_0$. Then $\mathcal{M}(H,\nu_0)$ is an algebra and $\mathcal{M}(H,\nu_0) \ni M \mapsto M(\partial_{t,\nu}) \in L\bigl(L_{2,\nu}(\mathbb{R}; H)\bigr)$ is an algebra homomorphism, which is one-to-one by Theorem 8.2.1.

**Definition** Let *<sup>ν</sup>*<sup>0</sup> <sup>∈</sup> <sup>R</sup>. A sequence *(Mn)n*∈<sup>N</sup> in *<sup>M</sup>(H, ν*0*)* is called *bounded* if

$$\sup_{n\in\mathbb{N}} \|M_n\|_{\infty, \mathbb{C}_{\operatorname{Re}>\nu_0}} < \infty.$$

**Theorem 13.1.2** *Let $\nu_0 \in \mathbb{R}$ and let $(M_n)_n$ in $\mathcal{M}(H,\nu_0)$ be bounded. Assume that for all $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$ the sequence $(M_n(z))_n$ converges in the weak operator topology of $L(H)$ with limit $M(z)$, and let $\nu > \nu_0$. Then $M \in \mathcal{M}(H,\nu_0)$ and $M_n(\partial_{t,\nu}) \to M(\partial_{t,\nu})$ as $n\to\infty$ in the weak operator topology of $L\bigl(L_{2,\nu}(\mathbb{R},H)\bigr)$.*

*If, in addition, $(M_n(z))_n$ converges in the strong operator topology of $L(H)$ for all $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$, then, as $n\to\infty$, $M_n(\partial_{t,\nu}) \to M(\partial_{t,\nu})$ in the strong operator topology of $L\bigl(L_{2,\nu}(\mathbb{R},H)\bigr)$.*

*Proof* Let $z_0 \in \mathbb{C}_{\operatorname{Re}>\nu_0}$ and $r \in (0, \operatorname{Re} z_0 - \nu_0)$. For $x,y \in H$, by Cauchy's integral formula, we deduce

$$\langle y, M_n(z_0)x\rangle_H = \frac{1}{2\pi\mathrm{i}}\int_{\partial B(z_0,r)} \frac{\langle y, M_n(z)x\rangle_H}{z - z_0}\,\mathrm{d}z \quad (n\in\mathbb{N}).$$

As *(Mn)n* is bounded, Lebesgue's dominated convergence theorem yields

$$\langle y, M(z_0)x\rangle_H = \frac{1}{2\pi\mathrm{i}}\int_{\partial B(z_0,r)} \frac{\langle y, M(z)x\rangle_H}{z - z_0}\,\mathrm{d}z.$$

Since

$$\bigl|\langle y, M(z)x\rangle_H\bigr| \leqslant \|x\|_H\,\|y\|_H \sup_{n\in\mathbb{N}}\|M_n\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu_0}} \quad (z \in \mathbb{C}_{\operatorname{Re}>\nu_0}),\tag{13.1}$$

$\langle y, M(\cdot)x\rangle_H$ is holomorphic in a neighbourhood of $z_0$. By Exercise 5.3 we obtain that $M \colon \mathbb{C}_{\operatorname{Re}>\nu_0} \to L(H)$ is holomorphic. In fact, the estimate (13.1) even implies that $M \in \mathcal{M}(H,\nu_0)$.

If $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$ and $(M_n(z))_n$ even converges in the strong operator topology, then the limit is clearly $M(z)$.

The convergence statements for *(Mn(∂t ,ν ))n* (in the weak and strong operator topology) are then implied by Fourier–Laplace transformation.

*Remark 13.1.3* In Theorem 13.1.2, it suffices to assume that $(M_n(z))_n$ converges only for $z$ belonging to a countable subset of $\mathbb{C}_{\operatorname{Re}>\nu_0}$ with an accumulation point in $\mathbb{C}_{\operatorname{Re}>\nu_0}$.

The next statement is essential for the convergence statement for "ordinary" evolutionary equations.

**Proposition 13.1.4** *Let $(T_n)_n$ be a sequence in $L(H)$ converging in the strong operator topology to some $T \in L(H)$ with $0 \in \bigcap_{n\in\mathbb{N}}\rho(T_n)$, $\sup_{n\in\mathbb{N}}\|T_n^{-1}\| < \infty$ and $\operatorname{ran}(T) \subseteq H$ dense. Then $T$ is continuously invertible and $(T_n^{-1})_n$ converges to $T^{-1}$ in the strong operator topology.*

*Proof* We set $K := \sup_{n\in\mathbb{N}}\|T_n^{-1}\|$. We show first that $T$ is continuously invertible. For this, let $x \in H$. Then

$$\|x\| = \bigl\|T_n^{-1}T_nx\bigr\| \leqslant K\|T_nx\| \to K\|Tx\| \quad (n\to\infty).$$

Hence, $T$ is one-to-one and it follows that $\operatorname{ran}(T) \subseteq H$ is closed. Since $\operatorname{ran}(T)$ is, in addition, dense, $T$ is onto; hence $0 \in \rho(T)$. For $x \in H$ we conclude

$$\bigl\|T_n^{-1}x - T^{-1}x\bigr\| = \bigl\|T_n^{-1}(T - T_n)T^{-1}x\bigr\| \leqslant K\bigl\|(T - T_n)T^{-1}x\bigr\| \to 0 \quad (n\to\infty).$$
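Proposition 13.1.4 can be illustrated with a toy finite-dimensional computation (not part of the text; in finite dimensions strong and norm convergence coincide, so this only shows the mechanics of the estimate). We take invertible $2\times 2$ matrices $T_n \to T$ with uniformly bounded inverses and observe $T_n^{-1}x \to T^{-1}x$:

```python
def inv2(m):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def apply(m, x):
    return [m[0][0] * x[0] + m[0][1] * x[1],
            m[1][0] * x[0] + m[1][1] * x[1]]

# T_n -> T with sup_n ||T_n^{-1}|| < oo and ran(T) = R^2 (so dense);
# by Proposition 13.1.4, T_n^{-1} x -> T^{-1} x for every x.
T = [[2.0, 1.0], [0.0, 1.0]]
x = [1.0, -3.0]
for n in (1, 10, 1000):
    Tn = [[2.0 + 1.0 / n, 1.0], [0.0, 1.0]]
    print(n, apply(inv2(Tn), x))
print('limit', apply(inv2(T), x))
```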

We are now in the position to obtain the first result on continuous dependence.

**Theorem 13.1.5** *Let $\nu_0 \in \mathbb{R}$, $(M_n)_n$ a bounded sequence in $\mathcal{M}(H,\nu_0)$, and $c > 0$ such that for all $n\in\mathbb{N}$ and $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$ we have*

$$\operatorname{Re} zM_n(z) \geqslant c.$$

*If $(M_n(z))_n$ converges in the strong operator topology for all $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$, then for the limit $M(z)$ we have $M \in \mathcal{M}(H,\nu_0)$ with $\operatorname{Re} zM(z) \geqslant c$ for all $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$, and for $\nu > \nu_0$ we have*

$$\left(\partial_{t,\nu}M_n(\partial_{t,\nu})\right)^{-1} \to \left(\partial_{t,\nu}M(\partial_{t,\nu})\right)^{-1}$$

*in the strong operator topology.*

*Proof* By Theorem 13.1.2, we observe $M \in \mathcal{M}(H,\nu_0)$. Let $z \in \mathbb{C}_{\operatorname{Re}>\nu_0}$. Then we have $\operatorname{Re} zM(z) = \lim_{n\to\infty} \operatorname{Re} zM_n(z) \geqslant c$ and hence $zM(z)$ is continuously invertible. Since $0 \in \bigcap_{n\in\mathbb{N}}\rho(zM_n(z))$ and $\|(zM_n(z))^{-1}\| \leqslant 1/c$ by Proposition 6.2.3(b), we deduce by Proposition 13.1.4 applied to $T_n = zM_n(z)$ that $(zM_n(z))^{-1} \to (zM(z))^{-1}$ in the strong operator topology. By Theorem 13.1.2, for $\nu > \nu_0$ we infer $\bigl(\partial_{t,\nu}M_n(\partial_{t,\nu})\bigr)^{-1} \to \bigl(\partial_{t,\nu}M(\partial_{t,\nu})\bigr)^{-1}$ in the strong operator topology.

## **13.2 A Leading Example**

We want to illustrate the findings of the previous section with the help of an ordinary differential equation. Also, we shall provide an argument on the limitations of the theory presented above. Let $(\Omega, \Sigma, \mu)$ be a finite measure space.

Note that for *V* ∈ *L*∞*(μ)* with associated multiplication operator *V (*m*)* as in Theorem 2.4.3 we have that

$$M \colon z \mapsto 1 + z^{-1}V(\mathrm{m}) \in L(L_2(\mu))$$

is a material law with $s_{\mathrm{b}}(M) = 0$ unless $V = 0$ (in the case $V = 0$ we have $s_{\mathrm{b}}(M) = -\infty$). The corresponding evolutionary equation is given by

$$\partial_{t,\nu}u + V(\mathrm{m})u = f.$$

We want to study sequences of material laws of this form; that is, material laws induced by sequences $(V_n)_n$ in $L_\infty(\mu)$. First, we provide the following characterisation of the convergence of multiplication operators. We recall that for a Banach space $X$ the weak$^*$ topology $\sigma(X', X)$ on $X'$ is the coarsest topology such that all mappings $X' \ni x' \mapsto x'(x)$ ($x \in X$) are continuous.

**Proposition 13.2.1** *Let $(V_n)_n$ in $L_\infty(\mu)$ and $V \in L_\infty(\mu)$. Then the following statements hold.*

(b) *$V_n(\mathrm{m}) \to V(\mathrm{m})$ in the strong operator topology of $L(L_2(\mu))$ if and only if $(V_n)_n$ is bounded in $L_\infty(\mu)$ and $V_n \to V$ in $L_1(\mu)$.*

(c) *$V_n(\mathrm{m}) \to V(\mathrm{m})$ in the weak operator topology of $L(L_2(\mu))$ if and only if $V_n \to V$ in the weak$^*$ topology $\sigma\bigl(L_\infty(\mu), L_1(\mu)\bigr)$.*

#### *Proof*

(b) First, let $(V_n)_n$ be bounded in $L_\infty(\mu)$ and $V_n \to V$ in $L_1(\mu)$. For $f \in L_\infty(\mu)$ we estimate

$$\|V_n(\mathrm{m})f - V(\mathrm{m})f\|_{L_2(\mu)}^2 = \int_\Omega |V_n - V|^2\,|f|^2\,\mathrm{d}\mu \leqslant \sup_{n\in\mathbb{N}}\|V_n - V\|_{L_\infty(\mu)}\,\|f\|_{L_\infty(\mu)}^2 \int_\Omega |V_n - V|\,\mathrm{d}\mu \to 0.$$

Since *L*∞*(μ)* is dense in *L*2*(μ)* and *(Vn(*m*)* − *V (*m*))n* is bounded by Proposition 2.4.6, we obtain *Vn(*m*)* → *V (*m*)* in the strong operator topology of *L(L*2*(μ))*.

Now, let $V_n(\mathrm{m}) \to V(\mathrm{m})$ in the strong operator topology of $L(L_2(\mu))$. Then $(V_n(\mathrm{m}))_n$ is bounded in $L(L_2(\mu))$ by the uniform boundedness principle. Now Proposition 2.4.6 yields boundedness of $(V_n)_n$ in $L_\infty(\mu)$. Moreover, since $\mathbb{1} \in L_2(\mu)$, we deduce $V_n = V_n(\mathrm{m})\mathbb{1} \to V(\mathrm{m})\mathbb{1} = V$ in $L_2(\mu)$. Since $L_2(\mu)$ embeds continuously into $L_1(\mu)$ we obtain $V_n \to V$ in $L_1(\mu)$.

(c) The assertion follows easily upon realising that $\varphi \in L_1(\mu)$ if and only if there exist $\psi_1, \psi_2 \in L_2(\mu)$ such that $\varphi = \psi_1\psi_2$.
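The dichotomy in Proposition 13.2.1 can be checked numerically (a sketch, not part of the text): $V_n = \sin(2\pi n\,\cdot)$ converges to $0$ in the weak$^*$ topology, so the pairings $\int_0^1 V_n f\,\mathrm{d}x$ vanish in the limit, while $\|V_n(\mathrm{m})f\|_{L_2}^2$ stays bounded away from $0$, so $V_n(\mathrm{m}) \not\to 0$ strongly:

```python
import math

def riemann(g, pts=100000):
    # midpoint rule on (0, 1)
    return sum(g((j + 0.5) / pts) for j in range(pts)) / pts

f = lambda x: 1.0 + x  # a fixed test function in L_2((0,1))
for n in (1, 10, 100):
    Vn = lambda x, n=n: math.sin(2 * math.pi * n * x)
    weak = riemann(lambda x: Vn(x) * f(x))            # pairing with f as an L_1 function
    strong2 = riemann(lambda x: (Vn(x) * f(x)) ** 2)  # ||V_n(m) f||_{L_2}^2
    print(n, weak, strong2)
```

The pairings decay like $1/n$, while the squared norm approaches $\frac{1}{2}\int_0^1 (1+x)^2\,\mathrm{d}x = \frac{7}{6}$.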

With the latter result at hand together with the results in the previous section, we easily deduce the next theorem on continuous dependence on the coefficients.

**Theorem 13.2.2** *Let $(V_n)_n$ in $L_\infty(\mu)$ be bounded, $V \in L_\infty(\mu)$, and $V_n \to V$ in $L_1(\mu)$. Then there exists $\nu > 0$ such that*

$$\left(\partial_{t,\nu} + V_n(\mathrm{m})\right)^{-1} \to \left(\partial_{t,\nu} + V(\mathrm{m})\right)^{-1}$$

*in the strong operator topology of $L\bigl(L_{2,\nu}(\mathbb{R}; L_2(\mu))\bigr)$.*

Note that the convergence statement can be improved, see Exercise 13.3.

*Proof* By Proposition 13.2.1(b) we obtain $V_n(\mathrm{m}) \to V(\mathrm{m})$ in the strong operator topology of $L(L_2(\mu))$. Note that for $\nu \geqslant 1 + \sup_{n\in\mathbb{N}}\|V_n\|_{L_\infty(\mu)}$ we have

$$\operatorname{Re}\bigl(z + V_n(\mathrm{m})\bigr) \geqslant 1 \quad (z \in \mathbb{C}_{\operatorname{Re}>\nu},\ n\in\mathbb{N}).$$

Now Theorem 13.1.5 applied to $M_n(z) = 1 + z^{-1}V_n(\mathrm{m})$ yields the assertion.

*Remark 13.2.3* Theorem 13.2.2 can be generalised in the following way. Let $(B_n)_n$ in $L(H)$, $B \in L(H)$, and $B_n \to B$ in the strong operator topology. Then there exists $\nu > 0$ such that

$$\left(\partial_{t,\nu} + B_n\right)^{-1} \to \left(\partial_{t,\nu} + B\right)^{-1}$$

in the strong operator topology of $L\bigl(L_{2,\nu}(\mathbb{R}; H)\bigr)$.

In Theorem 13.2.2 we assumed strong convergence of the sequence of multiplication operators $(V_n(\mathrm{m}))_n$. A natural question to ask is whether the stated result can be improved to $(V_n)_n$ converging in the weak$^*$ topology $\sigma\bigl(L_\infty(\mu), L_1(\mu)\bigr)$ only. The answer is neither 'yes' nor 'no', but rather 'not quite', as we will show in the following. We start with a result on weak$^*$ limits of scaled periodic functions, which will serve as the prototypical example for a sequence converging in the weak$^*$ topology of $L_\infty$.

**Theorem 13.2.4** *Let $f \in L_\infty(\mathbb{R}^d)$ be $[0,1)^d$-periodic; that is,*

$$f(\cdot + k) = f \quad (k \in \mathbb{Z}^d).$$

*Then*

$$f(n\,\cdot) \to \int_{[0,1)^d} f(x)\,\mathrm{d}x\;\mathbb{1}_{\mathbb{R}^d}$$

*in the weak$^*$ topology $\sigma\bigl(L_\infty(\mathbb{R}^d), L_1(\mathbb{R}^d)\bigr)$ as $n\to\infty$.*

*Proof* Without loss of generality, we may assume $\int_{[0,1)^d} f(x)\,\mathrm{d}x = 0$. By the density of simple functions in $L_1(\mathbb{R}^d)$ and the boundedness of $(f(n\,\cdot))_n$ in $L_\infty(\mathbb{R}^d)$, it suffices to show

$$\int_Q f(nx)\,\mathrm{d}x \to 0 \quad (n\to\infty)$$

for $Q = [a,b] := [a_1,b_1] \times \dots \times [a_d,b_d]$, where $a = (a_1,\dots,a_d), b = (b_1,\dots,b_d) \in \mathbb{R}^d$. By translation and the periodicity of $f$ we may assume $a = 0$. Thus, it suffices to show

$$\int_{[0,b]} f(nx)\,\mathrm{d}x \to 0 \quad (n\to\infty)$$

for all $b \in (0,\infty)^d$. So, let $b = (b_1,\dots,b_d) \in (0,\infty)^d$ and $n \in \mathbb{N}$. Then we find $z \in \mathbb{N}_0^d$ and $\zeta \in [0,1)^d$ such that $nb = z + \zeta$. We compute

$$\begin{aligned} \int_{[0,b]} f(nx)\,\mathrm{d}x &= \frac{1}{n^d}\int_{[0,nb]} f(x)\,\mathrm{d}x \\ &= \frac{1}{n^d}\int_{[0,z_1]\times[0,nb_2]\times\dots\times[0,nb_d]} f(x)\,\mathrm{d}x + \frac{1}{n^d}\int_{(z_1,z_1+\zeta_1]\times[0,nb_2]\times\dots\times[0,nb_d]} f(x)\,\mathrm{d}x. \end{aligned}$$

We now estimate

$$\begin{aligned} \left|\frac{1}{n^d}\int_{(z_1,z_1+\zeta_1]\times[0,nb_2]\times\dots\times[0,nb_d]} f(x)\,\mathrm{d}x\right| &\leqslant \frac{1}{n^d}\int_{(z_1,z_1+\zeta_1]\times[0,nb_2]\times\dots\times[0,nb_d]} |f(x)|\,\mathrm{d}x \\ &\leqslant \frac{1}{n^d}\int_{(0,1]\times[0,nb_2]\times\dots\times[0,nb_d]} \mathrm{d}x\,\|f\|_{L_\infty(\mathbb{R}^d)} \\ &= \frac{1}{n}\,b_2\cdot\ldots\cdot b_d\,\|f\|_{L_\infty(\mathbb{R}^d)}. \end{aligned}$$

Continuing in this manner and using $z_j \leqslant nb_j$ for all $j \in \{1,\dots,d\}$, we obtain

$$\left|\int_{[0,b]} f(nx)\,\mathrm{d}x\right| \leqslant \frac{1}{n^d}\left|\int_{[0,z]} f(x)\,\mathrm{d}x\right| + \frac{1}{n}\sum_{j=1}^{d}\frac{b_1\cdot\ldots\cdot b_d}{b_j}\,\|f\|_{L_\infty(\mathbb{R}^d)}.$$

Since $f$ is $[0,1)^d$-periodic and $z \in \mathbb{N}_0^d$ we observe

$$\int_{[0,z]} f(x)\,\mathrm{d}x = \prod_{j=1}^{d} z_j \int_{[0,1)^d} f(x)\,\mathrm{d}x = 0.$$

Thus,

$$\left|\int_{[0,b]} f(nx)\,\mathrm{d}x\right| \leqslant \frac{1}{n}\sum_{j=1}^{d}\frac{b_1\cdot\ldots\cdot b_d}{b_j}\,\|f\|_{L_\infty(\mathbb{R}^d)},$$

which tends to 0 as *n* → ∞.
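Theorem 13.2.4 is easy to test numerically in $d = 1$ (a sketch, not part of the text). For the $[0,1)$-periodic function $f(x) = (x - \lfloor x\rfloor)^2$ with mean $\frac{1}{3}$, the integrals $\int_0^b f(nx)\,\mathrm{d}x$ approach $b/3$ at the rate $1/n$ quantified in the proof:

```python
import math

def frac(x):
    return x - math.floor(x)

def integral(n, b, pts=200000):
    # midpoint Riemann sum of f(nx) = frac(nx)^2 over [0, b]
    h = b / pts
    return h * sum(frac(n * (j + 0.5) * h) ** 2 for j in range(pts))

b, mean_f = 0.735, 1.0 / 3.0
for n in (1, 10, 100):
    print(n, integral(n, b), b * mean_f)
```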

*Remark 13.2.5* Note that Theorem 13.2.4 also yields

$$f(n\,\cdot) \to \int_{[0,1)^d} f(x)\,\mathrm{d}x\;\mathbb{1}_\Omega$$

in the weak$^*$ topology $\sigma\bigl(L_\infty(\Omega), L_1(\Omega)\bigr)$ for all measurable subsets $\Omega \subseteq \mathbb{R}^d$ with non-zero Lebesgue measure.

We now present an example which shows that weak<sup>∗</sup> convergence of *(Vn)n* does not yield the result of Theorem 13.2.2.

*Example 13.2.6* Let $(\Omega, \Sigma, \mu) = \bigl((0,1), \mathcal{B}((0,1)), \lambda|_{(0,1)}\bigr)$. For $n \in \mathbb{N}$ let $V_n$ be given by $V_n(x) := \sin(2\pi nx)$ for $x \in (0,1)$. Then, by Theorem 13.2.4, we obtain $V_n \to 0$ in $\sigma\bigl(L_\infty((0,1)), L_1((0,1))\bigr)$ as $n\to\infty$. Let $\nu > 1$. Then $\partial_{t,\nu} + V_n(\mathrm{m})$ is continuously invertible as an operator in $L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$. Let $f \in C([0,1])$ and denote $\tilde f \colon t \mapsto \mathbb{1}_{[0,\infty)}(t)f$. Then $\tilde f \in L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$. The solution $u_n \in L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$ of

$$(\partial_{t,\nu} + V_n(\mathrm{m}))u_n = \tilde f$$

is given by the variation of constants formula; that is,

$$u_n(t,x) = \mathbb{1}_{[0,\infty)}(t)\int_0^t \exp\bigl(-(t-s)\sin(2\pi nx)\bigr)\,\mathrm{d}s\, f(x) \quad (t\in\mathbb{R},\ x\in(0,1)).$$

Thus, if a variant of Theorem 13.2.2 were true also in this case, $(u_n)_n$ would need to converge (in some sense) to the solution $u$ of

$$\partial_{t,\nu}u = \tilde f,$$

which is given by

$$u(t,x) = \mathbb{1}_{[0,\infty)}(t)\,t\,f(x) \quad (t\in\mathbb{R},\ x\in(0,1)).$$

However, by Theorem 13.2.4, we deduce

$$\int_0^t \exp\bigl(-(t-s)\sin(2\pi n\,\cdot)\bigr)\,\mathrm{d}s \to \int_0^t J(-(t-s))\,\mathrm{d}s \quad (n\to\infty)$$

in $\sigma\bigl(L_\infty((0,1)), L_1((0,1))\bigr)$ for each $t \geqslant 0$, where

$$J(s) := \int\_0^1 \exp\left(s \sin(2\pi x)\right) dx \quad (s \in \mathbb{R})$$

denotes the modified Bessel function of the first kind of order $0$, cf. [1, 9.6.19]. Moreover, for $\varphi \in C_c^\infty(\mathbb{R})$ and $A \in \mathcal{B}((0,1))$, using dominated convergence we obtain

$$\begin{aligned} \langle u_n, \varphi\mathbb{1}_A\rangle_{L_{2,\nu}(\mathbb{R};L_2((0,1)))} &= \int_0^\infty\int_0^1\int_0^t \exp\bigl(-(t-s)\sin(2\pi nx)\bigr)\,\mathrm{d}s\, f(x)^*\,\mathbb{1}_A(x)\,\mathrm{d}x\,\varphi(t)\mathrm{e}^{-2\nu t}\,\mathrm{d}t \\ &\to \int_0^\infty\int_0^1\int_0^t J(-(t-s))\,\mathrm{d}s\, f(x)^*\,\mathbb{1}_A(x)\,\mathrm{d}x\,\varphi(t)\mathrm{e}^{-2\nu t}\,\mathrm{d}t \\ &= \langle \widetilde{u}, \varphi\mathbb{1}_A\rangle_{L_{2,\nu}(\mathbb{R};L_2((0,1)))} \end{aligned}$$

with

$$\widetilde{u}(t,x) := \mathbb{1}_{[0,\infty)}(t)\int_0^t J(-(t-s))\,\mathrm{d}s\, f(x) \quad (t\in\mathbb{R},\ x\in(0,1)).$$
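The function $J$ coincides with the modified Bessel function $I_0$, since a quarter-period shift of $x$ turns $\sin$ into $\cos$ in the integral; in particular $J$ is even, so $J(-(t-s)) = I_0(t-s)$. A quick numerical cross-check against the power series $I_0(s) = \sum_{k\geqslant 0} (s/2)^{2k}/(k!)^2$ (a sketch, not part of the text):

```python
import math

def J(s, pts=20000):
    # midpoint rule for J(s) = integral of exp(s sin(2 pi x)) over (0, 1)
    return sum(math.exp(s * math.sin(2 * math.pi * (j + 0.5) / pts))
               for j in range(pts)) / pts

def I0(s, terms=40):
    # power series of the modified Bessel function of order 0
    return sum((s / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

for s in (0.5, 1.0, -2.0):
    print(s, J(s), I0(s))
```

For smooth periodic integrands the midpoint rule over one full period converges extremely fast, so a moderate number of points already matches the series to high accuracy.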

Since $(u_n)_n$ is bounded in $L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$ and, by Lemma 3.1.9, the set $\bigl\{\varphi\mathbb{1}_A\ ;\ A \in \mathcal{B}((0,1)),\ \varphi \in C_c^\infty(\mathbb{R})\bigr\}$ is total in $L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$, we infer $u_n \to \widetilde{u}$ weakly in $L_{2,\nu}\bigl(\mathbb{R}; L_2((0,1))\bigr)$ as $n\to\infty$. In particular, $\widetilde{u} \neq u$. Furthermore, $\widetilde{u}$ is *not* of the form

$$\int_0^t \exp\bigl(-(t-s)\widetilde{V}(x)\bigr)\,\mathrm{d}s\, f(x)$$

for some *V* <sup>∈</sup> *<sup>L</sup>*∞*((*0*,* <sup>1</sup>*))* and hence, we *cannot* hope for *<sup>u</sup>* to satisfy an equation of the type

$$(\partial_{t,\nu} + \widetilde{V}(\mathrm{m}))\widetilde{u} = \tilde f.$$

As we shall see next, in the framework of evolutionary equations it is possible to derive an equation involving suitable limits of *(Vn)n* and *f* as a right-hand side.

## **13.3 Convergence in the Weak Operator Topology**

In this section, we consider a particular class of material laws and characterise convergence of the solution operators of the corresponding evolutionary equations in the weak operator topology. The main theorem that will serve to compute the limit equation satisfied by $\widetilde{u}$ in Example 13.2.6 reads as follows.

**Theorem 13.3.1** *Let $H$ be a Hilbert space, $(B_n)_n$ a bounded sequence in $L(H)$ and $\nu > \sup_{n\in\mathbb{N}}\|B_n\|$. Then $\bigl((\partial_{t,\nu} + B_n)^{-1}\bigr)_n$ converges in the weak operator topology of $L(L_{2,\nu}(\mathbb{R}; H))$ if and only if for all $k \in \mathbb{N}$ the sequence $(B_n^k)_n$ converges in the weak operator topology of $L(H)$. In either case, we have*

$$\left(\partial_{t,\nu} + B_n\right)^{-1} \to \sum_{k=0}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\,\partial_{t,\nu}^{-1}$$

*in the weak operator topology of $L(L_{2,\nu}(\mathbb{R}; H))$, where $C_k \in L(H)$ denotes the weak limit of $(B_n^k)_n$ for $k \in \mathbb{N}$ and $C_0 := 1_H$.*

*Remark 13.3.2* In the situation of Theorem 13.3.1, let $B_n^k \to C_k$ in the weak operator topology for all $k \in \mathbb{N}$. Let $L := \sup_{n\in\mathbb{N}}\|B_n\|$, $\nu > 2L$, and $f \in L_{2,\nu}(\mathbb{R}; H)$. By Theorem 13.3.1, if $(\partial_{t,\nu} + B_n)u_n = f$ for all $n \in \mathbb{N}$, then $(u_n)_n$ converges weakly in $L_{2,\nu}(\mathbb{R}; H)$ to some element $\widetilde{u} \in L_{2,\nu}(\mathbb{R}; H)$. In order to determine the differential equation satisfied by $\widetilde{u}$, we make the following observations: by weak convergence,

$$\|C_k\| \leqslant \liminf_{n\to\infty}\bigl\|B_n^k\bigr\| \leqslant L^k.$$

Hence, since $\bigl\|\partial_{t,\nu}^{-1}\bigr\|_{L(L_{2,\nu})} \leqslant \frac{1}{\nu}$ (see Sect. 3.2) we infer that

$$\sum\_{k=1}^{\infty} \left(-\,\partial\_{t,\nu}^{-1}\right)^k C\_k$$

converges in *L(L*2*,ν (*R; *H ))* and

$$\left\|\sum_{k=1}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\right\| \leqslant \sum_{k=1}^{\infty}\left\|\partial_{t,\nu}^{-1}\right\|^k \|C_k\| < \sum_{k=1}^{\infty}\frac{1}{2^k} = 1.$$

Hence, since $C_0 = 1_H$ we deduce that $\sum_{k=0}^{\infty}\bigl(-\partial_{t,\nu}^{-1}\bigr)^k C_k$ is boundedly invertible by the Neumann series. Thus, we obtain

$$\begin{aligned} f &= \partial_{t,\nu}\left(\sum_{k=0}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\right)^{-1}\widetilde{u} = \partial_{t,\nu}\left(1_H + \sum_{k=1}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\right)^{-1}\widetilde{u} \\ &= \partial_{t,\nu}\sum_{\ell=0}^{\infty}\left(-\sum_{k=1}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\right)^{\ell}\widetilde{u} = \partial_{t,\nu}\widetilde{u} + \partial_{t,\nu}\sum_{\ell=1}^{\infty}\left(-\sum_{k=1}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k C_k\right)^{\ell}\widetilde{u}. \end{aligned}$$

Before we prove Theorem 13.3.1 we revisit Example 13.2.6.

*Example 13.3.3 (Example 13.2.6 Continued)* By Theorem 13.3.1, we need to compute the limit of $\bigl(\sin^k(2\pi n\,\cdot)\bigr)_n$ in the weak$^*$ topology of $L_\infty((0,1))$ for all $k \in \mathbb{N}$. By Theorem 13.2.4, we obtain for all $k \in \mathbb{N}$

$$\lim_{n\to\infty}\sin^k(2\pi n\,\cdot) = \int_0^1 \sin^k(2\pi\xi)\,\mathrm{d}\xi\;\mathbb{1}_{(0,1)} = \begin{cases} \dfrac{(2m)!}{(m!2^m)^2}\,\mathbb{1}_{(0,1)}, & k = 2m \text{ for some } m\in\mathbb{N}, \\ 0, & k \text{ odd}, \end{cases}$$

in $\sigma\bigl(L_\infty((0,1)), L_1((0,1))\bigr)$. Hence, $u_n \to \widetilde{u}$ weakly, where $\widetilde{u}$ satisfies

$$\partial_{t,\nu}\widetilde{u} + \partial_{t,\nu}\sum_{\ell=1}^{\infty}\left(-\sum_{m=1}^{\infty}\partial_{t,\nu}^{-2m}\frac{(2m)!}{(m!2^m)^2}\right)^{\ell}\widetilde{u} = \tilde f$$

for *ν >* 2 by Remark 13.3.2.
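The moments used above can be verified numerically (a sketch, not part of the text): the midpoint rule over one period reproduces $\int_0^1 \sin^k(2\pi\xi)\,\mathrm{d}\xi = \frac{(2m)!}{(m!2^m)^2}$ for $k = 2m$ and $0$ for odd $k$:

```python
import math

def moment(k, pts=20000):
    # midpoint rule for the integral of sin^k(2 pi xi) over (0, 1)
    return sum(math.sin(2 * math.pi * (j + 0.5) / pts) ** k
               for j in range(pts)) / pts

def formula(k):
    # the limit claimed in Example 13.3.3
    if k % 2 == 1:
        return 0.0
    m = k // 2
    return math.factorial(2 * m) / (math.factorial(m) * 2 ** m) ** 2

for k in range(1, 7):
    print(k, moment(k), formula(k))
```

Since $\sin^k$ is a trigonometric polynomial of low degree, the equidistant midpoint rule integrates it exactly up to rounding.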

*Proof of Theorem 13.3.1* Before we prove the equivalence, we make some observations. Since $\nu > \sup_{n\in\mathbb{N}}\|B_n\| =: L$, by a Neumann series argument we deduce that

$$\left(\partial_{t,\nu} + B_n\right)^{-1} = \sum_{k=0}^{\infty}\left(-\partial_{t,\nu}^{-1}B_n\right)^k \partial_{t,\nu}^{-1} = \sum_{k=0}^{\infty}\left(-\partial_{t,\nu}^{-1}\right)^k B_n^k\,\partial_{t,\nu}^{-1}.$$

The series $\sum_{k=0}^{\infty}\bigl(-\partial_{t,\nu}^{-1}\bigr)^k B_n^k\,\partial_{t,\nu}^{-1}$ is absolutely convergent in $L(L_{2,\nu}(\mathbb{R}; H))$. Also note that for $M_n \colon \mathbb{C}_{\operatorname{Re}>L} \ni z \mapsto \sum_{k=0}^{\infty}\bigl(-\frac{1}{z}\bigr)^k B_n^k\,\frac{1}{z}$ we have $M_n \in \mathcal{M}(H,\nu)$.

Assume now that $(B_n^k)_n$ converges in the weak operator topology to some $C_k$ for all $k \in \mathbb{N}$. A little computation reveals that, as $n\to\infty$,

$$M\_n(z) \to \sum\_{k=0}^{\infty} \left(-\frac{1}{z}\right)^k C\_k \frac{1}{z} =: M(z) \quad (z \in \mathbb{C}\_{\text{Re} > L})$$

in the weak operator topology, where the series on the right-hand side converges in $L(H)$ since

$$\|C_k\| \leqslant \liminf_{n\to\infty}\bigl\|B_n^k\bigr\| \leqslant L^k \quad (k \in \mathbb{N}).$$

Moreover, since $\nu > L$, the sequence $(M_n)_n$ is bounded in $\mathcal{M}(H,\nu)$ and thus $M \in \mathcal{M}(H,\nu)$ and

$$M_n(\partial_{t,\nu}) \to M(\partial_{t,\nu})$$

in the weak operator topology by Theorem 13.1.2.

Now, we assume that $\bigl((\partial_{t,\nu} + B_n)^{-1}\bigr)_n$ converges in the weak operator topology. Then $(M_n(\partial_{t,\nu}))_n$ converges in the weak operator topology. Let $k \in \mathbb{N}$. We need to show that for all $\phi,\psi \in H$ the sequence $\bigl(\langle\phi, B_n^k\psi\rangle_H\bigr)_n$ converges to some number $c_{k,\phi,\psi}$ as $n\to\infty$. The Riesz representation theorem then yields the existence of $C_k \in L(H)$ with $\langle\phi, C_k\psi\rangle = c_{k,\phi,\psi}$. So, let $\phi,\psi \in H$. Moreover, we consider the functions $m_n$ and $h_n$ given by

$$m\_n(z) := \sum\_{k=0}^{\infty} (-z)^k z \left< \phi, B\_n^k \psi \right>\_H \quad (z \in B(0, 1/L), n \in \mathbb{N})$$

and

$$h\_n(z) := \langle \phi, M\_n(z)\psi \rangle\_H = \sum\_{k=0}^{\infty} \frac{1}{z} \left(-\frac{1}{z}\right)^k \left\langle \phi, B\_n^k \psi \right\rangle\_H \quad (z \in \mathbb{C}\_{\text{Re} > L}, n \in \mathbb{N}).$$

Clearly, *mn* and *hn* are holomorphic on their respective domains for each *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> and the sequences *(mn)n* and *(hn)n* are uniformly bounded on compact subsets (in other words they form normal families). Moreover,

$$m\_n(z) = h\_n\left(\frac{1}{z}\right) \quad \left(z \in B\left(1/(2L), 1/(2L)\right), n \in \mathbb{N}\right).$$

We aim to show that the coefficients of the power series of *mn* converge as *n* tends to infinity. The proof will be done in two steps. In step 1, we will prove that the sequence *(hn)n* converges to a holomorphic function *<sup>h</sup>*: <sup>C</sup>Re*>L* <sup>→</sup> <sup>C</sup> uniformly on compact sets. Then, in the second step, we will use this to deduce that *(mn)n* also converges uniformly on compact sets and prove the assertion with the help of Cauchy's integral formula.

*Step 1:* By Proposition 5.3.2, *(Mn(*im + *ν))n* converges in the weak operator topology of *L(L*2*(*R; *H ))*. For *f, g* <sup>∈</sup> *<sup>L</sup>*2*(*R*)* we thus obtain that

$$\left( \langle f, h\_n(\mathrm{im} + \nu)g \rangle\_{L\_2(\mathbb{R})} \right)\_n = \left( \langle f\phi, M\_n(\mathrm{im} + \nu)g\psi \rangle\_{L\_2(\mathbb{R}; H)} \right)\_n$$

is convergent. Thus, using $L_2(\mathbb{R}) \cdot L_2(\mathbb{R}) = L_1(\mathbb{R})$, we obtain that

$$\Psi \colon L_1(\mathbb{R}) \ni u \mapsto \lim_{n\to\infty}\int_{\mathbb{R}} h_n(\mathrm{i}t + \nu)u(t)\,\mathrm{d}t \in \mathbb{C}$$

defines a linear functional, which is continuous, since

$$\sup_{n \in \mathbb{N}} \sup_{t \in \mathbb{R}} \|M_n(\mathrm{i}t + \nu)\|_{L(H)} = \sup_{n \in \mathbb{N}} \|M_n(\mathrm{im} + \nu)\|_{L(L_2(\mathbb{R}; H))} < \infty$$

by boundedness of $(B_n)_n$. Hence, since $L_1(\mathbb{R})' = L_\infty(\mathbb{R})$, we find a unique $\widetilde{h} \in L_\infty(\mathbb{R})$ with

$$\lim\_{n \to \infty} \int\_{\mathbb{R}} h\_n(\mathbf{i}t + \nu) \mu(t) \, \mathrm{d}t = \int\_{\mathbb{R}} \widetilde{h}(t) \mu(t) \, \mathrm{d}t \quad (\mu \in L\_1(\mathbb{R})).$$

We now show that every subsequence $(h_{n_k})_k$ of $(h_n)_n$ has a subsequence $(h_{n_{k_l}})_l$ which converges locally uniformly to a holomorphic function $h\colon \mathbb{C}_{\mathrm{Re}>L} \to \mathbb{C}$ such that $h(\mathrm{i}\cdot+\nu) = \widetilde{h}$ a.e., and that this implies that the limit $h$ does not depend on the subsequences. Then we conclude that $(h_n)_n$ itself converges locally uniformly to $h$.

So, let $(h_{n_k})_k$ be a subsequence of $(h_n)_n$. By Montel's theorem (see [104, Theorem 6.2.2]), we find a subsequence $(h_{n_{k_l}})_l$ of $(h_{n_k})_k$ such that $h_{n_{k_l}} \to h$ as $l \to \infty$ uniformly on compact subsets of $\mathbb{C}_{\mathrm{Re}>L}$ for some holomorphic function $h\colon \mathbb{C}_{\mathrm{Re}>L} \to \mathbb{C}$. In particular, we obtain

$$\lim_{l \to \infty} \int_{\mathbb{R}} h_{n_{k_l}}(\mathrm{i}t + \nu) \varphi(t) \, \mathrm{d}t = \int_{\mathbb{R}} h(\mathrm{i}t + \nu) \varphi(t) \, \mathrm{d}t \quad (\varphi \in C_{\mathrm{c}}(\mathbb{R})),$$

by dominated convergence, and hence $h(\mathrm{i}t + \nu) = \widetilde{h}(t)$ for almost every $t \in \mathbb{R}$. This shows that the limit $h$ is independent of the choice of the subsequences $(h_{n_k})_k$ and $(h_{n_{k_l}})_l$. Indeed, if $h_0\colon \mathbb{C}_{\mathrm{Re}>L} \to \mathbb{C}$ is the limit of another subsubsequence of $(h_n)_n$ as above, then $h_0(\mathrm{i}\cdot+\nu) = \widetilde{h} = h(\mathrm{i}\cdot+\nu)$ a.e. Since $h_0$ and $h$ are holomorphic, the identity theorem yields $h_0 = h$.

Now, assume for a contradiction that $(h_n)_n$ does not converge locally uniformly to $h$. Then we find a subsequence $(h_{n_k})_k$ of $(h_n)_n$, a compact set $K \subseteq \mathbb{C}_{\mathrm{Re}>L}$ and $\varepsilon > 0$ such that

$$\left\| h_{n_k} - h \right\|_{\infty, K} \geqslant \varepsilon \quad (k \in \mathbb{N}).\tag{13.2}$$

However, the subsequence $(h_{n_k})_k$ has a subsequence $(h_{n_{k_l}})_l$ which converges locally uniformly to $h$, contradicting (13.2). Thus, $(h_n)_n$ itself converges locally uniformly to $h$ and, in particular, $h_n \to h$ pointwise on $\mathbb{C}_{\mathrm{Re}>L}$.

*Step 2:* By what we have shown in Step 1, the sequence $(m_n)_{n\in\mathbb{N}}$ converges pointwise on $B(1/(2L), 1/(2L))$. Since $(m_n)_n$ is also uniformly bounded on compact subsets of $B(0, 1/L)$, we derive that $(m_n)_n$ converges uniformly on compact subsets of $B(0, 1/L)$ by Vitali's theorem (see [104, Theorem 6.2.8]). Choosing $0 < r < 1/L$, we thus obtain by Cauchy's integral formula

$$\left\langle \phi, \, B\_n^k \psi \right\rangle\_H = (-1)^k \frac{1}{2\pi \mathbf{i}} \int\_{\partial B(0,r)} \frac{m\_n(\boldsymbol{z})}{\boldsymbol{z}^{k+2}} \, \mathrm{d}\boldsymbol{z}.$$

Thus, $(B_n^k)_n$ converges in the weak operator topology as $n \to \infty$. $\square$
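The coefficient extraction at the heart of Step 2 can be checked numerically in the scalar toy case $H = \mathbb{C}$ with $B_n \equiv b$ (an assumption made purely for illustration), where $m(z) = \sum_k (-z)^k z\, b^k = z/(1+bz)$ and Cauchy's formula recovers $b^k$ from values of $m$ on a circle:

```python
import numpy as np

# scalar toy model: H = C and B_n = b (illustrative assumption), so that
# m(z) = sum_k (-z)^k z b^k = z / (1 + b z) on B(0, 1/|b|)
b, r, N = 0.5, 0.8, 4096
theta = 2*np.pi*np.arange(N)/N
z = r*np.exp(1j*theta)                  # contour dB(0, r)
m = z/(1 + b*z)

def coefficient(k):
    # <phi, B^k psi> = (-1)^k (1/(2 pi i)) \oint m(z)/z^{k+2} dz,
    # discretised by the trapezoidal rule (spectrally accurate on a circle)
    integrand = m/z**(k + 2) * (1j*z)   # dz = i z dtheta
    return (-1)**k * np.sum(integrand) * (2*np.pi/N)/(2j*np.pi)

for k in range(6):
    assert abs(coefficient(k) - b**k) < 1e-10
```

The only nonzero Fourier mode picked up by the discrete sum is the $(k+1)$-st power-series coefficient of $m$, so the recovery is exact up to an exponentially small aliasing error.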

## **13.4 Comments**

The problems discussed here are contained in [133, 138] for both the weak and the strong operator topology, where the case of differential-algebraic equations is treated as well.

The appearance of memory effects, that is, the occurrence of higher order integral operators due to weak convergence of the coefficients, was first observed by Tartar and can, for instance, be found in [113]. The limit equation there, however, is described by a convolution term rather than a power series of integral operators. It is possible to reformulate these resulting equations into one another, see [135].

The last characterisation of weak convergence in Theorem 13.3.1 was formulated for the first time in [89].

## **Exercises**

**Exercise 13.1** Let $(V_n)_n$ be a sequence in $L_\infty(\mathbb{R}^d)$ and $V \in L_\infty(\mathbb{R}^d)$. Characterise convergence $V_n(\mathrm{m}) \to V(\mathrm{m})$ in the strong operator topology of $L(L_2(\mathbb{R}^d))$ in terms of convergence of $(V_n)_n$, similarly to Proposition 13.2.1.

**Exercise 13.2** Show that there exists an unbounded sequence $(V_n)_n$ in $L_\infty((0,1))$ and $V \in L_\infty((0,1))$ with $V_n \to V$ in $L_1((0,1))$.

**Exercise 13.3** Let $(\Omega, \Sigma, \mu)$ be a finite measure space, $(V_n)_n$ a bounded sequence in $L_\infty(\mu)$ and assume that $V_n \to V$ in $L_1(\mu)$ for some $V \in L_\infty(\mu)$. Show that there exists $\nu > 0$ such that

$$\left(\partial\_{\mathbf{t},\boldsymbol{\nu}} + V\_n(\mathbf{m})\right)^{-1} \to \left(\partial\_{\mathbf{t},\boldsymbol{\nu}} + V(\mathbf{m})\right)^{-1}$$

in the strong operator topology of $L\big(L_{2,\nu}(\mathbb{R}; L_2(\mu)), H^1_\nu(\mathbb{R}; L_2(\mu))\big)$.

**Exercise 13.4** Let $D = \bigcup_{n\in\mathbb{Z}} [n + 1/2, n + 1]$, $V_n := \mathbb{1}_D(n\,\cdot)$. For suitable $\nu > 0$ compute the limit of

$$\left(\left(\partial_{t,\nu} + V_n(\mathrm{m})\right)^{-1}\right)_n$$

in the weak operator topology of $L\big(L_{2,\nu}(\mathbb{R}; L_2((0,1)))\big)$.

**Exercise 13.5** Let $H$ be a Hilbert space, $c > 0$ and $c \leqslant B_n = B_n^* \in L(H)$ for all $n \in \mathbb{N}$. Characterise, in terms of convergence of $(B_n)_n$ in a suitable sense, that

$$\left(\left(\partial_{t,\nu} B_n\right)^{-1}\right)_n$$

converges in the weak operator topology. In the case of convergence, find its limit and a sufficient condition for which there exists a *B* ∈ *L(H )* such that

$$(\partial_{t,\nu} B_n)^{-1} \to (\partial_{t,\nu} B)^{-1}$$

in the weak operator topology.

**Exercise 13.6** Let $H$ be a Hilbert space. Show that $B_{L(H)} := \{B \in L(H)\;;\;\|B\| \leqslant 1\}$ is a compact subset under the weak operator topology. If, in addition, $H$ is separable, show that $B_{L(H)}$ is also metrisable under the weak operator topology.

**Exercise 13.7** Let $H$ be a separable Hilbert space and $(B_n)_n$ a bounded sequence in $L(H)$. Show that there exist a subsequence $(B_{n_k})_k$ of $(B_n)_n$, a material law $M\colon \operatorname{dom}(M) \to L(H)$ and $\nu > 0$ such that given $f \in L_{2,\nu}(\mathbb{R}; H)$ and $(u_k)_k$ in $L_{2,\nu}(\mathbb{R}; H)$ with

$$\partial_{t,\nu} u_k + B_{n_k} u_k = f \quad (k \in \mathbb{N}),$$

we deduce that $(u_k)_k$ converges weakly to some $u \in L_{2,\nu}(\mathbb{R}; H)$ with the property that

$$\partial_{t,\nu} M(\partial_{t,\nu}) u = f.$$

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 14 Continuous Dependence on the Coefficients II**

This chapter is concerned with the study of problems of the form

$$\left(\partial_{t,\nu} M_n(\partial_{t,\nu}) + A\right) U_n = F$$

for a suitable sequence of material laws $(M_n)_n$ when $A \neq 0$. The aim of this chapter is to provide the conditions required for convergence of the material law sequence to imply the existence of a limit material law $M$ such that the limit $U = \lim_{n\to\infty} U_n$ exists and satisfies

$$\left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right) U = F.$$

Additionally, for material laws of the form $M_n(\partial_{t,\nu}) = M_{0,n} + \partial_{t,\nu}^{-1} M_{1,n}$ it will be desirable to have the respective limit material law satisfy $M(\partial_{t,\nu}) = M_0 + \partial_{t,\nu}^{-1} M_1$ for some $M_0, M_1 \in L(H)$. This cannot be expected (as we have seen in the guiding example in the previous chapter) if $A$ is a bounded operator, the Hilbert space $H$ is infinite-dimensional, and the material law sequence only converges pointwise in the weak operator topology. It will turn out, however, that if $A$ is "strictly unbounded" then a suitable result can hold, even if we only assume weak convergence of the material law operators.

## **14.1 A Convergence Theorem**

The main convergence theorem of this chapter will be presented next.

**Theorem 14.1.1** *Let $H$ be a Hilbert space, $\nu_0 \in \mathbb{R}$, $(M_n)_n$ in $\mathcal{M}(H, \nu_0)$ and $M \in \mathcal{M}(H, \nu_0)$. Assume there exists $c > 0$ such that for all $n \in \mathbb{N}$ we have*

$$\operatorname{Re} z M_n(z) \geqslant c \quad (z \in \mathbb{C}_{\operatorname{Re} > \nu_0}).$$

© The Author(s) 2022


C. Seifert et al., *Evolutionary Equations*, Operator Theory: Advances and Applications 287, https://doi.org/10.1007/978-3-030-89397-2\_14

*Let $A\colon \operatorname{dom}(A) \subseteq H \to H$ be skew-selfadjoint and assume $\operatorname{dom}(A) \hookrightarrow H$ compactly. If $M_n(z) \to M(z)$ as $n \to \infty$ in the weak operator topology for all $z \in \mathbb{C}_{\mathrm{Re}>\nu_0}$, then*

$$\left(\overline{\partial_{t,\nu} M_n(\partial_{t,\nu}) + A}\right)^{-1} \to \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1}$$

*in the strong operator topology of $L(L_{2,\nu}(\mathbb{R}; H))$ for each $\nu > \nu_0$.*

For the proof of this theorem, we need a lemma first.

**Lemma 14.1.2** *Let $H$ be a Hilbert space, $A\colon \operatorname{dom}(A) \subseteq H \to H$ skew-selfadjoint, $c > 0$, $(T_n)_n$ in $L(H)$ with $\operatorname{Re} T_n \geqslant c$ for all $n \in \mathbb{N}$, and $T \in L(H)$. Assume $\operatorname{dom}(A) \hookrightarrow H$ compactly and $T_n \to T$ in the weak operator topology. Then $0 \in \bigcap_{n\in\mathbb{N}} \rho(T_n + A) \cap \rho(T + A)$ and*

$$(T\_n + A)^{-1} \to (T + A)^{-1}$$

*in the norm topology of L(H ).*

*Proof* From $\operatorname{Re} T_n \geqslant c$ it follows that $0 \in \rho(T_n + A)$ ($n \in \mathbb{N}$) and $((T_n + A)^{-1})_n$ is bounded in $L(H)$. Indeed, since $B := T_n + A$ satisfies $\operatorname{Re} B = \operatorname{Re} T_n \geqslant c$ and $\operatorname{dom}(B) = \operatorname{dom}(A) = \operatorname{dom}(B^*)$ due to the skew-selfadjointness of $A$, Proposition 6.3.1 yields the assertion. Moreover, since

$$A(T\_n + A)^{-1} = 1 - T\_n(T\_n + A)^{-1}$$

for all $n \in \mathbb{N}$, it follows that $((T_n + A)^{-1})_n$ is also bounded in $L(H, \operatorname{dom}(A))$ by the boundedness of $(T_n)_n$ in $L(H)$. Due to the convergence of $(T_n)_n$ to $T$, it follows that $\operatorname{Re} T \geqslant c$, and thus $(T + A)^{-1} \in L(H, \operatorname{dom}(A))$. Before we come to a proof of the desired result, we will prove an auxiliary observation.

Claim: for all $(f_n)_n$ in $H$ weakly converging to $f$, we have $(T_n + A)^{-1} f_n \to (T + A)^{-1} f$ in the norm topology of $H$.

For proving the claim, let $(f_n)_n$ in $H$ be weakly convergent to some $f$. Consider $u_n := (T_n + A)^{-1} f_n$. Then $(u_n)_n$ is bounded in $\operatorname{dom}(A)$, since $((T_n + A)^{-1})_n$ is bounded in $L(H, \operatorname{dom}(A))$ and $(f_n)_n$ is bounded in $H$. Hence, there exists a subsequence $(u_{n_k})_k$ which weakly converges to some $u$ in $\operatorname{dom}(A)$. Since $\operatorname{dom}(A) \hookrightarrow H$ compactly, we infer $u_{n_k} \to u$ in the norm topology of $H$. Hence, in the equality

$$T\_{n\_k}u\_{n\_k} + Au\_{n\_k} = f\_{n\_k},$$

as $T_{n_k} \to T$ in the weak operator topology and $u_{n_k} \to u$ in $H$, we may let $k \to \infty$ and obtain for the weak limits

$$Tu + Au = f;$$

that is, $u = (T + A)^{-1} f$. Having identified the limit, a contradiction argument (here a so-called 'subsequence argument', see Exercise 14.3) shows that $(u_n)_n$ itself converges weakly in $\operatorname{dom}(A)$ and strongly in $H$ to $u$. Thus, the claim is proved.

Next, assume by contradiction that $((T_n + A)^{-1})_n$ does not converge in operator norm to $(T + A)^{-1}$. Then we find an $\varepsilon > 0$, a strictly increasing sequence of integers $(n_k)_k$, and a sequence of unit vectors $(f_{n_k})_k$ in $H$ such that

$$\left\| \left( T\_{n\_k} + A \right)^{-1} f\_{n\_k} - (T + A)^{-1} f\_{n\_k} \right\| \geqslant \varepsilon. \tag{14.1}$$

By possibly taking another subsequence, we may assume without loss of generality that $(f_{n_k})_k$ converges weakly to some $f \in H$. By the claim proved above, we deduce $(T_{n_k} + A)^{-1} f_{n_k} \to (T + A)^{-1} f$ and $(T + A)^{-1} f_{n_k} \to (T + A)^{-1} f$, both in the norm topology of $H$ as $k \to \infty$. Thus, we may let $k \to \infty$ in (14.1) and obtain the desired contradiction. $\square$

*Proof of Theorem 14.1.1* By Theorem 13.1.2 it suffices to show that for all $z \in \mathbb{C}_{\mathrm{Re}>\nu_0}$

$$(zM\_n(z) + A)^{-1} \to (zM(z) + A)^{-1} \quad (n \to \infty)$$

in the strong operator topology. This, however, follows from Lemma 14.1.2 applied to *Tn* = *zMn(z)*.

*Remark 14.1.3* Note that we only used convergence in the strong operator topology in the proof of Theorem 14.1.1. However, the assertion in Lemma 14.1.2 is about convergence in the norm topology. The reason we cannot assert the convergence claimed in Theorem 14.1.1 in the norm topology is that the compact embedding $\operatorname{dom}(A) \hookrightarrow H$ only works locally for fixed $z$, and not uniformly in $z$. This situation can, however, be rectified; we refer to Exercise 14.1 for this.

## **14.2 The Theorem of Rellich and Kondrachov**

In order to apply Theorem 14.1.1, we need to provide a setting where the condition on the compactness of the embedding is satisfied. In fact, it is true that $H^1(\Omega)$ embeds compactly into $L_2(\Omega)$ whenever $\Omega \subseteq \mathbb{R}^d$ is bounded and has 'continuous boundary', see e.g. [5, Theorem 7.11]. In this chapter, we restrict ourselves to a proof of a less general statement.

A preparatory result needed to prove the compact embedding theorem is given next.

**Proposition 14.2.1** *Let $I \subseteq \mathbb{R}$ be an open, bounded, non-empty interval. Then the mapping $H^1(\mathbb{R}) \ni f \mapsto f|_I \in H^1(I)$ is well-defined, continuous and onto. Moreover, there exists a continuous right inverse $H^1(I) \to H^1(\mathbb{R})$.*

For the proof of this proposition, we need an auxiliary result first.

**Lemma 14.2.2** *Let $\Omega \subseteq \mathbb{R}^d$ be open and connected. Moreover, let $u \in H^1(\Omega)$ with $\operatorname{grad} u = 0$. Then $u$ is constant.*

We leave the proof of this lemma as Exercise 14.2.

*Proof of Proposition 14.2.1* The mapping $H^1(\mathbb{R}) \to H^1(I),\ f \mapsto f|_I$ is readily confirmed to be continuous. It remains to prove that it is onto. Let $I = (a,b)$, $u \in H^1(I)$ and define the function $v$ by

$$v(t) := \int_a^t \partial u(s) \, \mathrm{d}s \quad (t \in (a,b)).$$

Clearly, $v \in L_2((a,b))$ and we compute for each $\varphi \in C_{\mathrm{c}}^\infty((a,b))$

$$\langle v, \varphi' \rangle_{L_2((a,b))} = \int_a^b \left( \int_a^t \partial u(s) \, \mathrm{d}s \right)^{\!*} \varphi'(t) \, \mathrm{d}t = \int_a^b \int_s^b \varphi'(t) \, \mathrm{d}t \, \partial u(s)^* \, \mathrm{d}s = - \langle \partial u, \varphi \rangle_{L_2((a,b))}.$$

This shows $v \in H^1((a,b))$ with $\partial v = \partial u$. Hence, by Lemma 14.2.2 there exists a constant $c \in \mathbb{C}$ with $u = c + v$. We now define $f$ by

$$f(t) := \begin{cases} 0 & \text{if } t < a - 1 \text{ or } t > b + 1, \\ ct + c(1 - a) & \text{if } a - 1 \le t \le a, \\ u(t) & \text{if } a < t < b, \\ -(c + v(b))t + (c + v(b))(1 + b) & \text{if } b \le t \le b + 1. \end{cases}$$

We then easily see that $f \in H^1(\mathbb{R})$ and clearly $f|_{(a,b)} = u$. In order to see that $u \mapsto f$ is continuous, we need to establish that the value $c$ depends continuously on $u$. This, however, follows from the estimate

$$\begin{aligned} |c| &= \frac{1}{\sqrt{b-a}} \left( \int_a^b |c|^2 \right)^{1/2} \leqslant \frac{1}{\sqrt{b-a}} \big(\|u\|_{L_2(a,b)} + \|v\|_{L_2(a,b)}\big)\\ &\leqslant \frac{1}{\sqrt{b-a}} \big(\|u\|_{L_2(a,b)} + (b-a) \|\partial u\|_{L_2(a,b)}\big)\\ &\leqslant \frac{\sqrt{2} \max\{1, (b-a)\}}{\sqrt{b-a}} \, \|u\|_{H^1(a,b)}. \end{aligned}$$
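The extension construction can be illustrated numerically. The following sketch assumes the concrete data $I = (0,1)$ and $u(t) = t^2 + 3$ (chosen for illustration, so that $v(t) = t^2$ and $c = 3$ are available in closed form) and checks that the piecewise formula glues continuously and restricts back to $u$:

```python
import numpy as np

# illustrative data: I = (a, b) = (0, 1), u(t) = t^2 + 3, so that
# v(t) = int_a^t u'(s) ds = t^2 and the constant c = u - v equals 3
a, b_ = 0.0, 1.0
u = lambda t: t**2 + 3.0
v = lambda t: t**2
c = 3.0

def f(t):
    # the H^1(R)-extension constructed in the proof
    if t < a - 1 or t > b_ + 1:
        return 0.0
    if t <= a:
        return c*t + c*(1 - a)                       # ramp: 0 at a-1, c at a
    if t < b_:
        return u(t)
    return -(c + v(b_))*t + (c + v(b_))*(1 + b_)     # ramp: u(b) at b, 0 at b+1

for t in np.linspace(0.1, 0.9, 9):
    assert abs(f(t) - u(t)) < 1e-12                  # f restricts back to u
assert f(a - 1) == 0.0 and f(b_ + 1) == 0.0          # compactly supported
assert abs(f(a - 1e-9) - f(a + 1e-9)) < 1e-6         # continuous gluing at a
assert abs(f(b_ - 1e-9) - f(b_ + 1e-9)) < 1e-6       # continuous gluing at b
```

The two linear ramps are exactly the ones in the displayed definition of $f$; continuity at $a$ and $b$ reflects $f(a) = c = u(a)$ and $f(b) = c + v(b) = u(b)$.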

**Theorem 14.2.3** *Let $I \subseteq \mathbb{R}$ be an open, bounded interval. Then $H^1(I) \hookrightarrow L_2(I)$ compactly.*

*Proof* By Proposition 14.2.1, we find a continuous mapping $E\colon H^1(I) \to H^1(\mathbb{R})$ such that for all $u \in H^1(I)$ we have $E(u)|_I = u$. Moreover, by Exercise 4.3 the mapping $H^1(\mathbb{R}) \to C^{1/2}(\mathbb{R})$ is continuous. Thus,

$$H^1(I) \xrightarrow{E} H^1(\mathbb{R}) \hookrightarrow \mathcal{C}^{1/2}(\mathbb{R}) \to \mathcal{C}^{1/2}(I),$$

is a composition of continuous mappings, where the last mapping is the restriction to $I$. Since $C^{1/2}(I) \hookrightarrow C(I)$ compactly by the Arzelà–Ascoli theorem, and $C(I) \hookrightarrow L_2(I)$ continuously, we infer $H^1(I) \hookrightarrow L_2(I)$ compactly. $\square$
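The $H^1$-bound in this theorem is essential: a sequence that is merely bounded in $L_2$ need not admit an $L_2$-convergent subsequence. A standard witness is $(\sin(2\pi n\,\cdot))_n$ on $(0,1)$, whose members keep pairwise squared $L_2$-distance $1$; a small numerical check (midpoint rule, an illustrative discretisation):

```python
import numpy as np

M = 20000
x = (np.arange(M) + 0.5)/M            # midpoint rule nodes on (0, 1)

def dist2(n, m):
    # squared L_2(0,1)-distance of sin(2 pi n x) and sin(2 pi m x)
    return float(np.mean((np.sin(2*np.pi*n*x) - np.sin(2*np.pi*m*x))**2))

# for n != m the squared distance is exactly 1 (the functions sqrt(2) sin(2 pi n x)
# are orthonormal), so no subsequence of this L_2-bounded sequence is Cauchy in L_2
for n in range(1, 6):
    for m in range(n + 1, 7):
        assert abs(dist2(n, m) - 1.0) < 1e-9
```

The midpoint rule integrates these trigonometric polynomials exactly up to aliasing, which is absent for the low frequencies used here.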

We now have the opportunity to study the limit behaviour of a periodic mixed type problem.

*Example 14.2.4 (Highly Oscillatory Problems)* Let $s_1, s_2\colon \mathbb{R} \to [0,1]$ be 1-periodic, measurable functions. Then for $\nu > 0$, we set

$$S^{(n)} := \overline{\left(\partial_{t,\nu} \begin{pmatrix} s_1(n\mathrm{m}) & 0\\ 0 & s_2(n\mathrm{m}) \end{pmatrix} + \begin{pmatrix} 1 - s_1(n\mathrm{m}) & 0\\ 0 & 1 - s_2(n\mathrm{m}) \end{pmatrix} + \begin{pmatrix} 0 & \partial\\ \partial_0 & 0 \end{pmatrix}\right)^{-1}},$$

where $\partial = \operatorname{div}$ and $\partial_0 = \operatorname{grad}_0$ are regarded as operators in $L_2((0,1))$ with respective domains $H^1((0,1))$ and $H^1_0((0,1))$. Then, by Theorem 14.2.3, the operator $A := \begin{pmatrix} 0 & \partial\\ \partial_0 & 0 \end{pmatrix}$ satisfies the assumptions of Theorem 14.1.1. Moreover, Theorem 13.2.4 implies that the remaining assumptions of Theorem 14.1.1 are satisfied. Hence, we deduce that $(S^{(n)})_n$ converges in the strong operator topology on $L\big(L_{2,\nu}(\mathbb{R}; L_2((0,1))^2)\big)$ to the limit

$$\overline{\left(\partial_{t,\nu} \begin{pmatrix} \int_0^1 s_1 & 0\\ 0 & \int_0^1 s_2 \end{pmatrix} + \begin{pmatrix} 1 - \int_0^1 s_1 & 0\\ 0 & 1 - \int_0^1 s_2 \end{pmatrix} + \begin{pmatrix} 0 & \partial\\ \partial_0 & 0 \end{pmatrix}\right)^{-1}}.$$
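The mean values $\int_0^1 s_i$ in the limit reflect the weak-* convergence $s_i(n\,\cdot) \rightharpoonup \int_0^1 s_i$. This can be checked numerically for the concrete choice $s = \mathbb{1}_D$ with $D = \bigcup_{n\in\mathbb{Z}} [n + 1/2, n + 1]$ (as in Exercise 13.4) and the test function $f(x) = \cos(3x)$, both chosen purely for illustration:

```python
import numpy as np

def F(x):
    # antiderivative of the test function f(x) = cos(3x)
    return np.sin(3*x)/3

def pairing(n):
    # exact value of int_0^1 1_D(n x) cos(3x) dx for D = U_j [j + 1/2, j + 1]:
    # 1_D(n x) = 1 precisely on the intervals [(j+1/2)/n, (j+1)/n]
    j = np.arange(n)
    return float(np.sum(F((j + 1)/n) - F((j + 0.5)/n)))

mean_s = 0.5                          # mean value int_0^1 1_D = 1/2
limit = mean_s*(F(1.0) - F(0.0))      # predicted weak-* limit of the pairing

errors = [abs(pairing(n) - limit) for n in (1, 10, 100, 1000)]
assert errors[-1] < 1e-3              # pairing approaches the mean-value limit
assert errors[-1] < errors[1] < errors[0]
```

The error decays like $1/n$, consistent with the rate one expects for indicator coefficients against a smooth test function.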

Next, we aim to provide an application to more than one spatial dimension. For this, we will also need a corresponding compactness statement. This is the subject of the rest of this section.

**Theorem 14.2.5 (Rellich–Kondrachov)** *Let $\Omega \subseteq \mathbb{R}^d$ be open and bounded. Then $H^1_0(\Omega) \hookrightarrow L_2(\Omega)$ compactly.*

*Proof* Without loss of generality (by shifting and shrinking $\Omega$ and extending by 0), we may assume that $\Omega = (0,1)^d$. We carry out the proof by induction on the spatial dimension $d$. The case $d = 1$ has been dealt with in Theorem 14.2.3. Assume the statement is true for $d-1$. Using that $C_{\mathrm{c}}^\infty((0,1)^d)$ is dense in $H^1_0((0,1)^d)$,


we infer the continuity of the injection

$$R \colon H\_0^1((0,1)^d) \to H^1\left(\mathbb{R}; L\_2((0,1)^{d-1})\right) \cap L\_2\left(\mathbb{R}; H\_0^1((0,1)^{d-1})\right),$$

$$\phi \mapsto \left(t \mapsto \left(\omega \mapsto \phi(t,\omega)\right)\right),$$

where we identify *φ* with its extension to R*<sup>d</sup>* by 0. The range space is endowed with the usual sum scalar product.

Let $(\phi_n)_n$ be a weakly convergent nullsequence in $H^1_0((0,1)^d)$. In particular, $(R\phi_n)_n$ is bounded in $H^1(\mathbb{R}; L_2((0,1)^{d-1}))$ and hence, it is also bounded in $C_{\mathrm{b}}(\mathbb{R}; L_2((0,1)^{d-1}))$ by Theorem 4.1.2 (and Corollary 4.1.3); that is,

$$\sup\_{t \in [0,1], n \in \mathbb{N}} \|\phi\_n(t, \cdot)\|\_{L\_2((0,1)^{d-1})} < \infty. \tag{14.2}$$

Let $f \in L_2((0,1)^{d-1})$. Then $(\phi_{n,f})_n$ given by

$$\phi\_{n,f} \colon t \mapsto \langle \phi\_n(t, \cdot), f \rangle\_{L\_2((0,1)^{d-1})}$$

is a weakly convergent nullsequence in $H^1((0,1))$. We obtain by Theorem 14.2.3 that $\phi_{n,f} \to 0$ in $L_2((0,1))$ as $n \to \infty$. By separability of $L_2((0,1)^{d-1})$ we find a countable dense set $D \subseteq L_2((0,1)^{d-1})$, a subsequence (again labelled by $n$) and a nullset $N \subseteq \mathbb{R}$ such that $\phi_{n,f}(t) \to 0$ for all $t \in \mathbb{R}\setminus N$ and $f \in D$ as $n \to \infty$. By (14.2), we deduce $\phi_{n,f}(t) \to 0$ for all $t \in \mathbb{R}\setminus N$ and $f \in L_2((0,1)^{d-1})$ as $n \to \infty$; in other words, $\phi_n(t,\cdot) \to 0$ weakly in $L_2((0,1)^{d-1})$ for each $t \in \mathbb{R}\setminus N$ as $n \to \infty$.

Next, we show that there exists a nullset $N_1 \subseteq \mathbb{R}$ with $N \subseteq N_1$ such that $\phi_n(t,\cdot) \to 0$ in $L_2((0,1)^{d-1})$ for all $t \in \mathbb{R}\setminus N_1$. For this, since $(R\phi_n)_n$ is bounded in $L_2(\mathbb{R}; H^1_0((0,1)^{d-1}))$, we find a nullset $N_1 \subseteq \mathbb{R}$ with $N \subseteq N_1$ such that $(\phi_n(t,\cdot))_n$ is bounded in $H^1_0((0,1)^{d-1})$ for all $t \in \mathbb{R}\setminus N_1$. Let $t \in \mathbb{R}\setminus N_1$. Then there exists a further subsequence $(\phi_{n_k}(t,\cdot))_k$ which converges weakly in $H^1_0((0,1)^{d-1})$. By the induction hypothesis, $(\phi_{n_k}(t,\cdot))_k$ converges strongly in $L_2((0,1)^{d-1})$, and since we have already seen that it is a weak nullsequence in $L_2((0,1)^{d-1})$, we derive $\phi_{n_k}(t,\cdot) \to 0$ in $L_2((0,1)^{d-1})$. By a subsequence argument we derive that

$$
\phi\_n(t, \cdot) \to 0
$$

in $L_2((0,1)^{d-1})$ for all $t \in \mathbb{R}\setminus N_1$.

Now, for *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> we deduce

$$\left\|\phi\_n\right\|\_{L\_2((0,1)^d)}^2 = \int\_0^1 \left\|\phi\_n(t, \cdot)\right\|\_{L\_2((0,1)^{d-1})}^2 \,\mathrm{d}t \to 0,$$

where we have used dominated convergence, which is possible due to (14.2). $\square$

## **14.3 The Periodic Gradient**

In this section we investigate the gradient on periodic functions on R*<sup>d</sup>* . Throughout, we set *Y* := [0*,* 1*) d* .

#### **Definition (Periodic Gradient)** We define

$$C\_\sharp^{\infty}(Y) := \left\{ \phi|\_Y \; ; \; \phi \in C^\infty(\mathbb{R}^d), \; \phi(\cdot + k) = \phi \; (k \in \mathbb{Z}^d) \right\}$$

and

$$\operatorname{grad}_{\sharp,\infty} \colon C_\sharp^\infty(Y) \subseteq L_2(Y) \to L_2(Y)^d,\quad \phi \mapsto \operatorname{grad}\phi.$$

Moreover, we set $\operatorname{div}_\sharp := -\operatorname{grad}_{\sharp,\infty}^*$ and $\operatorname{grad}_\sharp := -\operatorname{div}_\sharp^* = \overline{\operatorname{grad}_{\sharp,\infty}}$.

*Remark 14.3.1* The operators just introduced can easily be shown to lie between the operator realisations we have introduced in earlier chapters. Indeed, it is easy to see that

$$\text{div}\_0 \subseteq \text{div}\_{\sharp} \text{ and } \text{grad}\_0 \subseteq \text{grad}\_{\sharp}$$

and, consequently, we also have

$$\mathsf{grad}\_{\sharp} \subseteq \mathsf{grad} \text{ and } \mathsf{div}\_{\sharp} \subseteq \mathsf{div}\,.$$

The corresponding domains of the operators $\operatorname{grad}_\sharp$ and $\operatorname{div}_\sharp$ will be denoted by $H^1_\sharp(Y)$ and $H_\sharp(\operatorname{div}, Y)$, respectively.

For the next results, we define the periodic extension operator. For *<sup>φ</sup>* <sup>∈</sup> *<sup>L</sup>*2*(Y )<sup>m</sup>* we put

$$\phi_{\mathrm{pe}}(x + k) := \phi(x)$$

for almost every *<sup>x</sup>* <sup>∈</sup> *<sup>Y</sup>* and all *<sup>k</sup>* <sup>∈</sup> <sup>Z</sup>*<sup>d</sup>* .
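For functions given pointwise, the periodic extension can be realised via the fractional part; a minimal sketch (the sample $\phi$ is chosen purely for illustration, in dimension $d = 1$):

```python
import numpy as np

def periodic_extension(phi):
    # phi_pe(x + k) := phi(x) for x in Y = [0, 1) and k in Z,
    # realised pointwise via the fractional part x mod 1
    return lambda x: phi(np.mod(x, 1.0))

phi = lambda x: x*(1.0 - x)        # sample function on Y = [0, 1)
phi_pe = periodic_extension(phi)

# the extension agrees with phi on every integer translate of Y
for x in np.linspace(0.0, 1.0, 11)[:-1]:
    for k in (-3, -1, 0, 2, 5):
        assert abs(phi_pe(x + k) - phi(x)) < 1e-12
```

Note that `np.mod` returns values in $[0,1)$ also for negative arguments, which is exactly the convention needed here.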

We start with the following two observations.

**Lemma 14.3.2** *Let $f \in L_2(Y)$ and $(\rho_k)_k$ be a $\delta$-sequence in $C_{\mathrm{c}}^\infty(\mathbb{R}^d)$ (cf. Exercise 3.1). Define*

$$f_k := (\rho_k * f_{\mathrm{pe}})|_Y \quad (k \in \mathbb{N}).$$

*Then $f_k \in C_\sharp^\infty(Y)$ for each $k \in \mathbb{N}$ and $f_k \to f$ in $L_2(Y)$ as $k \to \infty$.*

*Proof* It follows as in Exercise 3.2 that $\rho_k * f_{\mathrm{pe}}$ is in $C^\infty$. Moreover, one easily sees that $\rho_k * f_{\mathrm{pe}}$ is $[0,1)^d$-periodic, and hence $f_k \in C_\sharp^\infty(Y)$ for each $k \in \mathbb{N}$. For the convergence we observe

$$\left(\rho\_k \ast (\mathbb{1}\_{Y + B(0,1)} f\_{\text{pe}})\right)(\mathbf{x}) = f\_k(\mathbf{x}) \quad (\mathbf{x} \in Y, k \in \mathbb{N}).$$

Moreover, by Exercise 3.2 we have $\rho_k * (\mathbb{1}_{Y+B(0,1)} f_{\mathrm{pe}}) \to \mathbb{1}_{Y+B(0,1)} f_{\mathrm{pe}}$ in $L_2(\mathbb{R}^d)$ as $k \to \infty$, and thus,

$$f\_k = \left(\rho\_k \* (\mathbb{1}\_{Y + B(0, 1)} f\_{\text{pe}})\right)|\_Y \to (\mathbb{1}\_{Y + B(0, 1)} f\_{\text{pe}})|\_Y = f \quad (k \to \infty) \quad \text{in } L\_2(Y). \quad \Box$$

**Lemma 14.3.3** *$C_\sharp^\infty(Y)^d$ is a core for $\operatorname{div}_\sharp$.*

*Proof* First we note that $C_\sharp^\infty(Y)^d \subseteq \operatorname{dom}(\operatorname{div}_\sharp)$. To see this, for $\phi \in C_\sharp^\infty(Y)$, $\Psi \in C_\sharp^\infty(Y)^d$ we compute

$$\langle \operatorname{grad}\phi, \Psi \rangle_{L_2(Y)^d} = \int_Y \langle \operatorname{grad}\phi(x), \Psi(x) \rangle_{\mathbb{K}^d} \, \mathrm{d}x = -\int_Y \phi(x)^* \operatorname{div}\Psi(x) \, \mathrm{d}x = \langle \phi, -\operatorname{div}\Psi \rangle_{L_2(Y)}$$

by integration by parts (note that the boundary values cancel out due to the periodicity of $\phi$ and $\Psi$). Now, let $q \in \operatorname{dom}(\operatorname{div}_\sharp)$ and $(\rho_k)_k$ be a $\delta$-sequence in $C_{\mathrm{c}}^\infty(\mathbb{R}^d)$. For $k \in \mathbb{N}$ we define

$$q\_k := (\rho\_k \* q\_{\rm pe})|\_Y,$$

and obtain $q_k \in C_\sharp^\infty(Y)^d$ and $q_k \to q$ in $L_2(Y)^d$ as $k \to \infty$ by Lemma 14.3.2. It is left to show that $\operatorname{div} q_k \to \operatorname{div}_\sharp q$ in $L_2(Y)$ as $k \to \infty$. For doing so, we show that $\operatorname{div} q_k = \big(\rho_k * (\operatorname{div}_\sharp q)_{\mathrm{pe}}\big)|_Y$, which would then yield the assertion again by Lemma 14.3.2. So, let $k \in \mathbb{N}$ and $\phi \in C_\sharp^\infty(Y)$. We compute

$$\begin{aligned}
\langle q_k, \operatorname{grad}\phi \rangle_{L_2(Y)^d} &= \int_Y \left\langle \int_{\mathbb{R}^d} \rho_k(y)\, q_{\mathrm{pe}}(x-y) \, \mathrm{d}y, \operatorname{grad}\phi(x) \right\rangle_{\mathbb{K}^d} \mathrm{d}x\\
&= \int_{\mathbb{R}^d} \rho_k(y) \int_Y \langle q_{\mathrm{pe}}(x-y), \operatorname{grad}\phi(x) \rangle_{\mathbb{K}^d} \, \mathrm{d}x \, \mathrm{d}y\\
&= \int_{\mathbb{R}^d} \rho_k(y) \int_{Y-y} \langle q_{\mathrm{pe}}(x), (\operatorname{grad}\phi)_{\mathrm{pe}}(x+y) \rangle_{\mathbb{K}^d} \, \mathrm{d}x \, \mathrm{d}y\\
&= \int_{\mathbb{R}^d} \rho_k(y) \int_Y \langle q(x), (\operatorname{grad}\phi)_{\mathrm{pe}}(x+y) \rangle_{\mathbb{K}^d} \, \mathrm{d}x \, \mathrm{d}y\\
&= \int_{\mathbb{R}^d} \rho_k(y) \int_Y \big\langle q(x), \big(\operatorname{grad}\phi_{\mathrm{pe}}(\cdot+y)\big)(x) \big\rangle_{\mathbb{K}^d} \, \mathrm{d}x \, \mathrm{d}y\\
&= -\int_{\mathbb{R}^d} \rho_k(y) \int_Y \big\langle \operatorname{div}_\sharp q(x), \phi_{\mathrm{pe}}(x+y) \big\rangle_{\mathbb{K}} \, \mathrm{d}x \, \mathrm{d}y\\
&= -\int_{\mathbb{R}^d} \rho_k(y) \int_{Y+y} \big\langle (\operatorname{div}_\sharp q)_{\mathrm{pe}}(x-y), \phi_{\mathrm{pe}}(x) \big\rangle_{\mathbb{K}} \, \mathrm{d}x \, \mathrm{d}y\\
&= -\left\langle \big(\rho_k * (\operatorname{div}_\sharp q)_{\mathrm{pe}}\big)\big|_Y, \phi \right\rangle_{L_2(Y)},
\end{aligned}$$

where we have used periodicity as well as $\phi_{\mathrm{pe}}(\cdot+y)|_Y \in C_\sharp^\infty(Y)$. $\square$

*Remark 14.3.4* The proof of Lemma 14.3.3 reveals that every $q \in \ker(\operatorname{div}_\sharp)$ can be approximated by elements in $C_\sharp^\infty(Y)^d \cap \ker(\operatorname{div}_\sharp)$.

**Proposition 14.3.5** *Let $\Omega \subseteq \mathbb{R}^d$ be open and bounded, $u \in H^1_\sharp(Y)$ and $q \in H_\sharp(\operatorname{div}, Y)$. Then $u_{\mathrm{pe}}|_\Omega \in H^1(\Omega)$, $q_{\mathrm{pe}}|_\Omega \in H(\operatorname{div}, \Omega)$ and*

$$\operatorname{grad}\big(u_{\mathrm{pe}}|_\Omega\big) = \big(\operatorname{grad}_\sharp u\big)_{\mathrm{pe}}\big|_\Omega \quad\text{and}\quad \operatorname{div}\big(q_{\mathrm{pe}}|_\Omega\big) = \big(\operatorname{div}_\sharp q\big)_{\mathrm{pe}}\big|_\Omega\,.$$

*Proof* Let first $\phi \in C^\infty_\sharp(Y)$. Then by definition $\phi_{\mathrm{pe}} \in C^\infty(\mathbb{R}^d)$ and we easily see

$$\operatorname{grad}\phi_{\mathrm{pe}} = (\operatorname{grad}\phi)_{\mathrm{pe}} = (\operatorname{grad}_\sharp\phi)_{\mathrm{pe}}.$$

Moreover, since $\Omega$ is bounded, we infer $\phi_{\mathrm{pe}} \in H^1(\Omega)$. By definition of $H^1_\sharp(Y)$ we find a sequence $(\phi_k)_{k\in\mathbb{N}}$ in $C^\infty_\sharp(Y)$ such that $\phi_k \to u$ in $L_2(Y)$ and $\operatorname{grad}_\sharp \phi_k \to \operatorname{grad}_\sharp u$ in $L_2(Y)^d$ as $k \to \infty$. Since

$$L_2(Y) \to L_2(\Omega), \quad f \mapsto f_{\mathrm{pe}}\big|_\Omega$$

is bounded due to the boundedness of $\Omega$, we also derive $\phi_{k,\mathrm{pe}} \to u_{\mathrm{pe}}$ in $L_2(\Omega)$ and $(\operatorname{grad}_\sharp \phi_k)_{\mathrm{pe}} \to (\operatorname{grad}_\sharp u)_{\mathrm{pe}}$ in $L_2(\Omega)^d$ as $k \to \infty$. By what we have shown above, we infer

$$\operatorname{grad}\phi_{k,\mathrm{pe}} = (\operatorname{grad}_\sharp\phi_k)_{\mathrm{pe}} \to (\operatorname{grad}_\sharp u)_{\mathrm{pe}} \quad (k \to \infty)$$

in $L_2(\Omega)^d$, and thus, $u_{\mathrm{pe}} \in H^1(\Omega)$ with $\operatorname{grad} u_{\mathrm{pe}} = (\operatorname{grad}_\sharp u)_{\mathrm{pe}}$ by the closedness of $\operatorname{grad}$. The proof for $q$ follows by the same argument with Lemma 14.3.3 as an additional resource.

The extension result just established yields the following compactness statement.

**Theorem 14.3.6 (Rellich–Kondrachov II)** *The embedding* $H^1_\sharp(Y) \hookrightarrow L_2(Y)$ *is compact.*

*Proof* Let $(\phi_n)_n$ be a bounded sequence in $H^1_\sharp(Y)$. Let $\Omega \subseteq \mathbb{R}^d$ be open and bounded such that $\overline{Y} \subseteq \Omega$. By Proposition 14.3.5, we deduce that $(\phi_{n,\mathrm{pe}}|_\Omega)_n$ is bounded in $H^1(\Omega)$. Let $\psi \in C^\infty_c(\Omega)$ with $\psi = 1$ on $Y$. Then $(\psi\phi_{n,\mathrm{pe}})_n$ is bounded in $H^1_0(\Omega)$. By Theorem 14.2.5, we find an $L_2(\Omega)$-convergent subsequence. This sequence also converges in $L_2(Y)$. Since $\psi = 1$ on $Y$, we obtain the assertion.

Next, we provide a Poincaré-type inequality for the periodic gradient.

**Proposition 14.3.7** *There exists* $c > 0$ *such that for all* $u \in H^1_\sharp(Y)$

$$\left\| u - \int\_{Y} u \right\|\_{L\_2(Y)} \leqslant c \left\| \operatorname{grad}\_{\sharp} u \right\|\_{L\_2(Y)^d}.$$

*In particular,* $\operatorname{ran}(\operatorname{grad}_\sharp) \subseteq L_2(Y)^d$ *is closed,* $\ker(\operatorname{grad}_\sharp) = \operatorname{lin}\{\mathbb{1}_Y\}$ *and the operator*

$$\operatorname{grad}_\sharp \colon H^1_\sharp(Y) \cap \{\mathbb{1}_Y\}^\perp \to \operatorname{ran}(\operatorname{grad}_\sharp)$$

*is an isomorphism.*

*Proof* The proof is left as Exercise 14.4.
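Although the proof is deferred, it may help to indicate how Exercise 14.4 produces the estimate; the following is only a sketch, under the assumptions that one takes $C := \operatorname{grad}_\sharp$ and uses $\ker(\operatorname{grad}_\sharp) = \operatorname{lin}\{\mathbb{1}_Y\}$.

```latex
% Sketch only: Exercise 14.4 applied to C := grad_sharp.
% dom(C) = H^1_sharp(Y) embeds compactly into L_2(Y) by Theorem 14.3.6,
% and ker(grad_sharp) = lin{1_Y}, so the projection onto ker(C)^perp is
% P u = u - \int_Y u.  Exercise 14.4 then provides c > 0 with
\begin{aligned}
  \Big\| u - \int_Y u \Big\|_{L_2(Y)}
    = \big\| P_{\ker(\operatorname{grad}_\sharp)^\perp} u \big\|_{L_2(Y)}
    \leqslant c \, \big\| \operatorname{grad}_\sharp u \big\|_{L_2(Y)^d}
  \qquad (u \in H^1_\sharp(Y)).
\end{aligned}
```

The closedness of $\operatorname{ran}(\operatorname{grad}_\sharp)$ and the isomorphism property then follow from this estimate in the usual way.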

We are now in a position to formulate the particular example we have in mind. Problems of this type with highly oscillatory coefficients are also referred to as *homogenisation problems*. We refer to the comments section for more details on this.

*Example 14.3.8 (Homogenisation Problem for the Wave Equation)* Let $c > 0$, $a\colon \mathbb{R}^d \to \mathbb{K}^{d\times d}$ be bounded, measurable, $a(x) = a(x)^* \geqslant c$ for all $x \in \mathbb{R}^d$. Furthermore, assume that $a$ is $[0,1)^d$-periodic. Let $\nu > 0$, $f \in L_{2,\nu}(\mathbb{R}; L_2(Y))$ and for $n \in \mathbb{N}$ consider the problem of finding $u_n \in L_{2,\nu}(\mathbb{R}; L_2(Y))$ such that

$$\partial_{t,\nu}^2 u_n - \operatorname{div}_\sharp a(n\mathrm{m}) \operatorname{grad}_\sharp u_n = f. \tag{14.3}$$

We have already established that there exists a uniquely determined solution, $u_n$. Employing the same trick as in Sect. 11.3, we shall rewrite (14.3) using $v_n := \partial_{t,\nu} u_n$, the canonical embedding $\iota_\sharp\colon \operatorname{ran}(\operatorname{grad}_\sharp) \to L_2(Y)^d$ as well as $q_n := -\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\iota_\sharp^* \operatorname{grad}_\sharp u_n$ to obtain

$$\left(\partial_{t,\nu}\begin{pmatrix} 1 & 0 \\ 0 & \big(\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\big)^{-1}\end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_\sharp \iota_\sharp \\ \iota_\sharp^* \operatorname{grad}_\sharp & 0\end{pmatrix}\right)\begin{pmatrix} v_n \\ q_n \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.$$
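To see, at least formally, that this first-order system encodes (14.3), one may eliminate $q_n$; the following computation is a sketch that suppresses questions of domains and closures.

```latex
% Formal elimination of q_n (domains and closures suppressed).
% Second row, with v_n = \partial_{t,\nu} u_n:
%   (\iota^*_sharp a(nm) \iota_sharp)^{-1} q_n = - \iota^*_sharp grad_sharp u_n,
% which is precisely the definition of q_n above.  Inserting this into the
% first row gives
\begin{aligned}
  \partial_{t,\nu}^2 u_n
    - \operatorname{div}_\sharp \iota_\sharp \iota_\sharp^* \, a(n\mathrm{m})\,
      \iota_\sharp \iota_\sharp^* \operatorname{grad}_\sharp u_n = f ,
\end{aligned}
% and since \iota_sharp \iota^*_sharp is the orthogonal projection onto
% ran(grad_sharp), we have \iota_sharp \iota^*_sharp grad_sharp u_n =
% grad_sharp u_n, while the part of a(nm) grad_sharp u_n orthogonal to
% ran(grad_sharp) lies in ker(div_sharp); hence the equation reduces to (14.3).
```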

Note that we have used that $\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\colon \operatorname{ran}(\operatorname{grad}_\sharp) \to \operatorname{ran}(\operatorname{grad}_\sharp)$ is continuously invertible and strictly positive definite (uniformly in $n$); see Proposition 11.3.5. Also note that $\iota_\sharp^* a(n\mathrm{m})\iota_\sharp$ is selfadjoint. As in Exercise 11.3 we see that $\big(\iota_\sharp^* \operatorname{grad}_\sharp\big)^* = -\operatorname{div}_\sharp \iota_\sharp$. Thus, the operator

$$S^{(n)} := \left(\overline{\partial_{t,\nu}\begin{pmatrix} 1 & 0 \\ 0 & \big(\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\big)^{-1}\end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_\sharp \iota_\sharp \\ \iota_\sharp^* \operatorname{grad}_\sharp & 0\end{pmatrix}}\right)^{-1}$$

is well-defined and bounded in $L_{2,\nu}\big(\mathbb{R}; L_2(Y) \times \operatorname{ran}(\operatorname{grad}_\sharp)\big)$. We aim to find the limit of $(S^{(n)})_n$ as $n \to \infty$. For this, we want to apply Theorem 14.1.1. We readily see using Theorem 14.3.6 and Exercise 14.5 that

$$A := \begin{pmatrix} 0 & \operatorname{div}_\sharp \iota_\sharp \\ \iota_\sharp^* \operatorname{grad}_\sharp & 0 \end{pmatrix}$$

satisfies the assumptions in Theorem 14.1.1. Thus, it is left to analyse $\big(\big(\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\big)^{-1}\big)_n$. This is the subject of the next section. For this reason, we define

$$\mathfrak{a}\_n := \left(\iota^\*\_\sharp a(n\mathbf{m})\iota\_\sharp\right)^{-1} \quad (n \in \mathbb{N})\,.$$

## **14.4 The Limit of $(\mathfrak{a}_n)_n$**

In this section, we shall apply our earlier findings to higher-dimensional problems. Again, we fix $Y := [0,1)^d$ as well as $\iota_\sharp\colon \operatorname{ran}(\operatorname{grad}_\sharp) \to L_2(Y)^d$, the canonical embedding. Before we are able to present the central result of this section, we need a preliminary result.

Throughout, let $a\colon \mathbb{R}^d \to \mathbb{K}^{d\times d}$ be measurable, bounded and $[0,1)^d$-periodic such that $\operatorname{Re} a(x) \geqslant c$ for each $x \in \mathbb{R}^d$ for some $c > 0$.

**Lemma 14.4.1** *Let* $\xi \in \mathbb{K}^d$*. Then there exists a unique* $v_\xi \in L_2(Y)^d$ *with* $v_\xi - \xi \in \operatorname{ran}(\operatorname{grad}_\sharp)$ *and* $a(\mathrm{m})v_\xi \in \ker(\operatorname{div}_\sharp)$*.*

*Proof* Take $w \in H^1_\sharp(Y)$ such that

$$\operatorname{grad}_\sharp w = -\iota_\sharp\big(\iota_\sharp^* a(\mathrm{m})\iota_\sharp\big)^{-1}\iota_\sharp^* a(\mathrm{m})\xi = -\iota_\sharp \mathfrak{a}_1 \iota_\sharp^* a(\mathrm{m})\xi\,.$$

This is possible, since the right-hand side belongs to $\operatorname{ran}(\operatorname{grad}_\sharp)$ by definition. We put $v_\xi := \operatorname{grad}_\sharp w + \xi$. Then $v_\xi - \xi \in \operatorname{ran}(\operatorname{grad}_\sharp)$ and we have

$$\begin{aligned}
\iota_\sharp^* a(\mathrm{m})v_\xi &= \iota_\sharp^* a(\mathrm{m})\big(\operatorname{grad}_\sharp w + \xi\big) = \iota_\sharp^* a(\mathrm{m})\big(-\iota_\sharp \mathfrak{a}_1 \iota_\sharp^* a(\mathrm{m})\xi + \xi\big) \\
&= -\iota_\sharp^* a(\mathrm{m})\iota_\sharp \mathfrak{a}_1 \iota_\sharp^* a(\mathrm{m})\xi + \iota_\sharp^* a(\mathrm{m})\xi = 0.
\end{aligned}$$

The latter gives $a(\mathrm{m})v_\xi \in \operatorname{ran}(\operatorname{grad}_\sharp)^\perp = \ker(\operatorname{div}_\sharp)$. For the uniqueness, we assume $v \in \operatorname{ran}(\operatorname{grad}_\sharp)$ with $a(\mathrm{m})v \in \ker(\operatorname{div}_\sharp)$. Then

$$(\iota\_\sharp^\* a(\mathbf{m}) \iota\_\sharp) \iota\_\sharp^\* v = \iota\_\sharp^\* a(\mathbf{m}) v = 0,$$

which implies $\iota_\sharp^* v = 0$ since $\iota_\sharp^* a(\mathrm{m})\iota_\sharp$ is invertible. Thus $v = 0$.

The previous result induces the linear mapping

$$a\_{\text{hom}} \colon \mathbb{K}^d \ni \xi \mapsto \int\_Y a v\_{\xi} \in \mathbb{K}^d,$$

where $v_\xi \in L_2(Y)^d$ is the unique vector field from Lemma 14.4.1.
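For orientation, in dimension $d = 1$ (and for real-valued scalar $a$ with $a \geqslant c > 0$) the mapping $a_{\mathrm{hom}}$ can be computed in closed form. The following sketch, which uses that $\ker(\operatorname{div}_\sharp)$ then consists of the constants and that periodic gradients have zero mean over $Y$, recovers the classical harmonic mean formula:

```latex
% Sketch for d = 1: a(m) v_xi = gamma constant, and \int_Y v_xi = xi
% because v_xi - xi is a periodic gradient (zero mean over the cell Y).
\begin{aligned}
  v_\xi = \gamma\, a^{-1}, \qquad
  \xi = \int_Y v_\xi = \gamma \int_0^1 a(y)^{-1}\,\mathrm{d}y
  \;\Longrightarrow\;
  a_{\mathrm{hom}}\xi = \int_Y a\, v_\xi = \gamma
    = \Big( \int_0^1 a(y)^{-1}\,\mathrm{d}y \Big)^{-1} \xi .
\end{aligned}
```

Thus, in one dimension $a_{\mathrm{hom}}$ is the harmonic mean of $a$, which is in general strictly smaller than the arithmetic mean $\int_Y a$; the limit equation genuinely 'sees' the oscillations of the coefficient.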

*Remark 14.4.2* We gather some elementary facts on *a*hom.

(a) We have $(a^*)_{\mathrm{hom}} = a_{\mathrm{hom}}^*$. In particular, if $a$ is pointwise selfadjoint then so is $a_{\mathrm{hom}}$. Indeed, let $\xi, \zeta \in \mathbb{K}^d$ and let $v_\xi$ and $v_\zeta \in L_2(Y)^d$ be the corresponding functions for $a^*$ and $a$, respectively, according to Lemma 14.4.1. Then there exist $w_\xi, w_\zeta \in \operatorname{dom}(\operatorname{grad}_\sharp)$ with $v_\xi - \xi = \operatorname{grad}_\sharp w_\xi$ and $v_\zeta - \zeta = \operatorname{grad}_\sharp w_\zeta$. We compute

$$\begin{aligned}
\langle (a^*)_{\mathrm{hom}}\xi, \zeta\rangle_{\mathbb{K}^d}
&= \int_Y \big\langle (a^* v_\xi)(y),\, v_\zeta(y) - \operatorname{grad}_\sharp w_\zeta(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y \\
&= \int_Y \big\langle (a^* v_\xi)(y), v_\zeta(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y - \int_Y \big\langle (a^* v_\xi)(y), \operatorname{grad}_\sharp w_\zeta(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y \\
&= \int_Y \big\langle v_\xi(y), (a v_\zeta)(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y - \big\langle a^* v_\xi, \operatorname{grad}_\sharp w_\zeta\big\rangle_{L_2(Y)^d} \\
&= \int_Y \big\langle v_\xi(y), (a v_\zeta)(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y \\
&= \int_Y \big\langle \operatorname{grad}_\sharp w_\xi(y) + \xi, (a v_\zeta)(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y \\
&= \int_Y \big\langle \xi, (a v_\zeta)(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y = \langle \xi, a_{\mathrm{hom}}\zeta\rangle_{\mathbb{K}^d},
\end{aligned}$$

where the two inner products against the gradients vanish since $a^* v_\xi, a v_\zeta \in \ker(\operatorname{div}_\sharp) = \operatorname{ran}(\operatorname{grad}_\sharp)^\perp$; hence $(a^*)_{\mathrm{hom}} = a_{\mathrm{hom}}^*$.

(b) Re *a*hom is strictly positive definite. As above, one shows

$$\operatorname{Re}\langle \xi, a_{\mathrm{hom}}\xi\rangle_{\mathbb{K}^d} = \operatorname{Re}\int_Y \langle v_\xi(y), (a v_\xi)(y)\rangle_{\mathbb{K}^d}\,\mathrm{d}y \geqslant c\,\|v_\xi\|^2_{L_2(Y)^d} \quad (\xi\in\mathbb{K}^d),$$

and since the right-hand side is strictly positive if $\xi \neq 0$ by Lemma 14.4.1, we derive the assertion.
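The step '$v_\xi \neq 0$ whenever $\xi \neq 0$' can be spelled out as follows (a small completion of the argument, using that periodic gradients have vanishing mean over $Y$):

```latex
% Periodic gradients integrate to zero over the cell Y, hence
\begin{aligned}
  \int_Y v_\xi = \int_Y (v_\xi - \xi) + \xi = \xi ,
\end{aligned}
% so xi <> 0 forces v_xi <> 0, and the right-hand side of the estimate
% above is then strictly positive.
```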

The construction of *a*hom now allows us to formulate the main result of this section.

**Theorem 14.4.3** *We have*

$$\mathfrak{a}_n = \big(\iota_\sharp^* a(n\mathrm{m})\iota_\sharp\big)^{-1} \to \big(\iota_\sharp^* a_{\mathrm{hom}}\iota_\sharp\big)^{-1} =: \mathfrak{a}_{\mathrm{hom}} \quad (n \to \infty)$$

*in the weak operator topology of* $L\big(\operatorname{ran}(\operatorname{grad}_\sharp)\big)$*.*

The proof of Theorem 14.4.3 requires some more preparation. One of the results needed is a variant of Theorem 13.2.4 for *L*2*(Y )*. However, it will be beneficial to finish Example 14.3.8 first.

*Example 14.4.4 (Example 14.3.8 Continued)* The operator sequence $(S^{(n)})_n$ converges in the strong operator topology of $L\big(L_{2,\nu}\big(\mathbb{R}; L_2(Y) \times \operatorname{ran}(\operatorname{grad}_\sharp)\big)\big)$ to the following limit

$$\left(\overline{\partial_{t,\nu}\begin{pmatrix} 1 & 0 \\ 0 & \mathfrak{a}_{\mathrm{hom}} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_\sharp \iota_\sharp \\ \iota_\sharp^* \operatorname{grad}_\sharp & 0 \end{pmatrix}}\right)^{-1}.$$

**Lemma 14.4.5** *Let* $f\colon \mathbb{R}^d \to \mathbb{K}$ *be measurable and* $[0,1)^d$*-periodic. Let* $\Omega \subseteq \mathbb{R}^d$ *be open, bounded and assume* $f|_Y \in L_2(Y)$*. Then*

$$f(n \cdot) \to \left(\int\_Y f\right) \mathbf{1}\_{\Omega}$$

*weakly in* $L_2(\Omega)$ *as* $n \to \infty$*.*

*Proof* Due to the boundedness of $\Omega$ we find a finite set $F \subseteq \mathbb{Z}^d$ such that $\Omega \subseteq \bigcup_{k\in F}(k + Y)$. Thus, by periodicity, it suffices to restrict our attention to the case when $\Omega = Y$. We define

$$X := \left\{ f \colon \mathbb{R}^d \to \mathbb{K} \text{; } \ f \text{ is } [0,1)^d \text{-periodic}, f|\_Y \in L\_2(Y) \right\}$$

endowed with the norm $\|f\|_X := \|f|_Y\|_{L_2(Y)}$. It is not difficult to see that $X$ is a Hilbert space. For $n \in \mathbb{N}$, we define $T_n\colon X \to L_2(Y)$ by $T_n f := f(n\,\cdot)$. Then, for all $n \in \mathbb{N}$, $T_n$ is an isometry. Indeed, for $f \in X$, we compute

$$\int\_{Y} |f(n\mathbf{x})|^{2} \,\mathrm{d}\mathbf{x} = \frac{1}{n^{d}} \int\_{nY} |f(\mathbf{y})|^{2} \,\mathrm{d}\mathbf{y} = \frac{1}{n^{d}} n^{d} \int\_{Y} |f(\mathbf{y})|^{2} \,\mathrm{d}\mathbf{y} = \|f\|\_{L\_{2}(Y)}^{2},$$

where we used periodicity again. Recall that *S(Y )* denotes the simple functions on *Y* and consider

$$D := \{ f \in X \; ; \; f|\_Y \in \mathcal{S}(Y) \} \;.$$

Then $D$ is dense in $X$. Also, if $h \in D$, then $h \in L_\infty(\mathbb{R}^d)$. By Theorem 13.2.4, we note

$$\langle T\_n h, g \rangle\_{L\_2(Y)} = \langle h(n \cdot), g \rangle\_{L\_2(Y)} \to \left\langle \left( \int\_Y h \right) \mathbb{1}\_Y, g \right\rangle\_{L\_2(Y)} \quad (n \to \infty),$$

for all $g \in L_2(Y) \subseteq L_1(Y)$. Hence, $T_n h \to Th$ weakly in $L_2(Y)$ as $n \to \infty$, where for $f \in X$ we define $Tf := \big(\int_Y f\big)\mathbb{1}_Y \in L_2(Y)$. Next, if $f \in X$, $h \in D$ and $g \in L_2(Y)$, then

$$\begin{aligned}
|\langle T_n f - Tf, g\rangle| &\leqslant |\langle T_n f - T_n h, g\rangle| + |\langle T_n h - Th, g\rangle| + |\langle Th - Tf, g\rangle| \\
&\leqslant \|f - h\|_X \|g\|_{L_2(Y)} + |\langle T_n h - Th, g\rangle| + \|T\| \|g\|_{L_2(Y)} \|f - h\|_X.
\end{aligned}$$

Hence, for *ε >* 0, by density of *D* in *X*, we find *h* ∈ *D* such that

$$\|f - h\|_X \|g\|_{L_2(Y)} + \|T\| \|g\|_{L_2(Y)} \|f - h\|_X \leqslant \frac{\varepsilon}{2}.$$

Then, we find $n_0 \in \mathbb{N}$ so that for all $n \geqslant n_0$, $|\langle T_n h - Th, g\rangle| \leqslant \varepsilon/2$, resulting in $|\langle T_n f - Tf, g\rangle| \leqslant \varepsilon$.
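A prototypical instance of Lemma 14.4.5 (for $d = 1$) is the oscillating function $f(x) = \sin(2\pi x)$, whose mean over $Y$ vanishes; by the Riemann–Lebesgue lemma,

```latex
% f(x) = sin(2 pi x) is 1-periodic with \int_0^1 f = 0, and for g in L_2(0,1):
\begin{aligned}
  \int_0^1 \sin(2\pi n x)\, g(x)\,\mathrm{d}x \;\to\; 0
    = \Big( \int_0^1 f \Big) \int_0^1 g
  \qquad (n \to \infty),
\end{aligned}
% i.e. f(n.) -> (\int_Y f) 1_Y = 0 weakly in L_2(0,1), even though
% f(n.) converges neither pointwise nor in norm.
```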

**Lemma 14.4.6** *Let (qn)n and (rn)n be weakly convergent sequences in a Hilbert space H with weak limits q,r* ∈ *H, respectively. Moreover, let X* ⊆ *H be a closed subspace and ι*: *X* → *H the canonical embedding. Assume that*

$$q\_n \in X \text{ for each } n \in \mathbb{N} \text{ and } \left(\iota^\* r\_n\right)\_n \text{ is strongly convergent in } X.$$

*Then*

$$\lim_{n \to \infty} \langle r_n, q_n\rangle_H = \langle r, q\rangle_H\,.$$

*Proof* Since *ι* <sup>∗</sup> : *H* → *X* is continuous it is also weakly continuous, and thus,

$$\iota^* r_n \to \iota^* r \quad (n \to \infty)$$

strongly in *<sup>X</sup>*. For *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> we compute

$$\langle r_n, q_n\rangle_H = \langle r_n, \iota\iota^* q_n\rangle_H = \langle \iota^* r_n, \iota^* q_n\rangle_X \to \langle \iota^* r, \iota^* q\rangle_X\,.$$

Since *X* is a closed subspace, it is also weakly closed and thus *q* ∈ *X* which yields

$$\langle \iota^* r, \iota^* q\rangle_X = \langle r, q\rangle_H\,. \qquad \square$$

The next theorem is a version of the so-called 'div-curl lemma'.

**Theorem 14.4.7** *Let* $(q_n)_n$ *and* $(r_n)_n$ *be weakly convergent sequences in* $L_2(Y)^d$ *to some* $q, r \in L_2(Y)^d$*, respectively. Assume that*

$$q_n \in \operatorname{ran}(\operatorname{grad}_\sharp)\ \textit{for each}\ n \in \mathbb{N}\ \textit{and}\ \big(\iota_\sharp^* r_n\big)_n\ \textit{is strongly convergent in}\ \operatorname{ran}(\operatorname{grad}_\sharp).$$

*Then*

$$\int_Y \langle r_n(x), q_n(x)\rangle_{\mathbb{K}^d}\,\phi(x)\,\mathrm{d}x \to \int_Y \langle r(x), q(x)\rangle_{\mathbb{K}^d}\,\phi(x)\,\mathrm{d}x$$

*for all* $\phi \in C^\infty_c(Y)$ *as* $n \to \infty$*.*

*Proof* Let $\phi \in C^\infty_c(Y)$, $n \in \mathbb{N}$. Since $q_n \in \operatorname{ran}(\operatorname{grad}_\sharp)$, we find a unique $w_n \in H^1_\sharp(Y)$ with $w_n \in \{\mathbb{1}_Y\}^\perp = \ker(\operatorname{grad}_\sharp)^\perp$ such that

$$\text{grad}\_{\sharp} w\_n = q\_n.$$

Moreover, since $\operatorname{grad}_\sharp\colon H^1_\sharp(Y) \cap \{\mathbb{1}_Y\}^\perp \to \operatorname{ran}(\operatorname{grad}_\sharp)$ is an isomorphism by Proposition 14.3.7, we infer that $(w_n)_n$ is a weakly convergent sequence in $H^1_\sharp(Y)$ and denote its weak limit by $w \in H^1_\sharp(Y)$. By Theorem 14.3.6, we deduce $w_n \to w$ strongly in $L_2(Y)$. Moreover, note that $(\phi w_n)_n$ weakly converges to $\phi w$ in $H^1_\sharp(Y)$. In particular, $\operatorname{grad}_\sharp(\phi w_n) \to \operatorname{grad}_\sharp(\phi w)$ weakly in $L_2(Y)^d$. For $n \in \mathbb{N}$, we compute

$$\begin{aligned}
\int_Y \langle r_n(x), q_n(x)\rangle_{\mathbb{K}^d}\,\phi(x)\,\mathrm{d}x
&= \langle r_n, q_n\phi\rangle_{L_2(Y)^d} = \big\langle r_n, \big(\operatorname{grad}_\sharp w_n\big)\phi\big\rangle_{L_2(Y)^d} \\
&= \big\langle r_n, \operatorname{grad}_\sharp(\phi w_n)\big\rangle_{L_2(Y)^d} - \big\langle r_n, w_n\operatorname{grad}_\sharp\phi\big\rangle_{L_2(Y)^d}.
\end{aligned}$$

Now, the first term on the right-hand side of this equality tends to $\langle r, \operatorname{grad}_\sharp(\phi w)\rangle_{L_2(Y)^d}$ by Lemma 14.4.6 applied to $X = \operatorname{ran}(\operatorname{grad}_\sharp)$, which is closed by Proposition 14.3.7. The second term tends to $\langle r, w\operatorname{grad}_\sharp\phi\rangle_{L_2(Y)^d}$ by strong convergence of $(w_n)_n$ and weak convergence of $(r_n)_n$ in $L_2(Y)^d$. Thus, we obtain

$$\begin{aligned}
\int_Y \langle r_n(x), q_n(x)\rangle_{\mathbb{K}^d}\,\phi(x)\,\mathrm{d}x
&\to \big\langle r, \operatorname{grad}_\sharp(\phi w)\big\rangle_{L_2(Y)^d} - \big\langle r, w\operatorname{grad}_\sharp\phi\big\rangle_{L_2(Y)^d} \\
&= \int_Y \langle r(x), q(x)\rangle_{\mathbb{K}^d}\,\phi(x)\,\mathrm{d}x \quad (n \to \infty). \qquad \square
\end{aligned}$$

We will apply the latter theorem to the concrete case when *rn* = *a(n*m*)qn* in order to determine the weak limit of *(a(n*m*)qn)n*.

**Lemma 14.4.8** *Let* $(q_n)_n$ *and* $(a(n\mathrm{m})q_n)_n$ *be weakly convergent in* $L_2(Y)^d$ *to some* $q$ *and* $r$*, respectively. Assume that*

$$q_n \in \operatorname{ran}(\operatorname{grad}_\sharp)\ \textit{for each}\ n \in \mathbb{N}\ \textit{and}\ \big(\iota_\sharp^* a(n\mathrm{m})q_n\big)_n\ \textit{is strongly convergent in}\ \operatorname{ran}(\operatorname{grad}_\sharp).$$

*Then r* = *a*hom*q.*

*Proof* Let $\xi \in \mathbb{K}^d$ and choose $v := v_\xi \in L_2(Y)^d$ according to Lemma 14.4.1 for $a^*$ instead of $a$; that is, $v - \xi \in \operatorname{ran}(\operatorname{grad}_\sharp)$ and $a^*(\mathrm{m})v \in \ker(\operatorname{div}_\sharp)$. For $n \in \mathbb{N}$, we define $v_n := v_{\mathrm{pe}}(n\,\cdot) \in L_2(Y)^d$. Next, let $g \in C^\infty_\sharp(Y)$. Then we compute

$$\begin{aligned}
\big\langle a^*(n\mathrm{m})v_n, \operatorname{grad}_\sharp g\big\rangle_{L_2(Y)^d}
&= \int_Y \big\langle a^*(nx)v_{\mathrm{pe}}(nx), \operatorname{grad}_\sharp g(x)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}x \\
&= \frac{1}{n^d}\int_{nY} \big\langle a^*(y)v_{\mathrm{pe}}(y), \big(\operatorname{grad}_\sharp g\big)(y/n)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y \\
&= \frac{1}{n^{d-1}}\int_{nY} \big\langle a^*(y)v_{\mathrm{pe}}(y), \operatorname{grad}\big(g(\cdot/n)\big)(y)\big\rangle_{\mathbb{K}^d}\,\mathrm{d}y\,.
\end{aligned}$$

In order to compute the last integral, we employ Lemma 14.3.3 and Remark 14.3.4 to find a sequence $(\phi_k)_{k\in\mathbb{N}}$ in $C^\infty_\sharp(Y)^d \cap \ker(\operatorname{div}_\sharp)$ such that $\phi_k \to a^*(\mathrm{m})v$ as $k \to \infty$ in $L_2(Y)^d$. The latter implies $(\phi_k)_{\mathrm{pe}} \to (a^*(\mathrm{m})v)_{\mathrm{pe}}$ as $k \to \infty$ in $L_2(nY)^d$ for each $n \in \mathbb{N}$ and $\operatorname{div}(\phi_k)_{\mathrm{pe}} = 0$ for all $k \in \mathbb{N}$ by Proposition 14.3.5. Thus, we obtain with integration by parts (note that the boundary terms vanish due to the periodicity of $\phi_k$ and $g$)

$$\begin{aligned}
\big\langle a^*(n\mathrm{m})v_n, \operatorname{grad}_\sharp g\big\rangle_{L_2(Y)^d}
&= \frac{1}{n^{d-1}}\big\langle (a^*(\mathrm{m})v)_{\mathrm{pe}}, \operatorname{grad}\big(g(\cdot/n)\big)\big\rangle_{L_2(nY)^d} \\
&= \frac{1}{n^{d-1}}\lim_{k\to\infty}\big\langle (\phi_k)_{\mathrm{pe}}, \operatorname{grad}\big(g(\cdot/n)\big)\big\rangle_{L_2(nY)^d} = 0.
\end{aligned}$$

Since $C^\infty_\sharp(Y)$ is a core for $\operatorname{grad}_\sharp$, we infer that $a^*(n\mathrm{m})v_n \in \operatorname{ran}(\operatorname{grad}_\sharp)^\perp$ and hence,

$$\iota_\sharp^* a^*(n\mathrm{m})v_n = 0 \quad (n \in \mathbb{N}).$$

Moreover, we have $a^*(n\mathrm{m})v_n \to \int_Y a^* v = (a^*)_{\mathrm{hom}}\xi$ weakly in $L_2(Y)^d$ as $n \to \infty$ by Lemma 14.4.5. Thus, by Theorem 14.4.7 applied to $q_n$ and $r_n := a^*(n\mathrm{m})v_n$, we deduce that for all $\phi \in C^\infty_c(Y)$

$$\lim\_{n \to \infty} \int\_Y \left< a^\*(nx) v\_n(x), q\_n(x) \right>\_{\mathbb{K}^d} \phi(x) \,\mathrm{d}x = \int\_Y \left< (a^\*)\_{\mathrm{hom}} \xi, q(x) \right>\_{\mathbb{K}^d} \phi(x) \,\mathrm{d}x.$$

On the other hand, $v_n \to \big(\int_Y v\big)\mathbb{1}_Y = \xi\mathbb{1}_Y$ weakly in $L_2(Y)^d$ as $n \to \infty$ by Lemma 14.4.5, where $\int_Y v = \xi$ follows from $v - \xi \in \operatorname{ran}(\operatorname{grad}_\sharp)$. Thus, we can apply Theorem 14.4.7 to $q_n := v_n$ and $r_n := a(n\mathrm{m})q_n$ and obtain for all $\phi \in C^\infty_c(Y)$

$$\begin{aligned} \int\_{Y} \left< a^\*(n\mathbf{x}) v\_n(\mathbf{x}), q\_n(\mathbf{x}) \right>\_{\mathbb{K}^d} \phi(\mathbf{x}) \, \mathrm{d}x &= \int\_{Y} \left< v\_n(\mathbf{x}), a(n\mathbf{x}) q\_n(\mathbf{x}) \right>\_{\mathbb{K}^d} \phi(\mathbf{x}) \, \mathrm{d}x \\ &\to \int\_{Y} \left< \xi, r(\mathbf{x}) \right>\_{\mathbb{K}^d} \phi(\mathbf{x}) \, \mathrm{d}x \end{aligned}$$

as *n* → ∞. Thus, we have

$$\int\_{Y} \left< (a^\*)\_{\text{hom}} \xi, \, q(\mathbf{x}) \right>\_{\mathbb{K}^d} \phi(\mathbf{x}) \, \mathrm{d}x = \int\_{Y} \left< \xi, \, r(\mathbf{x}) \right>\_{\mathbb{K}^d} \phi(\mathbf{x}) \, \mathrm{d}x$$

for each *φ* ∈ *C*<sup>∞</sup> <sup>c</sup> *(Y )*. Hence, we infer

$$\langle \xi, r(x)\rangle_{\mathbb{K}^d} = \big\langle (a^*)_{\mathrm{hom}}\xi, q(x)\big\rangle_{\mathbb{K}^d} = \langle \xi, a_{\mathrm{hom}}q(x)\rangle_{\mathbb{K}^d}$$

for almost every $x \in Y$, where we have used Remark 14.4.2(a). Since the latter holds for each $\xi \in \mathbb{K}^d$, we deduce $r = a_{\mathrm{hom}}q$.

*Proof of Theorem 14.4.3* Let $n \in \mathbb{N}$ and for $u \in \operatorname{ran}(\operatorname{grad}_\sharp)$ we put $q_n := \mathfrak{a}_n u$. We need to show that $(q_n)_n$ weakly converges to $\mathfrak{a}_{\mathrm{hom}} u$. For this, we choose subsequences (without relabeling) such that both $(q_n)_n$ and $(a(n\mathrm{m})q_n)_n$ weakly converge to some $q$ and $r$, respectively. By definition, we have $q_n \in \operatorname{ran}(\operatorname{grad}_\sharp)$ and $\iota_\sharp^* a(n\mathrm{m})q_n = u$ for each $n \in \mathbb{N}$. Hence, by Lemma 14.4.8, we deduce $a_{\mathrm{hom}}q = r$. As $\operatorname{ran}(\operatorname{grad}_\sharp)$ is closed, it is also weakly closed, and hence, $q \in \operatorname{ran}(\operatorname{grad}_\sharp)$. Thus, we have

$$\iota_\sharp^* a_{\mathrm{hom}}\iota_\sharp q = \iota_\sharp^* r,$$

or equivalently

$$q = \mathfrak{a}\_{\text{hom}} \iota\_\sharp^\* r.$$

Now, since $u = \iota_\sharp^* a(n\mathrm{m})q_n \to \iota_\sharp^* r$ weakly, we infer

$$q = \mathfrak{a}\_{\text{hom}} u.$$

A subsequence argument now yields the claim.

## **14.5 Comments**

The theory of finding partial differential equations as appropriate limit problems of partial differential equations with highly oscillatory coefficients is commonly referred to as 'homogenisation'. The mathematical theory of homogenisation goes back to the late 1960s and early 70s. We refer to [11] as an early monograph wrapping up the available theory to that date.

The usual way of addressing homogenisation problems is to look at static (i.e., time-independent) problems first. The corresponding elliptic equation is then intensively studied. Even though it might be hidden in the derivations above, the 'study of the elliptic problem' essentially boils down to addressing the limit behaviour of $\mathfrak{a}_n$ as $n \to \infty$; see [37, 132]. Consequently, generalisations of the periodic case have been introduced. The periodic case (and beyond) is covered in [11, 21]; non-periodic cases and corresponding notions have been introduced in [108, 109] and, independently, in [70, 71].

An important technical tool to obtain results in this direction is the div-curl lemma or the notion of 'compensated compactness'. In the above presented material, this is Theorem 14.4.7; the main difficulty to overcome is that of finding a limit of a product $(\langle q_n, r_n\rangle)_n$ of weakly convergent sequences $(q_n)_n, (r_n)_n$ in $L_2(\Omega)^3$ for some open $\Omega \subseteq \mathbb{R}^3$. It turns out that if $(\operatorname{curl} q_n)_n$ and $(\operatorname{div} r_n)_n$ converge strongly in an appropriate sense, then $\int_\Omega \langle q_n, r_n\rangle\phi$ converges to the desired limit for all $\phi \in C^\infty_c(\Omega)$. In Theorem 14.4.7 the curl-condition is strengthened in as much as we ask $q_n$ to be a gradient, which results in $\operatorname{curl} q_n = 0$. The div-condition is replaced by the condition involving $\iota_\sharp^*$, which can in fact be shown to be equivalent, see [130]. The restriction to periodic boundary value problems is a mere convenience. It can be shown that the arguments work similarly for non-periodic boundary conditions, and even with the same limit, see [113, Lemma 10.3].

There are many generalisations of the div-curl lemma. For this, we refer to [17] (and the references given there) and to the rather recently found operator-theoretic perspective, with plenty of applications not solely restricted to the operators div and curl, see [80, 130].

We briefly comment on the term 'compensated compactness'. In general, one cannot expect for two weakly convergent sequences $(q_n)_n$ and $(r_n)_n$ in $L_2(\Omega)^3$ that the sequence of their scalar products $(\langle q_n, r_n\rangle)_n$ converges to the scalar product of the limits. If, however, either $(q_n)_n$ or $(r_n)_n$ is bounded in a space compactly embedded into $L_2(\Omega)^3$, then that sequence converges in norm in $L_2(\Omega)^3$ and $\lim_{n\to\infty}\langle q_n, r_n\rangle = \langle\lim_{n\to\infty} q_n, \lim_{n\to\infty} r_n\rangle$ follows. However, even though neither $H_0(\operatorname{curl},\Omega)$ nor $H(\operatorname{div},\Omega)$ is compactly embedded into $L_2(\Omega)^3$, one can still conclude that for bounded sequences $(q_n)_n$ in $H_0(\operatorname{curl},\Omega)$ and $(r_n)_n$ in $H(\operatorname{div},\Omega)$ we have

$$\lim\_{n \to \infty} \langle q\_n, r\_n \rangle = \left\langle \lim\_{n \to \infty} q\_n, \lim\_{n \to \infty} r\_n \right\rangle.$$

Thus, one might argue that the respectively missing compactness of the embeddings of $H_0(\operatorname{curl},\Omega)$ and $H(\operatorname{div},\Omega)$ into $L_2(\Omega)^3$ is somehow 'compensated'. Following the core arguments in [130], one might also argue that the deeper reason for the convergence of the scalar products is more closely related to (general) Helmholtz decompositions.

The way of deriving the homogenised equation (i.e., the limit of $\mathfrak{a}_n$) is akin to some derivations in [21, 128]. Further reading on homogenisation problems can also be found in these references. The first step of combining homogenisation processes and evolutionary equations has been made in [135] and has had some profound developments for both quantitative and qualitative results; see [23, 42, 136, 138].

## **Exercises**

**Exercise 14.1** Under the same assumptions as in Theorem 14.1.1, show

$$\left\|\left(\left(\overline{\partial_{t,\nu} M_n(\partial_{t,\nu}) + A}\right)^{-1} - \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1}\right)\partial_{t,\nu}^{-1}\right\|_{L(L_{2,\nu}(\mathbb{R};H))} \to 0.$$

**Exercise 14.2** Let $\Omega \subseteq \mathbb{R}^d$ be open and $\varepsilon > 0$. We define the set

$$\Omega_\varepsilon := \{x \in \Omega\;;\;\operatorname{dist}(x,\partial\Omega) > \varepsilon\}\,.$$

(a) Let $(\phi_k)_{k\in\mathbb{N}}$ in $C^\infty_c(\mathbb{R}^d)$ be a $\delta$-sequence (cf. Exercise 3.1) and $u \in H^1(\Omega)$. We identify each function on $\Omega$ with its extension to $\mathbb{R}^d$ by $0$. Prove that for $k \in \mathbb{N}$ large enough, $\phi_k * u \in H^1(\Omega_\varepsilon)$ with

$$\operatorname{grad}(\phi_k * u) = \phi_k * \operatorname{grad} u \quad\text{on } \Omega_\varepsilon.$$

(b) Use (a) to prove Lemma 14.2.2.

**Exercise 14.3** Prove the 'subsequence argument': Let *X* be a topological space and *(xn)n* a sequence in *X*. Assume that there exists *x* ∈ *X* such that each subsequence of *(xn)n* has a subsequence converging to *x*. Show that *xn* → *x* as *n* → ∞.

**Exercise 14.4** Let $H_0, H_1$ be Hilbert spaces and $C\colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ be a closed linear operator such that $\operatorname{dom}(C) \hookrightarrow H_0$ compactly. Let $P_{\ker(C)^\perp}\colon H_0 \to H_0$ denote the orthogonal projection onto the closed subspace $\ker(C)^\perp$. Prove that there exists $c > 0$ such that

$$\forall u \in \operatorname{dom}(C)\colon\ \big\|P_{\ker(C)^\perp} u\big\|_{H_0} \leqslant c\,\|Cu\|_{H_1}\,.$$

Apply this result to prove Proposition 14.3.7.

**Exercise 14.5** Let $H_0, H_1$ be Hilbert spaces. Let $C\colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ be closed and densely defined. Assume that $\operatorname{dom}(C) \cap \ker(C)^\perp \hookrightarrow H_0$ compactly. Show that, then, $\operatorname{dom}(C^*) \cap \ker(C^*)^\perp \hookrightarrow H_1$ compactly.

**Exercise 14.6** Let $\nu > 0$, $Y = [0,1)^d$, $s \in L_\infty(\mathbb{R}^d)$ be $[0,1)^d$-periodic, $0 \leqslant s \leqslant 1$, and $a$ as in Example 14.3.8. Show that $(u_n)_n$ in $L_{2,\nu}(\mathbb{R}; L_2(Y))$ satisfying

$$\partial_{t,\nu}^{2}\, s(n\mathrm{m})\, u_n + \partial_{t,\nu}\bigl(1 - s(n\mathrm{m})\bigr) u_n - \operatorname{div}_{\sharp} a(n\mathrm{m}) \operatorname{grad}_{\sharp} u_n = f$$

for some $f \in L_{2,\nu}(\mathbb{R}; L_2(Y))$ is convergent to some $u \in L_{2,\nu}(\mathbb{R}; L_2(Y))$. Which limit equation is satisfied by $u$?

**Exercise 14.7** Let $(\alpha_n)_n$ be a null sequence in $[0,1]$ and let $a$ be as in Example 14.3.8. Show

$$\left( \begin{pmatrix} \partial_{t,\nu} & 0 \\ 0 & \partial_{t,\nu}^{\alpha_n} a(n\mathrm{m}) \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_{\sharp}\iota_{\sharp} \\ \iota_{\sharp}^{*}\operatorname{grad}_{\sharp} & 0 \end{pmatrix} \right)^{-1} \to \left( \begin{pmatrix} \partial_{t,\nu} & 0 \\ 0 & a_{\mathrm{hom}} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_{\sharp}\iota_{\sharp} \\ \iota_{\sharp}^{*}\operatorname{grad}_{\sharp} & 0 \end{pmatrix} \right)^{-1}$$

in the strong operator topology. Show that if $f \in L_{2,-\mu}(\mathbb{R}; L_2(Y)_\perp)$ for some small enough $\mu > 0$, where $L_2(Y)_\perp := \bigl\{\varphi \in L_2(Y) ;\ \int_Y \varphi = 0\bigr\}$, we have

$$\left( \begin{pmatrix} \partial\_{t,\boldsymbol{\nu}} & 0\\ 0 & \mathfrak{a}\_{\text{hom}} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}\_{\sharp} \iota\_{\sharp} \\ \iota\_{\sharp}^{\*} \operatorname{grad}\_{\sharp} & 0 \end{pmatrix} \right)^{-1} \begin{pmatrix} f\\ 0 \end{pmatrix} \in L\_{2,-\mu}\left( \mathbb{R}; L\_{2}(Y) \times \operatorname{ran}(\operatorname{grad}\_{\sharp}) \right).$$

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 15 Maximal Regularity**

In this chapter, we address the issue of maximal regularity. More precisely, we provide a criterion on the 'structure' of the evolutionary equation

$$\left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right) U = F$$

in question and on the right-hand side $F$ in order to obtain $U \in \operatorname{dom}(\partial_{t,\nu} M(\partial_{t,\nu})) \cap \operatorname{dom}(A)$. If $F \in L_{2,\nu}(\mathbb{R}; H)$, then $U \in \operatorname{dom}(\partial_{t,\nu} M(\partial_{t,\nu})) \cap \operatorname{dom}(A)$ is the optimal regularity one could hope for. However, one cannot expect $U$ to be this regular, since $\partial_{t,\nu} M(\partial_{t,\nu}) + A$ is simply not closed in general. Hence, in all cases where $\partial_{t,\nu} M(\partial_{t,\nu}) + A$ is *not* closed, the desired regularity property fails for some $F \in L_{2,\nu}(\mathbb{R}; H)$. However, note that by Picard's theorem, $F \in \operatorname{dom}(\partial_{t,\nu})$ implies the desired regularity property for $U$, given that the positive definiteness condition for the material law is satisfied and $A$ is skew-selfadjoint. In this case, one even has $U \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(A)$, which is more regular than expected. Thus, in the general case of an unbounded, skew-selfadjoint operator $A$, neither the condition $F \in \operatorname{dom}(\partial_{t,\nu})$ nor $F \in L_{2,\nu}(\mathbb{R}; H)$ yields precisely the regularity $U \in \operatorname{dom}(\partial_{t,\nu} M(\partial_{t,\nu})) \cap \operatorname{dom}(A)$, since

$$\operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(A) \subseteq \operatorname{dom}(\partial_{t,\nu} M(\partial_{t,\nu})) \cap \operatorname{dom}(A) \subseteq \operatorname{dom}\left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right),$$

where the inclusions are proper in general. It is the aim of this chapter to provide an example case where less regularity of $F$ actually yields *more* regularity for $U$. If one focusses on time-regularity only, this improvement of regularity is in stark contrast to the general theory developed in the previous chapters. Indeed, in this regard, one can phrase the (time) regularity asserted in Picard's theorem as "$U$ is as regular as $F$". For a more detailed account of the usual perspective on maximal regularity (predominantly) for parabolic equations, we refer to the Comments section of this chapter.

## **15.1 Guiding Examples and Non-Examples**

Before we present the abstract theory, we motivate the general setting by looking at a particular example. Traditionally, in the discussion of partial differential equations and their classification, one focuses on regularity theory. Thus, one finds the non-exhaustive categories 'elliptic', 'parabolic', and 'hyperbolic'. Since we do not want to dive into the intricacies of this classification, much less into regularity theory, we only name some examples of the said subclasses. Laplace's equation from Chap. 1 falls into the class of elliptic PDEs, the heat equation is the paradigm example of a parabolic equation, and Maxwell's equations or the transport equation are hyperbolic.

Since we predominantly treat time-dependent equations and elliptic PDEs are usually time-independent, we only look at examples for hyperbolic and parabolic equations more closely. As for the hyperbolic case, we consider the transport equation next and highlight that any 'gain' in regularity, as hinted at in the introduction of this chapter, is not possible.

*Example 15.1.1* We define $\partial\colon H^1(\mathbb{R}) \subseteq L_2(\mathbb{R}) \to L_2(\mathbb{R})$, $\varphi \mapsto \varphi'$. Then, by Corollary 3.2.6, $\partial^* = -\partial$; that is, $\partial$ is skew-selfadjoint. We consider for $\nu > 0$ the operator

$$\partial_{t,\nu} + \partial$$

in $L_{2,\nu}(\mathbb{R}; L_2(\mathbb{R}))$. Then, by Picard's theorem, $0 \in \rho\left(\overline{\partial_{t,\nu} + \partial}\right)$; that is, $\left(\overline{\partial_{t,\nu} + \partial}\right)^{-1} \in L(L_{2,\nu}(\mathbb{R}; L_2(\mathbb{R})))$. Next, consider the functions

$$u \colon (t, x) \mapsto \mathbb{1}_{\mathbb{R}_{\geqslant 0}}(t)\, t \mathrm{e}^{-t} h(x - t),$$

$$f \colon (t, x) \mapsto \mathbb{1}_{\mathbb{R}_{\geqslant 0}}(t)\, (1 - t) \mathrm{e}^{-t} h(x - t)$$

for some $h \in L_2(\mathbb{R})$. Then it is not difficult to see that $u, f \in L_{2,\nu}(\mathbb{R}; L_2(\mathbb{R}))$. If $h \in C_c^\infty(\mathbb{R})$, then

$$u \in H^1_\nu(\mathbb{R}; H^1(\mathbb{R})) \subseteq \operatorname{dom}(\partial_{t,\nu} + \partial),$$

and

$$(\partial_{t,\nu} + \partial) u = f.$$

If $h \in L_2(\mathbb{R}) \setminus H^1(\mathbb{R})$, then one can show that $u \in \operatorname{dom}\left(\overline{\partial_{t,\nu} + \partial}\right)$, $\left(\overline{\partial_{t,\nu} + \partial}\right) u = f$ and

$$u \notin \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(\partial).$$

For this observation, we refer to Exercise 15.1. Thus, being in the domain of $\overline{\partial_{t,\nu} + \partial}$ does not necessarily imply being in $\operatorname{dom}(\partial_{t,\nu})$ or in $\operatorname{dom}(\partial)$.
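The identity $(\partial_{t,\nu} + \partial)u = f$ can be verified symbolically away from $t = 0$, where the indicator is constant and the chain rule cancels the $h'$ terms. The following sketch is not part of the text; it uses sympy with an abstract smooth profile $h$:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)  # restrict to t > 0, where the indicator equals 1
h = sp.Function('h')                     # abstract profile, h in C_c^infty(R) in the text

u = t * sp.exp(-t) * h(x - t)
f = (1 - t) * sp.exp(-t) * h(x - t)

# transport operator partial_t + partial_x; the h' terms from the chain rule cancel
lhs = sp.diff(u, t) + sp.diff(u, x)
assert sp.simplify(lhs - f) == 0
```

For $h$ merely in $L_2(\mathbb{R})$ this computation is only formal, which is exactly the point of the second half of the example.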

The last example has shown that we cannot expect an improvement of regularity for the considered transport equation. In fact, it is possible to provide an example of the same type for the wave equation (and related hyperbolic equations, including Maxwell's equations). Thus, in order to have an improvement of regularity, one needs to further restrict the class of evolutionary equations. We now provide a guiding example, where we discuss an abstract variant of the heat equation.

*Example 15.1.2* Let $\ell_2$ be the space of square-summable sequences indexed by $n \in \mathbb{N}$. We note that $\ell_2$ is isomorphic to $L_2(\#_{\mathbb{N}})$, where $\#_{\mathbb{N}}$ is the counting measure on $\mathbb{N}$. We introduce $\mathrm{m}\colon \operatorname{dom}(\mathrm{m}) \subseteq \ell_2 \to \ell_2$, the operator of multiplication by the argument. Then $\mathrm{m}$ is an unbounded, selfadjoint operator. Next, we consider the operator

$$\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & -\mathrm{m} \\ \mathrm{m} & 0 \end{pmatrix},$$

on $L_{2,\nu}(\mathbb{R}; \ell_2 \oplus \ell_2)$. Then Picard's theorem applies and we obtain

$$0 \in \rho\left( \partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & -\mathrm{m} \\ \mathrm{m} & 0 \end{pmatrix} \right).$$

For $f \in L_{2,\nu}(\mathbb{R}; \ell_2)$ define

$$\begin{pmatrix} u \\ q \end{pmatrix} := \left( \overline{\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & -\mathrm{m} \\ \mathrm{m} & 0 \end{pmatrix}} \right)^{-1} \begin{pmatrix} f \\ 0 \end{pmatrix}.$$

Then $u \in \operatorname{dom}(\partial_{t,\nu}) \cap \operatorname{dom}(\mathrm{m})$ and $q \in \operatorname{dom}(\mathrm{m})$. We ask the reader to fill in the details in Exercise 15.2.
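Since $\mathrm{m}$ acts componentwise, the example decouples on the Fourier–Laplace side: for each fixed $z$ with $\operatorname{Re} z = \nu$ and each $n$, one solves a $2\times 2$ linear system with solution $u_n = f_n/(z+n^2)$ and $q_n = -n u_n$. A numerical sketch (the sample data and the choice of $z$ are arbitrary, not from the text) illustrates the claimed regularity gain:

```python
import numpy as np

nu = 1.0
n = np.arange(1, 10001, dtype=float)  # spectrum of the multiplication operator m
z = nu + 1j * 7.3                     # sample point on the line Re z = nu

# solve (z*diag(1,0) + diag(0,1) + [[0,-n],[n,0]]) (u, q) = (f, 0) per component:
# z u_n - n q_n = f_n and n u_n + q_n = 0, hence u_n = f_n/(z + n^2), q_n = -n u_n
f = np.ones_like(n)
u = f / (z + n**2)
q = -n * u

# regularity gain: n*u stays bounded (u in dom(m)) and n*q stays bounded (q in dom(m)),
# since n/(Re z + n^2) <= 1/(2 sqrt(nu)) and n^2/|z + n^2| <= 1 for Re z > 0
assert np.max(np.abs(n * u)) <= 1 / (2 * np.sqrt(nu)) + 1e-9
assert np.max(np.abs(n * q)) <= np.max(np.abs(f)) + 1e-9
```

The bounds hold uniformly on the line $\operatorname{Re} z = \nu$, which is the frequency-domain mechanism behind the gain of regularity in this parabolic example.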

*Remark 15.1.3* The last example is in fact an abstract version of the heat equation on bounded domains. We refer to [90, Section 2.2.2] for a corresponding reasoning for the Schrödinger equation.

Let us compare the two examples, the transport equation and the abstract parabolic equation. From the perspective of evolutionary equations, that is, looking at equations of the form

$$(\partial_{t,\nu} M_0 + M_1 + A) U = F,$$

for the transport equation we have $M_0 = 1$ and $M_1 = 0$. In the case of the abstract parabolic equation, $M_0$ has a nontrivial kernel, which is compensated for in $M_1$. Moreover, the decomposition into kernel and range of $M_0$ is comparable to the block structure of $A$. Thus, we may hope for an improvement of regularity as in Example 15.1.2 if these abstract conditions are met. This observation is the starting point for the parabolic evolutionary pairs to be defined in the next section.

## **15.2 The Maximal Regularity Theorem and Fractional Sobolev Spaces**

In order to be able to formulate the main theorem of this chapter, we need the notion of fractional Sobolev spaces. For this, we recall from Example 5.3.4 and Sect. 7.2 that we have already dealt with fractional powers of the time derivative. For $\alpha, \nu \geqslant 0$, we thus consistently define

$$\partial_{t,\nu}^{\alpha} := \mathcal{L}_\nu^* (\mathrm{i}\mathrm{m} + \nu)^{\alpha} \mathcal{L}_\nu,$$

with maximal domain in $L_{2,\nu}(\mathbb{R}; H)$, where we agree on setting $\mathcal{L}_0 := \mathcal{F}$. Note that in this case, using Proposition 7.2.1, $0 \in \rho(\partial_{t,\nu}^{\alpha})$ given $\nu > 0$. Hence, the following construction yields Hilbert spaces; for this, also recall that $\langle\cdot,\cdot\rangle_A$ denotes the graph inner product of a linear operator $A$ defined in a Hilbert space.
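On the Fourier–Laplace side, $\partial_{t,\nu}^{\alpha}$ acts as multiplication by the symbol $(\mathrm{i}\xi + \nu)^{\alpha}$. A small numerical sketch (the frequency grid and the value of $\nu$ are arbitrary choices, not from the text) illustrates the factorisation of symbols used in the proof of Lemma 15.2.1 below, and the lower bound $|(\mathrm{i}\xi + \nu)^{\alpha}| \geqslant \nu^{\alpha}$ behind $0 \in \rho(\partial_{t,\nu}^{\alpha})$:

```python
import numpy as np

nu = 1.0
xi = np.linspace(-50.0, 50.0, 2001)  # sample frequencies; "m" is multiplication by xi

def symbol(alpha):
    # principal-branch power of the symbol of the fractional time derivative
    return (1j * xi + nu) ** alpha

# consistency of fractional powers: the half-order symbol applied twice is the full one
assert np.allclose(symbol(0.5) ** 2, symbol(1.0))

# boundedness of the inverse for nu > 0: the symbol stays away from 0
assert np.min(np.abs(symbol(0.75))) >= nu ** 0.75 - 1e-12
```

The second assertion is the symbol-level reason why the construction below yields Hilbert spaces for $\nu > 0$.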

**Definition** Let *α, ν* ≥ 0. Then we define

$$H^{\alpha}\_{\boldsymbol{\nu}}(\mathbb{R}; H) := \left( \text{dom}(\partial^{\alpha}\_{\boldsymbol{t}, \boldsymbol{\nu}}), (f, \mathbf{g}) \mapsto \langle \partial^{\alpha}\_{\boldsymbol{t}, \boldsymbol{\nu}}f, \partial^{\alpha}\_{\boldsymbol{t}, \boldsymbol{\nu}}\mathbf{g} \rangle\_{L\_{2, \boldsymbol{\nu}}(\mathbb{R}; H)} \right)$$

for *ν >* 0 and

$$H_0^{\alpha}(\mathbb{R}; H) := \left( \{ f \in L_2(\mathbb{R}; H) ;\ \mathcal{F}f \in \operatorname{dom}((\mathrm{i}\mathrm{m})^{\alpha}) \}, (f, g) \mapsto \langle \mathcal{F}f, \mathcal{F}g \rangle_{(\mathrm{i}\mathrm{m})^{\alpha}} \right).$$

**Lemma 15.2.1** *For all $\alpha, \nu \geqslant 0$ the space $H^{\alpha}_{\nu}(\mathbb{R}; H)$ is a Hilbert space. Moreover, $H^{\alpha}_{\nu}(\mathbb{R}; H) \hookrightarrow L_{2,\nu}(\mathbb{R}; H)$ continuously and densely.*

*Proof* We only show the claim for *ν >* 0. By Fourier–Laplace transformation, the claim follows if we show that

$$(\mathrm{i}\mathrm{m} + \nu)^{\alpha} \colon \operatorname{dom}((\mathrm{i}\mathrm{m} + \nu)^{\alpha}) \subseteq L_2(\mathbb{R}; H) \to L_2(\mathbb{R}; H)$$

is densely defined and continuously invertible. For this, we find $n \in \mathbb{N}$ and $\beta \in [0,1)$ such that $\alpha = n + \beta$. It is easy to see that $(\mathrm{i}\mathrm{m} + \nu)^{\alpha} = (\mathrm{i}\mathrm{m} + \nu)^{n} (\mathrm{i}\mathrm{m} + \nu)^{\beta}$. Thus, continuous invertibility readily follows from the continuous invertibility of $(\mathrm{i}\mathrm{m} + \nu)$ and $(\mathrm{i}\mathrm{m} + \nu)^{\beta}$ (for the latter, see also Proposition 7.2.1). For the case $H = \mathbb{K}$, it follows from Theorem 2.4.3 that $(\mathrm{i}\mathrm{m} + \nu)^{\alpha}$ is densely defined. Thus, it follows from Lemma 3.1.8 that $(\mathrm{i}\mathrm{m} + \nu)^{\alpha}$ is densely defined also for general $H$.

In order to state our main theorem, we introduce the notion of parabolic pairs.

**Definition** Let $M\colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(H)$ be a material law, $A\colon \operatorname{dom}(A) \subseteq H \to H$, and $\alpha \in (0,1]$. We call $(M, A)$ an *(α-)fractional parabolic pair* if the following conditions are met: there exist $\nu > \max\{0, \mathrm{s}_{\mathrm{b}}(M)\}$ and $c > 0$ such that

$$\operatorname{Re} z M(z) \geqslant c \quad (z \in \mathbb{C}_{\operatorname{Re} > \nu}),$$

and, moreover, we find a closed subspace $H_0 \subseteq H$, $H_1 := H_0^{\perp}$, $C\colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ closed and densely defined, and $M_{00} \in \mathcal{M}(H_0; \nu)$, $N \in \mathcal{M}(H; \nu)$ such that

$$M(z) = \begin{pmatrix} M_{00}(z) & 0 \\ 0 & 0 \end{pmatrix} + z^{-1} N(z), \qquad A = \begin{pmatrix} 0 & -C^* \\ C & 0 \end{pmatrix},$$

and

$$\operatorname{Re} z^{1-\alpha} M_{00}(z) \geqslant c' \quad (z \in \mathbb{C}_{\operatorname{Re} > \nu})$$

for some $c' > 0$, and $\mathbb{C}_{\operatorname{Re} > \nu} \ni z \mapsto z^{1-\alpha} M_{00}(z) \in L(H_0)$ is bounded. A $1$-fractional parabolic pair is called *parabolic*.

*Remark 15.2.2*

(a) If $(M, A)$ is $\alpha$-fractional parabolic and $\beta$-fractional parabolic with the same decomposition $H = H_0 \oplus H_1$, then $\alpha = \beta$. Indeed, assume that $\alpha < \beta$. Then

$$z^{1-\beta}M\_{00}(z) = z^{\alpha-\beta}z^{1-\alpha}M\_{00}(z) \to 0 \quad (|z| \to \infty, z \in \mathbb{C}\_{\text{Re}>\nu})$$

contradicting the real-part condition.

(b) If $(M, A)$ is $\alpha$-fractional parabolic, then there exists $\mu > \nu$ such that for all $z \in \mathbb{C}_{\operatorname{Re} > \mu}$

$$\operatorname{Re} z^{1-\alpha} \left( M_{00}(z) + z^{-1} N_{00}(z) \right) \geqslant c'/2 \tag{15.1}$$

for some $c' > 0$, where $N_{00}(z) := \iota_{H_0}^* N(z) \iota_{H_0} \in L(H_0)$. Indeed, this follows from the fact that $z^{-\alpha} N_{00}(z) \to 0$ as $\operatorname{Re} z \to \infty$.

The main theorem of this chapter is the following:

**Theorem 15.2.3** *Let $\alpha \in (0,1]$ and let $(M, A)$ be $\alpha$-fractional parabolic (with $H = H_0 \oplus H_1$ and $C$ from $H_0$ to $H_1$), and assume that* (15.1) *holds for all $z \in \mathbb{C}_{\operatorname{Re} > \nu}$ for some $\nu > \max\{0, \mathrm{s}_{\mathrm{b}}(M)\}$. Let $f \in L_{2,\nu}(\mathbb{R}; H_0)$ and $g \in H^{\alpha/2}_{\nu}(\mathbb{R}; H_1)$. Then the solution $(u, v) := \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1}(f, g) \in L_{2,\nu}(\mathbb{R}; H)$ satisfies*

$$u \in H^{\alpha}_{\nu}(\mathbb{R}; H_0) \cap H^{\alpha/2}_{\nu}(\mathbb{R}; \operatorname{dom}(C)),$$

$$v \in H^{\alpha/2}_{\nu}(\mathbb{R}; H_1) \cap L_{2,\nu}(\mathbb{R}; \operatorname{dom}(C^*)).$$

*More precisely,*

$$\begin{aligned} \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1} &\colon L_{2,\nu}(\mathbb{R}; H_0) \oplus H^{\alpha/2}_{\nu}(\mathbb{R}; H_1) \\ &\to \left(H^{\alpha}_{\nu}(\mathbb{R}; H_0) \cap H^{\alpha/2}_{\nu}(\mathbb{R}; \operatorname{dom}(C))\right) \oplus \left(H^{\alpha/2}_{\nu}(\mathbb{R}; H_1) \cap L_{2,\nu}(\mathbb{R}; \operatorname{dom}(C^*))\right) \end{aligned}$$

*is continuous.*

*Example 15.2.4 (Heat Equation)* Let us recall the heat equation from Theorem 6.2.4. For $\Omega \subseteq \mathbb{R}^d$ open, we let $a \in L(L_2(\Omega)^d)$ be such that

$$\operatorname{Re} a \geqslant c$$

in the sense of positive definiteness. It is not difficult to see that

$$\left(z \mapsto \begin{pmatrix} 1 & 0 \\ 0 & a^{-1} z^{-1} \end{pmatrix}, \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix} \right)$$

is parabolic, with the obvious orthogonal decomposition of the underlying Hilbert space. Let $f \in L_{2,\nu}(\mathbb{R}; L_2(\Omega))$. Then

$$\begin{pmatrix} \theta \\ q \end{pmatrix} := \left( \overline{\partial_{t,\nu} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & a^{-1} \end{pmatrix} + \begin{pmatrix} 0 & \operatorname{div}_0 \\ \operatorname{grad} & 0 \end{pmatrix}} \right)^{-1} \begin{pmatrix} f \\ 0 \end{pmatrix}$$

particularly satisfies the regularity statement

$$\theta \in H^1\_{\nu}(\mathbb{R}; L\_2(\Omega)) \cap L\_{2,\nu}(\mathbb{R}; H^1(\Omega)) \text{ and } q \in L\_{2,\nu}(\mathbb{R}; H\_0(\text{div}, \Omega)).$$

The next example deals with a parabolic variant of the equations introduced in (7.3) and (7.4) describing fractional elasticity. We modify the equations at hand by considering *α* ∈ [1*,* 2].

*Example 15.2.5 (Parabolic Fractional Viscoelasticity)* Let $\Omega \subseteq \mathbb{R}^d$ be open and recall the differential operators $\operatorname{Div}$ and $\operatorname{Grad}_0$ from Sect. 7.1, defined in the spaces $L_2(\Omega)^{d\times d}_{\mathrm{sym}}$ and $L_2(\Omega)^d$, respectively. Let $c > 0$, $D \in L\bigl(L_2(\Omega)^{d\times d}_{\mathrm{sym}}\bigr)$, and $\rho = \rho^* \in L(L_2(\Omega)^d)$. For $\nu > 0$ and $f \in L_{2,\nu}(\mathbb{R}; L_2(\Omega)^d)$ consider the problem of finding $u\colon \mathbb{R} \times \Omega \to \mathbb{R}^d$ such that

$$\partial_{t,\nu} \rho\, \partial_{t,\nu} u - \operatorname{Div} T = f \tag{15.2}$$

$$T = D \partial_{t,\nu}^{\alpha} \operatorname{Grad}_0 u, \tag{15.3}$$

for some $\alpha \in [1,2)$, where $\rho \geqslant c$ and $\operatorname{Re} D \geqslant c$ in the sense of positive definiteness. We rewrite the system just introduced by using $v := \partial_{t,\nu}^{\alpha} u$ to (formally) obtain

$$\begin{aligned} \partial_{t,\nu} \rho\, \partial_{t,\nu}^{1-\alpha} v - \operatorname{Div} T &= f, \\ T &= D \operatorname{Grad}_0 v. \end{aligned}$$

Note that $\gamma := 1 + (1 - \alpha) = 2 - \alpha \in (0, 1]$. Thus, using the selfadjointness and positive definiteness of $\rho$ as well as Proposition 7.2.1, we infer

$$\operatorname{Re}\left(z^{\gamma} \rho\right) \geqslant \nu^{\gamma} c \quad (z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}).$$

Consequently, applying Proposition 6.2.3(b) to *a* = *D*, we get that

$$\left(z \mapsto \begin{pmatrix} z^{\gamma - 1}\rho & 0 \\ 0 & z^{-1} D^{-1} \end{pmatrix}, \begin{pmatrix} 0 & -\operatorname{Div} \\ -\operatorname{Grad}_0 & 0 \end{pmatrix}\right)$$

is $\gamma$-fractional parabolic. In consequence, the solution $(v, T)$ of

$$\left( \overline{\partial_{t,\nu} \begin{pmatrix} \partial_{t,\nu}^{\gamma-1}\rho & 0 \\ 0 & \partial_{t,\nu}^{-1} D^{-1} \end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{Div} \\ -\operatorname{Grad}_0 & 0 \end{pmatrix}} \right) \begin{pmatrix} v \\ T \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}$$

additionally satisfies the following regularity properties

$$v \in H^{\gamma}_{\nu}\left(\mathbb{R}; L_2(\Omega)^d\right) \cap H^{\gamma/2}_{\nu}\left(\mathbb{R}; \operatorname{dom}(\operatorname{Grad}_0)\right),$$

$$T \in H^{\gamma/2}_{\nu}\left(\mathbb{R}; L_2(\Omega)^{d\times d}_{\mathrm{sym}}\right) \cap L_{2,\nu}\left(\mathbb{R}; \operatorname{dom}(\operatorname{Div})\right).$$

Rephrasing this for $u = \partial_{t,\nu}^{-\alpha} v$, we even have

$$u \in H^{2}_{\nu}(\mathbb{R}; L_2(\Omega)^d) \cap H^{1+\alpha/2}_{\nu}(\mathbb{R}; \operatorname{dom}(\operatorname{Grad}_0)),$$

which, since $\alpha/2 \leqslant 1$, particularly implies that the equations (15.2) and (15.3) are equalities valid in $L_{2,\nu}\left(\mathbb{R}; L_2(\Omega)^d\right)$ and $L_{2,\nu}\left(\mathbb{R}; L_2(\Omega)^{d\times d}_{\mathrm{sym}}\right)$, respectively.
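The exponent bookkeeping behind this conclusion, namely that $u = \partial_{t,\nu}^{-\alpha} v$ lifts each time-regularity index by $\alpha$, can be checked symbolically; the following is a small sketch with $\gamma = 1 + (1 - \alpha)$ as above:

```python
import sympy as sp

alpha = sp.Symbol('alpha')
gamma = 1 + (1 - alpha)  # the substitution from Example 15.2.5, gamma = 2 - alpha

# v in H^gamma together with u = d^(-alpha) v gives u in H^(gamma + alpha) = H^2
assert sp.simplify(gamma + alpha - 2) == 0

# v in H^(gamma/2)(dom(Grad_0)) gives u in H^(gamma/2 + alpha) = H^(1 + alpha/2)
assert sp.simplify(gamma / 2 + alpha - (1 + sp.Rational(1, 2) * alpha)) == 0
```
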

## **15.3 The Proof of Theorem 15.2.3**

The decisive estimate in connection with the proof of Theorem 15.2.3 is contained in the following statement. For the entire rest of the section, we shall denote the norm and scalar product in $H^{\alpha}_{\nu}(\mathbb{R}; K)$, $K$ some Hilbert space, by $\|\cdot\|_{\alpha}$ and $\langle\cdot,\cdot\rangle_{\alpha}$, respectively.

**Lemma 15.3.1** *Let $H_0, H_1$ be Hilbert spaces and $C\colon \operatorname{dom}(C) \subseteq H_0 \to H_1$ densely defined and closed. Let $\alpha \in [0,1]$, $M_j\colon \operatorname{dom}(M_j) \subseteq \mathbb{C} \to L(H_j)$ material laws for $j \in \{0,1\}$, and $\nu > \max\{\mathrm{s}_{\mathrm{b}}(M_0), \mathrm{s}_{\mathrm{b}}(M_1), 0\}$ with*

$$\mathbb{C}_{\operatorname{Re} \geqslant \nu} \ni z \mapsto z^{1-\alpha} M_0(z) \in L(H_0)$$

*bounded. Assume there exists $c > 0$ such that for all $z \in \mathbb{C}_{\operatorname{Re} \geqslant \nu}$*

$$\operatorname{Re} z M\_0(z) \geqslant c, \quad \operatorname{Re} M\_1(z) \geqslant c, \quad \operatorname{Re} z^{1-\alpha} M\_0(z) \geqslant c.$$

*Let $f \in L_{2,\nu}(\mathbb{R}; H_0)$ and $g \in H^{\alpha/2}_{\nu}(\mathbb{R}; H_1)$, as well as $u \in H^{1}_{\nu}(\mathbb{R}; \operatorname{dom}(C))$ and $v \in H^{1}_{\nu}(\mathbb{R}; \operatorname{dom}(C^*))$. Assume the equalities*

$$\partial_{t,\nu} M_0(\partial_{t,\nu}) u - C^* v = f,$$

$$v + M_1(\partial_{t,\nu}) C u = g.$$

*Then*

$$\begin{aligned} &\|u\|_{\alpha}^{2} + \|Cu\|_{\alpha/2}^{2} + \|v\|_{\alpha/2}^{2} + \|C^* v\|_{0}^{2} \\ &\leqslant 2\left(1 + \left(m_1^2 + m_0^2 + \frac{1}{2}\right)\left(\frac{2}{c} + \frac{m_1}{c^2}\right)^2\right)\left(\|f\|_{0}^{2} + \|g\|_{\alpha/2}^{2}\right) \end{aligned}$$

*with $m_1 := \|M_1\|_{\infty, \mathbb{C}_{\operatorname{Re} > \nu}}$ and $m_0 := \left\| z \mapsto z^{1-\alpha} M_0(z) \right\|_{\infty, \mathbb{C}_{\operatorname{Re} > \nu}}$.*

*Proof* We compute

$$\begin{aligned} c\,\|Cu\|_{\alpha/2}^{2} &\leqslant c\,\|Cu\|_{\alpha/2}^{2} + c\,\|u\|_{\alpha/2}^{2} \\ &\leqslant \operatorname{Re}\left\langle M_{1}(\partial_{t,\nu}) Cu, Cu \right\rangle_{\alpha/2} + \operatorname{Re}\left\langle \partial_{t,\nu} M_{0}(\partial_{t,\nu}) u, u \right\rangle_{\alpha/2} \\ &= \operatorname{Re}\left\langle g - v, Cu \right\rangle_{\alpha/2} + \operatorname{Re}\left\langle \partial_{t,\nu} M_{0}(\partial_{t,\nu}) u, u \right\rangle_{\alpha/2} \\ &\leqslant \|g\|_{\alpha/2} \|Cu\|_{\alpha/2} + \operatorname{Re}\left\langle \partial_{t,\nu} M_{0}(\partial_{t,\nu}) u - C^{*} v, u \right\rangle_{\alpha/2} \\ &= \|g\|_{\alpha/2} \|Cu\|_{\alpha/2} + \operatorname{Re}\left\langle f, \bigl(\partial_{t,\nu}^{*}\bigr)^{\alpha/2} \bigl(\partial_{t,\nu}\bigr)^{\alpha/2} u \right\rangle_{0} \\ &\leqslant \|g\|_{\alpha/2} \|Cu\|_{\alpha/2} + \|f\|_{0} \|u\|_{\alpha}, \end{aligned}$$

where we used that

$$\begin{split} \left\| \left( \partial\_{t,\boldsymbol{\nu}}^{\ast} \right)^{\alpha/2} \left( \partial\_{t,\boldsymbol{\nu}} \right)^{\alpha/2} u \right\|\_{0} &= \left\| \left( -\mathrm{im} + \boldsymbol{\nu} \right)^{\alpha/2} \left( \mathrm{im} + \boldsymbol{\nu} \right)^{\alpha/2} u \right\|\_{L\_{2}(\mathbb{R}; H\_{0})} \\ &= \left\| \frac{(-\mathrm{im} + \boldsymbol{\nu})^{\alpha/2}}{\left( \mathrm{im} + \boldsymbol{\nu} \right)^{\alpha/2}} \left( \mathrm{im} + \boldsymbol{\nu} \right)^{\alpha} u \right\|\_{L\_{2}(\mathbb{R}; H\_{0})} \\ &\leqslant \left\| \left( \mathrm{im} + \boldsymbol{\nu} \right)^{\alpha} u \right\|\_{L\_{2}(\mathbb{R}; H\_{0})} = \left\| u \right\|\_{\boldsymbol{\alpha}}. \end{split}$$

Moreover,

$$\begin{aligned} c\,\|u\|_{\alpha}^{2} &\leqslant \operatorname{Re}\left\langle \partial_{t,\nu}^{1-\alpha} M_{0}(\partial_{t,\nu})\, \partial_{t,\nu}^{\alpha} u, \partial_{t,\nu}^{\alpha} u \right\rangle_{0} = \operatorname{Re}\left\langle \partial_{t,\nu} M_{0}(\partial_{t,\nu}) u, \partial_{t,\nu}^{\alpha} u \right\rangle_{0} \\ &= \operatorname{Re}\left\langle f + C^{*} v, \partial_{t,\nu}^{\alpha} u \right\rangle_{0} \\ &\leqslant \|f\|_{0} \|u\|_{\alpha} + \operatorname{Re}\left\langle \bigl(\partial_{t,\nu}^{*}\bigr)^{\alpha/2} v, \partial_{t,\nu}^{\alpha/2} C u \right\rangle_{0} \\ &\leqslant \|f\|_{0} \|u\|_{\alpha} + \|v\|_{\alpha/2} \|Cu\|_{\alpha/2} \\ &= \|f\|_{0} \|u\|_{\alpha} + \left\| g - M_{1}(\partial_{t,\nu}) C u \right\|_{\alpha/2} \|Cu\|_{\alpha/2} \\ &\leqslant \|f\|_{0} \|u\|_{\alpha} + \|g\|_{\alpha/2} \|Cu\|_{\alpha/2} + m_{1} \|Cu\|_{\alpha/2}^{2} \\ &\leqslant \left(1 + \frac{m_{1}}{c}\right)\left( \|f\|_{0} \|u\|_{\alpha} + \|g\|_{\alpha/2} \|Cu\|_{\alpha/2} \right). \end{aligned}$$

Thus, we obtain for *ε >* 0

$$\begin{aligned} &c\left(\|u\|_{\alpha}^{2}+\|Cu\|_{\alpha/2}^{2}\right) \\ &\leqslant \left(2+\frac{m_{1}}{c}\right)\left(\|f\|_{0}\|u\|_{\alpha}+\|g\|_{\alpha/2}\|Cu\|_{\alpha/2}\right) \\ &\leqslant \frac{1}{2}\left(2+\frac{m_{1}}{c}\right)\left(\frac{1}{\varepsilon}\left(\|f\|_{0}^{2}+\|g\|_{\alpha/2}^{2}\right)+\varepsilon\left(\|u\|_{\alpha}^{2}+\|Cu\|_{\alpha/2}^{2}\right)\right). \end{aligned}$$

Choosing $\varepsilon = c^2/(2c + m_1)$ and subtracting the term involving $u$ and $Cu$ on both sides of the inequality, we deduce

$$\begin{aligned} \frac{c}{2}\left(\|u\|_{\alpha}^{2} + \|Cu\|_{\alpha/2}^{2}\right) &\leqslant \frac{1}{2}\left(2 + \frac{m_{1}}{c}\right)\frac{1}{\varepsilon}\left(\|f\|_{0}^{2} + \|g\|_{\alpha/2}^{2}\right) \\ &= \frac{1}{2c}\left(2 + \frac{m_{1}}{c}\right)^{2}\left(\|f\|_{0}^{2} + \|g\|_{\alpha/2}^{2}\right) \end{aligned}$$

and therefore

$$\left(\|u\|\_{\alpha}^{2} + \|Cu\|\_{\alpha/2}^{2}\right) \leqslant \left(\frac{2}{c} + \frac{m\_{1}}{c^{2}}\right)^{2} \left(\|f\|\_{0}^{2} + \|g\|\_{\alpha/2}^{2}\right).$$

Finally, we compute

$$\begin{aligned} \frac{1}{2}\|v\|_{\alpha/2}^2 &\leqslant \|g\|_{\alpha/2}^2 + \left\|M_1(\partial_{t,\nu}) C u\right\|_{\alpha/2}^2 \\ &\leqslant \|g\|_{\alpha/2}^2 + m_1^2\left(\frac{2}{c}+\frac{m_1}{c^2}\right)^2\left(\|f\|_{0}^2 + \|g\|_{\alpha/2}^2\right) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{2}\left\|C^* v\right\|_{0}^{2} &\leqslant \left\|\partial_{t,\nu} M_0(\partial_{t,\nu}) u\right\|_{0}^{2} + \|f\|_{0}^{2} \\ &\leqslant \left\|\partial_{t,\nu}^{1-\alpha} M_0(\partial_{t,\nu})\, \partial_{t,\nu}^{\alpha} u\right\|_{0}^{2} + \|f\|_{0}^{2} \\ &\leqslant m_0^2 \|u\|_{\alpha}^{2} + \|f\|_{0}^{2} \\ &\leqslant m_0^2\left(\frac{2}{c}+\frac{m_1}{c^2}\right)^2\left(\|f\|_{0}^{2}+\|g\|_{\alpha/2}^{2}\right) + \|f\|_{0}^{2}. \end{aligned}$$

The next preliminary finding is a refinement of the surjectivity statement in Picard's theorem.

**Proposition 15.3.2** *Let $H$ be a Hilbert space, $M\colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(H)$ a material law, $\nu > \mathrm{s}_{\mathrm{b}}(M)$ with $\nu > 0$, and $A\colon \operatorname{dom}(A) \subseteq H \to H$ skew-selfadjoint. Assume there exists $c > 0$ such that for all $z \in \mathbb{C}_{\operatorname{Re} > \nu}$ we have*

$$\operatorname{Re} z M(z) \geqslant c.$$

*Let β* ∈ [0*,* 1]*.*

(a) *The inclusion*

$$\left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right)\left[H^2_\nu(\mathbb{R}; \operatorname{dom}(A))\right] \subseteq H^{\beta}_{\nu}(\mathbb{R}; H)$$

*is dense.*

(b) *Let $H_0 \subseteq H$ be a closed subspace and $H_1 := H_0^{\perp}$. Then*

$$\left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right)\left[H^2_\nu(\mathbb{R}; \operatorname{dom}(A))\right] \subseteq L_{2,\nu}(\mathbb{R}; H_0) \oplus H^{\beta}_{\nu}(\mathbb{R}; H_1)$$

*is dense.*

*Proof*

(a) Since $H^1_\nu(\mathbb{R}; H)$ is dense in $H^{\beta}_{\nu}(\mathbb{R}; H)$ (this is a consequence of Lemma 15.2.1), it suffices to show the claim for $\beta = 1$. Next, by Picard's theorem, for $f \in \operatorname{dom}(\partial_{t,\nu})$ we obtain $u = \left(\overline{\partial_{t,\nu} M(\partial_{t,\nu}) + A}\right)^{-1} f \in \operatorname{dom}(\partial_{t,\nu}) \cap L_{2,\nu}(\mathbb{R}; \operatorname{dom}(A))$. In particular, it follows that

$$\left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right)\left[H^1_\nu(\mathbb{R}; H) \cap L_{2,\nu}(\mathbb{R}; \operatorname{dom}(A))\right] \subseteq L_{2,\nu}(\mathbb{R}; H)$$

is dense. Multiplying this inclusion by $\partial_{t,\nu}^{-1}$, we infer that

$$\left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right)\left[H^2_\nu(\mathbb{R}; H) \cap H^1_\nu(\mathbb{R}; \operatorname{dom}(A))\right] \subseteq H^1_\nu(\mathbb{R}; H)$$

is dense. Hence, for $f \in H^1_\nu(\mathbb{R}; H)$, we find $(u_n)_n$ in $H^2_\nu(\mathbb{R}; H) \cap H^1_\nu(\mathbb{R}; \operatorname{dom}(A))$ such that $f_n := \left(\partial_{t,\nu} M(\partial_{t,\nu}) + A\right) u_n \to f$ in $H^1_\nu(\mathbb{R}; H)$ as $n \to \infty$. Next, for $\varepsilon > 0$, we have $(1 + \varepsilon \partial_{t,\nu})^{-1} u \in H^2_\nu(\mathbb{R}; \operatorname{dom}(A))$ given $u \in H^1_\nu(\mathbb{R}; \operatorname{dom}(A))$. Moreover, $(1 + \varepsilon \partial_{t,\nu})^{-1} f \to f$ in $H^1_\nu(\mathbb{R}; H)$ as $\varepsilon \to 0$, by Lemma 9.3.3(b) and the fact that $\partial_{t,\nu}^{-1}$ commutes with $(1 + \varepsilon \partial_{t,\nu})^{-1}$. Thus, we compute for $\varepsilon > 0$ and $n \in \mathbb{N}$

$$\begin{aligned} &\left\|\left(\partial_{t,\nu}M(\partial_{t,\nu})+A\right)(1+\varepsilon\partial_{t,\nu})^{-1}u_n - f\right\|_1 \\ &\quad\leqslant \left\|(1+\varepsilon\partial_{t,\nu})^{-1}f_n - (1+\varepsilon\partial_{t,\nu})^{-1}f\right\|_1 + \left\|(1+\varepsilon\partial_{t,\nu})^{-1}f - f\right\|_1 \\ &\quad\leqslant \left\|f_n - f\right\|_1 + \left\|(1+\varepsilon\partial_{t,\nu})^{-1}f - f\right\|_1 \to 0 \end{aligned}$$

as *n* → ∞ and *ε* → 0, which concludes the proof of (a).

(b) By (a), it suffices to show that

$$H^{\beta}_{\nu}(\mathbb{R};H) = H^{\beta}_{\nu}(\mathbb{R};H_0)\oplus H^{\beta}_{\nu}(\mathbb{R};H_1) \subseteq L_{2,\nu}(\mathbb{R};H_0)\oplus H^{\beta}_{\nu}(\mathbb{R};H_1)$$

is dense (note that the first equality follows from the fact that $H\ni u\mapsto(u_0,u_1)\in H_0\oplus H_1$ is unitary). The desired density result thus follows from Lemma 15.2.1.

Next, we proceed with the proof of the main theorem of this chapter.

*Proof of Theorem 15.2.3* For $i,j\in\{0,1\}$ we set $N_{ij}(z) := \iota_{H_i}^{*}N(z)\iota_{H_j}$. Let $(f,g)\in\left(\partial_{t,\nu}M(\partial_{t,\nu})+A\right)\left[H^2_\nu\left(\mathbb{R};\operatorname{dom}(C)\oplus\operatorname{dom}(C^*)\right)\right]$. Defining

$$(u,v) := \left(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\right)^{-1}(f,g) \in H^2_\nu\left(\mathbb{R};\operatorname{dom}(C)\oplus\operatorname{dom}(C^*)\right),$$

we have

$$\begin{aligned} \partial_{t,\nu}M_{00}(\partial_{t,\nu})u + N_{00}(\partial_{t,\nu})u - C^{*}v &= f - N_{01}(\partial_{t,\nu})v, \\ N_{11}(\partial_{t,\nu})v + Cu &= g - N_{10}(\partial_{t,\nu})u. \end{aligned}$$

Since $\operatorname{Re} zM(z) \geqslant c$, we infer

$$\operatorname{Re} N_{11}(\partial_{t,\nu}) \geqslant c.$$

Thus, by Proposition 6.2.3(b), we deduce that $M_1(\partial_{t,\nu}) := N_{11}(\partial_{t,\nu})^{-1}$ satisfies the real-part condition imposed on $M_1$ in Lemma 15.3.1. Moreover, since $(M,A)$ is $\alpha$-fractional parabolic,

$$M\_0(z) := M\_{00}(z) + z^{-1} N\_{00}(z)$$

fulfills the real part and boundedness assumptions in Lemma 15.3.1. Introducing

$$\begin{aligned} \widetilde{f} &:= f - N_{01}(\partial_{t,\nu})v \in H^{1}_{\nu}(\mathbb{R};H_0) \subseteq L_{2,\nu}(\mathbb{R};H_0), \\ \widetilde{g} &:= M_{1}(\partial_{t,\nu})g - M_{1}(\partial_{t,\nu})N_{10}(\partial_{t,\nu})u \in H^{1}_{\nu}(\mathbb{R};H_1) \subseteq H^{\alpha/2}_{\nu}(\mathbb{R};H_1), \end{aligned}$$

we get

$$\partial_{t,\nu}M_0(\partial_{t,\nu})u - C^{*}v = \widetilde{f},$$

$$v + M_1(\partial_{t,\nu})Cu = \widetilde{g}.$$

Thus, using Lemma 15.3.1, we find $\kappa>0$ in terms of $M_0$, $M_1$ and the positivity constants such that (recall that $m_1 := \|M_1\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu}}$)

$$\begin{aligned} & \|u\|_{\alpha}^{2} + \|Cu\|_{\alpha/2}^{2} + \|v\|_{\alpha/2}^{2} + \|C^{*}v\|_{0}^{2} \\ & \quad \leqslant \kappa \left( \big\|\widetilde{f}\big\|_{0}^{2} + \big\|\widetilde{g}\big\|_{\alpha/2}^{2} \right) \\ & \quad \leqslant 2\kappa \left( \|f\|_{0}^{2} + \|N\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu}}^{2} \|v\|_{0}^{2} + m_{1}^{2} \|g\|_{\alpha/2}^{2} + m_{1}^{2} \|N\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu}}^{2} \|u\|_{\alpha/2}^{2} \right) \\ & \quad \leqslant 2\kappa \left( \|f\|_{0}^{2} + \|N\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu}}^{2} \|v\|_{0}^{2} + m_{1}^{2} \|g\|_{\alpha/2}^{2} + 2m_{1}^{2} \|N\|_{\infty,\mathbb{C}_{\operatorname{Re}>\nu}}^{2} \Big( \varepsilon \|u\|_{\alpha}^{2} + \frac{1}{\varepsilon} \|u\|_{0}^{2} \Big) \right) \end{aligned}$$

for all $\varepsilon>0$, where in the last estimate we used

$$\|u\|_{\alpha/2}^{2} = \left\langle\partial_{t,\nu}^{\alpha/2}u,\partial_{t,\nu}^{\alpha/2}u\right\rangle_{0} = \left\langle u,\left(\partial_{t,\nu}^{\alpha/2}\right)^{*}\partial_{t,\nu}^{\alpha/2}u\right\rangle_{0} \lesssim \|u\|_{0}\,\|u\|_{\alpha}.$$

Hence, choosing $\varepsilon>0$ small enough and using that $\left(\overline{\partial_{t,\nu}M(\partial_{t,\nu})+A}\right)^{-1}$ is continuous from $L_{2,\nu}(\mathbb{R};H)$ into itself, we find $\kappa'>0$ such that

$$\|u\|_{\alpha}^{2} + \|Cu\|_{\alpha/2}^{2} + \|v\|_{\alpha/2}^{2} + \|C^{*}v\|_{0}^{2} \leqslant \kappa'\left(\|f\|_{0}^{2} + \|g\|_{\alpha/2}^{2}\right),$$

which establishes the assertion (using the density result in Proposition 15.3.2(b)).
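The interpolation step $\|u\|_{\alpha/2}^2 \lesssim \|u\|_0\|u\|_\alpha$ and the subsequent Peter–Paul estimate used in the proof can be illustrated on the Fourier side, where the fractional norms become weighted $\ell_2$-norms of the coefficients. The following is a minimal numerical sketch in a discrete model; the weights $w_k = k$ are a hypothetical stand-in for the symbol of $\partial_{t,\nu}$, so this illustrates the inequalities only, not the operators themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.6
# Fourier-side model: the H^s-norm becomes a weighted l2-norm of coefficients,
# ||u||_s^2 = sum_k w_k^(2s) |uhat_k|^2, with hypothetical weights w_k = k.
k = np.arange(1, 200)
w = k.astype(float)
uhat = rng.standard_normal(k.size)                 # coefficients of a random test element

norm0_sq = np.sum(uhat**2)                         # ||u||_0^2
norm_half_sq = np.sum(w**alpha * uhat**2)          # ||u||_{alpha/2}^2
norm_alpha_sq = np.sum(w**(2 * alpha) * uhat**2)   # ||u||_alpha^2

# Cauchy-Schwarz: ||u||_{alpha/2}^2 <= ||u||_0 * ||u||_alpha
assert norm_half_sq <= np.sqrt(norm0_sq * norm_alpha_sq) + 1e-9
# Peter-Paul: ||u||_{alpha/2}^2 <= eps*||u||_alpha^2 + (1/eps)*||u||_0^2
for eps in (0.1, 1.0, 10.0):
    assert norm_half_sq <= eps * norm_alpha_sq + norm0_sq / eps + 1e-9
```

The freedom to choose $\varepsilon$ small is exactly what allows the $\varepsilon\|u\|_\alpha^2$ term to be absorbed into the left-hand side of the final estimate.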

## **15.4 Comments**

The issue of maximal regularity (in Hilbert spaces for simplicity) is a priori formulated for equations of the type

$$u' + Au = f,$$

where $f$ lies in some $L_2((0,T);H)$ and $A$ is an unbounded operator in $H$. The question of maximal regularity then asks whether a solution $u$ to this equation exists and satisfies $u\in L_2((0,T);\operatorname{dom}(A))\cap H^1((0,T);H)$. In Hilbert spaces, whether or not this question can be answered in the affirmative relies solely on the properties of $A$. Hence, one shortens the question to whether $A$ 'has maximal regularity'. This situation is well understood: $A$ has maximal regularity if and only if $-A$ is the generator of a holomorphic semigroup, see [33, Theorem 2.2] and [105, Lemma 3.1]. One major example class is the class of operators that are defined with the help of forms, see [5] for an introductory text. Subsequently, the situation of time-dependent $A$ was studied: it has been shown in various contexts, under suitable conditions on the (smoothness of the) time-dependence of $A$, whether $A$ has maximal regularity or not; we refer to [2, 8, 30] for an account of possible conditions. The evolutionary equations case, addressed for the first time in [88] for the time-independent and in [123] for the non-autonomous case, is different in as much as the focus of the underlying rationale is shifted away from the spatial derivative operator towards the material law. The proof of Theorem 15.2.3 outlined above is the autonomous version of that in [123].

## **Exercises**

**Exercise 15.1** Consider the situation of Example 15.1.1; in particular, recall that


$$0 \in \rho\left(\partial_{t,\nu}\begin{pmatrix}1&0\\0&1\end{pmatrix} + \begin{pmatrix}0&\partial\\\partial&0\end{pmatrix}\right).$$

Show that there exist $f,g\in L_{2,\nu}(\mathbb{R};L_2(\mathbb{R}))$ such that for

$$\begin{pmatrix} u_f \\ v_f \end{pmatrix} := \left(\overline{\partial_{t,\nu}\begin{pmatrix}1&0\\0&1\end{pmatrix} + \begin{pmatrix}0&\partial\\\partial&0\end{pmatrix}}\right)^{-1}\begin{pmatrix}f\\0\end{pmatrix},$$

and

$$\begin{pmatrix} u_g \\ v_g \end{pmatrix} := \left(\overline{\partial_{t,\nu}\begin{pmatrix}1&0\\0&1\end{pmatrix} + \begin{pmatrix}0&\partial\\\partial&0\end{pmatrix}}\right)^{-1}\begin{pmatrix}0\\g\end{pmatrix}$$

we have $u_f, u_g \notin \operatorname{dom}(\partial_{t,\nu})$.

**Exercise 15.2** Let $u$ and $q$ be defined as in Example 15.1.2. Show that $u\in\operatorname{dom}(\partial_{t,\nu})$ and $q\in\operatorname{dom}(\mathrm{m})$ by explicit computation (not using Theorem 15.2.3). *Hint:* Find an ordinary differential equation satisfied by $u$, and use the explicit solution of this ordinary differential equation to show the claim.

**Exercise 15.3** Let $\alpha \geqslant 0$ and $\nu > 0$. Show that

$$\begin{aligned} \partial_{t,\nu} \colon \operatorname{dom}(\partial_{t,\nu}^{\lceil\alpha\rceil+1}) \subseteq H^{\alpha}_{\nu}(\mathbb{R}) &\to H^{\alpha}_{\nu}(\mathbb{R}),\\ u &\mapsto \partial_{t,\nu}u \end{aligned}$$

is densely defined and closable with continuously invertible closure.

**Exercise 15.4 (Local Maximal Regularity)** Let $H_0, H_1$ be Hilbert spaces and let $a\in L(H_1)$ be such that $\operatorname{Re} a\geqslant c$ for some $c>0$. Furthermore, let $C\colon\operatorname{dom}(C)\subseteq H_0\to H_1$ be densely defined and closed, and let $T>0$. Show that for every $f\in L_2((0,T);H_0)$ there exists a unique $u\in H^1((0,T);H_0)\cap L_2((0,T);\operatorname{dom}(C^*aC))$ with $u(0)=0$ such that

$$u'(t) + C^\* a Cu(t) = f(t) \quad (\text{a.e.} \ t \in (0, T)).$$


*Hint:* Reformulate the equation satisfied by $u$ as an evolutionary equation and apply Theorem 15.2.3.

**Exercise 15.5** Let $H_0, H_1$ be Hilbert spaces and let $a, b\in L(H_1)$ be such that $\operatorname{Re} b\geqslant c$ for some $c>0$. Furthermore, let $C\colon\operatorname{dom}(C)\subseteq H_0\to H_1$ be densely defined and closed. Let $T>0$. Define $\partial_0\colon\operatorname{dom}(\partial_0)\subseteq L_2((0,T);H_0)\to L_2((0,T);H_0)$ with $\partial_0 u = u'$ and

$$\operatorname{dom}(\partial_0) = \left\{ u \in H^1((0,T);H_0) \;;\; u(0) = 0 \right\}.$$

Show that for $u\in H^1((0,T);H_0)$ the point evaluation $u(0)$ is well-defined. Then show that $\partial_0 + C^*aC$ is continuously invertible and closed as an operator in $L_2((0,T);H_0)$.

*Hint:* For the first part use Theorem 12.1.3. For the second part, apply the result of Exercise 15.4 and show that, in the situation of the previous exercise, there exists $\kappa>0$ independent of $f$ and $u$ with

$$\|u\|_{H^1((0,T);H_0)\cap L_2((0,T);\operatorname{dom}(C^*aC))} \leqslant \kappa \|f\|_{L_2((0,T);H_0)}.$$

**Exercise 15.6** Recall Maxwell's equations from Theorem 6.2.8:

$$\partial_{t,\nu}\begin{pmatrix}\varepsilon&0\\0&\mu\end{pmatrix} + \begin{pmatrix}\sigma&0\\0&0\end{pmatrix} + \begin{pmatrix}0&-\operatorname{curl}\\\operatorname{curl}_0&0\end{pmatrix},$$

in $L_{2,\nu}\left(\mathbb{R};L_2(\Omega)^3\times L_2(\Omega)^3\right)$ with $\varepsilon,\mu,\sigma\colon\Omega\to\mathbb{R}^{3\times3}$ satisfying the following property: there exist $c>0$ and $\nu_0>0$ such that for all $\nu\geqslant\nu_0$ we have

$$\nu\varepsilon(x) + \operatorname{Re}\sigma(x) \geqslant c, \quad \mu(x)\geqslant c \quad (x\in\Omega).$$

By Theorem 6.2.8, for $\nu\geqslant\nu_0$ and $j_0\in L_{2,\nu}(\mathbb{R};L_2(\Omega)^3)$, there exists a unique pair $(E,H)\in L_{2,\nu}(\mathbb{R};L_2(\Omega)^6)$ such that

$$\begin{pmatrix}E\\H\end{pmatrix} := \left(\overline{\partial_{t,\nu}\begin{pmatrix}\varepsilon&0\\0&\mu\end{pmatrix} + \begin{pmatrix}\sigma&0\\0&0\end{pmatrix} + \begin{pmatrix}0&-\operatorname{curl}\\\operatorname{curl}_0&0\end{pmatrix}}\right)^{-1}\begin{pmatrix}j_0\\0\end{pmatrix}.$$

Assume there exist open sets $\Omega_0,\Omega_1\subseteq\Omega$ such that $\Omega_0\subseteq\Omega_1\subseteq\overline{\Omega_1}\subseteq\Omega$ and $\operatorname{spt} j_0(t)\subseteq\Omega_0$ for a.e. $t\in\mathbb{R}$. Moreover, assume $j_0\in H^{1/2}_{\nu}\left(\mathbb{R};L_2(\Omega_1)^3\right)$ and, furthermore, that $\varepsilon=0$ on $\Omega_1$. Show that $t\mapsto H(t)|_{\Omega_0}$ belongs to $H^{1}_{\nu}\left(\mathbb{R};L_2(\Omega_0)^3\right)$.

**Exercise 15.7** Let $H_0,H_1$ be Hilbert spaces and let $a,b\in L(H_1)$ be such that $\operatorname{Re} b\geqslant c$ for some $c>0$. Furthermore, let $C\colon\operatorname{dom}(C)\subseteq H_0\to H_1$ be densely defined and closed. Let $f\in L_2(\mathbb{R};H_0)$ with $\inf\operatorname{spt} f>-\infty$. Show that for $\nu>0$ large enough, there exists a unique $u\in H^2_\nu(\mathbb{R};H_0)\cap\operatorname{dom}\left(C^*(a+b\partial_{t,\nu})C\right)$ satisfying

$$\partial_{t,\nu}^2 u + C^*(a+b\partial_{t,\nu})Cu = f.$$

*Hint:* Use the substitution *w* := *∂t ,νu* and *q* := −*(a* + *b∂t ,ν)Cu* to reformulate the equation in question as an evolutionary equation. Then apply Theorem 15.2.3.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 16 Non-Autonomous Evolutionary Equations**

Previously, we focussed on evolutionary equations of the form

$$\left(\overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A}\right)U = F.$$

In this chapter, where we turn back to well-posedness issues, we replace the material law operator *M(∂t ,ν)*, which is invariant under translations in time, by an operator of the form

$$\mathcal{M} + \partial_{t,\nu}^{-1}\mathcal{N},$$

where both $\mathcal{M}$ and $\mathcal{N}$ are bounded linear operators on $L_{2,\nu}(\mathbb{R};H)$. Thus, the aim in the following is to provide criteria on $\mathcal{M}$ and $\mathcal{N}$ under which the operator

$$\partial_{t,\nu}\mathcal{M} + \mathcal{N} + A \tag{16.1}$$

is closable with continuously invertible closure in *<sup>L</sup>*2*,ν(*R; *H )*. In passing, we shall also replace the skew-selfadjointness of *A* by a suitable real part condition. Under additional conditions on *M* and *N* , we will also see that the solution operator is causal. Finally, we will put the autonomous version of Picard's theorem into perspective of the non-autonomous variant developed here.

In order to get a grip on the domain of the anticipated operator sum, we need to assume a commutator condition on the coefficient operators and the time-derivative. Thus, the replacement for the assumption that the coefficient be a "material law operator" (i.e., a bounded analytic function of the time-derivative) is to be evolutionary and to have a bounded commutator with the time-derivative (in a suitable sense). Since we proved in Theorem 8.2.1 that bounded analytic functions of the time-derivative are exactly the operators that are causal and autonomous (and evolutionary), one may view the following theorem as a direct generalisation of Picard's theorem in which "autonomous" is dropped.

## **16.1 Examples**

In principle, finding examples for the non-autonomous theory is relatively simple. The prototype case focusses on time-dependent multiplication operators. In order to illustrate our findings below, we shall revisit the heat equation and Maxwell's equations.

#### **Non-Autonomous Heat Equation**

Let $\Omega\subseteq\mathbb{R}^d$ be open and let $a\colon\mathbb{R}\times\Omega\to\mathbb{R}^{d\times d}$ be bounded and measurable. Assume there exists $c>0$ such that

$$
\operatorname{Re} a(t, x) \geqslant c \quad \text{(a.e. } (t, x) \in \mathbb{R} \times \Omega).
$$

Then the non-autonomous variant of the equations describing heat conduction reads

$$\begin{aligned} \partial_{t,\nu}\theta + \operatorname{div}_0 q &= Q, \\ q(t,x) &= -a(t,x)\operatorname{grad}\theta(t,x) \quad ((t,x)\in\mathbb{R}\times\Omega). \end{aligned}$$

The resulting block operator matrix

$$\partial_{t,\nu}\begin{pmatrix}1&0\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\0&a^{-1}\end{pmatrix} + \begin{pmatrix}0&\operatorname{div}_0\\\operatorname{grad}&0\end{pmatrix},$$

is then closable and continuously invertible in $L_{2,\nu}\left(\mathbb{R};L_2(\Omega)\times L_2(\Omega)^d\right)$ for all $\nu>0$ by Theorem 16.3.1.
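To see what this prototype looks like in practice, the following sketch discretises a one-dimensional analogue of the system above: homogeneous Dirichlet boundary conditions, finite differences in space, implicit Euler in time, with a time-dependent, uniformly positive coefficient. This is an illustration only, not the solution theory of Theorem 16.3.1; the grid sizes, the source term and the particular coefficient are ad hoc choices:

```python
import numpy as np

# 1D model problem: theta_t - d/dx( a(t,x) d/dx theta ) = Q on (0,1),
# homogeneous Dirichlet boundary conditions, implicit Euler in time.
nx, nt = 50, 200
L, T = 1.0, 1.0
dx, dt = L / (nx + 1), T / nt
xm = np.linspace(dx / 2, L - dx / 2, nx + 1)     # cell interfaces x_{i+1/2}

def a(t, x):
    # bounded, measurable coefficient with a(t,x) >= 1/2 > 0
    return 1.0 + 0.5 * np.sin(2 * np.pi * t) * np.cos(np.pi * x)

theta = np.zeros(nx)
Q = np.ones(nx)                                  # constant heat source
for k in range(nt):
    am = a((k + 1) * dt, xm)                     # coefficient at the new time level
    # assemble -div(a grad) as a symmetric positive definite tridiagonal matrix
    A = (np.diag(am[:-1] + am[1:]) + np.diag(-am[1:-1], 1) + np.diag(-am[1:-1], -1)) / dx**2
    theta = np.linalg.solve(np.eye(nx) + dt * A, theta + dt * Q)

assert np.all(theta > 0) and np.isfinite(theta).all()
```

The implicit Euler step $( \mathrm{id} + \Delta t\,A(t_{k+1}))\theta_{k+1} = \theta_k + \Delta t\,Q$ mirrors the structure of the block operator: the time-dependence enters only through the multiplication operator assembled anew in each step.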

#### **Non-Autonomous Maxwell's Equations**

Let $\Omega\subseteq\mathbb{R}^3$ be open and let $\varepsilon,\mu,\sigma\colon\mathbb{R}\times\Omega\to\mathbb{R}^{3\times3}$ be bounded and measurable. Assume that $\varepsilon$ and $\mu$ are Lipschitz continuous with respect to the temporal variable, uniformly in space; that is, there exists $L>0$ such that

$$\|\varepsilon(s,x)-\varepsilon(t,x)\|_{\mathbb{R}^{3\times3}} + \|\mu(s,x)-\mu(t,x)\|_{\mathbb{R}^{3\times3}} \leqslant L|t-s| \quad (s,t\in\mathbb{R},\; x\in\Omega).$$

Assume $\varepsilon(t,x)^{\top}=\varepsilon(t,x)$ and $\mu(t,x)^{\top}=\mu(t,x)$ for all $t\in\mathbb{R}$, $x\in\Omega$. Furthermore, assume there exist $c,\nu_0>0$ such that for all $\nu\geqslant\nu_0$ we have

$$\mu(t,x)\geqslant c \quad\text{and}\quad \nu\varepsilon(t,x) + \tfrac{1}{2}\varepsilon'(t,x) + \operatorname{Re}\sigma(t,x) \geqslant c \quad ((t,x)\in\mathbb{R}\times\Omega).$$

Then it will not be difficult to see that the operator

$$\partial_{t,\nu}\begin{pmatrix}\varepsilon(\mathrm{m}_t,\mathrm{m}_x)&0\\0&\mu(\mathrm{m}_t,\mathrm{m}_x)\end{pmatrix} + \begin{pmatrix}\sigma(\mathrm{m}_t,\mathrm{m}_x)&0\\0&0\end{pmatrix} + \begin{pmatrix}0&-\operatorname{curl}\\\operatorname{curl}_0&0\end{pmatrix},$$

is closable and continuously invertible in $L_{2,\nu}\left(\mathbb{R};L_2(\Omega)^3\times L_2(\Omega)^3\right)$ for all $\nu\geqslant\nu_0$ by Theorem 16.3.1; see also Exercise 16.1.

## **16.2 Non-Autonomous Picard's Theorem—The ODE Case**

Let *H* be a Hilbert space and *ν >* 0. In this section we will focus on the ODE-case first, which is modelled by *A* = 0 in (16.1).

**Theorem 16.2.1** *Let $\mathcal{M},\mathcal{M}',\mathcal{N}\in L(L_{2,\nu}(\mathbb{R};H))$ with $\mathcal{M}$, $\mathcal{N}$ causal and $\operatorname{Re}\mathcal{M}\geqslant 0$. Assume*

$$\mathcal{M}\partial_{t,\nu} \subseteq \partial_{t,\nu}\mathcal{M} - \mathcal{M}'$$

*and*

$$\operatorname{Re}\left\langle\phi,\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)\phi\right\rangle \geqslant c\left\langle\phi,\phi\right\rangle$$

*for some $c>0$ and all $\phi\in\operatorname{dom}\left(\partial_{t,\nu}\mathcal{M}\right)$. Then*

$$0 \in \rho\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right),$$

$\left\|\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{-1}\right\| \leqslant 1/c$, *and* $\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{-1}$ *is causal. Moreover,*

$$\operatorname{Re}\left\langle\phi,\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{*}\phi\right\rangle \geqslant c\left\langle\phi,\phi\right\rangle \quad \left(\phi\in\operatorname{dom}\left(\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{*}\right)\right).$$

*Remark 16.2.2* The only non-trivial condition in Theorem 16.2.1 is the commutator condition

$$\mathcal{M}\partial_{t,\nu} \subseteq \partial_{t,\nu}\mathcal{M} - \mathcal{M}'.$$

This condition is satisfied for multiplication operators induced by a Lipschitz continuous function, see also Exercise 16.1.
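For the multiplication operator $(\mathcal{M}u)(t) = a(t)u(t)$ with $a$ Lipschitz continuous, the commutator condition holds with $\mathcal{M}'$ given by multiplication by $a'$; formally this is the product rule $a u' = (au)' - a'u$. The following is a finite-difference check of this identity on smooth samples (the time-derivative is modelled here by a plain central difference, so this illustrates the algebra, not the operator $\partial_{t,\nu}$ itself):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
a = 1.0 + 0.25 * np.sin(3 * t)            # Lipschitz continuous multiplier
a_prime = 0.75 * np.cos(3 * t)            # its (bounded) derivative
u = np.exp(-t) * np.cos(5 * t)            # smooth test function

d = lambda g: np.gradient(g, dt)          # central-difference time-derivative
# commutator identity: M d/dt u = d/dt (M u) - M' u
lhs = a * d(u)
rhs = d(a * u) - a_prime * u
assert np.max(np.abs(lhs - rhs)) < 1e-3
```

The boundedness of $a'$ is exactly what makes the commutator $\mathcal{M}'$ a bounded operator, which is the point of the Lipschitz assumption.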

We leave the proof of $0\in\rho\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)$ and the norm estimate as Exercise 16.4. For the proof of causality we need some preparations. The first result will also be of some value in the next chapter; it deals with a reformulation of causality for resolvents.

**Proposition 16.2.3** *Let $\mathcal{B}\colon\operatorname{dom}(\mathcal{B})\subseteq L_{2,\nu}(\mathbb{R};H)\to L_{2,\nu}(\mathbb{R};H)$ be linear, $0\in\rho(\mathcal{B})$, and assume that there exists $c>0$ such that for all $\phi\in\operatorname{dom}(\mathcal{B})$ we have*

$$\operatorname{Re}\left\langle\phi,\mathcal{B}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \geqslant c\left\langle\phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)}.$$

*Then the following two statements are equivalent:*

(i) $\mathcal{B}^{-1}$ *is causal.*

(ii) *For all* $a\in\mathbb{R}$ *and* $\phi\in\operatorname{dom}(\mathcal{B})$ *we have*

$$\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathcal{B}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \geqslant c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)}.$$

*Proof* (ii)⇒(i): Let $f\in L_{2,\nu}(\mathbb{R};H)$ and $a\in\mathbb{R}$ with $\operatorname{spt} f\subseteq[a,\infty)$. Then, using (ii), for $\phi:=\mathcal{B}^{-1}f\in\operatorname{dom}(\mathcal{B})$ we have

$$0 = \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi, f\right\rangle_{L_{2,\nu}(\mathbb{R};H)} = \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathcal{B}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \geqslant c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} = c\left\|\mathbb{1}_{(-\infty,a]}\phi\right\|_{L_{2,\nu}(\mathbb{R};H)}^{2},$$

which yields $\operatorname{spt}\phi\subseteq[a,\infty)$. Thus, $\mathcal{B}^{-1}$ is causal.

(i)⇒(ii): Let $a\in\mathbb{R}$, $\phi\in\operatorname{dom}(\mathcal{B})$, and $f:=\mathcal{B}\phi$. Then $\phi_1:=\mathcal{B}^{-1}\mathbb{1}_{(-\infty,a]}f\in\operatorname{dom}(\mathcal{B})$ and, using causality of $\mathcal{B}^{-1}$, we obtain

$$\mathbb{1}_{(-\infty,a]}\phi_1 = \mathbb{1}_{(-\infty,a]}\mathcal{B}^{-1}\mathbb{1}_{(-\infty,a]}f = \mathbb{1}_{(-\infty,a]}\mathcal{B}^{-1}f = \mathbb{1}_{(-\infty,a]}\phi.$$

We thus compute

$$\begin{aligned} \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathcal{B}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} &= \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi_1, f\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &= \operatorname{Re}\left\langle\phi_1,\mathcal{B}\phi_1\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \geqslant c\left\langle\phi_1,\phi_1\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &\geqslant c\left\|\mathbb{1}_{(-\infty,a]}\phi_1\right\|_{L_{2,\nu}(\mathbb{R};H)}^{2} = c\left\|\mathbb{1}_{(-\infty,a]}\phi\right\|_{L_{2,\nu}(\mathbb{R};H)}^{2} \\ &= c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)}, \end{aligned}$$

where in the last estimate we used that multiplication by $\mathbb{1}_{(-\infty,a]}$ is a contraction.

**Lemma 16.2.4** *Let $\mathcal{B}\colon\operatorname{dom}(\mathcal{B})\subseteq L_{2,\nu}(\mathbb{R};H)\to L_{2,\nu}(\mathbb{R};H)$ be linear and let $\lambda,\mu\in\rho(\mathcal{B})$ be contained in the same connected component of $\rho(\mathcal{B})$. Assume that $(\mu-\mathcal{B})^{-1}$ is causal. Then $(\lambda-\mathcal{B})^{-1}$ is causal.*

*Proof* Let *Z* be the connected component of *ρ(B)* shared by both *μ* and *λ*. Define

$$M := \left\{ \eta \in Z \;;\; \forall a \in \mathbb{R}\colon\; \mathbb{1}_{(-\infty,a]}(\mathrm{m})(\eta-\mathcal{B})^{-1}\mathbb{1}_{(-\infty,a]}(\mathrm{m}) = \mathbb{1}_{(-\infty,a]}(\mathrm{m})(\eta-\mathcal{B})^{-1} \right\}.$$

Then $\mu\in M$. Next, we show that $M$ is open and closed in $Z$. For this, let $\eta_0\in M$. By Proposition 2.4.1, we have $B(\eta_0,r)\subseteq\rho(\mathcal{B})$ with $r := 1/\big\|(\eta_0-\mathcal{B})^{-1}\big\|$. As $B(\eta_0,r)$ is connected, we infer $B(\eta_0,r)\subseteq Z$. Furthermore, from Proposition 2.4.1 we infer for $\eta\in B(\eta_0,r)$ that

$$(\eta - \mathcal{B})^{-1} = \sum\_{k=0}^{\infty} (\eta\_0 - \eta)^k ((\eta\_0 - \mathcal{B})^{-1})^{k+1}.$$

Hence, since *<sup>η</sup>*<sup>0</sup> <sup>∈</sup> *<sup>M</sup>*, we obtain for all *<sup>a</sup>* <sup>∈</sup> <sup>R</sup>,

$$\begin{aligned} \mathbb{1}_{(-\infty,a]}(\mathrm{m})(\eta-\mathcal{B})^{-1} &= \mathbb{1}_{(-\infty,a]}(\mathrm{m}) \sum_{k=0}^{\infty}(\eta_0-\eta)^k\big((\eta_0-\mathcal{B})^{-1}\big)^{k+1} \\ &= \sum_{k=0}^{\infty}(\eta_0-\eta)^k\,\mathbb{1}_{(-\infty,a]}(\mathrm{m})\big((\eta_0-\mathcal{B})^{-1}\big)^{k+1} \\ &= \sum_{k=0}^{\infty}(\eta_0-\eta)^k\,\mathbb{1}_{(-\infty,a]}(\mathrm{m})\big((\eta_0-\mathcal{B})^{-1}\big)^{k+1}\mathbb{1}_{(-\infty,a]}(\mathrm{m}) \\ &= \mathbb{1}_{(-\infty,a]}(\mathrm{m})\sum_{k=0}^{\infty}(\eta_0-\eta)^k\big((\eta_0-\mathcal{B})^{-1}\big)^{k+1}\mathbb{1}_{(-\infty,a]}(\mathrm{m}) \\ &= \mathbb{1}_{(-\infty,a]}(\mathrm{m})(\eta-\mathcal{B})^{-1}\mathbb{1}_{(-\infty,a]}(\mathrm{m}). \end{aligned}$$

Thus, *B (η*0*, r)* ⊆ *M* and *M* is open in *Z*. Next, let *(ηn)n* be a sequence in *M*, convergent to some *<sup>η</sup>* <sup>∈</sup> *<sup>Z</sup>*. For *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> the equality

$$\mathbb{1}\_{\left( -\infty, a\right]}(m) (\eta\_n - \mathcal{B})^{-1} = \mathbb{1}\_{\left( -\infty, a\right]}(m) (\eta\_n - \mathcal{B})^{-1} \mathbb{1}\_{\left( -\infty, a\right]}(m) \quad (a \in \mathbb{R})$$

as well as the continuity of *(*· − *<sup>B</sup>)*−<sup>1</sup> imply that *<sup>η</sup>* <sup>∈</sup> *<sup>M</sup>*. Hence, *<sup>M</sup>* is closed. We infer *M* = *Z* from the connectedness of *Z* and, thus, *λ* ∈ *M*.
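A finite-dimensional analogue of the argument just given: on a discretised time axis, causal operators correspond to lower triangular matrices, and the Neumann series shows that lower triangularity of the resolvent propagates from $\eta_0$ to nearby $\eta$, since every partial sum is again lower triangular. A sketch under this (assumed) discrete model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# causal <-> lower triangular on a discretised time axis
B = np.tril(rng.standard_normal((n, n)), k=-1) + np.eye(n)
eta0 = 3.0                                   # eta0 - B has diagonal 2, hence invertible
R0 = np.linalg.inv(eta0 * np.eye(n) - B)     # lower triangular resolvent at eta0
eta = eta0 - 0.5 / np.linalg.norm(R0, 2)     # stay inside the radius of convergence

# Neumann series (eta - B)^{-1} = sum_k (eta0 - eta)^k (R0)^{k+1}
R = sum((eta0 - eta) ** k * np.linalg.matrix_power(R0, k + 1) for k in range(80))
assert np.allclose(R, np.linalg.inv(eta * np.eye(n) - B))
# each summand is lower triangular (causal), hence so is the resolvent at eta
assert np.allclose(R, np.tril(R))
```

The connectedness argument of the lemma then carries this local statement along the whole component of the resolvent set.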

**Lemma 16.2.5** *Let $\nu\in\mathbb{R}$ and let $\mathcal{M}\in L(L_{2,\nu}(\mathbb{R};H))$ be causal. If there exists $c>0$ such that*

$$\operatorname{Re}\langle \phi, \mathcal{M}\phi \rangle\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R};H)} \geqslant c \langle \phi, \phi \rangle\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R};H)} \quad (\phi \in L\_{2,\boldsymbol{\nu}}(\mathbb{R};H)),$$

*then <sup>M</sup>*−<sup>1</sup> *is causal.*

*Proof* We have $0\in\rho(\mathcal{M})$ by Proposition 6.2.3(b). In particular, using causality of $\mathcal{M}$, we obtain for all $a\in\mathbb{R}$ and $\phi\in L_{2,\nu}(\mathbb{R};H)$ that

$$\begin{aligned} \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathcal{M}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} &= \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathbb{1}_{(-\infty,a]}\mathcal{M}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &= \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathbb{1}_{(-\infty,a]}\mathcal{M}\mathbb{1}_{(-\infty,a]}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &= \operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathcal{M}\mathbb{1}_{(-\infty,a]}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &\geqslant c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\mathbb{1}_{(-\infty,a]}\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \\ &= c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)}, \end{aligned}$$

which yields causality of $\mathcal{M}^{-1}$ by Proposition 16.2.3 applied to $\mathcal{B}=\mathcal{M}$.
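In the discrete model in which causal operators are lower triangular matrices, both the truncation criterion of Proposition 16.2.3 and the conclusion of Lemma 16.2.5 can be checked directly: for a lower triangular $M$ whose symmetric part is bounded below by $c$, the truncated accretivity inequality holds for every cut-off, and the inverse is again lower triangular. A sketch (the diagonal shift by $\|L\|$ is merely one convenient way to enforce $\operatorname{Re} M \geqslant c$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 10, 1.0
L = np.tril(rng.standard_normal((n, n)), k=-1)      # strictly lower = strictly causal part
# shift the diagonal by ||L|| so that the symmetric part is bounded below by c
M = (c + np.linalg.norm(L, 2)) * np.eye(n) + L

phi = rng.standard_normal(n)
for a in range(n + 1):
    P = np.diag((np.arange(n) < a).astype(float))   # truncation 1_{(-infty, a]}
    # truncated accretivity inequality (criterion of Proposition 16.2.3)
    assert (P @ phi) @ (M @ phi) >= c * ((P @ phi) @ phi) - 1e-10

Minv = np.linalg.inv(M)
assert np.allclose(Minv, np.tril(Minv))             # the inverse is again causal
```

The key algebraic fact mirrored here is $\mathbb{1}_{(-\infty,a]}\mathcal{M} = \mathbb{1}_{(-\infty,a]}\mathcal{M}\mathbb{1}_{(-\infty,a]}$, which for matrices reads $PM = PMP$ for every truncation projection $P$.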

**Lemma 16.2.6** *Let $\mathcal{M},\mathcal{N},\mathcal{M}'\in L(L_{2,\nu}(\mathbb{R};H))$. Assume*

$$\mathcal{M}\partial_{t,\nu} \subseteq \partial_{t,\nu}\mathcal{M} - \mathcal{M}'$$

*and*

$$\operatorname{Re}\left\langle\phi,\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)\phi\right\rangle \geqslant c\left\langle\phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\partial_{t,\nu})).$$

*Then*

$$Z := \left\{ \eta \in [0,\infty) \;;\; \left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)^{-1} \text{ is causal} \right\}$$

*is closed.*

*Proof* As mentioned before, the proof of $0\in\rho\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)$ for $\eta\in[0,\infty)$ is postponed to Exercise 16.4. For all $\eta\in[0,\infty)$ we have

$$\operatorname{Re}\left\langle\phi,\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)\phi\right\rangle \geqslant c\left\langle\phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\partial_{t,\nu})).$$

Note that it suffices for this inequality to hold for all $\phi\in\operatorname{dom}(\partial_{t,\nu})$ in order for it to hold for all $\phi\in\operatorname{dom}(\partial_{t,\nu}(\mathcal{M}+\eta))$. Indeed, this is a consequence of $\operatorname{dom}(\partial_{t,\nu})$ being a core for $\partial_{t,\nu}(\mathcal{M}+\eta)$, which is easily seen (see also Lemma 16.3.3). Hence, by Proposition 16.2.3, $\eta\in Z$ if and only if

$$\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)\phi\right\rangle \geqslant c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\partial_{t,\nu})).$$

Before we show closedness of $Z$, we briefly recall that integration by parts yields for all $a\in\mathbb{R}$

$$\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\partial_{t,\nu}\phi\right\rangle = \frac{1}{2}\left\|\phi(a)\right\|^2\mathrm{e}^{-2\nu a} + \nu\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\partial_{t,\nu})).$$

In order to show that $Z$ is closed, let $(\eta_n)_n$ be a sequence in $Z$, convergent to some $\eta\in[0,\infty)$. Then we compute for all $a\in\mathbb{R}$, $\phi\in\operatorname{dom}(\partial_{t,\nu})$ and $n\in\mathbb{N}$

$$\begin{aligned} &\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)\phi\right\rangle \\ &\quad=\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\left(\partial_{t,\nu}(\mathcal{M}+\eta_n)+\mathcal{N}\right)\phi\right\rangle +\operatorname{Re}\left\langle\mathbb{1}_{(-\infty,a]}\phi,\partial_{t,\nu}(\eta-\eta_n)\phi\right\rangle \\ &\quad\geqslant c\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle +\frac{1}{2}(\eta-\eta_n)\left\|\phi(a)\right\|^{2}\mathrm{e}^{-2\nu a} + (\eta-\eta_n)\nu\left\langle\mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle. \end{aligned}$$

Letting *n* → ∞, we infer

$$\operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)\phi\right\rangle \geqslant c\left\langle \mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle$$

for $\phi\in\operatorname{dom}(\partial_{t,\nu})$. Hence, $\eta\in Z$.

*Proof of Theorem 16.2.1* Keeping Exercise 16.4 in mind, we only need to show that the solution operator $(\partial_{t,\nu}\mathcal{M}+\mathcal{N})^{-1}$ is causal.

By Lemma 16.2.6, it suffices to show that for all *η >* 0,

$$\left(\partial_{t,\nu}(\mathcal{M}+\eta)+\mathcal{N}\right)^{-1}$$

is causal. Hence, we may assume that $0\in\rho(\mathcal{M})$ and, using Lemma 16.2.5, that $\mathcal{M}^{-1}$ is causal. In this situation, it remains to show that

$$\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{-1}=\mathcal{M}^{-1}\left(\partial_{t,\nu}+\mathcal{N}\mathcal{M}^{-1}\right)^{-1}$$

is causal. As $\mathcal{M}^{-1}$ is causal, it furthermore suffices to show causality of

$$\left(\partial_{t,\nu}+\mathcal{K}\right)^{-1},$$

where $\mathcal{K}:=\mathcal{N}\mathcal{M}^{-1}$ is causal. Using $\operatorname{Re}\mathcal{M}\geqslant 0$ and the inequality assumed for $\partial_{t,\nu}\mathcal{M}+\mathcal{N}$, we conclude that $(\partial_{t,\nu}+\mu+\mathcal{K})$ is continuously invertible for all $\mu\geqslant 0$. Since $\partial_{t,\nu}^{-1}$ is causal, Lemma 16.2.4 yields that $(\partial_{t,\nu}+\mu)^{-1}$ is causal. From $\operatorname{Re}(\partial_{t,\nu}+\mu)\geqslant\nu+\mu$ it follows that $\big\|(\partial_{t,\nu}+\mu)^{-1}\big\|\leqslant 1/(\nu+\mu)$. Hence, we find $\mu>0$ such that $\big\|(\partial_{t,\nu}+\mu)^{-1}\mathcal{K}\big\|<1$. Thus,

$$\begin{aligned} \left(\partial_{t,\nu}+\mu+\mathcal{K}\right)^{-1} &= \left(1+(\partial_{t,\nu}+\mu)^{-1}\mathcal{K}\right)^{-1}(\partial_{t,\nu}+\mu)^{-1} \\ &= \sum_{k=0}^{\infty}(-1)^{k}\left((\partial_{t,\nu}+\mu)^{-1}\mathcal{K}\right)^{k}(\partial_{t,\nu}+\mu)^{-1} \end{aligned}$$

is causal as a composition of causal operators. Finally, Lemma 16.2.4 implies causality of $(\partial_{t,\nu}+\mathcal{K})^{-1}$, as desired.
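The mechanism behind the Neumann series step can be illustrated in a finite-dimensional sketch (an assumption for illustration, not part of the text): after discretising time, causal operators correspond to lower-triangular matrices, and the Neumann series for $(1+\mathcal{K})^{-1}$ consists of powers of a lower-triangular matrix, so the inverse is again lower triangular, hence causal.

```python
import numpy as np

# Hypothetical discretisation: a causal operator becomes a lower-triangular
# matrix K.  We scale K so that ||K|| < 1, guaranteeing convergence of the
# Neumann series for (1 + K)^{-1}.
rng = np.random.default_rng(0)
n = 6
K = np.tril(rng.standard_normal((n, n)))
K *= 0.4 / np.linalg.norm(K, 2)           # ensure ||K||_2 < 1

inv = np.linalg.inv(np.eye(n) + K)        # closed form of the Neumann series

# the strictly upper-triangular part of the inverse vanishes: causality
assert np.allclose(np.triu(inv, k=1), 0.0)
```

Each power of a lower-triangular matrix is lower triangular, so every partial sum of the series, and therefore its limit, is causal in this discrete picture.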

## **16.3 Non-Autonomous Picard's Theorem—The PDE Case**

Let *H* be a Hilbert space. In Sect. 4.2, we have already discussed the notion of uniformly Lipschitz continuous mappings. Here we concentrate on linear uniformly Lipschitz continuous mappings, which we call *evolutionary* as a shorthand:

**Definition** Let $\nu_0\in\mathbb{R}$. A mapping

$$\mathcal{M}\colon S_{\mathrm{c}}(\mathbb{R};H)\to\bigcap_{\nu\geqslant\nu_0}L_{2,\nu}(\mathbb{R};H)$$

is called *evolutionary (at $\nu_0$)* if it is linear and uniformly Lipschitz continuous (at $\nu_0$); that is, for all $\nu\geqslant\nu_0$, the mapping $\mathcal{M}\colon S_{\mathrm{c}}(\mathbb{R};H)\subseteq L_{2,\nu}(\mathbb{R};H)\to L_{2,\nu}(\mathbb{R};H)$ is linear and continuous. Moreover, its continuous extension to the whole of $L_{2,\nu}(\mathbb{R};H)$, denoted by $\mathcal{M}^{\nu}$, satisfies $\sup_{\nu\geqslant\nu_0}\|\mathcal{M}^{\nu}\|<\infty$.

The set of all evolutionary mappings is defined as

$$S\_{\mathrm{ev}}(H,\nu\_0) := \left\{ \mathcal{M} \colon S\_{\mathrm{c}}(\mathbb{R}; H) \to \bigcap\_{\nu \geqslant \nu\_0} L\_{2,\nu}(\mathbb{R}; H) \text{ ; } \mathcal{M} \text{ evolutionary at } \nu\_0 \right\}.$$

We have seen that material law operators are evolutionary (see Theorem 5.3.6 and the concluding lines of its proof). In the non-autonomous version of Picard's theorem (cf. Theorem 6.2.1), evolutionary mappings will replace the material law operators. Hence, we allow for an explicit time-dependence in the coefficients.
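A simple class of examples, included here as an illustrative sketch, is given by multiplication operators in time: for $V\in L_\infty(\mathbb{R})$, setting $(\mathcal{M}\phi)(t):=V(t)\phi(t)$ defines an evolutionary mapping at every $\nu_0\in\mathbb{R}$, since for each $\nu\geqslant\nu_0$

```latex
\|\mathcal{M}\phi\|_{L_{2,\nu}(\mathbb{R};H)}^{2}
  = \int_{\mathbb{R}} \|V(t)\phi(t)\|^{2}\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t
  \leqslant \|V\|_{\infty}^{2}\,\|\phi\|_{L_{2,\nu}(\mathbb{R};H)}^{2},
```

so that $\sup_{\nu\geqslant\nu_0}\|\mathcal{M}^{\nu}\|\leqslant\|V\|_{\infty}<\infty$; compare also the operators $V(\mathrm{m})$ appearing in Exercise 16.1.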

Recall from Lemma 4.2.5(a) that $\mathcal{M}^{\nu}$ is causal, and that $\mathcal{M}^{\nu}$ is independent of $\nu$ in the sense of Lemma 4.2.5(c).

The non-autonomous version of Picard's theorem now reads as follows.

**Theorem 16.3.1** *Let $\mu\in\mathbb{R}$, $\mathcal{M},\mathcal{M}',\mathcal{N}\in S_{\mathrm{ev}}(H,\mu)$, $\operatorname{Re}\mathcal{M}^{\nu}\geqslant 0$ for all $\nu\geqslant\mu$ and $A\colon\operatorname{dom}(A)\subseteq H\to H$ be closed and densely defined. Assume that there exists $c>0$ such that the following conditions are satisfied:*

(a) $\mathcal{M}^{\mu}\partial_{t,\mu}\subseteq\partial_{t,\mu}\mathcal{M}^{\mu}-(\mathcal{M}')^{\mu}$,

(b) *for all $\nu\geqslant\mu$ and $\phi\in\operatorname{dom}(\partial_{t,\nu})$ we have*

$$\operatorname{Re}\left\langle \phi,\left(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}\right)\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)} \geqslant c\left\langle \phi,\phi\right\rangle_{L_{2,\nu}(\mathbb{R};H)},$$

(c) *for all $x\in\operatorname{dom}(A)$ and $y\in\operatorname{dom}(A^{*})$ we have*

$$\operatorname{Re}\left\langle x,Ax\right\rangle_{H}\geqslant 0 \text{ and } \operatorname{Re}\left\langle y,A^{*}y\right\rangle_{H}\geqslant 0.$$

*Then for all $\nu\geqslant\max\{\mu,0\}$, $\nu\neq 0$, the operator*

$$\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A\colon H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)\subseteq L_{2,\nu}(\mathbb{R};H)\to L_{2,\nu}(\mathbb{R};H)$$

*is closable and its closure is continuously invertible. Moreover, with $S^{\nu}\in L(L_{2,\nu}(\mathbb{R};H))$ being the inverse of this closure, we have $\|S^{\nu}\|_{L(L_{2,\nu}(\mathbb{R};H))}\leqslant 1/c$, $S^{\nu}$ is eventually independent of $\nu$, and $S^{\nu}$ is causal.*

#### *Remark 16.3.2*

(a) It is a consequence of Theorem 16.3.1 that the mapping

$$\mathcal{S}\colon S_{\mathrm{c}}(\mathbb{R};H)\to\bigcap_{\nu\geqslant\mu}L_{2,\nu}(\mathbb{R};H)$$

$$f\mapsto\left(\overline{\partial_{t,\mu}\mathcal{M}^{\mu}+\mathcal{N}^{\mu}+A}\right)^{-1}f$$

is evolutionary.

(b) It will follow from the techniques used in the proof of Theorem 16.3.1 that a similar result holds without the assumption of evolutionarity for the operator coefficients. We refer to the formulation in Exercise 16.5 and ask the reader to provide a proof.

The proof of the non-autonomous version of Picard's theorem requires some preparation. Since the theory is still linear, the well-posedness result is, similarly to the autonomous version of Picard's theorem, based on Proposition 6.3.1. Furthermore, we need some results on the interaction of the time derivative and the non-autonomous coefficients. Thus, for the next lemma, we introduce the commutator

$$[A,B] := AB - BA$$

for two linear operators *A* and *B* on its natural domain

$$\text{dom}(AB) \cap \text{dom}(BA).$$

**Lemma 16.3.3** *Let $\nu\in\mathbb{R}$, $\mathcal{M},\mathcal{M}',\mathcal{N}\in S_{\mathrm{ev}}(H,\nu)$. For $\varepsilon>0$ small enough, denote $S_{\varepsilon}:=(1+\varepsilon\partial_{t,\nu})^{-1}$.*

(a) *If $\mathcal{M}^{\nu}\partial_{t,\nu}\subseteq\partial_{t,\nu}\mathcal{M}^{\nu}-(\mathcal{M}')^{\nu}$, then for all $\varepsilon>0$ we have*

$$\overline{[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]}=\varepsilon\partial_{t,\nu}S_{\varepsilon}(\mathcal{M}')^{\nu}S_{\varepsilon}\in L(L_{2,\nu}(\mathbb{R};H)).$$

*In this case, we also have that $[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]\to 0$ as $\varepsilon\to 0$ in the strong operator topology of $L(L_{2,\nu}(\mathbb{R};H))$.*

(b) *We have that $[\mathcal{N}^{\nu},S_{\varepsilon}]\to 0$ as $\varepsilon\to 0$ in the strong operator topology of $L(L_{2,\nu}(\mathbb{R};H))$.*

*Proof*

(a) Let *ε >* 0 and *φ* ∈ dom*(∂t ,ν)*. Then

$$\begin{split} [\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]\phi &= \partial_{t,\nu}(\mathcal{M}^{\nu}S_{\varepsilon}-S_{\varepsilon}\mathcal{M}^{\nu})\phi \\ &= \partial_{t,\nu}S_{\varepsilon}\big((1+\varepsilon\partial_{t,\nu})\mathcal{M}^{\nu}-\mathcal{M}^{\nu}(1+\varepsilon\partial_{t,\nu})\big)S_{\varepsilon}\phi \\ &= \varepsilon\partial_{t,\nu}S_{\varepsilon}(\mathcal{M}')^{\nu}S_{\varepsilon}\phi, \end{split}$$

which shows the first equality. Since $S_{\varepsilon}\to 1$ as $\varepsilon\to 0$ in the strong operator topology and $\varepsilon\partial_{t,\nu}S_{\varepsilon}=(1-S_{\varepsilon})\to 0$ as $\varepsilon\to 0$ in the strong operator topology, we infer the convergence statement in (a).

(b) This statement follows from *Sε* → 1 in the strong operator topology.
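The convergence $S_{\varepsilon}\to 1$ can be sketched numerically in a discretised setting (the grid, test function, and backward-difference approximation of the time derivative are hypothetical choices made for illustration):

```python
import numpy as np

# Approximate the time derivative by a causal backward difference D on a
# grid, so that S_eps = (1 + eps*D)^{-1}.  The strong convergence
# S_eps -> 1 appears as S_eps(phi) -> phi when eps -> 0.
h = 0.01
t = np.arange(0.0, 1.0, h)
phi = np.sin(2 * np.pi * t)

n = t.size
D = (np.eye(n) - np.eye(n, k=-1)) / h     # backward difference, causal

def S(eps):
    return np.linalg.inv(np.eye(n) + eps * D)

errs = [np.linalg.norm(S(eps) @ phi - phi) * np.sqrt(h)
        for eps in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]        # error decreases as eps -> 0
```

The error is roughly proportional to $\varepsilon\|\phi'\|$, mirroring the fact that $1-S_{\varepsilon}=\varepsilon\partial_{t,\nu}S_{\varepsilon}$ acts like $\varepsilon$ times a derivative on smooth functions.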

**Lemma 16.3.4** *Let $\mu\in\mathbb{R}$, $\mathcal{M},\mathcal{M}',\mathcal{N}\in S_{\mathrm{ev}}(H,\mu)$ and $A\colon\operatorname{dom}(A)\subseteq H\to H$ be closed and densely defined. Assume $\mathcal{M}^{\mu}\partial_{t,\mu}\subseteq\partial_{t,\mu}\mathcal{M}^{\mu}-(\mathcal{M}')^{\mu}$. Then for all $\nu\geqslant\mu$*

$$(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}=\overline{(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu})^{*}+A^{*}}=\overline{(\mathcal{M}^{\nu})^{*}\partial_{t,\nu}^{*}+(\mathcal{N}^{\nu})^{*}+A^{*}}.$$

*Proof* Let $\nu\geqslant\mu$. It is not difficult to see that $\mathcal{M}^{\mu}\partial_{t,\mu}\subseteq\partial_{t,\mu}\mathcal{M}^{\mu}-(\mathcal{M}')^{\mu}$ implies $\mathcal{M}^{\nu}\partial_{t,\nu}\subseteq\partial_{t,\nu}\mathcal{M}^{\nu}-(\mathcal{M}')^{\nu}$; see Exercise 16.2.

Let $g\in\operatorname{dom}\big((\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}\big)$. For $\varepsilon>0$ small enough, we define $S_{\varepsilon}:=(1+\varepsilon\partial_{t,\nu})^{-1}$ as well as $g_{\varepsilon}:=S_{\varepsilon}^{*}g$. For $u\in\operatorname{dom}(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)$ we compute

$$\begin{aligned} &\left\langle(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)u,g_{\varepsilon}\right\rangle \\ &=\left\langle S_{\varepsilon}(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)u,g\right\rangle \\ &=\left\langle(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)S_{\varepsilon}u,g\right\rangle-\left\langle[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]u+[\mathcal{N}^{\nu},S_{\varepsilon}]u,g\right\rangle. \end{aligned} \tag{16.2}$$

We read off that $g_{\varepsilon}\in\operatorname{dom}\big((\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}\big)$ and

$$\begin{aligned} &(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g_{\varepsilon} \\ &\qquad=S_{\varepsilon}^{*}(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g-[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]^{*}g-[\mathcal{N}^{\nu},S_{\varepsilon}]^{*}g. \end{aligned}$$

By Lemma 9.3.3, we infer that *gε* → *g* weakly as *ε* → 0. Similarly, we obtain

$$S_{\varepsilon}^{*}(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g-[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]^{*}g-[\mathcal{N}^{\nu},S_{\varepsilon}]^{*}g\to(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g$$

weakly as $\varepsilon\to 0$. Next, we show that $g_{\varepsilon}\in\operatorname{dom}(A^{*})$ for all $\varepsilon>0$. For this, we realise that $g_{\varepsilon}\in\operatorname{dom}(\partial_{t,\nu}^{*})=\operatorname{dom}(\partial_{t,\nu})$ and, thus, revisiting (16.2), we infer

$$\begin{split} \langle Au,g_{\varepsilon}\rangle &= -\left\langle(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu})u,g_{\varepsilon}\right\rangle+\left\langle(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)S_{\varepsilon}u,g\right\rangle \\ &\quad-\left\langle[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]u,g\right\rangle-\left\langle[\mathcal{N}^{\nu},S_{\varepsilon}]u,g\right\rangle \\ &= -\left\langle u,((\mathcal{M}^{\nu})^{*}\partial_{t,\nu}^{*}+(\mathcal{N}^{\nu})^{*})g_{\varepsilon}\right\rangle+\left\langle u,S_{\varepsilon}^{*}(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g\right\rangle \\ &\quad-\left\langle u,[\partial_{t,\nu}\mathcal{M}^{\nu},S_{\varepsilon}]^{*}g+[\mathcal{N}^{\nu},S_{\varepsilon}]^{*}g\right\rangle. \end{split}$$

Since $H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)$ is dense in $L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)$, we read off that $g_{\varepsilon}\in\operatorname{dom}(A^{*})$. Thus, since $g_{\varepsilon}\in\operatorname{dom}(\partial_{t,\nu}^{*})$ anyway, we obtain by the first statements in Theorem 2.3.2 and Theorem 2.3.4 that

$$(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)^{*}g_{\varepsilon}=(\mathcal{M}^{\nu})^{*}\partial_{t,\nu}^{*}g_{\varepsilon}+(\mathcal{N}^{\nu})^{*}g_{\varepsilon}+A^{*}g_{\varepsilon},$$

which together with the above convergence result shows the assertion.

**Lemma 16.3.5** *Let $\mu,\nu\in\mathbb{R}$, $\mu\geqslant\nu$. Let $S^{\nu}\in L(L_{2,\nu}(\mathbb{R};H))$ as well as $S^{\mu}\in L(L_{2,\mu}(\mathbb{R};H))$ be causal and let $D\subseteq L_{2,\nu}(\mathbb{R};H)\cap L_{2,\mu}(\mathbb{R};H)$ be dense in $L_{2,\mu}(\mathbb{R};H)$ such that $S^{\nu}=S^{\mu}$ on $D$. Then $S^{\nu}=S^{\mu}$ on $L_{2,\nu}(\mathbb{R};H)\cap L_{2,\mu}(\mathbb{R};H)$.*

*Proof* Let $f\in L_{2,\nu}(\mathbb{R};H)\cap L_{2,\mu}(\mathbb{R};H)$. By density of $D$, we may find a sequence $(f_n)_n$ in $D$ such that $f_n\to f$ in $L_{2,\mu}(\mathbb{R};H)$. Let $a\in\mathbb{R}$. Then $\mathbb{1}_{(-\infty,a]}f_n\to\mathbb{1}_{(-\infty,a]}f$ in $L_{2,\nu}(\mathbb{R};H)\cap L_{2,\mu}(\mathbb{R};H)$. Since both $S^{\mu}$ and $S^{\nu}$ are causal, we infer for $n\in\mathbb{N}$ that

$$\mathbb{1}_{(-\infty,a]}S^{\mu}\mathbb{1}_{(-\infty,a]}f_n=\mathbb{1}_{(-\infty,a]}S^{\mu}f_n=\mathbb{1}_{(-\infty,a]}S^{\nu}f_n=\mathbb{1}_{(-\infty,a]}S^{\nu}\mathbb{1}_{(-\infty,a]}f_n.$$

Letting $n\to\infty$, we deduce that both the left-hand side and the right-hand side converge in $L_{2,\mathrm{loc}}(\mathbb{R};H)$. Consequently, using causality again, we infer that

$$\mathbb{1}_{(-\infty,a]}S^{\mu}f=\mathbb{1}_{(-\infty,a]}S^{\mu}\mathbb{1}_{(-\infty,a]}f=\mathbb{1}_{(-\infty,a]}S^{\nu}\mathbb{1}_{(-\infty,a]}f=\mathbb{1}_{(-\infty,a]}S^{\nu}f.$$

This equality holds for all $a\in\mathbb{R}$, thus $S^{\mu}f=S^{\nu}f$ and the assertion follows.

The following lemma is proved in the (easy) Exercise 16.7.

**Lemma 16.3.6** *Let $H_0$, $H_1$ be Hilbert spaces. Let $B\colon\operatorname{dom}(B)\subseteq H_0\to H_1$ be closed and densely defined. Let $V$ be a Hilbert space such that $V\hookrightarrow\operatorname{dom}(B)$ continuously and densely. If $D\subseteq V$ is a dense subspace, then $D$ is a core for $B$.*

*Proof of Theorem 16.3.1* Define $B:=\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A$ with $\operatorname{dom}(B)=H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)$. By the last equality in Lemma 16.3.4, we have $\operatorname{dom}(B^{*})\supseteq H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A^{*})\big)$. Hence, $B^{*}$ is densely defined and, therefore, by Lemma 2.2.7, $B$ is closable. Next, we want to apply Proposition 6.3.1 to $\overline{B}$. For this, we let $\phi\in\operatorname{dom}(B)$ and compute

$$\begin{split} \operatorname{Re}\left\langle \phi,B\phi\right\rangle &= \operatorname{Re}\left\langle \phi,(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)\phi\right\rangle \\ &\geqslant c\left\langle \phi,\phi\right\rangle+\operatorname{Re}\left\langle \phi,A\phi\right\rangle \geqslant c\left\langle \phi,\phi\right\rangle. \end{split}$$

Since $\operatorname{dom}(B)$ is a core for $\overline{B}$, we deduce

$$\operatorname{Re}\left\langle \phi,\overline{B}\phi\right\rangle \geqslant c\left\langle \phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\overline{B})).$$

Using Lemma 16.3.4, we obtain that $D:=\operatorname{dom}\big((\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu})^{*}\big)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A^{*})\big)$ is a core for $B^{*}$. Using Theorem 16.2.1, we estimate for all $\psi\in D$ that

$$\operatorname{Re}\left\langle \psi,B^{*}\psi\right\rangle = \operatorname{Re}\left\langle \psi,\left(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}\right)^{*}\psi+A^{*}\psi\right\rangle \geqslant c\left\langle \psi,\psi\right\rangle.$$

Hence,

$$\operatorname{Re}\left\langle \psi,B^{*}\psi\right\rangle \geqslant c\left\langle \psi,\psi\right\rangle \quad (\psi\in\operatorname{dom}(B^{*})).$$

Thus, Proposition 6.3.1 applies and we deduce that $0\in\rho(\overline{B})$ and $\big\|\overline{B}^{-1}\big\|\leqslant 1/c$.

Next, since $(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu})^{-1}$ is causal by Theorem 16.2.1, using Proposition 16.2.3 for $\phi\in H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)=\operatorname{dom}(B)$ we obtain for $a\in\mathbb{R}$ that

$$\begin{split} \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,B\phi\right\rangle &= \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)\phi\right\rangle \\ &= \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu})\phi\right\rangle + \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,\mathbb{1}_{(-\infty,a]}A\phi\right\rangle \\ &\geqslant c\left\langle \mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle + \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,A\mathbb{1}_{(-\infty,a]}\phi\right\rangle \geqslant c\left\langle \mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle. \end{split}$$

The inequality $\operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\phi,\overline{B}\phi\right\rangle \geqslant c\left\langle \mathbb{1}_{(-\infty,a]}\phi,\phi\right\rangle$ carries over to all $\phi\in\operatorname{dom}(\overline{B})$ using that $\operatorname{dom}(B)$ is, by definition, a core for $\overline{B}$. Again appealing to Proposition 16.2.3 we obtain that $\overline{B}^{-1}$ is causal. Finally, in order to show that $S^{\nu}$ is eventually independent of $\nu$, we want to apply Lemma 16.3.5. Since we have shown that for all $\nu\geqslant\eta\geqslant\mu$ the operators $S^{\nu}$ and $S^{\eta}$ are continuous and causal, it remains to construct a set $U\subseteq L_{2,\nu}(\mathbb{R};H)\cap L_{2,\eta}(\mathbb{R};H)$ dense in $L_{2,\nu}(\mathbb{R};H)$ such that $S^{\nu}=S^{\eta}$ on $U$. We put

$$U:=(\partial_{t,\nu}\mathcal{M}^{\nu}+\mathcal{N}^{\nu}+A)\Big[C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)\Big],$$

which is evidently a subset of $L_{2,\nu}(\mathbb{R};H)$. Observe that $C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)\subseteq L_{2,\eta}(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};H)$. Moreover, $\mathcal{M}^{\nu}=\mathcal{M}^{\eta}$ as well as $\mathcal{N}^{\nu}=\mathcal{N}^{\eta}$ on $L_{2,\eta}(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};H)$. Thus, both $\mathcal{M}^{\nu}$ and $\mathcal{N}^{\nu}$ leave $L_{2,\eta}(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};H)$ invariant, by Lemma 4.2.5. Hence, since $A\big[C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)\big]\subseteq C_{\mathrm{c}}^{\infty}(\mathbb{R};H)$, we infer that $U\subseteq L_{2,\eta}(\mathbb{R};H)\cap L_{2,\nu}(\mathbb{R};H)$.

Finally, by Lemma 9.4.1, $C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)$ is dense in $L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)\cap H^{1}_{\nu}(\mathbb{R};H)$. We now apply Lemma 16.3.6 to $C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)\subseteq V:=L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)\cap H^{1}_{\nu}(\mathbb{R};H)$ and $\overline{B}$ to obtain that $C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)$ is a core for $\overline{B}$. Since $\overline{B}$ is surjective, this implies that $U=B\big[C_{\mathrm{c}}^{\infty}\big(\mathbb{R};\operatorname{dom}(A)\big)\big]\subseteq L_{2,\nu}(\mathbb{R};H)$ is dense, which yields the assertion.

## **16.4 Comments**

Traditionally, non-autonomous equations have been dealt with, similarly to the autonomous case, by mimicking techniques and results from non-autonomous ordinary differential equations. In consequence, the fundamental solution is the central object of attention, which manifests itself in the concept of so-called evolution families $(U(t,s))_{t\geqslant s}$, or propagators; see e.g. [53, 112]. Similar to the autonomous case, one is interested in the initial value problem

$$\begin{cases} u'(t)+A(t)u(t)=0, & t>0, \\ u(0)=u_0, \end{cases}$$

for a given parameter-dependent family $(A(t))_t$ of *unbounded* operators. The solution is then given by $u(t)=U(t,0)u_0$. In applications, for instance to parabolic equations, $A(t)=-\operatorname{div}a(t)\operatorname{grad}$.

One is then interested in whether $(A(t))_t$ gives rise to an evolution family. There, the main issue is to understand the behaviour of the possibly different domains of $A(t)$ for any given $t$. Focussing on inhomogeneous problems rather than initial value problems, we again change the perspective in the case of evolutionary equations. The presented time-space perspective entirely dispenses with the possible domain issues and requires only mild regularity conditions on the coefficients. In particular, as demonstrated for the heat equation in Sect. 16.1, we merely require boundedness and measurability for $a$, whereas for Maxwell's equations we need Lipschitz continuity for the coefficients $\varepsilon$ and $\mu$.

The first result on the well-posedness of non-autonomous evolutionary equations was obtained in [92]. In this source, the focus was on multiplication operators as coefficients, and Lipschitz continuity of the operator coefficients with respect to time was assumed. The method of proof has been used to generalise this to the commutator assumption presented here; see [137, 138]. Theorem 16.3.1 also has a nonlinear analogue, which can be found in [122]. For an autonomous well-posedness result for nonlinear evolutionary inclusions we also refer to Chap. 17.

## **Exercises**

**Exercise 16.1** Let *<sup>V</sup>* : <sup>R</sup> <sup>→</sup> <sup>R</sup> be Lipschitz continuous.


$$V(\mathrm{m})^{\nu}\partial_{t,\nu}\subseteq\partial_{t,\nu}V(\mathrm{m})^{\nu}-V'(\mathrm{m})^{\nu}.$$

(c) In the situation of (b), show that for *φ* ∈ dom*(∂t ,ν)*, we have

$$\operatorname{Re}\left\langle \phi,\partial_{t,\nu}V(\mathrm{m})\phi\right\rangle = \nu\left\langle \phi,V(\mathrm{m})\phi\right\rangle + \frac{1}{2}\left\langle \phi,V'(\mathrm{m})\phi\right\rangle.$$

**Exercise 16.2** Let $H$ be a Hilbert space and $\mu\in\mathbb{R}$. Let $\mathcal{M},\mathcal{M}'\in S_{\mathrm{ev}}(H,\mu)$. Assume that

$$\mathcal{M}^{\mu}\partial_{t,\mu}\subseteq\partial_{t,\mu}\mathcal{M}^{\mu}-(\mathcal{M}')^{\mu}.$$

Show that then for all $\nu\geqslant\mu$ we have

$$\mathcal{M}^{\nu}\partial_{t,\nu}\subseteq\partial_{t,\nu}\mathcal{M}^{\nu}-(\mathcal{M}')^{\nu}.$$

**Exercise 16.3** Let *H* be a Hilbert space, *ν, c >* 0*, M* ∈ *M(H, ν).* Assume that

$$
\operatorname{Re} zM(z) \geqslant c.
$$

Show that then

$$\operatorname{Re}\left\langle \partial\_{t,\boldsymbol{\nu}}M(\partial\_{t,\boldsymbol{\nu}})\phi, \mathbbm{1}\_{(-\infty,a]}\phi \right\rangle \geqslant c \left\| \mathbbm{1}\_{(-\infty,a]}\phi \right\|^2$$

for all $\phi\in\operatorname{dom}(\partial_{t,\nu})$ and $a\in\mathbb{R}$.

**Exercise 16.4** In the situation of Theorem 16.2.1, show that $0\in\rho(\partial_{t,\nu}\mathcal{M}+\mathcal{N})$ and $\big\|(\partial_{t,\nu}\mathcal{M}+\mathcal{N})^{-1}\big\|\leqslant 1/c$. *Hint*: Show $\operatorname{Re}\left(\partial_{t,\nu}\mathcal{M}+\mathcal{N}\right)^{*}\geqslant c$ first.

**Exercise 16.5** Prove the following 'non-causal' version of Theorem 16.3.1: Let $H$ be a Hilbert space, $\nu\in\mathbb{R}$. Let $\mathcal{M},\mathcal{M}',\mathcal{N}\in L(L_{2,\nu}(\mathbb{R};H))$ and $A\colon\operatorname{dom}(A)\subseteq H\to H$ be closed and densely defined. Assume that there exists $c>0$ such that the following conditions are satisfied:

(a) $\mathcal{M}\partial_{t,\nu}\subseteq\partial_{t,\nu}\mathcal{M}-\mathcal{M}'$,

(b) for all *φ* ∈ dom*(∂t ,ν)* we have

$$\operatorname{Re}\left\langle \phi, \left(\partial\_{\mathfrak{t},\boldsymbol{\nu}}\mathcal{M} + \mathcal{N}\right)\phi\right\rangle\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R};H)} \geqslant c \left\langle \phi, \phi\right\rangle\_{L\_{2,\boldsymbol{\nu}}(\mathbb{R};H)},$$

(c) for all *x* ∈ dom*(A)* and *y* ∈ dom*(A*∗*)* we have

$$\operatorname{Re}\left\langle \mathbf{x}, A\mathbf{x} \right\rangle\_H \geqslant 0 \text{ and } \operatorname{Re}\left\langle \mathbf{y}, A^\*\mathbf{y} \right\rangle\_H \geqslant 0.$$

Then

$$\partial_{t,\nu}\mathcal{M}+\mathcal{N}+A\colon H^{1}_{\nu}(\mathbb{R};H)\cap L_{2,\nu}\big(\mathbb{R};\operatorname{dom}(A)\big)\subseteq L_{2,\nu}(\mathbb{R};H)\to L_{2,\nu}(\mathbb{R};H)$$


is closable and its closure is continuously invertible. Denoting the respective inverse by $S$, we have $\|S\|_{L(L_{2,\nu}(\mathbb{R};H))}\leqslant 1/c$.

**Exercise 16.6** Without using Theorem 16.3.1 or Exercise 16.5 show that if *M* ∈ *M(H, ν)* and *N* ∈ *S*ev*(H, ν)* satisfy

$$\operatorname{Re}\left\langle \phi,(\partial_{t,\nu}M(\partial_{t,\nu})+\mathcal{N}^{\nu})\phi\right\rangle \geqslant c\left\langle \phi,\phi\right\rangle \quad (\phi\in\operatorname{dom}(\partial_{t,\nu}))$$

for some $c>0$, then $0\in\rho\big(\partial_{t,\nu}M(\partial_{t,\nu})+\mathcal{N}^{\nu}+A\big)$ for all skew-selfadjoint $A\colon\operatorname{dom}(A)\subseteq H\to H$.

*Hint:* Compute the adjoint of *∂t ,νM(∂t ,ν)* <sup>+</sup> *<sup>N</sup> <sup>ν</sup>* <sup>+</sup> *<sup>A</sup>* with the help of Theorem 6.2.1 and Theorem 2.3.2.

**Exercise 16.7** Prove Lemma 16.3.6.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 17 Evolutionary Inclusions**

This chapter is devoted to the study of *evolutionary inclusions*. In contrast to evolutionary equations, we will replace the skew-selfadjoint operator $A$ by a so-called maximal monotone relation $A\subseteq H\times H$ in the Hilbert space $H$. The resulting problem is then no longer an equation, but an inclusion; that is, we consider problems of the form

$$(u,f)\in\partial_{t,\nu}M(\partial_{t,\nu})+A,\tag{17.1}$$

where $f\in L_{2,\nu}(\mathbb{R};H)$ is given and $u\in L_{2,\nu}(\mathbb{R};H)$ is to be determined. This generalisation allows the treatment of certain non-linear problems, since we will not require any linearity for the relation $A$. Moreover, the fact that $A$ is just a relation and not necessarily an operator can be used to treat hysteresis phenomena, which occur, for instance, in the theory of elasticity and electromagnetism.

We begin by defining the notion of maximal monotone relations in the first part of this chapter. In particular, we introduce the so-called Yosida approximation of $A$ and provide a useful perturbation result for maximal monotone relations, which will be the key argument for proving the well-posedness of (17.1). For this, we prove the celebrated theorem of Minty, which characterises maximal monotone relations by a range condition. The second section is devoted to the main result of this chapter, namely the well-posedness of (17.1), which generalises Picard's theorem (see Theorem 6.2.1) to a broader class of problems. In the concluding section we consider Maxwell's equations in a polarisable medium as an application.

## **17.1 Maximal Monotone Relations and the Theorem of Minty**

**Definition** Let *A* ⊆ *H* × *H*. We call *A monotone* if

$$\forall (u,v),(x,y)\in A:\quad \operatorname{Re}\left\langle u-x,v-y\right\rangle \geqslant 0.$$

Moreover, we call *A maximal monotone* if *A* is monotone and for each monotone relation *B* ⊆ *H* × *H* with *A* ⊆ *B* it follows that *A* = *B.*
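As an aside, a standard scalar example (added here for illustration) of a maximal monotone relation that fails to be an operator is the sign relation on $H=\mathbb{R}$:

```latex
A := \bigl\{(x,\, x/|x|)\ ;\ x \in \mathbb{R}\setminus\{0\}\bigr\}
     \cup \bigl(\{0\}\times[-1,1]\bigr) \subseteq \mathbb{R}\times\mathbb{R}.
```

Monotonicity is immediate, and for every $f\in\mathbb{R}$ the inclusion $f\in u+A(u)$ has the unique solution $u=f-1$ for $f>1$, $u=f+1$ for $f<-1$, and $u=0$ for $|f|\leqslant 1$. Hence $1+A$ is onto, and maximality follows from Proposition 17.1.2(b) below.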

*Remark 17.1.1* Let *A* ⊆ *H* × *H* be a monotone relation.

(a) It is clear that *A* is maximal monotone if and only if for each *x,y* ∈ *H* with

$$\forall (u,v)\in A:\quad \operatorname{Re}\left\langle u-x,v-y\right\rangle \geqslant 0$$

it follows that *(x, y)* ∈ *A*.

(b) From (a) it follows that $A$ is *demiclosed*; i.e., for each sequence $((x_n,y_n))_{n\in\mathbb{N}}$ in $A$ with $x_n\to x$ in $H$ and $y_n\to y$ weakly, or $x_n\to x$ weakly and $y_n\to y$ in $H$, for some $x,y\in H$ as $n\to\infty$, it follows that $(x,y)\in A$ (note that in both cases we have $\langle u-x_n,v-y_n\rangle\to\langle u-x,v-y\rangle$ for each $(u,v)\in A$).

We start by presenting some first properties of monotone and maximal monotone relations.

**Proposition 17.1.2** *Let $A\subseteq H\times H$ be monotone and $\lambda>0$. Then the following statements hold:*

*(a) $(1+\lambda A)^{-1}$ is a mapping and Lipschitz-continuous with $\|(1+\lambda A)^{-1}\|_{\mathrm{Lip}}\leqslant 1$.*

*(b) If $1+\lambda A$ is onto, then $A$ is maximal monotone.*
*Proof* For showing (a), we assume that $(f,u),(g,x)\in(1+\lambda A)^{-1}$ for some $f,g,u,x\in H$. Then we find $v,y\in H$ such that $(u,v),(x,y)\in A$ and $u+\lambda v=f$ as well as $x+\lambda y=g$. The monotonicity of $A$ then yields

$$\|u-x\|^2 = \operatorname{Re}\langle f-g-\lambda(v-y), u-x\rangle \leqslant \operatorname{Re}\langle f-g, u-x\rangle \leqslant \|f-g\|\,\|u-x\|.$$

If now $f=g$, then $u=x$. Hence, $(1+\lambda A)^{-1}$ is a mapping and the inequality proves its Lipschitz continuity with $\|(1+\lambda A)^{-1}\|_{\mathrm{Lip}}\leqslant 1$.

To prove (b), let $B\subseteq H\times H$ be monotone with $A\subseteq B$ and let $(x,y)\in B$. Since $1+\lambda A$ is onto, we find $(u,v)\in A\subseteq B$ such that $u+\lambda v=x+\lambda y$. Since $(1+\lambda B)^{-1}$ is a mapping by (a), we infer that

$$x = (1+\lambda B)^{-1}(x+\lambda y) = (1+\lambda B)^{-1}(u+\lambda v) = u$$

and hence, also *v* = *y*, which proves that *(x, y)* ∈ *A* and thus, *A* = *B*.
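To see Proposition 17.1.2(a) in action, here is a minimal numerical sketch (an addition, not part of the text): on the real Hilbert space $H=\mathbb{R}$ we take the sign relation $A=\{(u,v)\;;\;v=u/|u|\text{ for }u\neq 0,\ v\in[-1,1]\text{ for }u=0\}$, which is monotone; its resolvent $(1+\lambda A)^{-1}$ turns out to be the soft-thresholding map, and the Lipschitz bound $\|(1+\lambda A)^{-1}\|_{\mathrm{Lip}}\leqslant 1$ can be checked on random samples.

```python
import random

def resolvent(f, lam):
    """(1 + lam*A)^{-1}(f) for the sign relation A on R.

    Solving u + lam*s = f with s in sign(u) gives soft thresholding:
    u = 0 whenever |f| <= lam (then s = f/lam lies in [-1, 1]),
    and u = f - lam*sgn(f) otherwise.
    """
    if abs(f) <= lam:
        return 0.0
    return f - lam * (1.0 if f > 0 else -1.0)

lam = 0.7
random.seed(1)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
# Proposition 17.1.2(a): the resolvent is single-valued and 1-Lipschitz.
assert all(abs(resolvent(f, lam) - resolvent(g, lam)) <= abs(f - g) + 1e-12
           for f, g in samples)
```

Note that $A$ is multi-valued at $0$, yet its resolvent is a single-valued mapping, exactly as Proposition 17.1.2(a) predicts.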

*Example 17.1.3* Let $B\colon\operatorname{dom}(B)\subseteq H\to H$ be a densely defined, closed linear operator. Assume $\operatorname{Re}\langle u,Bu\rangle\geqslant 0$ and $\operatorname{Re}\langle v,B^*v\rangle\geqslant 0$ for all $u\in\operatorname{dom}(B)$ and $v\in\operatorname{dom}(B^*)$. Then $B$ is maximal monotone. Indeed, the monotonicity follows from the linearity of $B$, and by Proposition 6.3.1 the operator $1+B$ is continuously invertible, hence onto. Thus, the maximal monotonicity follows by Proposition 17.1.2(b). In particular, every skew-selfadjoint operator is maximal monotone. Moreover, if $M\colon\operatorname{dom}(M)\subseteq\mathbb{C}\to L(H)$ is a material law such that there exist $c>0$, $\nu_0\geqslant s_{\mathrm{b}}(M)$ with

$$\operatorname{Re}\langle zM(z)\phi,\phi\rangle \geqslant c\|\phi\|^2 \quad (\phi\in H,\ z\in\mathbb{C}_{\operatorname{Re}\geqslant\nu_0}),$$

then $\partial_{t,\nu}M(\partial_{t,\nu})-c$ is maximal monotone for each $\nu\geqslant\nu_0$.

Our first goal is to show that the implication in Proposition 17.1.2(b) is actually an equivalence. This is Minty's theorem. For this, we begin by introducing subgradients of convex, proper, lower semi-continuous mappings, which form probably the most prominent example of maximal monotone relations.

**Definition** Let *f* : *H* → *(*−∞*,*∞]. We call *f*

(a) *convex* if for all *x,y* ∈ *H,λ* ∈ *(*0*,* 1*)* we have

$$f(\lambda x + (1-\lambda)y) \leqslant \lambda f(x) + (1-\lambda)f(y).$$

(b) *proper* if there exists $x\in H$ with $f(x)<\infty$.

(c) *lower semi-continuous* (l.s.c.) if for each $c\in\mathbb{R}$ the sublevel set

$$[f\leqslant c] := \{x\in H\;;\;f(x)\leqslant c\}$$

is closed.

(d) *coercive* if for each $c\in\mathbb{R}$ the sublevel set $[f\leqslant c]$ is bounded.

*Remark 17.1.4* If $f\colon H\to(-\infty,\infty]$ is convex, the sublevel sets $[f\leqslant c]$ are convex for each $c\in\mathbb{R}$. Hence, if $f$ is convex, l.s.c. and coercive, the sets $[f\leqslant c]$ are weakly sequentially compact (or, by the Eberlein–Šmulian theorem [50, Theorem 13.1], equivalently, weakly compact) for each $c\in\mathbb{R}$. Indeed, if $(x_n)_{n\in\mathbb{N}}$ is a sequence in $[f\leqslant c]$ for some $c\in\mathbb{R}$, then it is bounded and thus possesses a weakly convergent subsequence with weak limit $x\in H$. Since $[f\leqslant c]$ is closed and convex, Mazur's theorem [50, Corollary 2.11] yields that it is weakly closed and thus $x\in[f\leqslant c]$, proving the claim.

**Definition** Let *f* : *H* → *(*−∞*,*∞] be convex. We define the *subgradient* of *f* by

$$\partial f := \{(x,y)\in H\times H\;;\;\forall u\in H:\ f(u)\geqslant f(x)+\operatorname{Re}\langle y,u-x\rangle\}.$$

*Remark 17.1.5* Note that $u\mapsto f(x)+\operatorname{Re}\langle y,u-x\rangle$ is an affine function touching the graph of $f$ in $x$. Thus, the subgradient is the set of all pairs $(x,y)\in H\times H$ such that there exists an affine function with slope $y$ touching the graph of $f$ in $x$. It is not hard to show that if $f$ is differentiable in $x$, then $(x,y)\in\partial f$ if and only if $y=f'(x)$ (see Exercise 17.1). Thus, the subgradient of $f$ provides a generalisation of the derivative for arbitrary convex functions.
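As a concrete illustration (an added sketch, not from the text): for $f=|\cdot|$ on $H=\mathbb{R}$, the subgradient inequality $f(u)\geqslant f(x)+\operatorname{Re}\langle y,u-x\rangle$ can be tested on a sample grid; at $x=0$, where $f$ is not differentiable, it holds precisely for the slopes $y\in[-1,1]$, while at a point of differentiability only the derivative remains.

```python
def in_subgradient(x, y, f, grid):
    """Check the subgradient inequality f(u) >= f(x) + y*(u - x) on a sample grid."""
    return all(f(u) >= f(x) + y * (u - x) - 1e-12 for u in grid)

f = abs
grid = [k / 100 for k in range(-500, 501)]  # sample points in [-5, 5]

# At x = 0 the affine minorants u -> y*u of |u| are exactly those with |y| <= 1.
assert in_subgradient(0.0, 0.5, f, grid)       # y = 0.5 touches |.| at 0
assert in_subgradient(0.0, -1.0, f, grid)      # boundary slope y = -1 still works
assert not in_subgradient(0.0, 1.2, f, grid)   # a slope beyond [-1, 1] fails
# Away from 0, |.| is differentiable; only the derivative is a subgradient:
assert in_subgradient(2.0, 1.0, f, grid)
assert not in_subgradient(2.0, 0.9, f, grid)
```

So $\partial|\cdot| = \operatorname{sign}$ as a relation, multi-valued exactly at the kink.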

**Proposition 17.1.6** *Let $f\colon H\to(-\infty,\infty]$ be convex and proper. Then the following statements hold:*

*(a) $\partial f$ is monotone.*

*(b) If $f$ is l.s.c. and coercive, then there exists $x\in H$ with $f(x)=\inf_{u\in H}f(u)$.*

*(c) Let $\alpha>0$, $y\in H$ and $g:=f+\frac{\alpha}{2}\|\cdot-y\|^2$. Then $x\in H$ satisfies $g(x)=\inf_{u\in H}g(u)$ if and only if $(x,\alpha(y-x))\in\partial f$.*

*(d) If $f$ is l.s.c., then $1+\alpha\partial f$ is onto for each $\alpha>0$; in particular, $\partial f$ is maximal monotone.*
#### *Proof*

(a) If $(x,y)\in\partial f$ we have $f(u)\geqslant f(x)+\operatorname{Re}\langle y,u-x\rangle$ for each $u\in H$. Since $f$ is proper, we find $u\in H$ such that $f(u)<\infty$ and hence also $f(x)<\infty$. Let now $(u,v),(x,y)\in\partial f$. Then we have $f(u)\geqslant f(x)+\operatorname{Re}\langle y,u-x\rangle$ and $f(x)\geqslant f(u)+\operatorname{Re}\langle v,x-u\rangle=f(u)-\operatorname{Re}\langle v,u-x\rangle$. Summing up both inequalities (note that $f(x),f(u)<\infty$ by what we have shown before), we infer

$$\operatorname{Re}\langle y-v, u-x\rangle \leqslant 0,$$

which shows the monotonicity.

(b) Let $(x_n)_{n\in\mathbb{N}}$ be a minimising sequence for $f$. By coercivity, $(x_n)_{n\in\mathbb{N}}$ eventually lies in a sublevel set $[f\leqslant c]$ for suitable $c\in\mathbb{R}$, which is weakly sequentially compact by Remark 17.1.4, so passing to a subsequence we may assume $x_n\to x$ weakly for some $x\in H$. For every $c>\inf_{u\in H}f(u)$ we have $x_n\in[f\leqslant c]$ eventually, and this set is weakly closed by Mazur's theorem; hence $f(x)\leqslant c$ for every such $c$, and thus $f(x)=\inf_{u\in H}f(u)$ (in particular the infimum is finite).

(c) Assume first that $g(x)=\inf_{u\in H}g(u)$ and let $u\in H$, $\lambda\in(0,1)$. Setting $w:=\lambda(u-x)+x$, the convexity of $f$ and the minimality of $g(x)$ yield
$$\begin{aligned}
\lambda\bigl(f(u)-f(x)\bigr) &\geqslant f(w)-f(x)\\
&= g(w)-g(x)+\frac{\alpha}{2}\bigl(\|x-y\|^2-\|w-y\|^2\bigr)\\
&\geqslant \frac{\alpha}{2}\bigl(\|x-y\|^2-\|w-y\|^2\bigr)\\
&= \frac{\alpha}{2}\bigl(\|x-y\|^2-\|\lambda(u-x)+x-y\|^2\bigr)\\
&= \frac{\alpha}{2}\bigl(-2\lambda\operatorname{Re}\langle u-x,x-y\rangle-\lambda^2\|u-x\|^2\bigr).
\end{aligned}$$

Dividing the latter expression by *λ* and taking the limit *λ* → 0, we infer

$$-\alpha\operatorname{Re}\langle u-x, x-y\rangle \leqslant f(u)-f(x),$$

which proves *(x, α(y* − *x))* ∈ *∂f.*

Assume now that *(x, α(y* − *x))* ∈ *∂f* . For each *u* ∈ *H* we have

$$\|x-y\|^2 - 2\operatorname{Re}\langle y-x, u-x\rangle = \|y-x-(u-x)\|^2 - \|u-x\|^2 \leqslant \|u-y\|^2$$

and thus,

$$f(u) \geqslant f(x) + \operatorname{Re}\langle\alpha(y-x), u-x\rangle \geqslant f(x) + \frac{\alpha}{2}\bigl(\|x-y\|^2 - \|u-y\|^2\bigr),$$

which shows $g(u)\geqslant g(x)$ for each $u\in H$; that is, $g(x)=\inf_{u\in H}g(u)$.

(d) We first show that there exists an affine function $h\colon H\to\mathbb{R}$ with $h\leqslant f$. For this, we consider the epigraph of $f$ given by

$$\operatorname{epi} f := \{(x,\beta)\in H\times\mathbb{R}\;;\;f(x)\leqslant\beta\}.$$

Since $f$ is convex and l.s.c., one easily verifies that $\operatorname{epi} f$ is convex and closed. Moreover, since $f$ is proper, $\operatorname{epi} f\neq\emptyset$. Let now $z\in H$ with $f(z)<\infty$ and $\eta<f(z)$. Then $(z,\eta)\in(H\times\mathbb{R})\setminus\operatorname{epi} f$ and by the Hahn–Banach theorem we find $w\in H$ and $\gamma\in\mathbb{R}$ such that

$$\operatorname{Re}\langle w,z\rangle + \gamma\eta < \operatorname{Re}\langle w,x\rangle + \gamma\beta$$

for all *(x, β)* ∈ epi *f.* In particular

$$\operatorname{Re}\langle w,z\rangle + \gamma\eta < \operatorname{Re}\langle w,x\rangle + \gamma f(x)$$

for each $x\in H$, and since this holds also for $x=z$, we infer $\gamma>0$. Choosing $h(x):=\frac{1}{\gamma}\operatorname{Re}\langle w,z-x\rangle+\eta$ for $x\in H$, we have found the asserted affine function.

Using this, we have that

$$g(u) \geqslant \frac{\alpha}{2}\|u-y\|^2 + h(u) \quad (u\in H)$$

and since the right-hand side tends to $\infty$ as $\|u\|\to\infty$, we derive that $g$ is coercive. Moreover, $g$ is convex, proper and l.s.c. (see Exercise 17.2) and thus there exists $x\in H$ with $g(x)=\inf_{u\in H}g(u)$ by (b). By (c), $(x,\alpha(y-x))\in\partial f$ and hence $y=x+\alpha^{-1}\bigl(\alpha(y-x)\bigr)$; that is, $(x,y)\in 1+\alpha^{-1}\partial f$. Since $y\in H$ and $\alpha>0$ were arbitrary, $1+\alpha\partial f$ is onto for each $\alpha>0$, and so $\partial f$ is maximal monotone by (a) and Proposition 17.1.2(b).

We can now prove Minty's theorem.

**Theorem 17.1.7 (Minty)** *Let $A\subseteq H\times H$ be maximal monotone. Then $1+\lambda A$ is onto for all $\lambda>0$.*

*Proof* Since $\lambda A$ is maximal monotone for each $\lambda>0$, it suffices to prove the statement for $\lambda=1$. Moreover, since $A-(0,f)$ is maximal monotone for each $f\in H$, it suffices to show $0\in\operatorname{ran}(1+A)$. For this, define $f_A\colon H\times H\to(-\infty,\infty]$ by (note that $A\neq\emptyset$ by maximal monotonicity)

$$f_A(u,v) := \sup\{\operatorname{Re}\langle u,y\rangle + \operatorname{Re}\langle v,x\rangle - \operatorname{Re}\langle x,y\rangle\;;\;(x,y)\in A\}.$$

As a supremum of affine functions, we see that *fA* is convex and l.s.c. Moreover, we have that

$$\begin{aligned}
f_A(u,v) &= -\inf\{-\operatorname{Re}\langle u,y\rangle - \operatorname{Re}\langle v,x\rangle + \operatorname{Re}\langle x,y\rangle\;;\;(x,y)\in A\}\\
&= -\inf\{\operatorname{Re}\langle x-u, y-v\rangle\;;\;(x,y)\in A\} + \operatorname{Re}\langle u,v\rangle
\end{aligned}$$

for each *u, v* ∈ *H* and since *A* is maximal monotone, we get by using Remark 17.1.1

$$\begin{aligned}
\inf\{\operatorname{Re}\langle x-u, y-v\rangle\;;\;(x,y)\in A\}\geqslant 0 &\Leftrightarrow (u,v)\in A\\
&\Leftrightarrow \inf\{\operatorname{Re}\langle x-u, y-v\rangle\;;\;(x,y)\in A\}=0
\end{aligned}$$

and so

$$\inf \left\{ \operatorname{Re} \left\langle \mathbf{x} - \boldsymbol{u}, \, \mathbf{y} - \boldsymbol{v} \right\rangle \; ; \; (\mathbf{x}, \, \mathbf{y}) \in A \right\} \leqslant 0 \quad (\boldsymbol{u}, \, \mathbf{v} \in H).$$

In particular, we get $f_A(u,v)\geqslant\operatorname{Re}\langle u,v\rangle$ for each $u,v\in H$ and $f_A(u,v)=\operatorname{Re}\langle u,v\rangle$ if and only if $(u,v)\in A$. Thus, $f_A$ is proper since $A\neq\emptyset$. By Proposition 17.1.6(d) we obtain that $0\in\operatorname{ran}(1+\partial f_A)$ and thus we find $(u_0,v_0)\in H\times H$ with $((u_0,v_0),(-u_0,-v_0))\in\partial f_A$. Hence, by definition of $\partial f_A$,

$$\begin{aligned}
f_A(u,v) &\geqslant f_A(u_0,v_0) + \operatorname{Re}\langle(-u_0,-v_0),(u-u_0,v-v_0)\rangle\\
&= f_A(u_0,v_0) + \|u_0\|^2 + \|v_0\|^2 - \operatorname{Re}\langle u_0,u\rangle - \operatorname{Re}\langle v_0,v\rangle
\end{aligned}$$

for all *(u, v)* ∈ *H* × *H.* In particular, using that *fA(u, v)* = Re *u, v* for *(u, v)* ∈ *A* we get

$$0 \geqslant f_A(u_0,v_0) + \|u_0\|^2 + \|v_0\|^2 - \operatorname{Re}\langle u_0,u\rangle - \operatorname{Re}\langle v_0,v\rangle - \operatorname{Re}\langle u,v\rangle \quad ((u,v)\in A).$$

Taking the supremum over all *(u, v)* ∈ *A*, we infer

$$\begin{aligned}
0 &\geqslant f_A(u_0,v_0) + \|u_0\|^2 + \|v_0\|^2 + f_A(-v_0,-u_0)\\
&\geqslant \operatorname{Re}\langle u_0,v_0\rangle + \|u_0\|^2 + \|v_0\|^2 + \operatorname{Re}\langle -v_0,-u_0\rangle = \|u_0+v_0\|^2.
\end{aligned}$$

Thus, $u_0+v_0=0$ and, instead of inequalities, we actually have equalities in the expression above. Thus, $f_A(u_0,v_0)=\operatorname{Re}\langle u_0,v_0\rangle$ and so $(u_0,v_0)\in A$. From $u_0+v_0=0$ it thus follows that $0\in\operatorname{ran}(1+A)$.

Next, we show how to extend maximal monotone relations on a Hilbert space $H$ to the Bochner–Lebesgue space $L_2(\mu;H)$ for a $\sigma$-finite measure space $(\Omega,\mathcal{A},\mu)$. The condition $(0,0)\in A$ can be dropped if $\mu(\Omega)<\infty$.

**Corollary 17.1.8** *Let $A\subseteq H\times H$ be maximal monotone with $(0,0)\in A$. Moreover, let $(\Omega,\mathcal{A},\mu)$ be a $\sigma$-finite measure space and define*

$$A_{L_2(\mu;H)} := \bigl\{(f,g)\in L_2(\mu;H)\times L_2(\mu;H)\;;\;(f(t),g(t))\in A \quad (t\in\Omega \text{ a.e.})\bigr\}.$$

*Then AL*2*(μ*;*H ) is maximal monotone.*

*Proof* The monotonicity of $A_{L_2(\mu;H)}$ is clear. For showing the maximal monotonicity we prove that $1+A_{L_2(\mu;H)}$ is onto (see Proposition 17.1.2(b)). For this, let $h\in L_2(\mu;H)$ and set $f(t):=(1+A)^{-1}(h(t))$ for each $t\in\Omega$. Note that $f$ is well-defined by Theorem 17.1.7. Since $(1+A)^{-1}$ is continuous by Proposition 17.1.2(a) and $h$ is Bochner-measurable, $f$ is also Bochner-measurable. Moreover, using that $(0,0)\in 1+A$ and $\|(1+A)^{-1}\|_{\mathrm{Lip}}\leqslant 1$, we compute

$$\int_\Omega \|f(t)\|^2\,\mathrm{d}\mu(t) \leqslant \int_\Omega \|h(t)\|^2\,\mathrm{d}\mu(t) < \infty$$

and so, *f* ∈ *L*2*(μ*; *H )*. Thus, *h* − *f* ∈ *L*2*(μ*; *H )*, which yields *(f, h* − *f )* ∈ *AL*2*(μ*;*H )* and so, *h* ∈ ran*(*1 + *AL*2*(μ*;*H )).*

## **17.2 The Yosida Approximation and Perturbation Results**

We now have all concepts at hand to introduce the Yosida approximation for a maximal monotone relation.

**Definition** Let *A* ⊆ *H* × *H* be maximal monotone and *λ >* 0. We define

$$A\_{\lambda} := \lambda^{-1} \left( 1 - (1 + \lambda A)^{-1} \right).$$

The family *(Aλ)λ>*<sup>0</sup> is called *Yosida approximation of A.*

Since for a maximal monotone relation $A\subseteq H\times H$ the resolvent $(1+\lambda A)^{-1}$ is actually a Lipschitz-continuous mapping (by Proposition 17.1.2(a)) whose domain is $H$ (by Theorem 17.1.7), the same holds for $A_\lambda$. We collect some useful properties of the Yosida approximation.

**Proposition 17.2.1** *Let $A\subseteq H\times H$ be maximal monotone and $\lambda>0$. Then the following statements hold:*

*(a) For each $x\in H$ we have $\bigl((1+\lambda A)^{-1}(x), A_\lambda(x)\bigr)\in A$.*

*(b) $A_\lambda\colon H\to H$ is monotone and Lipschitz-continuous with $\|A_\lambda\|_{\mathrm{Lip}}\leqslant\frac{1}{\lambda}$.*
#### *Proof*

(a) Let $x\in H$ and set $u:=(1+\lambda A)^{-1}(x)$. Then $x\in(1+\lambda A)(u)$, so $(u,\lambda^{-1}(x-u))\in A$, and by definition $A_\lambda(x)=\lambda^{-1}(x-u)$. This shows $\bigl((1+\lambda A)^{-1}(x),A_\lambda(x)\bigr)\in A$.

(b) For $x,y\in H$ we compute
$$\begin{aligned} &\lambda \operatorname{Re} \left\langle A\_{\lambda}(\mathbf{x}) - A\_{\lambda}(\mathbf{y}), \mathbf{x} - \mathbf{y} \right\rangle \\ & \qquad = \left\| \mathbf{x} - \mathbf{y} \right\|^{2} - \operatorname{Re} \left\langle (1 + \lambda A)^{-1}(\mathbf{x}) - (1 + \lambda A)^{-1}(\mathbf{y}), \mathbf{x} - \mathbf{y} \right\rangle \\ & \qquad \geqslant \left\| \mathbf{x} - \mathbf{y} \right\|^{2} - \left\| (1 + \lambda A)^{-1}(\mathbf{x}) - (1 + \lambda A)^{-1}(\mathbf{y}) \right\| \left\| \mathbf{x} - \mathbf{y} \right\| \\ & \qquad \geqslant 0 \end{aligned}$$

by Proposition 17.1.2(a) and hence, *Aλ* is monotone. Moreover,

$$\begin{aligned}
&\operatorname{Re}\langle A_\lambda(x)-A_\lambda(y), x-y\rangle\\
&\quad= \operatorname{Re}\bigl\langle A_\lambda(x)-A_\lambda(y), (1+\lambda A)^{-1}(x)-(1+\lambda A)^{-1}(y)\bigr\rangle + \lambda\|A_\lambda(x)-A_\lambda(y)\|^2\\
&\quad\geqslant \lambda\|A_\lambda(x)-A_\lambda(y)\|^2,
\end{aligned}$$

where we have used (a) and the monotonicity of $A$. The Cauchy–Schwarz inequality now yields $\|A_\lambda\|_{\mathrm{Lip}}\leqslant\frac{1}{\lambda}$.
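For the sign relation $A$ on $H=\mathbb{R}$ (an added numerical sketch, not from the text), the resolvent is soft thresholding and the Yosida approximation $A_\lambda=\lambda^{-1}\bigl(1-(1+\lambda A)^{-1}\bigr)$ becomes the clipping map $x\mapsto\max(-1,\min(1,x/\lambda))$, a single-valued, everywhere defined substitute for the multi-valued sign; the bounds of Proposition 17.2.1(b) can be checked on random samples.

```python
import random

def resolvent(x, lam):
    """(1 + lam*A)^{-1} for the sign relation A: soft thresholding."""
    if abs(x) <= lam:
        return 0.0
    return x - lam * (1.0 if x > 0 else -1.0)

def yosida(x, lam):
    """A_lam(x) = (x - (1 + lam*A)^{-1}(x)) / lam; here: x/lam clipped to [-1, 1]."""
    return (x - resolvent(x, lam)) / lam

lam = 0.25
assert abs(yosida(3.0, lam) - 1.0) < 1e-12   # |x| > lam: A_lam(x) = sgn(x)
assert abs(yosida(0.1, lam) - 0.4) < 1e-12   # |x| <= lam: A_lam(x) = x/lam

random.seed(2)
for _ in range(1000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    # Proposition 17.2.1(b): A_lam is monotone and (1/lam)-Lipschitz.
    assert (yosida(x, lam) - yosida(y, lam)) * (x - y) >= -1e-12
    assert abs(yosida(x, lam) - yosida(y, lam)) <= abs(x - y) / lam + 1e-9
```

As $\lambda\to 0+$ the clipping map converges pointwise to the sign relation's minimal section, illustrating why $A_\lambda$ serves as a smooth surrogate for $A$.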

We state a result on the strong convergence of the resolvents of a maximal monotone relation, which we have already used in previous sections for the resolvent of $\partial_{t,\nu}$. For the projection $P_C(x)$ of $x\in H$ onto a non-empty closed convex set $C\subseteq H$, recall Exercise 4.4 and that $y=P_C(x)$ if and only if $y\in C$ and

$$\operatorname{Re}\langle x-y, u-y\rangle_H \leqslant 0 \quad (u\in C).$$

**Proposition 17.2.2** *Let $A\subseteq H\times H$ be maximal monotone. Then $\overline{\operatorname{dom}(A)}$ is convex and for all $x\in H$ we have $(1+\lambda A)^{-1}(x)\to P_{\overline{\operatorname{dom}(A)}}(x)$ as $\lambda\to 0+$, where $P_{\overline{\operatorname{dom}(A)}}$ denotes the projection onto $\overline{\operatorname{dom}(A)}$.*

*Proof* We set $C:=\overline{\operatorname{conv}\operatorname{dom}(A)}$. Then $C$ is closed and convex. Next, we prove that $(1+\lambda A)^{-1}(x)\to P_C(x)$ as $\lambda\to 0+$ for all $x\in H$. So let $x\in H$ and set $x_\lambda:=(1+\lambda A)^{-1}(x)$ for each $\lambda>0$. Then we have $A_\lambda(x)=\frac{1}{\lambda}(x-x_\lambda)$ and hence, using Proposition 17.2.1(a) and the monotonicity of $A$, we infer $\operatorname{Re}\bigl\langle x_\lambda-u, \frac{1}{\lambda}(x-x_\lambda)-v\bigr\rangle\geqslant 0$ for each $(u,v)\in A$. Consequently, we obtain

$$\|x_\lambda\|^2 \leqslant \operatorname{Re}\langle x_\lambda-u, x\rangle + \operatorname{Re}\langle x_\lambda, u\rangle - \lambda\operatorname{Re}\langle x_\lambda-u, v\rangle \quad ((u,v)\in A). \tag{17.2}$$

In particular, we see that $(x_\lambda)_{\lambda>0}$ is bounded as $\lambda\to 0$ and so, for each null sequence we find a subsequence $(\lambda_n)_n$ with $\lambda_n\to 0$ such that $x_{\lambda_n}\to z$ weakly for some $z\in H$. By (17.2) it follows that

$$\|z\|^2 \leqslant \operatorname{Re}\langle z-u, x\rangle + \operatorname{Re}\langle z, u\rangle \quad (u\in\operatorname{dom}(A)).$$

It is easy to see that this inequality carries over to each $u\in C$ and thus $\operatorname{Re}\langle z-u, z-x\rangle\leqslant 0$ for each $u\in C$, which proves $z=P_C(x)$ and hence $x_{\lambda_n}\to P_C(x)$ weakly. Next we prove that the convergence also holds in the norm topology. From (17.2) we see that

$$\limsup_{n\to\infty}\|x_{\lambda_n}\|^2 \leqslant \operatorname{Re}\langle P_C(x)-u, x\rangle + \operatorname{Re}\langle P_C(x), u\rangle \quad (u\in\operatorname{dom}(A))$$

and again, this inequality stays true for each $u\in C$. In particular, choosing $u=P_C(x)$ we infer $\limsup_{n\to\infty}\|x_{\lambda_n}\|^2\leqslant\|P_C(x)\|^2$, which, together with the weak convergence, yields the convergence in norm (see Exercise 17.3). A subsequence argument (cf. Exercise 14.3) reveals $x_\lambda\to P_C(x)$ in $H$ as $\lambda\to 0$.

It remains to show that $\overline{\operatorname{dom}(A)}$ is convex. By what we have shown above, we have $(1+\lambda A)^{-1}(x)\to x$ as $\lambda\to 0$ for each $x\in C$ and since $(1+\lambda A)^{-1}(x)\in\operatorname{dom}(A)$ for each $\lambda>0$, we infer $x\in\overline{\operatorname{dom}(A)}$. Thus, $C\subseteq\overline{\operatorname{dom}(A)}$ and since the other inclusion holds trivially, the proof is complete.
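A minimal numerical sketch of Proposition 17.2.2 (an addition, with an assumed example): for $A=\partial f$ with $f(x)=x^2$ on $[0,\infty)$ and $f(x)=\infty$ otherwise, one has $\operatorname{dom}(A)=[0,\infty)$, and a short computation gives $(1+\lambda A)^{-1}(x)=\max(x,0)/(1+2\lambda)$, which converges to the projection onto $[0,\infty)$ as $\lambda\to 0+$.

```python
def resolvent(x, lam):
    """(1 + lam*A)^{-1}(x) for A = subgradient of f(u) = u**2 on [0, inf).

    For u > 0 the inclusion u + 2*lam*u = x gives u = x/(1 + 2*lam);
    for x <= 0 the subgradient at 0, namely (-inf, 0], absorbs x, so u = 0.
    """
    return max(x, 0.0) / (1.0 + 2.0 * lam)

def proj(x):
    """Projection onto the closure of dom(A) = [0, inf)."""
    return max(x, 0.0)

for x in (-2.0, -0.3, 0.0, 0.7, 4.0):
    errors = [abs(resolvent(x, lam) - proj(x)) for lam in (1.0, 0.1, 0.01, 0.001)]
    # distance to the projection shrinks monotonically as lam -> 0+
    assert errors == sorted(errors, reverse=True)
    assert errors[-1] <= 0.01 * abs(x) + 1e-12
```

For $x\leqslant 0$ the resolvent already equals the projection for every $\lambda$, while for $x>0$ the factor $(1+2\lambda)^{-1}$ relaxes to $1$ in the limit.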

We conclude this section with some perturbation results.

**Lemma 17.2.3** *Let $A\subseteq H\times H$ be maximal monotone and $C\colon H\to H$ Lipschitz-continuous and monotone. Then $A+C$ is maximal monotone.*

*Proof* The monotonicity of $A+C$ is clear. If $C$ is constant, then the maximality of $A+C$ is obvious. If $C$ is non-constant we choose $0<\lambda<\frac{1}{\|C\|_{\mathrm{Lip}}}$. Then for all $f\in H$ the mapping

$$\mu \mapsto (1 + \lambda A)^{-1} \left( f - \lambda C(\mu) \right)$$

defines a strict contraction (use Proposition 17.1.2(a) and $\operatorname{dom}\bigl((1+\lambda A)^{-1}\bigr)=H$ by Theorem 17.1.7) and thus possesses a fixed point $x\in H$, which then satisfies $(x,f)\in 1+\lambda(A+C)$. Thus, $A+C$ is maximal monotone by Proposition 17.1.2(b).
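The contraction argument in the proof can be run numerically (an added sketch with assumed data): take the sign relation $A$ on $\mathbb{R}$ (resolvent: soft thresholding), the monotone $1$-Lipschitz perturbation $C=\tanh$ and $\lambda=\frac{1}{2}<\frac{1}{\|C\|_{\mathrm{Lip}}}$; iterating $u\mapsto(1+\lambda A)^{-1}(f-\lambda C(u))$ then produces the fixed point $x$ with $(x,f)\in 1+\lambda(A+C)$.

```python
import math

def resolvent(x, lam):
    """(1 + lam*A)^{-1} for the sign relation A: soft thresholding."""
    if abs(x) <= lam:
        return 0.0
    return x - lam * (1.0 if x > 0 else -1.0)

lam, f = 0.5, 2.0        # lam < 1 = 1/Lip(tanh); data f chosen arbitrarily
x = 0.0
for _ in range(200):     # Banach fixed-point iteration from the proof
    x = resolvent(f - lam * math.tanh(x), lam)

# x is a fixed point: x = (1 + lam*A)^{-1}(f - lam*tanh(x)),
# i.e. (x, f) belongs to 1 + lam*(A + C).
assert abs(x - resolvent(f - lam * math.tanh(x), lam)) < 1e-12
# equivalently f = x + lam*(s + tanh(x)) with s in sign(x); here x > 0, so s = 1:
assert x > 0 and abs(x + lam * (1.0 + math.tanh(x)) - f) < 1e-12
```

The contraction factor is at most $\lambda\|C\|_{\mathrm{Lip}}<1$, so the iteration converges geometrically regardless of the starting point.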

We note that the latter lemma particularly applies to $C=B_\lambda$ for a maximal monotone relation $B\subseteq H\times H$ and $\lambda>0$ by Proposition 17.2.1(b).

**Proposition 17.2.4** *Let A,B* ⊆ *H* ×*H be two maximal monotone relations, c >* 0 *and f* ∈ *H. For λ >* 0 *we set*

$$x\_{\lambda} := \left(c + A + B\_{\lambda}\right)^{-1}(f).$$

*Then $f\in\operatorname{ran}(c+A+B)$ if and only if $\sup_{\lambda>0}\|B_\lambda(x_\lambda)\|<\infty$, and in the latter case $x_\lambda\to x$ as $\lambda\to 0$ with $(x,f)\in c+A+B$, which identifies $x$ uniquely.*

*Proof* Note that *xλ* is well-defined for *λ >* 0 by Lemma 17.2.3, Theorem 17.1.7 and Proposition 17.1.2.

For all *λ >* 0 we find *yλ* ∈ *H* such that *(xλ, yλ)* ∈ *A* and *cxλ* +*yλ* +*Bλ(xλ)* = *f.*

We first assume that there exist *x,y,z* ∈ *H* such that *(x, y)* ∈ *A, (x, z)* ∈ *B* and *cx* + *y* + *z* = *f* . Thus, we have

$$c(\mathbf{x} - \mathbf{x}\_{\lambda}) = \mathbf{y}\_{\lambda} + B\_{\lambda}(\mathbf{x}\_{\lambda}) - \mathbf{y} - z,$$

which gives

$$\begin{aligned}
0 \leqslant c\|x_\lambda-x\|^2 &= \operatorname{Re}\langle y-y_\lambda, x_\lambda-x\rangle + \operatorname{Re}\langle z-B_\lambda(x_\lambda), x_\lambda-x\rangle\\
&\leqslant \operatorname{Re}\langle z-B_\lambda(x_\lambda), x_\lambda-x\rangle\\
&= \operatorname{Re}\bigl\langle z-B_\lambda(x_\lambda), (1+\lambda B)^{-1}(x_\lambda)-x\bigr\rangle + \operatorname{Re}\langle z-B_\lambda(x_\lambda), \lambda B_\lambda(x_\lambda)\rangle\\
&\leqslant \operatorname{Re}\langle z-B_\lambda(x_\lambda), \lambda B_\lambda(x_\lambda)\rangle,
\end{aligned}$$

where we have used the monotonicity of *A* in the second line and the monotonicity of *B* as well as Proposition 17.2.1(a) in the last line. The latter implies

$$\|B_\lambda(x_\lambda)\|^2 \leqslant \operatorname{Re}\langle z, B_\lambda(x_\lambda)\rangle,$$

and the claim follows by the Cauchy–Schwarz inequality.

Assume now that $K:=\sup_{\lambda>0}\|B_\lambda(x_\lambda)\|<\infty$ and let $\mu,\lambda>0$. As above, we compute

$$\begin{aligned}
c\|x_\lambda-x_\mu\|^2 &= \operatorname{Re}\langle y_\mu-y_\lambda, x_\lambda-x_\mu\rangle + \operatorname{Re}\langle B_\mu(x_\mu)-B_\lambda(x_\lambda), x_\lambda-x_\mu\rangle\\
&\leqslant \operatorname{Re}\langle B_\mu(x_\mu)-B_\lambda(x_\lambda), x_\lambda-x_\mu\rangle\\
&= \operatorname{Re}\bigl\langle B_\mu(x_\mu)-B_\lambda(x_\lambda), (1+\lambda B)^{-1}(x_\lambda)-(1+\mu B)^{-1}(x_\mu)\bigr\rangle\\
&\quad+ \operatorname{Re}\langle B_\mu(x_\mu)-B_\lambda(x_\lambda), \lambda B_\lambda(x_\lambda)-\mu B_\mu(x_\mu)\rangle\\
&\leqslant \operatorname{Re}\langle B_\mu(x_\mu)-B_\lambda(x_\lambda), \lambda B_\lambda(x_\lambda)-\mu B_\mu(x_\mu)\rangle\\
&\leqslant 2(\lambda+\mu)K^2.
\end{aligned}$$

Thus, for a null sequence $(\lambda_n)_{n\in\mathbb{N}}$ in $(0,\infty)$ we infer that $(x_{\lambda_n})_{n\in\mathbb{N}}$ is a Cauchy sequence whose limit we denote by $x$. Since $(B_{\lambda_n}(x_{\lambda_n}))_{n\in\mathbb{N}}$ is bounded, we can assume, by passing to a suitable subsequence, that $B_{\lambda_n}(x_{\lambda_n})\to z$ weakly for some $z\in H$. Then

$$\bigl\|(1+\lambda_n B)^{-1}(x_{\lambda_n}) - x\bigr\| \leqslant \|x_{\lambda_n}-x\| + \|\lambda_n B_{\lambda_n}(x_{\lambda_n})\| \to 0 \quad (n\to\infty)$$

and since $\bigl((1+\lambda_n B)^{-1}(x_{\lambda_n}), B_{\lambda_n}(x_{\lambda_n})\bigr)\in B$ for each $n\in\mathbb{N}$ by Proposition 17.2.1(a), the demi-closedness of $B$ (see Remark 17.1.1) reveals $(x,z)\in B$. Moreover,

$$y_{\lambda_n} = f - B_{\lambda_n}(x_{\lambda_n}) - cx_{\lambda_n} \to f - z - cx =: y \quad (n\to\infty)$$

weakly and hence, by the demi-closedness of $A$, we infer $(x,y)\in A$, which completes the proof of the asserted equivalence. By a subsequence argument (cf. Exercise 14.3) we obtain the asserted convergence (note that $x=(c+A+B)^{-1}(f)$ is uniquely determined by $f$).

To treat the example in Sect. 17.4 we need another perturbation result, for which we need to introduce the notion of local boundedness of a relation.

**Definition** Let *A* ⊆ *H* × *H* and *x* ∈ dom*(A)*. Then *A* is called *locally bounded at x* if there exists *δ >* 0 such that

$$A[B(\mathbf{x}, \delta)] = \{ \mathbf{y} \in H \; ; \; \exists z \in B(\mathbf{x}, \delta) \; : \; (z, \mathbf{y}) \in A \}$$

is bounded.

**Proposition 17.2.5** *Let $A\subseteq H\times H$ be maximal monotone such that $\operatorname{int}\operatorname{conv}\operatorname{dom}(A)\neq\emptyset$. Then $\operatorname{int}\operatorname{dom}(A)=\operatorname{int}\operatorname{conv}\operatorname{dom}(A)=\operatorname{int}\overline{\operatorname{dom}(A)}$ and $A$ is locally bounded at each point $x\in\operatorname{int}\overline{\operatorname{dom}(A)}$.*

In order to prove this proposition, we need the following lemma.

**Lemma 17.2.6** *Let $(D_n)_{n\in\mathbb{N}}$ be a sequence of subsets of $H$ with $D_n\subseteq D_{n+1}$ for each $n\in\mathbb{N}$ and $D:=\bigcup_{n\in\mathbb{N}}D_n$. If $\operatorname{int}\operatorname{conv} D\neq\emptyset$, then $\operatorname{int}\operatorname{conv} D=\bigcup_{n\in\mathbb{N}}\operatorname{int}\operatorname{conv} D_n$.*

*Proof* Set $C:=\operatorname{int}\operatorname{conv} D$. By Exercise 17.4 we have $\overline{C}=\overline{\operatorname{conv} D}$. Since $(D_n)_{n\in\mathbb{N}}$ is increasing we have $\operatorname{conv} D=\bigcup_{n\in\mathbb{N}}\operatorname{conv} D_n$ and hence $C\subseteq\bigcup_{n\in\mathbb{N}}\overline{\operatorname{conv} D_n}\subseteq\overline{C}$. Since $C$ is a Baire space by Exercise 17.5, we find $n_0\in\mathbb{N}$ such that $\operatorname{int}\operatorname{conv} D_{n_0}\neq\emptyset$ and hence $\operatorname{int}\operatorname{conv} D_n\neq\emptyset$ for each $n\geqslant n_0$. Hence, $\overline{\operatorname{conv} D_n}=\overline{\operatorname{int}\operatorname{conv} D_n}$ for each $n\geqslant n_0$ by Exercise 17.4. Thus,

$$\overline{C} = \overline{\bigcup_{n\in\mathbb{N}}\overline{\operatorname{conv} D_n}} = \overline{\bigcup_{n\in\mathbb{N}}\overline{\operatorname{int}\operatorname{conv} D_n}} = \overline{\bigcup_{n\in\mathbb{N}}\operatorname{int}\operatorname{conv} D_n}.$$

Finally, since $\bigcup_{n\in\mathbb{N}}\operatorname{int}\operatorname{conv} D_n$ is open and convex, we infer $C=\bigcup_{n\in\mathbb{N}}\operatorname{int}\operatorname{conv} D_n$ by Exercise 17.4.

*Proof of Proposition 17.2.5* We first show that *A* is locally bounded at each point in int conv dom*(A)*. For this, we set

$$A_n := \{(x,y)\in A\;;\;\|x\|,\|y\|\leqslant n\} \quad (n\in\mathbb{N}).$$

Then $\operatorname{dom}(A)=\bigcup_{n\in\mathbb{N}}\operatorname{dom}(A_n)$ and $\operatorname{dom}(A_n)\subseteq\operatorname{dom}(A_{n+1})$ for each $n\in\mathbb{N}$. Since $\operatorname{int}\operatorname{conv}\operatorname{dom}(A)\neq\emptyset$, Lemma 17.2.6 gives $\operatorname{int}\operatorname{conv}\operatorname{dom}(A)=\bigcup_{n\in\mathbb{N}}\operatorname{int}\operatorname{conv}\operatorname{dom}(A_n)$. Thus, it suffices to show that $A$ is locally bounded at each $x\in\operatorname{int}\operatorname{conv}\operatorname{dom}(A_n)$ for each $n\in\mathbb{N}$. So, let $x\in\operatorname{int}\operatorname{conv}\operatorname{dom}(A_n)$ for some $n\in\mathbb{N}$. Then we find $\delta>0$ such that $B[x,\delta]\subseteq\operatorname{conv}\operatorname{dom}(A_n)$. We show that $A[B(x,\frac{\delta}{2})]$ is bounded. So, let $(u,v)\in A$ with $\|u-x\|<\frac{\delta}{2}$ and note that $u\in\operatorname{conv}\operatorname{dom}(A_n)\subseteq B[0,n]$. Then for each $(a,b)\in A_n$ we have $\operatorname{Re}\langle u-a, v-b\rangle\geqslant 0$ and thus

$$\begin{aligned} \operatorname{Re}\left\langle a - u, v \right\rangle &= \operatorname{Re}\left\langle a - u, v - b \right\rangle + \operatorname{Re}\left\langle a - u, b \right\rangle \\ &\leqslant \operatorname{Re}\left\langle a - u, b \right\rangle \leqslant 2n^2 \quad (a \in \operatorname{dom}(A\_n)). \end{aligned}$$

Clearly, this inequality carries over to each $a\in\operatorname{conv}\operatorname{dom}(A_n)$. If $v\neq 0$ we choose $a:=\frac{\delta}{2}\frac{v}{\|v\|}+u\in B[u,\frac{\delta}{2}]\subseteq B[x,\delta]\subseteq\operatorname{conv}\operatorname{dom}(A_n)$, and obtain

$$\|v\|\leqslant\frac{4n^2}{\delta},$$

which shows the boundedness of $A[B(x,\frac{\delta}{2})]$.

To complete the proof we need to show that $\operatorname{int}\operatorname{dom}(A)=\operatorname{int}\operatorname{conv}\operatorname{dom}(A)=\operatorname{int}\overline{\operatorname{dom}(A)}$. First we note that $\overline{\operatorname{dom}(A)}$ is convex by Proposition 17.2.2 and hence $\overline{\operatorname{conv}\operatorname{dom}(A)}=\overline{\operatorname{dom}(A)}$. Now Exercise 17.4(b) gives

$$
\text{int}\overline{\text{dom}\,(A)} = \text{int}\overline{\text{conv}\,\text{dom}\,(A)} = \text{int}\,\text{conv}\,\text{dom}\,(A).
$$

To show the missing equality it suffices to prove that $\operatorname{int}\operatorname{conv}\operatorname{dom}(A)\subseteq\operatorname{dom}(A)$. So, let $x\in\operatorname{int}\operatorname{conv}\operatorname{dom}(A)$. Then $x\in\overline{\operatorname{dom}(A)}$ and hence we find a sequence $((x_n,y_n))_{n\in\mathbb{N}}$ in $A$ with $x_n\to x$. Since $A$ is locally bounded at $x$, the sequence $(y_n)_{n\in\mathbb{N}}$ is bounded and hence we can assume without loss of generality that $y_n\to y$ weakly for some $y\in H$. The demi-closedness of $A$ (see Remark 17.1.1) yields $(x,y)\in A$ and thus $x\in\operatorname{dom}(A)$.

Now we can prove the following perturbation result.

**Theorem 17.2.7** *Let $A,B\subseteq H\times H$ be maximal monotone with $\operatorname{int}\operatorname{dom}(A)\cap\operatorname{dom}(B)\neq\emptyset$. Then $A+B$ is maximal monotone.*

*Proof* By shifting *A* and *B*, we can assume without loss of generality that *(*0*,* 0*)* ∈ *A*∩*B* and 0 ∈ *(*int dom *(A))*∩dom*(B)*. We need to prove that ran*(*1+*A*+*B)* = *H*. So, let *y* ∈ *H* and set

$$x_\lambda := (1+A+B_\lambda)^{-1}(y) \quad (\lambda>0).$$

Since $(0,0)\in A\cap B_\lambda$ and $\|(1+A+B_\lambda)^{-1}\|_{\mathrm{Lip}}\leqslant 1$, we infer that $\|x_\lambda\|\leqslant\|y\|$ for each $\lambda>0$. For showing $y\in\operatorname{ran}(1+A+B)$ we need to prove that $\sup_{\lambda>0}\|B_\lambda(x_\lambda)\|<\infty$ by Proposition 17.2.4. By definition we find $y_\lambda\in H$ such that $(x_\lambda,y_\lambda)\in A$ and $y=x_\lambda+y_\lambda+B_\lambda(x_\lambda)$ for each $\lambda>0$. Since $A$ is locally bounded at $0\in\operatorname{int}\overline{\operatorname{dom}(A)}$ by Proposition 17.2.5 we find $R,\delta>0$ with $B(0,\delta)\subseteq\operatorname{dom}(A)$ and

$$\forall (u, v) \in A: \|u\| < \delta \Rightarrow \|v\| \leqslant R.$$

For $\lambda > 0$ we define $u_\lambda := \frac{\delta}{2}\frac{y_\lambda}{\|y_\lambda\|}$ if $y_\lambda \neq 0$ and $u_\lambda := 0$ if $y_\lambda = 0$. Then $\|u_\lambda\| \leqslant \frac{\delta}{2} < \delta$ and thus, $u_\lambda \in \operatorname{dom}(A)$. Hence, there exists $v_\lambda \in H$ with $(u_\lambda,v_\lambda) \in A$ and $\|v_\lambda\| \leqslant R$ for each $\lambda > 0$. The monotonicity of $A$ then yields

$$\begin{split} 0 &\leqslant \operatorname{Re}\left\langle y_\lambda - v_\lambda, x_\lambda - u_\lambda\right\rangle\\ &= \operatorname{Re}\left\langle y_\lambda, x_\lambda\right\rangle - \operatorname{Re}\left\langle v_\lambda, x_\lambda\right\rangle - \operatorname{Re}\left\langle y_\lambda, u_\lambda\right\rangle + \operatorname{Re}\left\langle v_\lambda, u_\lambda\right\rangle\\ &\leqslant \operatorname{Re}\left\langle y - x_\lambda - B_\lambda(x_\lambda), x_\lambda\right\rangle - \operatorname{Re}\left\langle y_\lambda, u_\lambda\right\rangle + R\left\|x_\lambda\right\| + \frac{\delta}{2}R\\ &\leqslant \operatorname{Re}\left\langle y, x_\lambda\right\rangle - \operatorname{Re}\left\langle y_\lambda, u_\lambda\right\rangle + R\left\|y\right\| + \frac{\delta}{2}R\\ &\leqslant \left\|y\right\|^2 - \operatorname{Re}\left\langle y_\lambda, u_\lambda\right\rangle + R\left\|y\right\| + \frac{\delta}{2}R, \end{split}$$

where we have used the monotonicity of *Bλ* and *Bλ(*0*)* = 0 in the fourth line. Hence, we obtain

$$\frac{\delta}{2}\left\|y_\lambda\right\| = \operatorname{Re}\left\langle y_\lambda, u_\lambda\right\rangle \leqslant \left\|y\right\|^2 + R\left\|y\right\| + \frac{\delta}{2}R,$$

which shows that $(y_\lambda)_{\lambda>0}$ is bounded and thus, also $\sup_{\lambda>0}\|B_\lambda(x_\lambda)\| < \infty$.
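The perturbation theorem can be illustrated in one dimension. The following sketch is our own toy example, not from the text: it takes $A = \partial|\cdot|$ (the sign relation) and $B = \operatorname{id}$ on $\mathbb{R}$, both maximal monotone with $\operatorname{int}\operatorname{dom}(A) \cap \operatorname{dom}(B) = \mathbb{R} \neq \emptyset$. Theorem 17.2.7 then guarantees $\operatorname{ran}(1+A+B) = \mathbb{R}$, and in this simple case the resolvent can even be computed in closed form.

```python
# Toy illustration of Theorem 17.2.7 (our own example): A = ∂|·|, B = id on R.
# Then y ∈ (1 + A + B)(x) = 2x + ∂|x| is solved by a scaled soft-threshold,
# so every y ∈ R is attained, i.e. ran(1 + A + B) = R.

def resolvent_sum(y: float) -> float:
    """Solve y ∈ 2x + ∂|x| explicitly."""
    if y > 1.0:
        return (y - 1.0) / 2.0
    if y < -1.0:
        return (y + 1.0) / 2.0
    return 0.0  # for |y| <= 1 the multivalued part ∂|0| = [-1, 1] absorbs y

def in_relation(x: float, y: float, tol: float = 1e-12) -> bool:
    """Check y ∈ 2x + ∂|x|."""
    if x != 0.0:
        return abs(2.0 * x + (1.0 if x > 0 else -1.0) - y) < tol
    return -1.0 - tol <= y <= 1.0 + tol  # ∂|0| = [-1, 1]

for y in [-3.0, -1.0, -0.25, 0.0, 0.5, 1.0, 4.0]:
    x = resolvent_sum(y)
    assert in_relation(x, y)  # every right-hand side y is attained
```

Note that the single-valued resolvent of the multivalued sum mirrors the statement of Minty's theorem: maximal monotonicity of $A+B$ is equivalent to surjectivity of $1+A+B$.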

## **17.3 A Solution Theory for Evolutionary Inclusions**

In this section we provide a solution theory for evolutionary inclusions by generalising Picard's theorem (see Theorem 6.2.1) to the following situation.

Throughout, we assume that $A \subseteq H \times H$ is a maximal monotone relation with $(0,0) \in A$. Moreover, let $M\colon \operatorname{dom}(M) \subseteq \mathbb{C} \to L(H)$ be a material law satisfying the usual positive definiteness constraint

$$\exists\, \nu_0 \geqslant s_{\mathrm{b}}(M),\ c > 0\ \forall z \in \mathbb{C}_{\operatorname{Re}\geqslant\nu_0},\ \phi \in H \;:\; \operatorname{Re}\left\langle \phi, zM(z)\phi\right\rangle \geqslant c\left\|\phi\right\|^2.$$

Then for $\nu \geqslant \max\{\nu_0,0\}$, $\nu \neq 0$, we consider *evolutionary inclusions* of the form

$$(u,f) \in \overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}},\tag{17.3}$$

where $A_{L_{2,\nu}(\mathbb{R};H)}$ is defined as in Corollary 17.1.8. The solution theory for this kind of problem is as follows.

**Theorem 17.3.1** *Let $\nu \geqslant \max\{\nu_0,0\}$, $\nu \neq 0$. Then the inverse relation $S_\nu := \bigl(\overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}}\bigr)^{-1}$ is a Lipschitz-continuous mapping, $\operatorname{dom}(S_\nu) = L_{2,\nu}(\mathbb{R};H)$ and $\|S_\nu\|_{\mathrm{Lip}} \leqslant \frac{1}{c}$. Moreover, the solution mapping $S_\nu$ is causal and independent of $\nu$ in the sense that $S_\nu(f) = S_\mu(f)$ for each $f \in L_{2,\nu}(\mathbb{R};H) \cap L_{2,\mu}(\mathbb{R};H)$ and $\mu \geqslant \nu \geqslant \max\{\nu_0,0\}$, $\nu \neq 0$.*

In order to prove this theorem, we need some prerequisites. We start with an estimate which will give us the uniqueness of the solution as well as the causality of the solution mapping $S_\nu$.

**Proposition 17.3.2** *Let $\nu \geqslant \max\{\nu_0,0\}$, $\nu \neq 0$, and*

$$(u,f), (x,g) \in \overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}}.$$

*Then for all $a \in \mathbb{R}$*

$$\left\|\mathbb{1}_{(-\infty,a]}(u - x)\right\|_{L_{2,\nu}} \leqslant \frac{1}{c}\left\|\mathbb{1}_{(-\infty,a]}(f - g)\right\|_{L_{2,\nu}}.$$

*Proof* By definition, we find sequences $((u_n,f_n))_{n\in\mathbb{N}}$ and $((x_n,g_n))_{n\in\mathbb{N}}$ in $\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}$ such that $u_n \to u$, $x_n \to x$, $f_n \to f$ and $g_n \to g$ as $n \to \infty$. In particular, for each $n \in \mathbb{N}$ we find $v_n, y_n \in L_{2,\nu}(\mathbb{R};H)$ such that $(u_n,v_n), (x_n,y_n) \in A_{L_{2,\nu}(\mathbb{R};H)}$ and

$$
\partial_{t,\nu}M(\partial_{t,\nu})u_n + v_n = f_n,
$$

$$
\partial_{t,\nu}M(\partial_{t,\nu})x_n + y_n = g_n.
$$

Since $(0,0) \in A$, we infer $(\mathbb{1}_{(-\infty,a]}u_n, \mathbb{1}_{(-\infty,a]}v_n), (\mathbb{1}_{(-\infty,a]}x_n, \mathbb{1}_{(-\infty,a]}y_n) \in A_{L_{2,\nu}(\mathbb{R};H)}$ and hence, we may estimate

$$\begin{split} &\operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}(f_n - g_n), u_n - x_n\right\rangle\\ &\quad= \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\partial_{t,\nu}M(\partial_{t,\nu})(u_n - x_n), u_n - x_n\right\rangle\\ &\qquad+ \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}v_n - \mathbb{1}_{(-\infty,a]}y_n, \mathbb{1}_{(-\infty,a]}u_n - \mathbb{1}_{(-\infty,a]}x_n\right\rangle\\ &\quad\geqslant \operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\partial_{t,\nu}M(\partial_{t,\nu})(u_n - x_n), u_n - x_n\right\rangle, \end{split}$$

where we used Corollary 17.1.8. Moreover, since $z \mapsto (zM(z))^{-1}$ is a material law, $(\partial_{t,\nu}M(\partial_{t,\nu}))^{-1}$ is causal. By Proposition 16.2.3, for $\phi \in \operatorname{dom}(\partial_{t,\nu}M(\partial_{t,\nu}))$ we have $\operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}\partial_{t,\nu}M(\partial_{t,\nu})\phi, \phi\right\rangle \geqslant c\left\|\mathbb{1}_{(-\infty,a]}\phi\right\|^2$. Thus, we end up with

$$\operatorname{Re}\left\langle \mathbb{1}_{(-\infty,a]}(f_n - g_n), u_n - x_n\right\rangle \geqslant c\left\|\mathbb{1}_{(-\infty,a]}(u_n - x_n)\right\|^2,$$

which yields

$$\left\|\mathbb{1}_{(-\infty,a]}(u_n - x_n)\right\| \leqslant \frac{1}{c}\left\|\mathbb{1}_{(-\infty,a]}(f_n - g_n)\right\|.$$

Letting *n* → ∞, we derive the assertion.

Next, we address the existence of a solution of (17.3) for suitable right-hand sides $f$. For this, we provide another useful characterisation of weak differentiability for functions in $L_{2,\nu}(\mathbb{R};H)$.

**Lemma 17.3.3** *Let $\nu \in \mathbb{R}$, $u \in L_{2,\nu}(\mathbb{R};H)$. Then $u \in \operatorname{dom}(\partial_{t,\nu})$ if and only if $\sup_{0<h\leqslant h_0}\frac{1}{h}\|\tau_h u - u\| < \infty$ for some $h_0 > 0$. In either case*

$$\frac{1}{h}(\tau_h u - u) \to \partial_{t,\nu}u \quad (h \to 0)$$

*in $L_{2,\nu}(\mathbb{R};H)$.*

*Proof* For $h > 0$ we consider the operator $D_h\colon L_{2,\nu}(\mathbb{R};H) \to L_{2,\nu}(\mathbb{R};H)$ given by $D_h v := \frac{1}{h}(\tau_h v - v)$. If $v \in C^1_{\mathrm{c}}(\mathbb{R};H)$ we estimate

$$\begin{split} \left\|D_h v\right\|^2 &= \int_{\mathbb{R}} \frac{1}{h^2}\left\|v(t+h) - v(t)\right\|^2 \mathrm{e}^{-2\nu t}\,\mathrm{d}t = \int_{\mathbb{R}} \frac{1}{h^2}\left\|\int_0^h v'(t+s)\,\mathrm{d}s\right\|^2 \mathrm{e}^{-2\nu t}\,\mathrm{d}t\\ &\leqslant \int_{\mathbb{R}} \frac{1}{h}\int_0^h \left\|v'(t+s)\right\|^2\mathrm{d}s\,\mathrm{e}^{-2\nu t}\,\mathrm{d}t = \frac{1}{h}\int_0^h\int_{\mathbb{R}} \left\|v'(t+s)\right\|^2\mathrm{e}^{-2\nu t}\,\mathrm{d}t\,\mathrm{d}s\\ &\leqslant \mathrm{e}^{2|\nu| h}\left\|v'\right\|^2. \end{split}$$

By density of $C^1_{\mathrm{c}}(\mathbb{R};H)$ in $H^1_\nu(\mathbb{R};H)$ we infer that

$$\sup_{0<h\leqslant 1}\|D_h\|_{L(H^1_\nu(\mathbb{R};H),\,L_{2,\nu}(\mathbb{R};H))} \leqslant \mathrm{e}^{|\nu|}.$$

Moreover, for $v \in C^1_{\mathrm{c}}(\mathbb{R};H)$ it is clear that $D_h v \to v'$ in $L_{2,\nu}(\mathbb{R};H)$ as $h \to 0$ by dominated convergence. Since $(D_h)_{0<h\leqslant 1}$ is uniformly bounded, the convergence carries over to elements in $H^1_\nu(\mathbb{R};H)$, which proves the first asserted implication and the convergence statement.

Assume now that $\sup_{0<h\leqslant h_0}\frac{1}{h}\|\tau_h u - u\| < \infty$ for some $h_0 > 0$. Choosing a suitable sequence $(h_n)_{n\in\mathbb{N}}$ in $(0,h_0]$ with $h_n \to 0$ as $n \to \infty$, we can assume that $\frac{1}{h_n}(\tau_{h_n}u - u) \to v$ weakly for some $v \in L_{2,\nu}(\mathbb{R};H)$. Then we compute for each $\phi \in C^\infty_{\mathrm{c}}(\mathbb{R};H)$

$$
\begin{split}
\langle v,\phi\rangle &= \lim_{n\to\infty} \int_{\mathbb{R}} \frac{1}{h_n}\left\langle u(t+h_n) - u(t), \phi(t)\right\rangle\mathrm{e}^{-2\nu t}\,\mathrm{d}t\\
&= \lim_{n\to\infty} \int_{\mathbb{R}} \frac{1}{h_n}\left\langle u(t), \phi(t-h_n)\mathrm{e}^{2\nu h_n} - \phi(t)\right\rangle\mathrm{e}^{-2\nu t}\,\mathrm{d}t\\
&= \int_{\mathbb{R}} \left\langle u(t), -\phi'(t) + 2\nu\phi(t)\right\rangle\mathrm{e}^{-2\nu t}\,\mathrm{d}t = \left\langle u, \partial^*_{t,\nu}\phi\right\rangle,
\end{split}
$$

which shows $u \in \operatorname{dom}(\partial^{**}_{t,\nu}) = \operatorname{dom}(\partial_{t,\nu})$, since $C^\infty_{\mathrm{c}}(\mathbb{R};H)$ is a core for $\partial^*_{t,\nu}$ (see Proposition 3.2.4 and Corollary 3.2.6).
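Lemma 17.3.3 can be checked numerically in a simple scalar case. The following sketch is our own illustration, not part of the proof; the test function $u$, the weight $\nu = 1$ and the Riemann-sum quadrature are our assumptions. It approximates the weighted $L_{2,\nu}$-distance between the difference quotient $\frac{1}{h}(\tau_h u - u)$ and $u'$ and confirms that it shrinks as $h \to 0$.

```python
import math

# Numerical sketch of Lemma 17.3.3 for H = R: for a smooth, rapidly decaying
# u, the difference quotients (τ_h u - u)/h converge to u' in the weighted
# norm of L_{2,ν}(R).  Here (τ_h u)(t) = u(t + h).

nu = 1.0
u = lambda t: math.exp(-t * t)               # test function u ∈ H^1_ν(R)
du = lambda t: -2.0 * t * math.exp(-t * t)   # its classical derivative

def weighted_error(h: float, a: float = -8.0, b: float = 8.0, n: int = 16000) -> float:
    """Riemann-sum approximation of ||(τ_h u - u)/h - u'||_{L_{2,ν}}."""
    dt = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + i * dt
        dq = (u(t + h) - u(t)) / h
        s += (dq - du(t)) ** 2 * math.exp(-2.0 * nu * t) * dt
    return math.sqrt(s)

errors = [weighted_error(h) for h in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]  # the error decreases as h → 0
```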

**Proposition 17.3.4** *Let $\nu \geqslant \nu_0$ and $f \in \operatorname{dom}(\partial_{t,\nu})$. Then there exists $u \in \operatorname{dom}(\partial_{t,\nu})$ such that*

$$(u,f) \in \partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}.$$

*Proof* We recall that $B := \partial_{t,\nu}M(\partial_{t,\nu}) - c$ is maximal monotone by Example 17.1.3. Let $\lambda > 0$ and set

$$u_\lambda := \left(c + B + \left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda\right)^{-1}(f) = \left(\partial_{t,\nu}M(\partial_{t,\nu}) + \left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda\right)^{-1}(f).$$

We remark that $\left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda = (A_\lambda)_{L_{2,\nu}(\mathbb{R};H)}$ (see Exercise 17.6). Hence, we have $\tau_h\left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda = \left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda\tau_h$ for each $h > 0$. Thus, we obtain

$$\tau_h u_\lambda = \left(\partial_{t,\nu}M(\partial_{t,\nu}) + \left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda\right)^{-1}(\tau_h f),$$

and so, due to the monotonicity of $B$ and $\left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda$,

$$\|\tau_h u_\lambda - u_\lambda\| \leqslant \frac{1}{c}\|\tau_h f - f\|.$$

Dividing both sides by $h$ and using Lemma 17.3.3, we infer that $u_\lambda \in \operatorname{dom}(\partial_{t,\nu})$ and

$$\|\partial_{t,\nu}u_\lambda\| = \lim_{h\to 0}\frac{1}{h}\|\tau_h u_\lambda - u_\lambda\| \leqslant \frac{1}{c}\sup_{0<h\leqslant 1}\frac{1}{h}\|\tau_h f - f\| =: K$$

and hence,

$$\sup_{\lambda>0}\left\|\left(A_{L_{2,\nu}(\mathbb{R};H)}\right)_\lambda(u_\lambda)\right\| = \sup_{\lambda>0}\left\|f - \partial_{t,\nu}M(\partial_{t,\nu})u_\lambda\right\| \leqslant \left\|f\right\| + K\left\|M(\partial_{t,\nu})\right\|.$$

Proposition 17.2.4 implies $u_\lambda \to u$ as $\lambda \to 0$ and $(u,f) \in \partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}$. Moreover, since $(\partial_{t,\nu}u_\lambda)_{\lambda>0}$ is uniformly bounded, we can choose a suitable null sequence $(\lambda_n)_{n\in\mathbb{N}}$ in $(0,\infty)$ such that $\partial_{t,\nu}u_{\lambda_n} \to v$ weakly for some $v \in L_{2,\nu}(\mathbb{R};H)$. Since $\partial_{t,\nu}$ is closed and hence weakly closed (either use $\partial^{**}_{t,\nu} = \partial_{t,\nu}$ or Mazur's theorem [50, Corollary 2.11]), we infer that $u \in \operatorname{dom}(\partial_{t,\nu})$.

We are now in the position to prove Theorem 17.3.1.

*Proof of Theorem 17.3.1* Let $\nu \geqslant \nu_0$. Since $\partial_{t,\nu}M(\partial_{t,\nu}) - c$ is monotone (Example 17.1.3), the relation $\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)} - c$ is monotone and thus, $(\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)})^{-1}$ defines a Lipschitz-continuous mapping with smallest Lipschitz constant less than or equal to $\frac{1}{c}$. Since this mapping is densely defined by Proposition 17.3.4, it follows that $S_\nu = \bigl(\overline{\partial_{t,\nu}M(\partial_{t,\nu}) + A_{L_{2,\nu}(\mathbb{R};H)}}\bigr)^{-1}$ is Lipschitz-continuous with $\|S_\nu\|_{\mathrm{Lip}} \leqslant \frac{1}{c}$ and $\operatorname{dom}(S_\nu) = L_{2,\nu}(\mathbb{R};H)$. Moreover, $S_\nu$ is causal, since for $f,g \in L_{2,\nu}(\mathbb{R};H)$ with $\mathbb{1}_{(-\infty,a]}f = \mathbb{1}_{(-\infty,a]}g$ for some $a \in \mathbb{R}$ it follows that $\mathbb{1}_{(-\infty,a]}S_\nu(f) = \mathbb{1}_{(-\infty,a]}S_\nu(g)$ by Proposition 17.3.2. Thus, the only thing left to be shown is the independence of the parameter $\nu$. So, let $f \in L_{2,\nu}(\mathbb{R};H) \cap L_{2,\mu}(\mathbb{R};H)$ for some $\nu_0 \leqslant \nu \leqslant \mu$. Then we find a sequence $(\phi_n)_{n\in\mathbb{N}}$ in $C^1_{\mathrm{c}}(\mathbb{R};H)$ with $\phi_n \to f$ in both $L_{2,\nu}(\mathbb{R};H)$ and $L_{2,\mu}(\mathbb{R};H)$. We set $u_n := S_\nu(\phi_n) \in L_{2,\nu}(\mathbb{R};H)$ and, since $0 = S_\nu(0)$, we derive that $\inf\operatorname{spt} u_n \geqslant \inf\operatorname{spt}\phi_n > -\infty$ by Proposition 17.3.2. Thus, $u_n \in L_{2,\mu}(\mathbb{R};H)$ and, since $u_n \in \operatorname{dom}(\partial_{t,\nu})$ by Proposition 17.3.4 and $\operatorname{spt}\partial_{t,\nu}u_n \subseteq \operatorname{spt} u_n$, we infer that also $\partial_{t,\nu}u_n \in L_{2,\mu}(\mathbb{R};H)$, which shows $u_n \in \operatorname{dom}(\partial_{t,\mu})$ and $\partial_{t,\mu}u_n = \partial_{t,\nu}u_n$ by Exercise 11.1. By Theorem 5.3.6 it follows that

$$\begin{aligned} \partial_{t,\nu}M(\partial_{t,\nu})u_n &= M(\partial_{t,\nu})\partial_{t,\nu}u_n = M(\partial_{t,\nu})\partial_{t,\mu}u_n\\ &= M(\partial_{t,\mu})\partial_{t,\mu}u_n = \partial_{t,\mu}M(\partial_{t,\mu})u_n. \end{aligned}$$

Since we have $(u_n, \phi_n - \partial_{t,\nu}M(\partial_{t,\nu})u_n) \in A_{L_{2,\nu}(\mathbb{R};H)}$ it follows that $(u_n, \phi_n - \partial_{t,\mu}M(\partial_{t,\mu})u_n) \in A_{L_{2,\mu}(\mathbb{R};H)}$ by the definition of $A_{L_{2,\mu}(\mathbb{R};H)}$ and thus, $u_n = S_\mu(\phi_n)$. Letting $n \to \infty$, we finally derive $S_\mu(f) = S_\nu(f)$.

## **17.4 Maxwell's Equations in Polarisable Media**

We recall Maxwell's equations from Chap. 6. Let $\Omega \subseteq \mathbb{R}^3$ be open. Then the electric field $E$ and the magnetic induction $B$ are linked via Faraday's law

$$
\partial_{t,\nu}B + \operatorname{curl}_0 E = 0,
$$

where we assume the electric boundary condition for $E$. Moreover, the electric displacement $D$, the current $j_c$ and the magnetic field $H$ are linked via Ampère's law

$$
\partial_{t,\nu}D + j_c - \operatorname{curl} H = j_0,
$$

where $j_0$ is a given external current. Classically, $D$ and $E$ as well as $B$ and $H$ are linked by the constitutive relations

$$D = \varepsilon E \quad\text{and}\quad B = \mu H,$$

where $\varepsilon, \mu \in L(L_2(\Omega)^3)$ model the dielectricity and the magnetic permeability, respectively. In a non-polarisable medium, we would additionally assume Ohm's law, which links $j_c$ and $E$ by $j_c = \sigma E$ with $\sigma \in L(L_2(\Omega)^3)$. In polarisable media, however, this relation is replaced as follows:

$$\begin{aligned} \|E\| &< E\_0 \Rightarrow j\_c = \sigma E\\ \|E\| &= E\_0 \Rightarrow \exists \lambda \geqslant 0 : j\_c = (\sigma + \lambda)E, \end{aligned} \tag{17.4}$$

where $E_0 > 0$ is called the threshold of ionisation of the underlying medium. The above relation is used to model the following phenomenon: assume that the medium is not or only weakly electrically conductive (i.e., $\sigma$ is very small), but if the electric field is strong enough (i.e., it reaches the threshold $E_0$), the medium polarises and allows for a current flow proportional to the electric field. Such phenomena occur for instance in certain gases between two capacitor plates, where the gas becomes a conductor if the electric field is strong enough.

Our first goal is to formulate (17.4) in terms of a binary relation. For this, we set

$$B := \left\{ (u, v) \in L\_2(\Omega)^3 \times L\_2(\Omega)^3 \; ; \; \|u\| \leqslant E\_0, \; \text{Re}\; \langle u, v \rangle = E\_0 \; \|v\| \right\}.$$

**Lemma 17.4.1** *Let $u, v \in L_2(\Omega)^3$. Then $(u,v) \in B$ if and only if*

$$(\|u\| \leqslant E_0)\ \text{ and }\ (\|u\| < E_0 \Rightarrow v = 0)\ \text{ and }\ (\|u\| = E_0 \Rightarrow \exists \lambda \geqslant 0:\ v = \lambda u).$$

*Proof* Assume first that $(u,v) \in B$. Then $\|u\| \leqslant E_0$ by definition. Moreover,

$$E_0\|v\| = \operatorname{Re}\langle u,v\rangle \leqslant \|u\|\,\|v\|,$$

and hence, if $\|u\| < E_0$ it follows that $v = 0$. Moreover, if $\|u\| = E_0$ we have equality and thus, $u$ and $v$ are linearly dependent; that is, we find $\lambda_1, \lambda_2 \in \mathbb{C}$ with $(\lambda_1,\lambda_2) \neq (0,0)$ such that $\lambda_1 u + \lambda_2 v = 0$. Note that $\lambda_2 \neq 0$ since $u \neq 0$ and hence, we get $v = \lambda u$ with $\lambda := -\frac{\lambda_1}{\lambda_2}$. We then have

$$0 \le |\lambda| E\_0^2 = \|v\| \, E\_0 = \text{Re}\,\langle u, v \rangle = \text{Re}\,\lambda \, \|u\|^2 = \text{Re}\,\lambda \, E\_0^2,$$

which shows $0 \leqslant \operatorname{Re}\lambda = |\lambda|$ and thus, $\lambda \geqslant 0$. The other implication is trivial.

The latter lemma shows that $(E, j_c)$ satisfies (17.4) if and only if $(E, j_c - \sigma E) \in B$, or equivalently $(E, j_c) \in \sigma + B$. Thus, we may reformulate Maxwell's equations in a polarisable medium as follows:

$$\left(\begin{pmatrix} E\\ H\end{pmatrix}, \begin{pmatrix} j_0\\ 0\end{pmatrix}\right) \in \partial_{t,\nu}\begin{pmatrix} \varepsilon & 0\\ 0 & \mu\end{pmatrix} + \begin{pmatrix} \sigma & 0\\ 0 & 0\end{pmatrix} + \begin{pmatrix} B & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{pmatrix}.$$

To apply our solution theory in Theorem 17.3.1, we need to ensure that

$$A := \begin{pmatrix} B & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{pmatrix} = \begin{pmatrix} B & 0\\ 0 & 0\end{pmatrix} + \begin{pmatrix} 0 & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{pmatrix}\tag{17.5}$$

defines a maximal monotone relation on $L_2(\Omega)^6 \times L_2(\Omega)^6$. This will be done by the perturbation result presented in Theorem 17.2.7. We start by showing the maximal monotonicity of $B$.

**Lemma 17.4.2** *We define the function $I\colon L_2(\Omega)^3 \to (-\infty,\infty]$ by*

$$I(u) = \begin{cases} 0 & \text{if } \|u\| \leqslant E\_0, \\ \infty & \text{otherwise.} \end{cases}$$

*Then I is convex, proper and l.s.c. Moreover, B* = *∂I . In particular, B is maximal monotone.*

*Proof* This is part of Exercise 17.7.
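For the relation $B = \partial I$ of Lemma 17.4.2, the resolvent $(1+\lambda\partial I)^{-1}$ is the metric projection onto the ball $\{u : \|u\| \leqslant E_0\}$, so the Yosida approximation $B_\lambda = \frac{1}{\lambda}(1 - P)$ can be written down explicitly. The following finite-dimensional sketch (our own illustration, with $\mathbb{R}^3$ in place of $L_2(\Omega)^3$ and helper names of our choosing) makes this concrete.

```python
import math

# Finite-dimensional sketch of B = ∂I with I the indicator of the closed
# ball of radius E0: the resolvent (1 + λB)^{-1} is the metric projection
# onto the ball, so the Yosida approximation is B_λ(u) = (u - P(u))/λ.

E0 = 2.0

def project(u):
    """Metric projection onto the closed ball {x : |x| <= E0}."""
    r = math.sqrt(sum(c * c for c in u))
    if r <= E0:
        return list(u)
    return [c * E0 / r for c in u]

def yosida(u, lam):
    """Yosida approximation B_λ(u) = (u - P(u)) / λ of B = ∂I."""
    p = project(u)
    return [(a - b) / lam for a, b in zip(u, p)]

# Inside the ball B_λ vanishes; outside it points radially outward,
# mirroring Lemma 17.4.1: v = λu with λ >= 0 only occurs on the sphere.
assert yosida([1.0, 0.0, 0.0], 0.5) == [0.0, 0.0, 0.0]
v = yosida([4.0, 0.0, 0.0], 0.5)
assert abs(v[0] - 4.0) < 1e-12 and v[1] == v[2] == 0.0
```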

**Proposition 17.4.3** *The relation A given by (17.5) is maximal monotone with (*0*,* 0*)* ∈ *A.*

*Proof* Since $B$ is maximal monotone by Lemma 17.4.2, it is easy to see that $\bigl(\begin{smallmatrix} B & 0\\ 0 & 0\end{smallmatrix}\bigr)$ is maximal monotone, too. Moreover, by definition we see that $0 \in \operatorname{int}\operatorname{dom}(B)$ and thus, $0 \in \operatorname{int}\operatorname{dom}\bigl(\begin{smallmatrix} B & 0\\ 0 & 0\end{smallmatrix}\bigr) = \operatorname{int}\operatorname{dom}(B) \times L_2(\Omega)^3$. Since clearly $0 \in \operatorname{dom}\bigl(\begin{smallmatrix} 0 & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{smallmatrix}\bigr)$ and $\bigl(\begin{smallmatrix} 0 & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{smallmatrix}\bigr)$ is maximal monotone (see Example 17.1.3), the assertion follows from Theorem 17.2.7.

**Theorem 17.4.4** *Let $\varepsilon, \mu, \sigma \in L(L_2(\Omega)^3)$ with $\varepsilon, \mu$ selfadjoint. Moreover, assume there exist $\nu_0, c > 0$ such that*

$$
\nu \varepsilon + \text{Re}\,\sigma \geqslant c \text{ and } \mu \geqslant c \quad (\nu \geqslant \nu\_0).
$$

*Then for each $\nu \geqslant \nu_0$ we have that*

$$S_\nu := \left(\overline{\partial_{t,\nu}\begin{pmatrix} \varepsilon & 0\\ 0 & \mu\end{pmatrix} + \begin{pmatrix} \sigma & 0\\ 0 & 0\end{pmatrix} + \begin{pmatrix} B & -\operatorname{curl}\\ \operatorname{curl}_0 & 0\end{pmatrix}_{L_{2,\nu}(\mathbb{R};L_2(\Omega)^6)}}\right)^{-1}$$

*is a Lipschitz-continuous mapping with $\operatorname{dom}(S_\nu) = L_{2,\nu}(\mathbb{R};L_2(\Omega)^6)$ and $\|S_\nu\|_{\mathrm{Lip}} \leqslant \frac{1}{c}$. Moreover, $S_\nu$ is causal and independent of $\nu$ in the sense that $S_\nu(f) = S_\eta(f)$ whenever $\nu, \eta \geqslant \nu_0$ and $f \in L_{2,\nu}(\mathbb{R};L_2(\Omega)^6) \cap L_{2,\eta}(\mathbb{R};L_2(\Omega)^6)$.*

*Proof* This follows from Theorem 17.3.1 applied to $M(z) := \bigl(\begin{smallmatrix} \varepsilon & 0\\ 0 & \mu\end{smallmatrix}\bigr) + z^{-1}\bigl(\begin{smallmatrix} \sigma & 0\\ 0 & 0\end{smallmatrix}\bigr)$ and $A$ as in (17.5).

## **17.5 Comments**

The concept of maximal monotone relations in Hilbert spaces was first introduced by Minty in 1960 for the study of networks [66] and became a well-studied subject, with generalisations also to the Banach space case. For this topic we refer to the monographs [16] and [49, Chapter 3]. The concept of subgradients is older, and it was shown by Rockafellar [99] that subgradients are maximal monotone. Indeed, one can show that subgradients are precisely the cyclically maximal monotone relations (see e.g. [16, Théorème 2.5]).

Minty's theorem was proved in 1962 [65] and generalised to the case of reflexive Banach spaces by Rockafellar in 1970 [100]. The proof presented here follows [106] and was kindly communicated by Ralph Chill and Hendrik Vogt.

The classical way to approach differential inclusions of the form $(u,f) \in \partial_t + A$, where $A$ is maximal monotone, uses the theory of nonlinear semigroups of contractions, introduced by Komura in the Hilbert space case [56] and generalised to the Banach space case by Crandall and Pazy [24]. The results on evolutionary inclusions presented in this chapter are based on [117, 118] and were further generalised to non-autonomous problems in [122, 126].

The model for Maxwell's equations in polarisable media can be found in [36, Chapter VII]. We note that in this reference, condition (17.4) is replaced by

$$\begin{aligned} |E| &< E\_0 \Rightarrow j\_c = \sigma E\\ |E| &= E\_0 \Rightarrow \exists \lambda \geqslant 0 : j\_c = (\sigma + \lambda)E, \end{aligned}$$

which should hold almost everywhere. To solve this problem, one cannot apply Theorem 17.2.7, since 0 is not an interior point of the domain of the corresponding relation and thus, a weaker notion of solution is needed to tackle this problem, see [36, Theorem 8.1].

## **Exercises**

**Exercise 17.1** Let $f\colon H \to (-\infty,\infty]$ be convex, proper and l.s.c. Moreover, assume that $f$ is differentiable in $x \in H$ (in particular, $f < \infty$ in a neighbourhood of $x$). Show that $(x,y) \in \partial f$ if and only if $y = f'(x)$.

**Exercise 17.2** Let $f, g\colon H \to (-\infty,\infty]$. Prove that


**Exercise 17.3** Let $H$ be a Hilbert space, $(x_n)_{n\in\mathbb{N}}$ in $H$ and $x \in H$. Show that $x_n \to x$ if and only if $x_n \to x$ weakly and $\limsup_{n\to\infty}\|x_n\| \leqslant \|x\|$.

**Exercise 17.4** Let *X* be a normed space (or, more generally, a topological vector space) and *C* ⊆ *X* convex. Prove the following statements:


(c) If $C$ is open and $K \subseteq X$ is open with $K \subseteq \overline{C}$, then $K \subseteq C$.

*Hint*: For (a) take an open set $U \subseteq X$ with $0 \in U$ such that $x + U - U \subseteq C$ and show $(1-t)x + ty + (1-t)U \subseteq C$.

**Exercise 17.5** Let *X* be a topological space and *U* ⊆ *X* open. We equip *U* with the trace topology. Prove the following statements:

(a) For $A \subseteq U$ we have $\overline{A}^U = \overline{A}^X \cap U$ and $\operatorname{int}_U A = \operatorname{int}_X A$.

(b) If $A \subseteq U$ is closed in $U$ and $\operatorname{int}_U A = \emptyset$, then $\operatorname{int}_X \overline{A}^X = \emptyset$.

(c) If *X* is a Baire space, then *U* is a Baire space.

Recall that a topological space $X$ is a *Baire space* if for each sequence $(A_n)_{n\in\mathbb{N}}$ of closed sets with $\operatorname{int} A_n = \emptyset$ it follows that $\operatorname{int}\bigcup_{n\in\mathbb{N}} A_n = \emptyset$ or, equivalently, if for each sequence $(U_n)_{n\in\mathbb{N}}$ of open and dense sets it follows that $\bigcap_{n\in\mathbb{N}} U_n$ is dense.

**Exercise 17.6** Let *A* ⊆ *H* × *H* be maximal monotone.

(a) Let $\mu, \lambda > 0$. Show that $(A_\lambda)_\mu = A_{\lambda+\mu}$.

(b) Let $(0,0) \in A$ and $(\Omega,\mathcal{A},\mu)$ a $\sigma$-finite measure space. Prove that $(A_\lambda)_{L_2(\mu)} = (A_{L_2(\mu)})_\lambda$ for each $\lambda > 0$.

**Exercise 17.7** Let $H$ be a Hilbert space and $C \subseteq H$ non-empty, convex and closed. Moreover, define $I_C\colon H \to (-\infty,\infty]$ by

$$I\_C(x) := \begin{cases} 0 & \text{if } x \in C, \\ \infty & \text{otherwise.} \end{cases}$$

Show that *IC* is convex, proper and l.s.c. and show

$$(x,y) \in \partial I_C \Leftrightarrow x \in C,\ \forall u \in C:\ \operatorname{Re}\left\langle y, u - x\right\rangle \leqslant 0.$$

Moreover, prove Lemma 17.4.2.

## **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Appendix A Derivations of Main Equations**

In this appendix we derive the main equations studied in this book from a physics point of view. We start with the heat equation and then turn to Maxwell's equations. After that, we derive the equations of linear elasticity and finally deduce the wave equation from elasticity theory.

## **A.1 Heat Equation**

The heat equation describes the energy transport between materials due to a difference in temperature, where the transport evolves from high temperature to low temperature. Let $\Omega \subseteq \mathbb{R}^d$ be open. Let $\theta\colon \mathbb{R}\times\Omega \to \mathbb{R}$ be the heat distribution. As a physical principle, we ask for conservation of total energy. For a Borel subset $V \subseteq \Omega$ with smooth boundary let $Q_V\colon \mathbb{R} \to \mathbb{R}$ given by $Q_V(t) := \int_V \theta(t,x)\,\mathrm{d}x$ be the time-dependent heat content (i.e., the energy) in $V$. Then for a system without external heat sources, changes of $Q_V$ can only result from heat fluxes along the boundary of $V$. Let $q\colon \mathbb{R}\times\Omega \to \mathbb{R}^d$ be the heat flux, which can be interpreted as a density. Then

$$
\partial\_t \mathcal{Q}\_V(t) = -\int\_{\partial V} q(t, \mathbf{x}) \cdot \nu(\mathbf{x}) \, \mathrm{d}S(\mathbf{x}),
$$

where *ν* is the outward unit normal on *∂V* . By Gauss' divergence theorem, we thus have

$$\partial\_t \mathcal{Q}\_V(t) = -\int\_V \text{div}\, q(t,x) \, \text{d}x.$$


On the other hand, interchanging the time derivative and integration, we observe

$$
\partial\_t \mathcal{Q}\_V(t) = \int\_V \partial\_t \theta(t, x) \, \mathrm{d}x.
$$

Hence,

$$\int_V \left(\partial_t\theta(t,x) + \operatorname{div} q(t,x)\right)\mathrm{d}x = 0.$$

Since $V \subseteq \Omega$ was arbitrary, we conclude the continuity equation

$$
\partial\_t \theta + \text{div} \, q = 0.
$$

In the presence of an external heat source $Q\colon \mathbb{R}\times\Omega \to \mathbb{R}$, the continuity equation turns into the heat flux balance

$$
\partial\_t \theta + \text{div} \, q = \mathcal{Q}.
$$

In order to incorporate that the energy transport runs from regions of high temperature to regions of low temperature, we make use of Fourier's law stating that the heat flux at time *t* and position *x* is determined by the gradient of the temperature at *t* and *x*; that is,

$$q(t, \mathbf{x}) = -a(\mathbf{x}) \operatorname{grad} \theta(t, \mathbf{x}),$$

where $a\colon \Omega \to \mathbb{R}^{d\times d}$ is the heat conductivity, and we may assume that $a(x)$ is invertible for all $x \in \Omega$. We thus arrive at the heat equation

$$
\partial\_t \theta + \text{div} \, q = \mathcal{Q},
$$

$$
a^{-1}q + \text{grad} \, \theta = 0,
$$

or, put differently,

$$
\partial\_t \theta - \text{div}(a \,\text{grad}\,\theta) = Q.
$$
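The derivation above can be accompanied by a minimal numerical sketch (not from the text): the one-dimensional heat equation $\partial_t\theta = \partial_x(a\,\partial_x\theta)$ with constant $a > 0$ and $Q = 0$, discretised by an explicit finite-difference scheme. The grid sizes and the stability bound $a\,\Delta t/\Delta x^2 \leqslant \frac{1}{2}$ are our own choices.

```python
# Explicit finite differences for ∂_t θ = a ∂_x² θ on (0, 2) with
# homogeneous Dirichlet boundary values; an approximate point source
# spreads out while the total heat content ∫ θ dx stays (nearly) conserved.

a = 1.0
nx, dx = 101, 0.02
dt = 0.25 * dx * dx / a          # respects the explicit stability condition

theta = [0.0] * nx
theta[nx // 2] = 1.0 / dx        # approximate point source in the middle

for _ in range(200):
    lap = [0.0] * nx             # discrete second derivative (boundaries fixed)
    for i in range(1, nx - 1):
        lap[i] = (theta[i - 1] - 2 * theta[i] + theta[i + 1]) / (dx * dx)
    theta = [theta[i] + dt * a * lap[i] for i in range(nx)]

# The profile flattens (maximum principle) and, since almost no heat has
# reached the boundary yet, the heat content is conserved up to a tiny flux.
assert max(theta) < 1.0 / dx
assert abs(sum(theta) * dx - 1.0) < 1e-3
```

The conservation check is exactly the discrete analogue of the continuity equation: summing the update over all interior cells telescopes to the boundary fluxes, mirroring the role of Gauss' divergence theorem above.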

## **A.2 Maxwell's Equations**

Maxwell's equations are the governing equations in electrodynamics and describe the evolution of the electromagnetic fields. Let $\Omega \subseteq \mathbb{R}^3$ be a *domain*; that is, open and connected. The physical quantities of interest in Maxwell's equations in vacuum are the time-dependent electric field $E\colon \mathbb{R}\times\Omega \to \mathbb{R}^3$ and the magnetic induction $B\colon \mathbb{R}\times\Omega \to \mathbb{R}^3$ on $\Omega$, since they can be observed via their action as a force.


Given two point charges $q, q'$ at distinct points $x, x' \in \Omega$, respectively, the Coulomb force

$$F = q \frac{1}{4\pi\varepsilon\_0} q' \frac{x - x'}{\|x - x'\|^3}$$

can be observed, where ε₀ is the dielectric constant in vacuum. More precisely, *F* is the force on the point charge *q* at *x* induced by the point charge *q*′ at *x*′:

(Sketch: the charge *q* at *x* and the charge *q*′ at *x*′, with the force *F* acting on *q* along the line through *x* and *x*′.)

The electric field at time *t* and position *x* induced by *q*′ at *x*′ is then given by

$$E(t,x) = \frac{1}{4\pi\varepsilon\_0}q'\frac{x-x'}{\|x-x'\|^3}$$

such that it acts locally on the point charge *q* at *x* via the Coulomb force

$$F = qE(t, x).$$

Let us generalise from point charges, formally given by *q*′ δ\_{*x*′}, to charge densities. Let *ρ* : ℝ × Ω → ℝ be the time-dependent charge density. Then the electric field at time *t* and position *x* is given by

$$E(t, \mathbf{x}) = \frac{1}{4\pi\varepsilon\_0} \int\_{\Omega} \rho(t, \mathbf{x'}) \frac{\mathbf{x} - \mathbf{x'}}{\left\|\mathbf{x} - \mathbf{x'}\right\|^3} d\mathbf{x'}.$$

By Exercise A.1 we can rewrite this as

$$\begin{split} E(t, \mathbf{x}) &= -\frac{1}{4\pi\varepsilon\_0} \int\_{\Omega} \operatorname{grad} \frac{\rho(t, \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} \, \mathrm{d}\mathbf{x}' = -\operatorname{grad} \left( \frac{1}{4\pi\varepsilon\_0} \int\_{\Omega} \frac{\rho(t, \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} \, \mathrm{d}\mathbf{x}' \right) \\ &= -\operatorname{grad} \Phi(t, \mathbf{x}), \end{split}$$

where Φ : ℝ × Ω → ℝ, given by

$$\Phi(t, x) := \frac{1}{4\pi\varepsilon\_0} \int\_{\Omega} \frac{\rho(t, x')}{\|x - x'\|} \, \mathrm{d}x',$$

is the electric potential.
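The relation *E* = −grad Φ can be checked numerically for a single point charge; the sketch below (in units with 4πε₀ = 1, with an arbitrary charge and arbitrary positions) compares the Coulomb field with a central-difference gradient of its potential.

```python
import numpy as np

# Sanity check (units with 4πε₀ = 1) that the Coulomb field of a point
# charge q' at x' is minus the gradient of its potential
#   Φ(x) = q'/‖x − x'‖,   i.e.   E(x) = q' (x − x')/‖x − x'‖³.
qp = 2.0
xp = np.array([0.3, -0.1, 0.5])   # position x' of the charge (arbitrary)

def phi(x):
    return qp / np.linalg.norm(x - xp)

def E(x):
    d = x - xp
    return qp * d / np.linalg.norm(d) ** 3

x = np.array([1.0, 1.0, -0.5])    # evaluation point, away from x'
h = 1e-6
grad_phi = np.array([
    (phi(x + h * e) - phi(x - h * e)) / (2 * h)
    for e in np.eye(3)
])
print(np.max(np.abs(E(x) + grad_phi)))   # ≈ 0: E = −grad Φ
```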

Analogously, the magnetic induction acts as a force as follows. We first consider two closed non-intersecting curves *C* and *C*′ in Ω describing two wires, and let *I* and *I*′ be (constant) currents on *C* and *C*′, respectively. Then the force between these two wires is given by

$$F = I \frac{\mu\_0}{4\pi} I' \int\_C \int\_{C'} \left(\frac{x - x'}{\left\|x - x'\right\|^3} \times \mathrm{d}x'\right) \times \mathrm{d}x,$$

where *μ*<sup>0</sup> is the permeability in vacuum.

Thus, the magnetic induction at time *t* induced by the wire *C*′, acting at a point *x* of *C*, is given by

$$B(t, \mathbf{x}) = -\frac{\mu\_0}{4\pi} I' \int\_{C'} \frac{\mathbf{x} - \mathbf{x'}}{\left\| \mathbf{x} - \mathbf{x'} \right\|^3} \times \mathbf{d} \mathbf{x'},$$

such that it acts via the force

$$F = I \int\_{C} B(t, x) \times \,\mathrm{d}x.$$

Let us generalise from constant currents on one-dimensional curves to current densities. Let *j* : ℝ × Ω → ℝ³ be the time-dependent current density. Then the magnetic induction at time *t* and position *x* is given by

$$B(t, \mathbf{x}) = -\frac{\mu\_0}{4\pi} \int\_{\Omega} \frac{\mathbf{x} - \mathbf{x}'}{\left\|\mathbf{x} - \mathbf{x}'\right\|^3} \times j(t, \mathbf{x}') \,\mathrm{d}\mathbf{x}'.$$

By Exercise A.1 we can rewrite this as

$$B(t, \mathbf{x}) = \frac{\mu\_0}{4\pi} \int\_{\Omega} \operatorname{grad} \frac{1}{\|\mathbf{x} - \mathbf{x}'\|} \times j(t, \mathbf{x}') \,\mathrm{d}\mathbf{x}'$$

$$= \operatorname{curl} \left(\frac{\mu\_0}{4\pi} \int\_{\Omega} \frac{j(t, \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} \,\mathrm{d}\mathbf{x}'\right) = \operatorname{curl} A(t, \mathbf{x}),$$

where *A* : ℝ × Ω → ℝ³, given by

$$A(t, x) := \frac{\mu\_0}{4\pi} \int\_{\Omega} \frac{j(t, x')}{\|x - x'\|} \, \mathrm{d}x',$$

is the vector potential.

We now relate the charge density *ρ* and the current density *j*. As a physical principle, we ask for conservation of total charge. For a Borel subset *V* ⊆ Ω with smooth boundary let *Q_V* : ℝ → ℝ, given by *Q_V(t)* := ∫_V *ρ(t, x)* d*x*, be the time-dependent total charge in *V*. Then changes of *Q_V* can only result from currents through the boundary of *V*; that is,

$$\partial\_t \mathcal{Q}\_V(t) = -\int\_{\partial V} j(t, \mathbf{x}) \cdot \nu(\mathbf{x}) \, \mathrm{d}\mathcal{S}(\mathbf{x}),$$

where *ν* is the outward unit normal on *∂V* . By Gauss' divergence theorem, we thus have

$$\partial\_t \mathcal{Q}\_V(t) = -\int\_V \text{div}\, j(t, x) \, \text{d}x.$$

On the other hand, interchanging the (time) differentiation and integration, we observe

$$
\partial\_t \mathcal{Q}\_V(t) = \int\_V \partial\_t \rho(t, x) \, \mathrm{d}x.
$$

Hence,

$$\int\_{V} \left(\partial\_{t}\rho(t,\mathbf{x}) + \text{div}\,j(t,\mathbf{x})\right) \text{d}x = 0.$$

Since *V* ⊆ Ω was arbitrary, we conclude the continuity equation

$$
\partial\_t \rho + \text{div} \, j = 0.
$$
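The continuity equation has an exact discrete counterpart: updating ρ by the negative discrete divergence of any flux written in conservation form preserves the total charge up to rounding. The sketch below uses a simple upwind advective flux *j* = *v*ρ on a periodic 1D grid; the flux model and all numbers are illustrative choices.

```python
import numpy as np

# Discrete analogue of the continuity equation ∂_t ρ + div j = 0 on a
# periodic 1D grid. Updating ρ by a discrete divergence in conservation
# form preserves the total charge Σ ρ Δx exactly (up to rounding).
n = 100
dx = 1.0 / n
dt = 0.004                                # CFL number v·dt/dx = 0.4 < 1
v = 1.0                                   # advection speed (illustrative)
x = np.linspace(0.0, 1.0, n, endpoint=False)
rho = np.exp(-100.0 * (x - 0.5) ** 2)     # initial charge density
total0 = rho.sum() * dx

for _ in range(500):
    j = v * rho                           # illustrative flux model
    rho = rho - dt * (j - np.roll(j, 1)) / dx   # upwind discrete div j

print(abs(rho.sum() * dx - total0))       # ≈ 0: total charge conserved
```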

We now derive the two fundamental equations, namely Faraday's law and Ampère's law. We start with Faraday's law. Let Σ ⊆ Ω be a two-dimensional submanifold with boundary curve ∂Σ, which we may think of as a wire.

Then a changing magnetic field through Σ induces a voltage along ∂Σ given by

$$U(t) = -\int\_{\Sigma} \partial\_t B(t, \mathbf{x}) \cdot \nu(\mathbf{x}) \, \mathrm{d}S(\mathbf{x}) .$$

Since voltages result from electric fields, we also have

$$U(t) = \int\_{\partial\Sigma} E(t, x) \cdot \mathrm{d}x = \int\_{\Sigma} \operatorname{curl} E(t, x) \cdot \nu(x) \, \mathrm{d}S(x),$$

where we invoked Stokes' theorem and *ν* is again the unit normal on Σ (oriented according to a parametrisation of ∂Σ). Thus,

$$\int\_{\Sigma} \left( \partial\_t B(t, x) + \operatorname{curl} E(t, x) \right) \cdot \nu(x) \, \mathrm{d}S(x) = 0.$$

Since Σ ⊆ Ω was arbitrary, we conclude Faraday's law

$$
\partial\_t B = -\operatorname{curl} E.
$$

We now derive Ampère's law by considering curl *B* = curl curl *A* = grad div *A* − Δ*A*, where Δ*A* = (Δ*A*₁, Δ*A*₂, Δ*A*₃) and Δ*A_j* = div grad *A_j* for *j* ∈ {1, 2, 3}. We calculate by Exercise A.1

$$\begin{split} \operatorname{div} A(t, \mathbf{x}) &= \frac{\mu\_0}{4\pi} \int\_{\Omega} \operatorname{div} \frac{j(t, \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} \, \operatorname{d} \mathbf{x}' = \frac{\mu\_0}{4\pi} \int\_{\Omega} \left( -\operatorname{grad}\_{\mathbf{x}'} \frac{1}{\|\mathbf{x} - \mathbf{x}'\|} \right) \cdot j(t, \mathbf{x}') \, \operatorname{d} \mathbf{x}' \\ &= \frac{\mu\_0}{4\pi} \int\_{\Omega} \frac{\operatorname{div} j(t, \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} \, \operatorname{d} \mathbf{x}'. \end{split}$$

By the continuity equation, we further obtain

$$\begin{split} \operatorname{div} A(t, x) &= -\frac{\mu\_0}{4\pi} \int\_{\Omega} \frac{\partial\_t \rho(t, x')}{\|x - x'\|} \, \mathrm{d}x' \\ &= -\frac{\mu\_0}{4\pi} \partial\_t \int\_{\Omega} \frac{\rho(t, x')}{\|x - x'\|} \, \mathrm{d}x' = -\varepsilon\_0 \mu\_0 \partial\_t \Phi(t, x). \end{split}$$

Thus,

$$\begin{aligned} \operatorname{grad} \operatorname{div} A(t, x) &= -\varepsilon\_0 \mu\_0 \operatorname{grad} \partial\_t \Phi(t, x) \\ &= -\varepsilon\_0 \mu\_0 \partial\_t \operatorname{grad} \Phi(t, x) = \varepsilon\_0 \mu\_0 \partial\_t E(t, x). \end{aligned}$$

Moreover, by Exercise A.2 (assuming that *j(t, ·)* can be smoothly extended to ℝ³),

$$
\begin{split}
\Delta A(t, x) &= \frac{\mu\_0}{4\pi} \int\_{\mathbb{R}^3} \Delta \frac{1}{\|x - x'\|} \, j(t, x') \, \mathrm{d}x' = \frac{\mu\_0}{4\pi} \int\_{\mathbb{R}^3} \left(\Delta\_{x'} \frac{1}{\|x - x'\|}\right) j(t, x') \, \mathrm{d}x' \\
&= \frac{\mu\_0}{4\pi} \int\_{\mathbb{R}^3} \frac{1}{\|x - x'\|} \Delta j(t, x') \, \mathrm{d}x' = -\mu\_0 j(t, x).
\end{split}
$$

We conclude Ampère's law

$$\operatorname{curl} B = \varepsilon\_0 \mu\_0 \partial\_t E + \mu\_0 j.$$
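For completeness, the two remaining Maxwell equations (Gauss' law and the absence of magnetic charges) follow directly from the potential representations above; the computation is standard and uses Exercise A.2 under the integral, assuming as before that *ρ(t, ·)* extends smoothly:

```latex
\operatorname{div} B = \operatorname{div} \operatorname{curl} A = 0,
\qquad
\operatorname{div} E = -\operatorname{div} \operatorname{grad} \Phi
                     = -\Delta \Phi = \frac{\rho}{\varepsilon_0}.
```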

So far we have only considered the equations in vacuum. In materials, two additional effects occur due to the interaction of the fields with the medium: polarisation and magnetisation. Let *P* : ℝ × Ω → ℝ³ be the polarisation; that is, the averaged electric dipole moments. Further, let *M* : ℝ × Ω → ℝ³ be the magnetisation; that is, the averaged magnetic dipole moments. The current density then acquires two additional terms *j_P*, *j_M* : ℝ × Ω → ℝ³, where *j_P* = ∂_t *P* and *j_M* = curl *M*. Thus, *j* = *j_c* + *j_P* + *j_M*, where *j_c* corresponds to the free charge carriers or free current (as the current density in vacuum) and *j_P* + *j_M* forms the bound currents.

In order to take these two effects into account, we define the electric displacement *D* : ℝ × Ω → ℝ³ by *D* := ε₀*E* + *P* and the magnetic field *H* : ℝ × Ω → ℝ³ by *H* := (1/μ₀)*B* − *M*, so that *B* = μ₀(*H* + *M*). One then typically expands *P* and *M* in terms of *E* and *H*. We only consider linear models; that is, *P* = ε₀χ_e *E* and *M* = χ_m *H* with the electric and magnetic susceptibilities χ_e, χ_m : Ω → ℝ^{3×3}, respectively. Then *D* = ε*E*, where ε = ε₀(1 + χ_e) : Ω → ℝ^{3×3} is the dielectricity, and *B* = μ*H*, where μ = μ₀(1 + χ_m) : Ω → ℝ^{3×3} is the magnetic permeability. Polarisation and magnetisation have no effect on Faraday's law, but they do affect Ampère's law, which now states

$$\operatorname{curl} H = \partial\_t \varepsilon E + j\_c.$$

In the case of an external current *j*₀ : ℝ × Ω → ℝ³, we observe

$$\operatorname{curl} H = \partial\_t \varepsilon E + j\_c - j\_0.$$

Finally, Ohm's law couples the free current *j_c* to the electric field *E* via *j_c* = σ*E*, where σ : Ω → ℝ^{3×3} is the electric conductivity, so that we obtain

$$\operatorname{curl} H = \partial\_t \varepsilon E + \sigma E - j\_0.$$

We thus arrive at Maxwell's equations

$$
\partial\_t \varepsilon E + \sigma E - \operatorname{curl} H = j\_0,
$$

$$
\partial\_t \mu H + \operatorname{curl} E = 0.
$$
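The two equations can be gathered into one block operator equation, which is the form in which Maxwell's equations are treated as an evolutionary equation in the main text; the following is merely a rewriting:

```latex
\partial_t
\begin{pmatrix} \varepsilon & 0 \\ 0 & \mu \end{pmatrix}
\begin{pmatrix} E \\ H \end{pmatrix}
+
\begin{pmatrix} \sigma & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} E \\ H \end{pmatrix}
+
\begin{pmatrix} 0 & -\operatorname{curl} \\ \operatorname{curl} & 0 \end{pmatrix}
\begin{pmatrix} E \\ H \end{pmatrix}
=
\begin{pmatrix} j_0 \\ 0 \end{pmatrix}.
```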

## **A.3 Linear Elasticity**

The theory of elasticity is devoted to the study of the distortion of bodies due to forces, which is reversible in the sense that the body returns to its original state once the force is removed. In order to reasonably neglect thermodynamical effects, we assume that the deformation occurs slowly, so that the body stays in thermodynamical equilibrium and its temperature remains constant. Also, we assume that the behaviour of the material does not depend on memory effects, so hysteresis is excluded. Moreover, we exclude rigid body motions (i.e., translations and rotations) due to the forces.

Let Ω ⊆ ℝ^d be a domain which models the body. Then the displacement field *u* : ℝ × Ω → ℝ^d describes the deformation vector of the body at time *t* and position *x*. For *x*, *y* ∈ Ω we write *x*′ = *x* + *u(t, x)* and *y*′ = *y* + *u(t, y)* for the new positions of *x* and *y*, respectively, after the deformation at time *t*.

(Sketch: the points *x* and *y* are displaced by *u(t, x)* and *u(t, y)* to their new positions *x*′ and *y*′.)

Then, assuming spatially smooth and slowly varying deformations *u* (i.e., small spatial derivatives of *u*), by a linearisation of *u(t,*·*)* we obtain

$$
u(t, x) \approx u(t, y) + \partial\_y u(t, y)(x - y)
$$

for *x* close to *y* and therefore

$$\begin{aligned} \left| x' - y' \right|^2 &= \left| x + u(t, x) - (y + u(t, y)) \right|^2 \\ &= \left| x - y \right|^2 + 2 \left\langle u(t, x) - u(t, y), x - y \right\rangle + \left| u(t, x) - u(t, y) \right|^2 \\ &\approx \left| x - y \right|^2 + 2 \left\langle \partial\_y u(t, y)(x - y), x - y \right\rangle, \end{aligned}$$

where we neglected the quadratic term |*u(t, x)* − *u(t, y)*|² ≈ |∂_y *u(t, y)*(*x* − *y*)|². Since

$$\begin{aligned} \left< \partial\_{\mathbf{y}} u(t, \mathbf{y}) (\mathbf{x} - \mathbf{y}), \mathbf{x} - \mathbf{y} \right> &= \sum\_{j,k=1}^{d} \partial\_{k} u\_{j}(t, \mathbf{y}) (\mathbf{x}\_{k} - \mathbf{y}\_{k}) (\mathbf{x}\_{j} - \mathbf{y}\_{j}) \\ &= \sum\_{j,k=1}^{d} \left( \frac{1}{2} \partial\_{k} u\_{j}(t, \mathbf{y}) + \frac{1}{2} \partial\_{j} u\_{k}(t, \mathbf{y}) \right) (\mathbf{x}\_{k} - \mathbf{y}\_{k}) (\mathbf{x}\_{j} - \mathbf{y}\_{j}) \\ &= \left< \frac{1}{2} (\partial\_{k} u\_{j}(t, \mathbf{y}) + \partial\_{j} u\_{k}(t, \mathbf{y}))\_{j,k \in \{1, \ldots, d\}} (\mathbf{x} - \mathbf{y}), \mathbf{x} - \mathbf{y} \right>, \end{aligned}$$

we may introduce the symmetrised gradient of *u* as Grad *u* : ℝ × Ω → ℝ^{d×d}_sym, defined by Grad *u(t, y)* := (½(∂_k *u_j(t, y)* + ∂_j *u_k(t, y)*))_{j,k∈{1,…,d}}, to get

$$\left|x'-y'\right|^2 \approx \left|x-y\right|^2 + 2\left\langle \operatorname{Grad} u(t,y)(x-y), x-y\right\rangle.$$

Note that *ε(u)(t, y)* := Grad *u(t, y)* is called the strain tensor of *u* at *t* and *y*.
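That only the symmetrised part of ∂_y *u* enters here reflects a general linear-algebra fact: the quadratic form *h* ↦ ⟨*Jh*, *h*⟩ only sees the symmetric part of *J*. A small numerical sketch, with a random matrix standing in for the Jacobian ∂_y *u(t, y)*:

```python
import numpy as np

# The quadratic form ⟨J h, h⟩ only sees the symmetric part of J, which is
# why only the symmetrised gradient Grad u enters |x' − y'|² above.
# J stands in for the Jacobian ∂_y u(t, y); the numbers are arbitrary.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 3))     # stand-in for ∂_y u(t, y)
S = 0.5 * (J + J.T)                 # symmetrised gradient (strain tensor)
h = rng.standard_normal(3)          # stand-in for x − y

print(h @ J @ h, h @ S @ h)         # the two numbers coincide
```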

Due to the displacement *u*, forces appear between the molecules of the material trying to push them back to their equilibrium state. These forces induced by the displacement *u* result from stresses along the boundaries of subregions of Ω. Let *T* := *T_u* : ℝ × Ω → ℝ^{d×d}_sym be the stress tensor corresponding to the displacement *u*. Then the forces between the molecules are given by the divergence of *T*; that is, by Div *T* : ℝ × Ω → ℝ^d,

$$\operatorname{Div} T(t, \boldsymbol{x}) := \left(\sum\_{k=1}^d \partial\_k T\_{jk}(t, \boldsymbol{x})\right)\_{j \in \{1, \dots, d\}}.$$

In thermodynamics, the free energy *F* of a system describes the maximum amount of work that the system can perform. Thus, we may expand the free energy *F_u* of the deformed system in terms of the strain tensor ε(*u*) = Grad *u* around the free energy *F*₀ of the undeformed system. Since changes of the free energy result from stresses, we observe *T* = ∂*F_u*/∂ε(*u*). Since the stress tensor vanishes for the deformation 0, there exists a so-called elasticity tensor *C* : Ω → *L*(ℝ^{d×d}_sym, ℝ^{d×d}_sym) such that

$$\mathcal{F}\_u = \mathcal{F}\_0 + \frac{1}{2} \left\langle \varepsilon(u), C\varepsilon(u) \right\rangle.$$

Thus,

$$T = \frac{\partial \mathcal{F}\_u}{\partial \varepsilon(u)} = C\varepsilon(u) = C \operatorname{Grad} u.$$

This is Hooke's law of linear elasticity. Using Hooke's law, we get

$$\operatorname{Div} T = \operatorname{Div} C \operatorname{Grad} u.$$

In order to obtain the governing equations of linear elasticity, we make use of Newton's law. Let *ρ* : ℝ × Ω → ℝ be the mass density of the body. Then Newton's law on conservation of momentum yields

$$
\partial\_t \rho \partial\_t u = F,
$$

where *F* describes the forces acting on the system. These forces decompose into the internal forces between the molecules due to the displacement *u*, which we have seen are given by Div *T*, and possibly external forces *f* : ℝ × Ω → ℝ^d (for example, gravity). Thus, *F* = Div *T* + *f*, and therefore

$$
\partial\_t \rho \partial\_t u - \text{Div}\, T = f.
$$

Taking into account Hooke's law, we arrive at the governing equation of linear elasticity as

$$
\partial\_t \rho \partial\_t u - \text{Div}\, C \,\text{Grad}\, u = f.
$$
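Introducing the velocity *v* := ∂_t *u* as a new unknown and assuming that *C* is invertible on the symmetric matrices (an assumption made here for illustration), Hooke's law turns the second-order equation into a first-order system; this is the shape in which linear elasticity is posed as an evolutionary equation:

```latex
\partial_t \rho v - \operatorname{Div} T = f,
\qquad
\partial_t C^{-1} T - \operatorname{Grad} v = 0.
```

The second equation is just the time derivative of Hooke's law *T* = *C* Grad *u*.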

## **A.4 Scalar Wave Equation**

The scalar wave equation can be derived from linear elasticity. Indeed, let Ω ⊆ ℝ^d be open and consider scalar displacements *u* : ℝ × Ω → ℝ; that is, we only consider displacements in one particular direction. Also, we may assume constant mass density; that is, *ρ* : ℝ × Ω → ℝ is constant. Without loss of generality, we therefore set *ρ* = 1. Let *f* : ℝ × Ω → ℝ be an external force in the direction of the displacements. Then from linear elasticity we obtain

$$
\partial\_t^2 u - \text{div}\, T = f,
$$

where *T* : ℝ × Ω → ℝ^d is the stress induced by the displacements. If we further make use of Hooke's law *T* = *C* grad *u* with the elasticity tensor *C* : Ω → *L*(ℝ^d, ℝ^d) = ℝ^{d×d}, we arrive at the scalar wave equation

$$
\partial\_t^2 u - \text{div}\, C \,\text{grad}\, u = f.
$$
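For *d* = 1 and *C* = 1 the scalar wave equation ∂_t²*u* − ∂_x²*u* = 0 admits d'Alembert solutions *u(t, x)* = *f(x − t)* + *f(x + t)* for smooth *f*. The following sketch verifies the PDE residual for such a *u* at a sample point by central differences; the profile *f* and the sample point are arbitrary choices.

```python
import numpy as np

# Check that u(t, x) = f(x − t) + f(x + t) solves the 1D wave equation
# ∂_t² u = ∂_x² u for a smooth profile f, by comparing second central
# differences in t and in x at a sample point.
f = lambda s: np.exp(-s**2)         # an arbitrary smooth profile
u = lambda t, x: f(x - t) + f(x + t)

t0, x0, h = 0.7, 0.3, 1e-4
u_tt = (u(t0 + h, x0) - 2 * u(t0, x0) + u(t0 - h, x0)) / h**2
u_xx = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h**2

print(abs(u_tt - u_xx))             # ≈ 0: the PDE is satisfied
```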

## **A.5 Comments**

The physical derivations of the equations treated in this appendix are well known and can be found in many textbooks. We refer to [74–76] for the physical foundations of electrodynamics, thermodynamics and statistical physics. The final form of Maxwell's equations appeared in [62]; however, they had already been derived in his earlier works. The vector form of Maxwell's equations appeared in the 1880s. The equations of linear elasticity stem from elastodynamics.

## **Exercises**

**Exercise A.1** Let Ω ⊆ ℝ³ be open, *x*′ ∈ Ω, and let *f* : Ω \ {*x*′} → ℝ be defined by *f(x)* := 1/‖*x* − *x*′‖. Show that *f* is differentiable with grad *f(x)* = −(*x* − *x*′)/‖*x* − *x*′‖³ for all *x* ∈ Ω \ {*x*′}.

**Exercise A.2** Let *K* : ℝ³ \ {0} → ℝ, *K(x)* := (1/4π)(1/‖*x*‖). Show that Δ*K(x)* = div grad *K(x)* = 0 for all *x* ∈ ℝ³ \ {0} and

$$-\int\_{\mathbb{R}^3} K(x) \Delta \varphi(x) \, \mathrm{d}x = \varphi(0)$$

for all φ ∈ C_c^∞(ℝ³).
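The distributional identity behind Exercise A.2, namely ∫_{ℝ³} (1/(4π‖x‖)) Δφ(x) dx = −φ(0), can be checked numerically for the radial function φ(x) = exp(−‖x‖²), for which the integral reduces to a one-dimensional one (φ is not compactly supported, but its rapid decay makes this harmless; truncation radius and grid are ad hoc):

```python
import numpy as np

# For radial φ, ∫_{ℝ³} (1/(4π‖x‖)) Δφ(x) dx = ∫₀^∞ r Δφ(r) dr, with
# Δφ(r) = φ''(r) + (2/r) φ'(r).  For φ(r) = exp(−r²) the Laplacian is
# (4r² − 6) exp(−r²), and the integral should equal −φ(0) = −1.
r = np.linspace(1e-6, 10.0, 200001)
lap_phi = (4.0 * r**2 - 6.0) * np.exp(-r**2)
g = r * lap_phi
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r))   # trapezoidal rule

print(integral)   # ≈ −1 = −φ(0)
```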

## **References**


# **Bibliography**


© The Author(s) 2022

C. Seifert et al., *Evolutionary Equations*, Operator Theory: Advances and Applications 287, https://doi.org/10.1007/978-3-030-89397-2



# **Index**

#### **A**

Abscissa of boundedness, s_b(·), 74
Adjoint relation, 19
Almost separably-valued, 40
Ampère's law, 95, 304
Autonomous, 82, 125

#### **B**

Baire space, 295
Balance of momentum, 92
BD(div), 197
BD(grad), 197
Bochner-integral, 36
Bochner-Lebesgue spaces, 33
Bochner-measurable, 31
Boundedness in M(H, ν₀), 206
Bounded relation, 16

#### **C**

C_b(ℝ; H), 67
C_c¹(ℝ; H), 44
C_ν(ℝ; H), 53
Causal, 56, 125
Clamped boundary condition, 93
Closable, 16
Closed, 16
Coercive, 277
Compensated compactness, 238
Consistent initial value, 151, 155
Continuous linear operator, 15
Convex, 277
Core, 18
Current, 95, 304

#### **D**

δ-Sequence, 48
Demiclosed, 276
Densely defined, 16
Dielectricity ε, 95, 305
Differential algebraic equation, 150
Div-curl lemma, 238
Domain, 15
Drazin inverse, 162
Dual phase lag heat conduction, 111
Dual space, 37, 133

#### **E**

Eddy current approximation, 100
Elasticity tensor, 93, 307
Electric boundary condition, 95
Electric conductivity, 305
Electric conductivity σ, 95
Electric displacement, 95, 304
Electric field, 95, 300
Eventually independent, 89
Evolutionary equation, 5, 6
Evolutionary inclusions, 288
Evolutionary mapping, 266
Evolution equation, 1, 2
Evo-system, 1
Exponentially stable, 168, 182
External current, 95, 305
Extrapolated operator, 134

#### **F**

Faraday's law, 94, 303
Fourier–Laplace transformation, 72
Fourier's law, 91, 300
Fourier transform, 67
Fourier transformation, 71
Fractional elasticity, 107
Fractional integral, 78
Fractional parabolic pair, 247
Fundamental solution or Green's function, 4, 5, 11
Fundamental theorem of calculus, 39

#### **G**

Graph norm, 17
Graph scalar product, 18

#### **H**

H(curl, Ω), H₀(curl, Ω), 87
H(div, Ω), H₀(div, Ω), 87
H¹(Ω), H₀¹(Ω), 87
H¹_ν(ℝ; H), 138
H⁻¹_ν(ℝ; H), 138
H^α_ν(ℝ; H), 246
H_♯(div, Y), 227
H¹_♯(Y), 227
Hardy space, 120
Heat equation, 2, 8, 300
Heat equation, evolutionary equation, 91
Heat flux, 91, 299
Heat flux balance, 91, 300
(skew-)Hermitian, 21
Hölder continuous, 65
Homogenisation problem, 230
Hooke's law, 93, 307

#### **I**

Image, 16
Index of operator pair, 158
Inverse relation, 16

#### **K**

Kernel, 15
Korn's inequality, 187

#### **L**

L_{2,ν}(ℝ; H), 42
Laplace transform, 121
Laplacian, 2
Lax–Milgram lemma, 97, 100
Lemma of Riemann–Lebesgue, 68
Linear elasticity, 307

Linear relation, 16
Lipschitz semi-norm, 54
Local boundedness, 285
Lower semi-continuous (l.s.c.), 277

#### **M**

Magnetic field, 95, 304
Magnetic induction, 95, 300
Magnetic permeability μ, 95, 305
Magnetisation, 304
Material law, 74
Material law operator, 76
Matrix exponential, 2
Maximal monotone relation, 276
Maxwell's equations, 6, 305
Maxwell's equations, evolutionary equation, 94
Mean value property, 114
Monotone, 276
Multiplication-by-the-argument operator, m, 73
Multiplication-by-V operator, 73

#### **N**

Newton's law, 307
Normal, 21

#### **O**

Ohm's law, 95, 305
Operator, 16

#### **P**

Parabolic, 247
Periodic, 210
Periodic gradient, 227
Poincaré's inequality, 172
Poisson's equation, 4
Poisson's formula, 115
Polarisation, 304
Poro-elasticity, 103, 104
Positive definite, 7
Proper, 277

#### **Q**

Quasi-Weierstraß normal form, 151

#### **R**

Range, 15


Real part of operator, 89
Regular, matrix pair, 150
Regular operator pair, 157
Relation, 15
Resolvent identity, 24
Resolvent set, 23
  matrix pair, 150
  operator pair, 156

#### **S**

Schwartz space, 68
(skew-)selfadjoint, 21
Semi-finite, 27
Simple function, 31
Simple functions with compact support, 54
Sobolev embedding theorem, 53
Sobolev space, 87
Solid-fluid interaction, 9
Solution theory
  evolutionary equations, 88
  general notion, 3
Spectrum, 23
Spectrum, matrix pair, 150
Strain tensor, 104, 306
Stress, 93
Stress tensor, 105, 108, 306
Strong operator topology convergence, 206
Subgradient, 277
(skew-)symmetric, 21

#### **T**

Theorem of Hille, 38

Theorem of Minty, 280
Theorem of Paley–Wiener, 121
Theorem of Pettis, 40
Theorem of Picard, 89
Theorem of Picard–Lindelöf, 55
Theorem of Plancherel, 71
Theorem of Rellich–Kondrachov, 225
Time derivative, 44
Time-shift operator, 60

**U**

Unbounded, 16
Uniformly Lipschitz continuous, 54
Unitary, 29

**V**

Visco-elasticity, 114

**W**

Wave equation, 7, 308
Wave equation, scalar, evolutionary equation, 92
Weakly Bochner-measurable, 40
Weak operator topology convergence, 206
Wong sequence, 156

#### **Y**

Yosida approximation, 281