**Simula SpringerBriefs on Computing**

Reports on Computational Physiology · Volume 14

**Karoline Horgmo Jæger · Aslak Tveito**

# **Differential Equations for Studies in Computational Electrophysiology**

# **Simula SpringerBriefs on Computing**

Reports on Computational Physiology

Volume 14

### **Editor-in-Chief**

Joakim Sundnes, Simula Research Laboratory, Oslo, Norway

### **Series Editors**

Kimberly J. McCabe, Simula Research Laboratory, Oslo, Norway

Andrew D. McCulloch, Institute for Engineering in Medicine, University of California San Diego, La Jolla, California, USA; Department of Bioengineering, University of California San Diego, La Jolla, California, USA

Aslak Tveito, Simula Research Laboratory, Oslo, Norway

### **Managing Editor**

Jennifer Hazen, Simula Research Laboratory, Oslo, Norway

### **About this Series**

In 2016, Springer and Simula launched an Open Access series called the Simula SpringerBriefs on Computing. This series aims to provide concise introductions to the research areas in which Simula specializes: scientific computing, software engineering, communication systems, machine learning and cybersecurity. These books are written for graduate students, researchers, professionals and others who are keenly interested in the science of computing, and each volume presents a compact, state-of-the-art disciplinary overview and raises essential critical questions in the field.

Simula's focus on computational physiology has grown considerably over the last decade, with the development of multi-scale mathematical models of excitable tissues (brain and heart) that are becoming increasingly complex and accurate. This subseries represents a new branch of the Simula SpringerBriefs that is specifically focused on computational physiology. Each volume in the series will introduce and explore one or more sub-fields of computational physiology, and present models and tools developed to address the fundamental research questions of the field. Whenever possible, the software used will be made publicly available.

By publishing the Simula SpringerBriefs on Computing and the sub-series on computational physiology, Simula Research Laboratory acts on its mandate of emphasizing research education. Books in the series are published by invitation from one of the editors, and authors interested in publishing in the series are encouraged to contact any member of the editorial board.

Karoline Horgmo Jæger • Aslak Tveito

# Differential Equations for Studies in Computational Electrophysiology

Karoline Horgmo Jæger Simula Research Laboratory Oslo, Norway

Aslak Tveito Simula Research Laboratory Oslo, Norway

ISSN 2512-1677 ISSN 2512-1685 (electronic)

Simula SpringerBriefs on Computing

ISSN 2730-7735 ISSN 2730-7743 (electronic)

Reports on Computational Physiology

ISBN 978-3-031-30851-2 ISBN 978-3-031-30852-9 (eBook)

https://doi.org/10.1007/978-3-031-30852-9

Mathematics Subject Classification (2020): 92-01, 92C05, 34-01, 35-01, 65M99

© The Editor(s) (if applicable) and The Author(s) 2023. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

## **Series Foreword**

### Dear reader,

the series *Simula SpringerBriefs on Computing* was established in 2016, with the aim of publishing compact introductions and state-of-the-art overviews of select fields in computing. Research is increasingly interdisciplinary, and both students and experienced researchers often face the need to learn the foundations, tools, and methods of a new field. This process can be demanding, and typically involves extensive reading of multidisciplinary publications with different notations, terminologies and styles of presentation. The briefs in this series are meant to ease the process by explaining important concepts and theories in a specific interdisciplinary field without assuming extensive disciplinary knowledge and by outlining open research challenges and posing critical questions in the field.

Simula has a major research program in computational physiology that includes a long and close collaboration with the University of California (UC) San Diego. To reflect this research focus, we established in 2020 a new subseries entitled *Simula SpringerBriefs on Computing - Reports on Computational Physiology*. The subseries includes both introductory and advanced texts on select fields of computational physiology, designed to advance interdisciplinary scientific literacy and promote effective communication and collaboration in the field. This subseries is also the outlet for collections of reports from the annual Summer School in Computational Physiology, organized by Simula, University of Oslo, and UC San Diego. The school starts in June each year with students spending two weeks in Oslo learning the principles underlying mathematical models commonly used in studying the heart and the brain. During their stay in Oslo, students are assigned a research project to work on over the summer. In August, they travel to San Diego for another week of training and project work, and a final presentation of their findings. Every year, we have been impressed by the students' creativity and we often see results that could lead to a scientific publication. Starting with the 2021 edition of the summer school, we have taken the course one step further by having each team conclude their project with a scientific report that can pass rigorous peer review as a publication in this subseries.

All items in the main series and the subseries are published within the SpringerOpen framework, as this will allow authors to use the series to publish an initial version of their manuscript that could subsequently evolve into a full-scale book on a broader theme. Since the briefs are freely available online, the authors do not receive any direct income from the sales; however, remuneration is provided for every completed manuscript. Briefs are written on the basis of an invitation from a member of the editorial board.

Suggestions for possible topics are most welcome, and interested authors are encouraged to contact a member of the editorial board.

March 2023 *Dr. Joakim Sundnes* sundnes@simula.no *Dr. Kimberly J. McCabe* kimberly@simula.no *Dr. Andrew McCulloch* amcculloch@ucsd.edu *Dr. Aslak Tveito* aslak@simula.no

### **Series Editor for this Volume**

Joakim Sundnes, Simula Research Laboratory, Oslo, Norway

# **Preface**

### **Why would you want to read these notes?**

When something is very large, very small or very complex, it's often helpful to represent it using a model in order to understand what is going on. Nature offers many examples of situations where direct interrogations are difficult, and a model can provide insight into the phenomena of interest. Physiology, for example, is a field that is complex and sometimes poorly understood, and where mathematical models are frequently used to increase understanding. These models are often formulated in terms of differential equations. However, many students who want to learn state-of-the-art physiology may not be familiar with differential equations. They are probably able to use software tools that solve these equations in numerical simulations, but they may not fully understand the underlying principles. Our goal with these notes is to provide a simple introduction to differential equations and give examples of how to solve them. Differential equations is a vast area of active research, and we can only provide a glimpse. But we hope that these notes will provide a basic understanding of the principles, such that computational codes will appear less like a black box and more like something that is comprehensible.

### **The Simula Summer School in Computational Physiology**

Every year Simula Research Laboratory organizes a summer school in Computational Physiology together with the University of California, San Diego (UCSD) and the University of Oslo (UiO). The students come to Oslo in June and learn about models and software used in computational physiology. Next, the students are divided into groups and work on different projects through the summer. All groups are assigned one or more mentors who help the students with their investigations. In August, all students and mentors meet again at UCSD and continue their work on the project. In addition, they attend guest lectures at UCSD and a two-day workshop on scientific writing organized by experienced editors of *Nature*.

The models and software used in the course are state-of-the-art, and the students complete the course by writing a scientific report that is published in the Simula SpringerBriefs on Computing sub-series on Computational Physiology. In order to reach the advanced level of using state-of-the-art methods and software, there is not enough time to cover the details of every subject in the course. The students are most often enrolled in MSc, PhD, or postdoctoral programs at universities around the world. Their scientific backgrounds vary greatly. There are students from theoretical disciplines like computer science, scientific computing, statistics or mathematics, and from more classical science disciplines like physics, biology, or medicine. And perhaps the largest group of students comes with a background in bioengineering. We notice that the students easily follow 'their' part of the course and struggle with the parts they are unfamiliar with.

Scientific work in computational physiology is inherently interdisciplinary, so meeting this reality in the summer school is proper training. Typical research projects comprise elements of many disciplines, and thus it is very common not to fully understand all elements of a project, and almost no project is completed by only one person. Similarly, it is exceptionally rare to see a single-authored paper in computational physiology, or in computational science in general for that matter. So, learning to communicate across disciplines is very useful, but also very challenging.

The most common mathematical machinery used to model physiology (and physics in general) is differential equations. Most of the models introduced in the summer school are founded, in one way or another, on differential equations. Elements of this subject are taught at every university in the world, and students from mathematics, physics or scientific computing have most likely taken a course on it. But students from biology, medicine or even computer science rarely know much about differential equations. Now, the summer school is very streamlined and the software is usually prepared and can be used without a complete understanding of the underlying models, but it is clearly advantageous to know, at least intuitively, the foundation of the models and solution methods that are used in the course.

### **How are the notes organized?**

We have organized these notes in two parts. In the first part (Chapters 1–7), we introduce the concept of differential equations and numerical methods. To keep the exposition as simple as possible, we use simple, unitless equations as examples in these chapters. In the second part (Chapters 8–12), we apply the techniques introduced in the first part of the book to a selection of models of electrophysiology.

More specifically, we start these notes by considering a very simple differential equation. For this equation, we introduce a numerical method and we study the error of the method. The simplicity of the equation allows us to study these important concepts in a very explicit manner. Next, we move on to systems of ordinary differential equations and we show how the FitzHugh-Nagumo model can be solved numerically. Partial differential equations are then introduced and we show how the diffusion equation can be solved numerically. By combining the diffusion equation and the FitzHugh-Nagumo model, we introduce traveling wave solutions and show how a reaction-diffusion system can be handled numerically.

At the beginning of the second part of the notes, in Chapter 8, we introduce the Hodgkin-Huxley equations, and from this chapter on we use units for all quantities involved. The Hodgkin-Huxley equations are the most famous system of equations in physiology. The equations model the action potential of an axon and have been shown to represent that process with great accuracy. We also consider a similar model for a cardiac action potential. The membrane models of cardiac electrophysiology have become very complex, but their structure remains very similar to that of the Hodgkin-Huxley model.

After being familiarized with the Hodgkin-Huxley membrane model, we combine it with the cable equation to model the propagation of an action potential along an axon. Next, we introduce the bidomain and the monodomain models in two spatial dimensions. The monodomain equation is very similar to the cable equation, but the numerical solution will be a bit more complicated because we consider a 2D problem. The bidomain model is more complex than the monodomain model, but we will see that by using operator splitting, solution methods become quite straightforward. Never heard of operator splitting? You will learn about it from these notes. It is a very powerful technique used to break complicated problems into problems we already know how to deal with. We will explain it in Chapter 7 and show some applications later in the notes.

One major issue with the monodomain and bidomain models is that they represent averaged quantities in the sense that the cardiomyocytes are not present in the model. In fact, the extracellular (E) space, the cell membrane (M), and the intracellular (I) space are all assumed to exist everywhere. That is a bold assumption, so we will also introduce the EMI model, where all these elements (E, M, and I) are explicitly part of the model.

The monodomain and bidomain models provide reasonable approximations of cardiac electrophysiology at the mm-scale, and the EMI model addresses electrophysiology at the µm-scale. The next level is the nm-scale<sup>1</sup>. The relevant equations at the nm-scale are the Poisson-Nernst-Planck (PNP) equations. From a computational perspective, the PNP equations are extremely challenging and can, at present, only be used to study very small regions; one complete cardiomyocyte is far too large to be modeled by the PNP equations. We will show how to solve these equations in the final chapter using operator splitting and finite differences.

In order to keep these notes relatively brief, many details about the methods and models introduced must inevitably be left out. At the end of most chapters, we therefore include a section called 'Comments and Further Reading', providing some comments on these details and suggestions for further reading.

<sup>1</sup> Note here that meter is denoted by m, and mm means millimeter or 10<sup>−3</sup> m, µm is micrometer and means 10<sup>−6</sup> m, and, finally, nm means nanometer or 10<sup>−9</sup> m. If you ever need help with units, we recommend www.wolframalpha.com.

### **Why do we use the finite difference method?**

There are many ways to solve differential equations. In computational physiology, the finite element method may very well be the most popular alternative. The reason for this is that the geometry of the computational domain (e.g., the heart) is quite complex and thus very difficult to represent using a finite difference method. Finite difference methods are easiest to work with when the computational mesh is *very regular*, but the finite element method is constructed to allow for highly irregular meshes. Both finite element methods and finite volume methods are successfully applied to simulate complex phenomena on complex geometries. The finite difference method is successfully applied to simulate complex dynamics, but the code quickly becomes clunky in the presence of geometries any more complex than a cuboid. Nevertheless, we have chosen to focus entirely on the finite difference method in these notes, and the reason for this is simplicity. The method is more or less completely defined by simply replacing derivatives by differences. Therefore, we stick to simple geometries and use finite differences. In the summer school, much more advanced simulations, using finite elements, will be performed, but we still think it is useful to know how this in principle can be done using the simplest possible method.

### **It's open access**

These notes are printed in the Simula SpringerBriefs on Computing sub-series on Computational Physiology. That means that the notes can be downloaded for free. The software (Matlab) used to generate all figures and tables in these notes is freely available online. The codes are written primarily for clarity, and less emphasis is put on efficiency. If you find a bug in the codes or an error in these notes, please send an email to aslak@simula.no. The software associated with these notes can be found at: https://github.com/karolihj/differential-equations-book-2023

### **Acknowledgement**

The authors extend their gratitude to Joakim Sundnes, Kimberly McCabe, Lars Tveito, and Jennifer Hazen for their invaluable comments on the manuscript. They also acknowledge the financial support provided by Simula Research Laboratory and the SUURPh program funded by the Norwegian Ministry of Education and Research. In addition, it is a pleasure to express our appreciation of the long-term collaboration with Dr. Martin Peters at Springer Nature.

Oslo, February 2023

*Karoline Horgmo Jæger*

*Aslak Tveito*

# **Contents**

### **Part I Tools for Differential Equations and Numerical Methods**





# **Part I Tools for Differential Equations and Numerical Methods**

# **Chapter 1 Getting Started**

In this introductory chapter we will introduce two essential concepts: differential equations and numerical methods. If you want to spend as little energy as possible (don't be ashamed of that; energy preservation is both fashionable and a fundamental property of many biological mechanisms), you can get a good overview from this chapter alone.

### **1.1 What Is a Differential Equation?**

You have no doubt seen an *algebraic* equation. A typical algebraic equation may look like

$$x^2 - 4x + 3 = 0 \tag{1.1}$$

with solutions *x* = 1 and *x* = 3. So, solutions of algebraic equations are numbers. The solutions of differential equations, on the other hand, are functions. One very simple differential equation is given by

$$
\mathbf{y}'(t) = \mathbf{y}(t). \tag{1.2}
$$

Here, we typically assume that *t* represents time, and that the equation (1.2) describes how the function y changes with time. Suppose we also know that the solution is 1 at *t* = 0, that is

$$\mathbf{y}(\mathbf{0}) = 1.\tag{1.3}$$

Such a condition is generally referred to as an *initial condition* and is needed in order to find a unique solution of the problem. In this case, the solution is given by the function

$$\mathbf{y}(t) = \mathbf{e}^{t}.\tag{1.4}$$

It is straightforward to verify this. We simply note that y(0) = *e*<sup>0</sup> = 1 and, by differentiation, that y'(*t*) = *e*<sup>t</sup> = y(*t*). So all is good; we have found the solution of our first differential equation.

For a long time, differential equations were solved in this way. Formulas were derived using pencil and paper. If the equations were very complex, they were simplified in order to be solved by a formula, and people spent whole academic careers deriving such approximate solutions of differential equations. The reason for this was that the solutions of the equations could bring critical insight into a phenomenon modeled by the equation. Nowadays, differential equations are solved by computers. In some cases analytical formulas can be derived, and then it is usually done by computer algebra systems like Mathematica or Maple, or other tools for symbolic computation. But the more common approach is to solve the equations numerically. In order to do that, we need to transform the equations into a form that is suitable for computers. Almost all differential equations that arise in computational physiology are solved by computers. It is the main purpose of these notes to teach you the basics of how that is done, and we might as well start with the very simple case that we've already introduced.

### **1.2 What Is a Numerical Method?**

*A numerical method is a way to solve a mathematical problem on a computer.* We will see that one way to prepare differential equations for solution on computers is to replace derivatives by differences. That may not come as a surprise, since you may remember from calculus that the derivative was introduced as the limit of a difference. Concretely, for a smooth<sup>1</sup> function *f* = *f* (*t*), the derivative is, by definition,

$$f'(t) = \lim\_{\Delta t \to 0} \frac{f(t + \Delta t) - f(t)}{\Delta t}. \tag{1.5}$$

Instead of going all the way to zero, we can settle for a small value of ∆*t* and use the approximation

$$f'(t) \approx \frac{f(t + \Delta t) - f(t)}{\Delta t}.\tag{1.6}$$
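Before applying (1.6) to a differential equation, it may help to see the approximation in action on a known function. The following sketch is written in Python for self-containedness (the software accompanying these notes is in Matlab); the choice of f(t) = sin(t) and the evaluation point t = 1 are ours, purely for illustration.

```python
import math

def forward_difference(f, t, dt):
    """Approximate f'(t) by the forward difference (f(t + dt) - f(t)) / dt, as in (1.6)."""
    return (f(t + dt) - f(t)) / dt

# For f(t) = sin(t), the exact derivative at t = 1 is cos(1).
exact = math.cos(1.0)
for dt in [0.1, 0.01, 0.001]:
    approx = forward_difference(math.sin, 1.0, dt)
    print(f"dt = {dt:6.3f}   error = {abs(approx - exact):.6f}")
```

The printed error shrinks roughly in proportion to ∆*t*, a behavior we will meet again when we study the error of a numerical method.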

Here, *t* is time, and we are interested in solving the equation from *t* = 0 (where the solution, y, is known to be 1) until some *t* = *T*, for example *T* = 1. For some fixed value of ∆*t*, it is useful to define a number of discrete points in time,

$$
t\_n = n \times \Delta t,\tag{1.7}
$$

for *n* = 0, ..., *N*, where *N* = *T*/∆*t*. We note here that the step in time from *t*<sub>n−1</sub> to *t*<sub>n</sub> is ∆*t*, and that the final time is given by *t*<sub>N</sub> = *N* × ∆*t* = *T*. We want to compute a numerical approximation to the solution of the problem

<sup>1</sup> A lot could be said about *smooth functions*. Generally, regularity of functions is important in solving differential equations. But for these notes it is sufficient to simply think of smooth functions as – smooth.


$$\mathbf{y}'(t) = \mathbf{y}(t),\tag{1.8}$$

$$\mathbf{y}(0) = 1,\tag{1.9}$$

at the time steps *t*<sub>0</sub>, *t*<sub>1</sub>, ..., *t*<sub>N</sub> by replacing the derivative of y by the formula given in (1.6). This replacement can be written as

$$\frac{\mathbf{y}\_{n+1} - \mathbf{y}\_n}{\Delta t} = \mathbf{y}\_n,\tag{1.10}$$

$$y\_0 = 1,\tag{1.11}$$

where, in general, y<sub>n</sub> denotes an approximation of the solution of (1.8) at time *t*<sub>n</sub>; i.e., y<sub>n</sub> ≈ y(*t*<sub>n</sub>). By rearranging (1.10), we find that

$$\mathbf{y}\_{n+1} = (1 + \Delta t)\mathbf{y}\_n,\tag{1.12}$$

and we refer to this as the *computational form* of the numerical scheme. Since y<sub>0</sub> = 1, we find y<sub>1</sub> = 1 + ∆*t*, and y<sub>2</sub> = (1 + ∆*t*)y<sub>1</sub> = (1 + ∆*t*)<sup>2</sup>, and so forth. In general, we have

$$\mathbf{y}\_n = (1 + \Delta t)^n. \tag{1.13}$$

So in this unusually simple case, we don't need a program to compute the numerical solution. That is very unusual and we use this example just to introduce the concepts. More commonly, we first need to compute the solution y<sub>1</sub> from the initial condition, y<sub>0</sub>, using a formula like (1.12), and then insert this solution y<sub>1</sub> into (1.12) to find the solution y<sub>2</sub>, and so on until we find y<sub>N</sub> using the previously computed y<sub>N−1</sub>. This type of repetitive computation is usually most conveniently performed using a computer program.
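To make this concrete, here is a sketch of such a program. The codes accompanying these notes are written in Matlab; this Python version, stepping the scheme (1.12) from y<sub>0</sub> = 1 to *T* = 1, is our own illustration of the same loop.

```python
def explicit_euler(dt, T=1.0):
    """Solve y' = y, y(0) = 1 with the scheme y_{n+1} = (1 + dt) * y_n."""
    N = round(T / dt)       # number of time steps, N = T / dt
    y = 1.0                 # initial condition y_0 = 1
    for n in range(N):
        y = (1.0 + dt) * y  # one step of the computational form (1.12)
    return y

print(explicit_euler(0.001))  # close to the exact value y(1) = e = 2.71828...
```

Running the function with a smaller and smaller `dt` produces values closer and closer to *e*, exactly as the formula (1.13) predicts.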

In Fig. 1.1 we have plotted the analytical (i.e., exact) solution, y, together with the numerical solution, y<sub>n</sub>, for some different choices of the time step, ∆*t*. We observe that when ∆*t* is small, the numerical and analytical solutions are indistinguishable, but that for larger values of ∆*t*, for example ∆*t* = 0.2, there is a visible difference between the two solutions. The analytical solution is exact, which means that the difference between the two solutions is due to the error of the numerical method. In the next section, we will consider this error in some more detail.

**Fig. 1.1** Analytical (solid line) and numerical (dotted line) solutions of the differential equation (1.8)–(1.9) for t between 0 and 1 for some different values of ∆t.

### **1.3 What Is the Error of a Numerical Method?**

*The error of a numerical method is the difference between the numerical and the exact solution of the problem.* Since we are in the fortunate position of knowing both the analytical (exact) solution, y(*t*), and the numerical solution, y<sub>n</sub>, we can also compute the error<sup>2</sup> of the numerical solution, defined by

$$E\_n = |\mathbf{y}(t\_n) - \mathbf{y}\_n|.\tag{1.14}$$

As observed in Fig. 1.1, the error depends on the size of the time step, ∆*t*, which is natural since we are supposed to pass to the limit of ∆*t* = 0 in the definition of the derivative (see (1.6)). When we approximate 0 by something small, we must be prepared for it to come with a price – and the price is the error we will encounter. To illustrate this, we put *T* = 1 and compare the analytical solution y(*T*) = *e* with the numerical solution at *T* = 1 for several values of ∆*t*. Since the final time is *T* = 1, we must have *N* × ∆*t* = 1 and therefore we compare

$$\mathbf{y}\_N = \left(1 + \frac{1}{N}\right)^N \tag{1.15}$$

with the exact solution y(1) = *e*. In Table 1.1, we show the error *E*<sub>N</sub> = |y(1) − y<sub>N</sub>| for *N* = 5, 10, 100, and 1000. We also show *E*<sub>N</sub>/∆*t*, and we note that this value seems to be more or less constant. It is thus evident that when ∆*t* goes to zero (*N* goes to infinity), the numerical solution converges towards the analytical solution<sup>3</sup>.

**Table 1.1** Error of the numerical solution of the differential equation (1.2)–(1.4) at t = T = 1 for different values of ∆t = T/N. The error is defined as E<sub>N</sub> = |y(1) − y<sub>N</sub>|, where y(1) = e is the analytical solution, and y<sub>N</sub> is the numerical solution, defined by (1.15).

| *N* | ∆*t* | *E*<sub>N</sub> | *E*<sub>N</sub>/∆*t* |
|------|-------|--------|--------|
| 5 | 0.2 | 0.2300 | 1.1498 |
| 10 | 0.1 | 0.1245 | 1.2454 |
| 100 | 0.01 | 0.0135 | 1.3468 |
| 1000 | 0.001 | 0.0014 | 1.3579 |

<sup>2</sup> The error defined here is referred to as the *absolute error*. An alternative is to consider the relative error, given by

$$\frac{|\mathbf{y}(t\_n) - \mathbf{y}\_n|}{|\mathbf{y}(t\_n)|}.$$

<sup>3</sup> If this feels familiar, it may be because you learned in calculus that

$$\lim\_{\varepsilon \to 0} (1+\varepsilon)^{\frac{1}{\varepsilon}} = e.$$

We have learned that by replacing the derivative y'(*t*) by a finite difference approximation (y<sub>n+1</sub> − y<sub>n</sub>)/∆*t* we can find an approximate solution. Usually, the finite difference scheme must be implemented on a computer, but in this very simple case we can find a formula for both the numerical and the analytical solutions. The error introduced by this method is *E*<sub>N</sub> ≈ 1.36 × ∆*t*. This indicates a relation that is generally true: the error is smaller for smaller values of ∆*t*, but the work associated with running through the time steps to compute the numerical solution increases for smaller values of ∆*t*. We will see plenty of examples of the fact that finer resolution (i.e., smaller ∆*t*) means higher accuracy and more work. So life is fair; a low-quality solution is cheap and a high-quality solution is expensive. Since *E*<sub>N</sub> is proportional to ∆*t*, we have linear (or first-order) convergence. If the error had been proportional to ∆*t*<sup>2</sup>, we would have had quadratic (or second-order) convergence, and so on.
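These convergence numbers are easy to reproduce yourself. The following Python sketch (our own illustration; the book's codes are in Matlab) evaluates *E*<sub>N</sub> = |e − (1 + 1/N)<sup>N</sup>| from (1.15) together with the ratio *E*<sub>N</sub>/∆*t* for increasing *N*:

```python
import math

for N in [5, 10, 100, 1000]:
    dt = 1.0 / N              # time step, since T = 1 gives dt = T / N
    y_N = (1.0 + dt) ** N     # numerical solution at t = 1, see (1.15)
    E_N = abs(math.e - y_N)   # error against the exact solution y(1) = e
    print(f"N = {N:5d}   E_N = {E_N:.4f}   E_N/dt = {E_N / dt:.4f}")
```

The ratio *E*<sub>N</sub>/∆*t* settles near 1.36, consistent with *E*<sub>N</sub> ≈ 1.36 × ∆*t* and hence with first-order convergence.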

### **1.4 Implicit vs. Explicit Numerical Schemes**

At this point, we will briefly mention the difference between an implicit and an explicit numerical scheme. When a numerical scheme can be written in the generic form

$$
\mathbf{y}^{n+1} = F(\mathbf{y}^n),
$$

like in (1.12), we refer to the scheme as an *explicit scheme*, because y<sup>n+1</sup> can be explicitly computed as a function of y<sup>n</sup>. Conversely, if an equation has to be solved in order to compute y<sup>n+1</sup> based on y<sup>n</sup>, the scheme is referred to as *implicit*. Implicit schemes will be introduced in Chapter 4 of these notes.
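As a small preview (implicit schemes proper come in Chapter 4), consider the implicit (backward) Euler scheme for y' = y, which evaluates the right-hand side at the new time level: (y<sup>n+1</sup> − y<sup>n</sup>)/∆*t* = y<sup>n+1</sup>. Because this particular equation is linear, the implicit equation can be solved by hand, giving y<sup>n+1</sup> = y<sup>n</sup>/(1 − ∆*t*). The Python sketch below is our own illustration, not code from the book:

```python
import math

def explicit_step(y, dt):
    # Explicit: y_{n+1} = (1 + dt) * y_n; nothing needs to be solved.
    return (1.0 + dt) * y

def implicit_step(y, dt):
    # Implicit: (y_{n+1} - y_n) / dt = y_{n+1}. The equation is linear here,
    # so it can be solved by hand: y_{n+1} = y_n / (1 - dt).
    return y / (1.0 - dt)

dt, N = 0.01, 100   # step from t = 0 to T = 1
ye = yi = 1.0       # initial condition y(0) = 1 for both schemes
for n in range(N):
    ye = explicit_step(ye, dt)
    yi = implicit_step(yi, dt)

print(f"exact: {math.e:.4f}   explicit: {ye:.4f}   implicit: {yi:.4f}")
```

For this problem, the explicit scheme slightly underestimates and the implicit scheme slightly overestimates the exact value *e*; both errors shrink in proportion to ∆*t*.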

### **1.5 But What** *Is* **a Differential Equation?**

We have given you one example of a differential equation, and there are many more in the subsequent chapters. The easiest way to grasp what differential equations are is probably by seeing many examples. However, in general, *differential equations are mathematical relations where the unknown is a function, and derivatives of the function are used in the formulation of the equation.* The solution is a function. Classical mathematical questions related to differential equations are: *Existence:* Is there a solution of the equation? *Uniqueness*: Is there only one solution? *Stability*: Can we slightly perturb the parameters of the equation and obtain almost the same solution? And *Solution:* How can we find the solution? In these notes, our emphasis will be on the last question and we will concentrate on numerical solutions of the equations.

### **1.6 Analytical Solutions**

Our focus in these notes will be on numerical methods for solving differential equations. However, we will also encounter some cases where analytical solutions (i.e., solutions given by a formula) can be found. While finding analytical solutions to differential equations is a vast field, these techniques are not typically applicable to the types of problems we will be studying, which is why we rely on numerical methods. In this section, we will indicate a couple of possible approaches to obtaining analytical solutions, to illustrate that it is not entirely mysterious, but we will not delve into the methods in these notes, and you can safely skip this section and jump to Chapter 2 if you want.

First, we will demonstrate an approach for obtaining an analytical solution for the simple example studied earlier. We start by repeating that the differential equation is given by

$$\mathbf{y}'(t) = \mathbf{y}(t) \tag{1.16}$$

with the initial condition y(0) = 1. To find an analytical solution for this problem, we first write the equation in the form

$$\frac{\mathbf{y}^{\prime}(t)}{\mathbf{y}(t)} = 1.\tag{1.17}$$

Now, we can integrate both sides from 0 to *t* and get

$$\int\_0^t \frac{\mathbf{y}'(\tau)}{\mathbf{y}(\tau)} d\tau = \int\_0^t 1 d\tau. \tag{1.18}$$

Therefore,

$$[\ln(\mathbf{y}(\tau))]\_0^t = t,\tag{1.19}$$

or

$$
\ln(\mathbf{y}(t)) - \ln(\mathbf{y}(0)) = t. \tag{1.20}
$$

Since y(0) = 1, and thus ln(y(0)) = 0, (1.20) reads

$$
\ln(\mathbf{y}(t)) = t,\tag{1.21}
$$

and raising *e* to the power of both sides of the equation, we get the analytical solution

$$\mathbf{y}(t) = e^t. \tag{1.22}$$

This way of finding the solution of a differential equation can be generalized to equations of the form

$$F'(\mathbf{y})\mathbf{y}'(t) = F(\mathbf{y}(t))\tag{1.23}$$

with the initial condition y(0) = y<sub>0</sub>, where y<sub>0</sub> is a given number. Here, *F* = *F*(y) is assumed to be some invertible function. By following the steps above we find that


$$[\ln(F(\mathbf{y}(\tau)))]\_0^t = t,\tag{1.24}$$

or

$$\ln\left(\frac{F(\mathbf{y}(t))}{F(\mathbf{y}\_0)}\right) = t,\tag{1.25}$$

so

$$F(\mathbf{y}(t)) = F(\mathbf{y}\_0)e^t,\tag{1.26}$$

and, finally,

$$\mathbf{y}(t) = F^{-1}(F(\mathbf{y}\_0)e^t). \tag{1.27}$$
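As a concrete illustration of (1.27) (our own example, not from the text), take *F*(y) = y<sup>2</sup>, restricted to y > 0 so that *F* is invertible with *F*<sup>−1</sup>(s) = √s. Then (1.23) reads 2y(*t*)y′(*t*) = y(*t*)<sup>2</sup>, i.e., y′(*t*) = y(*t*)/2, and (1.27) gives

$$\mathbf{y}(t) = F^{-1}(\mathbf{y}\_0^2 e^t) = \sqrt{\mathbf{y}\_0^2 e^t} = \mathbf{y}\_0 e^{t/2},$$

which indeed satisfies y′ = y/2 with y(0) = y<sub>0</sub>.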

### **1.6.1 Integrating Factors**

Another way of obtaining analytical solutions of differential equations is by applying *integrating factors*. We can illustrate this technique by considering the equation

$$\mathbf{y}'(t) + p(t)\mathbf{y}(t) = q(t),\tag{1.28}$$

with the initial condition y(0) = y<sub>0</sub>, where y<sub>0</sub> is given. Here, we assume that *p* = *p*(*t*) and *q* = *q*(*t*) are known functions. In addition, we assume that we have a function *P* = *P*(*t*) with the special property that

$$P'(t) = p(t). \tag{1.29}$$

By multiplying (1.28) by the integrating factor

$$e^{P(t)}$$

we get

$$e^{P(t)}\mathbf{y}'(t) + e^{P(t)}p(t)\mathbf{y}(t) = e^{P(t)}q(t). \tag{1.30}$$

Here we observe that (because of (1.29)) the left-hand side can be written in a more compact manner as,

$$e^{P(t)}\mathbf{y}'(t) + e^{P(t)}p(t)\mathbf{y}(t) = (e^{P(t)}\mathbf{y}(t))',\tag{1.31}$$

and therefore (1.30) can be rewritten to read,

$$(e^{P(t)}\mathbf{y}(t))' = e^{P(t)}q(t). \tag{1.32}$$

Integrating both sides from 0 to *t*, we obtain,

$$e^{P(t)}\mathbf{y}(t) - e^{P(0)}\mathbf{y}\_0 = \int\_0^t e^{P(\tau)} q(\tau)d\tau. \tag{1.33}$$

Finally, the solution is given by


$$\mathbf{y}(t) = e^{-P(t)} \left( e^{P(0)} \mathbf{y}\_0 + \int\_0^t e^{P(\tau)} q(\tau) d\tau \right). \tag{1.34}$$

In the case of the simple example equation considered in this chapter, y′(*t*) = y(*t*), we note that the equation can be written in the form (1.28) for *p*(*t*) = −1, *q*(*t*) = 0, and y<sub>0</sub> = 1. Furthermore, (1.29) is fulfilled if *P*(*t*) = −*t*. Inserting these functions into (1.34), we obtain the analytical solution

$$\mathbf{y}(t) = e^{-(-t)} \left( e^0 \cdot \mathbf{1} + \int\_0^t e^{-\tau} \cdot \mathbf{0} d\tau \right) = e^t. \tag{1.35}$$
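As a worked check of formula (1.34), consider the example (our own choice, not from the text) *p*(*t*) = 2, *q*(*t*) = 1, y<sub>0</sub> = 0, so that *P*(*t*) = 2*t* and (1.34) gives y(*t*) = (1 − e<sup>−2t</sup>)/2. A short sketch comparing this formula with a forward Euler solution of (1.28):

```python
import math

# Forward Euler for y'(t) + 2 y(t) = 1, y(0) = 0, compared with the
# integrating-factor formula (1.34): here p(t) = 2, q(t) = 1, P(t) = 2t,
# and (1.34) gives y(t) = (1 - exp(-2t)) / 2.  (This example is ours.)
T, N = 1.0, 10000
dt = T / N
y = 0.0
for n in range(N):
    y = y + dt * (1.0 - 2.0 * y)     # y' = q(t) - p(t) y(t)

exact = 0.5 * (1.0 - math.exp(-2.0 * T))
print(abs(y - exact))   # small, and shrinks linearly with dt
```

The agreement improves linearly as the time step is reduced, in line with the convergence observations made for the simple example above.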

### **1.7 Comments and Further Reading**

Here is a list of suggested further reading on a few of the overarching topics of these notes.



**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 A System of Ordinary Differential Equations**

Above, we introduced a very simple differential equation given by

$$\mathbf{y}'(t) = \mathbf{y}(t). \tag{2.1}$$

This is referred to as an *ordinary differential equation (ODE)* because it involves the derivative with respect to only one variable. If the derivative with respect to more than one variable is involved, the equation is referred to as a *partial differential equation (PDE)*. We will come back to PDEs in Chapter 3, and spend some time discussing how to solve them, but in this chapter, we will keep focusing on ODEs.

The ODE (2.1) is a *scalar* equation because there is only a single unknown function to be found. Now, we will start considering *systems* of ODEs. A typical system of ODEs can be written in the form

$$\mathbf{y}'(t) = F(\mathbf{y}(t)).\tag{2.2}$$

Here, y is a vector and *F* is a vector-valued function.

### **2.1 The FitzHugh-Nagumo Model**

We will introduce numerical methods for systems of ODEs by considering the celebrated<sup>1</sup> FitzHugh-Nagumo model, published by FitzHugh [1] in 1961 and, independently, by Nagumo et al. [2] in 1962. The model is a system of ordinary differential equations with two unknowns, and it is commonly used as a simple model for the action potentials of excitable pacemaker cells.

We consider the following version of the FitzHugh-Nagumo model,

$$v' = c\_1 v(v-a)(1-v) - c\_2 w,\tag{2.3}$$

$$w' = b(v - dw).\tag{2.4}$$

<sup>1</sup> These two papers together are cited more than 11,000 times.

**Fig. 2.1** Numerical solutions v<sub>n</sub> (left) and w<sub>n</sub> (right) of the FitzHugh-Nagumo model specified by (2.3)–(2.5). The numerical scheme used to compute the solutions is specified in (2.8)–(2.9), and we have used ∆t = 1 and the initial conditions v<sub>0</sub> = 0.26 and w<sub>0</sub> = 0.

Here, the constants are given by

$$a = -0.12, \ c\_1 = 0.175, \ c\_2 = 0.03, \ b = 0.011, \ d = 0.55,\tag{2.5}$$

and the unknown functions are v and w. In order to solve the system of equations numerically, we use the steps introduced for the scalar equation in Chapter 1 and start by replacing derivatives by differences. The discrete system then reads

$$\frac{v\_{n+1} - v\_n}{\Delta t} = c\_1 v\_n (v\_n - a)(1 - v\_n) - c\_2 w\_n,\tag{2.6}$$

$$\frac{w\_{n+1} - w\_n}{\Delta t} = b(v\_n - dw\_n). \tag{2.7}$$

Again, we reorganize this system to write it in *computational form*,

$$v\_{n+1} = v\_n + \Delta t \left[ c\_1 v\_n (v\_n - a)(1 - v\_n) - c\_2 w\_n \right],\tag{2.8}$$

$$w\_{n+1} = w\_n + \Delta t [b(v\_n - dw\_n)].\tag{2.9}$$

This time, however, we note that we will need a piece of software to compute the solutions. But it is straightforward to implement, since the numerical solution at time *t*<sub>n+1</sub> is an explicit function of the numerical solution at time *t*<sub>n</sub>.
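For concreteness, the computational form (2.8)–(2.9) can be sketched in a few lines of Python (a minimal sketch; the variable names and the use of NumPy are our choices, not prescribed by the text):

```python
import numpy as np

# Forward Euler for the FitzHugh-Nagumo model, cf. (2.8)-(2.9),
# with the parameter values from (2.5).
a, c1, c2, b, d = -0.12, 0.175, 0.03, 0.011, 0.55
T, N = 5000.0, 5000
dt = T / N                      # dt = 1, as used in Fig. 2.1

v = np.zeros(N + 1)
w = np.zeros(N + 1)
v[0], w[0] = 0.26, 0.0          # initial conditions, cf. Section 2.1.1

for n in range(N):
    v[n + 1] = v[n] + dt * (c1 * v[n] * (v[n] - a) * (1 - v[n]) - c2 * w[n])
    w[n + 1] = w[n] + dt * b * (v[n] - d * w[n])
```

Plotting v and w against the times *t*<sub>n</sub> = *n*∆*t* then gives the kind of oscillatory solutions shown in Fig. 2.1.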

### **2.1.1 Numerical Computations**

We assume that the solution is known initially, so we define (for instance) v<sub>0</sub> = 0.26 and w<sub>0</sub> = 0. Here, it is useful to note that if we put both v<sub>0</sub> and w<sub>0</sub> equal to zero, the solution will remain zero (for both v and w) for all time. But if we perturb v a little, we get very different solutions. In Fig. 2.1, we show the numerical solution from *t* = 0 to *t* = *T* = 5000. In the computation, we have used *N* = 5000, which

**Fig. 2.2** The numerical solutions v<sub>n</sub> and w<sub>n</sub> of the FitzHugh-Nagumo model from Fig. 2.1 displayed in a parametric plot with v<sub>n</sub> on the x-axis and w<sub>n</sub> on the y-axis.

gives ∆*t* = *T*/*N* = 1. In Fig. 2.2, we show the numerical solutions (v<sub>n</sub>, w<sub>n</sub>), for *n* = 1, . . . , *N*, in a parametric plot, and we note that the solutions are periodic. In electrophysiology, such solutions are useful for studying pacemaker cells that keep on creating action potentials at a steady rate.

### **2.2 What Is the Error?**

In the very simple equation in the previous chapter, we had a formula for the analytical solution and a formula for the numerical solution, and thus it was straightforward to find the error introduced by replacing a derivative by a difference. For the FitzHugh-Nagumo equations, this is harder. In numerical analysis there are techniques for proving error bounds for numerical methods. But the proofs tend to be very technical and often involve constants that need to be estimated. So we are looking for something simpler. If we just assume that we have convergence towards the correct solution as ∆*t* tends to zero, we can compute an accurate approximation of the solution by using a very small ∆*t* and then monitoring the convergence towards this highly resolved solution.


**Fig. 2.3** Numerical solutions v<sub>n</sub> (left) and w<sub>n</sub> (right) of the FitzHugh-Nagumo model specified by (2.3)–(2.5). We compare the solutions for a very fine resolution (∆t = 0.001, solid blue line) to the solutions for three cases of coarser resolution (dotted orange line). In the upper panel, we consider ∆t = 10; in the middle panel, we consider ∆t = 5; and in the lower panel, we consider ∆t = 1. To improve visibility, we only show the solutions from t = 4000 to t = 5000. As ∆t is decreased, the coarse numerical solutions become more similar to the numerical solution computed with a very small ∆t, and for ∆t = 1 the two solutions are indistinguishable.

For simplicity, we assume that we are merely interested in the error at the final time *t* = *T* = 5000. We first compute the solution using an extremely fine resolution (∆*t* = 0.001) and regard that as the correct solution at time *T*. Then, we compute solutions for varying resolutions (different values of the time step ∆*t*) and compare the "correct" and approximate solutions. In Fig. 2.3, we compare the solutions for a few different choices of ∆*t*. The error defined by *E*<sub>N</sub> = |v − v<sub>N</sub>| + |w − w<sub>N</sub>|, where (v, w) denotes the fine-scale solution at time *T*, is given in Table 2.1. Again, we observe that *E*<sub>N</sub>/∆*t* is more or less constant, and we therefore conclude that the error in the numerical solution is proportional to the time step ∆*t*. In other words, the convergence is linear (or first order) in ∆*t*.
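The convergence experiment just described can be sketched as follows (our own code; to keep the run time down we use a shorter final time, T = 100, and a reference step of ∆t = 0.001, rather than T = 5000 as in the text):

```python
import numpy as np

# Compare coarse solutions of the FitzHugh-Nagumo model at t = T
# against a finely resolved reference solution, as in Section 2.2.
# Note: T = 100 is our illustrative choice (the text uses T = 5000).
a, c1, c2, b, d = -0.12, 0.175, 0.03, 0.011, 0.55

def solve(dt, T=100.0):
    N = int(round(T / dt))
    v, w = 0.26, 0.0
    for n in range(N):
        v, w = (v + dt * (c1 * v * (v - a) * (1 - v) - c2 * w),
                w + dt * b * (v - d * w))
    return v, w

v_ref, w_ref = solve(0.001)          # regarded as the "correct" solution
results = {}
for dt in [1.0, 0.5, 0.25]:
    v, w = solve(dt)
    results[dt] = abs(v - v_ref) + abs(w - w_ref)
    print(dt, results[dt], results[dt] / dt)   # E/dt should be roughly constant
```

As the time step is halved, the error should roughly halve as well, illustrating the linear (first order) convergence discussed above.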

### **2.3 Upstroke Velocity and Action Potential Duration**

In applications of the FitzHugh-Nagumo model, the unknown function v is often used to represent the membrane potential of an excitable cell, e.g., a neuron or a cardiac cell, firing a sequence of action potentials. A single action potential from the solution of the FitzHugh-Nagumo model is illustrated in Fig. 2.4. In general terms, the action potential first consists of a period during which the value of v increases slowly, followed by a more rapid increase (the upstroke). Then, v decreases relatively slowly for a while before it decreases rapidly back to the minimum value and starts increasing again. In this setting, we often refer to v increasing as *depolarization*, and v decreasing as *repolarization.* We will come back to these terms below where we introduce models with proper physical units.

By solving the FitzHugh-Nagumo model equations for different values of the model constants, or parameters (*a*, *c*<sub>1</sub>, *c*<sub>2</sub>, *b*, and *d*), we could gain some insight into how the parameters affect the firing of action potentials. For example, we could investigate how the parameters affect the frequency of firing or the shape of the fired action potentials. Two properties that are of interest regarding the shape of the action potential are the *maximal upstroke velocity* and the *action potential duration*. The maximal upstroke velocity is often defined as the maximum value of the derivative of v with respect to time. Using a finite difference approximation of the derivative, this can be defined as

$$\max\_{n} \left( \frac{v\_{n+1} - v\_n}{\Delta t} \right). \tag{2.10}$$

The action potential duration is often defined in terms of a given percentage of repolarization, for example APD50 or APD90, for 50% or 90% repolarization, respectively. Here, APD50 represents the duration from the start of the action potential until the membrane potential reaches a value that is 50% repolarized (i.e., at v<sub>50</sub> = 0.5 (max<sub>n</sub>(v<sub>n</sub>) + min<sub>n</sub>(v<sub>n</sub>))), at *t*<sup>down</sup><sub>50</sub>. Similarly, APD90 is defined as the duration from the start of the action potential until the membrane potential reaches a value that is 90% repolarized (i.e., at v<sub>90</sub> = max<sub>n</sub>(v<sub>n</sub>) − 0.9(max<sub>n</sub>(v<sub>n</sub>) − min<sub>n</sub>(v<sub>n</sub>))), at *t*<sup>down</sup><sub>90</sub>. The start of the action potential used in the definition of the action potential duration can, for example, be defined as the point in time when the maximal

**Fig. 2.5** The numerical solution, v<sub>n</sub>, of the FitzHugh-Nagumo model, defined by the two equations v′ = c<sub>1</sub>v(v − a)(1 − v) − c<sub>2</sub>w and w′ = b(v − dw). The parameters are as specified in (2.5), except that in each row of the figure, the value of either a, c<sub>1</sub>, c<sub>2</sub>, b, or d is adjusted. The title above each plot specifies the parameter change.

upstroke velocity occurs, or when the membrane potential crosses the 50% or 90% repolarization threshold, v<sub>50</sub> or v<sub>90</sub>, during the upstroke, denoted by *t*<sup>up</sup><sub>50</sub> or *t*<sup>up</sup><sub>90</sub>, respectively. In the latter case, APD50 and APD90 can be defined as

$$\text{APD50} = t\_{50}^{\text{down}} - t\_{50}^{\text{up}},\tag{2.11}$$

$$\text{APD90} = t\_{90}^{\text{down}} - t\_{90}^{\text{up}}.\tag{2.12}$$

Such a definition of APD50 is illustrated in Fig. 2.4.
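As an illustration, the maximal upstroke velocity (2.10) and APD50 (2.11) can be computed from a numerical solution as follows (a sketch of ours; the choice ∆t = 1 and the crossing-based detection of *t*<sup>up</sup><sub>50</sub> and *t*<sup>down</sup><sub>50</sub> follow the definitions above):

```python
import numpy as np

# Maximal upstroke velocity (2.10) and APD50 (2.11) from a numerical
# solution of the FitzHugh-Nagumo model with the parameters (2.5).
a, c1, c2, b, d = -0.12, 0.175, 0.03, 0.011, 0.55
T, dt = 5000.0, 1.0
N = int(T / dt)
v = np.zeros(N + 1)
w = np.zeros(N + 1)
v[0] = 0.26
for n in range(N):
    v[n + 1] = v[n] + dt * (c1 * v[n] * (v[n] - a) * (1 - v[n]) - c2 * w[n])
    w[n + 1] = w[n] + dt * b * (v[n] - d * w[n])

# Maximal upstroke velocity, cf. (2.10).
max_upstroke = ((v[1:] - v[:-1]) / dt).max()

# APD50, cf. (2.11): time between an upward crossing of the 50%
# repolarization level v50 and the following downward crossing.
v50 = 0.5 * (v.max() + v.min())
up = np.where((v[:-1] < v50) & (v[1:] >= v50))[0]    # upward crossings
down = np.where((v[:-1] >= v50) & (v[1:] < v50))[0]  # downward crossings
t_up = up[0] * dt
t_down = down[down > up[0]][0] * dt
apd50 = t_down - t_up
print(max_upstroke, apd50)
```

The same pattern, with the 90% level v<sub>90</sub>, gives APD90 as in (2.12).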

In Fig. 2.5, we show the numerical solution, v<sub>n</sub>, of the FitzHugh-Nagumo model with different choices of parameters. In each row, we consider three different values of one of the parameters *a*, *c*<sub>1</sub>, *c*<sub>2</sub>, *b*, or *d*, and keep the remaining values fixed at the values specified in (2.5). In the plots, we observe that increasing the value of *c*<sub>1</sub> appears to make the action potentials longer and the firing frequency slower, whereas the opposite effect is observed when *c*<sub>2</sub> or *b* is increased. In Fig. 2.6, we study the effects on the individual action potentials more closely. In the left panel, we have zoomed in on the points in time representing the action potential upstroke. We observe that decreasing the value of *c*<sub>1</sub> reduces the upstroke velocity, but changing the other parameters does not seem to have a significant effect on the upstroke. In the

**Fig. 2.6** The numerical solution, v<sub>n</sub>, of the FitzHugh-Nagumo model, defined by the two equations v′ = c<sub>1</sub>v(v − a)(1 − v) − c<sub>2</sub>w and w′ = b(v − dw). The parameters are as specified in (2.5), except that in each row, the value of either a, c<sub>1</sub>, c<sub>2</sub>, b, or d is adjusted. The legends at the right-hand side of each row specify the parameter changes. The time axes of the solutions are adjusted such that the maximal upstroke velocity occurs at the same time for all the parameter changes. The left panel shows the upstroke of the action potential and the right panel shows one action potential for each parameter set.

right panel, we consider a single action potential for the different parameter choices. We observe that all the parameters have a significant effect on the action potential duration.

### **References**

1. FitzHugh, R.: Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal **1**(6), 445–466 (1961)
2. Nagumo, J., Arimoto, S., Yoshizawa, S.: An active pulse transmission line simulating nerve axon. Proceedings of the IRE **50**(10), 2061–2070 (1962)


# **Chapter 3 The Diffusion Equation**

The diffusion equation appears in many applications in science and engineering, and computational physiology is no exception. In its most basic form, the diffusion equation is also useful as an example of how to deal with a PDE using numerical methods. We will start by considering it as a stand-alone model, but in the next chapters we will study it in combination with non-linear ODEs. This chapter therefore serves as a warm-up for the more complex models. We will also follow the path we started above. In the very simplest case of an ODE, we found a formula for the solution of both the differential equation and the numerical scheme approximating the equation. One nice consequence of this is that, as we saw above, we can explicitly study the error introduced by the numerical approximation. In this chapter, we will use the same approach to study the error of the numerical approximation of the diffusion equation.

### **3.1 The Problem = Equation + Initial Values + Boundary Conditions**

We consider the diffusion equation

$$\frac{\partial u}{\partial t}(t, x) = \frac{\partial^2 u}{\partial x^2}(t, x). \tag{3.1}$$

Here, *u* models<sup>1</sup> the concentration of a chemical in a spatial domain, *x* is the spatial variable, and *t* is time. Since partial derivatives are present, the equation is a PDE (see page 13). We study the problem for values of *x* in the spatial domain (0, 1) and for time *t* in the interval (0, *T*]. We will assume that the solution is given by *u*<sup>0</sup>(*x*) at time *t* = 0, and that *u* = 0 at *x* = 0 and *x* = 1 for all time in the interval [0, *T*].

<sup>1</sup> In the beginning of these notes, we treat all models as unitless. In models of electrophysiology, units can be a true nightmare, so we follow the prudent path of trying to deal with one nightmare at a time. Units will come later.

The function *u*<sup>0</sup> = *u*<sup>0</sup>(*x*) is called the *initial condition*, and the conditions *u* = 0 at *x* = 0 and *x* = 1 are called *boundary conditions*. In particular, these boundary conditions are referred to as Dirichlet boundary conditions, as opposed to Neumann boundary conditions, which will be introduced below.

We will consider the special case of

$$
u^0(x) = \sin(\pi x).\tag{3.2}
$$

For that particular initial condition, the analytical solution of the diffusion equation (3.1) is given by

$$
u(t, x) = e^{-\pi^2 t} \sin(\pi x).\tag{3.3}
$$

This can be verified by observing that the boundary conditions are satisfied, since clearly *u*(*t*, 0) = *u*(*t*, 1) = 0 because sin(0) = sin(π) = 0, and the initial condition is also satisfied since *u*(0, *x*) = sin(π*x*) = *u*<sup>0</sup>(*x*). It remains to be seen whether equation (3.1) is satisfied by *u*(*t*, *x*). To this end, we note that

$$\frac{\partial u}{\partial t} = -\pi^2 e^{-\pi^2 t} \sin(\pi x) = -\pi^2 u,\tag{3.4}$$

and

$$\frac{\partial^2 u}{\partial x^2}(t, x) = -\pi^2 e^{-\pi^2 t} \sin(\pi x) = -\pi^2 u,\tag{3.5}$$

so (3.1) holds.
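The verification above can also be spot-checked numerically (a small sketch of ours, independent of the numerical scheme introduced below): evaluating *u* at a few points and approximating *u*<sub>t</sub> and *u*<sub>xx</sub> by small central differences should give a residual close to zero.

```python
import math

# Numerical spot-check that u(t, x) = exp(-pi^2 t) sin(pi x) satisfies
# u_t = u_xx, using central differences with a small step h.
def u(t, x):
    return math.exp(-math.pi ** 2 * t) * math.sin(math.pi * x)

h = 1e-4
max_residual = 0.0
for (t, x) in [(0.1, 0.3), (0.5, 0.7), (1.0, 0.25)]:
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_xx = (u(t, x - h) - 2 * u(t, x) + u(t, x + h)) / h ** 2
    max_residual = max(max_residual, abs(u_t - u_xx))
print(max_residual)   # close to zero
```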

### **3.2 The Numerical Scheme**

We define a numerical approximation of the problem simply by replacing derivatives by differences. In order to replace the left-hand side of (3.1), we use the same formula as above and state that

$$\frac{\partial u}{\partial t} \approx \frac{u(t + \Delta t, x) - u(t, x)}{\Delta t}. \tag{3.6}$$

To approximate the right-hand side of (3.1), we need to recall from calculus that the Taylor series of *u* states that

$$u(t, \mathbf{x} + \Delta \mathbf{x}) \approx u(t, \mathbf{x}) + \Delta x u\_x(t, \mathbf{x}) + \frac{1}{2} \Delta x^2 u\_{xx}(t, \mathbf{x}),\tag{3.7}$$

where *u*<sub>x</sub> and *u*<sub>xx</sub> are shorthand for ∂u(t, x)/∂x and ∂<sup>2</sup>u(t, x)/∂x<sup>2</sup>, respectively. Similarly,

$$
u(t, x - \Delta x) \approx u(t, x) - \Delta x u\_x(t, x) + \frac{1}{2} \Delta x^2 u\_{xx}(t, x), \tag{3.8}
$$

and therefore, by adding (3.7) and (3.8), we get the approximation,


$$
u\_{xx}(t, x) \approx \frac{u(t, x - \Delta x) - 2u(t, x) + u(t, x + \Delta x)}{\Delta x^2}. \tag{3.9}
$$

As above, we introduce *t*<sub>n</sub> = *n* × ∆*t*, where ∆*t* = *T*/*N* for a sufficiently large integer *N*. Furthermore, we introduce the spatial mesh points given by *x*<sub>j</sub> = (*j* − 1) × ∆*x* for *j* = 1, ..., *M*, where ∆*x* = 1/(*M* − 1) for a suitable<sup>2</sup> integer *M*.

We have defined the points {(*t*<sub>n</sub>, *x*<sub>j</sub>)}, and we let *u*<sup>n</sup><sub>j</sub> denote the numerical approximation at these points. It remains to define these values. The beginning is very simple; we clearly want the numerical scheme to satisfy the initial condition, so we define

$$
u\_j^0 = u^0(x\_j),\tag{3.10}
$$

and for our special problem we have *u*<sup>0</sup><sub>j</sub> = sin(π*x*<sub>j</sub>). Next, we apply the boundary conditions and define

$$
u\_1^n = 0,\tag{3.11}
$$

and

$$
u\_M^n = 0,\tag{3.12}
$$

for all *n* ≤ *N*. We still need to find the values of *u*<sup>n</sup><sub>j</sub> for *j* = 2, . . . , *M* − 1 and 0 < *n* ≤ *N*. We find these by using the finite difference approximations (3.6) and (3.9). Specifically, we define *u*<sup>n</sup><sub>j</sub> by the following numerical scheme,

$$\frac{u\_j^{n+1} - u\_j^n}{\Delta t} = \frac{u\_{j-1}^n - 2u\_j^n + u\_{j+1}^n}{\Delta x^2}. \tag{3.13}$$

By defining

$$
\rho = \frac{\Delta t}{\Delta x^2},
\tag{3.14}
$$

we can write the scheme in computational form,

$$
u\_j^{n+1} = \rho u\_{j-1}^n + (1 - 2\rho)u\_j^n + \rho u\_{j+1}^n. \tag{3.15}
$$

Starting with *n* = 0, we note that we can compute all the values at time *t*<sub>1</sub> = ∆*t*, since *u*<sup>1</sup><sub>j</sub> only depends on values at the time *t*<sub>0</sub> = 0, and these are given by the initial condition *u*<sup>0</sup><sub>j</sub> (*j* = 1, . . . , *M*). When the values at time *t*<sub>1</sub> = ∆*t* have been computed, we can use them to compute all the values at *t* = 2∆*t*, and so forth. The scheme is illustrated in Fig. 3.1.
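A minimal Python sketch of the scheme (3.15), with the Dirichlet conditions (3.11)–(3.12) and the initial condition (3.2), could read as follows (the mesh sizes are our illustrative choices):

```python
import numpy as np

# Explicit scheme (3.15) for the diffusion equation with Dirichlet
# boundary conditions and initial condition u^0(x) = sin(pi x).
M = 11                       # number of spatial mesh points (illustrative)
dx = 1.0 / (M - 1)
dt = 0.4 * dx ** 2           # keeps rho = dt/dx^2 below 1/2 (see Section 3.4)
rho = dt / dx ** 2

x = np.linspace(0.0, 1.0, M)
u = np.sin(np.pi * x)        # u_j^0 = sin(pi x_j), cf. (3.10)

T = 0.1
N = int(round(T / dt))
for n in range(N):
    u_new = u.copy()
    u_new[1:-1] = rho * u[:-2] + (1 - 2 * rho) * u[1:-1] + rho * u[2:]
    u_new[0] = u_new[-1] = 0.0   # Dirichlet conditions (3.11)-(3.12)
    u = u_new
```

After the loop, u approximates the analytical solution u(T, x) = e<sup>−π²T</sup> sin(πx) from (3.3).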

### **3.3 Analytical Solution of the Numerical Scheme and Error**

This subsection is very technical and can be skipped without serious consequences (jump to Section 3.4). The purpose of the section is to provide formulas for the

<sup>2</sup> 'Sufficiently large integer' and 'suitable integer' are intentionally vague – stay calm – it will become clear (clearer) later on.

**Fig. 3.1** Illustration of the explicit finite difference scheme for the diffusion equation. The solutions u<sup>n</sup><sub>j−1</sub>, u<sup>n</sup><sub>j</sub> and u<sup>n</sup><sub>j+1</sub> from the previous time step are used to compute the new solution, u<sup>n+1</sup><sub>j</sub>, at the next time step. The scheme is given by (3.15).

numerical solution and the associated error for a PDE, as we did for an ODE in (1.13). Such formulas are available only in rare cases, but they can be useful for analyzing the error of the numerical approximation.

For a general initial condition, we have to write a program to use the scheme (3.15), and we will do that below, but for the special initial condition (3.2), we can find a formula for the numerical solution. It is given by

$$
u\_j^n = (1 - \Delta t \,\mu)^n \sin(\pi x\_j), \tag{3.16}
$$

where

$$
\mu = \frac{4}{\Delta x^2} \sin^2(\pi \Delta x/2). \tag{3.17}
$$

In order to show that this is the solution, we first check that the formula holds at time *t*<sub>0</sub>. Inserting *n* = 0 into (3.16), we get *u*<sup>0</sup><sub>j</sub> = sin(π*x*<sub>j</sub>), which matches the numerical solution at time *t*<sub>0</sub> = 0 (see (3.2) and (3.10)). Next, we assume that the formula is correct at time *t*<sub>n</sub> and show that it also holds at time *t*<sub>n+1</sub>; then the formula holds by induction. By inserting (3.16) into (3.15), we get,

$$u\_j^{n+1} = \rho u\_{j-1}^n + (1 - 2\rho)u\_j^n + \rho u\_{j+1}^n \tag{3.18}$$

$$= (1 - \Delta t \mu)^n [\rho \sin(\pi x\_{j-1}) + (1 - 2\rho)\sin(\pi x\_j) + \rho \sin(\pi x\_{j+1})]. \tag{3.19}$$

We need two trigonometric identities from calculus to proceed from here,

$$1 - \cos(y) = 2\sin^2(y/2),\tag{3.20}$$

and

$$
\sin(x + y) + \sin(x - y) = 2\cos(y)\sin(x).\tag{3.21}
$$

These identities hold for all values of *x* and *y*. We define

$$z\_j = \rho \sin(\pi x\_{j-1}) + (1 - 2\rho)\sin(\pi x\_j) + \rho \sin(\pi x\_{j+1})\tag{3.22}$$

and rewrite it as

$$z\_{j} = \sin(\pi \mathbf{x}\_{j}) + \rho \left[ \sin(\pi \mathbf{x}\_{j-1}) + \sin(\pi \mathbf{x}\_{j+1}) - 2 \sin(\pi \mathbf{x}\_{j}) \right] \tag{3.23}$$

$$= \sin(\pi x\_j) + \rho[\sin(\pi x\_j - \pi \Delta x) + \sin(\pi x\_j + \pi \Delta x) - 2\sin(\pi x\_j)],\tag{3.24}$$

so by (3.21), we get

$$z\_j = \sin(\pi x\_j) + \rho[2\cos(\pi \Delta x)\sin(\pi x\_j) - 2\sin(\pi x\_j)]\tag{3.25}$$

$$= \sin(\pi x\_j) + 2\rho[\cos(\pi \Delta x) - 1]\sin(\pi x\_j). \tag{3.26}$$

By (3.20), we now get,

$$z\_j = \sin(\pi x\_j) - 4\rho \sin^2(\pi \Delta x / 2)\sin(\pi x\_j) \tag{3.27}$$

$$= \sin(\pi x\_j)[1 - 4\rho \sin^2(\pi \Delta x / 2)],\tag{3.28}$$

so, by (3.14) and (3.16), we have

$$z\_j = \left[1 - \Delta t \mu\right] \sin(\pi x\_j). \tag{3.29}$$

Here, µ is defined by (3.17). By combining (3.19), (3.22) and (3.29), we find that

$$
u\_j^{n+1} = (1 - \Delta t \,\mu)^{n+1} \sin(\pi x\_j), \tag{3.30}
$$

and thus the formula (3.16) holds by induction.
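The induction argument can also be confirmed numerically (a sketch of ours): running the scheme (3.15) and evaluating the formula (3.16), with µ from (3.17), should agree to round-off.

```python
import numpy as np

# Check that formula (3.16) reproduces the scheme (3.15) exactly
# (up to round-off) for the initial condition (3.2).
M = 21
dx = 1.0 / (M - 1)
dt = 0.4 * dx ** 2
rho = dt / dx ** 2
mu = 4.0 / dx ** 2 * np.sin(np.pi * dx / 2) ** 2   # cf. (3.17)

x = np.linspace(0.0, 1.0, M)
u = np.sin(np.pi * x)
steps = 50
for n in range(steps):
    u_new = u.copy()
    u_new[1:-1] = rho * u[:-2] + (1 - 2 * rho) * u[1:-1] + rho * u[2:]
    u_new[0] = u_new[-1] = 0.0
    u = u_new

formula = (1 - dt * mu) ** steps * np.sin(np.pi * x)
print(np.max(np.abs(u - formula)))   # round-off sized
```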

### **3.3.1 What Is the Error?**

Since we now have the analytical solution of the PDE (3.1) given by

$$u(t, \mathbf{x}) = e^{-\pi^2 t} \sin(\pi \mathbf{x})\tag{3.31}$$

and a formula (3.16) for the numerical solution

$$
u\_j^n = (1 - \Delta t \mu)^n \sin(\pi x\_j) \tag{3.32}
$$

it is straightforward to compare the analytical and numerical solutions. The spatial parts of the solutions are identical, and therefore it is sufficient to consider the temporal part of the solution. We thus consider the relative error in time defined by

$$E\_n = \frac{|(1 - \Delta t \mu)^n - e^{-\pi^2 t\_n}|}{e^{-\pi^2 t\_n}}.\tag{3.33}$$

In Table 3.1, we show *E*<sub>N</sub>, where *N* = 1/∆*t*, for several values of ∆*t*. We also show *E*<sub>N</sub>/∆*t*, and we see that, again, this ratio is more or less constant; we therefore again have linear convergence, with *E*<sub>N</sub> ≈ 48 × ∆*t*.

**Table 3.1** Error of the temporal part of the numerical solution of the diffusion equation at t = T = 1 for ∆x = 0.001 and different values of ∆t = T/N. The error is defined in (3.33).
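Numbers of the kind reported in Table 3.1 can be reproduced directly from (3.33) (a sketch; the specific ∆t values below are our choices):

```python
import math

# Relative temporal error (3.33) at t = T = 1 for dx = 0.001.
dx = 0.001
mu = 4.0 / dx ** 2 * math.sin(math.pi * dx / 2) ** 2   # cf. (3.17)
exact = math.exp(-math.pi ** 2)

rows = []
for dt in [1e-3, 1e-4, 1e-5]:
    N = int(round(1.0 / dt))
    E = abs((1 - dt * mu) ** N - exact) / exact
    rows.append((dt, E, E / dt))
    print(dt, E, E / dt)   # E/dt is roughly constant, around 48
```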


### **3.3.2 More on the Error**

We can go one step further in comparing the analytical (3.31) and numerical (3.32) solutions using the formulas for these solutions. Since the spatial dependencies of the solutions are equal, we simply pick *x* = 1/2, and then we want to compare the analytical solution

$$U(T) = u(T, 1/2) = e^{-\pi^2 T},\tag{3.34}$$

and the numerical solution

$$U^N = u\_{(M+1)/2}^N = (1 - \Delta t \mu)^N,\tag{3.35}$$

at the final time *t* = *T* = *N* × ∆*t*. Here we have assumed that *M* is an odd number. Recall, again from calculus, that the Taylor series of the sine function states that<sup>3</sup>

$$
\sin(z) = z + O(z^3),
\tag{3.36}
$$

so for small values of *z*, we have sin(*z*) ≈ *z*. We can use this to approximate the value of

$$
\mu = \frac{4}{\Delta x^2} \sin^2(\pi \Delta x/2) \tag{3.37}
$$

for small values of ∆*x*. By the Taylor series, we get

$$
\mu \approx \frac{4}{\Delta x^2} (\pi \Delta x / 2)^2 = \pi^2. \tag{3.38}
$$

So the numerical solution satisfies

$$U^N \approx (1 - \Delta t \pi^2)^N. \tag{3.39}$$

We can also approximate the analytical solution using the following Taylor series for the exponential function,

<sup>3</sup> You don't remember the O-notation? It is shorthand for telling how fast something goes to zero. So f(x) = O(x<sup>2</sup>) means that f(x) goes to zero as fast as x<sup>2</sup> goes to zero for small values of x. In numerical analysis, this notation is often used to indicate the size of an error term. Often, it is the remainder of a truncated Taylor series.

$$e^{-z} = 1 - z + O(z^2). \tag{3.40}$$

By using this approximation, we find that the analytical solution at *x* = 1/2 can be approximated by

$$U(T) = e^{-\pi^2 T} = e^{-\pi^2 N \Delta t} = (e^{-\pi^2 \Delta t})^N \approx (1 - \pi^2 \Delta t)^N,\tag{3.41}$$

hence, clearly, we have

$$U^N \approx U(T). \tag{3.42}$$

### **3.4 Instabilities in the Numerical Solution**

Numerical instabilities often appear, and it can be hard to understand their origin. Here, we will show instabilities that appear because the time steps are too long, but keep in mind that a long list of other instabilities can appear as well. Differential equations can have oscillatory solutions, but as the mesh parameters are refined, the numerical solution should converge towards the correct solution. Numerical instabilities, on the other hand, manifest themselves by diverging, rather than converging, as the mesh is refined.

### **3.4.1 Specification of a Stable Problem with Unstable Numerical Solutions**

Suppose you have a tank of length 1 filled with water, where ink has been added to the water for *x* ≤ 1/2 but no ink is present for *x* > 1/2. In the middle of the tank, at *x* = 1/2, there is an impermeable membrane, so no ink leaks from the left to the right side of the tank. At time *t* = 0, we remove the membrane and we are curious about what is going to happen. Intuitively, we expect the ink to diffuse to the right-hand side of the tank and that, eventually, the ink will be uniformly distributed throughout the tank. Qualitatively, we can get an impression of what happens by solving the diffusion equation

$$\frac{\partial u}{\partial t}(t, \mathbf{x}) = \frac{\partial^2 u}{\partial \mathbf{x}^2}(t, \mathbf{x}). \tag{3.43}$$

We will use the initial condition *u*(0, *x*) = 1 for *x* ≤ 1/2 and *u*(0, *x*) = 0 for *x* > 1/2. The boundary conditions are given by

$$\frac{\partial u}{\partial x}(t,0) = \frac{\partial u}{\partial x}(t,1) = 0,\tag{3.44}$$

i.e., that the spatial derivative is zero at the boundary. This is referred to as a no-flux boundary condition, or a Neumann boundary condition, and simply means that we don't allow the ink to leak out of the tank.

### **3.4.2 Numerical Scheme for Neumann Boundary Conditions**

In order to deal with this alternative set of boundary conditions in the numerical scheme, we consider the two Taylor series of *u* considered in Section 3.2, i.e.,

$$u(t, x + \Delta x) \approx u(t, x) + \Delta x\, u_x(t, x) + \frac{1}{2} \Delta x^2 u_{xx}(t, x),\tag{3.45}$$

and

$$u(t, x - \Delta x) \approx u(t, x) - \Delta x\, u_x(t, x) + \frac{1}{2} \Delta x^2 u_{xx}(t, x).\tag{3.46}$$

By subtracting (3.46) from (3.45), we obtain

$$u(t, x + \Delta x) - u(t, x - \Delta x) \approx 2\Delta x\, u_x(t, x), \tag{3.47}$$

which yields

$$u_x(t, x) \approx \frac{u(t, x + \Delta x) - u(t, x - \Delta x)}{2\Delta x}.\tag{3.48}$$

Replacing the derivative by this difference in the boundary condition (3.44), we get

$$\frac{u\_2^n - u\_0^n}{2\Delta x} = 0,\qquad\qquad\qquad\frac{u\_{M+1}^n - u\_{M-1}^n}{2\Delta x} = 0,\tag{3.49}$$

which yield

$$u_0^n = u_2^n, \qquad\qquad\qquad u_{M+1}^n = u_{M-1}^n, \tag{3.50}$$

for all *n* ≤ *N*. Inserting (3.50) into the main scheme (3.15) for *j* = 1 and *j* = *M*, we obtain

$$u\_1^{n+1} = (1 - 2\rho)u\_1^n + 2\rho u\_2^n,\tag{3.51}$$

$$u_M^{n+1} = (1 - 2\rho)u_M^n + 2\rho u_{M-1}^n. \tag{3.52}$$
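The scheme (3.51)–(3.52), combined with the interior update (3.15), is straightforward to implement. The following Python sketch is our own illustration (not part of the software associated with these notes), using the grid parameters of Fig. 3.2:

```python
import numpy as np

# Explicit scheme for u_t = u_xx with no-flux (Neumann) boundaries.
dx, dt = 0.02, 0.0001
x = np.arange(0.0, 1.0 + dx / 2, dx)
rho = dt / dx**2                       # 0.25, within the stability limit

u = np.where(x <= 0.5, 1.0, 0.0)       # ink on the left half only

def step(u, rho):
    """One explicit time step; (3.51)-(3.52) handle the boundary points."""
    un = np.empty_like(u)
    un[1:-1] = rho * u[:-2] + (1 - 2 * rho) * u[1:-1] + rho * u[2:]
    un[0] = (1 - 2 * rho) * u[0] + 2 * rho * u[1]      # uses u_0^n = u_2^n
    un[-1] = (1 - 2 * rho) * u[-1] + 2 * rho * u[-2]   # uses u_{M+1}^n = u_{M-1}^n
    return un

for _ in range(1000):                  # advance to t = 0.1
    u = step(u, rho)
```

As expected, the computed solution stays between 0 and 1 and drifts towards a uniform distribution of ink.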

### **3.4.3 Example of Instabilities in the Numerical Solution**

In Fig. 3.2 we have solved the problem numerically using $\Delta t = 0.0001$ and $\Delta x = 0.02$, and we note that the solution evolves as we expected. But in Fig. 3.3, we attempt to take longer time steps, and then we see wild oscillations in the numerical solution. This is a classical type of numerical instability that often appears in numerical methods. Usually it helps to reduce the time steps, and it certainly helps in this case.

**Fig. 3.2** Numerical solution of the diffusion equation with boundary conditions $\frac{\partial u}{\partial x}(t, 0) = \frac{\partial u}{\partial x}(t, 1) = 0$ and initial conditions $u(0, x) = 1$ for $x \le 1/2$ and $u(0, x) = 0$ for $x > 1/2$, using $\Delta t = 0.0001$ and $\Delta x = 0.02$. The solution is plotted at five different time points.

**Fig. 3.3** Numerical solution of the diffusion equation with boundary conditions $\frac{\partial u}{\partial x}(t, 0) = \frac{\partial u}{\partial x}(t, 1) = 0$ and initial conditions $u(0, x) = 1$ for $x \le 1/2$ and $u(0, x) = 0$ for $x > 1/2$, using $\Delta t = 0.001$ and $\Delta x = 0.02$. For these values of $\Delta t$ and $\Delta x$, we get wild oscillations in the numerical solution. Note that the scaling of the y-axis is different in each row of the figure, corresponding to different points in time.

### **3.5 Stability Condition for the Numerical Scheme**

The diffusion equation satisfies a maximum principle stating that the solution will always be bounded by the values of the initial condition and the boundary conditions. We will show that the numerical solution generated by the scheme (3.15) satisfies the same principle, provided that the time step is sufficiently small. For simplicity we again assume that the boundary conditions are given by *u* = 0 for *x* = 0 and *x* = 1. Concretely, we consider the scheme

$$u_j^{n+1} = \rho u_{j-1}^n + (1 - 2\rho)u_j^n + \rho u_{j+1}^n,\tag{3.53}$$

where we recall that

$$
\rho = \frac{\Delta t}{\Delta x^2},\tag{3.54}
$$

and where the boundary conditions are given by $u_1^n = 0$ and $u_M^n = 0$. Define

$$u_+ = \max_j |u_j^0|, \tag{3.55}$$

i.e., the biggest (in absolute value) initial value, and assume that $\rho \le 1/2$, i.e., we assume that

$$
\Delta t \le \frac{\Delta x^2}{2}.\tag{3.56}
$$

Now, we want to prove that

$$|u_j^n| \le u_+ \tag{3.57}$$

for all $j \in [2, M-1]$ and $n \ge 0$. We will show this by induction; note that (3.57) holds for $n = 0$ by the definition of $u_+$. Assuming that (3.57) holds for an arbitrary value of $n$, the scheme (3.53) and the fact that $1 - 2\rho \ge 0$ give<sup>4</sup>

$$\begin{aligned} |u_j^{n+1}| &= |\rho u_{j-1}^n + (1 - 2\rho)u_j^n + \rho u_{j+1}^n| \\ &\le \rho |u_{j-1}^n| + (1 - 2\rho)|u_j^n| + \rho |u_{j+1}^n| \\ &\le \rho u_+ + (1 - 2\rho)u_+ + \rho u_+ \\ &= u_+ \end{aligned}$$

and therefore (3.57) holds by induction.

We also note that the criterion (3.56) is in agreement with the results observed in the numerical example in the previous section. In that case, we had $\frac{\Delta x^2}{2} = 0.0002$. Thus, for the stable case of $\Delta t = 0.0001$ (Fig. 3.2), the stability criterion (3.56) was satisfied, whereas for the unstable case of $\Delta t = 0.001$ (Fig. 3.3), it was not.
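The maximum principle can also be checked numerically. The following Python sketch (our own illustration) runs the scheme (3.53) with homogeneous Dirichlet boundary conditions for the two time steps used above; the discontinuous initial condition has $u_+ = 1$:

```python
import numpy as np

def max_after(dt, dx, n_steps):
    """Advance the explicit scheme (3.53) with u = 0 at the boundaries and
    return the largest |u_j^n| after n_steps time steps."""
    x = np.arange(0.0, 1.0 + dx / 2, dx)
    u = np.where(x <= 0.5, 1.0, 0.0)   # u_+ = 1
    u[0] = u[-1] = 0.0                 # Dirichlet boundary values
    rho = dt / dx**2
    for _ in range(n_steps):
        un = u.copy()
        un[1:-1] = rho * u[:-2] + (1 - 2 * rho) * u[1:-1] + rho * u[2:]
        u = un
    return float(np.abs(u).max())

print(max_after(0.0001, 0.02, 100))    # rho = 0.25 <= 1/2: bounded by u_+ = 1
print(max_after(0.001, 0.02, 100))     # rho = 2.5  >  1/2: the solution explodes
```

In the stable case the solution never exceeds $u_+$; in the unstable case it grows by many orders of magnitude within 100 steps.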

When using explicit finite difference schemes to solve equations where the diffusion equation is part of the problem, a stability condition similar to (3.56) usually has to be satisfied in order to obtain proper numerical results.

<sup>4</sup> Here we use the *triangle inequality*, stating that |a + b| ≤ |a| + |b| for any numbers a and b.

### **3.6 A Brief Comment on Uniqueness**

We have seen that a formula can be obtained for the solution of the diffusion equation and for the numerical approximation of the same problem. But are these solutions unique? Can there be other solutions than those given by the formulas? In order to see that the solution of the continuous problem is in fact unique, we assume that we have two solutions, $u$ and $v$, with coinciding initial condition given by $f = f(x)$, and the usual Dirichlet boundary conditions. Since $u$ and $v$ solve

$$\frac{\partial u}{\partial t}(t,x) = \frac{\partial^2 u}{\partial x^2}(t,x) \tag{3.58}$$

and

$$\frac{\partial v}{\partial t}(t,x) = \frac{\partial^2 v}{\partial x^2}(t,x),\tag{3.59}$$
 
respectively, we find (by subtracting (3.59) from (3.58)) that the difference between the solutions, given by $e = u - v$, solves the equation

$$\frac{\partial e}{\partial t}(t, x) = \frac{\partial^2 e}{\partial x^2}(t, x). \tag{3.60}$$

Now, we can define a measure of the difference between these solutions as a function of time,

$$E(t) = \frac{1}{2} \int\_0^1 e^2(t, x) dx,\tag{3.61}$$

and observe that

$$E'(t) = \int_0^1 e(t, x) \frac{\partial e}{\partial t}(t, x)\, dx = \int_0^1 e(t, x) \frac{\partial^2 e}{\partial x^2}(t, x)\, dx. \tag{3.62}$$

By using integration by parts and the fact that $e$ vanishes on the boundary, we find that

$$\int_0^1 e(t, x) \frac{\partial^2 e}{\partial x^2}(t, x)\, dx = -\int_0^1 \left(\frac{\partial e}{\partial x}(t, x)\right)^2 dx,\tag{3.63}$$

and therefore,

$$E'(t) \le 0.\tag{3.64}$$

Since $u$ and $v$ have the same initial condition given by $f = f(x)$, we clearly have $e(0, x) = 0$ for all relevant values of $x$. Therefore, since $E(0) = 0$ and $E'(t) \le 0$, we have $E(t) \equiv 0$ for all time; thus $u$ and $v$ are equal, and the solution of the problem must be unique. A similar argument can be given for the discrete case, so the numerical solution given by the formula (3.16) is also unique. This is a classical energy argument, and it can be extended to also provide stability estimates of the solutions; see, e.g., [1].

### **References**

[1] Tveito A, Winther R (2009) Introduction to partial differential equations; a computational approach, 2nd edn. Springer

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Implicit Numerical Methods**

In the previous chapter, we saw that the simple explicit numerical scheme resulted in an instability problem. We also saw that the problem could be resolved by using sufficiently short time steps. But in many situations, the required time steps become exceedingly short, as can be seen, e.g., in the stability criterion (3.56). This means that we have to perform computations for a very large number of time steps to reach the final time, and it is therefore tempting to look for alternatives. The most common alternative is to use an implicit scheme, which generally allows for much longer time steps.

### **4.1 Explicit and Implicit Numerical Schemes**

In Section 1.4 (page 7) we briefly introduced the concept of *explicit* and *implicit* numerical schemes. So far we have only considered explicit schemes and the reason for that is just simplicity. If we have a differential equation that can be written in the form

$$u'(t) = F(u(t)),\tag{4.1}$$

we can, as seen in Chapter 1, use the approximation

$$u'(t) \approx \frac{u(t + \Delta t) - u(t)}{\Delta t},\tag{4.2}$$

to replace (4.1) by the difference equation,

$$\frac{u^{n+1} - u^n}{\Delta t} = F(u^n), \tag{4.3}$$

leading to the *explicit scheme*

$$u^{n+1} = u^n + \Delta t F(u^n). \tag{4.4}$$

K. Horgmo Jæger, A. Tveito, *Differential Equations for Studies in Computational Electrophysiology*, Simula SpringerBriefs on Computing 14, https://doi.org/10.1007/978-3-031-30852-9\_4

<sup>©</sup> The Author(s) 2023
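As a concrete illustration of (4.4), here is a minimal Python sketch of our own, using the hypothetical choice $F(u) = -u$ so that the exact solution is $e^{-t}$:

```python
import math

def forward_euler(F, u0, dt, n_steps):
    """Apply the explicit scheme u^{n+1} = u^n + dt*F(u^n), cf. (4.4)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * F(u)
    return u

# Integrate u' = -u, u(0) = 1 up to t = 1; the error is O(dt).
u_num = forward_euler(lambda u: -u, 1.0, 0.001, 1000)
print(abs(u_num - math.exp(-1.0)))
```

Each step uses only known values, which is exactly what makes the explicit scheme so simple to implement.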

But an alternative, equally plausible, approach is to replace (4.1) by the difference equation

$$\frac{u^{n+1} - u^n}{\Delta t} = F(u^{n+1}),\tag{4.5}$$

which leads to the *implicit scheme*

$$u^{n+1} - \Delta t F(u^{n+1}) = u^n. \tag{4.6}$$

As we shall see below, the advantage of the implicit scheme is that we obtain numerically stable results for longer time steps, but the disadvantage is that we have to solve a potentially nonlinear equation of the form

$$u - \Delta t F(u) = \bar{u} \tag{4.7}$$

at every time step. Here, $\bar{u}$ is known from the previous time step, and $u$ is the unknown that we need to compute.
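When $F$ is nonlinear, equation (4.7) is typically solved with an iterative method such as Newton's method. The following Python sketch is our own illustration, using the hypothetical nonlinear choice $F(u) = -u^3$:

```python
def implicit_step(u_bar, dt, F, dF, tol=1e-12, max_iter=50):
    """Solve g(u) = u - dt*F(u) - u_bar = 0 by Newton's method,
    starting from the previous value u_bar."""
    u = u_bar
    for _ in range(max_iter):
        g = u - dt * F(u) - u_bar
        if abs(g) < tol:
            break
        u -= g / (1.0 - dt * dF(u))    # Newton update; g'(u) = 1 - dt*F'(u)
    return u

# Backward Euler for u' = -u^3, u(0) = 1, with dt = 0.01 up to t = 1.
u = 1.0
for _ in range(100):
    u = implicit_step(u, 0.01, lambda v: -v**3, lambda v: -3 * v**2)
print(u)   # close to the exact value u(1) = 1/sqrt(3)
```

The extra cost per step is the price we pay for the improved stability discussed below.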

We will now show how to derive implicit schemes and demonstrate that they are more stable than the explicit versions. But keep in mind that we also have to deal with accuracy. For this purpose we still want the time steps to be reasonably short without being *too* short.

### **4.2 An Implicit Scheme for the Diffusion Equation**

Let us first recall how we derived the explicit scheme for the diffusion equation<sup>1</sup>,

$$u_t = u_{xx},\tag{4.8}$$

equipped with the initial and boundary conditions,

$$u(0, x) = u^0(x), \tag{4.9}$$

$$u(t,0) = u(t,1) = 0.\tag{4.10}$$

As in (4.3), we can approximate this equation by replacing the time derivative by

$$u_t \approx \frac{u(t + \Delta t) - u(t)}{\Delta t}. \tag{4.11}$$

For the right-hand side of (4.8) there are two alternatives (actually many alternatives). We can either approximate the right-hand side by a difference approximation of $u_{xx}(t, x)$ or of $u_{xx}(t + \Delta t, x)$. The first alternative results in the explicit scheme we used above,

<sup>1</sup> If you just browse these notes and didn't recognize the notation used here, we repeat that $u_t = \partial u/\partial t$ and $u_{xx} = \partial^2 u/\partial x^2$ – keep on browsing.


$$\frac{u\_j^{n+1} - u\_j^n}{\Delta t} = \frac{u\_{j-1}^n - 2u\_j^n + u\_{j+1}^n}{\Delta x^2},\tag{4.12}$$

and the second alternative results in the implicit scheme,

$$\frac{u\_j^{n+1} - u\_j^n}{\Delta t} = \frac{u\_{j-1}^{n+1} - 2u\_j^{n+1} + u\_{j+1}^{n+1}}{\Delta x^2}.\tag{4.13}$$

In Fig. 4.1 we have illustrated how this scheme works. Here, we realize that there is a fundamental difference between the explicit (see Fig. 3.1) and implicit schemes. The explicit schemes are very simple, since every value is simply an explicit function of the values at the previous time step. This is straightforward to implement in software. But the implicit scheme is much more involved: all values at time $t_{n+1}$ are coupled to the other values at the same time step. Strictly speaking, every point is coupled to its two neighbors, but these are again coupled to their neighbors, and so on. So we end up with a linear system of equations.

**Fig. 4.1** Illustration of the implicit finite difference scheme for the diffusion equation. The unknown solutions $u_{j-1}^{n+1}$ and $u_{j+1}^{n+1}$ from the present time step, in addition to the known solution $u_j^n$ from the previous time step, are all needed to compute the solution $u_j^{n+1}$. The scheme is given by (4.14), or in matrix form by (4.21).

### **4.3 The Linear System**

We will write the scheme (4.13) as a linear system of equations, but first we rearrange it by putting the unknowns (variables at time $t_{n+1}$) on the left-hand side and the previously computed variables (at $t_n$) on the right-hand side of the equation,

$$-\rho u\_{j-1}^{n+1} + (1 + 2\rho)u\_j^{n+1} - \rho u\_{j+1}^{n+1} = u\_j^n,\tag{4.14}$$

where again

$$
\rho = \frac{\Delta t}{\Delta x^2}.\tag{4.15}
$$

The implicit scheme has the form (4.14) for $j = 3, \ldots, M-2$, but takes a different form for $j = 2$ and $j = M-1$. Recall that the boundary conditions are specified by (4.10), given by

$$u_1^n = u_M^n = 0 \tag{4.16}$$

for all *n*. Hence, for *j* = 2, the scheme (4.14) takes the form,

$$(1 + 2\rho)u\_2^{n+1} - \rho u\_3^{n+1} = u\_2^n. \tag{4.17}$$

Similarly, for *j* = *M* − 1, we get the scheme

$$-\rho u\_{M-2}^{n+1} + (1+2\rho)u\_{M-1}^{n+1} = u\_{M-1}^n. \tag{4.18}$$

We are now ready to rewrite the somewhat messy scheme defined by (4.14), (4.17), (4.18) in matrix form. To this end, we define the vector<sup>2</sup>

$$\boldsymbol{u}^{n} = \begin{pmatrix} u_2^n \\ u_3^n \\ \vdots \\ u_{M-1}^n \end{pmatrix},\tag{4.19}$$

and the matrix

$$A = \begin{pmatrix} 1+2\rho & -\rho & & & \\ -\rho & 1+2\rho & -\rho & & \\ & \ddots & \ddots & \ddots & \\ & & -\rho & 1+2\rho & -\rho \\ & & & -\rho & 1+2\rho \end{pmatrix}. \tag{4.20}$$

With these definitions, we can rewrite the scheme defined by (4.14), (4.17), (4.18) on the simple form

$$A\boldsymbol{u}^{n+1} = \boldsymbol{u}^n. \tag{4.21}$$

The matrix *A* defined in (4.20) is tridiagonal, and this makes the system (4.21) easy to solve. In the software associated with these notes, the solution is shown in Matlab, but the system is easy to solve in any numerically oriented computing system.
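For illustration, here is a Python sketch of our own showing one way to set up and solve (4.21); for clarity, $A$ is stored as a dense matrix and handled by a general solver, although a dedicated tridiagonal solver would be far more efficient:

```python
import numpy as np

# One hundred implicit steps A u^{n+1} = u^n for u_t = u_xx with u = 0 at
# the boundaries, following (4.20)-(4.21).
dx, dt = 0.02, 0.001
x = np.arange(dx, 1.0, dx)        # interior points x_2, ..., x_{M-1}
m = x.size
rho = dt / dx**2

A = (np.diag((1 + 2 * rho) * np.ones(m))
     + np.diag(-rho * np.ones(m - 1), 1)
     + np.diag(-rho * np.ones(m - 1), -1))

u = np.sin(np.pi * x)             # smooth initial condition
for _ in range(100):              # advance to t = 0.1
    u = np.linalg.solve(A, u)

exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
print(np.abs(u - exact).max())    # small discretization error
```

Note that the time step used here, $\Delta t = 0.001$, violates the explicit stability criterion (3.56), yet the implicit solution remains well behaved.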

### **4.3.1 Neumann Boundary Conditions**

In Section 3.4 we considered the numerical solution of a specific diffusion equation problem given by

<sup>2</sup> Note that $u_1^n$ and $u_M^n$ are given directly by the boundary conditions at every time step, so these values do not have to be included in the vector of unknowns.

$$u_t = u_{xx},\tag{4.22}$$

with the initial and boundary conditions

$$u(0, x) = \begin{cases} 1, & \text{if } x \le 1/2, \\ 0, & \text{if } x > 1/2, \end{cases} \tag{4.23}$$

$$u_x(t, 0) = u_x(t, 1) = 0. \tag{4.24}$$

For the inner points $j = 2, \ldots, M-1$, an implicit numerical scheme for this problem can be given by (4.14) as above, but for the boundary points ($j = 1$ and $j = M$), we need to take the different set of boundary conditions into account. One way to do this is to follow the same procedure as in Section 3.4.2, that is, to use the difference approximation

$$u_x(t, x) \approx \frac{u(t, x + \Delta x) - u(t, x - \Delta x)}{2\Delta x}.\tag{4.25}$$

Replacing the derivative by this difference in the boundary condition (4.24), we get

$$\frac{u\_2^n - u\_0^n}{2\Delta x} = 0,\qquad\qquad\qquad\frac{u\_{M+1}^n - u\_{M-1}^n}{2\Delta x} = 0,\tag{4.26}$$

which yield

$$u_0^n = u_2^n, \qquad\qquad\qquad u_{M+1}^n = u_{M-1}^n, \tag{4.27}$$

for all *n* ≤ *N*. Inserting (4.27) into the main scheme (4.14) for *j* = 1 and *j* = *M*, we obtain

$$(1+2\rho)u\_1^{n+1} - 2\rho u\_2^{n+1} = u\_1^n,\tag{4.28}$$

$$(1 + 2\rho)u\_M^{n+1} - 2\rho u\_{M-1}^{n+1} = u\_M^n,\tag{4.29}$$

and the scheme can be written in matrix form as

$$A\mu^{n+1} = \mu^n,\tag{4.30}$$

where

$$\boldsymbol{u}^n = \begin{pmatrix} u_1^n\\ u_2^n\\ \vdots\\ u_M^n \end{pmatrix},\tag{4.31}$$

and

$$A = \begin{pmatrix} 1+2\rho & -2\rho & & & \\ -\rho & 1+2\rho & -\rho & & \\ & \ddots & \ddots & \ddots & \\ & & -\rho & 1+2\rho & -\rho \\ & & & -2\rho & 1+2\rho \end{pmatrix}. \tag{4.32}$$

### **4.4 The Implicit Scheme Is Stable**

We noticed above that if the time steps we used for the explicit scheme were too long, we got a solution with wild oscillations. In Fig. 4.2 we repeat these computations using the implicit scheme (4.30)–(4.32) and we notice that the oscillations are gone. The computations could indicate that the solutions are stable for any value of ∆*t*. In fact, unconditional stability of the implicit scheme for the diffusion equation is classical and can be proved. One argument can be found in [3].
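The computation behind the right panel of Fig. 4.2 can be sketched in a few lines of Python (our own illustration of (4.30)–(4.32), run with the time step that made the explicit scheme break down):

```python
import numpy as np

# Implicit scheme with Neumann boundaries, following (4.30)-(4.32).
dx, dt = 0.02, 0.001
x = np.arange(0.0, 1.0 + dx / 2, dx)
M = x.size
rho = dt / dx**2

A = (np.diag((1 + 2 * rho) * np.ones(M))
     + np.diag(-rho * np.ones(M - 1), 1)
     + np.diag(-rho * np.ones(M - 1), -1))
A[0, 1] = -2 * rho        # first row:  (1+2rho) u_1 - 2rho u_2
A[-1, -2] = -2 * rho      # last row:  -2rho u_{M-1} + (1+2rho) u_M

u = np.where(x <= 0.5, 1.0, 0.0)
for _ in range(500):      # advance to t = 0.5
    u = np.linalg.solve(A, u)

print(u.min(), u.max())   # no oscillations; values stay within [0, 1]
```

By $t = 0.5$ the computed solution is nearly uniform, as expected for the ink in the tank, with no trace of the oscillations seen in the explicit case.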

**Fig. 4.2** Numerical solutions of the diffusion equation with boundary conditions $\frac{\partial u}{\partial x}(t, 0) = \frac{\partial u}{\partial x}(t, 1) = 0$ and initial conditions $u(0, x) = 1$ for $x \le 1/2$ and $u(0, x) = 0$ for $x > 1/2$, using $\Delta t = 0.001$ and $\Delta x = 0.02$. The left panel shows the solution for the explicit scheme (as also seen in Fig. 3.3), and we observe wild oscillations. In the right panel, we show the solution of the implicit scheme, which appears much more reasonable. Note that at $T = 0.5$, the explicit scheme has broken down so completely that the values returned by the scheme are not numbers (NaNs).

### **4.5 Comments and Further Reading**

1. As mentioned above, when the PDE under consideration is in one spatial dimension, the associated linear system (4.21) is easy to solve. But if we consider the diffusion equation in three spatial dimensions, e.g.,

$$u_t = u_{xx} + u_{yy} + u_{zz}, \tag{4.33}$$

this problem becomes much harder. The solution of linear systems of the form *Ax* = *b*, where *A* is a matrix, *b* is a known vector, and *x* is unknown, is a crucial problem in scientific computing: it lies at the heart of almost all scientific computing projects. While small systems are relatively straightforward to solve, larger systems become increasingly difficult, and although the problem is well studied, solving very large linear systems efficiently remains challenging in general.


### **References**



# **Chapter 5 Improved Accuracy**

In the examples we have considered so far, the error introduced by the numerical scheme has been *O*(∆*t*). This can easily be improved as we will show through two simple examples.

### **5.1 Back to the Simplest Equation**

The first equation we studied was

$$y' = y,\tag{5.1}$$

and we approximated the equation using the finite difference approximation

$$\frac{y_{n+1} - y_n}{\Delta t} = y_n. \tag{5.2}$$

This scheme resulted in an error proportional to the time step $\Delta t$. From Chapter 4, we realize that we can also use the approximation

$$\frac{y_{n+1} - y_n}{\Delta t} = y_{n+1}.\tag{5.3}$$

And, as a compromise between these two alternatives, we can use the following midpoint<sup>1</sup> approximation,

$$\frac{y_{n+1} - y_n}{\Delta t} = \frac{1}{2} (y_n + y_{n+1}).\tag{5.4}$$

The explicit, implicit and midpoint schemes can be written in computational form as follows,

<sup>1</sup> This scheme is sometimes called the midpoint scheme (for obvious reasons) and sometimes the Crank–Nicolson scheme, because it was developed by John Crank and Phyllis Nicolson [1].

K. Horgmo Jæger, A. Tveito, *Differential Equations for Studies in Computational Electrophysiology*, Simula SpringerBriefs on Computing 14, https://doi.org/10.1007/978-3-031-30852-9\_5

$$y_{n+1} = (1 + \Delta t)y_n,\tag{5.5}$$

$$y_{n+1} = \frac{1}{1 - \Delta t} y_n,\tag{5.6}$$

$$y_{n+1} = \frac{1 + \Delta t/2}{1 - \Delta t/2} y_n,\tag{5.7}$$

respectively. In Table 5.1 we show the errors introduced by these three schemes, and we note that the midpoint scheme is clearly more accurate than the other two. Furthermore, we find that the errors of the implicit and explicit schemes are both proportional to $\Delta t$, or $O(\Delta t)$, whereas the error of the midpoint scheme is proportional to $\Delta t^2$, or $O(\Delta t^2)$. This means that the implicit and explicit schemes have first order (linear) convergence, whereas the midpoint scheme has second order (quadratic) convergence.
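The formulas (5.5)–(5.7) make these convergence orders easy to verify numerically. A Python sketch (our own illustration):

```python
import math

# Errors at t = 1 for the explicit (5.5), implicit (5.6) and midpoint (5.7)
# schemes applied to y' = y, y(0) = 1, whose exact solution gives y(1) = e.
def errors(dt):
    n = round(1.0 / dt)
    y_e = (1 + dt)**n
    y_i = (1 / (1 - dt))**n
    y_m = ((1 + dt / 2) / (1 - dt / 2))**n
    return (abs(math.e - y_e), abs(math.e - y_i), abs(math.e - y_m))

E_e, E_i, E_m = errors(0.001)
print(E_e, E_i, E_m)   # the midpoint error is dramatically smaller

# Halving dt roughly halves E_e and E_i, but divides E_m by about four:
print([a / b for a, b in zip(errors(0.001), errors(0.0005))])
```

The ratios close to 2 and 4 are exactly the signatures of first and second order convergence.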

**Table 5.1** Errors of numerical solutions of the differential equation (5.1) with initial condition $y(0) = 1$ at $t = T = 1$ for different values of $\Delta t$. The errors are defined as $E_e = |y(1) - y_{N,e}|$, $E_i = |y(1) - y_{N,i}|$ and $E_m = |y(1) - y_{N,m}|$, where $y(1) = e$ is the analytical solution, and $y_{N,e}$, $y_{N,i}$, and $y_{N,m}$ are the numerical solutions of the explicit (5.5), implicit (5.6) and midpoint (5.7) schemes, respectively, at time 1.


The difference in numerical errors reported in Table 5.1 may not seem dramatic. But if you require the error to be less than, say, $10^{-10}$, then the number of time steps needed for the first order schemes is about $2.85 \times 10^5$ times larger than what is needed for the second order scheme. So, the difference in computing time between a first and a second order scheme can be dramatic. For the very simple model considered here, the computation is trivial in any case, but for a challenging system of partial differential equations in three spatial dimensions, the difference in computing effort can become enormous.

### **5.2 A Linear Reaction-Diffusion Equation**

We considered the diffusion equation above. That equation becomes a little more interesting if we add a reaction term to it,

$$u_t = u_{xx} + f(u). \tag{5.8}$$

Here, *f* = *f* (*u*) is referred to as a reaction term. In order to keep things simple, we will consider a linear reaction term and define

$$f(u) = \lambda u,\tag{5.9}$$

and we will wait a little before specifying the value of $\lambda$. Above, we noted that a numerical scheme for the diffusion equation could be written in a very compact form by introducing vector/matrix notation. We will extend this in order to define three alternative schemes for the reaction-diffusion equation (5.8). Note that we still apply the following initial and boundary conditions,

$$u(0, x) = u^0(x), \tag{5.10}$$

$$u(t,0) = u(t,1) = 0.\tag{5.11}$$

As above, we let

$$\boldsymbol{u}^{n} = \begin{pmatrix} u_2^n \\ u_3^n \\ \vdots \\ u_{M-1}^n \end{pmatrix},\tag{5.12}$$

and in addition, we introduce the matrix

$$D = \frac{1}{\Delta x^2} \begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}. \tag{5.13}$$

With this notation at hand, we can approximate the linear version of equation (5.8) using an explicit, implicit or midpoint approximation of the right-hand side of the equation:

$$\frac{u^{n+1} - u^n}{\Delta t} = Du^n + \lambda u^n,\tag{5.14}$$

$$\frac{u^{n+1} - u^n}{\Delta t} = Du^{n+1} + \lambda u^{n+1},\tag{5.15}$$

$$\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2} (Du^n + Du^{n+1} + \lambda u^n + \lambda u^{n+1}).\tag{5.16}$$

To ease the implementation of the numerical schemes, and to make it straightforward to write them in matrix form as in, e.g., (4.21), it is convenient to write these schemes in computational form; that is, to collect all terms involving the unknown solution to be computed in each time step, $u^{n+1}$, on the left-hand side, and all terms involving the known solution from the previous time step, $u^n$, on the right-hand side. The schemes (5.14)–(5.16) can then be written as


$$u^{n+1} = [(1 + \lambda \Delta t)I + \Delta t D] u^n, \tag{5.17}$$

$$[(1 - \lambda \Delta t)I - \Delta t D]u^{n+1} = u^n, \tag{5.18}$$

$$\left[\left(1-\frac{\lambda\Delta t}{2}\right)I-\frac{\Delta t}{2}D\right]u^{n+1}=\left[\left(1+\frac{\lambda\Delta t}{2}\right)I+\frac{\Delta t}{2}D\right]u^{n},\tag{5.19}$$

where *I* is the identity matrix<sup>2</sup>.

Let us now consider the problem (5.8)–(5.11) with the specific choices $\lambda = 2\pi^2$ and $u^0(x) = \sin(\pi x)$. Then, the analytical solution is given by

$$u(t, x) = e^{\pi^2 t} \sin(\pi x).\tag{5.20}$$

In Fig. 5.1, we show the results of the implicit (5.18) and midpoint (5.19) schemes together with the analytical solution at time $T = 1$. In addition, we show the results of the explicit scheme at $T = 0.1$. We have used $\Delta x = 0.01$ and $\Delta t = 0.001$. For this choice of discretization parameters, we get wild oscillations for the explicit scheme. The implicit and midpoint schemes produce more reasonable solutions, and we note that the midpoint scheme is clearly more accurate than the implicit scheme.
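This experiment can be reproduced in a few lines. The following Python sketch (our own illustration) advances the implicit scheme (5.18) and the midpoint scheme (5.19) to $T = 1$ and compares them with the analytical solution (5.20):

```python
import numpy as np

# The schemes (5.18) and (5.19) for u_t = u_xx + lam*u with u = 0 at the
# boundaries, using the matrix D from (5.13).
dx, dt, lam = 0.01, 0.001, 2 * np.pi**2
x = np.arange(dx, 1.0, dx)             # interior points
m = x.size
I = np.eye(m)
D = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / dx**2

u_i = np.sin(np.pi * x)                # implicit scheme solution
u_m = u_i.copy()                       # midpoint scheme solution
A_i = (1 - lam * dt) * I - dt * D
A_m = (1 - lam * dt / 2) * I - (dt / 2) * D
B_m = (1 + lam * dt / 2) * I + (dt / 2) * D
for _ in range(1000):                  # advance to T = 1
    u_i = np.linalg.solve(A_i, u_i)
    u_m = np.linalg.solve(A_m, B_m @ u_m)

exact = np.exp(np.pi**2) * np.sin(np.pi * x)
print(np.abs(u_i - exact).max(), np.abs(u_m - exact).max())
```

The midpoint error comes out far smaller than the implicit error, consistent with the convergence orders found in Section 5.1.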

### **References**

[1] Crank J, Nicolson P (1947) A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. In: Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, vol 43, pp 50–67

<sup>2</sup> The identity matrix has ones on the diagonal and zeroes elsewhere.

**Fig. 5.1** Analytical and numerical solutions of the reaction-diffusion equation (5.8) with $\lambda = 2\pi^2$, boundary conditions $u(t, 0) = u(t, 1) = 0$ and initial condition $u(0, x) = \sin(\pi x)$. In the numerical schemes, we use $\Delta t = 0.001$ and $\Delta x = 0.01$. In the upper panel, we show the solutions of the implicit and midpoint schemes together with the analytical solution at time $T = 1$. The lower panel shows the solution of the explicit scheme at $T = 0.1$, and we observe wild oscillations.


# **Chapter 6 A Simple Cable Equation**

The cable equation was first derived to model transport of electrical signals in telegraphic cables. But it later gained enormous popularity as a model of transport of electrical signals along a neuronal axon. In Chapter 9, we will discuss how this equation is derived and how the different terms in the equation come about. But here, we will just take a simple version of the equations for granted and then try to solve them. We will observe that the few techniques we learned above are actually sufficient to solve the non-linear reaction-diffusion equations we consider here.

### **6.1 A Non-Linear Reaction-Diffusion System**

We will consider a system of equations where we add a diffusion term to the FitzHugh-Nagumo equations. Specifically, we consider the equations<sup>1</sup>,

$$v\_t = \delta v\_{xx} + c\_1 v(v-a)(1-v) - c\_2 w,\tag{6.1}$$

$$w_t = b(v - dw). \tag{6.2}$$

We use the same constants as above,

$$a = -0.12, \; c\_1 = 0.175, \; c\_2 = 0.03, \; b = 0.011, \; d = 0.55,\tag{6.3}$$

and in addition we use the diffusion coefficient<sup>2</sup> δ = 5 · 10<sup>−5</sup>.

K. Horgmo Jæger, A. Tveito, *Differential Equations for Studies in Computational Electrophysiology*, Simula SpringerBriefs on Computing 14, https://doi.org/10.1007/978-3-031-30852-9_6

<sup>1</sup> Recall, once again, that we use the convention that v<sub>t</sub> = ∂v/∂t and v<sub>xx</sub> = ∂²v/∂x².

<sup>2</sup> We still use no units; they will be introduced later.

### **6.2 The Explicit Numerical Scheme**

We note that we can use the same approximation of v<sub>t</sub> and w<sub>t</sub> that we used for the FitzHugh-Nagumo model in Chapter 2 (see (2.6) and (2.7)), and the same approximation for v<sub>xx</sub> as we used for the diffusion equation in Chapter 3 (see (3.13)) to define an explicit numerical scheme,

$$\begin{aligned} \frac{v_j^{n+1} - v_j^n}{\Delta t} &= \delta \frac{v_{j-1}^n - 2v_j^n + v_{j+1}^n}{\Delta x^2} + c_1 v_j^n (v_j^n - a)(1 - v_j^n) - c_2 w_j^n, \\ \frac{w_j^{n+1} - w_j^n}{\Delta t} &= b(v_j^n - dw_j^n). \end{aligned} \tag{6.4}$$

Here, v<sub>j</sub><sup>n</sup> and w<sub>j</sub><sup>n</sup> denote approximations of v(x<sub>j</sub>, t<sub>n</sub>) and w(x<sub>j</sub>, t<sub>n</sub>), respectively. It is straightforward to put this scheme in computational form,

$$\begin{aligned} v_j^{n+1} &= \rho v_{j-1}^n + (1 - 2\rho)v_j^n + \rho v_{j+1}^n + \Delta t [c_1 v_j^n (v_j^n - a)(1 - v_j^n) - c_2 w_j^n], \\ w_j^{n+1} &= w_j^n + \Delta t\, b(v_j^n - dw_j^n), \end{aligned}$$

where ρ = δ∆t/∆x<sup>2</sup>.
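As a sketch of how this scheme can be implemented, the following Python/NumPy code advances the explicit scheme in time. The grid, the reflecting (zero-flux) boundary treatment, and all variable names are our own illustrative choices, not from the text:

```python
import numpy as np

# Parameters from (6.3) plus the diffusion coefficient delta.
a, c1, c2, b, d = -0.12, 0.175, 0.03, 0.011, 0.55
delta = 5e-5
dx, dt = 0.01, 0.005

def step(v, w):
    """Advance (v, w) one time step of the explicit scheme; zero-flux
    boundaries are imposed by reflecting the solution at both ends."""
    rho = delta * dt / dx**2
    vp = np.concatenate(([v[1]], v, [v[-2]]))   # ghost points
    v_new = rho * vp[:-2] + (1 - 2 * rho) * v + rho * vp[2:] \
        + dt * (c1 * v * (v - a) * (1 - v) - c2 * w)
    w_new = w + dt * b * (v - d * w)
    return v_new, w_new

x = np.linspace(0, 1, 101)
v = np.where(x <= 0.04, 0.26, 0.0)   # stimulus at the left end
w = np.zeros_like(v)
for _ in range(int(round(100 / dt))):   # integrate to t = 100
    v, w = step(v, w)
```

Note that ρ = δ∆t/∆x² = 0.0025 here, far below the stability limit of 1/2 for the diffusion part, so the explicit scheme is well behaved with these step sizes.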

### **6.3 Traveling Wave Solutions**

Traveling wave solutions are characteristic of solutions of reaction-diffusion models of neuronal axons and myocardial tissue. Here, we will see such solutions for the simple model given by (6.1) and (6.2).

In Fig. 6.1, we show the numerical solution of v as a function of x at five different points in time. In order to initiate a wave traveling from left to right, we let the initial conditions be specified by v<sub>0</sub> = 0 and w<sub>0</sub> = 0 for all values of x, except that we set v<sub>0</sub> = 0.26 for x ≤ 0.04. We use ∆x = 0.01 and ∆t = 0.005. The boundary conditions for v are given by ∂v/∂x(t, 0) = ∂v/∂x(t, 1) = 0.

In the leftmost panel of Fig. 6.1, at t = 0, we see the initial conditions for the variable v. In the second panel, at t = 50, we see that the value of v has started to increase in an area close to the left boundary of the domain, and in the next panels, at t = 100, t = 200, and t = 300, we see that the increase in v gradually moves from left to right like a traveling wave.

### **6.3.1 Adjusting Parameters**

In Fig. 6.2, we show the solution of the model at five different points in time for three different values of the parameter δ. We observe that for a low value of δ

**Fig. 6.1** Numerical solution of v in the FitzHugh-Nagumo equations with a diffusion term added. The boundary conditions are given by ∂v/∂x(t, 0) = ∂v/∂x(t, 1) = 0, and the initial conditions are w(0, x) = 0 everywhere and v(0, x) = 0 for x > 0.04 and v(0, x) = 0.26 for x ≤ 0.04. We use ∆t = 0.005 and ∆x = 0.01, and show the solution at five different points in time.

(δ = 1 · 10<sup>−5</sup>), the traveling wave moves more slowly through the domain. For example, at t = 200, the wave has crossed about half of the distance from x = 0 to x = 1 for the default case of δ = 5 · 10<sup>−5</sup>, and only about a quarter of the distance for δ = 1 · 10<sup>−5</sup>. Moreover, for a high value of δ (δ = 20 · 10<sup>−5</sup>), the wave moves more quickly through the domain, and has almost crossed the entire domain at t = 200. We also observe that the slope of the wavefront of the traveling wave appears to become less steep as the value of δ is increased.

In Fig. 6.3, we similarly consider the traveling wave solutions for three different values of the parameter *<sup>c</sup>*1. As when we adjust the value of δ, we observe that the wave moves more quickly as the value of *c*<sup>1</sup> is increased.

### **6.3.2 Conduction Velocity**

In Fig. 6.2 and Fig. 6.3, we observed that the traveling wave moves more quickly as we increased the value of δ or c<sub>1</sub>. The conduction velocity is often used to characterize the speed with which a traveling wave traverses the domain. In Fig. 6.4, we have computed the conduction velocity for some different values of δ and c<sub>1</sub>. Here, we have defined the conduction velocity as

$$\text{CV} = \frac{x_2 - x_1}{t_2 - t_1},\tag{6.6}$$

where x<sub>1</sub> = 0.5 and x<sub>2</sub> = 0.7. Furthermore, t<sub>1</sub> and t<sub>2</sub> are the points in time when the value of v first increases to a value v ≥ 0.5 at the spatial points x<sub>1</sub> and x<sub>2</sub>, respectively. As expected, Fig. 6.4 shows that increasing δ or c<sub>1</sub> in the model increases the computed conduction velocity.
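As an illustration of how (6.6) can be evaluated from a stored space-time solution, the sketch below picks out t<sub>1</sub> and t<sub>2</sub> as the first threshold crossings at x<sub>1</sub> and x<sub>2</sub>. The helper function and the synthetic traveling front (with a known speed, used here only as a sanity check) are our own constructions:

```python
import numpy as np

def conduction_velocity(ts, vs, x, x1=0.5, x2=0.7, threshold=0.5):
    """Evaluate (6.6): CV = (x2 - x1)/(t2 - t1), where t1 and t2 are
    the first times the solution reaches the threshold at x1 and x2.
    `vs` has shape (len(ts), len(x))."""
    j1 = np.argmin(np.abs(x - x1))
    j2 = np.argmin(np.abs(x - x2))
    t1 = ts[np.argmax(vs[:, j1] >= threshold)]   # first crossing at x1
    t2 = ts[np.argmax(vs[:, j2] >= threshold)]   # first crossing at x2
    return (x[j2] - x[j1]) / (t2 - t1)

# Synthetic traveling front with known speed 0.005, as a sanity check.
x = np.linspace(0, 1, 101)
ts = np.arange(0, 300, 0.5)
vs = np.array([1 / (1 + np.exp((x - 0.005 * t) / 0.02)) for t in ts])
cv = conduction_velocity(ts, vs, x)
```

Applied to the front above, the recovered value of `cv` matches the prescribed speed up to the temporal resolution of the stored solution.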

**Fig. 6.2** Numerical solution of v in the FitzHugh-Nagumo equations with an added diffusion term for three different values of δ. The remaining parameter values are as specified in (6.3). The boundary conditions are given by ∂v/∂x(t, 0) = ∂v/∂x(t, 1) = 0, and the initial conditions are w(0, x) = 0 everywhere and v(0, x) = 0 for x > 0.04 and v(0, x) = 0.26 for x ≤ 0.04. We use ∆t = 0.005 and ∆x = 0.01, and show the solution at five different points in time.

**Fig. 6.3** Numerical solution of v in the FitzHugh-Nagumo equations with an added diffusion term for three different values of c<sub>1</sub>. The remaining parameter values are as specified in (6.3), and δ = 5 · 10<sup>−5</sup>. The boundary conditions are given by ∂v/∂x(t, 0) = ∂v/∂x(t, 1) = 0, and the initial conditions are w(0, x) = 0 everywhere and v(0, x) = 0 for x > 0.04 and v(0, x) = 0.26 for x ≤ 0.04. We use ∆t = 0.005 and ∆x = 0.01, and show the solution at five different points in time.

**Fig. 6.4** Conduction velocity for different values of δ and c<sub>1</sub>, computed from numerical solutions of the FitzHugh-Nagumo equations with an added diffusion term. The conduction velocity is computed as defined in (6.6).


# **Chapter 7 Operator Splitting**

In mathematics, a common approach to solving a new problem is to break down the problem into sub-problems that you know how to solve and then try to glue the pieces together to yield a solution of the new problem. This approach is also very useful in software development; well-tested pieces of software can be glued together in order to obtain solutions to a wider class of problems. Operator splitting is a technique that illustrates this principle very well. Rather complex equations can be broken down into more familiar problems and solved individually. It may seem like a miracle that this works, but it is no miracle, because there are proofs of convergence. Anyway, we will illustrate operator splitting with two examples and then come back to this technique when the equations become more challenging.

### **7.1 Numerical Schemes for a Reaction-Diffusion Equation**

Suppose we want to solve the following reaction-diffusion equation,

$$
u_t = u_{xx} + f(u), \tag{7.1}
$$

with initial and boundary conditions,

$$
u(0, x) = u^0(x), \tag{7.2}
$$

$$
u(t, 0) = u(t, 1) = 0. \tag{7.3}
$$

Clearly, we can use the techniques introduced above. By using the notation introduced on page 43, we can write explicit, implicit, and midpoint schemes as follows,


$$\frac{u^{n+1} - u^n}{\Delta t} = Du^n + f(u^n), \tag{7.4}$$

$$\frac{u^{n+1} - u^n}{\Delta t} = Du^{n+1} + f(u^{n+1}),\tag{7.5}$$

$$\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2} (Du^n + Du^{n+1}) + \frac{1}{2} (f(u^n) + f(u^{n+1})),\tag{7.6}$$

where we have tacitly extended *f* to be a vector-valued function with components f<sub>i</sub> = f(u<sub>i</sub>). These schemes can be rearranged to computational form as follows,

$$\begin{aligned} u^{n+1} &= [I + \Delta t D]u^n + \Delta t f(u^n), \\ [I - \Delta t D]u^{n+1} - \Delta t f(u^{n+1}) &= u^n, \\ \left(I - \frac{\Delta t}{2}D\right)u^{n+1} - \frac{\Delta t}{2}f(u^{n+1}) &= \left(I + \frac{\Delta t}{2}D\right)u^n + \frac{\Delta t}{2}f(u^n), \end{aligned}$$

where, again, we use the convention that known quantities are on the right-hand side of the equations and the unknowns are on the left-hand side. The explicit scheme is straightforward to implement, since computing u<sup>n+1</sup> simply amounts to evaluating the right-hand side. But the implicit scheme has become more complicated than we are used to, because we have to solve a potentially non-linear *system of algebraic equations* in order to compute u<sup>n+1</sup>. We can do that using Newton's method, but we can also avoid it by introducing operator splitting.

### **7.2 Operator Splitting for a Reaction-Diffusion Equation**

Let u<sup>n</sup> denote an approximation of u(t<sub>n</sub>, x), where t<sub>n</sub> = n∆t as usual. Then, the problem (7.1) can be solved by alternately solving u<sub>t</sub> = u<sub>xx</sub> and u<sub>t</sub> = f(u). More precisely, we assume that an approximate solution has been computed for time t = t<sub>n</sub>. Then, an approximate solution of (7.1) at t<sub>n+1</sub> can be found in two steps. First we solve

$$
u_t = u_{xx},
\tag{7.7}
$$

from t<sub>n</sub> to t<sub>n+1</sub> with the initial condition u(t<sub>n</sub>, ·) = u<sup>n</sup> and the boundary conditions given by (7.3). We let u<sup>n+1/2</sup> denote the solution of the first step. In the second step, we solve

$$
u_t = f(u),
\tag{7.8}
$$

using the initial condition u(t<sub>n</sub>, ·) = u<sup>n+1/2</sup>. Now, we have broken the somewhat complex problem (7.1) down into two problems we are more familiar with. We will show how this works with a couple of examples. Note, however, that the main message of this chapter is the technique of breaking the numerical solution of a problem down into two simpler problems, as described in this subsection. So if you reach a point where you feel that the mathematics in the remaining part of this chapter

becomes a bit overwhelming, it might be good to jump ahead to Chapter 8, where we will start applying the methods introduced above to models of electrophysiology.

### **7.2.1 Numerical Example**

In the first numerical example, we consider the problem

$$
u_t = u_{xx} - u^2,\tag{7.9}
$$
with the boundary conditions u(t, 0) = u(t, 1) = 0 and the initial condition u(0, x) = sin(πx). We want to use the operator splitting procedure introduced above to solve this problem numerically, and this will result in two steps. The first step is to solve

$$
u_t = u_{xx},\tag{7.10}
$$

with an implicit scheme (see page 34). The second step is to solve

$$
u_t = -u^2,\tag{7.11}
$$

using an implicit scheme. Note that this equation has to be solved for each mesh point x<sub>j</sub>, so the implicit scheme reads

$$\frac{u\_j^{n+1} - u\_j^{n+1/2}}{\Delta t} = -(u\_j^{n+1})^2,\tag{7.12}$$

where u<sub>j</sub><sup>n+1/2</sup> is the result of the first step. By using operator splitting, we have avoided solving a big system of non-linear equations. Instead, we need to solve the usual linear system of equations arising from the discrete version of (7.10), and a series of second order algebraic equations given by (7.12), whose solutions are given by the quadratic formula,

$$
u\_j^{n+1} = \frac{-1 + \sqrt{1 + 4\Delta t u\_j^{n+1/2}}}{2\Delta t}.\tag{7.13}$$

In Fig. 7.1, we show the solution of this problem using ∆x = 0.01 and ∆t = 0.001. Furthermore, in Table 7.1, we have computed the error by comparing the solution to a very fine scale solution of this problem found using a standard explicit scheme. Again, we note that the error seems to be first order in ∆t.

**Table 7.1** Errors of the numerical solutions of the differential equation u<sub>t</sub> = u<sub>xx</sub> − u², with boundary conditions u(t, 0) = u(t, 1) = 0 and initial condition u(0, x) = sin(πx), for different values of ∆t. The error is defined as E = max<sub>j</sub> |u<sub>e,j</sub> − u<sub>j</sub><sup>N</sup>|, where u<sub>e,j</sub> is the solution of the problem found using a standard explicit scheme with a very fine time step (∆t = 10<sup>−6</sup>), and u<sub>j</sub><sup>N</sup> is the numerical solution of the operator splitting scheme described in (7.10)–(7.13). We use ∆x = 0.01.
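To make the two-step procedure (7.10)–(7.13) concrete, here is a minimal Python/NumPy sketch of our own; the dense matrix for the implicit diffusion step is a deliberately simple construction (in practice a tridiagonal solver would be used):

```python
import numpy as np

# Operator splitting for u_t = u_xx - u^2 with u(t,0) = u(t,1) = 0
# and u(0,x) = sin(pi x).
dx, dt, T = 0.01, 0.001, 1.0
x = np.linspace(0, 1, 101)
u = np.sin(np.pi * x)

# Matrix for the implicit diffusion step on the interior points:
# (I - dt*D) u^{n+1/2} = u^n, with D the second-difference matrix.
M = len(x) - 2
A = np.diag((1 + 2 * dt / dx**2) * np.ones(M)) \
    + np.diag((-dt / dx**2) * np.ones(M - 1), 1) \
    + np.diag((-dt / dx**2) * np.ones(M - 1), -1)

for _ in range(int(round(T / dt))):
    u[1:-1] = np.linalg.solve(A, u[1:-1])          # step 1: (7.10)
    u = (-1 + np.sqrt(1 + 4 * dt * u)) / (2 * dt)  # step 2: (7.13)
```

Note that the reaction update (7.13) maps u = 0 to 0, so the Dirichlet boundary values are preserved automatically.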


### **7.2.2 A Detour via Numerical Integration**

Very few things come for free, but second order convergence does. With a minimal change of the algorithm above, we can obtain second order convergence. Actually, just the first and the last steps of the algorithm are slightly changed (a half step instead of a whole step). Exactly the same alteration appears when going from a first order to a second order scheme for numerical integration. We will show that similarity here because it might help make the second order operator splitting algorithm seem less mysterious, but if you are not interested, just skip this subsection and go to page 58, where you can read about second order operator splitting.

### **First and Second Order Schemes for Numerical Integration**

Suppose we have a smooth function g = g(x) defined on the interval from 0 to 1, and that we want to compute an approximation of the integral of g on this interval. We start by defining ∆x = 1/(M − 1) and x<sub>i</sub> = (i − 1) × ∆x for a sufficiently large integer M. In Fig. 7.2, we show two approximations of the function g. The first

**Fig. 7.2** Illustration of two approximations of a function g. In the left panel, we illustrate a piecewise constant approximation defined by the value of g at x = x<sub>i</sub> on the interval from x<sub>i</sub> to x<sub>i+1</sub>. In the right panel, we illustrate a piecewise linear approximation defined to coincide with g for x = x<sub>i</sub> and x = x<sub>i+1</sub>.

approximation is simply a constant function defined by the value of g at x = x<sub>i</sub>. The second is a linear function that coincides with g for x = x<sub>i</sub> and x = x<sub>i+1</sub>.

If we use these two approximations to estimate the integral of g from x = x<sub>i</sub> to x = x<sub>i+1</sub>, we get

$$\int_{x_i}^{x_{i+1}} g(x)\,dx \approx \Delta x\, g(x_i) \tag{7.14}$$

and

$$\int_{x_i}^{x_{i+1}} g(x)\,dx \approx \frac{1}{2}\Delta x\,(g(x_i) + g(x_{i+1})),\tag{7.15}$$

respectively. Since we clearly have

$$\int_0^1 g(x)\,dx = \sum_{i=1}^{M-1} \int_{x_i}^{x_{i+1}} g(x)\,dx,\tag{7.16}$$

we get two approximations of the integral by inserting (7.14) and (7.15) into (7.16), respectively;

$$\int_0^1 g(x)\,dx \approx \Delta x \sum_{i=1}^{M-1} g(x_i),\tag{7.17}$$

and

$$\int_0^1 g(x)\,dx \approx \frac{1}{2}\Delta x \sum_{i=1}^{M-1} (g(x_i) + g(x_{i+1})).\tag{7.18}$$

Here, the latter formula can be rewritten slightly to

$$\int_0^1 g(x)\,dx \approx \Delta x \left[ \frac{1}{2} g(x_1) + g(x_2) + g(x_3) + \dots + g(x_{M-1}) + \frac{1}{2} g(x_M) \right]. \tag{7.19}$$

**Table 7.2** Maximum errors of the Riemann sum approximation (E<sub>R</sub>, (7.17)) and the Trapezoidal scheme approximation (E<sub>T</sub>, (7.19)) to the integral $\int_0^1 x^2\,dx$ for some different values of ∆x.


The approximation of the integral defined by (7.17) is referred to as a Riemann sum, whereas the approximation given by (7.19) is referred to as the Trapezoidal method of integration. In Table 7.2, we show the errors when using these two schemes to approximate the integral

$$\int\_0^1 x^2 dx = \frac{1}{3}$$

using several values of ∆x. For the Riemann sum, the error is O(∆x), and for the Trapezoidal scheme, the error is O(∆x<sup>2</sup>). So, by using the same number of function evaluations, the accuracy of the integration scheme is improved from first to second order.
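The two rules are easy to compare in code. The sketch below (our own helper functions, not from the text) applies (7.17) and (7.19) to the integral of x² and exhibits the first and second order error behavior:

```python
import numpy as np

def riemann(g, M):
    """Riemann sum (7.17) on [0, 1] with M grid points."""
    dx = 1.0 / (M - 1)
    x = np.linspace(0, 1, M)
    return dx * np.sum(g(x[:-1]))       # left endpoints x_1, ..., x_{M-1}

def trapezoid(g, M):
    """Trapezoidal rule (7.19) on [0, 1] with M grid points."""
    dx = 1.0 / (M - 1)
    x = np.linspace(0, 1, M)
    return dx * (0.5 * g(x[0]) + np.sum(g(x[1:-1])) + 0.5 * g(x[-1]))

g = lambda x: x**2                      # exact integral is 1/3
E_R = abs(riemann(g, 101) - 1 / 3)      # O(dx) error
E_T = abs(trapezoid(g, 101) - 1 / 3)    # O(dx^2) error
```

Halving ∆x (doubling the resolution) roughly halves `E_R` but cuts `E_T` by a factor of four, which is exactly the first versus second order behavior described above.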

### **7.2.3 Second Order Operator Splitting**

In order to introduce second order operator splitting, it is useful to introduce notation that we can use to simplify the formulations. We recall that the problem we want to solve is the following reaction-diffusion equation,

$$
u_t = u_{xx} + f(u), \tag{7.20}
$$

with initial and boundary conditions,

$$
u(0, x) = u^0(x), \tag{7.21}
$$

$$
u(t, 0) = u(t, 1) = 0.\tag{7.22}
$$

First, we let D(∆t) be an operator that evolves the solution of the diffusion equation one time step ∆t ahead. If the function u = u(t, ·) is known at time t, then u(t + ∆t, ·) = D(∆t)u(t, ·) denotes the solution of

$$
u_t = u_{xx},
\tag{7.23}
$$

at time t + ∆t, where u(t) is the initial condition at time t. Similarly, we let R play the same role for the reaction part of the equation. If u = u(t, ·) is known, then u(t + ∆t, ·) = R(∆t)u(t, ·) denotes the solution of

$$
u_t = f(u),
\tag{7.24}
$$

at time t + ∆t, where u(t) is the initial condition at time t. With this notation, we can rewrite the first order operator splitting derived above in a very compact manner. The step from t<sub>n</sub> to t<sub>n+1</sub> can be written

$$
u^{n+1} = \mathcal{R}(\Delta t) \mathcal{D}(\Delta t) u^n. \tag{7.25}
$$

By simply repeating the process, we find that

$$
u^n = (\mathcal{R}(\Delta t)\mathcal{D}(\Delta t))^n u^0. \tag{7.26}
$$

Equipped with this notation, we can improve the accuracy of the operator splitting using the same approach as for numerical integration (7.19). In order to improve the accuracy of the operator splitting to second order, we now approximate one time step by

$$
u^{n+1} = \mathcal{D}(\Delta t/2)\mathcal{R}(\Delta t)\mathcal{D}(\Delta t/2)u^n. \tag{7.27}
$$

By combining several steps, we get

$$u^n = [\mathcal{D}(\Delta t/2)\mathcal{R}(\Delta t)\mathcal{D}(\Delta t/2)\cdots\mathcal{D}(\Delta t/2)\mathcal{R}(\Delta t)\mathcal{D}(\Delta t/2)]u^0. \tag{7.28}$$

Note that by the definition of D, it has the following useful property,

$$
\mathcal{D}(\Delta t/2)\mathcal{D}(\Delta t/2) = \mathcal{D}(\Delta t),
$$

since applying D(∆t/2) twice simply means solving the equation twice using the time step ∆t/2. By using this property, we can rewrite (7.28) as follows,

$$u^n = \left(\mathcal{D}(\Delta t/2)[\mathcal{R}(\Delta t)\mathcal{D}(\Delta t)]^{n-1}\mathcal{R}(\Delta t)\mathcal{D}(\Delta t/2)\right)u^0. \tag{7.29}$$

So the only difference between the first order scheme (7.26) and the second order scheme (7.29) is that in the latter, we do a half time step in the first and last time steps. This is very similar to the half step in the first and last step of the numerical integration above; see (7.19) compared to (7.17).
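To see the order improvement concretely, one can try the two splittings on a scalar toy problem where both sub-flows are available in closed form. The problem, the operators D and R, and all names below are our own illustrative choices, not from the text:

```python
import numpy as np

# Toy problem u' = -u - u^2, split into u' = -u (operator D, exact flow
# u -> exp(-tau) u) and u' = -u^2 (operator R, exact flow
# u -> u / (1 + tau u)).  The exact solution, via the substitution
# w = 1/u, is u(t) = 1 / ((1/u0 + 1) exp(t) - 1).
D = lambda tau, u: np.exp(-tau) * u
R = lambda tau, u: u / (1 + tau * u)

def lie(u, dt, n):
    """First order splitting (7.26): full D step, then full R step."""
    for _ in range(n):
        u = R(dt, D(dt, u))
    return u

def strang(u, dt, n):
    """Second order splitting (7.27): D half step, R, D half step."""
    for _ in range(n):
        u = D(dt / 2, R(dt, D(dt / 2, u)))
    return u

u0, T = 1.0, 1.0
exact = 1 / ((1 / u0 + 1) * np.exp(T) - 1)
e1 = [abs(lie(u0, dt, int(round(T / dt))) - exact) for dt in (0.05, 0.005)]
e2 = [abs(strang(u0, dt, int(round(T / dt))) - exact) for dt in (0.05, 0.005)]
```

Reducing ∆t by a factor of ten reduces the error of `lie` by roughly a factor of ten and the error of `strang` by roughly a factor of one hundred, mirroring the first and second order convergence discussed above.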

### **7.3 Operator Splitting Applied to a System of Reaction-Diffusion Equations**

In order to demonstrate the use of the operator splitting techniques introduced above, we revisit the FitzHugh-Nagumo system,


$$v\_t = \delta v\_{xx} + c\_1 v(v-a)(1-v) - c\_2 w,\tag{7.30}$$

$$w_t = b(v - dw). \tag{7.31}$$

We use the same constants as above,

$$a = -0.12, \; c_1 = 0.175, \; c_2 = 0.03, \; b = 0.011, \; d = 0.55, \; \delta = 5 \cdot 10^{-5}.\tag{7.32}$$

Here, the solution operator D of the diffusion step evolves

$$v_t = \delta v_{xx},\tag{7.33}$$

$$w\_t = 0,\tag{7.34}$$

and R evolves

$$v\_t = c\_1 v(v-a)(1-v) - c\_2 w,\tag{7.35}$$

$$w\_t = b(v - dw). \tag{7.36}$$

In Table 7.3, we show the errors of the first order (see (7.26)) and second order (see (7.29)) algorithms. The solutions are computed by replacing D and R by the standard explicit schemes with a fine time step (∆t = 10<sup>−4</sup>), and the errors are estimated by comparing the solutions to a numerical solution found using the fine time step and no operator splitting. As anticipated, the convergence of the two methods is first and second order, respectively.

**Table 7.3** Maximum errors of the first order (E<sub>1</sub>, (7.26)) and the second order (E<sub>2</sub>, (7.29)) operator splitting techniques applied to the FitzHugh-Nagumo system (7.30)–(7.32). In each operator splitting step, the system is solved using standard explicit schemes with ∆t = 10<sup>−4</sup> and ∆x = 0.01. The error is found by comparing the solutions to solutions found using a standard explicit scheme with ∆t = 10<sup>−4</sup> and ∆x = 0.01 without operator splitting.


### **7.4 Comments and Further Reading**

1. The example in Section 7.3 indicates the strength of operator splitting. If we have a good solver for the diffusion step, and a good solver for a system of ordinary differential equations, these solvers can be combined to get a solution of the reaction-diffusion problem. In this case, this is not really necessary, but there are extremely complex problems out there that are virtually impossible to solve without using operator splitting.

2. Why does operator splitting work? This question is studied in some detail in, e.g., [4, 5, 7, 8, 9, 12]. A detailed discussion of this topic is far outside our scope. However, we can give a hint as to why it works. To this end, we consider a linear system of ordinary differential equations

$$u_t = Au + Bu,$$

where *u* is a vector and *A* and *B* are matrices compatible with the dimensions of *u*; they may represent discrete versions of spatial derivatives as above. An explicit scheme for this system can be written in the form

$$
U^{n+1} = (I + \Delta t A + \Delta t B)U^n,
$$

and using first order operator splitting, we get the scheme

$$U^{n+1/2} = (I + \Delta t A)U^n,\tag{7.37}$$

$$U^{n+1} = (I + \Delta t B) U^{n+1/2}.\tag{7.38}$$

By combining the two steps of the operator splitting scheme, we get

$$U^{n+1} = (I + \Delta t A)(I + \Delta t B)U^n. \tag{7.39}$$

This scheme can be expanded to read

$$U^{n+1} = (I + \Delta t A + \Delta t B)U^n + \Delta t^2 A B U^n. \tag{7.40}$$

Hence, in the step from t<sub>n</sub> to t<sub>n+1</sub>, the results of the two schemes differ by O(∆t<sup>2</sup>), and thus the difference summed over N ∼ ∆t<sup>−1</sup> steps is O(∆t). Therefore, the standard explicit scheme and the first order operator splitting scheme converge to the same solution as ∆t goes to zero.
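This argument can be checked numerically. The following sketch (with our own choice of small non-commuting matrices A and B) compares the unsplit explicit scheme with the split scheme (7.39) and confirms that their difference at a fixed final time behaves like O(∆t):

```python
import numpy as np

# A and B do not commute, so the split and unsplit schemes differ.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

def difference(dt, T=1.0):
    """Max norm of the difference between the unsplit explicit scheme
    and the first order split scheme (7.39) at time T."""
    u = np.array([1.0, 0.0])
    w = u.copy()
    for _ in range(int(round(T / dt))):
        u = (I + dt * (A + B)) @ u              # unsplit scheme
        w = (I + dt * A) @ ((I + dt * B) @ w)   # split scheme (7.39)
    return np.max(np.abs(u - w))

d_coarse, d_fine = difference(0.01), difference(0.001)
```

Reducing ∆t by a factor of ten reduces the difference by roughly the same factor, consistent with the O(∆t) estimate above.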


3. A fully coupled system is inherently more difficult to solve. It is possible to solve a reaction-diffusion equation using a fully coupled implicit scheme and Newton's method, but when the problem involves large systems of equations defined on different domains and scales, it can be impractical to solve the problem in a fully coupled manner. In such cases, operator splitting is often necessary to manage the complexity of the problem.

### **References**



# **Part II Models of Electrophysiology**

# **Chapter 8 Membrane Models**

In the previous chapters, we introduced techniques that can be used to find numerical solutions of differential equations. The examples we considered were simple, theoretical differential equations without units. From this point forwards, we will look at how to apply the methods introduced in the previous chapters to differential equations that are set up to model aspects of electrophysiology.

In this chapter, we will consider a type of model that is commonly used to describe the dynamics across the membrane of excitable cells. We will start by introducing a model for the action potentials generated in neurons. More specifically, we will consider the Hodgkin-Huxley model for the action potential of the squid giant axon [10]. Then, we will introduce a similar model for a cardiac action potential; specifically, a model for the action potential of a rabbit ventricular cardiomyocyte [9].

### **8.1 The Hodgkin-Huxley Model**

The Hodgkin-Huxley model [10] consists of a system of four ordinary differential equations with the four unknowns v, *m*, *h*, and *r*<sup>1</sup>:

$$C\_m \frac{dv}{dt} = -(I\_{\rm Na} + I\_{\rm K} + I\_{\rm L}),\tag{8.1}$$

$$\frac{dm}{dt} = \alpha\_m (1 - m) - \beta\_m m,\tag{8.2}$$

$$\frac{dh}{dt} = \alpha\_h (1 - h) - \beta\_h h,\tag{8.3}$$

$$\frac{dr}{dt} = \alpha\_r (1 - r) - \beta\_r r.\tag{8.4}$$

<sup>1</sup> The unknown function r is actually called n in the original formulation of the Hodgkin-Huxley model, but we will use the name r to avoid confusion with the index n used to denote the time step in the numerical schemes.

Here, the unknown function v has unit millivolts (mV) and represents the membrane potential, and C<sub>m</sub> = 1 µF/cm<sup>2</sup> is a parameter representing the specific membrane capacitance. Furthermore, I<sub>Na</sub>, I<sub>K</sub>, and I<sub>L</sub> represent the current densities through three types of ion channels: Na<sup>+</sup> channels, K<sup>+</sup> channels, and non-specific leak channels. The current densities are given in units of µA/cm<sup>2</sup> and are defined by

$$I_{\rm Na} = g_{\rm Na} m^3 h (v - v_{\rm Na}), \tag{8.5}$$

$$I_{\rm K} = g_{\rm K} r^4 (v - v_{\rm K}),\tag{8.6}$$

$$I_{\rm L} = g_{\rm L}(v - v_{\rm L}),\tag{8.7}$$

where g<sub>Na</sub> = 120 mS/cm<sup>2</sup>, g<sub>K</sub> = 36 mS/cm<sup>2</sup> and g<sub>L</sub> = 0.3 mS/cm<sup>2</sup> are parameters representing the maximal conductance density of the different channel types, and v<sub>Na</sub> = 50 mV, v<sub>K</sub> = −77 mV and v<sub>L</sub> = −54.4 mV are the equilibrium potentials of the channels. In addition, m<sup>3</sup>h and r<sup>4</sup> represent the open probabilities of the Na<sup>+</sup> and the K<sup>+</sup> channels, respectively. The open probability of the leak channels is assumed to be 1 at all times.

The unknown functions *m*, *h* and *r* take values between 0 and 1 and are governed by the equations (8.2)–(8.4). In these equations, α<sub>m</sub>, β<sub>m</sub>, α<sub>h</sub>, β<sub>h</sub>, α<sub>r</sub>, and β<sub>r</sub> represent the rates of opening and closing of the channel gates. They are given in units of ms<sup>−1</sup> and depend on the value of v. More specifically, they are defined by

$$\alpha_m = \frac{\gamma_1 (v + \gamma_2)}{1 - e^{-(v + \gamma_2)/\gamma_3}}, \qquad \qquad \beta_m = \gamma_4 e^{-(v + \gamma_5)/\gamma_6}, \tag{8.8}$$

$$\alpha_h = \gamma_7 e^{-(v + \gamma_8)/\gamma_9}, \qquad \qquad \beta_h = \frac{\gamma_{10}}{1 + e^{-(v + \gamma_{11})/\gamma_{12}}}, \tag{8.9}$$

$$\alpha_r = \frac{\gamma_{13}(v + \gamma_{14})}{1 - e^{-(v + \gamma_{14})/\gamma_{15}}}, \qquad \qquad \beta_r = \gamma_{16} e^{-(v + \gamma_{17})/\gamma_{18}}, \tag{8.10}$$

where the parameters <sup>γ</sup>1–γ<sup>18</sup> are constants specified by

$$
\gamma\_1 = 0.1 \text{ ms}^{-1} \text{mV}^{-1}, \qquad \qquad \gamma\_2 = 40 \text{ mV}, \qquad \qquad \gamma\_3 = 10 \text{ mV}, \tag{8.11}
$$

$$
\gamma\_4 = 4 \text{ ms}^{-1}, \quad \qquad \qquad \gamma\_5 = 65 \text{ mV}, \qquad \qquad \gamma\_6 = 18 \text{ mV}, \tag{8.12}
$$

$$
\gamma\_7 = 0.07 \text{ ms}^{-1}, \qquad \qquad \gamma\_8 = 65 \text{ mV}, \qquad \qquad \gamma\_9 = 20 \text{ mV}, \tag{8.13}
$$

$$\gamma\_{10} = 1 \text{ ms}^{-1}, \qquad \qquad \qquad \gamma\_{11} = \text{35 mV}, \qquad \qquad \gamma\_{12} = 10 \text{ mV}, \tag{8.14}$$

$$\gamma_{13} = 0.01 \text{ ms}^{-1}\text{mV}^{-1}, \qquad \gamma_{14} = 55 \text{ mV}, \qquad \gamma_{15} = 10 \text{ mV}, \tag{8.15}$$

$$
\gamma\_{16} = 0.125 \text{ ms}^{-1}, \qquad \qquad \gamma\_{17} = 65 \text{ mV}, \qquad \gamma\_{18} = 80 \text{ mV}. \tag{8.16}
$$

In our computations reported below, we will use the initial conditions

$$v(0) = -60 \text{ mV}, \qquad m(0) = 0.1, \qquad h(0) = 0.6, \qquad r(0) = 0.3. \tag{8.17}$$

### **8.1.1 An Explicit Numerical Scheme**

The differential equations of the Hodgkin-Huxley model can be solved numerically using the techniques we have applied for simple example equations in the previous chapters. More specifically, we can define a numerical scheme for the equations by replacing the derivatives in (8.1)–(8.4) by the standard difference,

$$f'(t) \approx \frac{f(t + \Delta t) - f(t)}{\Delta t},\tag{8.18}$$

and define an explicit scheme by

$$C\_m \frac{v\_{n+1} - v\_n}{\Delta t} = -(I\_{\text{Na}}(v\_n, m\_n, h\_n) + I\_{\text{K}}(v\_n, r\_n) + I\_{\text{L}}(v\_n)),\tag{8.19}$$

$$\frac{m\_{n+1} - m\_n}{\Delta t} = \alpha\_m(v\_n)(1 - m\_n) - \beta\_m(v\_n)m\_n,\tag{8.20}$$

$$\frac{h\_{n+1} - h\_n}{\Delta t} = \alpha\_h(v\_n)(1 - h\_n) - \beta\_h(v\_n)h\_n,\tag{8.21}$$

$$\frac{r\_{n+1} - r\_n}{\Delta t} = \alpha\_r(v\_n)(1 - r\_n) - \beta\_r(v\_n)r\_n,\tag{8.22}$$

where, as usual, $v_n$, $m_n$, $h_n$, and $r_n$ are the numerical solutions at time $t_n = n \Delta t$. The scheme can be written in computational form as

$$v\_{n+1} = v\_n - \frac{\Delta t}{C\_m} [I\_{\text{Na}}(v\_n, m\_n, h\_n) + I\_{\text{K}}(v\_n, r\_n) + I\_{\text{L}}(v\_n)],\tag{8.23}$$

$$m_{n+1} = m_n + \Delta t [\alpha_m(v_n)(1 - m_n) - \beta_m(v_n) m_n], \tag{8.24}$$

$$h\_{n+1} = h\_n + \Delta t [\alpha\_h(v\_n)(1 - h\_n) - \beta\_h(v\_n)h\_n],\tag{8.25}$$

$$r\_{n+1} = r\_n + \Delta t [\alpha\_r(v\_n)(1 - r\_n) - \beta\_r(v\_n)r\_n].\tag{8.26}$$
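For concreteness, the computational form (8.23)–(8.26) can be sketched in a few lines of Python. This is not part of the original text; the function names and the simulation length are our own choices, but the parameter values are those of (8.5)–(8.17).

```python
import math

# Hodgkin-Huxley parameters from (8.5)-(8.16): C_m in uF/cm^2,
# conductances in mS/cm^2, potentials in mV, rates in 1/ms.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
v_Na, v_K, v_L = 50.0, -77.0, -54.4

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_r(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_r(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def solve_hh(dt=0.001, T=10.0, v=-60.0, m=0.1, h=0.6, r=0.3):
    """Forward Euler steps (8.23)-(8.26) from the initial conditions (8.17)."""
    vs = [v]
    for _ in range(round(T / dt)):
        I_Na = g_Na * m**3 * h * (v - v_Na)
        I_K = g_K * r**4 * (v - v_K)
        I_L = g_L * (v - v_L)
        v_new = v - dt / C_m * (I_Na + I_K + I_L)
        # The gate updates use the old value v_n, exactly as in (8.24)-(8.26).
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        r += dt * (alpha_r(v) * (1.0 - r) - beta_r(v) * r)
        v = v_new
        vs.append(v)
    return vs

vs = solve_hh()  # an action potential fires from these initial conditions
```

With $\Delta t = 0.001$ ms, the computed trace of $v$ reproduces the action potential shown in Fig. 8.1 below.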

### **8.1.2 Error of the Numerical Solution**

In Table 8.1, we report the error, $E_v$, of the numerical solution of $v$ using the numerical scheme (8.23)–(8.26) for some different values of $\Delta t$. The error is computed by comparing the solutions at time $t = 3$ ms to the solution found using a very fine time step ($\Delta t = 10^{-6}$ ms). As observed for simpler systems of equations in earlier chapters, we find that the error of the explicit scheme is close to proportional to the time step, $\Delta t$, applied in the numerical scheme. In other words, we have linear (or first-order) convergence.
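The convergence behavior can be checked with a small numerical experiment. The sketch below is our own and is not part of the original text; to keep the runtime short, we use a reference time step of $10^{-4}$ ms rather than the $10^{-6}$ ms used in Table 8.1.

```python
import math

def hh_rates(v):
    # Gate opening/closing rates (8.8)-(8.16) in 1/ms.
    return (0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10)),   # alpha_m
            4 * math.exp(-(v + 65) / 18),                      # beta_m
            0.07 * math.exp(-(v + 65) / 20),                   # alpha_h
            1 / (1 + math.exp(-(v + 35) / 10)),                # beta_h
            0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10)),  # alpha_r
            0.125 * math.exp(-(v + 65) / 80))                  # beta_r

def v_at(T, dt):
    """Membrane potential at time T from the explicit scheme, with C_m = 1."""
    v, m, h, r = -60.0, 0.1, 0.6, 0.3
    for _ in range(round(T / dt)):
        a_m, b_m, a_h, b_h, a_r, b_r = hh_rates(v)
        I = 120 * m**3 * h * (v - 50) + 36 * r**4 * (v + 77) + 0.3 * (v + 54.4)
        v, m, h, r = (v - dt * I,
                      m + dt * (a_m * (1 - m) - b_m * m),
                      h + dt * (a_h * (1 - h) - b_h * h),
                      r + dt * (a_r * (1 - r) - b_r * r))
    return v

v_ref = v_at(3.0, 1e-4)  # reference solution on a fine grid
errors = {dt: abs(v_at(3.0, dt) - v_ref) for dt in (0.02, 0.01, 0.005)}
```

Halving $\Delta t$ roughly halves the error, consistent with the first-order convergence reported in Table 8.1.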

**Table 8.1** Error of the numerical solution of the Hodgkin-Huxley model for different values of $\Delta t$. The error is defined as $E_v = |v - v_N|$, where $v$ is the numerical solution at $t = 3$ ms for a very fine time step ($\Delta t = 10^{-6}$ ms), and $v_N$ is the numerical solution at $t = 3$ ms for each of the values of $\Delta t$ reported in the first column of the table.


### **8.1.3 Details of the Model Solution**

In Fig. 8.1, we show plots of the numerical solution of the Hodgkin-Huxley model computed using $\Delta t = 0.001$ ms. In the upper left panel, we have plotted $v$, representing the membrane potential. We see that an action potential is generated, starting with an increase in the value of $v$ (depolarization) followed by a decrease in the value of $v$ (repolarization), like in the simple FitzHugh-Nagumo model studied in Chapter 2. The action potential lasts for a couple of milliseconds.

In the next three panels of Fig. 8.1, we show the values of the unitless $m$, $h$, and $r$ variables, and in the final three panels we show each of the current densities $I_{\rm Na}$, $I_{\rm K}$, and $I_{\rm L}$. We see that $I_{\rm Na}$ takes negative values, while $I_{\rm K}$ is positive. The current $I_{\rm L}$ takes both positive and negative values, and these values are considerably smaller (in absolute value) than the values of $I_{\rm Na}$ and $I_{\rm K}$ during the action potential. The sign of the current densities can be explained by revisiting the definitions,

$$\begin{aligned} I\_{\mathrm{Na}} &= \mathrm{g\_{\mathrm{Na}}} m^3 h (v - v\_{\mathrm{Na}}), \\ I\_{\mathrm{K}} &= \mathrm{g\_{\mathrm{K}}} r^4 (v - v\_{\mathrm{K}}), \\ I\_{\mathrm{L}} &= \mathrm{g\_{\mathrm{L}}} (v - v\_{\mathrm{L}}), \end{aligned}$$

(see (8.5)–(8.7)) and recalling that $g_{\rm Na}$, $g_{\rm K}$, $g_{\rm L}$, $m$, $h$, and $r$ are all positive. Thus, we see directly from the definition of the current densities that $I_{\rm Na}$ is positive for $v > v_{\rm Na} = 50$ mV and negative for $v < 50$ mV. Similarly, $I_{\rm K}$ is positive for $v > v_{\rm K} = -77$ mV and negative for $v < -77$ mV, and $I_{\rm L}$ is positive for $v > v_{\rm L} = -54.4$ mV and negative for $v < -54.4$ mV.

Moreover, since $C_m v_t = -(I_{\rm Na} + I_{\rm K} + I_{\rm L})$, see (8.1), and $C_m$ is positive, we deduce that a negative value of the sum $(I_{\rm Na} + I_{\rm K} + I_{\rm L})$ is needed for $v_t$ to be positive (i.e., for $v$ to increase), and a positive value of the sum is needed for $v_t$ to be negative (i.e., for $v$ to decrease). We can therefore conclude that in the beginning of the simulation, for $v < -54.4$ mV, both $I_{\rm Na}$ and $I_{\rm L}$ contribute to the depolarization of $v$, and that after $v$ increases above $-54.4$ mV, $I_{\rm Na}$ is solely responsible for the depolarization. However, as the value of $v$ increases, the value of $(v - v_{\rm K})$ and $r$

**Fig. 8.1** Numerical solution of the Hodgkin-Huxley model with initial conditions $v(0) = -60$ mV, $m(0) = 0.1$, $h(0) = 0.6$, and $r(0) = 0.3$. The system of equations is solved using the explicit scheme described in (8.23)–(8.26) with $\Delta t = 0.001$ ms.

increases, which leads to an increased value of $I_{\rm K}$. When $I_{\rm K} + I_{\rm L}$ is more positive than $I_{\rm Na}$ is negative, $v_t$ becomes negative, and $v$ starts to repolarize.

### **8.1.4 Upstroke Velocity and Action Potential Duration**

From the discussion in the previous subsection, it seems reasonable to expect that increasing $I_{\rm Na}$, e.g., by increasing the value of the parameter $g_{\rm Na}$, will make the depolarization phase (upstroke) of the action potential more rapid and the repolarization slower (longer action potential duration)². Similarly, we would expect that increasing $I_{\rm K}$, e.g., by increasing the value of the parameter $g_{\rm K}$, would make the repolarization of the action potential more rapid, and thus the duration of the action potential shorter. In Fig. 8.2, we show the upstroke and the action potential for a few choices of the parameters $g_{\rm Na}$, $g_{\rm K}$, and $g_{\rm L}$. As expected, we observe that increasing $g_{\rm Na}$ increases the upstroke velocity and the action potential duration, while increasing $g_{\rm K}$ decreases the action potential duration. The adjustments of $g_{\rm L}$ do not seem to have a significant effect on the computed upstroke or action potential duration.

**Fig. 8.2** Numerical solution of the Hodgkin-Huxley model for some adjustments of the parameters $g_{\rm Na}$, $g_{\rm K}$, and $g_{\rm L}$. The applied adjustments are given in the legends on the right-hand side of each row, and the remaining parameters are kept at their default values. To make the comparison easier, the timing is adjusted such that the time of the maximal upstroke velocity occurs at the same time for all the parameter choices. The system of equations is solved using the explicit scheme described in (8.23)–(8.26) with $\Delta t = 0.001$ ms.

² See Section 2.3 (page 17) for definitions of the upstroke velocity and action potential duration.

### **8.2 A Parsimonious Model for the Action Potential of Rabbit Ventricular Cardiomyocytes**

In addition to the Hodgkin-Huxley model for neuronal action potentials, we will also consider a similar model for a cardiac action potential. More specifically, we consider a simple, so-called parsimonious, model for the action potential of rabbit ventricular cardiomyocytes from [9]. Note, however, that numerous alternative cardiac action potential models exist (see Section 8.3). The parsimonious rabbit model consists of a system of three ordinary differential equations with three unknowns,

$$C\_m \frac{dv}{dt} = -(I\_{\rm Na} + I\_{\rm K} + I\_{\rm stim}),\tag{8.27}$$

$$\frac{dm}{dt} = \frac{m\_{\infty} - m}{\tau\_m},\tag{8.28}$$

$$\frac{dh}{dt} = \frac{h_\infty - h}{\tau_h}. \tag{8.29}$$

Again, the unknown function $v$ in units of millivolts (mV) represents the membrane potential, and $C_m = 1$ µF/cm² is a parameter representing the specific membrane capacitance. Furthermore, $I_{\rm Na}$ and $I_{\rm K}$ (in µA/cm²) represent the current densities through Na⁺ and K⁺ channels and are given by

$$I_{\rm Na} = g_{\rm Na} m^3 h (v - v_{\rm Na}), \tag{8.30}$$

$$I_{\rm K} = g_{\rm K} e^{-b(v - v_{\rm K})} (v - v_{\rm K}), \tag{8.31}$$

where $g_{\rm Na} = 11$ mS/cm² and $g_{\rm K} = 0.3$ mS/cm² represent the maximal conductance densities of the Na⁺ and K⁺ channels, respectively, and $v_{\rm Na} = 65$ mV and $v_{\rm K} = -83$ mV are the equilibrium potentials of the two channel types. In addition, $m^3 h$ and $e^{-b(v - v_{\rm K})}$ represent the open probabilities of the Na⁺ and K⁺ channels, respectively. The parameter $b$ has the value $b = 0.047$ mV⁻¹. As in the Hodgkin-Huxley model, the unknown functions $m$ and $h$ take values between 0 and 1 and are governed by (8.28)–(8.29)³, where

$$m\_{\infty} = \frac{1}{1 + e^{(\nu - E\_m)/k\_m}}, \qquad \qquad \qquad \tau\_m = 0.12 \text{ ms}, \tag{8.32}$$

$$h\_{\infty} = \frac{1}{1 + e^{(v - E\_h)/k\_h}},\qquad\qquad\qquad\tau\_h = \frac{2\tau\_h^0 e^{\delta\_h(v - E\_h)/k\_h}}{1 + e^{(v - E\_h)/k\_h}},\qquad(8.33)$$

and

³ Note that the formulation of the equation for $m$ used here ($m_t = (m_\infty - m)/\tau_m$) corresponds to the formulation used in the Hodgkin-Huxley model ($m_t = \alpha_m(1 - m) - \beta_m m$) if we define $\tau_m = \frac{1}{\alpha_m + \beta_m}$ and $m_\infty = \frac{\alpha_m}{\alpha_m + \beta_m}$. The equation for $h$ can be rewritten analogously.

**Fig. 8.3** Values of $m_\infty$, $h_\infty$, and $e^{-b(v - v_{\rm K})}$ in the parsimonious ventricular rabbit model (8.27)–(8.35) as functions of $v$.

$$E_m = -41 \text{ mV}, \qquad k_m = -4.0 \text{ mV}, \tag{8.34}$$

$$E\_h = -74.9 \text{ mV}, \qquad k\_h = 4.4 \text{ mV}, \qquad \tau\_h^0 = 6.8 \text{ ms}, \qquad \delta\_h = 0.8. \tag{8.35}$$
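For illustration, the gating functions (8.32)–(8.35) can be evaluated numerically. The following Python sketch is our own (not part of the original text); the function names are our own choices, but the formulas and parameter values are those above.

```python
import math

# Parameters from (8.32)-(8.35): potentials in mV, times in ms.
E_m, k_m, tau_m = -41.0, -4.0, 0.12
E_h, k_h, tau_h0, delta_h = -74.9, 4.4, 6.8, 0.8

def m_inf(v): return 1.0 / (1.0 + math.exp((v - E_m) / k_m))
def h_inf(v): return 1.0 / (1.0 + math.exp((v - E_h) / k_h))
def tau_h(v):
    # Voltage-dependent time constant of the h gate, see (8.33).
    x = (v - E_h) / k_h
    return 2.0 * tau_h0 * math.exp(delta_h * x) / (1.0 + math.exp(x))
```

At the resting potential, $v = -83$ mV, we get $m_\infty \approx 0$, while around $-40$ mV roughly half of the activation gates are open; compare Fig. 8.3 and the discussion in Section 8.2.1.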

In addition, *<sup>I</sup>*stim is a stimulus current density given in units of <sup>µ</sup>A/cm<sup>2</sup> . This stimulus current density is introduced because the initial conditions (see below) yield a system at rest, and the stimulus current is needed to trigger an action potential. The stimulus current is described in more detail in Section 8.2.1 below.

In our computations, we will use the initial conditions

$$v(0) = -83 \text{ mV}, \qquad m(0) = 0, \qquad h(0) = 0.9. \tag{8.36}$$

### **8.2.1 The Stimulus Current Density**

In the model (8.27)–(8.29), *I*stim is a stimulus current density that is used to initiate an action potential by increasing the membrane potential enough to activate *I*Na. Specifically, the stimulus current density is given by

$$I\_{\rm stim} = \begin{cases} a\_{\rm stim}, & \text{if } t \ge t\_{\rm stim} \text{ and } t \le t\_{\rm stim} + d\_{\rm stim}, \\ 0, & \text{otherwise.} \end{cases} \tag{8.37}$$

In other words, the stimulus current density is only nonzero in the period from $t_{\rm stim}$ to $t_{\rm stim} + d_{\rm stim}$. In our computations, we use $a_{\rm stim} = -25$ µA/cm², $t_{\rm stim} = 50$ ms, and $d_{\rm stim} = 2$ ms, unless otherwise specified. This turns out to be a sufficiently strong $I_{\rm stim}$ to activate $I_{\rm Na}$ and thus initiate an action potential.
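The piecewise definition (8.37) translates directly into code. This short Python sketch is our own; the default argument values are the ones stated in the text.

```python
def I_stim(t, a_stim=-25.0, t_stim=50.0, d_stim=2.0):
    """Stimulus current density (8.37) in uA/cm^2; t, t_stim, d_stim in ms."""
    return a_stim if t_stim <= t <= t_stim + d_stim else 0.0
```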

As mentioned above, without an applied stimulus current density, the membrane potential, $v$, is at rest (i.e., does not change with time) at the initial conditions $v(0) = -83$ mV, $m(0) = 0$, $h(0) = 0.9$. This is because both $I_{\rm K}$ and $I_{\rm Na}$ on the right-hand side of (8.27) are equal to or close to zero. The current density $I_{\rm K} = g_{\rm K} e^{-b(v - v_{\rm K})}(v - v_{\rm K})$ is zero because $v = -83$ mV, which is equal to the equilibrium potential of the K⁺ channels, $v_{\rm K}$, so $(v - v_{\rm K}) = 0$. Furthermore, the current density $I_{\rm Na} = g_{\rm Na} m^3 h (v - v_{\rm Na})$ is zero because $m$ is zero. From Fig. 8.3, we also see that $m_\infty(v) \approx 0$ for $v = -83$ mV. Therefore, $m_\infty \approx m$, which means that $\frac{dm}{dt} = \frac{m_\infty - m}{\tau_m} \approx 0$, and $m$ is expected to maintain a value close to zero. Since the membrane potential is at rest at $v = -83$ mV, this value can be referred to as the *resting potential* of the model.

However, if we apply a negative $I_{\rm stim}$ for some time, this allows $v$ to increase a bit. If we, for example, increase $v$ to about $-40$ mV, then $m_\infty(v)$ increases to about 0.5 (see Fig. 8.3), and $\frac{dm}{dt} = \frac{m_\infty - m}{\tau_m}$ becomes positive (if $m \approx 0$), leading to an increased value of $m$. For $m > 0$, we get a nonzero, negative $I_{\rm Na}$ that lasts as long as $h > 0$ and $(v - v_{\rm Na}) < 0$ (see (8.30)), and this negative $I_{\rm Na}$ creates the upstroke of the action potential.

### **8.2.2 An Explicit Numerical Scheme**

An explicit numerical scheme for the parsimonious ventricular rabbit model can be defined in almost exactly the same manner as in Section 8.1.1 for the Hodgkin-Huxley model by replacing the derivatives in (8.27)–(8.29) by the standard difference (8.18). The computational form for the scheme reads:

$$v\_{n+1} = v\_n - \frac{\Delta t}{C\_m} [I\_{\text{Na}}(v\_n, m\_n, h\_n) + I\_{\text{K}}(v\_n) + I\_{\text{stim}}(t\_n)],\tag{8.38}$$

$$m\_{n+1} = m\_n + \Delta t \frac{m\_{\infty}(v\_n) - m\_n}{\tau\_m(v\_n)},\tag{8.39}$$

$$h\_{n+1} = h\_n + \Delta t \frac{h\_{\infty}(v\_n) - h\_n}{\tau\_h(v\_n)}.\tag{8.40}$$
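A full Python sketch of the scheme (8.38)–(8.40) is given below. This is our own illustrative implementation, not part of the original text; the parameter values are those of Section 8.2, and the stimulus parameters are the defaults from Section 8.2.1.

```python
import math

# Parameters of the parsimonious rabbit model (Section 8.2).
C_m = 1.0                                 # uF/cm^2
g_Na, g_K = 11.0, 0.3                     # mS/cm^2
v_Na, v_K, b = 65.0, -83.0, 0.047         # mV, mV, 1/mV
E_m, k_m, tau_m = -41.0, -4.0, 0.12
E_h, k_h, tau_h0, delta_h = -74.9, 4.4, 6.8, 0.8

def m_inf(v): return 1.0 / (1.0 + math.exp((v - E_m) / k_m))
def h_inf(v): return 1.0 / (1.0 + math.exp((v - E_h) / k_h))
def tau_h(v):
    x = (v - E_h) / k_h
    return 2.0 * tau_h0 * math.exp(delta_h * x) / (1.0 + math.exp(x))

def I_stim(t):
    # Stimulus (8.37) with a_stim = -25 uA/cm^2, t_stim = 50 ms, d_stim = 2 ms.
    return -25.0 if 50.0 <= t <= 52.0 else 0.0

def solve(dt=0.01, T=300.0, v=-83.0, m=0.0, h=0.9):
    """Forward Euler steps (8.38)-(8.40) from the initial conditions (8.36)."""
    vs = [v]
    for n in range(round(T / dt)):
        I_Na = g_Na * m**3 * h * (v - v_Na)
        I_K = g_K * math.exp(-b * (v - v_K)) * (v - v_K)
        v_new = v - dt / C_m * (I_Na + I_K + I_stim(n * dt))
        # Gate updates use the old value v_n, as in (8.39)-(8.40).
        m += dt * (m_inf(v) - m) / tau_m
        h += dt * (h_inf(v) - h) / tau_h(v)
        v = v_new
        vs.append(v)
    return vs
```

The solution stays at the resting potential until the stimulus is applied at $t = 50$ ms, after which an action potential is triggered, as shown in Fig. 8.4.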

### **8.2.3 Numerical Computations**

In Table 8.2, we report the error, $E_v$, of the numerical solution of $v$ using the numerical scheme (8.38)–(8.40) for some different values of $\Delta t$. The error is computed by comparing the solutions at time $t = 10$ ms to the solution found using a very fine time step ($\Delta t = 10^{-5}$ ms). As observed for the Hodgkin-Huxley model, we find that the error of the explicit scheme is close to proportional to the time step, $\Delta t$, applied in the numerical scheme. So again we have linear (or first-order) convergence.

In Fig. 8.4, we show the numerical solution of the parsimonious ventricular rabbit model solved using $\Delta t = 0.001$ ms. In the upper left plot we show the membrane potential, $v$, and we see that, as in Fig. 8.1 for the Hodgkin-Huxley model, an action potential is generated. The action potential starts around the time when $I_{\rm stim}$ is applied (at $t_{\rm stim} = 50$ ms) and seems to last for about 250 ms. This is significantly longer than the roughly 2–3 ms long neuronal action potential in the Hodgkin-Huxley

**Table 8.2** Error of the numerical solution of the parsimonious ventricular rabbit model for different values of $\Delta t$. The error is defined as $E_v = |v - v_N|$, where $v$ is the numerical solution at $t = 10$ ms for a very fine time step ($\Delta t = 10^{-5}$ ms), and $v_N$ is the numerical solution at $t = 10$ ms for each of the values of $\Delta t$ reported in the first column of the table. In these computations we have used $t_{\rm stim} = 0$ ms and $d_{\rm stim} = 2$ ms.


**Fig. 8.4** Numerical solution of the parsimonious ventricular rabbit model. The system of equations is solved using the explicit scheme described in (8.38)–(8.40) with $\Delta t = 0.001$ ms. Note that $I_{\rm Na}$ is shown in two panels, one with the same time scale as the remaining panels, and one with the time scale zoomed in on the time of the peak current.

model. The next two upper panels show how the values of $m$ and $h$ change with time, and the plots in the lower panels show the two ion channel current densities, $I_{\rm Na}$ and $I_{\rm K}$. The lower left panel shows $I_{\rm Na}$ on the same time scale as the remaining panels, and the lower middle panel shows $I_{\rm Na}$ zoomed in on the time when the peak current occurs. The lower right panel shows $I_{\rm K}$.

### **8.2.4 Upstroke Velocity and Action Potential Duration**

In Fig. 8.5, we investigate how the upstroke velocity and action potential duration are affected by adjusting the parameters $g_{\rm Na}$ and $g_{\rm K}$ in the parsimonious ventricular rabbit model. In the upper panel, we observe that decreasing $g_{\rm Na}$ leads to a slower upstroke and a shorter action potential duration, whereas a lower value of $g_{\rm K}$ leads to a longer action potential duration. These observations are consistent with what we observed for the Hodgkin-Huxley model in Fig. 8.2. We also observe that the onset of the rapid upstroke happens slightly sooner after the stimulus current is applied when $g_{\rm K}$ is decreased.

**Fig. 8.5** Numerical solution of $v$ in the parsimonious ventricular rabbit model for some adjustments of the parameters $g_{\rm Na}$ and $g_{\rm K}$. The applied adjustments are given in the legends on the right-hand side of each row, and the remaining parameters are kept at their default values. The system of equations is solved using the explicit scheme described in (8.38)–(8.40) with $\Delta t = 0.001$ ms.

### **8.3 Comments and Further Reading**

1. Following the publication of the Hodgkin-Huxley model [10] in 1952, many similar models representing the action potential of different cell types were published, building on the same basic principles. One example is the ventricular rabbit model described in Section 8.2. More recent models are often more complex than the two membrane models considered in this chapter. For instance, a large number of additional membrane currents and dynamic intracellular ion concentrations are often included in the equations. Nevertheless, the resulting system of differential equations can be solved in exactly the same manner as seen for the Hodgkin-Huxley model in Section 8.1.1 and for the parsimonious ventricular rabbit model in Section 8.2.2.


longer action potential of the cardiomyocyte is crucial in inducing an increase in intracellular Ca2<sup>+</sup> concentration (not part of the model above), resulting in the mechanical contraction of the myocyte and effective pumping of blood. Thus, the longer action potential is essential for optimal cardiac function.

### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 9 The Cable Equation**

In Chapter 6, we studied a simple version of the cable equation, where a diffusion term was added to the FitzHugh-Nagumo equations. In this chapter, we will revisit the cable equation and go through a simple derivation of the model. In addition, we will consider the numerical solution of the cable equation for a neuronal axon with membrane dynamics modeled by the Hodgkin-Huxley model.

### **9.1 Derivation of the Cable Equation**

In order to get a sense of the origin of the terms in the cable equation, we will here consider a simple derivation of the model. Similar derivations are found in, e.g., [1, 2, 4, 7, 9]. The derivation is based on dividing a cell (e.g., an axon) into a number of compartments in the *x*-direction, as illustrated in Fig. 9.1.

**Fig. 9.1** Left: Illustration of a cell separated into a number of compartments of length $\Delta x$. Right: Illustration of one of the compartments, $j$. The cross-sectional area in the $x$-direction is denoted by $A_x$, and the total membrane area is denoted by $A_m$. The current across the membrane is denoted by $I_m$, the current from compartment $j$ to compartment $j+1$ is denoted by $I_+$, and the current from compartment $j$ to compartment $j-1$ is denoted by $I_-$.

The left panel of Fig. 9.1 illustrates a cell divided into a number of compartments of length $\Delta x$, and the right panel illustrates an arbitrary inner¹ compartment number $j$. We will derive the cable equation by considering each of the electric currents flowing out of this compartment.

### **Currents Between Compartments**

We assume that the currents that flow between compartments are governed by Ohm's law in the sense that the current from compartment *j* to compartment *j* + 1 is given by

$$I\_{+} = \frac{u\_{i,j} - u\_{i,j+1}}{R},\tag{9.1}$$

where $u_{i,j}$ is the electrical potential in the center of compartment $j$ and $u_{i,j+1}$ is the electrical potential in the center of compartment $j+1$, both in units of millivolts (mV). The subscript $i$ is used to specify that we are considering the intracellular potential of the cell. Furthermore, $R$ is the intracellular resistance between the two compartment centers in units of kiloohms (kΩ). This resistance can be expressed as (see, e.g., [5])

$$R = \frac{\Delta x}{\sigma\_i A\_x},\tag{9.2}$$

where $\Delta x$ is the distance between the centers of compartments $j$ and $j+1$ (in cm), $\sigma_i$ is the intracellular conductivity (in mS/cm), and $A_x$ (in cm²) is the cross-sectional area of the cell (see Fig. 9.1). Inserting (9.2) into (9.1), we get

$$I_+ = \sigma_i A_x \frac{u_{i,j} - u_{i,j+1}}{\Delta x}, \tag{9.3}$$

and, following the same steps, we get that the current from compartment *j* to compartment *j* − 1 is given by

$$I_- = \sigma_i A_x \frac{u_{i,j} - u_{i,j-1}}{\Delta x}. \tag{9.4}$$

The unit of $I_+$ and $I_-$ is microamperes (µA).

### **Membrane Currents**

We assume that the membrane acts as a capacitor that can store charge and that this property can be modeled like in the membrane models in Chapter 8. More specifically, we assume that the total current across the membrane of compartment

¹ Here, an arbitrary inner compartment refers to any of the compartments except the leftmost or rightmost compartment.


$j$ is given by²

$$I\_m = A\_m \left( C\_m \frac{\partial v\_j}{\partial t} + I\_{\rm ion} \right). \tag{9.5}$$

Here, *A*<sup>m</sup> is the membrane area of compartment *j* (in cm<sup>2</sup> ), given by

$$A\_m = \eta\_m \Delta x,\tag{9.6}$$

where $\eta_m$ is the circumference of the compartment (in cm) and $\Delta x$ is the length of the compartment (in cm). Furthermore, $C_m$ is the specific membrane capacitance (given in units of µF/cm²), and $v_j$ is the membrane potential (in mV) defined as the potential difference

$$v = u_i - u_e, \tag{9.7}$$

where $u_i$ is the intracellular potential in the compartment and $u_e$ is the extracellular potential outside of the compartment. The term $C_m \frac{\partial v_j}{\partial t}$ is called the capacitive current density and represents the current density to and from the collection of charges stored by the membrane capacitor. Moreover, $I_{\rm ion}$ is the current density (in µA/cm²) through ion channels in the cell membrane. For example, if we assume that the membrane dynamics are modeled by the Hodgkin-Huxley model (see Section 8.1), this current density is given by

$$I\_{\rm ion} = I\_{\rm Na} + I\_{\rm K} + I\_{\rm L}.\tag{9.8}$$

### **Sum of Currents**

In order to derive the cable equation, we assume that Kirchhoff's current law applies in each compartment. In other words, we assume that the sum of all of the currents out of the compartment is zero. This gives

$$I\_+ + I\_- + I\_m = 0,\tag{9.9}$$

and inserting (9.3)–(9.6), this yields

$$\sigma_i A_x \frac{u_{i,j} - u_{i,j+1}}{\Delta x} + \sigma_i A_x \frac{u_{i,j} - u_{i,j-1}}{\Delta x} + \eta_m \Delta x \left( C_m \frac{\partial v_j}{\partial t} + I_{\text{ion}} \right) = 0, \tag{9.10}$$

which can be rearranged to

$$C_m \frac{\partial v_j}{\partial t} = \frac{\sigma_i A_x}{\eta_m} \frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{\Delta x^2} - I_{\text{ion}}. \tag{9.11}$$

² In (9.5) we recognize the terms $C_m v_t$ and $I_{\rm ion}$ (see (9.8)) from the membrane models considered in Chapter 8. In those models, we ignored all spatial variation and assumed that $I_m$ was the only current into or out of the cell. In order for the sum of the currents to be zero, we therefore ended up with models of the form $C_m v_t = -I_{\rm ion}$.

Now, we introduce the assumption that the extracellular potential is zero everywhere, such that $v = u_i$ (see (9.7))³. In that case, the model reads

$$C\_m \frac{\partial v\_j}{\partial t} = \delta \frac{v\_{j-1} - 2v\_j + v\_{j+1}}{\Delta x^2} - I\_{\text{ion}},\tag{9.12}$$

where

$$\delta = \frac{\sigma_i A_x}{\eta_m}. \tag{9.13}$$

From Chapter 3 (see page 23), we recall that

$$v_{xx}(t, x) \approx \frac{v(t, x - \Delta x) - 2v(t, x) + v(t, x + \Delta x)}{\Delta x^2} \tag{9.14}$$

for a sufficiently small ∆*x*. We therefore assume that ∆*x* in (9.12) is very small, and obtain the cable equation

$$C\_m \frac{\partial v}{\partial t} = \delta \frac{\partial^2 v}{\partial x^2} - I\_{\text{ion}}.\tag{9.15}$$
 

Since the considered compartment *j* was chosen as an arbitrary inner compartment, we conclude that the equation (9.15) applies everywhere along the cell, except at the left and right boundaries. The boundary conditions are considered in the next subsection.
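The finite difference approximation (9.14) underlying this limit is easy to verify numerically. The following Python sketch is our own illustration (the test function is an arbitrary choice): for a smooth function, the centered second difference converges to the exact second derivative as the spacing shrinks.

```python
import math

def second_difference(f, x, dx):
    """Centered approximation (9.14) of the second derivative f''(x)."""
    return (f(x - dx) - 2.0 * f(x) + f(x + dx)) / dx**2

# For f(x) = sin(x), the exact second derivative is -sin(x).
approx = second_difference(math.sin, 1.0, 1e-3)
exact = -math.sin(1.0)
```

With $\Delta x = 10^{-3}$, the approximation agrees with the exact value to within about $10^{-7}$, reflecting the second-order accuracy of the centered difference.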

### **9.1.1 Boundary Conditions**

Intuitively, for the leftmost compartment of the cell (see Fig. 9.1), the current from this compartment to the *non-existing* compartment to the left, *I*−, should be assumed to be zero. In other words,

$$I_- = \sigma_i A_x \frac{u_{i,j} - u_{i,j-1}}{\Delta x} = 0. \tag{9.16}$$

From previous chapters (see, e.g., (1.6) in Chapter 1), we know that

$$\frac{\partial u_i}{\partial x}(t, x) \approx \frac{u_i(t, x) - u_i(t, x - \Delta x)}{\Delta x} \tag{9.17}$$

for a sufficiently small ∆*x*. Assuming that ∆*x* is very small, (9.16) can therefore be translated to the boundary condition

$$\sigma_i A_x \frac{\partial u_i}{\partial x}(t, 0) = 0, \tag{9.18}$$

³ Note that the assumption that the extracellular potential is zero could be replaced by alternative assumptions (see Section 9.4 for more details).

where we have assumed that the leftmost boundary of the cell is located at $x = 0$. Dividing by $\sigma_i A_x$ on both sides of the equation and using the assumption that the extracellular potential is zero, which gives $u_i = v$, we get the boundary condition

$$\frac{\partial v}{\partial x}(t,0) = 0.\tag{9.19}$$

Inserting this into the discrete version of the cable equation for the left boundary, we get

$$C\_m \frac{\partial v\_1}{\partial t} = \delta \frac{-v\_1 + v\_2}{\Delta x^2} - I\_{\text{ion}}.\tag{9.20}$$
 

A similar argument for the right boundary of the cell, at *x* = *L*, gives the boundary condition

$$\frac{\partial v}{\partial x}(t, L) = 0, \tag{9.21}$$

and the discrete equation

$$C_m \frac{\partial v_M}{\partial t} = \delta \frac{-v_M + v_{M-1}}{\Delta x^2} - I_{\text{ion}}. \tag{9.22}$$

### **9.1.2 Geometry**

The value of $\delta$ (see (9.13)) in the cable equation depends on the geometry under consideration. If we consider a rectangular cuboid, as illustrated in Fig. 9.1, with width $w$ in the $y$- and $z$-directions, we have $A_x = w^2$ and $\eta_m = 4w$, which gives

$$
\delta = \frac{\sigma\_i A\_x}{\eta\_m} = \frac{w \sigma\_i}{4}.\tag{9.23}
$$

Similarly, if we consider a cylinder with radius *r*, we have *A*<sub>x</sub> = π*r*<sup>2</sup> and η<sub>m</sub> = 2π*r*, which gives

$$
\delta = \frac{\sigma\_i A\_x}{\eta\_m} = \frac{r \sigma\_i}{2}.\tag{9.24}
$$
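These two formulas are easy to check numerically. The following small Python sketch (ours, not part of the code associated with these notes; the function names are our own) evaluates δ = σ<sub>i</sub>*A*<sub>x</sub>/η<sub>m</sub> for both geometries:

```python
import math

def delta_cuboid(w, sigma_i):
    """delta for a rectangular cuboid of width w: A_x = w^2, eta_m = 4w, cf. (9.23)."""
    return sigma_i * w**2 / (4 * w)  # simplifies to w * sigma_i / 4

def delta_cylinder(r, sigma_i):
    """delta for a cylinder of radius r: A_x = pi r^2, eta_m = 2 pi r, cf. (9.24)."""
    return sigma_i * math.pi * r**2 / (2 * math.pi * r)  # simplifies to r * sigma_i / 2

# Values used in Section 9.3: w = 0.001 cm and sigma_i = 4 mS/cm give delta = 0.001 mS.
print(delta_cuboid(0.001, 4.0))
```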

### **9.1.3 Additional State Variables**

As mentioned in the derivation above, the term *I*ion represents the current density through ion channels in the cell membrane. This current density could, for example, be modeled by the Hodgkin-Huxley model:

$$I\_{\rm ion} = I\_{\rm Na} + I\_{\rm K} + I\_{\rm L} \tag{9.25}$$

(see Section 8.1). Here, the formulations of *I*<sub>Na</sub> and *I*<sub>K</sub> involve the additional state variables *m*, *h* and *r*, which are governed by (8.2)–(8.4). In order to define the current density, *I*<sub>ion</sub>, we therefore need to include these equations in the model, where *m*, *h* and *r* are functions of both *t* and *x*. We thus get a system of equations of the form

$$C\_m \frac{\partial v}{\partial t} = \delta \frac{\partial^2 v}{\partial x^2} - I\_{\text{ion}},\tag{9.26}$$

$$\frac{\partial m}{\partial t} = \alpha\_m (1 - m) - \beta\_m m,\tag{9.27}$$

$$\frac{\partial h}{\partial t} = \alpha\_h (1 - h) - \beta\_h h,\tag{9.28}$$

$$\frac{\partial r}{\partial t} = \alpha\_r (1 - r) - \beta\_r r. \tag{9.29}$$

### **9.2 Numerical Schemes**

We will consider two alternative numerical schemes for the cable equation: first a straightforward explicit scheme, and then an operator splitting scheme that treats the diffusion part of the system implicitly. In both schemes, we seek numerical approximations to the solution in the *M* spatial points *x*<sub>j</sub> = (*j* − 1)∆*x*, for *j* = 1, ..., *M*, at the time points *t*<sub>n</sub> = *n*∆*t* for *n* = 1, ..., *N*. Here, ∆*x* = *L*/(*M* − 1) and ∆*t* = *T*/*N*, where *L* is the length of the domain and *T* is the total simulation time.

### **9.2.1 An Explicit Numerical Scheme for the Cable Equation**

We consider a numerical scheme for the cable equation that is based on the discrete spatial version of the equation used in the derivation of the model, i.e., (9.12). To define a numerical scheme, we have to replace the remaining derivative ∂v/∂*t* by a difference, and we choose the usual difference (see, e.g., (1.5) on page 4). In addition, similar replacements of derivatives by differences are inserted into (9.27)–(9.29), and we get the explicit scheme

$$C\_m \frac{v\_j^{n+1} - v\_j^n}{\Delta t} = \delta \frac{v\_{j-1}^n - 2v\_j^n + v\_{j+1}^n}{\Delta x^2} - I\_{\text{ion}}(v\_j^n, m\_j^n, h\_j^n, r\_j^n), \tag{9.30}$$

$$\frac{m\_j^{n+1} - m\_j^n}{\Delta t} = \alpha\_m(v\_j^n)(1 - m\_j^n) - \beta\_m(v\_j^n)m\_j^n,\tag{9.31}$$

$$\frac{h\_j^{n+1} - h\_j^n}{\Delta t} = \alpha\_h(v\_j^n)(1 - h\_j^n) - \beta\_h(v\_j^n)h\_j^n,\tag{9.32}$$

$$\frac{r\_j^{n+1} - r\_j^n}{\Delta t} = \alpha\_r(v\_j^n)(1 - r\_j^n) - \beta\_r(v\_j^n)r\_j^n,\tag{9.33}$$

for *j* = 2, ..., *M* − 1. For *j* = 1 and *j* = *M*, we replace the right-hand side of (9.30) by the right-hand sides of (9.20) and (9.22), respectively. This scheme can be rewritten in the computational form

$$v^{n+1} = \left(I + \frac{\Delta t}{C\_m}A\right)v^n - \frac{\Delta t}{C\_m}I\_{\text{ion}}(v^n, m^n, h^n, r^n),\tag{9.34}$$

$$m^{n+1} = m^n + \Delta t[\alpha\_m(v^n)(1 - m^n) - \beta\_m(v^n)m^n],\tag{9.35}$$

$$h^{n+1} = h^n + \Delta t [\alpha\_h(v^n)(1 - h^n) - \beta\_h(v^n)h^n],\tag{9.36}$$

$$r^{n+1} = r^n + \Delta t[\alpha\_r(v^n)(1 - r^n) - \beta\_r(v^n)r^n],\tag{9.37}$$

where

$$\boldsymbol{v}^{n} = \begin{pmatrix} v\_1^n \\ v\_2^n \\ \vdots \\ v\_M^n \end{pmatrix}, \quad \boldsymbol{m}^n = \begin{pmatrix} m\_1^n \\ m\_2^n \\ \vdots \\ m\_M^n \end{pmatrix}, \quad \boldsymbol{h}^n = \begin{pmatrix} h\_1^n \\ h\_2^n \\ \vdots \\ h\_M^n \end{pmatrix}, \quad \boldsymbol{r}^n = \begin{pmatrix} r\_1^n \\ r\_2^n \\ \vdots \\ r\_M^n \end{pmatrix}, \tag{9.38}$$

the matrix *A* is defined by

$$A = \frac{\delta}{\Delta x^2} \begin{pmatrix} -1 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -1 \end{pmatrix},\tag{9.39}$$

and *I* is the *M* × *M* identity matrix.
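The matrix in (9.39) is simple to assemble directly. Below is a small Python sketch (our own illustration, not the code associated with these notes) that builds *A* and verifies that every row sums to zero, as required by the zero-flux boundary rows:

```python
import numpy as np

def cable_matrix(M, delta, dx):
    """Assemble the M x M matrix A of (9.39): a 1D Laplacian stencil with
    the Neumann boundary rows from (9.20) and (9.22), scaled by delta/dx^2."""
    A = np.zeros((M, M))
    for j in range(M):
        A[j, j] = -2.0
        if j > 0:
            A[j, j - 1] = 1.0
        if j < M - 1:
            A[j, j + 1] = 1.0
    A[0, 0] = -1.0    # left boundary row, cf. (9.20)
    A[-1, -1] = -1.0  # right boundary row, cf. (9.22)
    return (delta / dx**2) * A

A = cable_matrix(5, delta=0.001, dx=0.001)
print(A.sum(axis=1))  # every row sums to zero
```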

### **9.2.2 An Operator Splitting Scheme for the Cable Equation**

A potential disadvantage of the explicit scheme defined above is instability, as observed in Chapter 3 (see Section 3.4.3). In an attempt to avoid such issues, we therefore also define an operator splitting scheme for the cable equation, with an implicit treatment of the diffusion part of the system. In other words, we divide the system into two parts for every time step by first solving

$$C\_m \frac{\partial v}{\partial t} = \delta \frac{\partial^2 v}{\partial x^2}, \qquad \qquad \frac{\partial m}{\partial t} = 0, \qquad \qquad \frac{\partial h}{\partial t} = 0, \qquad \qquad \frac{\partial r}{\partial t} = 0,\tag{9.40}$$

using an implicit scheme and then solving

$$C\_m \frac{\partial v}{\partial t} = -I\_{\text{ion}},\tag{9.41}$$

$$\frac{\partial m}{\partial t} = \alpha\_m (1 - m) - \beta\_m m,\tag{9.42}$$

$$\frac{\partial h}{\partial t} = \alpha\_h (1 - h) - \beta\_h h,\tag{9.43}$$

$$\frac{\partial r}{\partial t} = \alpha\_r (1 - r) - \beta\_r r,\tag{9.44}$$

using an explicit scheme. In computational form, using the vectors v<sup>n</sup>, *m*<sup>n</sup>, *h*<sup>n</sup>, *r*<sup>n</sup> as defined in (9.38) and the matrix *A* as defined in (9.39), the scheme for the first step reads

$$\left(I - \frac{\Delta t}{C\_m}A\right)v^{n+1/2} = v^n, \quad m^{n+1/2} = m^n, \quad h^{n+1/2} = h^n, \quad r^{n+1/2} = r^n,\tag{9.45}$$

and the scheme for the second step reads

$$v^{n+1} = v^{n+1/2} - \frac{\Delta t}{C\_m} I\_{\rm ion}(v^{n+1/2}, m^{n+1/2}, h^{n+1/2}, r^{n+1/2}), \tag{9.46}$$

$$m^{n+1} = m^{n+1/2} + \Delta t[\alpha\_m(v^{n+1/2})(1 - m^{n+1/2}) - \beta\_m(v^{n+1/2})m^{n+1/2}],\tag{9.47}$$

$$h^{n+1} = h^{n+1/2} + \Delta t[\alpha\_h(v^{n+1/2})(1 - h^{n+1/2}) - \beta\_h(v^{n+1/2})h^{n+1/2}],\tag{9.48}$$

$$r^{n+1} = r^{n+1/2} + \Delta t[\alpha\_r(v^{n+1/2})(1 - r^{n+1/2}) - \beta\_r(v^{n+1/2})r^{n+1/2}].\tag{9.49}$$
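One step of the scheme (9.45)–(9.49) can be sketched as follows in Python (our own sketch, not the book's code; the membrane model is passed in as callables, so the Hodgkin-Huxley current and rate functions are assumed to be supplied by the caller):

```python
import numpy as np

def splitting_step(v, m, h, r, A, dt, Cm, I_ion, gates):
    """One operator splitting step, cf. (9.45)-(9.49).

    I_ion(v, m, h, r) and the rate-function pairs in gates["m"], gates["h"],
    gates["r"] come from the membrane model (e.g. Hodgkin-Huxley); here they
    are arbitrary callables supplied by the caller."""
    M = len(v)
    # Step 1, (9.45): implicit diffusion solve, (I - dt/Cm * A) v_half = v.
    v_half = np.linalg.solve(np.eye(M) - (dt / Cm) * A, v)
    # Step 2, (9.46)-(9.49): explicit update of v and the gating variables.
    v_new = v_half - (dt / Cm) * I_ion(v_half, m, h, r)
    (am, bm), (ah, bh), (ar, br) = gates["m"], gates["h"], gates["r"]
    m_new = m + dt * (am(v_half) * (1 - m) - bm(v_half) * m)
    h_new = h + dt * (ah(v_half) * (1 - h) - bh(v_half) * h)
    r_new = r + dt * (ar(v_half) * (1 - r) - br(v_half) * r)
    return v_new, m_new, h_new, r_new
```

Because the rows of *A* sum to zero, a spatially constant v is unchanged by the implicit diffusion solve, which gives a quick sanity check of an implementation.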

### **9.3 Numerical Computations**

We apply the two numerical schemes to a simple test case for the cable equation with membrane dynamics modeled by the Hodgkin-Huxley model. We consider a cell shaped as a rectangular cuboid with length *L* = 0.5 cm and width w = 0.001 cm, and we assume that σ<sub>i</sub> = 4 mS/cm. This gives δ = 0.001 mS (see (9.23)). Furthermore, the initial conditions are given by

$$v(0, x) = \begin{cases} -50 \text{ mV}, & \text{for } x \le 0.05 \text{ cm}, \\ -65 \text{ mV}, & \text{for } x > 0.05 \text{ cm}, \end{cases} \tag{9.50}$$

$$m(0, \mathbf{x}) = 0.1, \quad h(0, \mathbf{x}) = 0.6, \quad r(0, \mathbf{x}) = 0.3. \tag{9.51}$$

### **9.3.1 Stability**

In Fig. 9.2, we show the numerical solution of the problem found using the explicit scheme with ∆*x* = 0.001 cm and ∆*t* = 0.001 ms at some different points in time. We observe that already at the first time steps, we get unreasonable oscillations close to the discontinuity in the initial conditions (at *x* = 0.05 cm, see (9.50)).

**Fig. 9.2** Numerical solution of v in the cable equation with membrane dynamics modeled by the Hodgkin-Huxley model at five different points in time. The numerical solution is found using the explicit numerical scheme (9.34)–(9.37), with ∆x = 0.001 cm and ∆t = 0.001 ms. We have zoomed in on the solution at the leftmost part of the domain, and observe that we get unreasonable oscillations in the area close to the discontinuity of the initial conditions (x = 0.05 cm).

In Fig. 9.3, we have solved the system of equations using the explicit scheme with a smaller value of ∆*t* (∆*t* = 0.0002 ms). In this case, we avoid the oscillations, and we seem to get a reasonable traveling wave solution. However, because of the small time step, the numerical computations are quite slow. For this reason, we also try to solve the system using the operator splitting scheme described in Section 9.2.2. Using this scheme, we are able to use a time step of ∆*t* = 0.02 ms, and still get a solution that is very similar to the one that we got using the explicit scheme with a fine time step (compare the solid blue and dotted orange lines in Fig. 9.3). Even though we have to solve a linear system of equations in order to find the solution in the first step of the operator splitting scheme, the algorithm is able to find the solution much faster than the explicit scheme because of the large difference in the required time step. More specifically, for the laptop computer that we used to perform the computations, the operator splitting scheme was about 25 times faster than the explicit scheme.

**Fig. 9.3** Numerical solution of v in the cable equation with membrane dynamics modeled by the Hodgkin-Huxley model at five different points in time. The solution drawn with a solid line is found using the explicit numerical scheme (9.34)–(9.37), with ∆x = 0.001 cm and ∆t = 0.0002 ms, and the solution drawn with a dotted line is found using the operator splitting scheme described in Section 9.2.2, with ∆x = 0.001 cm and ∆t = 0.02 ms.

### **9.3.2 Conduction Velocity**

In Chapter 6, we considered a unitless version of the cable equation with membrane dynamics modeled by the FitzHugh-Nagumo model and observed how the conduction velocity, CV (i.e., the velocity with which the traveling wave moves through the domain), depended on two of the model parameters. In Fig. 9.4, we show the results of a similar experiment for the cable equation with membrane dynamics modeled by the Hodgkin-Huxley model. We vary the value of the three parameters g<sub>Na</sub>, g<sub>K</sub>, and g<sub>L</sub>, representing the maximal conductance densities of the three membrane currents of the Hodgkin-Huxley model (see (8.5)–(8.7)). In addition, we vary the value of the cell width, w, and the intracellular conductivity, σ<sub>i</sub>, used to define δ in the cable equation (see (9.23)).

We define the conduction velocity as

$$\text{CV} = \frac{x\_2 - x\_1}{t\_2 - t\_1},\tag{9.52}$$

**Fig. 9.4** Conduction velocity (CV) computed from the numerical solution of the cable equation as described in (9.52) for a few adjustments of parameters. The parameter values not explicitly stated on the x-axis are set to their default values. The numerical solution is found using the operator splitting scheme described in Section 9.2.2, with ∆x = 0.001 cm and ∆t = 0.02 ms.

where *x*<sub>1</sub> = 0.2 cm and *x*<sub>2</sub> = 0.4 cm. Furthermore, *t*<sub>1</sub> and *t*<sub>2</sub> are the first points in time when the value of v increases to v ≥ 0 mV in the spatial points *x*<sub>1</sub> and *x*<sub>2</sub>, respectively.
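The definition (9.52) is straightforward to implement. The sketch below is our own (the synthetic traveling front and all names are purely illustrative); it finds the first threshold crossings at *x*<sub>1</sub> and *x*<sub>2</sub>:

```python
import numpy as np

def conduction_velocity(t, x, v, x1, x2, threshold=0.0):
    """CV as defined in (9.52): v is a (num_times, num_points) array, and
    t1, t2 are the first times v >= threshold at the points closest to x1, x2."""
    j1 = int(np.argmin(np.abs(x - x1)))
    j2 = int(np.argmin(np.abs(x - x2)))
    t1 = t[np.argmax(v[:, j1] >= threshold)]  # argmax returns the first True
    t2 = t[np.argmax(v[:, j2] >= threshold)]
    return (x[j2] - x[j1]) / (t2 - t1)

# Synthetic front moving at 2 cm/ms (made-up data, just to exercise the function).
t = np.linspace(0.0, 1.0, 1001)
x = np.linspace(0.0, 1.0, 101)
v = np.where(x[None, :] < 2.0 * t[:, None], 20.0, -65.0)
print(conduction_velocity(t, x, v, 0.2, 0.4))  # ~2.0
```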

In Fig. 9.4, we observe that increasing the value of g<sub>Na</sub>, σ<sub>i</sub> or the cell width, w, significantly increases the conduction velocity. Increasing g<sub>L</sub> also increases the conduction velocity somewhat, whereas increasing g<sub>K</sub> decreases the conduction velocity.

### **9.4 Comments and Further Reading**

1. In the derivation of the cable equation given in this chapter, we assume that the extracellular potential is zero. However, if we instead assume that the extracellular potential is another constant, *C*, different from zero, we end up with the exact same formulation of the cable equation. In that case, we would have *u*<sub>i</sub><sup>j</sup> = v<sup>j</sup> + *C* (see (9.7)), and inserting this into (9.11), we get

$$\begin{split} C\_m \frac{\partial v^j}{\partial t} &= \delta \frac{v^{j-1} + C - 2(v^j + C) + v^{j+1} + C}{\Delta x^2} - I\_{\text{ion}} \\ &= \delta \frac{v^{j-1} - 2v^j + v^{j+1}}{\Delta x^2} - I\_{\text{ion}}, \end{split} \tag{9.53}$$

which is exactly the same as (9.12).


2. In the derivation of the cable equation, the membrane area of the cell ends was not taken into account. If the area of the end caps, *A*<sub>x</sub>, is included, the membrane area of a boundary segment of length ∆*x* becomes

$$A\_m = \Delta x\, \eta\_m + A\_x = \Delta x \left(\eta\_m + \frac{A\_x}{\Delta x}\right). \tag{9.54}$$

Inserting this into the discrete version of the cable equation, we get the adjusted value

$$\delta\_b = \frac{\sigma\_i A\_x}{\frac{A\_x}{\Delta x} + \eta\_m} \tag{9.55}$$

for the left and right boundaries. In the code associated with these notes, we have included an example simulation where we compare the solution of the system using this adjusted <sup>δ</sup><sup>b</sup> at the boundary to the case where the default <sup>δ</sup> is used everywhere. By running that code, we see that the solutions for these two cases are indistinguishable.


many equations. Finally, differential equations can be analyzed mathematically using tools that are harder to apply when the equations are discretized. But the major disadvantage of the formulation as a differential equation is that it is not straightforward to solve using computers – the form and popularity is inherited from a time where math was done using paper and pencils. An attempt to introduce partial differential equations using both continuous and discrete approaches can be found in [8].

### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 10 Spatial Models of Cardiac Electrophysiology**

In Chapter 8, we introduced mathematical models of the action potential across a cell membrane. Understanding the properties of the cell membrane is absolutely essential in order to understand the electrophysiology of excitable cells. However, some essential properties can only be studied in spatially resolved models; i.e., in models representing spatial variation across a single cell or a collection of cells.

Every heartbeat is based on an electrochemical wave traversing the whole cardiac muscle. Strong perturbations of these waves are referred to as arrhythmias. These disturbances can seriously disrupt the contraction of the heart muscle and are therefore very dangerous and potentially lethal. Cardiac fibrillation refers to a state of the heart where the contraction is completely unsynchronized, leading to severely reduced pumping function. In mathematical modeling, fibrillation can only be studied in spatially resolved models, and hence the membrane models introduced above are inadequate. But these models are excellent building blocks in models representing collections of cells.

A first step towards spatially resolved modeling was presented in Chapter 9, where we discussed the cable equation. This model is often used to study a strand of cardiac cells, but since the model is inherently one-dimensional, it has limited relevance for complex spatial phenomena like cardiac fibrillation. Here, we will present the celebrated bidomain model. It is considered the gold standard of mathematical models of cardiac electrophysiology and dates back 50 years. From the bidomain model it is easy to derive the somewhat simpler monodomain model. This model is easier to deal with in terms of numerical solution and is therefore frequently used as a replacement for the more correct bidomain model.

Here we will just present the models and show that the techniques we have derived above can be used to obtain numerical solutions of both the monodomain model and the bidomain model. In the notes below we will point to literature where the models are derived, and we will point to better numerical methods and to applications of the models. Again, we will just consider the finite difference method in order to keep things as simple as possible (but not simpler) and therefore we will use two-dimensional squares as computational domains. However, excellent open-source software tools are available for numerical simulations based on the bidomain and monodomain models using the finite element method. Using the finite element method, more realistic geometries can be applied.

### **10.1 First, the Diffusion Equation, Again**

For purely technical and notational reasons, we will first briefly consider the two-dimensional diffusion equation. This is helpful in order to understand the mesh, the matrices and the vectors that we use to solve the bidomain and monodomain equations below.

Consider the following initial and boundary value problem (unitless),

$$\frac{\partial u(t, \mathbf{x}, \mathbf{y})}{\partial t} = \sigma \frac{\partial^2 u(t, \mathbf{x}, \mathbf{y})}{\partial \mathbf{x}^2} + \sigma \frac{\partial^2 u(t, \mathbf{x}, \mathbf{y})}{\partial \mathbf{y}^2},\tag{10.1}$$

for <sup>Ω</sup> : (*x*, <sup>y</sup>) ∈ (0, <sup>1</sup>) × (0, <sup>1</sup>) and for *<sup>t</sup>* <sup>≤</sup> *<sup>T</sup>*. We let ∂<sup>Ω</sup> denote the boundary of the computational domain, <sup>Ω</sup>. At the boundary, ∂Ω, we use the Neumann boundary conditions

$$\frac{\partial u(t,0,y)}{\partial x} = 0, \quad \frac{\partial u(t,1,y)}{\partial x} = 0, \quad \frac{\partial u(t,x,0)}{\partial y} = 0, \quad \frac{\partial u(t,x,1)}{\partial y} = 0,\tag{10.2}$$

and, in addition, we define the initial condition

$$
u(0, x, y) = u^0(x, y),
\tag{10.3}
$$

where *u*<sup>0</sup>(*x*, y) is a given function. In (10.1), σ denotes a strictly positive (given) constant. In order to derive a numerical scheme for this problem, we proceed as usual by replacing derivatives by differences. By using the difference approximations introduced in Chapter 3, we get the following scheme<sup>1</sup>,

$$\frac{u\_{k,j}^{n+1} - u\_{k,j}^n}{\Delta t} = \sigma \frac{u\_{k-1,j}^n - 2u\_{k,j}^n + u\_{k+1,j}^n}{\Delta x^2} + \sigma \frac{u\_{k,j-1}^n - 2u\_{k,j}^n + u\_{k,j+1}^n}{\Delta y^2}. \tag{10.4}$$

Here, *u*<sup>n</sup><sub>k,j</sub> denotes an approximation of *u*(*t*<sub>n</sub>, *x*<sub>k</sub>, y<sub>j</sub>), where *t*<sub>n</sub> = *n*∆*t*, *x*<sub>k</sub> = (*k* − 1)∆*x* and y<sub>j</sub> = (*j* − 1)∆y, with ∆*x* = 1/(*M*<sub>x</sub> − 1) and ∆y = 1/(*M*<sub>y</sub> − 1) for sufficiently large integers *M*<sub>x</sub> and *M*<sub>y</sub>.

We noted above (see page 43) that it is quite useful to formulate these numerical schemes using matrix/vector notation. However, here the unknowns are given by *u* n k,j and thus it is not straightforward to use such a notation. We therefore introduce a one-dimensional numbering of the two-dimensional problem. Specifically, we define the new index *i* = *M*x(*j* − 1) + *k*. Then, any two-dimensional vector *z* with components {*z*k,<sup>j</sup> } can be rewritten as a one-dimensional vector *<sup>z</sup>*<sup>i</sup> where *<sup>i</sup>* runs from 1 to *M* = *M*<sup>x</sup> × *M*y. The components of the one-dimensional vector are given

<sup>1</sup> It may be useful to note the similarity with the one-dimensional case; see (3.13) at page 23.

by *z*<sub>M<sub>x</sub>(j−1)+k</sub>, where *k* runs from 1 to *M*<sub>x</sub> and *j* runs from 1 to *M*<sub>y</sub>. Using this numbering, we can rewrite the scheme (10.4) as follows,

$$\frac{u\_{i}^{n+1} - u\_{i}^{n}}{\Delta t} = \sigma \frac{u\_{i-1}^{n} - 2u\_{i}^{n} + u\_{i+1}^{n}}{\Delta x^{2}} + \sigma \frac{u\_{i-M\_{x}}^{n} - 2u\_{i}^{n} + u\_{i+M\_{x}}^{n}}{\Delta y^{2}}.\tag{10.5}$$

By defining ρ<sub>x</sub> = σ/∆*x*<sup>2</sup> and ρ<sub>y</sub> = σ/∆y<sup>2</sup>, we can rewrite the scheme as follows,

$$u\_{i}^{n+1} = u\_{i}^{n} + \Delta t \left[ \rho\_x (u\_{i-1}^{n} + u\_{i+1}^{n}) - 2(\rho\_x + \rho\_y)u\_{i}^{n} + \rho\_y (u\_{i-M\_x}^{n} + u\_{i+M\_x}^{n}) \right]. \tag{10.6}$$

Now, we can define an *M* × *M* matrix *A* where non-zero elements of a typical row *i* in the matrix are given by

$$a\_{i,i-M\_x} = \rho\_y,\tag{10.7}$$

$$a\_{i,i-1} = \rho\_x,\tag{10.8}$$

$$a\_{i,i} = -2(\rho\_x + \rho\_y),\tag{10.9}$$

$$a\_{i,i+1} = \rho\_x,\tag{10.10}$$

$$a\_{i,i+M\_x} = \rho\_y.\tag{10.11}$$

When the index *i* corresponds to a boundary point or to a corner point in the mesh, there are exceptions. The detailed definition of all the elements of the matrix is found in the software associated with these notes, and a figure showing the structure of the matrix is depicted in Fig. 10.1 in the case ∆*x* = ∆y = 0.2 and σ = 0.04, which gives ρ<sub>x</sub> = ρ<sub>y</sub> = 1, *M*<sub>x</sub> = *M*<sub>y</sub> = 6, and *M* = *M*<sub>x</sub>*M*<sub>y</sub> = 36.
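A direct way to assemble the matrix, including the boundary exceptions, is to clamp out-of-range neighbour indices back to the boundary node, which reproduces the one-sided Neumann stencil of the 1D matrix (9.39). This is our own sketch; the software associated with these notes may handle the boundary rows differently:

```python
import numpy as np

def diffusion_matrix_2d(Mx, My, rho_x, rho_y):
    """Assemble the M x M matrix with typical rows (10.7)-(10.11), using the
    0-based version of the numbering i = Mx*(j - 1) + k. At boundary rows the
    out-of-range neighbour index is clamped to the boundary node (Neumann)."""
    M = Mx * My
    A = np.zeros((M, M))
    for j in range(My):
        for k in range(Mx):
            i = Mx * j + k
            for kk in (k - 1, k + 1):  # x-neighbours: columns i -/+ 1
                A[i, Mx * j + min(max(kk, 0), Mx - 1)] += rho_x
            for jj in (j - 1, j + 1):  # y-neighbours: columns i -/+ Mx
                A[i, Mx * min(max(jj, 0), My - 1) + k] += rho_y
            A[i, i] -= 2.0 * (rho_x + rho_y)
    return A

A = diffusion_matrix_2d(3, 4, 1.0, 2.0)
print(A.shape)  # (12, 12)
```

For large meshes, a sparse matrix format would be preferable, but a dense array keeps the sketch short.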

With this notation at hand, we can write the scheme (10.6) in the convenient form


$$
u^{n+1} = (I + \Delta t A)u^n. \tag{10.12}
$$

Furthermore, we can define the implicit scheme,

$$\frac{u^{n+1} - u^n}{\Delta t} = Au^{n+1},\tag{10.13}$$

which can be rewritten as,

$$(I - \Delta t A)u^{n+1} = u^n,\tag{10.14}$$

where, as usual, *I* denotes the identity matrix. We can also define the second order midpoint scheme as follows,

$$\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2}A(u^n + u^{n+1}),\tag{10.15}$$

or

$$\left(I - \frac{\Delta t}{2}A\right)u^{n+1} = \left(I + \frac{\Delta t}{2}A\right)u^n. \tag{10.16}$$
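The three schemes (10.12), (10.14) and (10.16) can be collected in a single function; the sketch below is our own, not the code associated with these notes. Because the rows of *A* sum to zero for the Neumann problem, a spatially constant *u* is a steady state of all three schemes, which gives a quick sanity check:

```python
import numpy as np

def diffusion_step(u, A, dt, scheme):
    """Advance u one time step with the explicit scheme (10.12), the implicit
    scheme (10.14), or the midpoint scheme (10.16)."""
    I = np.eye(len(u))
    if scheme == "explicit":
        return (I + dt * A) @ u
    if scheme == "implicit":
        return np.linalg.solve(I - dt * A, u)
    if scheme == "midpoint":
        return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)
    raise ValueError(f"unknown scheme: {scheme}")
```

For large ∆*t*, only the implicit and midpoint variants remain stable, which is the motivation for (10.14) and (10.16).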

### **10.2 The Bidomain Model**

The bidomain model with membrane dynamics modeled by the parsimonious ventricular rabbit model [6] can be written in the form

$$\chi \left( C\_m \frac{\partial v}{\partial t} + I\_{\rm ion}(v, m, h) \right) = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot (\sigma\_i \nabla u\_e), \tag{10.17}$$

$$0 = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot ((\sigma\_i + \sigma\_e) \nabla u\_e), \tag{10.18}$$

$$\frac{dm}{dt} = \frac{m\_{\infty} - m}{\tau\_m},\tag{10.19}$$

$$\frac{dh}{dt} = \frac{h\_{\infty} - h}{\tau\_h}.\tag{10.20}$$

We consider this system for Ω : (*x*, y) ∈ (0, *L*) × (0, *L*) and for *t* ≤ *T*. The spatial coordinate is given in centimeters (cm) and time is given in milliseconds (ms). The unknown functions v and *u*<sub>e</sub> are the membrane potential and the extracellular potential, respectively (in units of mV). Note that v = *u*<sub>i</sub> − *u*<sub>e</sub>, where *u*<sub>i</sub> is the intracellular potential. We can use any pair of variables among v, *u*<sub>i</sub> and *u*<sub>e</sub> as primary variables, but it is most common to use v and *u*<sub>e</sub>. The bidomain model can be derived by assuming that the membrane potential, v, the intracellular potential, *u*<sub>i</sub>, and the extracellular potential, *u*<sub>e</sub>, are all defined everywhere in the domain and following the same steps as those used to derive the cable equation (see Chapter 9) without the assumption of *u*<sub>e</sub> = 0 (see Comment 2 in Section 9.4).

In the bidomain model, σ<sub>i</sub> and σ<sub>e</sub> denote the conductivities of the intracellular and extracellular spaces, respectively. The conductivities are tensors, allowing the


**Table 10.1** Parameter values used in the bidomain model simulations.

conductivity to vary according to the spatial directions. Specifically, we have

$$
\sigma\_e = \begin{pmatrix} \sigma\_{e,x} & 0 \\ 0 & \sigma\_{e,y} \end{pmatrix}, \qquad \qquad \sigma\_i = \begin{pmatrix} \sigma\_{i,x} & 0 \\ 0 & \sigma\_{i,y} \end{pmatrix}.
$$

In general, the conductivities can vary in space, but here we will assume that they are constant. Furthermore, *C*<sub>m</sub> is the specific membrane capacitance, and χ denotes the surface-to-volume ratio of the cell membrane. The sum of the ion current densities across the membrane is given by

$$I\_{\rm ion}(v, m, h) = I\_{\rm Na}(v, m, h) + I\_{\rm K}(v) + I\_{\rm stim}.\tag{10.21}$$

The specific formulations for the current densities *I*<sub>Na</sub> and *I*<sub>K</sub> are as specified in Chapter 8 (see page 71), and so are the associated gating functions *m*<sub>∞</sub>, *h*<sub>∞</sub>, τ<sub>m</sub> and τ<sub>h</sub>. The current density *I*<sub>stim</sub> represents a stimulus used to start the electrical wave. It is given by

$$I\_{\rm stim}(t, \mathbf{x}, \mathbf{y}) = \begin{cases} a\_{\rm stim}, & \text{if } t \ge t\_{\rm stim} \text{ and } t \le t\_{\rm stim} + d\_{\rm stim}, \\ & \text{and } \sqrt{\mathbf{x}^2 + \mathbf{y}^2} \le l\_{\rm stim}, \\ 0, & \text{otherwise,} \end{cases} \tag{10.22}$$

where *a*<sub>stim</sub> = −25 µA/cm<sup>2</sup>, *t*<sub>stim</sub> = 0 ms, *d*<sub>stim</sub> = 2 ms, and *l*<sub>stim</sub> = 0.25 cm. The initial conditions are as specified for the parsimonious rabbit model in Chapter 8 (see page 72), and, for simplicity, we use the boundary conditions

$$
u\_e(t, 0, y) = u\_e(t, L\_x, y) = u\_e(t, x, 0) = u\_e(t, x, L\_y) = 0,\tag{10.23}
$$

$$\frac{\partial u\_i(t, 0, \mathbf{y})}{\partial \mathbf{x}} = 0, \quad \frac{\partial u\_i(t, L\_\mathbf{x}, \mathbf{y})}{\partial \mathbf{x}} = 0, \quad \frac{\partial u\_i(t, \mathbf{x}, 0)}{\partial \mathbf{y}} = 0, \quad \frac{\partial u\_i(t, \mathbf{x}, L\_\mathbf{y})}{\partial \mathbf{y}} = 0. \tag{10.24}$$

### **10.2.1 Operator Splitting for the Bidomain Model**

In order to solve the somewhat intimidating system (10.17)–(10.20) we use, more or less, all the tricks we have introduced above. The main technique, however, is to break the complex problem into parts that we are able to deal with. The first step along that path is to apply operator splitting. We start by assuming that the complete solution vector (v, *u*<sub>e</sub>, *m*, *h*) is known at time *t* = *t*<sub>n</sub> = *n*∆*t*, and we want to compute the solution at time *t* = *t*<sub>n+1</sub>. We do this in two steps. First we solve the following system of ordinary differential equations,

$$C\_m \frac{\partial v}{\partial t} = -I\_{\rm ion}(v, m, h),\tag{10.25}$$

$$\frac{dm}{dt} = \frac{m\_{\infty} - m}{\tau\_m},\tag{10.26}$$

$$\frac{dh}{dt} = \frac{h\_{\infty} - h}{\tau\_h}.\tag{10.27}$$

By solving these equations with *t* ranging from *t*<sub>n</sub> to *t*<sub>n+1</sub>, we obtain a solution that we denote (v, *u*<sub>e</sub>, *m*, *h*)<sup>n+1/2</sup>. In the next step, we solve the spatial part of the equations,

$$
\chi \mathbf{C}\_m \frac{\partial v}{\partial t} = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot (\sigma\_i \nabla u\_e), \tag{10.28}
$$

$$
0 = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot ((\sigma\_i + \sigma\_e) \nabla u\_e),\tag{10.29}
$$

$$\frac{dm}{dt} = 0,\tag{10.30}$$

$$\frac{dh}{dt} = 0,\tag{10.31}$$

where the initial conditions at time *t* = *t*<sub>n</sub> are given by (v, *u*<sub>e</sub>, *m*, *h*)<sup>n+1/2</sup>. Here, we immediately note that *m*<sup>n+1</sup> = *m*<sup>n+1/2</sup> and *h*<sup>n+1</sup> = *h*<sup>n+1/2</sup>. In order to compute v and *u*<sub>e</sub> at *t*<sub>n+1</sub>, we need to solve the linear system,

$$
\chi C\_m \frac{\partial v}{\partial t} = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot (\sigma\_i \nabla u\_e), \tag{10.32}
$$

$$
0 = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot ((\sigma\_i + \sigma\_e) \nabla u\_e),\tag{10.33}
$$

where the initial condition for v is given by v<sup>n+1/2</sup>.

### **10.2.2 Finite Difference Approximation**

The task at hand is now to solve the system of ordinary differential equations given by (10.25)–(10.27) and then solve the partial differential equations given by (10.32)–(10.33). In order to do this we introduce a mesh as above. Specifically<sup>2</sup>, we let *u*<sup>n</sup><sub>k,j</sub> and v<sup>n</sup><sub>k,j</sub> denote approximations of *u*(*t*<sub>n</sub>, *x*<sub>k</sub>, y<sub>j</sub>) and v(*t*<sub>n</sub>, *x*<sub>k</sub>, y<sub>j</sub>), where *t*<sub>n</sub> = *n*∆*t*, *x*<sub>k</sub> = (*k* − 1)∆*x* and y<sub>j</sub> = (*j* − 1)∆y, with ∆*x* = *L*<sub>x</sub>/(*M*<sub>x</sub> − 1), ∆y = *L*<sub>y</sub>/(*M*<sub>y</sub> − 1) and ∆*t* = *T*/*M*<sub>t</sub> for sufficiently large integers *M*<sub>x</sub>, *M*<sub>y</sub> and *M*<sub>t</sub>. Again, we use the mapping from a two-dimensional notation to a one-dimensional vector notation by defining *i* = *M*<sub>x</sub>(*j* − 1) + *k*.

### **Explicit ODE Step**

By using this notation, we can write an explicit scheme3 for solving the ordinary differential equations given by (10.25)–(10.27) as follows,

$$v\_{i}^{n+1/2} = v\_{i}^{n} - \frac{\Delta t}{C\_{m}} [I\_{\text{Na}}(v\_{i}^{n}, m\_{i}^{n}, h\_{i}^{n}) + I\_{\text{K}}(v\_{i}^{n}) + I\_{\text{stim}}(t\_{n}, \mathbf{x}\_{i}, \mathbf{y}\_{i})],\tag{10.34}$$

$$m\_i^{n+1/2} = m\_i^n + \Delta t \frac{m\_\infty(v\_i^n) - m\_i^n}{\tau\_m(v\_i^n)},\tag{10.35}$$

$$h\_i^{n+1/2} = h\_i^n + \Delta t \frac{h\_\infty(v\_i^n) - h\_i^n}{\tau\_h(v\_i^n)},\tag{10.36}$$

where 1 ≤ *i* ≤ *M*<sub>x</sub> × *M*<sub>y</sub>, and *n* = 0, . . . , *M*<sub>t</sub> − 1.

### **Implicit PDE Step**

In the implicit PDE step we first note that *m*<sup>n+1</sup> = *m*<sup>n+1/2</sup> and *h*<sup>n+1</sup> = *h*<sup>n+1/2</sup>. For the system of PDEs given by (10.32)–(10.33) we use the same discretization as we used in (10.4) for the diffusion equation. This leads to the definition of two matrices, *A*i and *A*e, with typical elements given by (10.7)–(10.11). For *A*e, the typical elements of the matrix are defined by using ρx = σe/∆*x*<sup>2</sup> and ρy = σe/∆y<sup>2</sup> in (10.7)–(10.11). Similarly, the typical elements of *A*i are defined by using ρx = σi/∆*x*<sup>2</sup> and ρy = σi/∆y<sup>2</sup> in (10.7)–(10.11). With this notation, we are ready to define the following scheme for the spatial part of the bidomain equations,

$$\chi C\_m \frac{v^{n+1} - v^{n+1/2}}{\Delta t} = A\_i v^{n+1} + A\_i u^{n+1},\tag{10.37}$$

$$0 = A\_i v^{n+1} + (A\_i + A\_e) u^{n+1}.\tag{10.38}$$

This can be rewritten as a block matrix-vector system as follows,

<sup>2</sup> We replace *u*e by *u* in order to reduce the load of subscripts.

<sup>3</sup> This is exactly the same scheme as we used for the parsimonious rabbit model in Section 8.2, only that we here need to solve the ODE system in all the computational nodes.


$$
\begin{bmatrix}
I - \frac{\Delta t}{\chi C\_m} A\_i & -\frac{\Delta t}{\chi C\_m} A\_i \\
A\_i & A\_i + A\_e
\end{bmatrix}
\begin{bmatrix}
v^{n+1} \\
u^{n+1}
\end{bmatrix} = \begin{bmatrix}
v^{n+1/2} \\
0
\end{bmatrix}.
\tag{10.39}
$$
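Assembling and solving this block system can be sketched with SciPy's sparse tools. The matrices below are simple Dirichlet-type five-point Laplacian stand-ins for *A*i and *A*e (the actual entries, including the boundary modifications, follow (10.7)–(10.11)), and all parameter values are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

Mx, My = 8, 8
dx = dy = 0.02                         # cm
chi, Cm, dt = 1400.0, 1.0, 0.1         # illustrative values
sigma_i, sigma_e = 3.0, 10.0           # illustrative conductivities (mS/cm)

def laplacian(sigma):
    """Stand-in five-point Laplacian scaled by the conductivity."""
    Lx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(Mx, Mx)) * (sigma / dx**2)
    Ly = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(My, My)) * (sigma / dy**2)
    return sp.kron(sp.eye(My), Lx) + sp.kron(Ly, sp.eye(Mx))

Ai, Ae = laplacian(sigma_i), laplacian(sigma_e)
N = Mx * My
I = sp.eye(N)
a = dt / (chi * Cm)

# The block system (10.39) with unknowns [v^{n+1}; u^{n+1}].
A = sp.bmat([[I - a * Ai, -a * Ai],
             [Ai, Ai + Ae]], format="csc")

v_half = -80.0 * np.ones(N)
v_half[:Mx] = 0.0                      # stimulated strip of nodes
rhs = np.concatenate([v_half, np.zeros(N)])
sol = spla.spsolve(A, rhs)
v_new, u_new = sol[:N], sol[N:]
```

In a time loop, the sparse LU factorization of `A` can be computed once (with `spla.splu`) and reused in every step, since the matrix does not change.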

### **10.2.3 Traveling Wave Solution of the Bidomain Model**

In Fig. 10.2, we show the solution of the numerical scheme for the bidomain model described above, with parameter values specified in Table 10.1 and membrane dynamics modeled by the parsimonious rabbit model described in Chapter 8. The upper panel shows the membrane potential, v, at four different points in time, and the lower panel shows the extracellular potential, *u*e, at the same time points. We observe a traveling wave solution initiated by the stimulus current applied in the lower left corner of the domain, leading to an increased value of v. This increase in v gradually spreads through the domain, and at *t* = 20 ms the wave has traveled almost all the way to the opposite corner of the domain.

From this traveling wave, we can compute the conduction velocity, for example, by

$$\text{CV} = \frac{\sqrt{(x\_2 - x\_1)^2 + (y\_2 - y\_1)^2}}{t\_2 - t\_1},\tag{10.40}$$

where *x*1 = y1 = 0.4 cm and *x*2 = y2 = 0.8 cm. Furthermore, *t*1 and *t*2 are the points in time when the value of v first increases to a value v ≥ −20 mV in the spatial points (*x*1, y1) and (*x*2, y2), respectively. We find that the conduction velocity in this default case is about 54 cm/s. In Fig. 10.3, we investigate how the conduction velocity depends on the conductivity parameters, σi and σe, and we find that the conduction velocity increases if σi or σe is increased.
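This computation is easy to sketch in code. The activation times below are synthetic stand-ins for values recorded from a simulation, chosen so that the result is close to the 54 cm/s reported above:

```python
import numpy as np

def conduction_velocity(p1, p2, t1, t2):
    """CV as in (10.40): distance between p1 and p2 over the activation delay."""
    (x1, y1), (x2, y2) = p1, p2
    return np.sqrt((x2 - x1)**2 + (y2 - y1)**2) / (t2 - t1)

def first_crossing(times, v_trace, threshold=-20.0):
    """First time a recorded trace v(t) reaches v >= threshold.

    Assumes the trace actually crosses the threshold at some point.
    """
    idx = np.argmax(np.asarray(v_trace) >= threshold)
    return times[idx]

# Synthetic example: (0.4, 0.4) cm activates at t1 = 5 ms and
# (0.8, 0.8) cm at t2 = 15.5 ms, giving a CV close to 54 cm/s.
cv = conduction_velocity((0.4, 0.4), (0.8, 0.8), 5.0, 15.5)
print(1000 * cv)  # convert cm/ms to cm/s
```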

**Fig. 10.2** Solution of the bidomain model at four different points in time. The upper panel shows the membrane potential, v, and the lower panel shows the extracellular potential, *u*e. A traveling wave solution is initiated by stimulating the cells in the lower left corner.

**Fig. 10.3** Conduction velocity computed from the solution of the bidomain model for some different values of σi and σe. In the left plot, σe is fixed at the value specified in Table 10.1, and in the right plot σi is fixed at the value specified in Table 10.1.

### **10.3 The Monodomain Model**

As mentioned above, the bidomain model is often referred to as the gold standard for computational cardiac electrophysiology. But in many cases, results of relevant accuracy can be achieved by using the simpler monodomain model. Here we will show that, under one specific condition, the two models actually give the same results. We can show this by considering the linear PDE that needs to be solved in each time step; see (10.32) and (10.33) above. We assume that the conductivities are constant in space, and that they are related as follows,

$$
\begin{pmatrix}
\sigma\_{e,x} & 0 \\
0 & \sigma\_{e,y}
\end{pmatrix} = \lambda \begin{pmatrix}
\sigma\_{i,x} & 0 \\
0 & \sigma\_{i,y}
\end{pmatrix},
\tag{10.41}
$$

where λ is a positive constant. By using this assumption, we note that the second equation of the system

$$
\chi C\_m \frac{\partial v}{\partial t} = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot (\sigma\_i \nabla u\_e), \tag{10.42}
$$

$$
0 = \nabla \cdot (\sigma\_i \nabla v) + \nabla \cdot ((\sigma\_i + \sigma\_e) \nabla u\_e), \tag{10.43}
$$


can be rewritten as follows,

$$0 = \nabla \cdot (\sigma\_i \nabla v) + (1 + \lambda) \nabla \cdot (\sigma\_i \nabla u\_e),\tag{10.44}$$

and therefore,

$$\nabla \cdot (\sigma\_i \nabla u\_e) = -\frac{1}{1+\lambda} \nabla \cdot (\sigma\_i \nabla v). \tag{10.45}$$

By inserting (10.45) into (10.42), we can remove *u*e from this equation and obtain the following scalar equation (only v is unknown here),


$$
\chi C\_m \frac{\partial v}{\partial t} = \frac{\lambda}{1+\lambda} \nabla \cdot (\sigma\_i \nabla v). \tag{10.46}
$$

### **10.3.1 Operator Splitting for the Monodomain System**

In the case of the bidomain model, operator splitting involved alternating solution of the ODEs by the scheme (10.34)–(10.36) and the PDEs by solving the linear system (10.37) and (10.38). For the monodomain case, the PDE part can be simplified considerably by solving the system

$$
\chi C\_m \frac{v^{n+1} - v^{n+1/2}}{\Delta t} = \frac{\lambda}{1 + \lambda} A\_i v^{n+1}, \tag{10.47}
$$

which can be written in the computational form

$$(I - \gamma \Delta t A\_i)v^{n+1} = v^{n+1/2},\tag{10.49}$$


where

$$\gamma = \frac{\lambda}{(1+\lambda)\chi C\_m}.$$
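The monodomain PDE step thus amounts to a single implicit solve per time step. A sketch in Python, using a one-dimensional Dirichlet-type Laplacian as a stand-in for *A*i and illustrative parameter values:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, dx, dt = 100, 0.02, 0.1             # nodes, cm, ms (illustrative)
chi, Cm, sigma_i, lam = 1400.0, 1.0, 3.0, 3.0

gamma = lam / ((1 + lam) * chi * Cm)

# Stand-in 1D Laplacian; the actual scheme uses the 2D matrix of (10.7)-(10.11).
Ai = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) * (sigma_i / dx**2)
M = (sp.eye(N) - gamma * dt * Ai).tocsc()

v_half = -80.0 * np.ones(N)
v_half[:10] = 0.0                      # stimulated region
v_new = spla.spsolve(M, v_half)        # the solve (10.49)
```

As for the bidomain system, the factorization of `M` can be computed once with `spla.splu` and reused in every time step. Note that the monodomain step involves one scalar system of size *N*, whereas the bidomain step (10.39) involves a coupled system of size 2*N*.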

### **10.4 Comments and Further Reading**


The block system (10.39) is not symmetric as it stands, but it can be symmetrized: multiplying the second block equation by −∆*t*/(χ*C*m) gives

$$
\begin{bmatrix}
I - \frac{\Delta t}{\chi C\_m} A\_i & -\frac{\Delta t}{\chi C\_m} A\_i \\
-\frac{\Delta t}{\chi C\_m} A\_i & -\frac{\Delta t}{\chi C\_m} (A\_i + A\_e)
\end{bmatrix}
\begin{bmatrix}
v^{n+1} \\
u^{n+1}
\end{bmatrix} = \begin{bmatrix}
v^{n+1/2} \\
0
\end{bmatrix}.
\tag{10.51}
$$

This system is symmetric and positive definite and therefore fast solvers can be applied; see, e.g., [2, 4, 7, 8, 10, 11, 14, 15, 16, 17].


### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 11 Re-Introducing the Cell: The Extracellular-Membrane-Intracellular (EMI) Model**

As mentioned earlier, the bidomain system is currently the standard mathematical model of cardiac electrophysiology. This system is now routinely solved and provides valuable insights into the conduction of electrical signals in cardiac tissue. However, the model has one glaring limitation: The cardiomyocyte is nowhere to be found in the model, since the extracellular space, the intracellular space and the cell membrane are all assumed to be everywhere in the computational domain. The cell was lost in homogenization! There is a tremendous advantage to this because the model becomes much simpler and thus solvable for the whole human heart. And it works! But the downside is of course that the cell is the essential building block of the tissue and leaving it out of the model has consequences. For instance, it becomes impossible to investigate the detailed dynamics of the electrochemical processes in the vicinity of a small collection of cells.

Here, we will present an alternative cell-based model. The model represents the extracellular (E) domain, the cell membrane (M) and the intracellular (I) domain explicitly and it is therefore referred to as the EMI model. The main advantage of this model is that it becomes feasible to represent the cell and the cell membrane in a much more detailed manner. For instance, it is possible to study both the effect of varying ion channel density across the cell membrane and cell-to-cell variations of properties in the model. But it comes with a stiff price: both implementation and computing efforts are much more demanding than for the bidomain model.

Here, we will present the EMI model and observe that, again, we can come up with a reasonable solution method by applying operator splitting and replacing derivatives with differences. We will present a case that is as simple as possible but also give references to more challenging applications.

**Fig. 11.1** Illustration of the different parts of the domain in the EMI model. A cell, Ωi, is surrounded by an extracellular space, Ωe. The interface between Ωi and Ωe defines the cell membrane, Γ. Note that all the EMI model simulations are performed in 3D.

### **11.1 The EMI Model for One Cell Surrounded by an Extracellular Space**

The system of partial differential equations forming the EMI model for a single cell surrounded by an extracellular space, as illustrated in Fig. 11.1, is given by (see, e.g., [1, 2, 14, 41]):

$$\begin{aligned} \nabla \cdot \sigma\_i \nabla u\_i &= 0, & \text{in } \Omega\_i, & (11.1) \\ \nabla \cdot \sigma\_e \nabla u\_e &= 0, & \text{in } \Omega\_e, & (11.2) \\ u\_e &= 0, & \text{at } \partial \Omega\_e, & (11.3) \\ \mathbf{n}\_e \cdot \sigma\_e \nabla u\_e &= -\mathbf{n}\_i \cdot \sigma\_i \nabla u\_i, & \text{at } \Gamma, & (11.4) \\ u\_i - u\_e &= v, & \text{at } \Gamma, & (11.5) \\ I\_m &= -\mathbf{n}\_i \cdot \sigma\_i \nabla u\_i, & \text{at } \Gamma, & (11.6) \\ \frac{\partial v}{\partial t} &= \frac{1}{C\_m}(I\_m - I\_{\text{ion}}), & \text{at } \Gamma. & (11.7) \end{aligned}$$

Here, the unknown variables to be found are the intracellular potential, *u*i, the extracellular potential, *u*e, and the membrane potential, v, all given in units of millivolts (mV). The intracellular potential is defined in the intracellular space, Ωi, the extracellular potential is defined in the extracellular space, Ωe, and the membrane potential is defined at the membrane, Γ, defined as the interface between Ωi and Ωe. The outer boundary of the extracellular space is denoted by ∂Ωe. Time is given in milliseconds (ms) and distance is specified in centimeters (cm). Furthermore, σi is the intracellular conductivity (in mS/cm), σe is the extracellular conductivity (in mS/cm), *C*m is the specific membrane capacitance (in µF/cm<sup>2</sup>), and **n**i and **n**e are the outward pointing unit normal vectors of Ωi and Ωe, respectively. As in the membrane models in Chapter 8, the cable equation in Chapter 9 and the bidomain and monodomain models in Chapter 10, the term *I*ion represents the current density (in µA/cm<sup>2</sup>) through ion channels on the cell membrane. These current densities can, for example, be modeled by the Hodgkin-Huxley model described in Chapter 8. In that case,

$$I\_{\rm ion} = I\_{\rm Na} + I\_{\rm K} + I\_{\rm L},\tag{11.8}$$

and we get the following extra equations for the state variables defined at the membrane, Γ:

$$\begin{aligned} \frac{\partial m}{\partial t} &= \alpha\_m (1 - m) - \beta\_m m, & (11.9) \\ \frac{\partial h}{\partial t} &= \alpha\_h (1 - h) - \beta\_h h, & (11.10) \\ \frac{\partial r}{\partial t} &= \alpha\_r (1 - r) - \beta\_r r. & (11.11) \end{aligned}$$

### **11.1.1 Numerical Scheme for the EMI Model**

As observed in the previous chapters, a numerical finite difference scheme for the EMI model can be defined by applying operator splitting and replacing derivatives with differences. We define a scheme where, for each time step, *n*, there is one unknown, *u*e<sup>n</sup>, for all mesh points in the extracellular space and one unknown, *u*i<sup>n</sup>, for each intracellular point. In addition, for the mesh points located on the membrane, there are six unknowns: *u*i<sup>n</sup>, *u*e<sup>n</sup>, v<sup>n</sup>, *m*<sup>n</sup>, *h*<sup>n</sup>, and *r*<sup>n</sup>.

### **Operator Splitting for the EMI Model**

We define an operator splitting scheme for the EMI model where for each time step we first solve the nonlinear ordinary differential equation part of the system

$$\begin{aligned} \frac{\partial v}{\partial t} &= -\frac{1}{C\_m} I\_{\text{ion}}, & (11.12) \\ \frac{\partial m}{\partial t} &= \alpha\_m (1 - m) - \beta\_m m, & (11.13) \\ \frac{\partial h}{\partial t} &= \alpha\_h (1 - h) - \beta\_h h, & (11.14) \\ \frac{\partial r}{\partial t} &= \alpha\_r (1 - r) - \beta\_r r, & (11.15) \end{aligned}$$

with initial conditions from the previous time step. Then, in the second step of the operator splitting scheme, we solve the linear system,

$$\begin{aligned} \nabla \cdot \sigma\_i \nabla u\_i &= 0, & \text{in } \Omega\_i, & (11.16) \\ \nabla \cdot \sigma\_e \nabla u\_e &= 0, & \text{in } \Omega\_e, & (11.17) \\ u\_e &= 0, & \text{at } \partial \Omega\_e, & (11.18) \\ \mathbf{n}\_e \cdot \sigma\_e \nabla u\_e &= -\mathbf{n}\_i \cdot \sigma\_i \nabla u\_i, & \text{at } \Gamma, & (11.19) \\ u\_i - u\_e &= v, & \text{at } \Gamma, & (11.20) \\ I\_m &= -\mathbf{n}\_i \cdot \sigma\_i \nabla u\_i, & \text{at } \Gamma, & (11.21) \\ \frac{\partial v}{\partial t} &= \frac{1}{C\_m} I\_m, & \text{at } \Gamma, & (11.22) \end{aligned}$$

with initial conditions provided from the first step of the operator splitting scheme.

### **Finite Difference Approximation of the EMI Model**

The first step of the operator splitting scheme for the EMI model is simply a system of ordinary differential equations, and we find the numerical solutions by replacing derivatives of the form ∂v/∂*t* with differences of the form (v<sup>n+1</sup> − v<sup>n</sup>)/∆*t* in an explicit manner.

In the second step of the operator splitting scheme, we also replace the temporal derivative in (11.22) with the standard difference, but we here treat the system in an implicit manner. That is, we replace (11.22) with

$$\frac{v^{n+1} - v^n}{\Delta t} = \frac{1}{C\_m} I\_m^{n+1} \,. \tag{11.23}$$

Furthermore, we use standard differences for the derivatives in (11.16), (11.17), (11.19), and (11.21). However, some special treatment is required for the normal derivatives in (11.19) and (11.21) at the corners of the cell. The details of the finite difference scheme are found in the code associated with these notes and are also described in more detail in [41].

### **11.1.2 EMI Model Simulation of a Neuronal Axon**

Using the parameter values specified in Table 11.1, we perform an EMI model simulation of a neuronal axon using a similar setup as for the cable equation in Chapter 9. However, to reduce the computational cost, we consider a shorter axon than in Chapter 9 (0.2 cm). A traveling wave moving from left to right is initiated by increasing the membrane potential of the leftmost 0.05 cm of the axon. Fig. 11.2 shows the numerical EMI model solution of the problem at three different points in time along a plane in the *x*- and y-directions. The left panel shows the extracellular potential, and the right panel shows the membrane potential, v.

**Table 11.1** Parameter values used in the EMI model simulations of an axon. The parameter values of the Hodgkin-Huxley model are as specified in Chapter 8.


**Fig. 11.2** EMI model solution for three points in time along a plane in the x- and y-directions. The plane is located at the z-value corresponding to the upper boundary of the cell. The left panel shows the extracellular potential, and the right panel shows the membrane potential, v.

### **11.2 The EMI Model for Connected Cardiomyocytes**

The EMI model for one cell considered in the previous section can be extended to incorporate currents between individual cells through gap junctions and thus be used to model collections of connected cardiomyocytes (see, e.g., [14, 16, 29, 35, 36, 37]). The resulting system of equations can be solved using a similar operator splitting technique as the one applied for a single cell above (see, e.g., [40]). Furthermore, a spatial operator splitting approach can be introduced in order to split the linear part of the system into one system for the extracellular space and one system for each individual cell (see [17, 19]).

**Fig. 11.3** EMI model solution (intracellular potential, *u*i) for a pulmonary vein sleeve at ten points in time, following the simulation set-up used in [20]. The two mutations N588K and A130V are both present. The solutions are found using the finite element method (see [20]). The finite element mesh used to represent each single cardiomyocyte in the simulation is illustrated in the lower panel of the figure. The full cylinder of cells seen in the upper panels contains 3930 cardiomyocytes, each associated with about 70 computational nodes. In addition, the mesh consists of about 44,000 extracellular nodes.

### **11.2.1 EMI Model Simulation of Cardiomyocytes in the Sleeve of a Pulmonary Vein**

To illustrate an example of the EMI model used for connected cardiomyocytes, we consider a collection of myocytes located around the sleeve of a pulmonary vein. In [20], the EMI model was used to study how mutations that have been found to be associated with increased risk of atrial fibrillation affected properties of the cardiomyocytes in this region, known to be a common initiation site of atrial fibrillation. In Fig. 11.3, we consider a collection of cells forming a cylinder around the vein, using the same setup as in [20]. The mesh used to represent each single cell in the simulation is illustrated in the lower panel of the figure. The properties of the individual cells vary according to known differences for cardiomyocytes in this region (see [20]). In the simulation, a combination of two mutations found to be associated with atrial fibrillation is present. The first mutation, N588K, leads to an increased potassium current (*I*Kr), and thus shortening of the action potential duration, whereas the second mutation, A130V, leads to reduced sodium current (*I*Na) and thereby reduced conduction velocity. In Fig. 11.3, we observe that when the two mutations are present, the solution is a traveling wave continuously moving around the cylinder of cells. Such a reentrant excitation wave could be a potential mechanism of atrial fibrillation.

### **11.3 Comments and Further Reading**


### **References**



# **Chapter 12 The Poisson-Nernst-Planck (PNP) Model**

In these notes, we have considered models of electrophysiology across several scales. The first was the membrane model. It assumes that the action potential is similar across the whole cell membrane, and the model represents the action potential as a function of time alone. No spatial variable is involved in the pure membrane models, so a length scale of these models does not make sense. As models of cardiac electrophysiology, we next considered the monodomain and bidomain equations. These models are accurate descriptions of the physics at the scale of millimeters, and the typical mesh resolution is about ∆*x* ≈ 0.25 mm (see, e.g., [1, 16]). The EMI model is cell-based and represents the physics at the micrometer scale. The typical mesh size for the EMI model is about 10 µm (see, e.g., [7, 8, 9]).

We have moved from the homogenized millimeter scale (monodomain/bidomain) to the cell-based EMI model on the micrometer scale. Next, we move to the nanometer scale. The reason for this is that strong electrical and chemical gradients exist very close to the cell membrane. These gradients, referred to as the Debye layer (see, e.g., [6]), are only a few nanometers wide. In order to study what happens very close to the membrane, it is necessary to solve equations on the nanometer level, and the proper equations are referred to as the Poisson-Nernst-Planck (PNP) model. Once again we will see that even if the model is rather complex, we can use the standard tricks introduced above to solve the equations; operator splitting and finite differences are all we need (plus a little blood, toil, sweat and tears).

### **12.1 The PNP System of Partial Differential Equations**

The PNP system can be written in the form

$$\nabla \cdot (\varepsilon \nabla \phi) = -\rho,\tag{12.1}$$

$$\frac{\partial c\_k}{\partial t} = \nabla \cdot D\_k \nabla c\_k + \nabla \cdot \left( D\_k \beta\_k c\_k \nabla \phi \right), \tag{12.2}$$
 
$$\text{for } k = \{\text{Na}^+, \text{K}^+, \text{Ca}^{2+}, \text{Cl}^-\}.$$

In this system, the variables are the electric potential, φ (in mV), and the ion concentrations, *c*k (in mM) for *k* = {Na<sup>+</sup>, K<sup>+</sup>, Ca<sup>2+</sup>, Cl<sup>−</sup>}. Furthermore, we have used the following definitions,

$$
\varepsilon = \varepsilon\_r \varepsilon\_0,\tag{12.3}
$$

$$
\rho = \rho\_0 + F \sum\_k z\_k c\_k,\tag{12.4}
$$

$$
\beta\_k = \frac{z\_k e}{k\_B T}.\tag{12.5}
$$

The parameters *F*, ε0, εr, ρ0, *D*k, *e*, *k*B, *T*, and *z*k for *k* = {Na<sup>+</sup>, K<sup>+</sup>, Ca<sup>2+</sup>, Cl<sup>−</sup>} are defined in Table 12.1.


**Table 12.1** Parameter values used in the PNP simulations, taken from [10].

Again, we will solve this system by applying operator splitting and replacing derivatives by differences. We assume that the solution is known at time *t*n = *n*∆*t*.

**Table 12.2** Initial conditions for the ion concentrations in the intracellular space, Ωi, and the extracellular space, Ωe. In the membrane, Ωm, all ion concentrations are set to zero. Furthermore, in the K<sup>+</sup> channel embedded in the membrane, the concentration of K<sup>+</sup> ions varies linearly from the intracellular to the extracellular concentration, whereas the remaining ion concentrations are set to zero. To make the entire domain electroneutral at *t* = 0, ρ0 is set up so that ρ = 0 in the channel for the initial conditions (see (12.4)).

The first step in the algorithm is to compute the electrical potential at time *t*n+1 = (*n* + 1)∆*t*. This step can be written as

$$\nabla\_h \cdot (\varepsilon \nabla\_h \phi^{n+1}) = -\rho^n,\tag{12.6}$$

where ∇h denotes a finite difference approximation of the gradient. Note that ρ<sup>n</sup> is taken from the previous time step, so only the electrical potential is unknown in this step. In the first time step, we use the initial conditions to compute ρ as given by (12.4).

When φ<sup>n+1</sup> has been computed, we can compute ∇hφ<sup>n+1</sup> and use it to solve the concentration equations by the following scheme,

$$\frac{c\_k^{n+1} - c\_k^n}{\Delta t} = \nabla\_h \cdot D\_k \nabla\_h c\_k^{n+1} + \nabla\_h \cdot \left( D\_k \beta\_k c\_k^{n+1} \nabla\_h \phi^{n+1} \right), \tag{12.7}$$

for *k* = {Na<sup>+</sup>, K<sup>+</sup>, Ca<sup>2+</sup>, Cl<sup>−</sup>}. Writing the complete finite difference schemes for these equations is a bit messy, but the interested reader can consult the online Matlab code, or the supplementary information of [10].
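A one-dimensional sketch of one time step of this scheme, for a single ion species, may nevertheless look as follows. The parameter values, the Dirichlet-type boundary treatment, and the names `pnp_step` and `drift_matrix` are illustrative assumptions of ours, not the two-dimensional multi-species setup of Table 12.1:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, dx, dt = 50, 0.5e-9, 1e-13          # 0.5 nm resolution, 0.1 ps step
eps = 7e-10                            # permittivity eps_r * eps_0 (F/m)
D = 1.3e-9                             # diffusion coefficient (m^2/s)
beta = 39.0                            # z e / (kB T) for z = 1 (1/V)
zF = 9.6e4                             # z * Faraday constant (C/mol)

# Stand-in 1D Laplacian with Dirichlet-type boundaries.
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2

def drift_matrix(phi):
    """Flux-form centered-difference matrix for c -> d/dx(c * dphi/dx)."""
    dphi = np.diff(phi) / (2 * dx**2)  # (phi[i+1] - phi[i]) / (2 dx^2)
    diag = np.zeros(N)
    diag[:-1] += dphi
    diag[1:] -= dphi
    return sp.diags([-dphi, diag, dphi], [-1, 0, 1])

def pnp_step(c):
    """One step: Poisson solve as in (12.6), then implicit update as in (12.7)."""
    phi = spla.spsolve((eps * L).tocsc(), -zF * c)   # eps * Lap(phi) = -rho
    A = D * L + D * beta * drift_matrix(phi)         # frozen phi^{n+1}
    c_new = spla.spsolve((sp.eye(N) - dt * A).tocsc(), c)
    return c_new, phi

c0 = 100.0 * np.ones(N)                # 100 mM = 100 mol/m^3
c0[:N // 2] = 10.0                     # concentration jump mid-domain
c1, phi = pnp_step(c0)
```

The key point of the splitting is visible in `pnp_step`: once φ<sup>n+1</sup> is frozen, the drift term is linear in the concentration, so the concentration update is a single sparse linear solve per species.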

### **12.1.1 Numerical Simulation of the Resting State**

We will use the scheme given by (12.6) and (12.7) to compute the resting state close to the cell membrane. In the models introduced in the previous chapters, we have taken the concentrations to be constants in space; i.e., we have assumed that the concentrations can vary in time across the cell membrane, but not in space in the intra- or extracellular spaces. This is often an accurate approximation, but we will see that significant gradients exist very close to the cell membrane. Furthermore, in the EMI model, the cell membrane is assumed to be infinitely thin, and in the bidomain and monodomain models, the cell membrane is assumed to be everywhere! In the PNP model simulation, the cell membrane is explicitly represented in the model (5 nm wide), but the ion channels integrated in the membrane are still modeled in an oversimplified manner.

**Fig. 12.1** Illustration of the PNP model domain, consisting of an intracellular domain, Ωi, an extracellular domain, Ωe, a membrane domain, Ωm, and a K<sup>+</sup> channel domain, Ωc. In the simulation reported in Fig. 12.2, we use the domain sizes *L*i = *L*e = *L*y = 50 nm, *L*m = 5 nm, and *w*c = 5 nm.

**Fig. 12.2** The membrane potential, v = φi − φe, as a function of time in the PNP simulation.

We solve the PNP system in two spatial dimensions; see Fig. 12.1. Note that the computational domain consists of an intracellular domain, Ωi, a cell membrane, Ωm, an extracellular space, Ωe, and a K<sup>+</sup> channel, Ωc. The initial conditions are given in Table 12.2, and the solutions are presented in Figs. 12.2–12.4.

In Fig. 12.2, we have plotted the membrane potential (v = φi − φe) as a function of time during the PNP simulation, and we see that the value starts at 0 and gradually approaches a typical resting potential value of about −80 mV. In Fig. 12.3 and Fig. 12.4, we show how the PNP model solution varies in space at the end of the simulation (at *t* = 50 ns) in the part of the domain that is located in the 5 nm closest to the membrane. The upper panel focuses on the intracellular side and the lower panel focuses on the extracellular side of the membrane. In Fig. 12.3, we plot the full 2D solution, and in Fig. 12.4, we plot the solution along lines in the *x*-direction at y = 0 nm and y = 25 nm. We observe that a boundary layer is formed close to the membrane, with slightly different values of the ion concentrations and potential than in the bulk intracellular and extracellular spaces.

**Fig. 12.3** Solutions at the end of the simulation (t = 50 ns) of the PNP model simulation in the 5 nm closest to the membrane on the intracellular side (upper panel) and the extracellular side (lower panel).

**Fig. 12.4** Solutions at the end of the simulation (t = 50 ns) of the PNP model simulation in the 5 nm closest to the membrane on the intracellular side (upper panel) and the extracellular side (lower panel). We show the solutions along lines in the x-direction for y = 0 nm and y = 25 nm.

### **12.2 Comments and Further Reading**


assumed everywhere, meaning that ρ ≡ 0 in the entire domain. That is, the charges sum to zero everywhere. Numerical approximations of the KNP equations can be found in, e.g., [2, 3, 13, 15].

4. Above, we mentioned that one single computational block (∆*x*<sup>3</sup> = (0.25 mm)<sup>3</sup>) for a standard mesh used to solve the bidomain model is large enough to cover almost 1000 cardiomyocytes. In the PNP model we use the resolution ∆*x* = 0.5 nm, and thus the volume of one block is 0.125 nm<sup>3</sup>. The volume of a sodium atom is ≈ 0.0244 nm<sup>3</sup>, so one computational block covers about five sodium atoms. The next scale, following bidomain/EMI/PNP, is therefore simulation based on representation of individual atoms. If a cell with a volume of 16 pL is represented by a uniform mesh at atomic (sodium atom) resolution, it will require about 6.5 × 10<sup>14</sup> blocks, which is a lot! A reasonably well-equipped PC today has 16 GB memory and can therefore work with a vector (in Matlab) of ∼ 2 × 10<sup>9</sup> real numbers. Thus, about 325,000 of these PCs are needed to store one real number per atom in a cardiomyocyte of 16 pL. So, it will probably take some time before atomic scale simulations can be used to simulate whole cells.
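These back-of-the-envelope numbers are easy to verify with a few lines of arithmetic (all input values are taken from the text above):

```python
# Volumes in m^3.
block = (0.5e-9) ** 3                  # one (0.5 nm)^3 PNP block = 0.125 nm^3
atom = 0.0244e-27                      # sodium atom, 0.0244 nm^3
cell = 16e-12 * 1e-3                   # 16 pL (1 L = 1e-3 m^3)

atoms_per_block = block / atom         # ~5 sodium atoms per PNP block
blocks_atomic = cell / atom            # ~6.5e14 blocks at atomic resolution
pcs_needed = blocks_atomic / 2e9       # ~3.3e5 PCs holding 2e9 numbers each

print(atoms_per_block, blocks_atomic, pcs_needed)
```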

### **References**



# **Index**

O-notation, 26, 42

Action potential, 15, 68, 70, 72, 73, 113, 119 Action potential duration (APD), 17, 19, 70, 75, 113 Algebraic equation, 3 Analytical solution, 3, 5, 6, 8, 22, 23, 25 Atrial fibrillation, 112, 113

Bidomain model, xi, 61, 93, 94, 96, 98–103, 109, 113, 114, 119, 121, 124
Boundary condition, 22, 27, 28, 34, 36, 53, 54, 58, 82, 83, 86, 90, 94, 97
Boundary element method, 90

Cable equation, 47, 79–90, 93, 109, 110
Calculus, 4, 6, 10, 22, 24, 26
Capacitance, 66, 71, 81, 96, 108
Capacitor, 80, 81
Cardiac electrophysiology, 93, 101, 102, 107, 114, 119
Cardiomyocyte, xi, 65, 71, 76, 90, 107, 111–113
Cell membrane, 71, 107
Cell-based model, 107, 113, 119
Compartment, 79–82, 90
Computational form, 5, 14, 23, 41, 43, 48, 54, 67, 73, 86, 90
Conductance, 66, 81, 88
Conduction velocity, 49, 88, 100, 113
Conductivity, 80, 88, 96, 101, 108
Current density, 66, 68, 81, 83, 97, 109

Depolarization, 17, 68, 70
Diffusion equation, xi, 21, 27, 30, 34, 39, 58, 94, 99
Dirichlet boundary condition, 22, 31

EMI model, xi, 107–114, 119, 121
Equilibrium potential, 66, 71, 73
Error, 5, 6, 15, 25, 26, 42, 56, 58, 60, 61, 67, 68, 73
Explicit scheme, 7, 14, 30, 33, 34, 41, 42, 44, 48, 53, 54, 60, 61, 67, 69, 73, 74, 84, 86, 87, 99
Extracellular potential, 81, 82, 89, 96, 100, 108, 110

Finite difference method, xii, 7, 23, 30, 33–35, 37, 41, 67, 84, 90, 93, 94, 98, 103, 109, 110, 114, 119, 121
Finite element method, xii, 90, 94, 114
Finite volume method, xii, 90
First order convergence, 7, 16, 42, 56, 67, 73
FitzHugh-Nagumo, xi, 13, 15, 17–19, 47, 59, 68, 88
Fractional steps, 61

Gap junction, 90, 111
Godunov splitting, 61

Hodgkin-Huxley, xi, 65, 68, 69, 71, 73, 75, 81, 83, 86–88, 109
Homogenization, 107, 113, 119

Implicit scheme, 7, 33–35, 38, 41, 42, 44, 53–55, 85, 99
Improved accuracy, 41
Initial condition, 3, 14, 22, 34, 53, 54, 58, 71, 72, 86, 94, 97, 98, 109, 110, 121, 122
Instability, 27, 28, 33
Integrating factor, 9
Intracellular potential, 80, 81, 96, 108
Ion channel, 66, 71, 76, 81, 83, 107, 109, 121
Ion concentration, 120–123

Iterative methods, 39

© The Author(s) 2023
K. Horgmo Jæger, A. Tveito, *Differential Equations for Studies in Computational Electrophysiology*, Simula SpringerBriefs on Computing 14, https://doi.org/10.1007/978-3-031-30852-9

Kirchhoff's current law, 81
Kirchhoff-Nernst-Planck (KNP) model, 114, 124

Leak channel, 66
Linear convergence, 7, 16, 25, 42, 67, 73
Linear system, 35, 39, 55, 61, 87, 98, 102, 109

Maple, 4
Mathematica, 4
Mathematical biology, 10
Maximum principle, 30
Membrane current, 80, 88
Membrane model, 65, 76, 109, 119
Membrane potential, 17, 65, 66, 71–73, 76, 81, 96, 100, 108, 110, 122
Mesh point, 23, 55, 109
Midpoint scheme, 41, 42, 44, 53, 54
Monodomain model, xi, 61, 93, 94, 101–103, 109, 113, 119, 121
Mutation, 113

Neumann boundary condition, 22, 28, 36, 94
Neuron, 17, 65, 73, 79, 90, 110
Newton's method, 54
No-flux boundary condition, 28
Numerical integration, 56
Numerical method, 4, 13, 21

Ohm's law, 80
Open probability, 66, 71
Operator splitting, xi, 53–56, 58, 59, 61, 85, 87, 88, 98, 102, 107, 109–112, 119, 120
Ordinary differential equation, xi, 13, 61, 65, 71, 98, 102, 109, 110
Oscillations, 29, 38, 44, 87

Pacemaker, 13, 15
Parsimonious model, 71, 73, 75, 96, 100
Partial differential equation, 13, 21, 25, 98, 102, 108, 120
Poisson equation, 102
Poisson-Nernst-Planck (PNP) model, xi, 119, 121, 123
Pulmonary vein, 112, 113

Quadratic convergence, 7, 42

Rabbit ventricular cardiomyocyte, 65, 71
Reaction-diffusion equation, xi, 42, 47, 48, 53, 54, 58, 59
Repolarization, 17, 68–70
Resistance, 80, 90
Resting potential, 73, 122
Riemann sum, 58

Scientific computing, 10, 39, 102
Second order convergence, 7, 42, 56
Simula Research Laboratory, ix
Smooth function, 4
Stability, 38, 86
Stability condition, 30
Stimulus current, 71, 72, 75
Strang splitting, 61
Summer school, ix
Symbolic computations, 4
System of algebraic equations, 54
System of ODEs, 13, 61, 65, 71, 76, 98, 99, 109, 110
System of PDEs, 98, 99, 108, 120

Taylor series, 22, 26, 28
Trapezoidal scheme, 58
Traveling wave, xi, 48, 49, 87, 88, 100, 110
Triangle inequality, 30

Unconditional stability, 38
University of California, San Diego, ix
University of Oslo, ix
Upstroke velocity, 17, 19, 70, 75

Wolfram Alpha, xi