**Developments in Mathematics**

## Jean-Luc Marichal Naïm Zenaïdi

# A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions

## **Developments in Mathematics**

Volume 70

#### **Series Editors**

Krishnaswami Alladi, Department of Mathematics, University of Florida, Gainesville, FL, USA

Pham Huu Tiep, Department of Mathematics, Rutgers University, Piscataway, NJ, USA

Loring W. Tu, Department of Mathematics, Tufts University, Medford, MA, USA

#### **Aims and Scope**

The **Developments in Mathematics** (DEVM) book series is devoted to publishing well-written monographs within the broad spectrum of pure and applied mathematics. Ideally, each book should be self-contained and fairly comprehensive in treating a particular subject. Topics in the forefront of mathematical research that present new results and/or a unique and engaging approach with a potential relationship to other fields are most welcome. High-quality edited volumes conveying current state-of-the-art research will occasionally also be considered for publication. The DEVM series appeals to a variety of audiences including researchers, postdocs, and advanced graduate students.

Jean-Luc Marichal • Naïm Zenaïdi

# A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions

Jean-Luc Marichal Department of Mathematics University of Luxembourg Esch-sur-Alzette, Luxembourg Naïm Zenaïdi Department of Mathematics University of Liège Liège, Belgium

Fonds National de la Recherche Luxembourg (http://dx.doi.org/10.13039/501100001866) Université du Luxembourg (http://dx.doi.org/10.13039/100008665)

ISSN 1389-2177 ISSN 2197-795X (electronic) Developments in Mathematics ISBN 978-3-030-95087-3 ISBN 978-3-030-95088-0 (eBook) https://doi.org/10.1007/978-3-030-95088-0

Mathematics Subject Classification: Primary: 39B22, 39A06, 26A51. Secondary: 39A60, 33B15, 33B20

© The Editor(s) (if applicable) and The Author(s) 2022. This book is an open access publication. **Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

*To Pascale, Olivia, Jean-Philippe, and Claudia*

*Jean-Luc Marichal*

*To Elise, Nils, and Eva*

*Naïm Zenaïdi*

## **Preface**

In this work, we provide a general and unified setting for a systematic and in-depth investigation of a broad variety of functions, including several special functions like the Euler gamma function, the polygamma functions, the Barnes G-function, the Hurwitz zeta function, and the generalized Stieltjes constants.

We know for instance that the gamma function

$$\Gamma(\mathbf{x}) = \int\_0^\infty t^{\mathbf{x}-1} e^{-t} \, dt$$

satisfies several fundamental properties and identities such as Bohr-Mollerup's characterization, Euler's infinite product, Gauss' multiplication formula, Stirling's formula, and Weierstrass' infinite product. In this book, we show through a series of new and elementary results that a large range of functions of mathematical analysis satisfy analogues of several properties of the gamma function, including those mentioned above.

The starting point of our theory is the remarkable characterization of the gamma function on the open half-line <sup>R</sup><sup>+</sup> <sup>=</sup> (0,∞) by Harald Bohr and Johannes Mollerup [23]. It simply states that the log-gamma function f (x) = ln -(x) is the unique convex solution vanishing at x = 1 to the equation

$$f(x+1) - f(x) = \ln x, \qquad x > 0.$$

This result can actually be slightly generalized as follows, where denotes the classical forward difference operator.

*All eventually convex solutions to the equation* f (x) <sup>=</sup> ln <sup>x</sup> *on* <sup>R</sup><sup>+</sup> *are of the form* f (x) <sup>=</sup> c + ln-(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

(Here and throughout, a function is said to be eventually convex if it is convex in a neighborhood of infinity.)

This characterization was later generalized to a wide class of functions by Wolfgang Krull [54] and then independently by Roger Webster [98]. They essentially showed that for any eventually concave function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that the sequence n → g(n) converges to zero, there exists exactly one (up to an additive constant) eventually convex solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation f = g. When g(x) = ln x, this latter result clearly reduces to the above Bohr-Mollerup characterization of the gamma function.

Krull-Webster's result constitutes an important contribution to the resolution of the difference equation f <sup>=</sup> <sup>g</sup> on the real half-line <sup>R</sup>+. Indeed, it provides analogues of Bohr-Mollerup's characterization for many functions, including the gamma function, the digamma function, and the q-gamma functions. Nevertheless, we can see that the asymptotic condition imposed on the function g remains rather restrictive. For instance, it is not satisfied by the functions g(x) = x ln x and g(x) = ln -(x). In fact, it is not even satisfied by the identity function g(x) = x.

In this book, we generalize Krull-Webster's result by relaxing considerably the asymptotic condition into requiring that the sequence <sup>n</sup> → pg(n) converges to zero for some nonnegative integer p. Each of the functions g(x) = x ln x, g(x) = ln -(x), and g(x) = x clearly satisfies this new assumption for p = 2. Moreover, in our generalization, the convexity and concavity properties used by Krull and Webster are naturally replaced with their p-order versions. On this matter, it is noteworthy that many of the familiar functions of real analysis are eventually convex or concave of any order.

The solutions arising from Krull-Webster's characterization are called log --*type functions*. Those arising from our generalized version are called *multiple* log --*type functions*. As we demonstrate through this work, this latter class of functions is very rich and includes a wide variety of special functions.

In the diagram opposite, we describe how our result generalizes to any nonnegative integer p the special case when p = 1 obtained by Krull and Webster, who both generalized Bohr-Mollerup's theorem.

We also follow and generalize Webster's approach and provide for multiple log --type functions analogues of *Euler's constant*, *Euler's infinite product*, *Gauss' limit*, *Gauss' multiplication formula*, *Gautschi's inequality*, *Legendre's duplication formula*, *Raabe's formula*, *Stirling's constant*, *Stirling's formula*, *Wallis's product formula*, *Weierstrass' infinite product*, and *Wendel's inequality* for the gamma function. We also introduce and discuss analogues of *Binet's function*, *Burnside's formula*, *Euler's reflection formula*, *Fontana-Mascheroni's series*, *Gauss' digamma theorem*, and *Webster's functional equation*. Some additional properties of multiple log --type functions are also provided and discussed, including asymptotic equivalences, asymptotic expansion formulas, Taylor series expansion formulas, and Gregory formula-based series representations.

Lastly, we apply our results thoroughly to several usual special functions, including the gamma and digamma functions, the polygamma functions, the qgamma function, the Barnes G-function, the Hurwitz zeta function and its higher order derivatives, and the generalized Stieltjes constants. We also briefly discuss some further special functions such as the Gauss error function, the exponential integral, the regularized incomplete gamma function, the multiple gamma functions,

#### **Higher order version of Krull-Webster's theory**

f (x) = g(x) <sup>g</sup> is eventually <sup>p</sup>-concave and pg(n) <sup>→</sup> <sup>0</sup> f is eventually p-convex

**Solutions: Multiple log** *-*-**type functions**

## **↑**

#### **Krull-Webster's theory**

f (x) = g(x) g is eventually concave and g(n) → 0 f is eventually convex

**Solutions: log** *-*-**type functions**

## **↑**

#### **Bohr-Mollerup's characterization**

f (x) = ln x f is eventually convex

**Solutions:** f (x) = c + ln -(x) and the Bernoulli polynomials. All these examples illustrate how powerful is our theory to produce formulas and identities almost mechanically.

For example, applying our results to the gamma function -: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> itself, we easily retrieve the following Gauss limit

$$\Gamma(\mathbf{x}) = \lim\_{n \to \infty} \frac{n! n^{\mathbf{x}}}{\mathbf{x}(\mathbf{x}+1) \cdots (\mathbf{x}+n)}, \qquad \mathbf{x} > \mathbf{0},$$

and the Weierstrass infinite product

$$\Gamma(\alpha) := \frac{e^{-\gamma \chi}}{\chi} \prod\_{k=1}^{\infty} \frac{e^{\frac{\chi}{k}}}{1 + \frac{\chi}{k}}, \qquad \chi > 0,$$

where γ is the Euler constant. We also easily establish the double inequality

$$\left(1+\frac{1}{x}\right)^{-\frac{1}{2}} \le \frac{\Gamma(x)}{\sqrt{2\pi} \ge^{\chi-\frac{1}{2}} e^{-\chi}} \le \left(1+\frac{1}{x}\right)^{\frac{1}{2}}, \qquad x > 0,$$

from which we immediately derive the Stirling formula

$$
\Gamma(\mathbf{x}) \sim \sqrt{2\pi} \, x^{\mathbf{x} - \frac{1}{2}} e^{-\mathbf{x}} \qquad \text{as } \mathbf{x} \to \infty.
$$

To give another example, let us consider the restriction to <sup>R</sup><sup>+</sup> of the Barnes <sup>G</sup>function (see Barnes [14–16]). That is, the function <sup>G</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> whose logarithm f (x) = ln G(x) is the unique 2-convex solution vanishing at x = 1 to the equation

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \ln \Gamma(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

Thus defined, the function ln G(x) is a multiple log --type function, and we can therefore state the following analogue of Bohr-Mollerup's characterization.

*All eventually* 2-*convex solutions to the equation* f (x) = ln -(x) *on* <sup>R</sup><sup>+</sup> *are of the form* f (x) <sup>=</sup> <sup>c</sup> <sup>+</sup> ln G(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

Using our results, we can also easily show that the Barnes G-function satisfies the following analogue of Gauss' limit for the gamma function

$$G(\mathbf{x}) = \lim\_{n \to \infty} \frac{\Gamma(1)\Gamma(2)\cdots\Gamma(n)}{\Gamma(\mathbf{x})\Gamma(\mathbf{x}+1)\cdots\Gamma(\mathbf{x}+n)} n!^{\chi} n^{\binom{\chi}{2}}, \qquad \mathbf{x} > \mathbf{0}.$$

Moreover, it satisfies the following analogue of Weierstrass' infinite product

$$G(\boldsymbol{x}) = \frac{e^{(-\boldsymbol{\gamma}-\boldsymbol{1})\binom{\boldsymbol{x}}{2}}}{\Gamma(\boldsymbol{\alpha})} \prod\_{k=1}^{\infty} \frac{\Gamma(k)}{\Gamma(\boldsymbol{x}+k)} k^{\boldsymbol{x}} e^{\psi\_1(k)\binom{\boldsymbol{x}}{2}}, \qquad \boldsymbol{x} > \boldsymbol{0},$$

Preface xi

where ψ<sup>1</sup> is the trigamma function defined by the equation

$$\psi\_{\mathbf{l}}(\mathbf{x}) := D^2 \ln \Gamma(\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

We also establish the double inequality

$$\left(1+\frac{1}{\chi}\right)^{-\frac{\mathfrak{z}}{12}} \le \frac{G(\mathbf{x})\,\Gamma(\mathbf{x})^{\frac{1}{2}}\,A^2\,(2\pi)^{\frac{1}{4}}}{\mathfrak{x}\,\frac{\mathfrak{z}}{2}\,e^{\psi\_{-2}(\mathbf{x})+\frac{1}{12}}} \le \left(1+\frac{1}{\chi}\right)^{\frac{\mathfrak{z}}{12}}, \qquad \mathbf{x}>0,1$$

from which we immediately derive the following analogue of Stirling's formula

$$G(\mathbf{x}) \sim A^{-2} (2\pi)^{-\frac{1}{4}} \mathbf{x}^{\frac{1}{12}} \Gamma(\mathbf{x})^{-\frac{1}{2}} e^{\psi\_{-2}(\mathbf{x}) + \frac{1}{12}} \qquad \text{as } \mathbf{x} \to \infty,$$

where ψ−<sup>2</sup> is the polygamma function defined by the equation

$$\psi\_{-2}(\mathbf{x}) = \int\_0^\chi \ln \Gamma(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}$$

and A is the Glaisher-Kinkelin constant defined by the equation

$$
\zeta'(-1) = \frac{1}{12} - \ln A\_{\cdot}
$$

In this work, we also derive many other properties of the Barnes G-function simply as analogues of properties of the gamma function.

To sum up, in this book, we develop a far-reaching generalization of the Bohr-Mollerup theorem, along lines initiated by Krull, Webster, and some others but going considerably further than past work. In particular, we show using elementary techniques that many classical properties of the gamma function have counterparts for a very wide variety of functions.

In this regard, we observe what Emil Artin [11, p. vi] wrote in his outstanding exposition of the gamma function:

"*I feel that this monograph will help to show that the gamma function can be thought of as one of the elementary functions, and that all of its basic properties can be established using elementary methods of the calculus.*"

In writing this book, our hope is to convince the reader that Artin's statement applies also to all the multiple log --type functions.

Lastly, since Bohr-Mollerup's theorem dates back to 1922, this work is also an opportunity to mark the 100th anniversary of this remarkable result and to spark the interest and enthusiasm of a large number of researchers in this theory.

Esch-sur-Alzette, Luxembourg Jean-Luc Marichal Liège, Belgium Naïm Zenaïdi

#### **Acknowledgments**

This work was supported by the Luxembourg National Research Fund (FNR, Project 16552440), as well as by the University of Luxembourg.

## **Contents**





## **List of Main Symbols**




## **Chapter 1 Introduction**

Let <sup>R</sup><sup>+</sup> denote the open half-line (0,∞) and let denote the forward difference operator on the space of functions from <sup>R</sup><sup>+</sup> to <sup>R</sup>. In this book, we are interested in the classical difference equation f <sup>=</sup> <sup>g</sup> on <sup>R</sup>+, which can be written explicitly as

$$f(\alpha + 1) - f(\alpha) = g(\alpha), \qquad \alpha > 0,$$

where <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is a given function. This equation appears naturally in the theory of the Euler gamma function, with f (x) = ln -(x) and g(x) = ln x, but also in the study of many other special functions such as the Barnes G-function and the Hurwitz zeta function (see Examples 1.6 and 1.7 below).

It is easily seen that, for any function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, the equation above has infinitely many solutions, and each of them can be uniquely determined by prescribing its values in the interval (0, 1]. Moreover, any two solutions always differ by a 1-periodic function, i.e., a periodic function of period 1.

For certain functions g, however, special solutions can be determined by their local properties or their asymptotic behaviors. On this issue, a seminal result is the very nice characterization of the gamma function by Bohr and Mollerup [23]. We recall this important result in the following theorem.

**Theorem 1.1 (Bohr-Mollerup's theorem)** *All log-convex solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> *to the equation*

$$f(\mathbf{x} + \mathbf{l}) = \mathbf{x} \, f(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}, \tag{1.1}$$

*are of the form* f (x) = c -(x)*, where* c > 0*.*

The additive, but equivalent, version of this result, obtained by taking the logarithm of both sides of (1.1), can be stated as follows.

*For* g(x) <sup>=</sup> ln <sup>x</sup>, *all convex solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the difference equation* f <sup>=</sup> <sup>g</sup> *are of the form* f (x) = c + ln -(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

As we can see, this characterization enables one to single out the gamma function as a kind of *principal solution* to its equation (Nörlund [82, Chapter 5] calls it the "Hauptlösung").

It is noteworthy that the proof of Bohr-Mollerup's characterization was simplified later by Artin [10] (see also Artin [11]) and, as observed by Webster [98], this result has then become known also "as the Bohr-Mollerup-Artin Theorem, and was adopted by Bourbaki [24] as the starting point for his exposition of the gamma function."

*Remark 1.2* In their original result, Bohr and Mollerup actually considered the additional assumption that f (1) = 1, thus leading to the gamma function as the unique solution (see Artin [11, p. 14]). However, it is easy to see that Theorem 1.1 immediately follows from this original result (just replace f (x) with f (x)/f (1)). ♦

A remarkable generalization of Bohr-Mollerup's theorem was provided by Krull [54, 55] and then independently by Webster [97, 98]. Recall that a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is said to be eventually convex (resp. eventually concave) if it is convex (resp. concave) in a neighborhood of infinity. Krull [54] essentially showed that for any eventually concave function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that, for each h > 0,

$$\mathbf{g}(\mathbf{x} + h) - \mathbf{g}(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty,\tag{1.2}$$

there exists exactly one (up to an additive constant) eventually convex solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation f <sup>=</sup> <sup>g</sup> (and dually, if <sup>g</sup> is eventually convex, then <sup>f</sup> is eventually concave). He also provided an explicit expression for this solution as a pointwise limit of functions, namely

$$f(\mathbf{x}) = f(1) + \lim\_{n \to \infty} f\_n^1[\mathbf{g}](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0},$$

where

$$\, \, \, f\_n^{\,\\\,\\\,}[\mathbf{g}](\mathbf{x}) = \, \, -\, \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^{n-1} (\mathbf{g}(k) - \mathbf{g}(\mathbf{x} + k)) + \mathbf{x} \, \mathbf{g}(n) . \tag{1.3}$$

Much later, and independently, Webster [97, 98] established the multiplicative version of Krull's result.

We can actually show that this result still holds if we replace the asymptotic condition (1.2) imposed on the function g with the slightly more general condition that the sequence n → g(n) converges to zero. However, although this result constitutes a very nice generalization of Bohr-Mollerup's theorem, we note that the latter asymptotic condition remains a rather restrictive assumption. For instance, it is not satisfied by the functions g(x) = x ln x and g(x) = ln -(x).

In this work, we generalize Krull-Webster's result above by relaxing the asymptotic condition on g into the much weaker requirement that the sequence n → pg(n) converges to zero for some nonnegative integer p. More precisely, we show that Krull-Webster's result still holds if we assume this weaker condition, provided that we replace the convexity and concavity properties with the p-convexity and p-concavity properties (see Definition 2.2) and the function f <sup>1</sup> <sup>n</sup> [g] defined in (1.3) with an appropriate version of it, which we now introduce.

Throughout this book, we let N denote the set of nonnegative integers and we let N<sup>∗</sup> denote the set of strictly positive integers.

**Definition 1.3** For any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, and any <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, we define the function f <sup>p</sup> <sup>n</sup> [g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$f\_n^{p}[\mathbf{g}](\mathbf{x}) = \mathbf{-}\, \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^{n-1} (\mathbf{g}(k) - \mathbf{g}(\mathbf{x} + k)) + \sum\_{j=1}^{p} \binom{\mathbf{x}}{j} \Delta^{j-1} \mathbf{g}(\mathbf{n}).\tag{1.4}$$

We now state our result in the following existence theorem. It actually constitutes the p-order version of Krull-Webster's result.

**Theorem 1.4 (Existence)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and suppose that the function* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is eventually* p*-convex or eventually* p*-concave and has the asymptotic property that the sequence* <sup>n</sup> → pg(n) *converges to zero. Then there exists a unique (up to an additive constant) eventually* <sup>p</sup>*-convex or eventually* <sup>p</sup>*-concave solution* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the difference equation* f <sup>=</sup> <sup>g</sup>*. Moreover,*

$$f(\mathbf{x}) = f(\mathbf{l}) + \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}, \tag{1.5}$$

*and* <sup>f</sup> *is* <sup>p</sup>*-convex (resp.* <sup>p</sup>*-concave) on any unbounded subinterval of* <sup>R</sup><sup>+</sup> *on which* g *is* p*-concave (resp.* p*-convex).*

Webster [98, Theorem 3.1] also established (in the multiplicative notation) a uniqueness theorem, which does not require the function g to be eventually convex or eventually concave. In the next theorem, we provide the p-order version of this result.

**Theorem 1.5 (Uniqueness)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let the function* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *have the property that the sequence* <sup>n</sup> → pg(n) *converges to zero. Suppose that* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is an eventually* <sup>p</sup>*-convex or eventually* <sup>p</sup>*-concave function satisfying the difference equation* f = g*. Then* f *is uniquely determined (up to an additive constant) by* g *through the equation*

$$f(\mathbf{x}) = f(\mathbf{l}) + \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

We observe that Theorem 1.4 was first proved in the case when p = 0 by John [49]. As mentioned above, it was also established in the case when p = 1 by Krull [54] and then by Webster [98]. More recently, the case when p = 2 was investigated by Rassias and Trif [86], but the asymptotic condition they imposed on the function g is much stronger than ours and hence it defines a very specific subclass of functions. (We discuss Rassias and Trif's result in Appendix B.) We also observe that attempts to establish Theorem 1.4 for any value of p were made by Kuczma [58, Theorem 1] (see also Kuczma [60, pp. 118–121]) and then by Ardjomande [9]. However, the representation formulas they provide for the solutions are rather intricate. Thus, to the best of our knowledge, both Theorems 1.4 and 1.5, as stated above in their full generality and simplicity, were previously unknown.

For any solution f arising from Theorem 1.4 when p = 1, Webster [98] calls the function exp ◦f a --*type function*. In fact, exp ◦f reduces to the gamma function (i.e., f (x) = ln -(x)) when exp ◦g is the identity function (i.e., g(x) = ln x), which simply means that the gamma function restricted to <sup>R</sup><sup>+</sup> is itself a --type function. In this particular case, the limit given in (1.5) reduces to the following Gauss wellknown limit for the gamma function (see Artin [11, p. 15])

$$\Gamma(\mathbf{x}) = \lim\_{n \to \infty} \frac{n! n^{\mathbf{x}}}{\mathbf{x}(\mathbf{x}+1) \cdots (\mathbf{x}+n)}, \qquad \mathbf{x} > \mathbf{0}. \tag{1.6}$$

Similarly, for any fixed <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any solution <sup>f</sup> arising from Theorem 1.4, we call the function exp ◦f a <sup>p</sup>-*type function*, and we naturally call the function f a log <sup>p</sup>-*type function*. When the value of p is not specified, we call these functions *multiple* --*type function* and *multiple* log --*type function*, respectively. This terminology will be introduced more formally and justified in Sect. 5.2.

Interestingly, Webster established for --type functions analogues of *Euler's constant*, *Gauss' multiplication formula*, *Legendre's duplication formula*, *Stirling's formula*, and *Weierstrass' infinite product* for the gamma function. In this work, we also establish for multiple --type functions and multiple log --type functions analogues of all the formulas above as well as analogues of *Euler's infinite product*, *Gautschi's inequality*, *Raabe's formula*, *Stirling's constant*, *Wallis's product formula*, and *Wendel's inequality*. We also introduce and discuss analogues of *Binet's function*, *Burnside's formula*, *Euler's reflection formula*, *Fontana-Mascheroni's series*, and *Gauss' digamma theorem*. Thus, (to paraphrase Webster [98, p. 607]) for each multiple --type function, it is no longer surprising for instance that "some analogue of *Legendre's duplication formula* must hold, almost rendering a formal proof unnecessary!"

All these results, together with the uniqueness and existence theorems above, show that the theory we develop in this book provides a very general and unified framework to study the properties of a large variety of functions. Thus, for each of these functions we can retrieve known formulas and sometimes establish new ones.

#### 1 Introduction 5

At the risk of repeating a large part of our preface, we now present two representative examples to illustrate the way our results can be applied to derive formulas methodically.

*Example 1.6 (The Barnes* <sup>G</sup>*-function, see Sect. 10.5)* The restriction to <sup>R</sup><sup>+</sup> of the Barnes <sup>G</sup>-function can be defined as the function <sup>G</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> whose logarithm f (x) = ln G(x) is the unique eventually 2-convex solution that vanishes at x = 1 to the equation

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \ln \Gamma(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

Thus, our Theorems 1.4 and 1.5 apply with g(x) = ln -(x) and p = 2, which shows that the function ln G(x) is a log -2-type function and hence that the function G(x) is a -2-type function. In particular, formula (1.5) provides the following analogue of *Gauss' limit* for the gamma function

$$G(\mathbf{x}) = \lim\_{n \to \infty} \frac{\Gamma(1)\Gamma(2) \cdots \Gamma(n)}{\Gamma(\mathbf{x})\Gamma(\mathbf{x}+1) \cdots \Gamma(\mathbf{x}+n)} n!^{\chi} n^{\binom{\chi}{2}} \cdots$$

Using some of our new results, we are also able to derive various unusual formulas and properties. For instance, we have the following analogue of *Euler's infinite product*

$$G(\mathbf{x}) = \frac{1}{\Gamma(\mathbf{x})} \prod\_{k=1}^{\infty} \frac{\Gamma(k)}{\Gamma(\mathbf{x} + k)} k^{\chi} (1 + 1/k)^{\binom{\chi}{2}}$$

and the following analogue of *Weierstrass' infinite product*

$$G(\boldsymbol{\chi}) = \frac{e^{(-\boldsymbol{\chi}-\boldsymbol{1})\binom{\boldsymbol{\chi}}{2}}}{\Gamma(\boldsymbol{\chi})} \prod\_{k=1}^{\infty} \frac{\Gamma(k)}{\Gamma(\boldsymbol{\chi}+k)} k^{\boldsymbol{\chi}} e^{\psi'(k)\binom{\boldsymbol{\chi}}{2}},$$

where γ is the Euler constant and ψ is the digamma function. We also have the following analogue of *Stirling's formula*

$$G(\mathbf{x}) \sim A^{-2} (2\pi)^{-\frac{1}{4}} \mathbf{x} \not\vdash \Gamma(\mathbf{x})^{-\frac{1}{2}} e^{\psi\_{-2}(\mathbf{x}) + \frac{1}{12}} \qquad \text{as } \mathbf{x} \to \infty, \mathbf{x}$$

where ψ−<sup>2</sup> is the polygamma function defined by the equation

$$
\psi\_{-2}(\mathbf{x}) = \int\_0^\chi \ln \Gamma(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0},
$$

and A is Glaisher-Kinkelin's constant defined by the equation

$$
\zeta'(-1) = \frac{1}{12} - \ln A
$$

(Here the map s → ζ (s) denotes the derivative of the Riemann zeta function.) We can also easily derive the following analogue of *Wendel's double inequality*

$$\left| \left( 1 + \frac{a}{\chi} \right)^{-\left| \binom{a-1}{2} \right|} \right| \le \frac{G(\chi + a)}{G(\chi) \left| \Gamma(\chi)^a \chi^{\binom{a}{2}} \right|} \le \left( 1 + \frac{a}{\chi} \right)^{\left| \binom{a-1}{2} \right|},$$

which holds for any x > 0 and any a ≥ 0. As a corollary, this inequality immediately provides the following asymptotic equivalence

$$\frac{G(x+a)}{G(x)} \sim \Gamma(x)^a x^{\binom{a}{2}} \qquad \text{as } x \to \infty, \text{ }$$

which reveals the asymptotic behavior of G(x + a)/G(x) for large values of x. ♦

*Example 1.7 (The Hurwitz zeta function, see Sect. 10.6)* Consider the Hurwitz zeta function <sup>s</sup> → ζ (s, a), defined when (a) > 0 as an analytic continuation to <sup>C</sup> \ {1} of the series

$$\sum\_{k=0}^{\infty} (a+k)^{-s} \quad , \qquad \Re(s) > 1 \text{ .}$$

This function is known to satisfy the difference equation

$$
\zeta(s, a+1) - \zeta(s, a) = -a^{-s}.
$$

Thus, it is not difficult to see that, for any <sup>s</sup> <sup>∈</sup> <sup>R</sup> \ {1}, the restriction of the map <sup>x</sup> → ζ (s, x) to <sup>R</sup><sup>+</sup> is a log p(s)-type function, where

$$p(\mathbf{s}) = \max\{0, \lfloor 1 - \mathbf{s} \rfloor\}.$$

Theorem 1.5 then tells us that all eventually p(s)-convex or eventually p(s)-concave solutions fs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the difference equation

$$f\_s(\mathbf{x} + \mathbf{l}) - f\_s(\mathbf{x}) = -\mathbf{x}^{-s}$$

are of the form

$$f\_s(\mathbf{x}) := c\_s + \zeta(\mathbf{s}, \mathbf{x}),$$

where cs <sup>∈</sup> <sup>R</sup>. Moreover, equation (1.5) provides the following analogue of *Gauss' limit* for the gamma function

$$\xi(s,x) = \xi(s) + x^{-s} + \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \left( (x+k)^{-s} - k^{-s} \right) - \sum\_{j=1}^{p(s)} \binom{x}{j} \Delta\_n^{j-1} n^{-s} \right),$$

where s → ζ (s) = ζ (s, 1) is the Riemann zeta function. Some of our results also enable us to derive the following analogues of *Stirling's formula*

$$\xi(s,x) + \frac{x^{1-s}}{1-s} - \sum\_{j=1}^{p(s)} G\_j \, \Delta\_x^{j-1} x^{-s} \to 0 \qquad \text{as } x \to \infty,$$

$$\xi(s,x) + \frac{1}{1-s} \sum\_{j=0}^{p(s)} \binom{1-s}{j} \frac{B\_j}{\varkappa^{s+j-1}} \to 0 \qquad \text{as } x \to \infty,$$

where Gn is the nth Gregory coefficient and Bn is the nth Bernoulli number. For instance, setting <sup>s</sup> = −<sup>3</sup> <sup>2</sup> in these asymptotic formulas, we obtain

$$\xi\left(-\frac{3}{2},x\right) + \frac{2}{5}x^{5/2} - \frac{7}{12}x^{3/2} + \frac{1}{12}(x+1)^{3/2} \to 0 \qquad \text{as } x \to \infty, \qquad$$

$$\xi\left(-\frac{3}{2},x\right) + \frac{2}{5}x^{5/2} - \frac{1}{2}x^{3/2} + \frac{1}{8}x^{1/2} \to 0 \qquad \text{as } x \to \infty.$$

Many more formulas and properties involving the Hurwitz zeta function will be provided and discussed in Sect. 10.6. ♦

The two examples above illustrate the scope of our theory and the diversity of our results. These examples and many others will be explored and discussed in the last chapters of this book. However, in the first chapters we will almost always use the basic function g(x) = ln x as the guiding example to illustrate our results.

**Outline of the Book** Let us now see how this book is organized. On the whole, Chaps. 2 to 8 are devoted to the conceptual part: we develop our theory and establish our results. Chapters 10 to 12 focus on applications to a large number of functions, including several classical special functions. In between, Chap. 9 presents an overview and a summary of our results. After reading this introduction, the reader interested by such an overview can go immediately to Chap. 9.

In Chap. 2, we present some definitions and preliminary results on Newton interpolation theory as well as on higher order convexity properties.

In Chap. 3, we establish Theorems 1.4 and 1.5 and provide conditions for the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) to converge uniformly on any bounded subset of <sup>R</sup>+. We also examine the particular case when the sequence n → g(n) is summable, and we provide historical remarks on some improvements of Krull-Webster's theory.

In Chap. 4, we investigate the functions that satisfy the asymptotic condition stated in Theorems 1.4 and 1.5. We also investigate those functions that are eventually p-convex or eventually p-concave.

In Chap. 5, we introduce, investigate, and characterize the multiple log --type functions.

Chapter 6 is devoted to an asymptotic analysis of multiple log --type functions. More specifically, in that chapter we show how Euler's constant, Stirling's constant, Stirling's formula, and Wendel's inequality for the gamma function can be generalized to the multiple --type functions and multiple log --type functions and we introduce and discuss analogues of Binet's function and Burnside's formula. We also show how the so-called Gregory summation formula, with an integral form of the remainder, can be very easily derived in this setting.

In Chap. 7, we discuss conditions for the multiple log --type functions to be differentiable and establish several important properties of the higher order derivatives of these functions.

In Chap. 8, we explore further properties of the multiple log --type functions. Specifically, we provide asymptotic expansions of these functions as well as analogues of Euler's infinite product, Fontana-Mascheroni's series, Gauss' multiplication formula, Gautschi's inequality, Raabe's formula, Wallis's product formula, and Weierstrass' infinite product for the gamma function. We also discuss analogues of Euler's reflection formula and Gauss' digamma theorem, and we define and solve a generalized version of a functional equation proposed by Webster.

Chapter 9 is the transition from the theory to the applications. It provides a catalogue of our most relevant results, which can be used as a checklist to investigate the multiple log --type functions. Chapter 9 is self-contained and can be read right after this introduction.

In Chaps. 10 to 12, we apply our results to a number of multiple --type functions and multiple log --type functions, some of which are well-known special functions related to the gamma function.

In Chap. 13, we make some concluding remarks and propose a list of interesting open questions.

**Notation and Basic Definitions** Throughout this book, we use the following notation and definitions. Further definitions will be given in the subsequent chapters.

Unless indicated otherwise, the symbol I always denotes an arbitrary interval of the real line whose interior is nonempty.

The symbol S represents either <sup>N</sup> or <sup>R</sup>. For any S ∈ {N, <sup>R</sup>}, the notation <sup>x</sup> <sup>→</sup><sup>S</sup> <sup>∞</sup> means that x tends to infinity, assuming only values in S. We sometimes omit the subscript S when no confusion may arise.

Two functions <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> and <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> such that f (x)/g(x) <sup>→</sup> 1 as x →<sup>S</sup> ∞ are said to be asymptotically equivalent (over S). In this case, we write

$$f(\mathbf{x}) \sim g(\mathbf{x}) \qquad \text{ as } \mathbf{x} \to \mathbf{s} \text{ } \infty.$$

For any <sup>x</sup> <sup>∈</sup> <sup>R</sup>, we set

$$x\_+ = \max\{0, x\}.$$

As usual, we also let x denote the floor of x, i.e., the greatest integer less than or equal to x. Similarly, we let x denote the ceiling of x, i.e., the smallest integer greater than or equal to x. When no confusion may arise, we let {x} denote the fractional part of x, i.e., {x} = x − x.

#### 1 Introduction 9

For any <sup>x</sup> <sup>∈</sup> <sup>R</sup> and any <sup>k</sup> <sup>∈</sup> <sup>N</sup>, we set

$$\alpha^{\underline{k}} = \left. x(x-1) \cdots (x-k+1) \right| \\ = \frac{\Gamma(x+1)}{\Gamma(x-k+1)},$$

and we let

$$\varepsilon\_k(x) \in \{-1, 0, 1\}$$

denote the sign of xk.

For any <sup>k</sup> <sup>∈</sup> <sup>N</sup> and any nonempty open real interval <sup>I</sup> , we let *<sup>C</sup>*k(I ) denote the set of <sup>k</sup> times continuously differentiable functions on <sup>I</sup> , and we set *<sup>C</sup>*<sup>k</sup> <sup>=</sup> *<sup>C</sup>*k(R+). We also introduce the intersection sets

$$\mathcal{C}^{\infty}(I) = \bigcap\_{k \ge 0} \mathcal{C}^k(I) \qquad \text{and} \qquad \mathcal{C}^{\infty} = \bigcap\_{k \ge 0} \mathcal{C}^k.$$

We let and D denote the usual difference and derivative operators, respectively. We sometimes add a subscript to specify the variable on which the operator acts, e.g., writing n and Dx.

Recall that the digamma function <sup>ψ</sup> is defined on <sup>R</sup><sup>+</sup> by the equation

$$
\psi(\mathbf{x}) = D \ln \Gamma(\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.
$$

The polygamma functions ψν (<sup>ν</sup> <sup>∈</sup> <sup>Z</sup>) are defined on <sup>R</sup><sup>+</sup> as follows (see, e.g., Srivastava and Choi [93]). If <sup>ν</sup> <sup>∈</sup> <sup>N</sup>, then

ψν (x) <sup>=</sup> <sup>D</sup>νψ(x) <sup>=</sup> <sup>ψ</sup>(ν)(x).

In particular, <sup>ψ</sup><sup>0</sup> <sup>=</sup> <sup>ψ</sup> is the digamma function. If <sup>ν</sup> <sup>∈</sup> <sup>Z</sup> \ <sup>N</sup>, then we introduce the functions

$$
\psi\_{-1}(\mathfrak{x}) = \ln \Gamma(\mathfrak{x})
$$

and

$$\psi\_{\boldsymbol{\nu}-1}(\boldsymbol{x}) \, = \int\_0^{\boldsymbol{\chi}} \psi\_{\boldsymbol{\nu}}(t) \, dt \, = \int\_0^{\boldsymbol{\chi}} \frac{(\boldsymbol{x} - t)^{-\boldsymbol{\nu} - 1}}{(-\boldsymbol{\nu} - 1)!} \ln \Gamma(t) \, dt \,.$$

Recall also that the harmonic number function x → Hx is defined on (−1,∞) by the equation

$$H\_{\chi} = \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \frac{1}{x+k} \right) \qquad \text{for } x > -1.$$

Clearly, this function has the property that

$$
\Delta\_{\lambda} H\_{\lambda} = \frac{1}{x+1}, \qquad x > -1.
$$

Moreover, both functions Hx and ψ(x) are strongly related: we have

$$H\_{\chi-1} = \psi(\chi) + \mathcal{Y}, \qquad x > 0,$$

where γ is Euler's constant (also called Euler-Mascheroni constant).

We end this first chapter by introducing some new concepts that will be very useful in this book.

**Definition 1.8** For any a > 0, any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, and any <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, we define the function ρ<sup>p</sup> <sup>a</sup> [g]: [0,∞) <sup>→</sup> <sup>R</sup> by the equation

$$\rho\_a^p[\mathbf{g}](\mathbf{x}) = \mathbf{g}(\mathbf{x} + a) - \sum\_{j=0}^{p-1} \binom{\chi}{j} \Delta^j \mathbf{g}(a) \qquad \text{for } \mathbf{x} \ge \mathbf{0}. \tag{1.7}$$

Identity (1.7) clearly shows that the function ρ<sup>p</sup> <sup>a</sup> [g] is actually defined on the open interval (−a,∞). However, in this work we will almost always consider it as a function defined on the interval [0,∞). We also note that <sup>ρ</sup><sup>p</sup> <sup>a</sup> [g](0) = 0.

**Definition 1.9** For any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any S ∈ {N, <sup>R</sup>}, we let *<sup>R</sup>*<sup>p</sup> <sup>S</sup> denote the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that, for each x > 0,

$$
\rho\_a^p[\mathfrak{g}](\mathfrak{x}) \to \begin{array}{c} 0 \\ \end{array} \quad \text{as } a \to\_{\mathbb{S}} \infty.
$$

We also let *<sup>D</sup>*<sup>p</sup> <sup>S</sup> denote the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that

$$
\Delta^p g(x) \to \begin{array}{c} 0 \\ \end{array} \quad \text{as } x \to\_{\mathbb{S}} \infty.
$$

We immediately observe that the inclusion *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> holds for every <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We will see in Sects. 3.1 and 4.1 that the inclusion *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> also holds for every <sup>p</sup> <sup>∈</sup> <sup>N</sup>.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 2 Preliminaries**

This chapter is devoted to some basic definitions and results that are needed in this book. We essentially focus on the Newton interpolation theory and the higher order convexity and concavity properties.

Recall that, unless indicated otherwise, the symbol I always denotes an arbitrary real interval whose interior is nonempty.

#### **2.1 Newton Interpolation Theory**

In this first section, we recall some basic facts about Newton interpolation theory and divided differences. We also establish a result on the derivatives of interpolating polynomials. For background see, e.g., de Boor [32, Chapter 1], Gel'fond [39, Chapter 1], Quarteroni et al.[85, Section 8.2.2], and Stoer and Bulirsch [94, Section 2.1.3].

Let <sup>n</sup> <sup>∈</sup> <sup>N</sup> and let <sup>x</sup>0, x1,...,xn be any (not necessarily distinct) points of <sup>I</sup> . Let also <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> be so that <sup>D</sup>mi−<sup>1</sup>f (xi) exists for <sup>i</sup> <sup>=</sup> <sup>0</sup>,...,n, where mi is the multiplicity of xi among the points x0, x1,...,xn.

We let

$$f[\mathbf{x}0, \mathbf{x}1, \dots, \mathbf{x}\_n]$$

denote the divided difference of f at the points x0, x1,...,xn, and we let the map

$$\{\mathbf{x} \; \mapsto \; \; P\_n[f](\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n; \mathbf{x})\}$$

13

denote the interpolating polynomial of f with nodes at x0, x1,...,xn, i.e., the unique polynomial P satisfying the equations

$$D^k P(\mathbf{x}\_l) = D^k f(\mathbf{x}\_l), \qquad 0 \le k \le m\_l - 1, \qquad l = 0, \dots, n.$$

This polynomial has degree at most n.

Recall that <sup>f</sup> [x0, x1,...,xn] is precisely the coefficient of <sup>x</sup><sup>n</sup> in the interpolating polynomial Pn[f ](x0, x1,...,xn; x). More precisely, the Newton interpolation formula states that

$$P\_n[f](\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n; \mathbf{x}) \ = \sum\_{k=0}^n f[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_k] \prod\_{l=0}^{k-1} (\mathbf{x} - \mathbf{x}\_l) . \tag{2.1}$$

Moreover, the corresponding interpolation error at any x ∈ I can take the following form

$$f(\mathbf{x}) - P\_{\mathbb{R}}[f](\mathbf{x}\_{\emptyset}, \mathbf{x}\_{\emptyset}, \dots, \mathbf{x}\_{\emptyset}; \mathbf{x}) \ = \ f[\mathbf{x}\_{\emptyset}, \mathbf{x}\_{\emptyset}, \dots, \mathbf{x}\_{\emptyset}, \mathbf{x}] \prod\_{l=0}^{n} (\mathbf{x} - \mathbf{x}\_{l}). \tag{2.2}$$

Recall also that the map

$$(z\_0, z\_1, \dots, z\_n) \mapsto f[z\_0, z\_1, \dots, z\_n]$$

is symmetric, i.e., invariant under any permutation of its arguments. Moreover, the divided differences of f can be computed via the following recurrence relation. For any k ∈ {0, 1,...,n}, we have f [xk] = f (xk) and

$$f[\mathbf{x}\_0, \dots, \mathbf{x}\_k] = \begin{cases} \frac{f[\mathbf{x}\_1, \dots, \mathbf{x}\_k] - f[\mathbf{x}\_0, \dots, \mathbf{x}\_{k-1}]}{\mathbf{x}\_k - \mathbf{x}\_0}, & \text{if } \mathbf{x}\_k \neq \mathbf{x}\_0, \\\\ \frac{1}{k!} D^k f(\mathbf{x}\_0), & \text{if } \mathbf{x}\_0 = \mathbf{x}\_1 = \dots = \mathbf{x}\_k. \end{cases} \tag{2.3}$$

When the points x0, x1,...,xn are pairwise distinct, we also have the following explicit expression

$$f[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] = \sum\_{k=0}^n \frac{f(\mathbf{x}\_k)}{\prod\_{j \neq k} (\mathbf{x}\_k - \mathbf{x}\_j)}.\tag{2.4}$$

We now establish a proposition that shows how the derivative of an interpolating polynomial of a differentiable function f is related to the derivative of f .

**Proposition 2.1** *Suppose that* I *is an arbitrary nonempty open real interval. For any* <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗*, any system* <sup>x</sup><sup>0</sup> < x<sup>1</sup> <sup>&</sup>lt; ··· < xn *of* <sup>n</sup> <sup>+</sup> <sup>1</sup> *points in* <sup>I</sup> *, and any differentiable function* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup>*, there exist* <sup>n</sup> *points* <sup>ξ</sup>0,...,ξn−<sup>1</sup> *in* <sup>I</sup> *such that, for* i = 0,...,n − 1*, we have* xi < ξi < xi+<sup>1</sup> *and*

$$\left.D\_{\mathbf{x}}P\_{\mathbf{n}}[f](\mathbf{x}\_{0},\ldots,\mathbf{x}\_{\mathbf{n}};\mathbf{x})\right|\_{\mathbf{x}=\xi\_{l}}=f'(\xi\_{l}).\tag{2.5}$$

*Moreover, we have*

$$D\_{\mathbf{x}}P\_{n}[f](\mathbf{x}\_{0},\ldots,\mathbf{x}\_{n};\mathbf{x}) \;=\; P\_{n-1}[f'](\xi\_{0},\ldots,\xi\_{n-1};\mathbf{x})\tag{2.6}$$

*and*

$$n f[\mathbf{x}\_0, \dots, \mathbf{x}\_n] = f'[\xi\_0, \dots, \xi\_{n-1}].\tag{2.7}$$

*Proof* The function <sup>g</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(\mathbf{x}) := P\_n[f](\mathbf{x}\_0, \dots, \mathbf{x}\_n; \mathbf{x}) - f(\mathbf{x}) \qquad \text{for } \mathbf{x} \in I$$

vanishes at the n + 1 points x0, x1,...,xn. The first part of the proposition then follows from applying Rolle's theorem in each interval (xi, xi+1). Now, identity (2.6) immediately follows from (2.5) and the very definition of the interpolating polynomial. Identity (2.7) then follows by equating the coefficients of xn−<sup>1</sup> in (2.6). 

#### **2.2 Higher Order Convexity and Concavity**

Let us recall the definitions of p-convex and p-concave functions and present some related results. For background see, e.g., Kuczma [58], Kuczma [61, Chapter 15], Popoviciu [84], and Roberts and Varberg [87, pp. 237–240].

**Definition 2.2 (***p***-Convexity and** *p***-Concavity)** A function <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> is said to be *convex of order* p (resp. *concave of order* p) or simply p-*convex* (resp. p*concave*) for some integer p ≥ −1 if for any system x<sup>0</sup> < x<sup>1</sup> < ··· < xp+<sup>1</sup> of p + 2 points in I it holds that

f [x0, x1,...,xp+1] ≥ 0 (resp. f [x0, x1,...,xp+1] ≤ 0).

Thus defined, a function <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> is 1-convex if it is an ordinary convex function; it is 0-convex if it is increasing (in the wide sense); it is (−1)-convex if it is nonnegative.

Let us now introduce a practical notation to denote the set of p-convex functions and the set of p-concave functions.

**Definition 2.3** Let p ≥ −1 be an integer.


We also set

$$
\mathcal{K}^p(I) = \mathcal{K}^p\_+(I) \cup \mathcal{K}^p\_-(I) \qquad \text{and} \qquad \mathcal{K}^p = \mathcal{K}^p\_+ \cup \mathcal{K}^p\_-.
$$

The following proposition shows that both sets *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) and *<sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ) are convex cones whose intersection is precisely the real linear space of all polynomials of degree less than or equal to <sup>p</sup>. A similar description of the sets *<sup>K</sup>*<sup>p</sup> <sup>+</sup> and *<sup>K</sup>*<sup>p</sup> <sup>−</sup> will be given in Corollary 4.6.

**Proposition 2.4** *For any* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, the sets <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *and <sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ) *are convex cones. These cones are opposite in the sense that* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *if and only if* <sup>−</sup><sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>−</sup>(I )*. Moreover, the intersection <sup>K</sup>*<sup>p</sup> +(I )∩*K*<sup>p</sup> <sup>−</sup>(I ) *is the real linear space of all polynomials of degree less than or equal to* p*.*

*Proof* That the sets *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) and *<sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ) are convex cones is trivial; indeed, if f<sup>1</sup> and <sup>f</sup><sup>2</sup> lie in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) for instance, then so does c1f<sup>1</sup> + c2f<sup>2</sup> for any c1, c<sup>2</sup> ≥ 0. By definition of *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) and *<sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ), these cones are clearly opposite. Now, let f lie in *K*p <sup>+</sup>(I ) <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ) and let x<sup>0</sup> < ··· < xp be p + 1 points in I . By (2.2), for any x ∈ I we must have

$$f(\mathbf{x}) - P\_p[f](\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_p; \mathbf{x}) \ = \mathbf{0},$$

which shows that f is a polynomial of degree at most p. Conversely, using (2.2) again, we can readily see that any such polynomial lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup>(I ).

We now present an important lemma. It is interesting in its own right and will be very useful in the subsequent chapters. A variant of this result can be found in Kuczma [61, Lemma 15.7.2].

Recall first that for any <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup>, any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, and any <sup>x</sup> <sup>∈</sup> <sup>I</sup> such that <sup>x</sup>+<sup>p</sup> <sup>∈</sup> <sup>I</sup> , we have

$$
\Delta^p f(\mathbf{x}) = p! f[\mathbf{x}, \mathbf{x} + 1, \dots, \mathbf{x} + p]. \tag{2.8}
$$

**Lemma 2.5** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let <sup>I</sup>*p+<sup>1</sup> *denote the set of tuples of* <sup>I</sup>p+<sup>1</sup> *whose components are pairwise distinct. A function* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *(resp. <sup>K</sup>*<sup>p</sup> <sup>−</sup>(I )*) if and only if the restriction of the map*

$$(z\_0, \ldots, z\_p) \mapsto f[z\_0, \ldots, z\_p]$$

*to I*p+<sup>1</sup> *is increasing (resp. decreasing) in each place. In particular, if* I *is not upper bounded, then for any function* <sup>f</sup> *lying in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *(resp. <sup>K</sup>*<sup>p</sup> <sup>−</sup>(I )*), the function* pf *is increasing (resp. decreasing) on* I *.*

*Proof* Using the definition of p-convexity and the standard recurrence relation (2.3) for divided differences, we can see that <sup>f</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) if and only if, for any pairwise distinct x0,...,xp+<sup>1</sup> ∈ I , we have

$$\frac{f[\mathbf{x}\_1, \mathbf{x}\_2 \dots, \mathbf{x}\_{p+1}] - f[\mathbf{x}\_0, \mathbf{x}\_2 \dots, \mathbf{x}\_{p+1}]}{\mathbf{x}\_1 - \mathbf{x}\_0} \ge \mathbf{0}.$$

Equivalently, for any pairwise distinct x0,...,xp+<sup>1</sup> ∈ I , we have

$$f \ge \mathbf{x}\_0 \quad \Rightarrow \quad f[\mathbf{x}\_1, \mathbf{x}\_2, \dots, \mathbf{x}\_{p+1}] - f[\mathbf{x}\_0, \mathbf{x}\_2, \dots, \mathbf{x}\_{p+1}] \ge 0.$$

The latter condition exactly means that the map defined in the statement is increasing in the first place. Since this map is symmetric, it must be increasing in each place. The second part of the lemma follows from (2.8).

We end this section with a second lemma, which provides some important connections between higher order convexity and higher order differentiability. In fact, these connections can be derived (sometimes tediously) from various results given in the references mentioned in the beginning of this section, especially the book by Kuczma [61, Chapter 15]. However, for the sake of self-containment we provide a detailed proof in Appendix A.

**Lemma 2.6** *Let* <sup>I</sup> *be an nonempty open real interval and let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then the following assertions hold.*


*Proof* See Appendix A.

#### **2.3 A Key Lemma**

Let <sup>p</sup> <sup>∈</sup> <sup>N</sup>, a > 0, and <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>. Combining Newton's interpolation formula (2.1) with identity (2.8), we can readily see that the unique interpolating polynomial of f with nodes at the p points a, a + 1,...,a + p − 1 takes the form

$$P\_{p-1}[f](a, a+1, \ldots, a+p-1; \mathbf{x}) \ = \sum\_{j=0}^{p-1} {n-a \choose j} \Delta^j f(a). \tag{2.9}$$

If p = 0, then this polynomial is naturally the zero polynomial, which is assumed to have degree −1. Moreover, using (2.2) we can immediately see that the corresponding interpolation error at any x > 0 is

$$f(\mathbf{x}) - \sum\_{j=0}^{p-1} \binom{\mathbf{x} - a}{j} \Delta^j f(a) = (\mathbf{x} - a)^{\underline{p}} f[a, a+1, \dots, a+p-1, \mathbf{x}].\tag{2.10}$$

Now, the right side of (2.10) is actually the remainder of the (p−1)th degree Newton expansion of f (x) about x = a (see, e.g., Graham et al. [41, Section 5.3]). Note also that formula (2.10) is a pure identity in the sense that it is valid without any restriction on the form of f (x).

Using (2.9) and (2.10) we see that, for any a > 0, any <sup>x</sup> <sup>≥</sup> 0, any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, and any <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, the quantity <sup>ρ</sup><sup>p</sup> <sup>a</sup> [g](x) defined in (1.7) is precisely the interpolation error at a + x when considering the interpolating polynomial of g with nodes at a, a + 1,...,a + p − 1. We then immediately derive the following identities:

$$\rho\_a^p[\mathbf{g}](\mathbf{x}) = \mathbf{g}(a+\mathbf{x}) - P\_{p-1}[\mathbf{g}](a, a+1, \dots, a+p-1; a+\mathbf{x}), \tag{2.11}$$

$$\rho\_a^p[\mathbf{g}](\mathbf{x}) = \mathbf{x}^p \mathbf{g}[a, a+1, \dots, a+p-1, a+\mathbf{x}].\tag{2.12}$$

We note that identity (2.12) also extends to the case when x ∈ {0, 1,...,p − 1}, even if g is not differentiable. Indeed, in this case we must have ρ<sup>p</sup> <sup>a</sup> [g](x) = 0 by (2.11).

We now end this chapter with a key lemma that will be used repeatedly in this book. Although this lemma is rather technical, it is at the root of various fundamental convergence results of our theory. Recall first that, for any <sup>k</sup> <sup>∈</sup> <sup>N</sup>, the symbol εk(x) stands for the sign of xk.

**Lemma 2.7** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*,* <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*p*, and* a > <sup>0</sup> *be so that* <sup>f</sup> *is* <sup>p</sup>*-convex or* <sup>p</sup>*-concave on* [a,∞)*. Then, for any* x ≥ 0*, we have*

$$\begin{aligned} 0 \le \pm \varepsilon\_{p+1}(\mathbf{x}) \, \rho\_a^{p+1}[f](\mathbf{x}) \le \pm \left| \binom{\mathbf{x}-1}{p} \right| \left( \Delta^p f(a+\mathbf{x}) - \Delta^p f(a) \right), \\ \le \pm \left| \binom{\mathbf{x}-1}{p} \right| \sum\_{j=0}^{\lceil \chi \rceil -1} \Delta^{p+1} f(a+j), \end{aligned}$$

*where* <sup>±</sup> *stands for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *or <sup>K</sup>*<sup>p</sup> <sup>−</sup> *. Moreover, if* <sup>x</sup> ∈ {0, <sup>1</sup>,...,p} *(i.e.,* εp+1(x) <sup>=</sup> <sup>0</sup>*), then* <sup>ρ</sup>p+<sup>1</sup> <sup>a</sup> [<sup>f</sup> ](x) <sup>=</sup> <sup>0</sup>*.*

*Proof* If <sup>x</sup> ∈ {0, <sup>1</sup>,...,p}, then we have that <sup>ρ</sup>p+<sup>1</sup> <sup>a</sup> [<sup>f</sup> ](x) <sup>=</sup> 0 by (2.11), and then the inequalities hold trivially. Let us now assume that x /∈ {0, 1,...,p}, which means that εp+1(x) = 0. Negating <sup>f</sup> if necessary, we may assume that it lies in *<sup>K</sup>*<sup>p</sup> +. By (2.12) we then have

$$
\varepsilon\_{p+1}(\mathbf{x})\,\rho\_a^{p+1}[f](\mathbf{x}) \, = \,\varepsilon\_{p+1}(\mathbf{x})\,\mathbf{x} \\
\underline{\xrightarrow{p+1}} f[a, a+1, \dots, a+p, a+\mathbf{x}] \ge 0.
$$

Hence, using identities (2.3) and (2.8) and Lemma 2.5, we obtain

$$\begin{aligned} 0 &\le \varepsilon\_{p+1}(\mathbf{x}) \, \rho\_a^{p+1} \, [f](\mathbf{x}) \\ &= \varepsilon\_{p+1}(\mathbf{x}) \, \mathbf{x}^{\frac{p+1}{p+1}} f[a, a+1, \dots, a+p, a+\mathbf{x}] \\ &= \varepsilon\_{p+1}(\mathbf{x}) \, (\mathbf{x}-1) \frac{p}{\varepsilon} (f[a+\mathbf{x}, a+1, \dots, a+p] - f[a, a+1, \dots, a+p]) \\ &\le \varepsilon\_{p+1}(\mathbf{x}) \, \binom{\mathbf{x}-1}{p} (\Delta^p f(a+\mathbf{x}) - \Delta^p f(a)) \\ &\le \varepsilon\_{p+1}(\mathbf{x}) \binom{\mathbf{x}-1}{p} (\Delta^p f(a+\lceil \mathbf{x} \rceil) - \Delta^p f(a)), \end{aligned}$$

which proves the first two inequalities. The third one can be immediately proved using a telescoping sum.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 3 Uniqueness and Existence Results**

In this chapter, we establish Theorems 1.4 and 1.5 and show that, under the assumptions of these theorems, the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subset of <sup>R</sup>+. We also discuss the particular case where the sequence n → g(n) is summable. Lastly, we provide historical notes on Krull-Webster's theory and some of its improvements.

Although their proofs are short and elementary, the main results given in this chapter are of utmost importance. They constitute the fundamental cornerstone of the whole theory developed in this book.

#### **3.1 Main Results**

We start this chapter by establishing a slightly improved version of our uniqueness Theorem 1.5. We state this new version in Theorem 3.1 below and provide a very short proof. Let us first note that any solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation f <sup>=</sup> <sup>g</sup> satisfies trivially the equations

$$f(n) = f(\mathbf{l}) + \sum\_{k=1}^{n-1} \mathbf{g}(k), \qquad n \in \mathbb{N}^\*;\tag{3.1}$$

$$f(\mathbf{x} + n) = f(\mathbf{x}) + \sum\_{k=0}^{n-1} g(\mathbf{x} + k), \qquad n \in \mathbb{N}.\tag{3.2}$$

Moreover, using (1.4), (1.7), (3.1), and (3.2), we can easily derive the identity

$$f(\mathbf{x}) = f(\mathbf{l}) + f\_n^p[\mathbf{g}](\mathbf{x}) + \rho\_n^{p+1}[f](\mathbf{x}), \qquad n \in \mathbb{N}^\*. \tag{3.3}$$

© The Author(s) 2022 J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0\_3

21

We also observe that the identity obtained by setting p = 0 in (3.3) can also be derived by subtracting (3.2) from (3.1).

**Theorem 3.1 (Uniqueness)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> *. Suppose that* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is a solution to the equation* f <sup>=</sup> <sup>g</sup> *that lies in <sup>K</sup>*p*. Then, the following assertions hold.*


$$f(\mathbf{x}) = f(\mathbf{l}) + \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x})\,.$$

*(c) The sequence* <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] *converges uniformly on any bounded subset of* <sup>R</sup><sup>+</sup> *to* f − f (1)*.*

*Proof* We clearly have that <sup>f</sup> <sup>∈</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> . Assertion (a) then follows from Lemma 2.7 and the squeeze theorem. Assertion (b) follows from assertion (a) and identity (3.3). Now, let <sup>E</sup> be any bounded subset of <sup>R</sup>+. Using again identity (3.3) and Lemma 2.7, for large integer n we obtain

$$\begin{aligned} \left| \sup\_{\mathbf{x} \in E} \left| f\_n^p[\mathbf{g}](\mathbf{x}) - f(\mathbf{x}) + f(\mathbf{1}) \right| \right| &= \sup\_{\mathbf{x} \in E} \left| \rho\_n^{p+1}[f](\mathbf{x}) \right| \\ &\le \sup\_{\mathbf{x} \in E} \left| \binom{\mathbf{x} - \mathbf{1}}{p} \right| \cdot \sum\_{j=0}^{\lceil \sup E \rceil - 1} \left| \Delta^{p+1} f(n+j) \right|. \end{aligned}$$

This establishes assertion (c).

*Example 3.2* Using Theorem 3.1 with g(x) = ln x and p = 1, we obtain that all solutions <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lying in *<sup>K</sup>*<sup>1</sup> to the equation f (x) <sup>=</sup> ln <sup>x</sup> are of the form f (x) = c + ln -(x), where <sup>c</sup> <sup>∈</sup> <sup>R</sup>. We thus simply retrieve both Bohr-Mollerup's Theorem 1.1 and Gauss' limit (1.6), as expected. We also observe that the set *<sup>K</sup>*<sup>1</sup> cannot be replaced with *<sup>K</sup>*<sup>0</sup> in this characterization. For example, the function

$$f(\mathbf{x}) = \ln \Gamma(\mathbf{x}) + \ln(1 + \frac{1}{2}\sin(2\pi\mathbf{x})) $$

is also a solution lying in *<sup>K</sup>*<sup>0</sup> to the equation f (x) <sup>=</sup> ln <sup>x</sup>. ♦

*Remark 3.3* We note that the assumption that ln f is convex in Bohr-Mollerup's Theorem 1.1 can be easily replaced with the fact that ln <sup>f</sup> lies in *<sup>K</sup>*<sup>1</sup> <sup>+</sup> (without using the uniqueness Theorem 3.1). Indeed, if ln <sup>f</sup> is convex on [n,∞) for some <sup>n</sup> <sup>∈</sup> <sup>N</sup>, then using (3.2) we have that

$$
\ln f(\mathbf{x}) = \ln f(\mathbf{x} + \mathbf{n}) - \sum\_{k=0}^{n-1} \ln(\mathbf{x} + k), \qquad \mathbf{x} > \mathbf{0},
$$

and hence ln <sup>f</sup> must be convex on <sup>R</sup><sup>+</sup> (as a finite sum of convex functions on <sup>R</sup>+). We can also replace *<sup>K</sup>*<sup>1</sup> <sup>+</sup> with *<sup>K</sup>*1; indeed, assuming that ln <sup>f</sup> lies in *<sup>K</sup>*<sup>1</sup> <sup>−</sup>, we would obtain that ln f (x) <sup>=</sup> ln <sup>x</sup> lies in *<sup>K</sup>*<sup>0</sup> <sup>−</sup> by Lemma 2.6(b), a contradiction. ♦

*Remark 3.4 (A Proof of Bohr-Mollerup's Theorem)* We have seen in Example 3.2 how both Bohr-Mollerup's theorem and Gauss' limit can be retrieved using our results. Let us now examine our proof in a self-contained way, using the needed arguments only. Let <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be an eventually convex solution to the equation f (x) = ln x. The nature of this equation shows that it is actually enough to assume that x > 1 to find the form of f (x). For any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 1, we then have

$$f(\mathbf{n}) = f(\mathbf{l}) + \sum\_{k=1}^{n-1} \ln k \qquad \text{and} \qquad f(\mathbf{x} + \mathbf{n}) = f(\mathbf{x}) + \sum\_{k=0}^{n-1} \ln(\mathbf{x} + k)$$

and hence also the identity

$$f(\mathbf{x}) = f(\mathbf{l}) + \left(\sum\_{k=1}^{n-1} \ln k - \sum\_{k=0}^{n-1} \ln(\mathbf{x} + k) + \mathbf{x} \ln n\right) + \rho\_n(\mathbf{x}),$$

where

$$\rho\_n(\mathbf{x}) = f(\mathbf{x} + n) - f(n) - \mathbf{x} \ln n.$$

To conclude the proof, we only need to show that, for each x > 1, the sequence <sup>n</sup> → ρn(x) converges to zero. Let <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> be so that <sup>f</sup> is convex on [n,∞). Using the convexity of f we then obtain the following two inequalities

$$f(n+1) \le \left(1 - \frac{1}{\chi}\right) f(n) + \frac{1}{\chi} f(\chi+n)\,.$$

$$f(n+\chi) \le \frac{1}{\chi} f(n+1) + \left(1 - \frac{1}{\chi}\right) f\left(\chi+n+1\right)\,.$$

Using these inequalities and the identity f (n + 1) − f (n) = ln n, we obtain

$$\begin{aligned} 0 \le \rho\_n(\mathbf{x}) &= f(\mathbf{x} + n) - f(n+1) - (\mathbf{x} - 1)\ln n \\ &\le (\mathbf{x} - \mathbf{l}) \left( f(\mathbf{x} + n + \mathbf{l}) - f(\mathbf{x} + n) - \ln n \right) = (\mathbf{x} - \mathbf{l})\ln(\mathbf{l} + \frac{\mathbf{x}}{n}). \end{aligned}$$

The proof is now complete since the latter expression converges to zero as n → ∞. This shows to which extent the proofs of Bohr-Mollerup's theorem and Gauss' limit can be short and elementary. Note that a variant of this proof can be derived from the proof of Webster's uniqueness theorem [98, Theorem 3.1]. ♦

Now that we have established the uniqueness Theorem 3.1, let us prepare the ground for the existence theorem. Using the definition of ρ<sup>p</sup> <sup>a</sup> [g](x) given in (1.7), we can easily derive the following two identities

$$
\rho\_a^p[\mathbf{g}](p) = \Delta^p \mathbf{g}(a) \; ; \tag{3.4}
$$

$$
\left[\rho\_a^p \text{[g]}(\mathbf{x}) - \rho\_a^{p+1} \text{[g]}(\mathbf{x}) = \binom{\mathbf{x}}{p} \rho\_a^p \text{[g]}(p) \,. \tag{3.5}
$$

These identities clearly show that the inclusions *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> and *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> hold for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We will see in Proposition 4.2 that these inclusions are actually strict.

Now, the following straightforward identities will also be useful as we continue

$$f\_{n+1}^{p} \text{[g]}(\mathbf{x}) - f\_n^{p} \text{[g]}(\mathbf{x}) = -\rho\_n^{p+1} \text{[g]}(\mathbf{x});\tag{3.6}$$

$$\left[f\_n^{p}\left[\mathbf{g}\right](\mathbf{x}+\mathbf{l})-f\_n^{p}\left[\mathbf{g}\right](\mathbf{x})=\mathbf{g}(\mathbf{x})-\rho\_n^{p}\left[\mathbf{g}\right](\mathbf{x})\right.\tag{3.7}$$

For any integers 1 ≤ m ≤ n, from (3.6) we obtain

$$f\_n^{p}[\mathbf{g}](\mathbf{x}) = f\_m^p[\mathbf{g}](\mathbf{x}) - \sum\_{k=m}^{n-1} \rho\_k^{p+1}[\mathbf{g}](\mathbf{x})\,,\tag{3.8}$$

which shows that, for any x > 0, the convergence of the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) is equivalent to the summability of the sequence <sup>n</sup> → <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g](x).

We now establish a slightly improved version of our existence Theorem 1.4. We first present a technical lemma, which follows straightforwardly from Lemma 2.7.

**Lemma 3.5** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*,* <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*p*, and* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *be so that* <sup>g</sup> *is* <sup>p</sup>*-convex or* <sup>p</sup>*-concave on* [m,∞)*. Then, for any* x ≥ 0 *and any integer* n ≥ m*, we have*

$$\left|\sum\_{k=m}^{n-1} \rho\_k^{p+1} \mathbf{l} \mathbf{g} \mathbf{l}(\mathbf{x})\right| \le \left| \binom{\mathbf{x}-1}{p} \right| \sum\_{j=0}^{\lceil \mathbf{x} \rceil -1} |\Delta^p \mathbf{g}(n+j) - \Delta^p \mathbf{g}(m+j)|.$$

*Proof* For any fixed <sup>x</sup> <sup>≥</sup> 0, the sequence <sup>k</sup> → <sup>ρ</sup>p+<sup>1</sup> <sup>k</sup> [g](x) for k ≥ m does not change in sign by Lemma 2.7 and hence we have

$$\left|\sum\_{k=m}^{n-1} \rho\_k^{p+1} \mathbf{[g](x)}\right| = \sum\_{k=m}^{n-1} \left|\rho\_k^{p+1} \mathbf{[g](x)}\right| \\
\leq \left| \binom{\boldsymbol{\chi}-1}{p} \right| \sum\_{j=0}^{\lceil \boldsymbol{\chi} \rceil -1} \left| \sum\_{k=m}^{n-1} \Delta^{p+1} g(k+j) \right|,$$

where the inner sum clearly telescopes to pg(n <sup>+</sup> j ) <sup>−</sup> pg(m <sup>+</sup> j ).

$$^{24}$$

$$\mathbb{D}$$

**Theorem 3.6 (Existence)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*p*. The following assertions hold.*


$$f(\mathbf{x}) := \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x})\,, \qquad \mathbf{x} > \mathbf{0},$$

*is a solution to the equation* f = g *that is* p*-concave (resp.* p*-convex) on any unbounded subinterval* <sup>I</sup> *of* <sup>R</sup><sup>+</sup> *on which* <sup>g</sup> *is* <sup>p</sup>*-convex (resp.* <sup>p</sup>*-concave). Moreover, we have* f (1) = 0 *and*

$$\left| \left| f\_n^{p} [\mathbf{g}](\mathbf{x}) - f(\mathbf{x}) \right| \right| \le \left\lceil \mathbf{x} \right\rceil \left| \binom{\mathbf{x} - \mathbf{l}}{p} \right| \left| \Delta^{p} \mathbf{g}(\mathbf{n}) \right|, \qquad \mathbf{x} > \mathbf{0}, \ n \in I \cap \mathbb{N}^\*.$$

*If* p ≥ 1*, we also have the following tighter inequality*

$$|f\_n^{p^p}[g](\mathbf{x}) - f(\mathbf{x})| \le \left| \binom{\mathbf{x} - 1}{p} \right| \left| \Delta^{p-1} g(n + \mathbf{x}) - \Delta^{p-1} g(n) \right|, \quad \mathbf{x} > \mathbf{0}, \ n \in I \cap \mathbb{N}^\*.$$

*(c) The sequence* <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] *converges uniformly on any bounded subset of* <sup>R</sup><sup>+</sup> *to* f *.*

*Proof* We have that <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> . By Lemma 2.7, it follows immediately that g lies in *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> , and hence also in *<sup>R</sup>*<sup>p</sup> <sup>S</sup> by (3.4) and (3.5). This establishes assertion (a). Now, suppose for instance that <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. Let I be any unbounded subinterval of <sup>R</sup><sup>+</sup> on which <sup>g</sup> is <sup>p</sup>-convex and let <sup>m</sup> <sup>∈</sup> <sup>I</sup> <sup>∩</sup> <sup>N</sup>∗. For any x > 0, the sequence <sup>k</sup> → <sup>ρ</sup>p+<sup>1</sup> <sup>k</sup> [g](x) for k ≥ m does not change in sign by Lemma 2.7. Thus, since g lies in *<sup>D</sup>*<sup>p</sup> <sup>N</sup> , for any x > 0 the series

$$\sum\_{k=m}^{\infty} \rho\_k^{p+1} \text{[g]}(\mathbf{x})$$

converges by Lemma 3.5. By (3.8) it follows that the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) converges. Denoting the limiting function by f , we necessarily have f (1) = 0. Moreover, by (3.7) and assertion (a) we must have f = g.

Since <sup>g</sup> is <sup>p</sup>-convex on <sup>I</sup> , for every <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> the function <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] is clearly pconcave on I . (Note that the second sum in (1.4) is a polynomial of degree less than or equal to p in x, hence by Proposition 2.4 it is both p-convex and p-concave.) Since f is a pointwise limit of functions p-concave on I , it too is p-concave on I .

The claimed inequalities then follow from identity (3.3), Lemma 2.7, and the observation that the restriction of the sequence <sup>n</sup> → pg(n) to <sup>I</sup> <sup>∩</sup> <sup>N</sup><sup>∗</sup> increases to zero by Lemma 2.5. Indeed, for any x > 0 and any <sup>n</sup> <sup>∈</sup> <sup>I</sup> <sup>∩</sup> <sup>N</sup>∗, we then have

$$\begin{aligned} \left| f\_n^p[g](\mathbf{x}) - f(\mathbf{x}) \right| &= \left| \rho\_n^{p+1}[f](\mathbf{x}) \right| \le \left| \binom{\mathbf{x}-1}{p} \right| \left| \Delta^p f(n+\mathbf{x}) - \Delta^p f(n) \right| \\ &\le \left| \binom{\mathbf{x}-1}{p} \right| \sum\_{j=0}^{\lceil \mathbf{x} \rceil - 1} \left| \Delta^p g(j+n) \right| \le \lceil \mathbf{x} \rceil \left| \binom{\mathbf{x}-1}{p} \right| \left| \Delta^p g(n) \right|. \end{aligned}$$

This proves assertion (b). Assertion (c) immediately follows from the first inequality of assertion (b).

*Remark 3.7* We have shown in Theorems 3.1 and 3.6 that the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subset of <sup>R</sup>+. In fact, we have proved the slightly more general property that the sequence <sup>n</sup> → <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [<sup>f</sup> ] converges uniformly on any bounded subset of [0,∞) to 0. ♦

Theorems 3.1 and 3.6 show that the assumption that <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> constitutes a sufficient condition to ensure both the uniqueness (up to an additive constant) and existence of solutions to the equation f <sup>=</sup> <sup>g</sup> that lie in *<sup>K</sup>*p. Nevertheless, we can show that this condition is actually not quite necessary. We discuss and elaborate on this natural question in Appendix C.

We now present an important property of the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g]. Considering the straightforward identity

<sup>f</sup> <sup>p</sup>+<sup>1</sup> <sup>n</sup> [g](x) <sup>−</sup> <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) <sup>=</sup> <sup>x</sup> p+1 pg(n),

we immediately see that if the sequence

$$n \mapsto \left. f\_n^{p+1} \mathbf{[g](x) - f\_n^p \mathbf{[g](x)}} \right|$$

approaches zero for some <sup>x</sup> <sup>∈</sup> <sup>R</sup><sup>+</sup> \ {0, <sup>1</sup>,...,p}, then <sup>g</sup> must lie in *<sup>D</sup>*<sup>p</sup> <sup>N</sup>. More importantly, the identity above also shows that if <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>N</sup> and if the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) converges, then so does the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup>+<sup>1</sup> <sup>n</sup> [g](x) and both sequences converge to the same limit. Since the inclusion *<sup>D</sup>*<sup>p</sup> <sup>N</sup> <sup>⊂</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>N</sup> holds for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, we immediately obtain the following important proposition.

**Proposition 3.8** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. If* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>N</sup> *and if the sequence* <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) *converges, then for any integer* q ≥ p *the sequence*

$$n \mapsto \|f\_n^p[\mathbf{g}](\mathbf{x}) - f\_n^q[\mathbf{g}](\mathbf{x})\|$$

*converges to zero. Moreover, the convergence is uniform on any bounded subset of* <sup>R</sup>+*.*

Let us end this section with the following observation about our uniqueness and existence results. In Theorem 3.1, we have proved the uniqueness of the solution f that lies in *<sup>K</sup>*<sup>p</sup> by first proving that this solution necessarily lies in *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> . Although this latter asymptotic condition may seem a bit less natural than the assumption that <sup>f</sup> lies in *<sup>K</sup>*p, we could as well consider it as a sufficient condition to guarantee uniqueness. A similar observation can be made for the existence Theorem 3.6. We can therefore establish the following two alternative results.

**Proposition 3.9 (Uniqueness)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>*. Suppose that* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is a solution to the equation* f <sup>=</sup> <sup>g</sup> *that lies in <sup>R</sup>*p+<sup>1</sup> <sup>N</sup> *. Then assertion (b) of Theorem 3.1 holds, and hence* f *is unique (up to an additive constant).*

*Proof* This follows immediately from identity (3.3).

**Proposition 3.10 (Existence)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and suppose that the function* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>N</sup> *and has the property that, for each* x > 0*, the sequence* n → <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g](x) *is summable. Then* <sup>g</sup> *lies in <sup>R</sup>*<sup>p</sup> <sup>N</sup> *and there exists a unique (up to an additive constant) solution* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* f <sup>=</sup> <sup>g</sup> *that lies in <sup>R</sup>*p+<sup>1</sup> <sup>N</sup> *.*

*Proof* Since the sequence <sup>n</sup> → <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g](x) is summable, by (3.8) the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) converges. Denoting the limiting function by f , we necessarily have f (1) <sup>=</sup> 0. By (3.6), the function <sup>g</sup> necessarily lies in *<sup>R</sup>*p+<sup>1</sup> <sup>N</sup> , and hence also in *<sup>R</sup>*<sup>p</sup> N by (3.4) and (3.5). Thus, we must have f <sup>=</sup> <sup>g</sup> by (3.7) and <sup>f</sup> lies in *<sup>R</sup>*p+<sup>1</sup> <sup>N</sup> by (3.3).

*Example 3.11* Let us apply Proposition 3.9 to g(x) = ln x and p = 1. We then obtain the following alternative characterization of the gamma function (in the multiplicative notation).

*If* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> *is a solution to the equation* f (x <sup>+</sup> <sup>1</sup>) <sup>=</sup> x f (x) *having the asymptotic property that, for each* x > 0,

$$f(\mathbf{x} + \mathbf{n}) \sim \pi^{\chi} f(\mathbf{n}) \qquad \text{ as } \mathbf{n} \to \mathbb{N} \text{ } \infty,$$

*then* f (x) = c -(x) *for some* c > 0.

It is easy to see that this characterization also holds on the whole complex domain of the gamma function, namely <sup>C</sup> \ (−N). ♦

## **3.2 The Case when the Sequence** *g(n)* **Is Summable**

Let *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> be the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that the series ∞ <sup>k</sup>=<sup>1</sup> g(k) converges. We immediately observe that *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>0</sup> <sup>N</sup>. In this context, our uniqueness and existence results can be complemented by the following two theorems.

**Theorem 3.12 (Uniqueness)** *Let* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> *and suppose that* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is a solution to the equation* f <sup>=</sup> <sup>g</sup> *that lies in <sup>K</sup>*0*. Then, the following assertions hold.*


$$f(\mathbf{x}) = f(\infty) - \sum\_{k=0}^{\infty} g(\mathbf{x} + k) \dots$$

*(c) The series* x → <sup>∞</sup> <sup>k</sup>=<sup>0</sup> g(x <sup>+</sup> k) *converges uniformly on* <sup>R</sup><sup>+</sup> *to* f (∞) <sup>−</sup> <sup>f</sup> *.*

*Proof* The sequence n → f (n) converges by (3.1). Assuming for instance that f lies in *<sup>K</sup>*<sup>0</sup> <sup>+</sup>, for any x > 0 we obtain

$$f(\lfloor x \rfloor + n) \le f(x + n) \le f(\lceil x \rceil + n) \quad \text{for large integer } n.$$

Letting n →<sup>N</sup> ∞ in these inequalities and using the squeeze theorem, we get assertion (a). Assertion (b) follows from assertion (a) and identity (3.2). Now, for large integer n, by assertion (b) and identity (3.2) we have

$$\sup\_{\mathbf{x}\in\mathbb{R}\_{+}} \left| \sum\_{k=n}^{\infty} g(\mathbf{x} + k) \right| \\ = \sup\_{\mathbf{x}\in\mathbb{R}\_{+}} |f(\mathbf{x} + n) - f(\infty)| \\ \le |f(n) - f(\infty)|.$$

This proves assertion (c).

**Theorem 3.13 (Existence)** *Let* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*0*. The following assertions hold.*


$$f(\mathbf{x}) = \ -\sum\_{k=0}^{\infty} g(\mathbf{x} + k), \qquad \mathbf{x} > \mathbf{0}, \tag{3.9}$$

*is a solution to the equation* f = g *that is decreasing (resp. increasing) on any unbounded subinterval* <sup>I</sup> *of* <sup>R</sup><sup>+</sup> *on which* <sup>g</sup> *is increasing (resp. decreasing). Moreover, we have* f (x) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞ *and, for every* <sup>n</sup> <sup>∈</sup> <sup>I</sup> <sup>∩</sup> <sup>N</sup>∗*,*

$$\left|\sum\_{k=n}^{\infty} g(\alpha + k)\right| = \left|f(\alpha + n)\right| \le \left|f(n)\right|, \qquad \alpha > 0.$$

*(c) The series* x → <sup>∞</sup> <sup>k</sup>=<sup>0</sup> g(x <sup>+</sup> k) *converges uniformly on* <sup>R</sup><sup>+</sup> *to* <sup>−</sup><sup>f</sup> *.*

$$\mathbb{D}$$

*Proof* By Theorem 3.6, assertion (a) clearly holds (since <sup>g</sup> also lies in *<sup>D</sup>*<sup>0</sup> <sup>N</sup>) and, for each x > 0, the series (3.9) converges and is a solution to the equation f = g that satisfies the claimed monotonicity properties. Theorem 3.12 then shows that the function f vanishes at infinity. The rest of assertion (b) follows from (3.2). Assertion (c) is then immediate.

Theorems 3.12 and 3.13 motivate the following definition.

**Definition 3.14** For any S ∈ {N, <sup>R</sup>}, we let *<sup>D</sup>*−<sup>1</sup> <sup>S</sup> denote the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that, for each <sup>x</sup> <sup>∈</sup> S, the series

$$\sum\_{k=0}^{\infty} g(x+k)$$

converges and tends to zero as x →<sup>S</sup> ∞.

Clearly, this definition is consistent with our prior definition of *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> and we can immediately see that the inclusion *<sup>D</sup>*−<sup>1</sup> <sup>R</sup> <sup>⊂</sup> *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> holds. Moreover, by Theorem 3.13 we have that

$$\mathcal{D}\_{\mathbb{R}}^{-1} \cap \mathcal{K}^0 = \mathcal{D}\_{\mathbb{N}}^{-1} \cap \mathcal{K}^0. \tag{3.10}$$

*Example 3.15 (The Trigamma Function)* The trigamma function ψ<sup>1</sup> is defined on <sup>R</sup><sup>+</sup> as the derivative <sup>ψ</sup> of the digamma function. Hence, it has the property that

$$
\Delta \psi\_{\mathbb{I}}(\mathbf{x}) = D \Delta \psi(\mathbf{x}) = \left. -\mathbf{1}/\mathbf{x}^2 \right|\_{\mathbb{I}} \quad \text{for all } \mathbf{x} > \mathbf{0}.
$$

Since the function <sup>ψ</sup> lies in *<sup>D</sup>*<sup>1</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> <sup>−</sup>, one can show (see Proposition 4.12 in the next chapter) that <sup>ψ</sup><sup>1</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>N</sup> <sup>∩</sup>*K*<sup>0</sup> <sup>−</sup>. Now, the function g(x) = −1/x<sup>2</sup> clearly lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup> and hence also in *<sup>D</sup>*<sup>0</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup>. It also lies in *<sup>D</sup>*−<sup>1</sup> <sup>R</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup> by (3.10). Thus, by Theorems 3.6, 3.12, and 3.13, we see that the trigamma function ψ<sup>1</sup> is the unique decreasing solution f to the equation f = g that vanishes at infinity. Moreover, we have that

$$\psi\_{\mathbf{l}}(\mathbf{x}) = \sum\_{k=0}^{\infty} \frac{1}{(\mathbf{x} + k)^2} \qquad \text{and} \qquad \psi\_{\mathbf{l}}(\mathbf{l}) = \sum\_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}.$$

Furthermore, the sequence of functions

$$n \mapsto \sum\_{k=0}^{n-1} \frac{1}{(\alpha + k)^2} = \psi\_1(\alpha) - \psi\_1(\alpha + n)$$

converges uniformly on <sup>R</sup><sup>+</sup> to the function <sup>ψ</sup>1(x), and Theorem 3.13 provides the following inequalities

$$0 \le \psi\_1(\mathbf{x} + \mathbf{n}) = \sum\_{k=n}^{\infty} \frac{1}{(\mathbf{x} + \mathbf{k})^2} \le \psi\_1(\mathbf{n}), \qquad \mathbf{x} > \mathbf{0}, \ n \in \mathbb{N}^\*.$$

Finally, Theorem 3.6 provides the following additional inequalities

$$0 \le \psi\_1(n) - \psi\_1(\chi + n) \le \frac{\lceil \chi \rceil}{n^2}, \qquad \chi > 0, \ n \in \mathbb{N}^\*.$$

We will further investigate the trigamma function ψ<sup>1</sup> as a special polygamma function in Sect. 10.3. ♦

#### **3.3 Historical Notes**

As mentioned in Chap. 1, the uniqueness and existence result in the case when p = 1 was established in the pioneering work of Krull [54, 55] and then independently by Webster [97, 98] as a generalization of Bohr-Mollerup's theorem. We observe that it was also partially rediscovered by Dinghas [33]. In addition, we note that Krull's result was presented and somewhat revisited by Kuczma [56] (see also Kuczma [59] and Kuczma [60, pp. 114–118]) as well as by Anastassiadis [7, pp. 69–73]. To our knowledge, the only attempts to establish uniqueness and existence results for any value of p were made by Kuczma [60, pp. 118–121] and Ardjomande [9]. Independently of these latter results, an investigation of the special case when p = 2, illustrated by the Barnes G-function, was made by Rassias and Trif [86] (see our Appendix B).

We also observe that Gronau and Matkowski [44, 45] improved the multiplicative version of Krull's result by replacing the log-convexity property with the much weaker condition of geometrical convexity (see also Guan [46] for a recent application of this result), thus providing another characterization of the gamma function, which was later improved by Alzer and Matkowski [4] and Matkowski [68, 69]. (For further characterizations of the gamma function and generalizations, see also Anastassiadis [7] and Muldoon [79].)

Many other variants and improvements of Krull's result can actually be found in the literature. For instance, Anastassiadis [6] (see also Anastassiadis [7, p. 71]) generalized it by modifying the asymptotic condition. Rohde [88] also generalized it by modifying the convexity property. Gronau [42] proposed a variant of Krull's result and applied it to characterize the Euler beta and gamma functions and study certain spirals (see also Gronau [43]). Merkle and Ribeiro Merkle [71] proposed to combine Krull's approach with differentiation techniques to characterize the Barnes G-function. Himmel and Matkowski [48] also proposed improvements of Krull's result to characterize the beta and gamma functions.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 4 Interpretations of the Asymptotic Conditions**

In this chapter, we study some important properties of the sets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> and provide interpretations of the asymptotic condition that defines the set *<sup>R</sup>*<sup>p</sup> S .

We also investigate the sets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and show that they actually coincide and are independent of S (and hence we can remove this subscript). We also provide an interpretation of this common set *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and present some of its properties that will be very useful in the next chapters. In particular, we show that the intersection set *<sup>C</sup>*<sup>p</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> is precisely the set of functions <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>p</sup> for which g(p) eventually increases or decreases to zero (see Theorem 4.14).

#### **4.1 Some Properties of the Sets** *R<sup>p</sup>* **<sup>S</sup> and** *D<sup>p</sup>* **S**

Although the definition of the set *<sup>R</sup>*<sup>p</sup> <sup>S</sup> seems rather technical (see Definition 1.9), the following proposition shows that this set can be nicely characterized in terms of interpolating polynomials. We omit the proof for it follows immediately from (2.11) and (2.12).

**Proposition 4.1** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. A function* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *lies in <sup>R</sup>*<sup>p</sup> <sup>S</sup> *if and only if for each* x > <sup>0</sup> *such that* <sup>x</sup><sup>p</sup> = <sup>0</sup>*, we have that*

$$\text{sg}[a, a+1, \ldots, a+p-1, a+\mathbf{x}] \to 0 \qquad a \diamond \text{s} \cdot \infty.$$

*When* <sup>S</sup> <sup>=</sup> <sup>R</sup> *(resp.* <sup>S</sup> <sup>=</sup> <sup>N</sup>*), this latter condition means that* <sup>g</sup> *asymptotically coincides with its interpolating polynomial whose nodes are any* p *points equally spaced by* 1 *(resp. any* p *consecutive integers).*

Interestingly, from (3.2) and (3.3) we can also immediately derive the following alternative characterization of the set *<sup>R</sup>*<sup>p</sup> <sup>N</sup>. For any function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, we have

$$f \in \mathcal{R}\_{\mathbb{N}}^{0} \Leftrightarrow f(\mathbf{x}) = -\sum\_{k=0}^{\infty} \Delta f(\mathbf{x} + k) \,, \qquad \mathbf{x} > \mathbf{0};$$

$$f \in \mathcal{R}\_{\mathbb{N}}^{p+1} \Leftrightarrow f(\mathbf{x}) = f(\mathbf{l}) + \lim\_{n \to \infty} f\_n^p[\Delta f](\mathbf{x}) \,, \qquad \mathbf{x} > \mathbf{0}.$$

(Note that we have already used these equivalences in the proofs of the uniqueness Theorems 3.1 and 3.12 and Proposition 3.9.)

We now present a proposition that reveals some interesting inclusions among the sets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> . In particular, it shows that just as the sets *<sup>D</sup>*<sup>0</sup> <sup>S</sup>, *<sup>D</sup>*<sup>1</sup> <sup>S</sup>, *<sup>D</sup>*<sup>2</sup> <sup>S</sup>,... are increasingly nested, so are the sets *<sup>R</sup>*<sup>0</sup> <sup>S</sup>, *<sup>R</sup>*<sup>1</sup> <sup>S</sup>, *<sup>R</sup>*<sup>2</sup> <sup>S</sup>,..., and hence each of these families defines a filtration.

**Proposition 4.2** *For any* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and any* <sup>S</sup> ∈ {N, <sup>R</sup>}*, the sets <sup>R</sup>*<sup>p</sup> <sup>S</sup> *and <sup>D</sup>*<sup>p</sup> <sup>S</sup> *are real linear spaces that satisfy the identity*

$$\mathcal{R}\_{\mathbb{S}}^{p} = \mathcal{R}\_{\mathbb{S}}^{p+1} \cap \mathcal{D}\_{\mathbb{S}}^{p} \tag{4.1}$$

*and the strict inclusions*

$$\mathcal{R}\_{\mathbb{S}}^{p} \subsetneq \mathcal{R}\_{\mathbb{S}}^{p+1} \qquad and \qquad \mathcal{D}\_{\mathbb{S}}^{p} \subsetneq \mathcal{D}\_{\mathbb{S}}^{p+1}.$$

*When* p ≥ 1 *we also have*

$$
\mathcal{R}\_{\mathbb{S}}^{p} \subsetneq \mathcal{D}\_{\mathbb{S}}^{p}.
$$

*Finally, when* p = 0 *we have*

$$\mathcal{D}\_{\mathbb{R}}^{0} = \mathcal{R}\_{\mathbb{R}}^{0} \subsetneq \mathcal{R}\_{\mathbb{N}}^{0} \subsetneq \mathcal{D}\_{\mathbb{N}}^{0} \dots$$

*Proof* It is clear that the sets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> are closed under linear combinations; hence they are real linear spaces. Identity (4.1) then follows immediately from (3.4) and (3.5). This identity also shows that *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> . As already observed, we also have *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> trivially. Now, identity (2.11) shows that the polynomial function <sup>x</sup> → <sup>x</sup><sup>p</sup> lies in *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> \ *<sup>R</sup>*<sup>p</sup> <sup>S</sup> and we can easily see that it lies also in *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> \ *<sup>D</sup>*<sup>p</sup> <sup>S</sup> . The inclusion *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> follows from (4.1) and we can easily see that the 1-periodic function <sup>x</sup> → sin(2πx) lies in *<sup>D</sup>*<sup>p</sup> <sup>S</sup> \ *<sup>R</sup>*<sup>p</sup> <sup>S</sup> for any <sup>p</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> as well as in *<sup>D</sup>*<sup>0</sup> <sup>N</sup> \ *<sup>R</sup>*<sup>0</sup> N. Finally, let us now show that *<sup>R</sup>*<sup>0</sup> <sup>R</sup> - *<sup>R</sup>*<sup>0</sup> <sup>N</sup>. Using bump functions for instance, we can easily construct a smooth function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> such that for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, we have <sup>f</sup> <sup>=</sup> 0 on the interval [<sup>n</sup> <sup>−</sup> <sup>1</sup>, n <sup>−</sup> <sup>1</sup> <sup>n</sup> ] and f (n <sup>−</sup> <sup>1</sup> <sup>2</sup><sup>n</sup> ) = 1. Such a function clearly lies in *<sup>R</sup>*<sup>0</sup> <sup>N</sup>. However, it does not vanish at infinity, i.e., it does not lie in *<sup>R</sup>*<sup>0</sup> <sup>R</sup>.

We now present an important result that will be used repeatedly as we continue. It actually follows from the second of the following straightforward identities

$$
\rho\_{a+1}^{p}[f](\mathbf{x}) - \rho\_a^{p}[f](\mathbf{x}) = \rho\_a^{p}[\Delta f](\mathbf{x})\,,\tag{4.2}
$$

$$
\rho\_a^{p+1} \lfloor f \rfloor(\mathbf{x} + 1) - \rho\_a^{p+1} \lfloor f \rfloor(\mathbf{x}) = \rho\_a^p \lfloor \Delta f \rfloor(\mathbf{x}) \,. \tag{4.3}
$$

**Proposition 4.3** *Let* j,p <sup>∈</sup> <sup>N</sup> *be such that* <sup>j</sup> <sup>≤</sup> <sup>p</sup>*. The following assertions hold.*

*(a) If* <sup>f</sup> <sup>∈</sup> *<sup>R</sup>*<sup>p</sup> <sup>S</sup> *, then* jf <sup>∈</sup> *<sup>R</sup>*p−<sup>j</sup> <sup>S</sup> *. (b)* <sup>f</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> *if and only if* jf <sup>∈</sup> *<sup>D</sup>*p−<sup>j</sup> <sup>S</sup> *.*

*Proof* If <sup>f</sup> lies in *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> , then f lies in *<sup>R</sup>*<sup>p</sup> <sup>S</sup> by (4.3). On the other hand, it is clear that <sup>f</sup> lies in *<sup>D</sup>*p+<sup>1</sup> <sup>S</sup> if and only if f lies in *<sup>D</sup>*<sup>p</sup> <sup>S</sup> .

It is easy to see that a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> whose difference f lies in *<sup>R</sup>*<sup>p</sup> <sup>S</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup> need not lie in *<sup>R</sup>*p+<sup>1</sup> <sup>S</sup> . For instance, the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation f (x) <sup>=</sup> sin(2πx) for x > 0 does not lie in *<sup>R</sup>*<sup>1</sup> <sup>S</sup> but its difference f <sup>=</sup> 0 lies in *<sup>R</sup>*<sup>0</sup> <sup>S</sup>. However, we will see in Corollary 4.10 that, if <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*p−1, then the implication in assertion (a) of Proposition 4.3 becomes an equivalence.

*Remark 4.4* In view of Proposition 4.3(b), it is natural to wonder whether there exists a set *<sup>D</sup>* of functions from <sup>R</sup><sup>+</sup> to <sup>R</sup> having the property that <sup>f</sup> <sup>∈</sup> *<sup>D</sup>*<sup>0</sup> <sup>S</sup> if and only if f ∈ *D*. However, such a set does not exist. Indeed, identities (3.1) and (3.2) show that if <sup>f</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>S</sup>, then necessarily f lies in *<sup>D</sup>*−<sup>1</sup> <sup>S</sup> . Conversely, for any <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*−<sup>1</sup> <sup>S</sup> , there are infinitely many functions <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> that satisfy f <sup>=</sup> <sup>g</sup> but that do not lie in *<sup>D</sup>*<sup>0</sup> <sup>S</sup>. ♦

It is clear that, for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, if both functions <sup>h</sup> and <sup>g</sup> <sup>−</sup> <sup>h</sup> lie in the space *R*p <sup>S</sup> , then so does the function <sup>g</sup>. For instance, if <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> has the asymptotic property that

$$g(\mathbf{x}) - P(\mathbf{x}) \to 0 \qquad \text{as } \mathbf{x} \to \mathbf{s} \text{ } \infty$$

for some polynomial P of degree less than or equal to p − 1, then g must lie in *R*p <sup>S</sup> . Indeed, <sup>P</sup> clearly lies in *<sup>R</sup>*<sup>p</sup> <sup>S</sup> and we also have that <sup>g</sup> <sup>−</sup> <sup>P</sup> lies in *<sup>R</sup>*<sup>0</sup> <sup>S</sup> (which is included in *<sup>R</sup>*<sup>p</sup> <sup>S</sup> by Proposition 4.2). Thus, the space *<sup>R</sup>*<sup>p</sup> <sup>S</sup> contains not only every polynomial of degree less than or equal to p − 1 but also every function that behaves asymptotically like a polynomial of degree less than or equal to p − 1. To give another illustration of the property above, we observe for instance that both functions ln x and Hx − ln x (the latter tends to Euler's constant γ as x →<sup>S</sup> ∞) lie in *<sup>R</sup>*<sup>1</sup> <sup>R</sup> and hence so does the function Hx, which means that, for each a ≥ 0,

$$H\_{\chi+a} - H\_{\chi} \to 0 \qquad \text{as } \mathfrak{x} \to \infty$$

(which, a priori, is a not completely trivial result).

These examples illustrate the fact that the spaces

$$\mathcal{R}\_{\mathbb{S}}^{\infty} = \bigcup\_{p \ge 0} \mathcal{R}\_{\mathbb{S}}^p \qquad \text{and} \qquad \mathcal{D}\_{\mathbb{S}}^{\infty} = \bigcup\_{p \ge 0} \mathcal{D}\_{\mathbb{S}}^p$$

are very rich and contain a huge variety of functions, including not only all the functions that have polynomial behaviors at infinity as discussed above, and in particular all the rational functions, but also many other functions. We observe, however, that they do not contain any strictly increasing exponential function. For instance, if g(x) <sup>=</sup> <sup>2</sup>x, then pg(x) <sup>=</sup> <sup>2</sup><sup>x</sup> for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, and this function does not vanish at infinity. Actually, such exponential functions grow asymptotically much faster than polynomial functions and may remain eventually p-convex even after adding nonconstant 1-periodic functions. For instance, both functions 2<sup>x</sup> and <sup>2</sup><sup>x</sup> <sup>+</sup> sin(2πx) are eventually <sup>p</sup>-convex for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>.

*Remark 4.5* Using (1.7) and (4.1), we also obtain *<sup>R</sup>*<sup>p</sup> <sup>S</sup> = *R*<sup>∞</sup> <sup>S</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>S</sup> for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>. ♦

#### **4.2 The Intersection Sets** *R<sup>p</sup>* **<sup>S</sup> ∩** *K<sup>p</sup>* **and** *D<sup>p</sup>* **S ∩** *Kp*

Let us now consider the set *<sup>K</sup>*<sup>p</sup> and its subsets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*p. As these sets will be used repeatedly throughout this book, it is important to study their basic properties. In this section, we present a number of results about these sets that will be very useful in the subsequent chapters.

Let us first observe that the set *<sup>K</sup>*<sup>p</sup> is not a linear space. For instance, using Lemma 2.6 we can see that both functions

$$f(\mathbf{x}) = \mathbf{x}^{p+1} + \sin \mathbf{x} \qquad \text{and} \qquad \mathbf{g}(\mathbf{x}) = \mathbf{x}^{p+1}$$

lie in *<sup>K</sup>*<sup>p</sup> but <sup>f</sup> <sup>−</sup> <sup>g</sup> does not. We also have that f does not lie in *<sup>K</sup>*<sup>p</sup> (because <sup>D</sup>pf <sup>=</sup> Dpf does not lie in *<sup>K</sup>*0), which shows that *<sup>K</sup>*<sup>p</sup> is not closed under the operator .

The following corollary shows that *<sup>K</sup>*<sup>p</sup> is actually the union of two convex cones. This result is an immediate consequence of Proposition 2.4.

**Corollary 4.6** *For any* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, the sets <sup>K</sup>*<sup>p</sup> <sup>+</sup> *and <sup>K</sup>*<sup>p</sup> <sup>−</sup> *are convex cones. These cones are opposite in the sense that* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *if and only if* <sup>−</sup><sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>−</sup>*. Moreover, the intersection <sup>K</sup>*<sup>p</sup> <sup>+</sup> <sup>∩</sup>*K*<sup>p</sup> <sup>−</sup> *is the real linear space of all the real functions on* <sup>R</sup><sup>+</sup> *that are eventually polynomials of degree less than or equal to* p*.*

It is now clear that *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> is also the union of two opposite convex cones that is not a linear space. For instance, both functions

$$f(\mathbf{x}) = 2\ln \mathbf{x} + \frac{\sin \mathbf{x}}{\mathbf{x}^2} \qquad \text{and} \qquad \mathbf{g}(\mathbf{x}) = 2\ln \mathbf{x}$$

lie in *<sup>D</sup>*<sup>1</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> (use, e.g., Theorem 4.14(b) below) but <sup>f</sup> <sup>−</sup> <sup>g</sup> does not.

Now, the following proposition shows that, just as the sets *<sup>C</sup>*0, *<sup>C</sup>*1, *<sup>C</sup>*2,... are decreasingly nested, so are the sets *<sup>K</sup>*−<sup>1</sup> , *<sup>K</sup>*0, *<sup>K</sup>*1,.... Thus, this latter family defines a descending filtration and we can therefore introduce the intersection set

$$\kappa^{\infty} = \bigcap\_{p \ge 0} \mathcal{K}^p.$$

**Proposition 4.7** *For any integer* <sup>p</sup> ≥ −1*, we have <sup>K</sup>*p+<sup>1</sup> -*<sup>K</sup>*p*.*

*Proof* Let <sup>f</sup> lie in *<sup>K</sup>*p+<sup>1</sup> for some integer <sup>p</sup> ≥ −1. Suppose for instance that <sup>f</sup> lies in *<sup>K</sup>*p+<sup>1</sup> <sup>+</sup> and let <sup>I</sup> be an unbounded subinterval of <sup>R</sup><sup>+</sup> on which <sup>f</sup> is (p+1)-convex. Let *<sup>I</sup>*p+<sup>2</sup> denote the set of tuples of <sup>I</sup>p+<sup>2</sup> whose components are pairwise distinct. By Lemma 2.5, it follows that the restriction of the map

$$f(z\_0, \ldots, z\_{p+1}) \mapsto f[z\_0, \ldots, z\_{p+1}]$$

to *<sup>I</sup>*p+<sup>2</sup> is increasing in each place. If <sup>f</sup> does not lie in *<sup>K</sup>*<sup>p</sup> <sup>−</sup>, then there are p + 2 points x<sup>0</sup> < ··· < xp+<sup>1</sup> in I such that f [x0,...,xp+1] > 0. But then, f is pconvex on the interval (xp+1,∞), and hence it lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>, which establishes the inclusion. To see that the inclusion is strict, using Lemma 2.6 we just observe that the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$f(\mathbf{x}) = \mathbf{x}^{p+1} + \sin \mathbf{x} \qquad \text{for } \mathbf{x} > \mathbf{0}$$

lies in *<sup>K</sup>*<sup>p</sup> \ *<sup>K</sup>*p+1.

Interestingly, Proposition 4.7 shows that the assumption that <sup>g</sup> lies in *<sup>K</sup>*p, which occurs in many statements (e.g., in Theorem 3.6), can be given equivalently by the condition that <sup>g</sup> lies in <sup>∪</sup>q≥p*K*<sup>q</sup> .

We now present two useful propositions. The first one is very important: it shows that the sets *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> coincide and are actually independent of S.

**Proposition 4.8** *For any* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, we have*

$$\mathcal{R}\_{\mathbb{R}}^p \cap \mathbb{K}^p = \mathcal{D}\_{\mathbb{R}}^p \cap \mathbb{K}^p = \mathcal{R}\_{\mathbb{N}}^p \cap \mathbb{K}^p = \mathcal{D}\_{\mathbb{N}}^p \cap \mathbb{K}^p.$$

*Proof* We already know that *<sup>R</sup>*<sup>p</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> <sup>S</sup> (cf. Proposition 4.2) and *<sup>D</sup>*<sup>p</sup> <sup>R</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> N. Moreover, we have that *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>⊂</sup> *<sup>R</sup>*<sup>p</sup> <sup>S</sup> by Theorem 3.6. It remains to show that

*D*p <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> <sup>R</sup>. Let <sup>g</sup> lie in *<sup>D</sup>*<sup>p</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*p. Suppose for instance that <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> and let a > 0 be so that <sup>g</sup> is <sup>p</sup>-convex on [a,∞). By Lemma 2.5, pg is increasing on [a,∞). Thus, for any x ≥ a + 1, we have

$$
\Delta^p \mathbf{g}(\lfloor \mathbf{x} \rfloor) \le \Delta^p \mathbf{g}(\mathbf{x}) \le \Delta^p \mathbf{g}(\lceil \mathbf{x} \rceil).
$$

Letting <sup>x</sup> → ∞ and using the squeeze theorem, we obtain that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>R</sup>.

**Proposition 4.9** *If* <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, then the following assertions are equivalent:*

$$\text{(i) } f \in \mathcal{R}\_{\mathbb{S}}^{p+1}, \text{ (ii) } f \in \mathcal{D}\_{\mathbb{S}}^{p+1}, \text{ (iii) } \Delta f \in \mathcal{R}\_{\mathbb{S}}^{p}, \text{ (iv) } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p} \text{ } \Delta f \in \mathcal{D}\_{\mathbb{S}}^{p}$$

*Proof* By Proposition 4.2, we clearly have that (i) implies (ii) and that (iii) implies (iv). By Proposition 4.3, we also have that (i) implies (iii) and that (ii) implies (iv). Finally, by Theorem 3.1, we have that (iv) implies (i).

Combining Proposition 4.3 with Propositions 4.7 and 4.9, we immediately obtain the following corollary, which naturally complements Proposition 4.3.

**Corollary 4.10** *Let* j,p <sup>∈</sup> <sup>N</sup> *be such that* <sup>j</sup> <sup>≤</sup> <sup>p</sup>*. If* <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*p−1*, then we have* <sup>f</sup> <sup>∈</sup> *<sup>R</sup>*<sup>p</sup> <sup>S</sup> *if and only if* jf <sup>∈</sup> *<sup>R</sup>*p−<sup>j</sup> <sup>S</sup> *.*

Due to Proposition 4.8, we will henceforth write *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> instead of *<sup>D</sup>*<sup>p</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*p. In view of (3.10), we will also write *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> instead of *<sup>D</sup>*−<sup>1</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*0.

Since the set *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> is clearly a central object of our theory (cf. our existence Theorem 3.6), it is important to investigate its properties. In this respect, we have the following two propositions.

**Proposition 4.11** *Let* j,p <sup>∈</sup> <sup>N</sup> *be such that* <sup>j</sup> <sup>≤</sup> <sup>p</sup>*. The following assertions hold.*


*Proof* This result immediately follows from Lemma 2.6(b) and Proposition 4.3.

**Proposition 4.12** *Let* j,p <sup>∈</sup> <sup>N</sup> *be such that* <sup>j</sup> <sup>≤</sup> <sup>p</sup> *and let* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>j</sup> *. The following assertions hold.*


*Proof* Assertion (a) follows from assertions (c) and (d) of Lemma 2.6. To see that assertion (b) holds, it is enough to show that, for any <sup>p</sup> <sup>≥</sup> 1, we have <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> + if and only if <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*p−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p−<sup>1</sup> <sup>+</sup> .

Suppose first that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. Then <sup>g</sup> lies in *<sup>K</sup>*p−<sup>1</sup> <sup>+</sup> by assertion (a). Let x > 1 be so that <sup>g</sup> is <sup>p</sup>-convex on [<sup>x</sup> <sup>−</sup> <sup>1</sup>,∞). Then p−1<sup>g</sup> is increasing on [x − 1,∞) by assertion (a) and Proposition 4.11(a). By the mean value theorem, there exist ξ <sup>1</sup> <sup>x</sup> , ξ <sup>2</sup> <sup>x</sup> ∈ (0, 1) such that

$$\begin{aligned} \Delta^p g(\mathbf{x} - \mathbf{l}) &= \Delta^{p-1} g'(\mathbf{x} - \mathbf{l} + \boldsymbol{\xi}\_\mathbf{x}^\mathbf{l}) \leq \Delta^{p-1} g'(\mathbf{x}) \\ &\leq \Delta^{p-1} g'(\mathbf{x} + \boldsymbol{\xi}\_\mathbf{x}^\mathbf{l}) = \Delta^p g(\mathbf{x}). \end{aligned}$$

Letting <sup>x</sup> → ∞, we see that <sup>g</sup> lies in *<sup>D</sup>*p−<sup>1</sup> <sup>R</sup> by the squeeze theorem.

Conversely, suppose that <sup>g</sup> lies in *<sup>D</sup>*p−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p−<sup>1</sup> <sup>+</sup> . Then <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> by assertion (a). Let x > 0 be so that g is (p − 1)-convex on [x,∞) and let t ∈ (x, x + 1). Then p−1<sup>g</sup> is increasing on [x,∞) by Proposition 4.11(a), and hence we have

$$
\Delta^{p-1} \mathcal{g}'(\mathbf{x}) \le \Delta^{p-1} \mathcal{g}'(t) \le \Delta^{p-1} \mathcal{g}'(\mathbf{x} + 1).
$$

Integrating on t ∈ (x, x + 1), we obtain

$$
\Delta^{p-1} g'(\mathbf{x}) \le \Delta^p g(\mathbf{x}) \le \Delta^{p-1} g'(\mathbf{x} + 1).
$$

Letting <sup>x</sup> → ∞, we see that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>R</sup>.

*Remark 4.13* If a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is such that f lies in *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then <sup>f</sup> need not lie in *<sup>K</sup>*p+1, even if f lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p. For instance, the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$f(\mathbf{x}) = \frac{1}{2^{\chi}} \left( 1 + \frac{1}{3} \sin \mathbf{x} \right) \qquad \text{for } \mathbf{x} > \mathbf{0}$$

lies in *<sup>K</sup>*<sup>0</sup> <sup>−</sup> \ *<sup>K</sup>*1. Indeed, 2xf (x) is 2π-periodic and negative while 2xf (x) is 2πperiodic and change in sign from <sup>x</sup> <sup>=</sup> <sup>π</sup> <sup>6</sup> to x = π. However, the function f lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup> for 2xf (x) is 2π-periodic and positive. This example shows that the implications in Proposition 4.11 cannot be equivalences. ♦

If a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then by Proposition 4.11 the function pg lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup>*K*0, i.e., pg eventually increases or decreases to zero. However, a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> that satisfies this latter property need not lie in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, unless <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> or <sup>p</sup> <sup>=</sup> 0. The example introduced in Remark 4.13 illustrates this phenomenon when p = 1. On the other hand, when g lies in *<sup>C</sup>*p, by Proposition 4.12 we have that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> if and only if <sup>g</sup>(p) lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0.

We gather these important observations in the following theorem.

**Theorem 4.14** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. The following assertions hold.*


*Proof* Assertion (a) immediately follows from Propositions 4.3 and 4.11. Assertion (b) immediately follows from Proposition 4.12.

*Remark 4.15* It is not difficult to see that the function g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> sin <sup>x</sup><sup>2</sup> vanishes at infinity while its derivative does not. Theorem 4.14(b) shows that if g lies in *<sup>C</sup>*<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> for some p, q <sup>∈</sup> <sup>N</sup> such that <sup>p</sup> <sup>≤</sup> <sup>q</sup>, then all the functions g(p), g(p+1) ,...,g(q) vanish at infinity. ♦

Propositions 4.11 and 4.12 do not provide any information on the functions g and <sup>g</sup> when <sup>g</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> and *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0, respectively. The following proposition fills this gap under the additional assumptions that g and <sup>g</sup> lie in *<sup>K</sup>*0, respectively.

#### **Proposition 4.16** *The following assertions hold.*

*(a) If* <sup>g</sup> *lies in <sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>−</sup> *and is such that* g *lies in <sup>K</sup>*0*, then* g *lies in <sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> +*. (b) If* <sup>g</sup> *lies in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>−</sup> *and is such that* <sup>g</sup> *lies in <sup>K</sup>*<sup>0</sup> *(or equivalently,* <sup>g</sup> *lies in <sup>K</sup>*1*), then* <sup>g</sup> *lies in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> +*.*

*Proof* Let us first prove assertion (a). Since g is eventually decreasing, g must be eventually negative. But since g also lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0, it must be eventually increasing to zero. On the other hand, since <sup>g</sup> lies in *<sup>D</sup>*0, g must lie in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> . This proves assertion (a).

Let us now prove assertion (b). Since g is eventually decreasing, g must be eventually negative. Since <sup>g</sup> lies in *<sup>K</sup>*<sup>0</sup> (and hence <sup>g</sup> lies in *<sup>K</sup>*<sup>1</sup> by Lemma 2.6), we have that <sup>g</sup> lies in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> (since *<sup>D</sup>*<sup>0</sup> <sup>S</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>1</sup> <sup>S</sup>). Proposition 4.12 then tells us that g lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0, and hence it must be eventually increasing to zero.

It remains to show that <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> . Let x > 1 be so that g is decreasing and g is increasing on Ix = [x − 1,∞). By the mean value theorem, for any integer k ≥ x there exist ξk ∈ (0, 1) such that

$$
\Delta \mathbf{g}(k-1) \, = \, \mathbf{g}'(k-1+\xi\_k) \, \le \, \mathbf{g}'(k).
$$

For any integers m, n such that x ≤ m ≤ n, we then have

$$\log(n-1) - \operatorname{g}(m-1) = \sum\_{k=m}^{n-1} \Delta \operatorname{g}(k-1) \le \sum\_{k=m}^{n-1} \operatorname{g}'(k) \le 0.1$$

Letting <sup>n</sup> <sup>→</sup><sup>N</sup> <sup>∞</sup>, we can see that <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> .

*Remark 4.17* The assumption that g lies in *<sup>K</sup>*<sup>0</sup> cannot be ignored in Proposition 4.16(a). Indeed, take for instance the function g = f , where f is the function defined in Remark 4.13. We have seen that this function lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. However, it is not difficult to see that g does not lie in *<sup>K</sup>*0. Similarly, the assumption that <sup>g</sup> lies in *<sup>K</sup>*<sup>0</sup> cannot be ignored in Proposition 4.16(b). Indeed, one can show that the same function <sup>g</sup> has the property that <sup>g</sup> does not lie in *<sup>K</sup>*0. To give another example, one can show that the function

$$g(x) := \frac{1}{x^3} (x + \sin x)$$

lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> whereas its derivative <sup>g</sup> does not lie in *<sup>K</sup>*0. ♦

We also have the following two corollaries, in which the symbols *R* and *D* can be used interchangeably.

**Corollary 4.18** *Let* <sup>g</sup> *lie in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>K</sup>*<sup>p</sup> <sup>−</sup>*) for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then* <sup>g</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>S</sup> *if and only if there exists a solution* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* f <sup>=</sup> <sup>g</sup> *that lies in D*p+<sup>1</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup> *(resp. <sup>D</sup>*p+<sup>1</sup> <sup>S</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> +*).*

*Proof* The *D*-version immediately follows from Theorem 3.6 and Proposition 4.3(b). The *R*-version then follows from Proposition 4.9 and Proposition 4.3(a).

**Corollary 4.19** *For any* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, we have that*

$$\mathcal{D}^p \cap \mathcal{K}\_+^p \subset \mathcal{K}\_-^{p-1} \qquad \text{and} \qquad \mathcal{D}^p \cap \mathcal{K}\_-^p \subset \mathcal{K}\_+^{p-1}.$$

*More precisely, if* <sup>g</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *and is* <sup>p</sup>*-convex (resp.* <sup>p</sup>*-concave) on an unbounded interval of* <sup>R</sup>+*, then on this interval it is also* (p <sup>−</sup> <sup>1</sup>)*-concave (resp.* (p − 1)*-convex).*

*Proof* Let <sup>g</sup> lie in *<sup>D</sup>*p∩*K*<sup>p</sup> <sup>+</sup>. Then the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined in the existence Theorem 3.6 is <sup>p</sup>-concave on any unbounded subinterval of <sup>R</sup><sup>+</sup> on which <sup>g</sup> is <sup>p</sup>convex. By Lemma 2.6(b), the function g = f is also (p − 1)-concave on this interval.

We end this chapter by providing a characterization of the set *<sup>R</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>=</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> in terms of interpolating polynomials. We also give a corollary that will be very useful in the subsequent chapters.

**Proposition 4.20** *Let* <sup>g</sup> *lie in <sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then we have that* <sup>g</sup> *lies in <sup>D</sup>*<sup>p</sup> S *if and only if for any pairwise distinct* x0,...,xp > 0*, we have that*

$$\mathbf{g}[a+\mathbf{x}\_0, \dots, a+\mathbf{x}\_p] \to 0 \qquad a \text{ as } a \to\_{\mathbf{S}} \infty.$$

*This latter condition means that* g *asymptotically coincides with its interpolating polynomial with any* p *nodes.*

*Proof* (Necessity) Suppose for instance that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. By Corollary 4.19, it also lies in *<sup>K</sup>*p−<sup>1</sup> <sup>−</sup> . Let x0,...,xp > 0 be any pairwise distinct points and let a > 0 be so that g is p-convex and (p − 1)-concave on [a,∞). Then the map

$$\{\mathbf{x} \mapsto \mathbf{g} \| \mathbf{x} + \mathbf{x}\_0, \dots, \mathbf{x} + \mathbf{x}\_p \}$$

is nonpositive on [a,∞) and, by Lemma 2.5, it is also increasing on [a,∞). By (2.8), we then have

$$\frac{1}{p!} \Delta^p \mathbf{g}(a) = \mathbf{g}[a, a+1, \dots, a+p] \le \mathbf{g}[a+p+\mathbf{x}\_0, \dots, a+p+\mathbf{x}\_p] \le 0,$$

where the left side increases to zero as a →<sup>S</sup> ∞.

(Sufficiency) This immediately follows from Propositions 4.1 and 4.8.

**Corollary 4.21** *Let* <sup>g</sup> *lie in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>K</sup>*<sup>p</sup> <sup>−</sup>*) for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, let* a > <sup>0</sup> *and* <sup>b</sup> <sup>≥</sup> <sup>0</sup>*, and let* <sup>h</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be defined by the equation* h(x) <sup>=</sup> g(ax+b) *for* x > <sup>0</sup>*. Then*

*(a)* <sup>h</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>K</sup>*<sup>p</sup> −*); (b) if* <sup>g</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*, then* <sup>h</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> −*).*

*Proof* The result is trivial if p = 0. So let us assume that p ≥ 1 and for instance that g is p-convex on [s,∞) for some s > 0. Using (2.4), we can easily show that for any pairwise distinct points x0,...,xp > 0 we have

$$h[\mathbf{x}\_0, \dots, \mathbf{x}\_p] = a^p \operatorname{g}[a\mathbf{x}\_0 + b, \dots, a\mathbf{x}\_p + b].$$

This immediately shows that <sup>h</sup> is <sup>p</sup>-convex on [ <sup>1</sup> <sup>a</sup> (s−b),∞) and hence that assertion (a) holds. Now, suppose that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. Then <sup>h</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> by assertion (a). Moreover, for any pairwise distinct x0,...,xp > 0, by Proposition 4.20 we have that

$$h[n+\mathbf{x}\_0, \dots, n+\mathbf{x}\_p] = a^p \operatorname{g}[an+a\mathbf{x}\_0+b, \dots, an+a\mathbf{x}\_p+b] \to 0$$

as <sup>n</sup> <sup>→</sup><sup>N</sup> <sup>∞</sup>. Hence <sup>h</sup> also lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> by Proposition 4.20. This establishes assertion (b).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 5 Multiple log** *-***-Type Functions**

In this chapter, we introduce and investigate the map, denote it by , that carries any function g lying in

$$\bigcup\_{p\geq 0} (\mathcal{D}^p \cap \mathcal{K}^p)$$

into the unique solution f to the equation f = g that arises from the existence Theorem 3.6. We call these solutions *multiple* log --*type functions* and we investigate certain of their properties. We also discuss the search for simple conditions on the function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to ensure the existence of g. Further important properties of these functions, including counterparts of several classical properties of the gamma function, will be investigated in the next three chapters.

The map is actually a central concept of the theory developed here. Its definition and properties seem to show that it is as fundamental as the basic antiderivative operation. In the next chapter we show that both concepts actually share many common features.

## **5.1 The Map and Its Basic Properties**

In this section, we introduce the map and discuss some of its basic properties. We begin with the following important definition.

**Definition 5.1 (Asymptotic Degree)** The *asymptotic degree* of a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, denoted deg <sup>f</sup> , is defined by the equation

$$\deg f \;=\; -1 + \min\{q \in \mathbb{N} : f \in \mathcal{D}\_{\mathbb{R}}^q\}.$$

For instance, if <sup>f</sup> is a polynomial of degree <sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then deg <sup>f</sup> <sup>=</sup> <sup>p</sup>. If f (x) <sup>=</sup> 0 or f (x) <sup>=</sup> <sup>1</sup> <sup>x</sup> , or f (x) <sup>=</sup> ln(<sup>1</sup> <sup>+</sup> <sup>1</sup> <sup>x</sup> ), then deg f = −1. If f (x) = sin x or f (x) <sup>=</sup> <sup>x</sup> <sup>+</sup> sin <sup>x</sup>, or f (x) <sup>=</sup> <sup>2</sup>x, then deg <sup>f</sup> = ∞.

It is easy to see that the identity

$$
\deg f \, = \, 1 + \deg \Delta f
$$

holds whenever deg f is a nonnegative integer. However, it is no longer true when deg <sup>f</sup> = −1. For instance, for the function f (x) <sup>=</sup> 0 or the function f (x) <sup>=</sup> <sup>1</sup> <sup>x</sup> , we have deg f = deg f = −1. This shows that in general we have

$$(\deg f)\_+ = \text{l} + \deg \Delta f.$$

We are now ready to introduce the map . Here and throughout, the symbols dom() and ran() denote the domain and range of , respectively.

**Definition 5.2 (The Map )** We define the map : dom() <sup>→</sup> ran(), where

$$\text{dom}(\Sigma) = \bigcup\_{p \ge 0} (\mathcal{D}^p \cap \mathcal{K}^p),$$

by the following condition: if <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then

$$\Sigma \mathfrak{g} = \lim\_{n \to \infty} f\_n^p[\mathfrak{g}].\tag{5.1}$$

It is important to note that the map is well defined; indeed, if g lies in both sets *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and *<sup>D</sup>*<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> for some integers 0 <sup>≤</sup> p<q, then by Proposition 3.8 both sequences <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] and <sup>n</sup> → <sup>f</sup> <sup>q</sup> <sup>n</sup> [g] have the same limiting function. Thus, in view of Proposition 4.7, we can see that condition (5.1) holds for p = 1 + deg g.

Thus defined, it is clear that the map is one-to-one; indeed, if g<sup>1</sup> = g<sup>2</sup> for some functions g<sup>1</sup> and g<sup>2</sup> lying in dom(), then g<sup>1</sup> = g<sup>1</sup> = g<sup>2</sup> = g2. This map is even a bijection since we have restricted its codomain to its range. We then have the following immediate result.

**Proposition 5.3** *The map is a bijection and its inverse is the restriction of the difference operator to* ran()*.*

Just as the indefinite integral (or antiderivative) of a function g is the class of functions whose derivative is g, the indefinite sum (or antidifference) of a function g is the class of functions whose difference is g (see, e.g., Graham et al. [41, p. 48]). Recall also that any two indefinite integrals of a function differ by a constant while any two indefinite sums of a function differ by a 1-periodic function. The map now enables one to refine the definition of an indefinite sum as follows.

**Definition 5.4** We say that the *principal indefinite sum* of a function g lying in dom() is the class of functions <sup>c</sup> <sup>+</sup> g, where <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

*Example 5.5 (The Log-Gamma Function)* If g(x) = ln x, then we have g(x) = ln -(x), and we simply write

$$
\Sigma \ln x = \ln \Gamma(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.
$$

Thus, the principal indefinite sum of the function x → ln x is the class of functions x → c + ln -(x), where <sup>c</sup> <sup>∈</sup> <sup>R</sup>. With some abuse of language, we can say that the principal indefinite sum of the log function is the log-gamma function. ♦

Exactly as for the difference operator , we will sometimes add a subscript to the symbol to specify the variable on which the map acts. For instance, x g(2x) stands for the function obtained by applying to the function x → g(2x) while g(2x) stands for the value of the function g at 2x.

The following proposition provides some straightforward properties of the map that will be very useful as we continue.

**Proposition 5.6** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. The following assertions hold.*


$$\Sigma \lg(n) = \sum\_{k=1}^{n-1} g(k) \,, \qquad n \in \mathbb{N}^\*, \tag{5.2}$$

$$
\Sigma \mathbf{g}(\mathbf{x} + \mathbf{n}) = \Sigma \mathbf{g}(\mathbf{x}) + \sum\_{k=0}^{n-1} \mathbf{g}(\mathbf{x} + k) \,, \qquad n \in \mathbb{N}, \tag{5.3}
$$

*and*

$$\left[\Sigma g(\mathbf{x})\right] = f\_n^p[\mathbf{g}](\mathbf{x}) + \rho\_n^{p+1}[\Sigma \mathbf{g}](\mathbf{x})\,,\qquad n \in \mathbb{N}^\*.\tag{5.4}$$

*Proof* Assertions (a) and (b) immediately follow from Theorems 3.1 and 3.6 and Proposition 4.9. Identities (5.2)–(5.4) follow from (3.1)–(3.3).

Quite surprisingly, we observe that if <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then g need not lie in *<sup>K</sup>*p+1. The example given in Remark 4.13 illustrates this observation.

We also have that

$$
\deg \Sigma \mathfrak{g} = \mathfrak{l} + \deg \mathfrak{g}
$$

whenever degg is a nonnegative integer; but this property no longer holds if degg = −1. For instance, considering the functions

$$\log(\mathbf{x}) = \frac{2 - \mathbf{x}}{\mathbf{x}(\mathbf{x} + 1)(\mathbf{x} + 2)} \qquad \text{and} \qquad \Sigma \mathbf{g}(\mathbf{x}) = \frac{\mathbf{x} - 1}{\mathbf{x}(\mathbf{x} + 1)},$$

we have deg g = degg = −1. Thus, in general we have

$$(\deg \Sigma g)\_+ = \lg + \deg g.$$

We now give two important propositions, which were essentially proved by Webster [98, Theorem 5.1] in the special case when p = 1.

**Proposition 5.7** *Let* <sup>g</sup><sup>1</sup> *and* <sup>g</sup><sup>2</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>c</sup>1, c<sup>2</sup> <sup>∈</sup> <sup>R</sup>*. If* <sup>c</sup>1g<sup>1</sup> <sup>+</sup> <sup>c</sup>2g<sup>2</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*, then*

$$
\Sigma(c\_1\mathbf{g}\_1 + c\_2\mathbf{g}\_2) = c\_1\Sigma\mathbf{g}\_1 + c\_2\Sigma\mathbf{g}\_2.
$$

*Proof* It is clear that if <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, then we have cg <sup>=</sup> cg for any <sup>c</sup> <sup>∈</sup> <sup>R</sup>. Now, suppose that <sup>g</sup>1, <sup>g</sup>2, and <sup>g</sup><sup>1</sup> <sup>+</sup> <sup>g</sup><sup>2</sup> lie in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and let us show that

$$
\Sigma(\mathbf{g}\_1 + \mathbf{g}\_2) = \Sigma \mathbf{g}\_1 + \Sigma \mathbf{g}\_2.
$$

It is actually enough to consider the following two cases.


$$
\Delta \mathbf{g}\_2 = \Sigma((\mathbf{g}\_1 + \mathbf{g}\_2) + (-\mathbf{g}\_1)) \\
= \Sigma(\mathbf{g}\_1 + \mathbf{g}\_2) - \Sigma \mathbf{g}\_1.
$$

This completes the proof.

**Proposition 5.8** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup>*) for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, let* <sup>a</sup> <sup>≥</sup> <sup>0</sup>*, and let* <sup>h</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be defined by the equation* h(x) <sup>=</sup> g(x <sup>+</sup> a) *for* x > <sup>0</sup>*. Then* <sup>h</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup>*) and*

$$
\Sigma h(\mathbf{x}) = \Sigma\_{\mathbf{x}} \mathbf{g}(\mathbf{x} + a) = \Sigma \mathbf{g}(\mathbf{x} + a) - \Sigma \mathbf{g}(a + 1).
$$

*Proof* Define a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$f(\mathbf{x}) = \Sigma \mathbf{g}(\mathbf{x} + a) - \Sigma \mathbf{g}(a + 1)$$

$$\mathbb{D}$$

for x > 0. By Corollary 4.21, f is a solution to the equation f = h that lies in *K*p <sup>−</sup> (resp. *<sup>K</sup>*<sup>p</sup> <sup>+</sup>) and satisfies f (1) = 0. Hence, h = f , as required.

*Example 5.9 (See Webster [98])* For any a > 0, consider the function ga : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by

$$\log\_a(\mathbf{x}) = \ln \frac{\mathbf{x}}{\mathbf{x} + a} = \ln \mathbf{x} - \ln(\mathbf{x} + a) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

Then ga lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup> (and also in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> <sup>−</sup>) and Propositions 5.7 and 5.8 show that

$$\Sigma \mathfrak{g}\_a(\mathfrak{x}) \;= \; \ln \frac{\Gamma(\mathfrak{x}) \Gamma(a+1)}{\Gamma(\mathfrak{x}+a)} \;.$$

Also, since ga is concave on <sup>R</sup>+, we have that ga is convex on <sup>R</sup>+. As Webster [98, p. 615] observed, this is "a not completely trivial result, but one immediate from the approach adopted here." ♦

*Example 5.10 (A Rational Function)* The function

$$g(x) \, = \frac{x^4 + 1}{x^3 + x} \, = \, x + \frac{1}{x} - \frac{2x}{x^2 + 1}$$

clearly lies in *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*2. Using Proposition 5.7, we then have

$$
\Delta \mathbf{g}(\mathbf{x}) = \begin{pmatrix} \mathbf{x} \\ \mathbf{2} \end{pmatrix} + H\_{\mathbf{x}-1} - \mathbf{2} \,\Sigma h(\mathbf{x}),
$$

where the function

$$h(\mathbf{x}) = \frac{\mathbf{x}}{\mathbf{x}^2 + 1} = \mathfrak{R}\left(\frac{1}{\mathbf{x} + i}\right)$$

lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. Now, recalling that x <sup>1</sup> <sup>x</sup> = Hx−1, it is not difficult to see that

$$\Sigma h(\mathbf{x}) \, = \, c + \Re H\_{\mathbf{x} + \mathbf{i} - \mathbf{k}}$$

for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>, where the function <sup>z</sup> → Hz on <sup>C</sup> \ (−N∗) satisfies the identity

$$H\_{\mathbb{C}} = \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \frac{1}{z+k} \right).$$

Indeed, the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$f(\mathbf{x}) = \Re H\_{\mathbf{x} + \mathbf{i} - 1} = \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \frac{\mathbf{x} + k - 1}{(\mathbf{x} + k - 1)^2 + 1} \right), \qquad \mathbf{x} > \mathbf{0},$$

lies in *<sup>K</sup>*<sup>0</sup> and satisfies f <sup>=</sup> <sup>h</sup>. ♦

We also have the following surprising proposition, which says that if a function <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> for some integers 0 <sup>≤</sup> <sup>p</sup> <sup>≤</sup> <sup>q</sup>, then it actually lies in

$$
\mathbb{K}\_-^p \cap \mathbb{K}\_+^{p+1} \cap \mathbb{K}\_-^{p+2} \cap \mathbb{K}\_+^{p+3} \cap \dots \cap \mathbb{K}\_\pm^q,
$$

where the subscripts alternate in sign. The same property holds for g.

**Proposition 5.11** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup> <sup>∩</sup> *<sup>K</sup>*p+<sup>1</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then it lies in K*p+<sup>1</sup> <sup>+</sup> *and* g *lies in <sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> <sup>∩</sup> *<sup>K</sup>*p+<sup>1</sup> <sup>−</sup> *.*

*Proof* Suppose that <sup>g</sup> lies in *<sup>K</sup>*p+<sup>1</sup> <sup>−</sup> . Since it also lies in *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p+<sup>1</sup> <sup>−</sup> , by Corollary 4.19 it must lie in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. By Corollary 4.6, g is eventually a polynomial of degree less than or equal to p. But then, using Corollary 4.6 again, g lies in *K*p+<sup>1</sup> <sup>+</sup> . The result about g is then trivial.

*Example 5.12* Let us apply Proposition 5.11 to the function g(x) = ln x with p = 1. We then obtain that

$$g \text{ lies in } \mathcal{D}^1 \cap \mathcal{K}\_-^1 \cap \mathcal{K}\_+^2 \cap \mathcal{K}\_-^3 \cap \mathcal{K}\_+^4 \cap \cdots$$

$$\text{while } \quad \Sigma \text{g lies in } \mathcal{D}^2 \cap \mathcal{K}\_+^1 \cap \mathcal{K}\_-^2 \cap \mathcal{K}\_+^3 \cap \mathcal{K}\_-^4 \cap \cdots,$$

where g(x) = ln -(x). Moreover, it is easy to see that <sup>g</sup> is 1-concave on <sup>R</sup>+, 2-convex on <sup>R</sup>+, and so on, and similarly for g. ♦

*Example 5.13* Applying Proposition 5.11 to the function g(x) = −<sup>1</sup> <sup>x</sup> ln x with p = 0, we obtain that

$$g \text{ lies in } \mathcal{D}^0 \cap \mathcal{K}\_+^0 \cap \mathcal{K}\_-^1 \cap \mathcal{K}\_+^2 \cap \mathcal{K}\_-^3 \cap \cdots$$

$$\text{while } \quad \Sigma \text{g lies in } \mathcal{D}^1 \cap \mathcal{K}\_-^0 \cap \mathcal{K}\_+^1 \cap \mathcal{K}\_-^2 \cap \mathcal{K}\_+^3 \cap \cdots \text{ } ,$$

where g(x) = γ1(x) − γ<sup>1</sup> is a generalized Stieltjes constant (see Sect. 10.7). Now, for every <sup>q</sup> <sup>∈</sup> <sup>N</sup>, we have <sup>g</sup>(q+1) (x) <sup>=</sup> 0 if and only if <sup>x</sup> <sup>=</sup> <sup>e</sup>Hq+<sup>1</sup> . Hence we can easily see that <sup>g</sup> is <sup>q</sup>-convex or <sup>q</sup>-concave on the unbounded interval (eHq+<sup>1</sup> ,∞). ♦

*Remark 5.14* Although the asymptotic degree of a function (see Definition 5.1) defines an important and useful concept, it is not always easy to compute. For

instance, we can show after some calculus that, for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, the function hp : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation (see Sect. 11.3)

$$h\_p(\mathbf{x}) = \frac{\mathbf{x}^p}{\ln(\mathbf{x} + \mathbf{l})} \qquad \text{for } \mathbf{x} > \mathbf{0}$$

has the asymptotic degree deg hp = p−1. Thus, it would be useful to have a simple formula to compute easily the asymptotic degree of any function. On this matter, let us consider the limiting value (when it exists)

$$e\_f = \lim\_{\mathfrak{x} \to \infty} \mathbf{x} \frac{\Delta f(\mathfrak{x})}{f(\mathfrak{x})},$$

which is inspired from the concept of the elasticity of a function f (see, e.g., Nievergelt [81]). Computing this limit for the function hp above for instance, we easily obtain ehp = p. Interestingly, we can observe empirically that many functions <sup>f</sup> lying in *<sup>K</sup>*<sup>0</sup> satisfy the double inequality

$$\lfloor e\_f \rfloor\_+ \le \lfloor 1 + \deg f \rfloor \le \lfloor 1 + e\_f \rfloor\_+ \dots$$

It would then be useful to find necessary and sufficient conditions on the function f for this double inequality to hold. ♦

#### **5.2 Multiple log***-***-Type Functions**

Barnes [14–16] introduced a sequence of functions -1, -<sup>2</sup>,..., called *multiple gamma functions*, that generalize the Euler gamma function. The restrictions of these functions to <sup>R</sup><sup>+</sup> are characterized by the equations

$$\begin{aligned} \Gamma\_{p+1}(\mathbf{x}+\mathbf{1}) &= \frac{\Gamma\_{p+1}(\mathbf{x})}{\Gamma\_p(\mathbf{x})},\\ \Gamma\_\mathbf{l}(\mathbf{x}) &= \Gamma(\mathbf{x}), \quad \Gamma\_p(\mathbf{l}) = \mathbf{1}, \qquad \text{for } \mathbf{x} > \mathbf{0} \text{ and } p \in \mathbb{N}^\*, \end{aligned}$$

together with the convexity condition

$$(-1)^{p+1} D^{p+1} \ln \Gamma\_p(\mathbf{x}) \ge 0, \qquad \mathbf{x} > \mathbf{0}.$$

For more recent references, see, e.g., Adamchik [1, 2] and Srivastava and Choi [93].

Thus defined, this sequence of functions satisfies the conditions

$$
\ln \Gamma\_{p+1}(\mathbf{x}) = -\Sigma \ln \Gamma\_p(\mathbf{x}) \qquad \text{and} \qquad \deg(\ln \circ \Gamma\_p) = \lfloor p \rfloor
$$

Moreover, it can be naturally extended to the case when p = 0 by setting -<sup>0</sup>(x) = 1/x.

Now, these observations motivate the following definition.

## **Definition 5.15** Let <sup>p</sup> <sup>∈</sup> <sup>N</sup>.


When p ≥ 1, exp ◦g reduces to the function <sup>p</sup> when exp ◦g is precisely the function 1/ <sup>p</sup>−1, which simply shows that the function <sup>p</sup> restricted to <sup>R</sup><sup>+</sup> is itself a p-type function.

We also introduce the following notation. We let <sup>p</sup> (resp. Logp) denote the set of <sup>p</sup>-type functions (resp. log p-type functions). Thus, by definition the set ran() can be decomposed using the following disjoint union

$$\text{ran}(\Sigma) = \bigcup\_{p \ge 0} \text{ran}(\Sigma |\_{\mathcal{D}^p \cap \mathcal{K}^p}) = \bigcup\_{p \ge 0} \text{Log}\Gamma\_p \dots$$

Thus defined, the set of logp-type functions can be characterized as follows.

**Proposition 5.16** *For any function* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *and any* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, the following assertions are equivalent.*


*Proof* The equivalence (i) ⇔ (ii) ⇔ (iii) is immediate by definition of . The implications (iii) ⇒ (iv) ⇒ (ii) are straightforward. Finally, the equivalence (iv) ⇔ (v) is trivial.

From Proposition 5.16 we immediately derive the following characterization of the set ran() of all multiple log --type functions.

**Corollary 5.17** *A function* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *lies in* ran() *if and only if there exists* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *such that* f (1) <sup>=</sup> <sup>0</sup>*,* <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*p*, and* f <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*.*

#### **5.3 Integration of Multiple log** *-***-Type Functions**

The uniform convergence of the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] (cf. Theorem 3.6) shows that the function g is continuous whenever so is g. More generally, we also have the following result.

**Proposition 5.18** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. The following assertions hold.*


$$\left| \int\_{a}^{\chi} (f\_n^{p}[\mathbf{g}](t) - \Sigma \mathbf{g}(t)) \, dt \right| \le \int\_{a}^{\chi} \lceil t \rceil \left| \binom{t-1}{p} \right| \, dt \, \left| \Delta^{p} \mathbf{g}(n) \right| \, \Delta$$

*If* p ≥ 1*, we also have the following tighter inequality*

$$\left| \int\_{a}^{\chi} (f\_n^{p}[\!g\!\!/](t) - \!\!\!\!/ \!g(t)) \, dt \right| \leq \int\_{a}^{\chi} \left| \binom{l-1}{p} \right| \left| \Delta^{p-1} \!\!g(n+t) - \Delta^{p-1} \!\!\!\!/ \!g(n) \right| \, dt.$$

*Moreover, the following assertions hold.*

*(c1) The sequence*

$$m \mapsto \int\_{a}^{\chi} \left( f\_n^p[\mathbf{g}](t) - \Sigma \mathbf{g}(t) \right) dt$$

*converges to zero.*

*(c2) The sequence*

$$n \mapsto \int\_{a}^{\chi} (f\_n^p[\mathbf{g}](t) + \mathbf{g}(t)) \, dt$$

*converges to*

$$\int\_{a}^{\chi} (\Sigma \mathbf{g}(t) + \mathbf{g}(t)) \, dt \, = \int\_{a}^{\chi} \Sigma \mathbf{g}(t+1) \, dt.$$

*(c3) For any* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*, the sequence*

$$m \mapsto \int\_{a}^{\chi} (f\_n^{p}[\mathbf{g}](t) - f\_m^{p}[\mathbf{g}](t)) \, dt$$

*converges to*

$$\int\_{a}^{\chi} \left(\Sigma g(t) - f\_{m}^{p}[g](t)\right) dt.$$

*Proof* Assertion (a) follows from Proposition 5.6 and the uniform convergence of the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g]. Assertion (b) follows from assertion (a) and the identity g(x <sup>+</sup> <sup>1</sup>) <sup>−</sup> g(x) <sup>=</sup> g(x). Now, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, since <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g](0) <sup>=</sup> 0 by (1.7), the function <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g] is clearly integrable on (0,x) and hence on (a, x). Using (5.4), it follows that the function f <sup>p</sup> <sup>n</sup> [g]−g is also integrable on (a, x). The inequalities of assertion (c) then follows from Theorem 3.6(b); and hence assertion (c1) also holds. Assertion (c2) follows from assertion (c1) and the identity g(x + <sup>1</sup>) <sup>−</sup> g(x) <sup>=</sup> g(x). Finally, using (3.8) we see that the function <sup>f</sup> <sup>p</sup> <sup>m</sup> [g] − <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] is integrable on (a, x) and hence assertion (c3) follows from assertion (c1).

*Remark 5.19* Assertion (c) of Proposition 5.18 has been obtained by integrating the function <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g] on (a, x). The first inequality in assertion (c) then clearly shows that the sequences of functions defined in assertions (c1)–(c3) converge uniformly on any bounded subset of <sup>R</sup>+. Now, we also observe that the integral

$$\int\_{a}^{\chi} \rho\_{n}^{p+1} [\Sigma \mathcal{g}](t) \, dt$$

itself can be integrated on (a, x), and we can repeat this process as often as we wish. After n integrations, we obtain

$$\frac{1}{(n-1)!} \int\_{a}^{\chi} (\chi - t)^{n-1} \, \rho\_n^{p+1} [\Sigma \g](t) \, dt,$$

and, proceeding as in Proposition 5.18, it is then clear that the following inequality holds

$$\left| \int\_{a}^{\chi} (\mathbf{x} - t)^{n-1} \left( f\_n^p[\mathbf{g}](t) - \Sigma \mathbf{g}(t) \right) dt \right| \le \int\_{a}^{\chi} (\mathbf{x} - t)^{n-1} \left[ t \right] \left| \binom{t-1}{p} \right| dt \ \left| \Delta^p \mathbf{g}(n) \right| \dots$$

In particular, this inequality shows that the left-hand integral converges uniformly on any bounded subset of <sup>R</sup><sup>+</sup> to zero. ♦

Let us end this section with the following important remark. In Proposition 5.18 we have assumed the continuity of function g to ensure that the integrals of both functions g and g be defined. Of course, we could somewhat generalize our result by relaxing this continuity assumption into weaker properties such as local integrability of both g and g. However, for the sake of simplicity, in this work we will always assume the continuity of any function whenever we need to integrate it on a compact interval (see also Remark 9.1). In this case, continuity can be regarded simply as a handy assumption to keep the results simple. We then encourage the interested reader to generalize those results by searching for the weakest assumptions. This may sometimes lead to challenging but stimulating problems.

## **5.4 The Quest for a Characterization of dom***()*

Recall that the map is defined on the set

$$\text{dom}(\Sigma) = \bigcup\_{p \ge 0} (\mathcal{D}^p \cap \mathcal{K}^p).$$

In this respect, it would be useful to have a very simple test to check whether a given function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies in this set. By Propositions 4.2 and 4.7, the condition that g lies in *D*<sup>∞</sup> <sup>N</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> is clearly necessary. In the next proposition we show that, if <sup>g</sup> is not eventually identically zero, then it must also satisfy the following property

$$\limsup\_{n \to \infty} \frac{\operatorname{g}(n+1)}{\operatorname{g}(n)} \le 1. \tag{5.5}$$

We first recall the following discrete version of L'Hospital's rule, also called Stolz-Cesàro theorem. For a recent reference see, e.g., Ash et al. [12].

**Lemma 5.20 (Stolz-Cesàro Theorem)** *Let* n → an *and* n → bn *be two real sequences. If the second sequence is strictly monotone and unbounded, then*

$$\liminf\_{n \to \infty} \frac{a\_{n+1} - a\_n}{b\_{n+1} - b\_n} \le \lim\_{n \to \infty} \inf\_{b\_n} \frac{a\_n}{b\_n} \le \limsup\_{n \to \infty} \frac{a\_n}{b\_n} \le \limsup\_{n \to \infty} \frac{a\_{n+1} - a\_n}{b\_{n+1} - b\_n}$$

*In particular, if*

$$\lim\_{n \to \infty} \frac{a\_{n+1} - a\_n}{b\_{n+1} - b\_n} = \ell\_\ell$$

*for some* <sup>∈</sup> <sup>R</sup>*, then*

$$\lim\_{n \to \infty} \frac{a\_n}{b\_n} = \ell.$$

**Proposition 5.21** *If* g *lies in* dom() *and is not eventually identically zero, then condition* (5.5) *holds.*

*Proof* Assume that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. Of course we can assume that p = 1 + deg g. We can also assume that g is not eventually a polynomial; for otherwise the condition (5.5) clearly holds. If p = 0, then the function x → |g(x)| eventually decreases to zero and hence condition (5.5) holds. Now suppose that

.

<sup>p</sup> <sup>≥</sup> 1. Then the function pg lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> and there are two exclusive cases to consider.

(a) Suppose that the eventually monotone sequence <sup>n</sup> → p−1g(n) is unbounded. This sequence is actually eventually strictly monotone. Indeed, otherwise the function pg <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> would vanish in any unbounded interval of <sup>R</sup>+, and hence would eventually be identically zero. Equivalently, g would eventually be a polynomial of degree less than or equal to p − 1, a contradiction. Using the Stolz-Cesàro theorem (see Lemma 5.20) and the fact that condition (5.5) holds for pg, we then obtain

$$\limsup\_{n \to \infty} \frac{\Delta^{p-1} \mathbf{g}(n+1)}{\Delta^{p-1} \mathbf{g}(n)} \le \limsup\_{n \to \infty} \frac{\Delta^p \mathbf{g}(n+1)}{\Delta^p \mathbf{g}(n)} \le 1.$$

Iterating this process, we see that condition (5.5) holds for g.

(b) Suppose that the sequence <sup>n</sup> → p−1g(n) has a finite limit (which is necessarily nonzero by minimality of p). If p = 1, then condition (5.5) holds trivially. If <sup>p</sup> <sup>≥</sup> 2, then the eventually monotone sequence <sup>n</sup> → p−2g(n) is unbounded and we can show as in the previous case that it is actually eventually strictly monotone. Using the Stolz-Cesàro theorem, we then obtain

$$\limsup\_{n \to \infty} \frac{\Delta^{p-2} \mathbf{g}(n+1)}{\Delta^{p-2} \mathbf{g}(n)} \le \limsup\_{n \to \infty} \frac{\Delta^{p-1} \mathbf{g}(n+1)}{\Delta^{p-1} \mathbf{g}(n)} = 1.$$

Iterating this process, we see that condition (5.5) holds.

This completes the proof.

*Remark 5.22* We observe that the left side of (5.5) is not always a limit. For instance, the function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(\alpha) = \frac{1}{2^{\alpha}} \left( 1 + \frac{1}{3} \sin x \right) \qquad \text{for } x > 0$$

lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> (see Remark 4.13) but the function g(x <sup>+</sup> <sup>1</sup>)/g(x) is a nonconstant periodic function. The first example in Remark 6.21 also illustrates this behavior.

On the other hand, a function <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> that satisfies condition (5.5) need not lie in *D*<sup>∞</sup> <sup>N</sup> . For instance, for any <sup>q</sup> <sup>∈</sup> <sup>N</sup> the function

$$\mathbf{g}\_q(x) = x^{q+1} + \sin x$$

lies in *<sup>K</sup>*<sup>q</sup> \ *<sup>K</sup>*q+1, and hence also in *<sup>K</sup>*0, and satisfies

$$\lim\_{n \to \infty} \frac{\mathbf{g}\_q(n+1)}{\mathbf{g}\_q(n)} = \mathbf{1}.$$

However, it does not lie in *D*<sup>∞</sup> <sup>N</sup> . ♦

$$\blacksquare$$

We observe that condition (5.5) is very easy to check for many functions g lying in *<sup>K</sup>*0. Thus, this condition provides a simple and useful test. In particular, when the inequality in (5.5) is strict, the sequence n → g(n) is summable by the ratio test, and hence <sup>g</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. On the other hand, when the inequality is an equality, it is not known whether this condition, together with the property that <sup>g</sup> lies in *<sup>K</sup>*0, are also sufficient for g to lie in dom().

Now, it is easy to see that a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies in *<sup>D</sup>*<sup>∞</sup> <sup>N</sup> if and only if there exists <sup>p</sup> <sup>∈</sup> <sup>N</sup> for which the sequence <sup>n</sup> → pg(n) converges. In particular, if we assume that g lies in *K*∞, then g does not lie in *D*<sup>∞</sup> <sup>N</sup> (and hence it does not lie in dom()) if and only if for every <sup>p</sup> <sup>∈</sup> <sup>N</sup> the sequence <sup>n</sup> → pg(n) tends to infinity. On the other hand, we can observe empirically that condition (5.5) fails to hold for many functions g lying in *K*<sup>∞</sup> \ *D*<sup>∞</sup> <sup>N</sup> . Examples of such functions include g(x) <sup>=</sup> <sup>2</sup><sup>x</sup> and g(x) <sup>=</sup> -(x). It seems then reasonable to think that this observation follows from a general rule. We then formulate the following conjecture.

*Conjecture 5.23* If a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies *<sup>K</sup>*<sup>∞</sup> and is not eventually identically zero, then it also lies in *D*<sup>∞</sup> <sup>N</sup> if and only if condition (5.5) holds.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 6 Asymptotic Analysis**

The asymptotic behavior of the gamma function for large values of its argument can be summarized as follows: for any a ≥ 0, we have the following asymptotic equivalences (see Titchmarsh [96, Section 1.87])

$$
\Gamma(\mathbf{x} + a) \sim x^a \Gamma(\mathbf{x}) \text{ as } \mathbf{x} \to \infty,\tag{6.1}
$$

$$
\Gamma(\mathbf{x}) \sim \sqrt{2\pi} \, e^{-\mathbf{x}} \mathbf{x}^{\mathbf{x} - \frac{\mathbf{h}}{2}} \text{ as } \mathbf{x} \to \infty \,, \tag{6.2}
$$

$$
\Gamma(\mathbf{x} + \mathbf{l}) \sim \sqrt{2\pi \mathbf{x}} e^{-\mathbf{x}} \mathbf{x}^{\mathbf{x}} \text{ as } \mathbf{x} \to \infty \text{ }, \tag{6.3}
$$

where both formulas (6.2) and (6.3) are known by the name *Stirling's formula*.

In this chapter, we investigate the asymptotic behaviors of the multiple log --type functions and provide analogues of the formulas above.

More specifically, for these functions we establish analogues of *Wendel's inequality*, *Stirling's formula*, and *Burnside's formula* for the gamma function. We also introduce the concept of the *asymptotic constant*, an analogue of *Stirling's constant*, and an analogue of *Binet's function* related to the log-gamma function, and we show how all these generalized concepts can be used in the asymptotic analysis of multiple log --type functions. We also establish a general asymptotic equivalence for these functions.

We revisit *Gregory's summation formula*, with an integral form of the remainder, and show how it can be derived very easily in this context. Using this formula, we then introduce a generalization of *Euler's constant* and provide a geometric interpretation.

#### **6.1 Generalized Wendel's Inequality**

Recall that if a function <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then the function g lies in *<sup>R</sup>*p+<sup>1</sup> <sup>R</sup> by Proposition 5.6. At first glance, this observation may seem rather unimportant. However, its explicit statement tells us that for any a ≥ 0 we have

$$
\rho\_\mathbf{x}^{p+1}[\Sigma \underline{\mathbf{g}}](a) \to 0 \qquad \text{as } \mathbf{x} \to \infty,
$$

or equivalently,

$$
\Delta \mathbf{g}(\mathbf{x} + a) - \Delta \mathbf{g}(\mathbf{x}) - \sum\_{j=1}^{p} \binom{a}{j} \Delta^{j-1} \mathbf{g}(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty. \tag{6.4}
$$

This is actually a nice convergence result that reveals the asymptotic behavior of the difference g(x + a) − g(x) for large values of x. The special case when p = 1 was established by Webster [98, Theorem 6.1].

When g(x) = ln x and p = 1, this result reduces to

$$
\ln \Gamma(\mathbf{x} + a) - \ln \Gamma(\mathbf{x}) - a \ln \mathbf{x} \to 0 \qquad \text{as } \mathbf{x} \to \infty,
$$

which is precisely the additive version of the asymptotic equivalence given in (6.1). We thus observe that (6.4) immediately provides an analogue of the asymptotic equivalence (6.1) for all the multiple log --type functions.

Now, we observe that formula (6.1) was also established by Wendel [99], who first provided a short and elegant proof of the following double inequality

$$\left(1+\frac{a}{x}\right)^{a-1} \le \frac{\Gamma(x+a)}{\Gamma(x)\ge^a} \le 1, \qquad x > 0, \quad 0 \le a \le 1,\tag{6.5}$$

or equivalently, in the additive notation,

$$(a-1)\ln\left(1+\frac{a}{x}\right) \le \rho\_x^2[\ln \circ \Gamma](a) \le 0, \qquad x > 0, \quad 0 \le a \le 1,\tag{6.6}$$

where

$$
\rho\_\mathbf{x}^2[\ln \diamond \Gamma](a) = \ln \Gamma(\mathbf{x} + a) - \ln \Gamma(\mathbf{x}) - a \ln \mathbf{x}.\tag{6.7}
$$

We can readily see that this double inequality is actually a simple application of Lemma 2.7 to the log-gamma function with p = 1. Its generalization to all the multiple log --type functions is then straightforward and we present it in the following theorem. We call it the *generalized Wendel inequality*.

**Theorem 6.1 (Generalized Wendel's Inequality)** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>±</sup> *stand for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>g</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *or <sup>K</sup>*<sup>p</sup> <sup>−</sup> *. Let also* x > 0 *be so that* g *is* p*-convex or* p*-concave on* [x,∞) *and let* a ≥ 0*. Then we have*

$$\begin{aligned} 0 \le \ \pm(-1)\,\varepsilon\_{p+1}(a)\,\rho\_x^{p+1}[\Sigma g](a) \le \pm(-1)\left|\binom{a-1}{p}\right|\left(\Delta^p\Sigma g(\chi+a)-\Delta^p\Sigma g(\chi)\right),\\ \le \pm(-1)\,\lceil a\rceil\left|\binom{a-1}{p}\right|\Delta^p g(\chi),\end{aligned}$$

*with equalities if* <sup>a</sup> ∈ {0, <sup>1</sup>,...,p}*. In particular,* <sup>ρ</sup>p+<sup>1</sup> <sup>x</sup> [g](a) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*. If* p ≥ 1*, we also have*

$$\begin{aligned} 0 \le \pm(-1)\,\varepsilon\_p(a)\,\rho\_\chi^p[\mathbf{g}](a) \le \pm(-1)\left|\binom{a-1}{p-1}\right|\left(\Delta^{p-1}\mathbf{g}(\mathbf{x}+a)-\Delta^{p-1}\mathbf{g}(\mathbf{x})\right) \\ \le \pm(-1)\,\lceil a\rceil\,\left|\binom{a-1}{p-1}\right|\Delta^p\mathbf{g}(\mathbf{x}), \end{aligned}$$

*with equalities if* <sup>a</sup> ∈ {0, <sup>1</sup>,...,p <sup>−</sup> <sup>1</sup>}*. In particular,* <sup>ρ</sup><sup>p</sup> <sup>x</sup> [g](a) → 0 *as* x → ∞*.*

*Proof* Negating g if necessary, we can assume that it is p-convex on [x,∞). By the existence Theorem 3.6, the function g is then p-concave on [x,∞). By Lemma 2.5 and Proposition 4.11, the function pg is negative and increases to zero on [x,∞). Thus, for any a ≥ 0 we have

$$(-1)\sum\_{j=0}^{\lceil a \rceil -1} \Delta^p g(\chi + j) \le (-1)\lceil a \rceil \Delta^p g(\chi).$$

We then derive the first inequalities by applying Lemma 2.7 to f = g. Suppose now that p ≥ 1. By Corollary 4.19, we have that g is (p − 1)-concave on [x,∞). We then derive the remaining inequalities by applying Lemma 2.7 to f = g.

A symmetrized version of the generalized Wendel inequality can be easily obtained simply by taking the absolute value of each of its sides. This provides a coarsened, but simplified form of the generalized Wendel inequality. For instance, when g(x) = ln x and p = 1 we then obtain the following inequality

$$\left|\ln\Gamma(x+a) - \ln\Gamma(x) - a\ln x\right| \le |a-1|\ln\left(1 + \frac{a}{x}\right), \qquad x > 0, \ a \ge 0,\tag{6.8}$$

that is, in the multiplicative notation,

$$\left(1+\frac{a}{x}\right)^{-|a-1|} \le \frac{\Gamma(x+a)}{\Gamma(x)\,x^a} \le \left(1+\frac{a}{x}\right)^{|a-1|}, \qquad x>0, \, a \ge 0. \tag{6.9}$$

We then have the following immediate corollary, which provides a symmetrized version of the generalized Wendel inequality.

**Corollary 6.2** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Let also* x > <sup>0</sup> *be so that* <sup>g</sup> *is* p*-convex or* p*-concave on* [x,∞) *and let* a ≥ 0*. Then we have*

$$\left|\rho\_{\mathbf{x}}^{p+1}\{\Sigma\mathbf{g}\}(a)\right| \leq \left|\binom{a-1}{p}\right| \left|\Delta^{p}\Sigma\mathbf{g}(\mathbf{x}+a) - \Delta^{p}\Sigma\mathbf{g}(\mathbf{x})\right| \leq \lceil a\rceil \left|\binom{a-1}{p}\right| |\Delta^{p}\mathbf{g}(\mathbf{x})|,$$

*with equalities if* <sup>a</sup> ∈ {0, <sup>1</sup>,...,p}*. In particular,* <sup>ρ</sup>p+<sup>1</sup> <sup>x</sup> [g](a) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*. If* p ≥ 1*, we also have*

$$\left| \rho\_{\mathbf{x}}^{p} [\mathbf{g}](a) \right| \leq \left| \binom{a-1}{p-1} \right| \left| \Delta^{p-1} \mathbf{g}(\mathbf{x} + a) - \Delta^{p-1} \mathbf{g}(\mathbf{x}) \right| \\ \leq \lceil a \rceil \left| \binom{a-1}{p-1} \right| |\Delta^{p} \mathbf{g}(\mathbf{x})|,$$

*with equalities if* <sup>a</sup> ∈ {0, <sup>1</sup>,...,p <sup>−</sup> <sup>1</sup>}*. In particular,* <sup>ρ</sup><sup>p</sup> <sup>x</sup> [g](a) → 0 *as* x → ∞*.*

*Example 6.3* Applying Theorem 6.1 and Corollary 6.2 to the function g(x) = ln x, for which we have p = 1 + deg g = 1 and g(x) = ln -(x), we immediately retrieve the inequalities (6.5)–(6.9) and hence also the asymptotic equivalence (6.1). Further inequalities can actually be obtained by considering higher values of p. For instance, since <sup>g</sup> also lies in *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*2, we can set <sup>p</sup> <sup>=</sup> 2 in Corollary 6.2 and we then obtain the inequalities

$$\left| \left( 1 + \frac{1}{x} \right)^{\binom{a}{2}} \left( 1 + \frac{a}{x} \right)^{-\left| \binom{a-1}{2} \right|} \left( 1 + \frac{a}{x+1} \right)^{\left| \binom{a-1}{2} \right|} \right| \le \frac{\Gamma(x+a)}{\Gamma(x) \, x^a} $$

$$\le \left( 1 + \frac{1}{x} \right)^{\binom{a}{2}} \left( 1 + \frac{a}{x} \right)^{\left| \binom{a-1}{2} \right|} \left( 1 + \frac{a}{x+1} \right)^{-\left| \binom{a-1}{2} \right|}.$$

Thus, we can see that the central function in these inequalities can always be "sandwiched" by finite products of powers of rational functions. For further inequalities involving this central function, see, e.g., Srivastava and Choi [93, pp. 106–107]. ♦

**Discrete Version of the Generalized Wendel Inequality** The restrictions to the natural integers of the generalized Wendel inequality and its symmetrized form are obtained by setting <sup>x</sup> <sup>=</sup> <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> in the inequalities of Theorem 6.1 and Corollary 6.2. In view of identity (5.4), the symmetrized forms then reduce to those of the existence Theorem 3.6.

For instance, when g(x) = ln x and p = 1, the symmetrized version of generalized Wendel's inequality is given in (6.8) while its discrete version can take the form

$$|\ln \Gamma(\mathbf{x}) - f\_n^1[\ln](\mathbf{x})| \le |\mathbf{x} - \mathbf{1}| \ln \left( 1 + \frac{\mathbf{x}}{n} \right), \qquad \mathbf{x} > \mathbf{0}, \ n \in \mathbb{N}^\*,$$

where

$$\left(f\_n^{\mathbb{I}}[\ln](\boldsymbol{x})\right) = \sum\_{k=1}^{n-1} \ln k - \sum\_{k=0}^{n-1} \ln(\boldsymbol{x} + k) + \boldsymbol{x} \ln n.$$

This latter inequality clearly generalizes Gauss' limit (1.6), which simply expresses that

$$\ln \Gamma(\mathbf{x}) \, = \lim\_{n \to \infty} f\_n^{\mathbf{l}}[\ln](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

#### **6.2 The Asymptotic Constant**

We now introduce a new important concept that will play a key role in our theory, namely the *asymptotic constant*. This concept will actually be used intensively throughout the rest of this book.

**Definition 6.4 (Asymptotic Constant)** The *asymptotic constant* associated with a function <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom() is the number

$$
\sigma[\mathbf{g}] = \int\_0^1 \Sigma \mathbf{g}(t+1) \, dt \, = \int\_0^1 (\Sigma \mathbf{g}(t) + \mathbf{g}(t)) \, dt \,. \tag{6.10}
$$

Using Definition 6.4, we can readily see that the following identity holds for any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom()

$$\int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt \, = \sigma[\mathbf{g}] + \int\_{1}^{\chi} \mathbf{g}(t) \, dt, \qquad \chi > 0. \tag{6.11}$$

Indeed, both sides are functions of x that have the same derivative and the same value at x = 1.

*Example 6.5 (Raabe's Formula)* Taking g(x) = ln x in (6.10), we obtain

$$\sigma[\mathfrak{g}] = \int\_0^1 \ln \Gamma(t+1) \, dt = -1 + \frac{1}{2} \ln(2\pi) \, .$$

Combining this result with (6.11), we obtain the following more general identity

$$\int\_{\chi}^{\chi+1} \ln \Gamma(t) \, dt \, = \frac{1}{2} \ln(2\pi) + x \ln x - x \, , \qquad x > 0.$$

This identity is known by the name *Raabe's formula* (see, e.g., Cohen and Friedman [30]). We will discuss this formula and investigate its analogues in Sect. 8.5. ♦

Identity (6.11) will also play a very important role in this work. In this respect, it is clear that the integral

$$\int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt, \qquad x > 0,\tag{6.12}$$

cancels out the cyclic variations of any 1-periodic additive component of g in the sense that the function

$$x \mapsto \int\_{\mathfrak{x}}^{\mathfrak{x}+1} w(t) \, dt$$

is constant for any 1-periodic function <sup>ω</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>. Thus, the integral (6.12) can be interpreted as the *trend* of the function g, just as a moving average enables one to decompose a time series into its trend and its seasonal variation. In this light, identity (6.11) simply tells us that the trend of the function g is precisely the antiderivative of g (up to an additive constant).

Let us end this section with the following two technical results related to the asymptotic constant.

**Proposition 6.6** *Let* <sup>g</sup><sup>1</sup> *and* <sup>g</sup><sup>2</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>c</sup>1, c<sup>2</sup> <sup>∈</sup> <sup>R</sup>*. If* <sup>c</sup>1g<sup>1</sup> <sup>+</sup> <sup>c</sup>2g<sup>2</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*, then*

$$
\sigma[c\_1\mathbf{g}\_1 + c\_2\mathbf{g}\_2] = c\_1\sigma[\mathbf{g}\_1] + c\_2\sigma[\mathbf{g}\_2].
$$

*Moreover, we have* <sup>σ</sup>[**1**] = <sup>1</sup> <sup>2</sup> *, where* **<sup>1</sup>**: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is the constant function* **<sup>1</sup>**(x) <sup>=</sup> <sup>1</sup>*.*

*Proof* The first part of the statement is an immediate consequence of Proposition 5.7. Now, we clearly have **<sup>1</sup>** <sup>=</sup> <sup>x</sup> <sup>−</sup> 1 and hence <sup>σ</sup>[**1**] = <sup>1</sup> <sup>2</sup> .

**Proposition 6.7** *Let* <sup>g</sup> *lie in <sup>C</sup>*0∩dom()*, let* <sup>a</sup> <sup>≥</sup> <sup>0</sup>*, and let* <sup>h</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be defined by the equation* h(x) = g(x + a) *for* x > 0*. Then*

$$
\sigma[h] = \sigma[g] + \int\_1^{a+1} g(t) \, dt - \Sigma g(a+1).
$$

*Proof* Using Proposition 5.8 we obtain

$$
\sigma[h] = \int\_0^1 \Sigma g(t+a+1) \, dt - \Sigma g(a+1) \, \, = \int\_{a+1}^{a+2} \Sigma g(t) \, dt - \Sigma g(a+1) \,.
$$

We then get the result using (6.11).

#### **6.3 Generalized Binet's Function**

The *Binet function* related to the log-gamma function is the function <sup>J</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation (see, e.g., Cuyt et al. [31, p. 224])

$$J(\mathbf{x}) = \ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) + \mathbf{x} - \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} \qquad \text{for } \mathbf{x} > \mathbf{0}. \tag{6.13}$$

Using identity (6.7) and Raabe's formula (see Example 6.5), we can easily provide the following integral form of Binet's function

$$J(\mathbf{x}) = \left. - \int\_0^1 \rho\_\mathbf{x}^2 [\ln \diamond \Gamma](t) \, dt, \qquad \mathbf{x} > \mathbf{0}.$$

This latter identity motivates the following definition, in which we introduce a generalization of Binet's function. Recall first that, for any <sup>q</sup> <sup>∈</sup> <sup>N</sup> and any x > 0, the function <sup>t</sup> → <sup>ρ</sup><sup>q</sup> <sup>x</sup> [g](t) is continuous whenever so is g. In this case, since it also vanishes at t = 0, it must be integrable on (0, 1).

**Definition 6.8 (Generalized Binet's Function)** For any <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup>, we define the function <sup>J</sup> <sup>q</sup>[g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$J^q[g](\mathbf{x}) = -\int\_0^1 \rho^q\_x[g](t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}. \tag{6.14}$$

We say that the function <sup>J</sup> <sup>q</sup> [g] is the *generalized Binet function* associated with the function g and the parameter q.

Taking g = ln ◦ and q = 1 + deg g = 2 in identity (6.14), we thus simply retrieve the Binet function J (x) <sup>=</sup> <sup>J</sup> <sup>2</sup>[ln ◦-](x) related to the log-gamma function, as defined in (6.13).

In the following two propositions, we collect a few immediate properties of the generalized Binet function. To this end, recall first that, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the <sup>n</sup>*th Gregory coefficient* (also called the n*th Bernoulli number of the second kind*) is the number Gn defined by the equation (see, e.g., [20–22, 72])

$$G\_n := \int\_0^1 \binom{l}{n} \, dt \qquad \text{for } n \ge 0.$$

The first few values of Gn are: 1, <sup>1</sup> <sup>2</sup> , <sup>−</sup> <sup>1</sup> <sup>12</sup> , <sup>1</sup> <sup>24</sup> , <sup>−</sup> <sup>19</sup> <sup>720</sup> ,.... These numbers are decreasing in absolute value and satisfy the equations

$$\sum\_{n=1}^{\infty} |G\_n| = 1 \qquad \text{and} \qquad G\_n = (-1)^{n-1} |G\_n| \quad \text{for } n \ge 1. \tag{6.15}$$

**Proposition 6.9** *Let* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> *and* <sup>q</sup> <sup>∈</sup> <sup>N</sup>*. Then, for any* x > <sup>0</sup>*, we have*

$$J^q[g](\mathbf{x}) = \sum\_{j=0}^{q-1} G\_j \Delta^j g(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} g(t) \, dt \,. \tag{6.16}$$

*In particular,*

$$
\Delta J^q[\mathbf{g}] = J^q[\Delta \mathbf{g}] \qquad \text{and} \qquad J^{q+1}[\mathbf{g}] - J^q[\mathbf{g}] = G\_q \Delta^q \mathbf{g}.\tag{6.17}
$$

*Proof* Identity (6.16) follows immediately from (1.7). The other two identities are trivial.

**Proposition 6.10** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup>dom() *and let* <sup>q</sup> <sup>∈</sup> <sup>N</sup>*. Then, for any* x > <sup>0</sup> *and any* <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗*, we have*

$$J^{q+1}[\Sigma g](\mathbf{x}) = \Sigma g(\mathbf{x}) - \sigma[\mathbf{g}] - \int\_1^\chi g(t) \, dt + \sum\_{j=1}^q G\_j \Delta^{j-1} g(\mathbf{x}) \,, \tag{6.18}$$

$$J^{q+1}[\Sigma g](n) = \int\_0^1 \left( f\_n^q[g](t) - \Sigma g(t) \right) dt \,. \tag{6.19}$$

*In particular,*

$$
\Delta J^{q+1}[\Sigma \mathbf{g}] = J^{q+1}[\mathbf{g}] \,, \qquad J^{q+1}[c + \Sigma \mathbf{g}] \, = \, J^{q+1}[\Sigma \mathbf{g}] , \quad c \in \mathbb{R},
$$

*and*

$$
\sigma[\mathfrak{g}] = \, -J^{\parallel}[\Sigma \mathfrak{g}](1).
$$

*Proof* Identity (6.18) follows from (6.11) and (6.16). Identity (6.19) follows from (5.4) and (6.14). The remaining identities are trivial.

As we will see in the rest of this book, many subsequent definitions and results can be expressed in terms of the generalized Binet function.

#### **6.4 Generalized Stirling's Formula**

Interestingly, the Binet function J (x) <sup>=</sup> <sup>J</sup> <sup>2</sup>[ ln](x) defined in (6.13) clearly satisfies the following identity (compare with Artin [11, p. 24])

$$
\Gamma(\mathbf{x}) = \sqrt{2\pi} \,\mathrm{x}^{\chi - \frac{1}{2}} e^{-\chi + J(\chi)}
$$

and hence Stirling's formula (6.2) simply states that J (x) → 0 as x → ∞. This observation seems to reveal a way to find a counterpart of Stirling's formula for any continuous multiple log --type function. In fact, we only need to show that the function <sup>J</sup> <sup>p</sup>+1[g] vanishes at infinity whenever <sup>g</sup> lies in *<sup>C</sup>*0∩*D*p∩*K*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> N. In the next theorem and its corollary, we establish this fact by simply integrating each side of the generalized Wendel inequality and its symmetrized version on a ∈ (0, 1).

Let us first define the sequence n → Gn by the equations

$$\overline{G\_n} = \|1 - \sum\_{j=1}^n |G\_j| \, = \sum\_{j=n+1}^\infty |G\_j| \, \quad \text{for } n \in \mathbb{N}.$$

In view of (6.15), we see that the sequence n → Gn decreases to zero. Its first values are: 1, <sup>1</sup> <sup>2</sup> , <sup>5</sup> <sup>12</sup> , <sup>3</sup> <sup>8</sup> , <sup>251</sup> <sup>720</sup> ,.... Moreover, from the straightforward identity (see, e.g., Graham et al. [41, p. 165])

$$(-1)^n \binom{l-1}{n} = 1 - \sum\_{j=1}^n (-1)^{j-1} \binom{l}{j} \dots$$

we easily derive

$$\int\_0^1 \left| \binom{t-1}{n} \right| dt \;= (-1)^n \int\_0^1 \binom{t-1}{n} dt \;= \left| \int\_0^1 \binom{t-1}{n} dt \right| \;= \overline{G}\_n. \tag{6.20}$$

We now have the following two results, which immediately follow from Theorem 6.1, Corollary 6.2, and identities (6.20).

**Theorem 6.11** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>±</sup> *stand for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>g</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *or <sup>K</sup>*<sup>p</sup> <sup>−</sup> *. Let also* x > 0 *be so that* g *is* p*-convex or* p*-concave on* [x,∞)*. Then we have*

$$\begin{aligned} 0 \le \ &\pm (-1)^p \, J^{p+1} [\Sigma g](\mathbf{x}) \le \pm (-1)^{p+1} \int\_0^1 \binom{t-1}{p} \left(\Delta^p \Sigma \mathbf{g}(\mathbf{x} + t)\right) \\ &- \Delta^p \Sigma \mathbf{g}(\mathbf{x})\Big) dt \\ &\le \pm (-1)^p \overline{G}\_p \, \Delta^p \mathbf{g}(\mathbf{x}). \end{aligned}$$

*In particular,* <sup>J</sup> <sup>p</sup>+1[g](x) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*. If* <sup>p</sup> <sup>≥</sup> <sup>1</sup>*, we also have*

$$0 \le \pm (-1)^{p+1} J^p[\mathbf{g}](\mathbf{x}) \le \pm (-1)^p \int\_0^1 \binom{t-1}{p-1} \left(\Delta^{p-1} \mathbf{g}(\mathbf{x} + t) - \Delta^{p-1} \mathbf{g}(\mathbf{x})\right) dt$$

$$\le \pm (-1)^p \overline{G}\_{p-1} \Delta^p \mathbf{g}(\mathbf{x}).$$

*In particular,* <sup>J</sup> <sup>p</sup>[g](x) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*.*

**Corollary 6.12** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Let also* x > <sup>0</sup> *be so that* g *is* p*-convex or* p*-concave on* [x,∞)*. Then we have*

$$\left| J^{p+1} [\Sigma \mathbf{g}](\mathbf{x}) \right| \leq \left| \int\_0^1 \binom{t-1}{p} \left( \Delta^p \Sigma \mathbf{g}(\mathbf{x}+\mathbf{t}) - \Delta^p \Sigma \mathbf{g}(\mathbf{x}) \right) d\mathbf{t} \right| \leq \overline{G}\_p |\Delta^p \mathbf{g}(\mathbf{x})|.$$

*In particular,* <sup>J</sup> <sup>p</sup>+1[g](x) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*. If* <sup>p</sup> <sup>≥</sup> <sup>1</sup>*, we also have*

$$\left| J^p[\mathbf{g}](\mathbf{x}) \right| \le \left| \int\_0^1 \binom{l-1}{p-1} \left( \Delta^{p-1} \mathbf{g}(\mathbf{x} + t) - \Delta^{p-1} \mathbf{g}(\mathbf{x}) \right) dt \right| \le \overline{G}\_{p-1} |\Delta^p \mathbf{g}(\mathbf{x})|.$$

*In particular,* <sup>J</sup> <sup>p</sup>[g](x) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*.*

Both Theorem 6.11 and Corollary 6.12 state that <sup>J</sup> <sup>p</sup>+1[g] vanishes at infinity whenever <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. This result is precisely the analogue of Stirling's formula for all the continuous multiple log --type functions. As it is one of the central results of our theory, we state it explicitly in the following theorem. We call it the *generalized Stirling formula*. We also include the property that <sup>J</sup> <sup>p</sup>[g] vanishes at infinity.

**Theorem 6.13 (Generalized Stirling's Formula)** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then both functions* <sup>J</sup> <sup>p</sup>+1[g] *and* <sup>J</sup> <sup>p</sup>[g] *vanish at infinity. More precisely, we have*

$$
\Sigma \mathbf{g}(\mathbf{x}) - \int\_{1}^{\mathbf{x}} \mathbf{g}(t) \, dt + \sum\_{j=1}^{p} G\_{j} \Delta^{j-1} \mathbf{g}(\mathbf{x}) \to \sigma \mathbf{[g]} \qquad \text{as } \mathbf{x} \to \infty \qquad (6.21)
$$

*and*

$$\int\_{\mathbf{x}}^{\mathbf{x}+1} \mathbf{g}(t) \, dt - \sum\_{j=0}^{p-1} G\_j \Delta^j \mathbf{g}(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty \text{ .} \tag{6.22}$$

*Proof* By Theorem 6.11, the functions <sup>J</sup> <sup>p</sup>+1[g] and <sup>J</sup> <sup>p</sup>[g] vanish at infinity when <sup>p</sup> <sup>≥</sup> 0 and <sup>p</sup> <sup>≥</sup> 1, respectively. The function <sup>J</sup> <sup>p</sup>[g] also vanishes at infinity when p = 0; indeed, in this case |g(x)| eventually decreases to zero and we have

$$|J^0[\mathbf{g}](\mathbf{x})| = \left| \int\_0^1 \mathbf{g}(\mathbf{x} + t) \, dt \right| \le |\mathbf{g}(\mathbf{x})| \to 0 \qquad \text{as } \mathbf{x} \to \infty.$$

Formulas (6.21) and (6.22) then immediately follow from (6.16) and (6.18).

The generalized Stirling formula (6.21) is actually the highlight of this chapter. It enables one to investigate the asymptotic behavior of the function g for large values of its argument. It also justifies the name "asymptotic constant" given to the quantity σ[g] introduced in Definition 6.4. Moreover, combining (6.4) with (6.21), we immediately derive the asymptotic behavior of g(x + a) for any a ≥ 0. We also observe that alternative formulations of (6.21) in the case when p = 1 were established by Krull [54, p. 368] and later by Webster [98, Theorem 6.3].

In the special case when <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup>*K*0, the generalized Stirling formula and the asymptotic constant take very special forms. We present them in the following proposition.

**Proposition 6.14** *If* <sup>g</sup> *lies in <sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*0*, then we have*

$$\Sigma \mathbf{g}(\mathbf{x}) \to \sum\_{k=1}^{\infty} \mathbf{g}(k) \qquad \text{as } \mathbf{x} \to \infty. \tag{6.23}$$

*If, in addition, we have* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*0*, then* <sup>g</sup> *is integrable at infinity and*

$$\sigma[\mathbf{g}] = \sum\_{k=1}^{\infty} \mathbf{g}(k) - \int\_{1}^{\infty} \mathbf{g}(t) \, dt.$$

*Proof* By definition of the map , we have

$$\Sigma \mathbf{g}(\mathbf{x}) = \sum\_{k=1}^{\infty} \mathbf{g}(k) - \sum\_{k=0}^{\infty} \mathbf{g}(\mathbf{x} + k), \qquad \mathbf{x} > \mathbf{0}.$$

where the second series tends to zero as x → ∞ by Theorem 3.13. The claimed expression for σ[g] then immediately follows from formula (6.21).

*Example 6.15* Let us apply our results to the concave function g(x) = ln x with p = 1. Using (6.16) and (6.18), we first obtain

$$J^2[\ln \circ \Gamma](\mathbf{x}) = J(\mathbf{x}) = \ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) + x - \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} \,,$$

$$J^1[\ln](\mathbf{x}) = 1 - (\mathbf{x} + \mathbf{l}) \ln \left(\mathbf{l} + \frac{1}{\mathbf{x}}\right) \,.$$

Now, Theorem 6.11 provides the following inequalities for any x > 0

$$0 \le J(\mathbf{x}) \le \frac{1}{2} (\mathbf{x} + \mathbf{l})^2 \ln \left( 1 + \frac{1}{\mathbf{x}} \right) - \frac{\mathbf{x}}{2} - \frac{3}{4} \le \frac{1}{2} \ln \left( 1 + \frac{1}{\mathbf{x}} \right), \qquad (6.24)$$

$$0 \le -1 + (\mathbf{x} + \mathbf{l}) \ln \left( 1 + \frac{1}{\mathbf{x}} \right) \le \ln \left( 1 + \frac{1}{\mathbf{x}} \right).$$

That is, in the multiplicative notation,

$$1 \le \frac{\Gamma(x)}{\sqrt{2\pi} \, e^{-x} \, x^{x - \frac{1}{2}}} \le e^{-\frac{x}{2} - \frac{3}{4}} \left( 1 + \frac{1}{x} \right)^{\frac{1}{2}(x+1)^2} \le \left( 1 + \frac{1}{x} \right)^{\frac{1}{2}},\qquad(6.25)$$

$$\left( 1 + \frac{1}{x} \right)^x \le e \le \left( 1 + \frac{1}{x} \right)^{x+1}.$$

Thus, we retrieve Stirling's formula (6.2) and (6.3), together with the well-known asymptotic equivalence (compare with Artin [11, p. 20])

$$\left(1+\frac{1}{x}\right)^{\chi} \sim \quad e \qquad \text{as } x \to \infty.$$

It is actually quite remarkable that the first two inequalities in (6.24) and (6.25) are precisely what we get when we "integrate" the additive version of the Wendel inequality (6.5) on the unit interval (0, 1).

Now, the coarsened inequality

$$\left| J^{p+1}[\Sigma \mathfrak{g}](\mathfrak{x}) \right| \leq \overline{G}\_p \left| \Delta^p \mathfrak{g}(\mathfrak{x}) \right|.$$

given in Corollary 6.12 takes the following simple form (in the multiplicative notation)

$$\left(1+\frac{1}{x}\right)^{-\frac{1}{2}} \le \frac{\Gamma(\mathbf{x})}{\sqrt{2\pi}\,e^{-\mathbf{x}}\,\mathbf{x}^{\mathbf{x}-\frac{1}{2}}} \le \left(1+\frac{1}{x}\right)^{\frac{1}{2}}.$$

Note that tighter inequalities can also be obtained by considering higher values of p in Corollary 6.12. For instance, taking p = 2 we obtain

$$\left(1+\frac{1}{x}\right)^{-\frac{2}{\pi}}\left(1+\frac{2}{x}\right)^{\frac{4}{\pi}} \le \frac{\Gamma(x)}{\sqrt{2\pi}\ e^{-x}x^{x-\frac{1}{2}}} \le \left(1+\frac{1}{x}\right)^{\frac{4}{\pi}}\left(1+\frac{2}{x}\right)^{-\frac{4}{\pi}}\dots$$

Taking p = 3 we obtain

$$\begin{aligned} \left(1+\frac{1}{x}\right)^{-\frac{23}{24}} \left(1+\frac{2}{x}\right)^{\frac{13}{12}} \left(1+\frac{3}{x}\right)^{-\frac{3}{8}} &\leq \frac{\Gamma(x)}{\sqrt{2\pi} \,e^{-x} \,x^{x-\frac{1}{2}}}\\ &\leq \left(1+\frac{1}{x}\right)^{\frac{31}{24}} \left(1+\frac{2}{x}\right)^{-\frac{2}{6}} \left(1+\frac{3}{x}\right)^{\frac{3}{8}}.\end{aligned}$$

Thus, we see that the central function in these inequalities can always be bracketed by finite products of radical functions. ♦

In the last part of Example 6.15, we have illustrated the possibility of obtaining closer bounds for the generalized Binet function <sup>J</sup> <sup>p</sup>+1[ ln](x) by considering in Corollary 6.12 any value of p that is higher than 1+deg g. Actually, it is not difficult to see that this feature applies to every continuous multiple log --type function. We discuss this topic in Appendix D and show that the inequalities actually get tighter and tighter as p increases.

*Remark 6.16* We observe that Theorem 6.11 together with the generalized Stirling formula (Theorem 6.13) have been immediately obtained by "integrating" the generalized Wendel inequality (Theorem 6.1) on the unit interval. In turn, the generalized Wendel inequality is a straight application of Lemma 2.7 to the function f = g. These remarkable facts show the considerable importance of Lemma 2.7 in this theory: it was first crucial to derive our uniqueness and existence results, and now it provides very nice counterparts of Wendel's inequality and Stirling's formula, with short and elegant proofs. We will use Lemma 2.7 again in Sect. 6.7 for an in-depth investigation of Gregory's summation formula. ♦

**Improvements of Stirling's Formula** The following estimate of the gamma function is due to Gosper [40]

$$
\Gamma(\mathbf{x}) \sim \sqrt{2\pi} \, e^{-\mathbf{x}} \, \mathbf{x}^{\mathbf{x} - \frac{1}{2}} \left( 1 + \frac{1}{6\chi} \right)^{\frac{1}{2}} \qquad \text{as } \mathbf{x} \to \infty,
$$

and is more accurate than Stirling's formula. On the basis of this alternative approximation, Mortici [76] provided the following narrow inequalities

$$\left(1+\frac{\alpha}{2\pi}\right)^{\frac{1}{2}} < \frac{\Gamma(\mathbf{x})}{\sqrt{2\pi}\,e^{-\mathbf{x}}\,\mathbf{x}^{\mathbf{x}-\frac{1}{2}}} < \left(1+\frac{\beta}{2\mathbf{x}}\right)^{\frac{1}{2}}, \qquad \text{for } \mathbf{x} \ge 2,$$

where <sup>α</sup> <sup>=</sup> <sup>1</sup> <sup>3</sup> and <sup>β</sup> <sup>=</sup> (391/30)1/<sup>3</sup> <sup>−</sup> <sup>2</sup> <sup>≈</sup> <sup>0</sup>.353. We actually observe that the quest for finer and finer bounds and approximations for the gamma function has gained an increasing interest during this last decade (see [26, 28, 29, 36, 65, 75–78, 100, 101] and the references therein). Some of these investigations could be generalized to various multiple --type functions. New results along this line would be welcome.

**Webster's Double Inequality** We have seen that Theorems 6.1 and 6.11 provide very useful bounds for both quantities <sup>ρ</sup>p+<sup>1</sup> <sup>x</sup> [g](a) and <sup>J</sup> <sup>p</sup>+1[g](x). It is actually possible to provide tighter bounds for these quantities using again the p-convexity or p-concavity properties of the function g. For instance, one can show that if g lies in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and if x > 0 and a > 0 are so that <sup>g</sup> is concave on [<sup>x</sup> <sup>+</sup> a,∞), then the following double inequality hold

$$\sum\_{k=0}^{\lfloor a \rfloor} g(\mathbf{x} + k) + (\lfloor a \rfloor - 1) \, \mathbf{g}(\mathbf{x} + a) - a \, \mathbf{g}(\mathbf{x}) \le \rho\_{\mathbf{x}}^2 [\Sigma \mathbf{g}](a)$$

$$\le \sum\_{k=0}^{\lfloor a \rfloor} \mathbf{g}(\mathbf{x} + k) - \mathbf{g}(\mathbf{x} + a) + \{a\} \, \mathbf{g}(\mathbf{x} + \lfloor a \rfloor + 1) - a \, \mathbf{g}(\mathbf{x}).\tag{6.26}$$

This inequality was actually provided by Webster [98, Eq. (6.4)] to establish the limit (6.4) in the case when p = 1.

Now, assuming that g is continuous, we can integrate every expression in the inequalities above on a ∈ (0, 1), and we then obtain the following bounds for <sup>J</sup> <sup>2</sup>[g](x)

$$0 \le -J^2[\mathbf{g}](\mathbf{x}) \le J^2[\Sigma \mathbf{g}](\mathbf{x})$$

$$\le -J^2[\mathbf{g}](\mathbf{x}) - \int\_0^1 t \, \mathbf{g}(\mathbf{x} + t) \, dt + \frac{1}{2} \, \mathbf{g}(\mathbf{x} + 1). \tag{6.27}$$

For instance, for g(x) = ln x, we obtain (in the multiplication notation)

$$1 \le e^{-1} \left( 1 + \frac{1}{x} \right)^{\chi + \frac{1}{2}} \le \frac{\Gamma(\chi)}{\sqrt{2\pi} \ e^{-\chi} x^{\chi - \frac{1}{2}}} \le e^{-\frac{x}{2} - \frac{3}{4}} \left( 1 + \frac{1}{x} \right)^{\frac{1}{2}(\chi + 1)^2},\tag{6.28}$$

which provides a better lower bound in the inequalities (6.25).

In Appendix E, we discuss this interesting issue and provide a generalization to multiple log --type functions of the Webster double inequality (6.26) and its "integrated" version (6.27).

**Generalized Stirling's Constant** The number <sup>√</sup> 2π arising in Stirling's formula (6.2) and Example 6.15 is called *Stirling's constant* (see, e.g., Finch [37]). For certain multiple --type functions, analogues of Stirling's constant can be easily defined as follows.

**Definition 6.17 (Generalized Stirling's Constant)** For any function <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom() that is integrable at 0, we define the number

$$
\lfloor \overline{\sigma} \lg \rfloor = \lfloor \sigma \lg \rfloor - \int\_0^1 \lg(t) \, dt \, = \int\_0^1 \Sigma \lg(t) \, dt.
$$

We say that the number exp(σ[g]) is the *generalized Stirling constant* associated with g.

When g is integrable at 0, the generalized Stirling constant exists and hence the generalized Stirling formula (6.21) can take the following form

$$
\nabla \mathbf{g}(\mathbf{x}) - \int\_0^\chi \mathbf{g}(t) \, dt + \sum\_{j=1}^p G\_j \Delta^{j-1} \mathbf{g}(\mathbf{x}) \to \nabla \|\mathbf{g}\| \qquad \text{as } \mathbf{x} \to \infty \dots
$$

It is important to note that, contrary to the generalized Stirling constant, the asymptotic constant <sup>σ</sup>[g] exists for any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(), even if it is not integrable at 0. For instance, for the function g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> , we have that σ[g] is the Euler constant γ (see Example 8.19) while σ[g] does not exist.

This shows that the asymptotic constant is the "good" constant to consider in this new theory. It actually enables us to derive for multiple log --type functions analogues of several properties of the gamma function. For instance, we have seen that it was very useful to derive the generalized Stirling formula. To give a second example, we will see in Sect. 8.6 that it also enables us to derive analogues of Gauss' multiplication formula for the gamma function.

#### **6.5 Analogue of Burnside's Formula**

Let us recall *Burnside's formula*, which states that

$$
\Gamma(x) \sim \sqrt{2\pi} \left(\frac{x - \frac{1}{2}}{e}\right)^{x - \frac{1}{2}} \qquad \text{as } x \to \infty. \tag{6.29}
$$

This formula actually provides a much better approximation of the gamma function than Stirling's formula. It was first established by Burnside [27] (see also Mortici [75]) and then rediscovered by Spouge [91]. In this section, we provide an analogue of Burnside's formula for any continuous <sup>p</sup>-type function when p = 0 and p = 1, and we note that such an analogue no longer exists when p ≥ 2.

Let us first state the following corollary, which particularizes the generalized Stirling formula when the function <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. This corollary actually follows immediately from (6.11) and (6.21).

**Corollary 6.18** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0*. Then*

$$
\Sigma \mathbf{g}(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x} + \mathbf{l}} \Sigma \mathbf{g}(t) \, dt \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty \dots
$$

*Equivalently,*

$$
\Sigma \mathbf{g}(\mathbf{x}) - \int\_1^\mathbf{x} \mathbf{g}(t) \, dt \to \sigma \mathbf{[g]} \qquad a \mathbf{s} \to \infty \dots
$$

Corollary 6.18 tells us that, when <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0, the function g(x) coincides asymptotically with its trend (i.e., the integral (6.12)) and, in a sense, behaves asymptotically like the antiderivative of function g.

It is natural to think that a more accurate trend of g can be obtained by considering the centered version of the integral (6.12), namely

$$\int\_{\frac{1}{\lambda - \frac{1}{2}}}^{\chi + \frac{1}{2}} \Sigma g(t) \, dt \, = \,\sigma[g] + \int\_{1}^{\chi - \frac{1}{2}} g(t) \, dt, \qquad \chi > \frac{1}{2}.$$

On this matter, in the following proposition we provide a double inequality that shows that g(x) coincides asymptotically with this latter trend whenever g lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> or in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*1. However, it is not difficult to see that in general this result no longer holds when <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup>*D*<sup>2</sup> <sup>∩</sup>*K*2. The logarithm of the Barnes G-function (see Sect. 10.5) could serve as an example here.

**Proposition 6.19** *Let* <sup>p</sup> ∈ {0, <sup>1</sup>}*,* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*, and* x > <sup>0</sup> *be so that* <sup>g</sup> *is* p*-convex or* p*-concave on* [x,∞)*. Then*

$$\left| \left\| \mathbb{E} \mathbf{g} \left( \mathbf{x} + \frac{1}{2} \right) - \int\_{\mathbf{x}}^{\mathbf{x} + 1} \mathbb{E} \mathbf{g}(t) \, dt \right| \right| \le \left| J^{p+1} [\Sigma \mathbf{g}](\mathbf{x}) \right| \le \overline{G}\_p |\Delta^p \mathbf{g}(\mathbf{x})|.$$

*In particular,*

$$
\Sigma g(\mathbf{x}) - \int\_{\mathbf{x} - \frac{1}{2}}^{\mathbf{x} + \frac{1}{2}} \Sigma g(t) \, dt \to 0 \qquad \text{as } \mathbf{x} \to \infty,
$$

*or equivalently,*

$$
\Sigma g(\mathbf{x}) - \int\_1^{\mathbf{x} - \frac{1}{2}} g(t) \, dt \to \sigma \, [\mathbf{g}] \qquad \text{as } \mathbf{x} \to \infty \,\, \mathbf{x}
$$

*Proof* Using Corollary 6.12, we see that it is enough to prove the first inequality. Let

$$h(\boldsymbol{x}) := \Sigma \boldsymbol{g}\left(\boldsymbol{x} + \frac{1}{2}\right) - \int\_{\boldsymbol{x}}^{\boldsymbol{x}+1} \Sigma \boldsymbol{g}(t) \, dt.$$

Consider first the case when <sup>p</sup> <sup>=</sup> 0 and suppose for instance that <sup>g</sup> lies in *<sup>K</sup>*<sup>0</sup> <sup>+</sup>; hence g is decreasing on [x,∞). If h(x) ≥ 0, then we clearly have

$$|h(\mathbf{x})| = |h(\mathbf{x})| \le \left| \Sigma \mathbf{g}(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x} + \mathbf{l}} \Sigma \mathbf{g}(t) \, dt \right| = J^{\mathbf{l}}[\Sigma \mathbf{g}](\mathbf{x}).$$

If h(x) ≤ 0, then we have

$$|h(\mathbf{x})| = \int\_{\mathbf{x}}^{\mathbf{x}+\mathbf{l}} \Sigma \mathbf{g}(\mathbf{t}) \, dt - \Sigma \mathbf{g}\left(\mathbf{x} + \frac{\mathbf{l}}{2}\right) \\
\leq \int\_{\mathbf{x}}^{\mathbf{x} + \frac{\mathbf{l}}{2}} \Sigma \mathbf{g}(\mathbf{t}) \, dt - \frac{1}{2} \Sigma \mathbf{g}\left(\mathbf{x} + \frac{\mathbf{l}}{2}\right).$$

and it is geometrically clear that the latter quantity is less than <sup>J</sup> <sup>1</sup>[g](x).

Suppose now that <sup>p</sup> <sup>=</sup> 1 and for instance that <sup>g</sup> lies in *<sup>K</sup>*<sup>1</sup> <sup>+</sup>; hence g is concave on [x,∞). Applying the Hermite-Hadamard inequality to g on the interval [x,x+ 1], we obtain that h(x) ≥ 0. Applying the trapezoidal rule to g on the intervals [x,x <sup>+</sup> <sup>1</sup> <sup>2</sup> ] and [<sup>x</sup> <sup>+</sup> <sup>1</sup> <sup>2</sup> , x + 1], we obtain the following inequality

$$h(\mathbf{x}) \le \int\_{\boldsymbol{\chi}}^{\boldsymbol{\chi}+1} \boldsymbol{\Sigma} \mathbf{g}(t) \, dt - \frac{1}{2} \, \boldsymbol{\Sigma} \mathbf{g}(\mathbf{x}+1) - \frac{1}{2} \, \boldsymbol{\Sigma} \mathbf{g}(\boldsymbol{\chi}),$$

where the right-hand quantity is exactly <sup>−</sup><sup>J</sup> <sup>2</sup>[g](x). This completes the proof.

Applying Proposition 6.19 to the function g(x) = ln x with p = 1, we retrieve Burnside's formula (6.29). Thus, Proposition 6.19 gives an analogue of Burnside's formula for any continuous <sup>p</sup>-type function when p ∈ {0, 1}. It also shows that this new formula provides a better approximation than the generalized Stirling formula whenever <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> with <sup>p</sup> ∈ {0, <sup>1</sup>}.

#### **6.6 A General Asymptotic Equivalence**

The following result provides a sufficient condition for a continuous multiple log- type function to be asymptotically equivalent to its (possibly shifted) trend.

**Proposition 6.20** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom() *and let* <sup>a</sup> <sup>≥</sup> <sup>0</sup> *and* <sup>c</sup> <sup>∈</sup> <sup>R</sup>*. When* <sup>c</sup> <sup>+</sup> g *vanishes at infinity, we also assume that*

$$c + \Sigma g(n+1) \sim c + \Sigma g(n) \qquad \text{as } n \to \infty. \tag{6.30}$$

*Then we have*

$$c + \Sigma g(\mathbf{x} + a) \sim c + \int\_{\mathcal{X}}^{\mathbf{x} + \mathbf{l}} \Sigma g(t) \, dt \qquad \text{as } \mathbf{x} \to \infty. \tag{6.31}$$

*If* <sup>g</sup> *does not lie in <sup>D</sup>*−<sup>1</sup> <sup>N</sup> *, then we also have*

$$
\Sigma \mathbf{g}(\mathbf{x} + a) \sim \mathbf{c} + \int\_1^\chi \mathbf{g}(t) \, dt \qquad \text{as } \mathbf{x} \to \infty.
$$

*Proof* Let us first prove that (6.30) holds for any <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(), even if c + g does not vanish at infinity. Of course, this result clearly holds if g is eventually a polynomial (since so is g in this case). Thus, we will now assume that g is not eventually a polynomial.

Suppose first that <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> deg <sup>g</sup> <sup>=</sup> 0. If <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , then (6.30) follows immediately from (6.23). If <sup>g</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>N</sup> \ *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , then it is not integrable at infinity by the integral test for convergence. By the generalized Stirling formula (6.21), it follows that the eventually monotone sequence n → g(n) is unbounded. This sequence is actually eventually strictly monotone; indeed, otherwise the function g <sup>=</sup> <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> would vanish in any unbounded interval of <sup>R</sup>+, and hence would eventually be identically zero, a contradiction. We then obtain

$$\frac{c + \Sigma g(n+1)}{c + \Sigma g(n)} = 1 + \frac{g(n)}{c + \Sigma g(n)} \to 1 \qquad \text{as } n \to \infty, \text{ }$$

and hence (6.30) holds whenever p = 0.

Suppose now that <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> deg <sup>g</sup> <sup>≥</sup> 1. In this case, we have that pg lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. By the uniqueness Theorem 3.1, we also have

$$
\Delta^p \Sigma \mathcal{g} \;= \; c\_p + \Sigma \Delta^p \mathcal{g}
$$

for some cp <sup>∈</sup> <sup>R</sup>, and it is clear (by minimality of <sup>p</sup>) that this latter function cannot vanish at infinity. Moreover, we can show as above that the sequence <sup>n</sup> → pg(n) is eventually strictly monotone. In view of the first case, we then have

$$\frac{\Delta^p \Sigma \mathbf{g}(n+1)}{\Delta^p \Sigma \mathbf{g}(n)} = \frac{c\_p + \Sigma \Delta^p \mathbf{g}(n+1)}{c\_p + \Sigma \Delta^p \mathbf{g}(n)} \to 1 \qquad \text{as } n \to \infty.$$

Let us now show that the sequence

$$n \mapsto \frac{c + \Delta^{p-1} \Sigma g(n+1)}{c + \Delta^{p-1} \Sigma g(n)}$$

exists for large values of n and converges to 1. By minimality of p, the function p−1g lies in *<sup>D</sup>*<sup>2</sup> <sup>N</sup> \ *<sup>D</sup>*<sup>1</sup> <sup>N</sup> and hence the sequence <sup>n</sup> → p−1g(n) is unbounded. Moreover, we can show as above that this sequence is eventually strictly monotone. Hence, the sequence above eventually exists and, using the Stolz-Cesàro theorem (see Lemma 5.20), we have that

$$\lim\_{n \to \infty} \frac{c + \Delta^{p-1} \Sigma \mathbf{g}(n+1)}{c + \Delta^{p-1} \Sigma \mathbf{g}(n)} = \lim\_{n \to \infty} \frac{\Delta^p \Sigma \mathbf{g}(n+1)}{\Delta^p \Sigma \mathbf{g}(n)} = 1.1$$

Iterating this process, we finally see that condition (6.30) holds for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>.

We can now easily see that

$$c + \Sigma g(\mathbf{x} + a) \sim c + \Sigma g(\mathbf{x}) \qquad \text{as } \mathbf{x} \to \infty. \tag{6.32}$$

Indeed, this result clearly holds if both x and a are integers. For instance we have

$$c + \Sigma g(n+2) \sim c + \Sigma g(n+1) \sim c + \Sigma g(n) \qquad \text{as } n \to \infty.$$

Otherwise, assuming for instance that g is eventually increasing and nonnegative, for sufficiently large x we have

$$\frac{c + \Sigma g(\lfloor \chi + a \rfloor)}{c + \Sigma g(\lceil \chi \rceil)} \le \frac{c + \Sigma g(\chi + a)}{c + \Sigma g(\chi)} \le \frac{c + \Sigma g(\lceil \chi + a \rceil)}{c + \Sigma g(\lfloor \chi \rfloor)},$$

and (6.32) then follows by the squeeze theorem.

Finally, assuming again that g is eventually increasing and nonnegative, for sufficiently large x we have

$$1 = \frac{c + \Sigma g(\mathbf{x})}{c + \Sigma g(\mathbf{x})} \le \frac{c + \int\_{\chi}^{\chi + 1} \Sigma g(t) \, dt}{c + \Sigma g(\mathbf{x})} \le \frac{c + \Sigma g(\mathbf{x} + 1)}{c + \Sigma g(\mathbf{x})}$$

and, using again the squeeze theorem, we immediately obtain the first claimed asymptotic equivalence.

Now, if <sup>g</sup> does not lie in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , then g(x) tends to infinity as x → ∞. Using (6.11), we then have

$$\frac{c + \int\_1^\chi \mathbf{g}(t) \, dt}{\Sigma \mathbf{g}(\mathbf{x} + a)} = \frac{c - \sigma[\mathbf{g}]}{\Sigma \mathbf{g}(\mathbf{x} + a)} + \frac{\int\_\chi^{\chi + 1} \Sigma \mathbf{g}(t) \, dt}{\Sigma \mathbf{g}(\mathbf{x} + a)} \to 1 \qquad \text{as } \mathbf{x} \to \infty, \mathbf{x}$$

which completes the proof.

*Remark 6.21* Let us show that the assumption on the function c + g cannot be ignored in Proposition 6.20. Consider the functions <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> and <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equations

$$f(\mathbf{x}) = \frac{\mathbf{x} - 1}{2^{\mathbf{x}}} \left( 1 + \frac{1}{4} \sin \mathbf{x} \right) \quad \text{and} \quad \mathbf{g}(\mathbf{x}) = \Delta f(\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

It is clear that <sup>f</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>N</sup> and that <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> . Moreover, it is not difficult to see that the inequalities

$$-2^{\chi+2}f'(x) \ge x \qquad \text{and} \qquad 2^{\chi+4}g'(x) \ge x$$

$$\mathbb{D}$$

eventually hold, which shows that both <sup>f</sup> and <sup>g</sup> lie in *<sup>K</sup>*0. By the uniqueness theorem it follows that f = g. However, we can readily see that the sequence

$$n \mapsto \frac{\Sigma \lg(n+1)}{\Sigma \lg(n)}$$

does not converge, which shows that (6.30) does not hold when c = 0. It is then possible to show that the equivalence (6.31) does not hold either.

Now, to see that the last asymptotic equivalence in Proposition 6.20 need not hold if <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , take for instance

$$\log(\mathbf{x}) = \frac{2}{(\mathbf{x}+\mathbf{l})(\mathbf{x}+\mathbf{2})} \qquad \text{and} \qquad \mathbb{E}\mathbf{g}(\mathbf{x}) = \frac{\mathbf{x}-1}{\mathbf{x}+1}.$$

We then have

$$\lim\_{\chi \to \infty} \frac{c + \int\_1^\chi g(t) \, dt}{\Sigma g(\chi + a)} = c + \ln \frac{9}{4} \,.$$

♦

#### **6.7 The Gregory Summation Formula Revisited**

Let <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*0, <sup>q</sup> <sup>∈</sup> <sup>N</sup>, and let 1 <sup>≤</sup> <sup>m</sup> <sup>≤</sup> <sup>n</sup> be integers. Integrating both sides of identity (3.8) on x ∈ (0, 1), we immediately obtain the following identity

$$\int\_{m}^{n} \mathbf{g}(t) \, dt = \sum\_{k=m}^{n-1} \mathbf{g}(k) + \sum\_{j=1}^{q} G\_{j}(\Delta^{j-1} \mathbf{g}(n) - \Delta^{j-1} \mathbf{g}(m)) + R\_{m,n}^{q}[\mathbf{g}], \quad (6.33)$$

where

$$R\_{m,n}^q[\mathbf{g}] = \int\_0^1 \sum\_{k=m}^{n-1} \rho\_k^{q+1}[\mathbf{g}](t) \, dt \, = \int\_0^1 (f\_m^q[\mathbf{g}](t) - f\_n^q[\mathbf{g}](t)) \, dt. \tag{6.34}$$

Identity (6.33) is nothing other than *Gregory's summation formula* (see, e.g., [17, 50, 73]) with an integral form of the remainder. Note that, just like identity (2.10), Eq. (6.33) is a pure identity in the sense that it holds without any restriction on the form of g(x), except that here we asked g to be continuous.

Combining (6.14) with (6.34) we immediately see that this identity can be simply written in terms of the generalized Binet function as

$$\sum\_{k=m}^{n-1} J^{q+1}[g](k) + R\_{m,n}^q[g] \, \, = \, 0 \,\,. \tag{6.35}$$

Equivalently, if <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(g), using (6.19) and (6.34) we see that this identity can also take the form

$$J^{q+1}[\Sigma g](n) - J^{q+1}[\Sigma g](m) + R^q\_{m,n}[g] \ = \begin{array}{c} 0 \ . \end{array} \tag{6.36}$$

The next lemma, which is yet another straightforward consequence of Lemma 2.7, provides an upper bound for <sup>|</sup>R<sup>q</sup> m,n[g]| when g is q-convex or qconcave on [m,∞). Under this latter assumption, we can then use Gregory's formula (6.33) as a quadrature method for the numerical computation of the integral of g over the interval [m, n).

**Lemma 6.22** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *be so that* <sup>g</sup> *is* q*-convex or* q*-concave on* [m,∞)*. Then, for any integer* n ≥ m*, we have*

$$|R\_{m,n}^q[\mathbf{g}]| \le \overline{G}\_q |\Delta^q \mathbf{g}(n) - \Delta^q \mathbf{g}(m)|.\tag{6.37}$$

*Proof* This result is an immediate consequence of Lemma 2.7. Indeed, we can write

$$|R\_{m,n}^{q}[g]| = \left| \sum\_{k=m}^{n-1} \int\_0^1 \rho\_k^{q+1} [g](t) \, dt \right| \le \overline{G}\_q \left| \sum\_{k=m}^{n-1} \Delta^{q+1} g(k) \right|,$$

where the latter sum clearly telescopes to qg(n) <sup>−</sup> qg(m).

*Example 6.23* Let us compute numerically the integral

$$I = \int\_{\pi}^{2\pi} \ln x \, dx = \, 4.809854526737\dots$$

using Gregory's summation formula (6.33) and the upper bound (6.37) of its remainder. Using an appropriate linear change of variable, we obtain

$$I \;= \int\_{1}^{n} \mathbf{g}(t) \, dt,\qquad \text{where}\quad \mathbf{g}(t) \;= \frac{\pi}{n-1} \ln \left( \frac{\pi}{n-1} (t-1) + \pi \right).$$

Taking n = 20 and q = 10 for instance, we obtain

$$I \approx \sum\_{k=1}^{19} g(k) + \sum\_{j=1}^{10} G\_j(\Delta^{j-1} g(20) - \Delta^{j-1} g(1)) = 4.809854526746\dots$$

and (6.37) gives <sup>|</sup>R<sup>10</sup> <sup>1</sup>,20[g]| ≤ <sup>5</sup>.<sup>9</sup> <sup>×</sup> <sup>10</sup>−11. ♦

In the following result, we give sufficient conditions on the function g for the sequence <sup>q</sup> → <sup>R</sup><sup>q</sup> m,n[g] to converge to zero. Gregory's formula (6.33) then takes a special form.

**Proposition 6.24** *Let* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*0∩*K*∞*,* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, and let* <sup>1</sup> <sup>≤</sup> <sup>m</sup> <sup>≤</sup> <sup>n</sup> *be integers. Suppose that, for every integer* q ≥ p*, the function* g *is* q*-convex or* q*-concave on* [m,∞)*. Suppose also that the sequence* <sup>q</sup> → qg(n) <sup>−</sup> qg(m) *is bounded. Then we have*

$$R\_{m,n}^q[g] \to \begin{array}{c} 0 \\ \end{array} \quad \text{as } q \to\_{\mathbb{N}} \infty,$$

*or equivalently,*

$$\int\_{m}^{n} \mathbf{g}(t) \, dt \, = \sum\_{k=m}^{n-1} \mathbf{g}(k) + \sum\_{j=1}^{\infty} G\_{j} (\Delta^{j-1} \mathbf{g}(n) - \Delta^{j-1} \mathbf{g}(m)) \,.$$

*If* <sup>g</sup> *lies in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(g)*, then the latter identity also takes the form*

$$
\left(\Sigma \mathbf{g}(n) - \Sigma \mathbf{g}(m)\right) = \int\_m^n \mathbf{g}(t) \, dt - \sum\_{j=1}^\infty G\_j(\Delta^{j-1} \mathbf{g}(n) - \Delta^{j-1} \mathbf{g}(m)) \, dt
$$

*Proof* Under the assumptions of this proposition, the sequence <sup>q</sup> → <sup>R</sup><sup>q</sup> m,n[g] converges to zero by Lemma 6.22. (Recall that the sequence n → Gn converges to zero.) The result then immediately follows from Gregory's formula (6.33). The last part then follows from identity (5.2).

*Example 6.25* Taking g(x) = ln x and m = p = 1 in Proposition 6.24, we obtain the following identity

$$\ln n! = 1 - n + \left(n + \frac{1}{2}\right)\ln n + \frac{1}{12}\ln\left(\frac{n+1}{2n}\right) - \frac{1}{24}\ln\left(\frac{4n(n+2)}{3(n+1)^2}\right) + \dotsb$$

which holds for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗. ♦

**A Geometric Interpretation of Gregory's Formula** For any <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup>, we let Pq [g]: [1,∞) <sup>→</sup> <sup>R</sup> denote the piecewise polynomial function whose restriction to any interval [k, k <sup>+</sup> <sup>1</sup>), with <sup>k</sup> <sup>∈</sup> <sup>N</sup>∗, is the interpolating polynomial of g with nodes at k, k + 1,...,k + q. That is,

$$\overline{P}\_q[\mathbf{g}](\mathbf{x}) = P\_q[\mathbf{g}](k, k+1, \dots, k+q; \mathbf{x}), \qquad \mathbf{x} \in [k, k+1), \tag{6.38}$$

or equivalently, using (2.9),

$$\overline{P}\_q[g](\mathbf{x}) = P\_q[g](\lfloor \mathbf{x} \rfloor, \lfloor \mathbf{x} \rfloor + 1, \dots, \lfloor \mathbf{x} \rfloor + q; \mathbf{x})$$

$$= \sum\_{j=0}^q \binom{\lfloor \mathbf{x} \rfloor}{j} \Delta^j g(\lfloor \mathbf{x} \rfloor), \qquad \mathbf{x} \ge 1.$$

$$\Diamond$$

In the following proposition, we provide an integral expression for the remainder Rq m,n[g] in terms of the function Pq [g].

**Proposition 6.26** *For any* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*0*, any* <sup>q</sup> <sup>∈</sup> <sup>N</sup>*, and any integers* <sup>1</sup> <sup>≤</sup> <sup>m</sup> <sup>≤</sup> <sup>n</sup>*, we have*

$$R\_{m,n}^q[\mathbf{g}] \;= \int\_m^n (\mathbf{g}(t) - \overline{P}\_q[\mathbf{g}](t)) \, dt. \tag{6.39}$$

*Proof* Using (2.11) and (6.14) we then obtain

$$-J^{q+1}[\mathbf{g}](k) \ = \int\_0^1 \rho\_k^{q+1}[\mathbf{g}](t) \, dt \ = \int\_k^{k+1} (\mathbf{g}(t) - \overline{P}\_q[\mathbf{g}](t)) \, dt.$$

The result then follows from (6.35).

Proposition 6.26 immediately provides an interesting interpretation of Gregory's formula as a quadrature method. It actually shows that Gregory's formula approximates the integral of g over the interval [m, n) by replacing g with the piecewise polynomial function Pq [g]. In particular, the remainder <sup>R</sup><sup>q</sup> m,n[g] reduces to zero whenever g is a polynomial of degree less than or equal to q.

We also observe that Gregory's formula reduces to the "left" rectangle method (left Riemann sum) when q = 0, and the trapezoidal rule when q = 1. However, it does not reduce to Simpson's rule when q = 2. In fact, Gregory's formula does not correspond to a Newton-Cotes quadrature rule when q ≥ 2.

Now, if g is q-convex or q-concave on [m,∞), then for any k ∈ {m, m + 1,...,n − 1} and any t ∈ [0, 1), using Lemma 2.7 and identity (2.11) we obtain

$$0 \le \pm (-1)^q \rho\_k^{q+1} \text{[g]}(t) = \pm (-1)^q \left( \text{g}(k+t) - \overline{P}\_q \text{[g]}(k+t) \right)^q$$

where ± stands for 1 or −1 according to whether g is q-convex or q-concave on [m,∞). This observation provides the following additional geometric interpretation. It shows that, on the interval [k, k + 1), the graph of g lies over or under that of Pq [g] according to whether ±(−1) <sup>q</sup> is 1 or <sup>−</sup>1. As an immediate consequence, the quantity <sup>|</sup><sup>J</sup> <sup>q</sup>+1[g](k)<sup>|</sup> is precisely the surface area between both graphs over the interval [k, k <sup>+</sup> <sup>1</sup>) while the remainder <sup>|</sup>R<sup>q</sup> m,n[g]| is the surface area between both graphs over the interval [m, n).

*Example 6.27* With the function g(x) = ln x and the parameter q = 1 we associate the piecewise linear function

$$\overline{P}\_{\mathbb{I}}[\emptyset](x) = \ln|x\downarrow + (x - \lfloor x \rfloor) \ln\left(1 + \frac{1}{\lfloor x \rfloor}\right) \dots$$

,

Since g is concave, for any integer n ≥ 1 the graph of g on [1, n) lies over (or on) that of P1[g], which is the polygonal line through the points (k, g(k)) for k = 1,...,n. The value (see (6.36))

$$R\_{1,n}^{\mathbb{I}}[g] = J(\mathbb{I}) - J(n) = -\ln \Gamma(n) + \left(n - \frac{1}{2}\right) \ln n - n + 1\,,$$

where J (x) is Binet's function defined in (6.13), is then nothing other than the remainder in the trapezoidal rule on [1, n) with the integer nodes 1,...,n. Geometrically, it measures the surface area between the graph of g and the polygonal line. ♦

**Alternative Integral Form of the Remainder** The following proposition yields an alternative integral form of the remainder R<sup>q</sup> m,n[g] when <sup>g</sup> lies in *<sup>C</sup>*q+<sup>1</sup> for some <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗. Consider first the (kernel) function <sup>K</sup><sup>q</sup> m,n : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$K\_{m,n}^{q}(t) \;= \frac{1}{q!} R\_{m,n}^{q}[(\cdot - t)\_{+}^{q}] \qquad \text{ for } t \in \mathbb{R}\_{+}... $$

It is not difficult to show that this function lies in *<sup>C</sup>*q−<sup>1</sup> and has the compact support [m, n + q − 1].

**Proposition 6.28** *Suppose that* <sup>g</sup> *lies in <sup>C</sup>*q+<sup>1</sup> *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and let* <sup>1</sup> <sup>≤</sup> <sup>m</sup> <sup>≤</sup> <sup>n</sup> *be integers. Then we have*

$$R\_{m,n}^q[g] = \int\_m^{n+q-1} K\_{m,n}^q(t) \, D^{q+1} g(t) \, dt \, .$$

*Proof* By Taylor's theorem, the following identity

$$g(\mathbf{x}) = \,^p P\_q(\mathbf{x}) + \int\_m^{n+q-1} \frac{(\mathbf{x} - t)\_+^q}{q!} \,^p D^{q+1} g(t) \, dt$$

holds on the interval [m, n + q − 1] for some polynomial Pq of degree less than or equal to q. The result then follows from the definition of the remainder R<sup>q</sup> m,n[g] and the fact that R<sup>q</sup> m,n[Pq ] = 0.

Interestingly, if the function K<sup>q</sup> m,n does not change in sign (and we conjecture that (−1)<sup>q</sup> <sup>K</sup><sup>q</sup> m,n is nonnegative), then by the mean value theorem for definite integrals the remainder also takes the form

$$R\_{m,n}^q[g] = \left. D^{q+1}g(\xi) \int\_m^{n+q-1} K\_{m,n}^q(t) \, dt \right|$$

for some ξ ∈ [m, n + q − 1].

*Remark 6.29* We observe that Jordan [50, p. 285] claimed that

$$\text{\textquotedblleft } R\_{m,n}^{q}[\mathfrak{g}] \text{\textquotedblright} = \text{\textquotedblleft } G\_{q+1}(n-m)\,\Delta^{q+1}\mathfrak{g}(\xi) \text{\textquotedblleft }$$

for some <sup>ξ</sup> <sup>∈</sup> (m, n). However, taking for instance g(x) <sup>=</sup> <sup>x</sup><sup>2</sup> and (q, m, n) <sup>=</sup> (0, 1, 2), we can see that this form of the remainder is not correct. Nevertheless, several examples suggest that Jordan's statement could possibly be corrected by assuming that ξ ∈ (m − 1, n − 1). This question thus remains open. ♦

**General Gregory's Formula and Euler-Maclaurin's Formula** The following proposition provides Gregory's formula in its general form using our integral expression for the remainder.

**Proposition 6.30 (General Form of Gregory's Formula)** *Let* <sup>a</sup> <sup>∈</sup> <sup>R</sup>*,* n, q <sup>∈</sup> <sup>N</sup>*,* h > <sup>0</sup>*, and* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*0([a,∞))*. Then we have*

$$\begin{aligned} \frac{1}{h} \int\_a^{a+nh} f(t) \, dt &= \sum\_{k=0}^{n-1} f(a+kh) \\ &+ \sum\_{j=1}^q G\_j \left( (\Delta\_{\left[h\right]}^{j-1} f)(a+nh) - (\Delta\_{\left[h\right]}^{j-1} f)(a) \right) + R\_{1,n+1}^q \left[ f\_a^h \right], \end{aligned}$$

*where*

$$\mathcal{R}\_{1,n+1}^{q}[f\_a^{h}] = \int\_0^1 \sum\_{k=1}^n \rho\_k^{q+1} [f\_a^{h}](t) \, dt \quad \text{and} \quad f\_a^{h}(\mathbf{x}) = f(a + (\mathbf{x} - \mathbf{l})h).$$

*Moreover, if* f *is* q*-convex or* q*-concave on* [a,∞)*, then*

$$|\mathcal{R}\_{1,n+1}^q[f\_a^{h}]| \le \overline{G}\_q \left| (\Delta\_{[h]}^q f)(a+nh) - (\Delta\_{[h]}^q f)(a) \right|.$$

*Here,* [h] *denotes the forward difference operator with step* h > 0*.*

*Proof* This formula can be obtained immediately from (6.33) and (6.34) replacing n with n + 1 and then setting m = 1 and g(x) = f (a + (x − 1)h). The last part follows from Lemma 6.22.

The general Gregory formula is often compared with the corresponding *Euler-Maclaurin summation formula*. We will use the latter in Chap. 8, so we now state it in its general form (for background see, e.g., Apostol [8], Gel'fond [39], Lampret [62], Mariconda and Tonolo [67], and Srivastava and Choi [93]).

Recall first that the *Bernoulli numbers* B0, B1, B2,... are defined implicitly by the single equation (see, e.g., Gel'fond [39, Chapter 4] and Graham et al. [41, p. 284])

$$\sum\_{j=0}^{m} \binom{m+1}{j} \mathcal{B}\_{j} \, : \, \quad \, m \in \mathbb{N} \,. \tag{6.40}$$

The first few values of Bn are: 1, <sup>−</sup><sup>1</sup> 2 , 1 <sup>6</sup> , <sup>0</sup>, <sup>−</sup> <sup>1</sup> <sup>30</sup> , 0,.... Recall also that, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the <sup>n</sup>th degree Bernoulli polynomial Bn(x) is defined by the equation

$$B\_n(\mathbf{x}) = \sum\_{k=0}^n \binom{n}{k} B\_{n-k} \mathbf{x}^k \qquad \text{for } \mathbf{x} \in \mathbb{R}.$$

**Proposition 6.31 (Euler-Maclaurin's Formula)** *Let* <sup>N</sup> <sup>∈</sup> <sup>N</sup>∗*,* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*1([a, b])*, and* h = (b − a)/N*, for some real numbers* a<b*. Then we have*

$$\begin{aligned} h \sum\_{k=0}^{N} f(a+kh) &= \int\_{a}^{b} f(\mathbf{x}) \, d\mathbf{x} + \frac{h}{2} \left( f(a) + f(b) \right) \\ &+ h^2 \int\_{0}^{N} B\_1(\{t\}) \, f'(a+th) \, dt \dots \end{aligned}$$

*If, in addition,* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>2</sup>q([a, b]) *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*, then*

$$\begin{aligned} h \sum\_{k=0}^{N} f(a+kh) &= \int\_{a}^{b} f(\mathbf{x}) \, d\mathbf{x} + \frac{h}{2} \left( f(a) + f(b) \right) \\ &+ \sum\_{j=1}^{q} h^{2j} \frac{B\_{2j}}{(2j)!} \left( f^{(2j-1)}(b) - f^{(2j-1)}(a) \right) + R \dots \end{aligned}$$

*where*

$$\mathcal{R} = -h^{2q+1} \int\_0^N \frac{B\_{2q}(\{t\})}{(2q)!} \, f^{(2q)}(a+th) \, dt$$

*and*

$$|R| \le \left| h^{2q} \frac{|B\_{2q}|}{(2q)!} \int\_a^b |f^{(2q)}(\mathbf{x})| \, d\mathbf{x} \right| $$

*Here* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*k([a, b]) *means that* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*k(I ) *for some open interval* <sup>I</sup> *containing* [a, b]*.*

*Remark 6.32* We observe (to paraphrase Jordan [50, p. 285]) that Euler-Maclaurin's formula is more advantageous than Gregory's formula if we deal with functions whose derivatives are less complicated than their differences. However, there are functions for which Euler-Maclaurin's formula leads to divergent series while the corresponding Gregory's formula-based series (see Proposition 6.24) are convergent. For instance, this may be due to the fact that, for any x > 0, the sequence <sup>n</sup> → <sup>D</sup><sup>n</sup> <sup>1</sup> <sup>x</sup> is unbounded while the sequence <sup>n</sup> → <sup>n</sup> <sup>1</sup> <sup>x</sup> converges to zero. ♦

#### **6.8 Generalized Euler's Constant**

In this section, we introduce and discuss an analogue of Euler's constant for any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(). We first consider a lemma.

**Lemma 6.33** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*. Then the sequence* <sup>n</sup> → <sup>R</sup><sup>p</sup> m,n[g] *for* <sup>n</sup> <sup>≥</sup> <sup>m</sup> *converges. Denoting its limit by* <sup>R</sup><sup>p</sup> m,∞[g]*, we have*

$$R\_{m,\infty}^{p}[\mathbf{g}] = J^{p+1}[\Sigma \mathbf{g}](m).$$

*Proof* The proof is an immediate consequence of (6.36) and the generalized Stirling formula (Theorem 6.13).

Under the assumptions of Lemma 6.33, using (6.34), (6.35), and (6.39) we immediately obtain the following identities

$$\mathcal{R}\_{m,\infty}^{p}[\![g]\!] = \sum\_{k=m}^{\infty} \int\_{0}^{1} \rho\_{k}^{p+1}[\![g]\!](t) \, dt \ = \int\_{0}^{1} \sum\_{k=m}^{\infty} \rho\_{k}^{p+1}[\![g]\!](t) \, dt$$

$$= \int\_{0}^{1} \left(f\_{m}^{p}[\![g]\!](t) - \Sigma \!{g}(t)\right) dt$$

and

$$R\_{m,\infty}^{p}[\mathbf{g}] = \ -\sum\_{k=m}^{\infty} J^{p+1}[\mathbf{g}](k) \ = \int\_{m}^{\infty} (\mathbf{g}(t) - \overline{P}\_{p}[\mathbf{g}](t)) \, dt. \tag{6.41}$$

Moreover, if g is p-convex or p-concave on [m,∞), the inequality (6.37) reduces to

$$\left| \left| R\_{m,\infty}^{p} [\![g]\!] \right| \right| = \left| J^{p+1} [\![\Sigma g](m) \!] \right| \le \overline{G}\_{p} \left| \Delta^{p} \!g(m) \!\right|,\tag{6.42}$$

which is also an immediate consequence of Corollary 6.12 (where a tighter inequality is also provided when p ≥ 1).

Let us now provide a geometric interpretation of the remainder R<sup>p</sup> m,∞[g] when g is p-convex or p-concave on [m,∞). Suppose for instance that g is p-convex on [m,∞). The interpretation of Gregory's formula discussed in Sect. 6.7 shows that, on the whole of the interval [m,∞), the graph of g lies over or under that of Pp[g] according to whether <sup>p</sup> is even or odd, and the remainder <sup>|</sup>R<sup>p</sup> m,∞[g]| is precisely the surface area between both graphs. Interestingly, the fact that this surface area converges to zero as m →<sup>N</sup> ∞ by (6.42) provides a direct interpretation of the restriction of the generalized Stirling formula to integer values.

This interpretation is particularly visual when p = 0 or p = 1. Consider for instance the case p = 1 and suppose that g is concave on [m,∞) (e.g., g(x) = ln x). Then, the graph of g on [m,∞) lies over (or on) the polygonal line through the points (k, g(k)) for all integers <sup>k</sup> <sup>≥</sup> <sup>m</sup>. The value <sup>|</sup>R<sup>p</sup> m,∞[g]| is then the surface area between the graph of g and this polygonal line. It is also the absolute value of the remainder in the trapezoidal rule on [m,∞).

We are now able to introduce an analogue of Euler's constant for any function g lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(). We call it the *generalized Euler constant*.

**Definition 6.34 (Generalized Euler's Constant)** The *generalized Euler constant* associated with a function <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom() is the number

$$\gamma[\mathfrak{g}] = \begin{array}{c} -R^p\_{\mathfrak{l},\infty}[\mathfrak{g}] \ = \ -J^{p+1}[\Sigma \mathfrak{g}](\mathfrak{l}) \ , \end{array}$$

where p = 1 + deg g.

For instance, if <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0, then using (6.33) we obtain

$$\mathcal{Y}[\mathcal{g}] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathcal{g}(k) - \int\_1^n \mathcal{g}(t) \, dt \right) \tag{6.43}$$

$$= \sum\_{k=1}^{\infty} \left( \mathcal{g}(k) - \int\_k^{k+1} \mathcal{g}(t) \, dt \right),$$

and this value represents the remainder in the "left" rectangle method on [1,∞) with the integer nodes <sup>k</sup> <sup>=</sup> <sup>1</sup>, <sup>2</sup>,.... Similarly, if <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and deg <sup>g</sup> <sup>=</sup> 0, then we get

$$\mathcal{Y}\{\mathbf{g}\} = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}(k) - \int\_1^n \mathbf{g}(t) \, dt + \frac{1}{2} \mathbf{g}(n) - \frac{1}{2} \mathbf{g}(1) \right) \qquad (6.44)$$

$$= \sum\_{k=1}^{\infty} \left( \mathbf{g}(k) - \int\_k^{k+1} \mathbf{g}(t) \, dt + \frac{1}{2} \Delta \mathbf{g}(k) \right),$$

and this value represents the remainder in the trapezoidal rule on [1,∞) with the integer nodes k = 1, 2,....

Thus defined, the number <sup>γ</sup> [g] generalizes to any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom() not only the classical Euler constant <sup>γ</sup> (obtained when g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> ) but also the generalized Euler constant γ [g] associated with a positive and strictly decreasing function g as defined in (6.43) (see, e.g., Apostol [8] and Finch [37, Section 1.5.3]). Moreover, as we will see in Sect. 8.2, this number plays a central role in the Weierstrassian form of g (which also justifies the choice m = 1 in the definition of γ [g]).

The definition of γ [g] does not require g to be p-convex or p-concave on [1,∞). However, if this latter condition holds, then by (6.42) we have the inequality

$$|\boldsymbol{\gamma}[\mathbf{g}]| \le \overline{\boldsymbol{G}}\_p |\boldsymbol{\Delta}^p \mathbf{g}(\mathbf{l})| \tag{6.45}$$

and by Corollary 6.12 the following tighter inequality also holds when p ≥ 1

$$|\boldsymbol{\gamma}\lg\boldsymbol{\mathfrak{k}}| \le \int\_0^1 \left| \binom{t-1}{p} \right| \left| \Delta^{p-1} \boldsymbol{g}(t+1) - \Delta^{p-1} \boldsymbol{g}(1) \right| \, dt. \tag{6.46}$$

We also provide and discuss finer bounds for γ [g] in Appendix E (see Remark E.7).

*Example 6.35* If g(x) = 1/x, then γ [g] reduces to Euler's constant γ , as expected. Indeed, in this case we obtain

$$\gamma[\mathfrak{g}] = \, \, -J^{\, \,}[\psi](1) \, \, = \, \, \underline{\chi}.$$

Using (6.43), we then retrieve the well-known formula

$$\mathcal{Y} = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \frac{1}{k} - \ln n \right)$$

and its classical geometric interpretation. If g(x) = ln x, then the associated generalized Euler constant is

$$\gamma[\text{g}] = -J^2[\text{ln}\diamond\Gamma](\text{l}) = -J(\text{l}) = -\text{l} + \frac{1}{2}\ln(2\pi) \approx -0.081$$

and we can see that it coincides with the associated asymptotic constant σ[g] (see Example 6.5). Moreover, using (6.44) we obtain the following formula

$$\gamma\{\mathbf{g}\} = \lim\_{n \to \infty} \left( \ln n! + n - 1 - \left( n + \frac{1}{2} \right) \ln n \right).$$

The value |γ [g]| = −γ [g] can then be interpreted as the surface area between the graph of g on the unbounded interval [1,∞) and the polygonal line through the points (k, g(k)) for all integers k ≥ 1. Moreover, Eq. (6.46) provides the following inequality

$$|\mathcal{V}[\mathbf{g}]| \le \ln 4 - \frac{5}{4} \approx 0.14.$$

♦

**A Conversion Formula Between** *γ* **[***g***] and** *σ***[***g***]** The following proposition, which immediately follows from (6.18) and the identity

$$\mathcal{Y}[\mathbf{g}] = \ -J^{p+1}[\Sigma \mathbf{g}](1),$$

shows how the numbers γ [g] and σ[g] are related and provides an alternative way to compute the value of γ [g].

**Proposition 6.36** *For any function* <sup>g</sup> *lying in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom()*, we have*

$$
\sigma[\mathbf{g}] = \left. \mathcal{V}[\mathbf{g}] + \sum\_{j=1}^{p} G\_j \, \Delta^{j-1} \mathbf{g}(\mathbf{l}), \right|
$$

*where* p = 1 + deg g*.*

**An Integral Form of** *γ* **[***g***]** The following proposition shows that the classical integral representation of the Euler constant

$$\mathcal{V} = \int\_1^\infty \left( \frac{1}{\lfloor t \rfloor} - \frac{1}{t} \right) dt$$

can be generalized to the constant <sup>γ</sup> [g] for any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom().

**Proposition 6.37** *For any* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*, where* <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> deg <sup>g</sup>*, we have*

$$\gamma[\mathbf{g}] = \int\_{1}^{\infty} \left( \sum\_{j=0}^{p} G\_{j} \Delta^{j} \mathbf{g}(\lfloor t \rfloor) - \mathbf{g}(t) \right) dt.$$

*In particular, when* deg g = −1*, we have*

$$\nu \lg \mathfrak{l} = \int\_{1}^{\infty} (\mathfrak{g}(\lfloor t \rfloor) - \mathfrak{g}(t)) \, dt.$$

*Proof* Using (6.16) and (6.41), we obtain

$$\mathbb{J}\eta[\mathbf{g}] = \sum\_{k=1}^{\infty} J^{p+1}[\mathbf{g}](k) = \sum\_{k=1}^{\infty} \left( \sum\_{j=0}^{p} G\_j \, \Delta^j \mathbf{g}(k) - \int\_{k}^{k+1} \mathbf{g}(t) \, dt \right),$$

which immediately provides the claimed formula.

**The Principal Indefinite Sum of the Generalized Binet Function** If <sup>g</sup> lies in *<sup>C</sup>*0<sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then the function <sup>J</sup> <sup>p</sup>+1[g] lies in *<sup>D</sup>*<sup>0</sup> <sup>R</sup> by Theorem 6.13, and hence so does

$$
\Delta J^{p+1}[\Sigma g] = J^{p+1}[g].
$$

If, in addition, <sup>J</sup> <sup>p</sup>+1[g] lies in *<sup>K</sup>*0, then by the uniqueness Theorem 3.1 we have that

$$
\Sigma J^{p+1}[\mathbf{g}] = J^{p+1}[\Sigma \mathbf{g}] - J^{p+1}[\Sigma \mathbf{g}](\mathbf{l})\dots
$$

Thus, if p = 1 + deg g, then we obtain the identity

$$
\left[\Sigma J^{p+1}\mathbf{[g]}\right] = J^{p+1}\left[\Sigma \mathbf{g}\right] + \boldsymbol{\chi}\left[\mathbf{g}\right].\tag{6.47}
$$

Now, suppose that we wish to show that a given function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> satisfies the equation <sup>f</sup> <sup>=</sup> <sup>J</sup> <sup>p</sup>+1[g] for some function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, with p = 1 + deg g. Using the uniqueness theorem with identity (6.47), we see that it is then enough to show that f <sup>=</sup> <sup>J</sup> <sup>p</sup>+1[g], f (1) = −<sup>γ</sup> [g], and <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*0.

*Example 6.38* Let <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be defined by the equation f (x) <sup>=</sup> ψ(x) <sup>−</sup> ln <sup>x</sup> for x > 0. To see that <sup>f</sup> <sup>=</sup> <sup>J</sup> <sup>1</sup>[ψ], it is enough to observe that <sup>f</sup> lies in *<sup>K</sup>*0, that f (1) = −γ , and that

$$
\Delta f(x) = \frac{1}{x} - \ln\left(1 + \frac{1}{x}\right)
$$

is precisely the function <sup>J</sup> <sup>1</sup>[g](x) when g(x) <sup>=</sup> <sup>1</sup>/x. ♦

*Example 6.39* Binet established the following integral representation (see, e.g., Sasvári [89])

$$J^2[\ln \circ \Gamma](\mathbf{x}) \, = \, J(\mathbf{x}) \, = \, \int\_0^\infty \left( \frac{1}{e^t - 1} - \frac{1}{t} + \frac{1}{2} \right) \frac{e^{-\chi t}}{t} \, dt \,.$$

Equation (6.47) then provides a possible (though not immediate) proof of this identity. ♦

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

#### **Chapter 7 Derivatives of Multiple log** *-***-Type Functions**

In this chapter, we discuss the higher order differentiability properties of g when <sup>g</sup> lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} for any p, r <sup>∈</sup> <sup>N</sup>. In particular, we show the fundamental fact that g also lies in *<sup>C</sup>*<sup>r</sup> and that the sequence <sup>n</sup> → <sup>D</sup>rf <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subinterval of <sup>R</sup><sup>+</sup> to <sup>D</sup>rg.

We also show that the functions (g)(r) and g(r) differ by a constant and we investigate some properties of these functions, including asymptotic behaviors and an analogue of Euler's series representation of the constant γ . We present and discuss a procedure, that we call the "elevator" method, to compute g by first evaluating g(r). Finally, we provide an alternative uniqueness result for higher order differentiable solutions to the equation f = g.

#### **7.1 Differentiability of Multiple log** *-***-Type Functions**

In this first section we investigate the higher order differentiability of the function g when <sup>g</sup> is of class *<sup>C</sup>*<sup>r</sup> for some <sup>r</sup> <sup>∈</sup> <sup>N</sup>. We start with the following preliminary, but very important result.

**Proposition 7.1** *If* <sup>g</sup> *lies in <sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*p∩*K*max{p,r} *for some* r, p <sup>∈</sup> <sup>N</sup>*, then the function* g *lies in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} *.*

*Proof* If <sup>g</sup> lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} for some r, p <sup>∈</sup> <sup>N</sup>, then clearly it also lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*max{p,r} <sup>∩</sup> *<sup>K</sup>*max{p,r} . By Proposition 5.6, g must lie in *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} . Let us now show that it also lies in *<sup>C</sup>*r.

We first observe that <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)<sup>+</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)+. This is clear if <sup>r</sup> <sup>≤</sup> <sup>p</sup> by Proposition 4.12. If r>p, then we first see that <sup>g</sup>(p) lies in *<sup>C</sup>*r−<sup>p</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*r−p, and hence also in *<sup>K</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*1. Using Proposition 4.16(b) repeatedly, we then see that <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*0.

By Proposition 5.18, g(r) must lie in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)++<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)+. Hence, there exists <sup>F</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> such that <sup>F</sup>(r) <sup>=</sup> g(r). By Proposition 4.12, <sup>F</sup> must lie in *<sup>K</sup>*max{p,r} . Now, we also have

$$D^r \Delta F = \Delta F^{(r)} = \Delta \Sigma g^{(r)} = g^{(r)},$$

which shows that (F + P ) = g for some polynomial P of degree at most r. By Corollary 4.6 we have that <sup>F</sup> <sup>+</sup> <sup>P</sup> lies in *<sup>K</sup>*max{p,r} . But then, by the uniqueness Theorem 3.1 we must have <sup>F</sup> <sup>+</sup> <sup>P</sup> <sup>=</sup> g <sup>+</sup> <sup>c</sup> for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>. Hence g lies in *<sup>C</sup>*r.

*Remark 7.2* If <sup>g</sup> lies in *<sup>C</sup>*r∩*D*p∩*K*<sup>p</sup> for some integers 0 <sup>≤</sup> r<p, then the function g lies in *<sup>C</sup>*<sup>r</sup> by Proposition 7.1. Interestingly, this result can also be established very easily using the following argument. Let <sup>n</sup> <sup>∈</sup> <sup>N</sup> be so that g is <sup>p</sup>-convex or <sup>p</sup>-concave on In <sup>=</sup> (n,∞). By Lemma 2.6(a), the function g lies in *<sup>C</sup>*p−<sup>1</sup>(In) and hence also in *<sup>C</sup>*r(In). Using (5.3), we immediately obtain that g lies in *<sup>C</sup>*r. ♦

We now present the following important and very surprising result. It shows that Proposition 7.1 no longer holds when r>p if we ask <sup>g</sup> to lie in *<sup>K</sup>*<sup>p</sup> instead of *<sup>K</sup>*max{p,r} . Since the proof is somewhat technical, we defer it to Appendix F.

**Proposition 7.3** *For every* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, there exists a function* <sup>g</sup> *lying in <sup>C</sup>*p+1∩*D*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> *for which* g *does not lie in <sup>C</sup>*p+1*. Thus, the operator does not always preserve differentiability when the order of differentiability exceeds that of convexity.*

*Proof* See Appendix F.

The next theorem is the central result of this section. In this theorem, we recall the fundamental result given in Proposition 7.1 and we show that, under the same assumptions, the sequence <sup>n</sup> → <sup>D</sup>rf <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subinterval of <sup>R</sup><sup>+</sup> to <sup>D</sup>rg. We first consider a technical lemma.

**Lemma 7.4** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some integers* <sup>0</sup> <sup>≤</sup> <sup>r</sup> <sup>≤</sup> <sup>p</sup>*. Then, for any* <sup>n</sup> <sup>∈</sup> <sup>N</sup> *the function* <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g] *lies in <sup>C</sup>*r*. Moreover, the sequence* <sup>n</sup> → <sup>D</sup>rρ<sup>p</sup>+<sup>1</sup> <sup>n</sup> [g] *converges uniformly on any bounded subset of* <sup>R</sup><sup>+</sup> *to zero.*

*Proof* By Proposition 7.1, we have that g lies in *<sup>C</sup>*r. Using (1.7) it is then clear that, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the function <sup>ρ</sup>p+<sup>1</sup> <sup>n</sup> [g] lies in *<sup>C</sup>*r.

Let us now show the second part of the lemma. Negating g if necessary, we may assume that it lies in *<sup>K</sup>*<sup>p</sup> <sup>−</sup>. In this case, <sup>D</sup>rg must lie in *<sup>K</sup>*p−<sup>r</sup> <sup>+</sup> by Proposition 4.12. Let n ≥ p be an integer so that g is p-concave on [n,∞). Using Proposition 2.1 repeatedly, we can see that there exist p − r + 1 pairwise distinct points ξ <sup>n</sup> <sup>0</sup> ,...,ξ <sup>n</sup> <sup>p</sup>−<sup>r</sup> ∈ (0, p) such that

$$D\_x^r P\_p[\Sigma \mathbf{g}](n, \dots, n+p; n+\mathbf{x}) \ = P\_{p-r}[D^r \Sigma \mathbf{g}](n + \xi\_0^n, \dots, n + \xi\_{p-r}^n; n+\mathbf{x}).$$

$$\square$$

Let us now fix x > 0. Using (2.11) and then (2.2) and (2.3), we obtain

$$\begin{aligned} \, \_D^r\rho\_n^{p+1}[\Sigma g](\mathbf{x}) &= \, D^r\Sigma g[n+\xi\_0^n, \dots, n+\xi\_{p-r}^n, n+\mathbf{x}] \prod\_{l=0}^{p-r} (\mathbf{x} - \xi\_l^n), \\ &= A\_n \prod\_{l=1}^{p-r} (\mathbf{x} - \xi\_l^n), \end{aligned}$$

if <sup>x</sup> = <sup>ξ</sup> <sup>n</sup> <sup>i</sup> for <sup>i</sup> <sup>=</sup> <sup>0</sup>,...,p <sup>−</sup> <sup>r</sup>, and <sup>D</sup>rρ<sup>p</sup>+<sup>1</sup> <sup>n</sup> [g](x) <sup>=</sup> 0, otherwise, where

$$A\_n = \,^D D^r \Sigma \boldsymbol{g}[n + \xi\_1^n, \dots, n + \xi\_{p-r}^n, n + \ge] - D^r \Sigma \boldsymbol{g}[n + \xi\_0^n, \dots, n + \xi\_{p-r}^n].$$

Now, on the one hand, we clearly have

$$\prod\_{l=1}^{p-r} |x - \xi\_l^n| \le c\_x^{p-r}.$$

where cx = max{p, x}. On the other hand, using Lemma 2.5 (with the fact that <sup>D</sup>rg lies in *<sup>K</sup>*p−<sup>r</sup> <sup>+</sup> ) and then (2.8), we obtain

$$|A\_n| \le \left| D^r \Sigma \mathbf{g}[n + c\_x, \dots, n + c\_x + p - r] - D^r \Sigma \mathbf{g}[n - p + r, \dots, n] \right|$$

$$= \frac{1}{(p - r)!} \left| \Delta^{p - r} D^r \Sigma \mathbf{g}(n + c\_x) - \Delta^{p - r} D^r \Sigma \mathbf{g}(n - p + r) \right|$$

$$= \frac{1}{(p - r)!} \sum\_{j = -p + r}^{c\_x - 1} |\Delta^{p - r} D^r \mathbf{g}(n + j)|.$$

Thus, for any bounded subinterval <sup>E</sup> of <sup>R</sup>+, we obtain the inequality

$$\sup\_{\alpha \in E} \left| D^r \rho\_n^{p+1} [\Sigma g](\alpha) \right| \le \frac{c\_{\sup E}^{p-r}}{(p-r)!} \sum\_{j=-p+r}^{c\_{\sup E}-1} |\Delta^{p-r} D^r g(n+j)|.$$

But the latter sum converges to zero as <sup>n</sup> <sup>→</sup><sup>N</sup> <sup>∞</sup> since <sup>D</sup>rg lies in *<sup>D</sup>*p−<sup>r</sup> <sup>∩</sup> *<sup>K</sup>*p−<sup>r</sup> by Proposition 4.12. This completes the proof of the lemma.

**Theorem 7.5 (Higher Order Differentiability of Multiple log** *-***-Type Functions)** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} *for some* r, p <sup>∈</sup> <sup>N</sup>*. The following assertions hold.*


*Proof* Assertion (a) immediately follows from Proposition 7.1. When r ≤ p, assertion (b) immediately follows from Lemma 7.4 and identity (5.4). Let us now assume that r>p. Using (5.4) and then (1.7) and (5.3) we obtain

$$D^r f\_n^p[g](\mathbf{x}) = D^r \Sigma g(\mathbf{x}) - D^r \Sigma g(\mathbf{x} + \mathbf{n}) = -\sum\_{k=0}^{n-1} g^{(r)}(\mathbf{x} + k).$$

By Proposition 4.12, we have that <sup>g</sup>(p) lies in *<sup>C</sup>*r−<sup>p</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*r−p, and hence also in *<sup>K</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*1. Using Proposition 4.16(b) repeatedly, we then see that <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*0. Thus, we can apply Theorem 3.12 to the function <sup>g</sup>(r), with <sup>f</sup> <sup>=</sup> <sup>D</sup>rg. Since <sup>f</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> by assertion (a) and Proposition 4.12, it follows from Theorem 3.12 that the sequence <sup>n</sup> → <sup>D</sup>rf <sup>p</sup> <sup>n</sup> [g] converges uniformly on <sup>R</sup><sup>+</sup> to <sup>f</sup> <sup>−</sup> f (∞) <sup>=</sup> <sup>f</sup> <sup>=</sup> <sup>D</sup>rg.

*Example 7.6* The function g(x) <sup>=</sup> ln <sup>x</sup> clearly lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*∞. Using Theorem 7.5, we now see that the function g(x) = ln -(x) lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup>*D*<sup>2</sup> <sup>∩</sup>*K*∞. Moreover, for any <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗, we have

$$\begin{aligned} \psi\_{r-1}(\mathbf{x}) &= D^r \ln \Gamma(\mathbf{x}) \ &= \lim\_{n \to \infty} D^r f\_n^1[\ln](\mathbf{x}) \\ &= \lim\_{n \to \infty} \left( 0^{r-1} \ln n + (-1)^r (r-1)! \sum\_{k=0}^{n-1} \frac{1}{(\mathbf{x} + k)^r} \right) .\end{aligned}$$

If r = 1, then we obtain

$$\psi(\mathbf{x}) = \lim\_{n \to \infty} \left( \ln n - \sum\_{k=0}^{n-1} \frac{1}{x+k} \right)$$

.

If r ≥ 2, then we get (compare with, e.g., Srivastava and Choi [93, p. 33])

$$
\psi\_{r-1}(\mathbf{x}) \, = \, (-1)^r (r-1)! \, \zeta(r, \mathbf{x}),
$$

where s → ζ (s, x) is the Hurwitz zeta function (see Example 1.7). ♦

#### **7.2 Some Properties of the Derivatives**

In this section, we investigate the functions (g)(r) and g(r) and some of their properties. We also show how the asymptotic behaviors of these functions can be analyzed from results of Chap. 6, including the generalized Stirling formula. Finally, we provide a series representation of the asymptotic constant σ[g] as an analogue of Euler's series representation of γ .

In the next proposition, we essentially establish the fact that the functions(g)(r) and g(r) are equal up to an additive constant. This result will have several important consequences in this and the next chapters.

**Proposition 7.7** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*. Then* <sup>g</sup>(r) *lies in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)<sup>+</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)<sup>+</sup> *. Moreover, for any* x > <sup>0</sup> *we have*

$$\left( (\Sigma \mathbf{g})^{(r)}(\mathbf{x}) - \Sigma \mathbf{g}^{(r)}(\mathbf{x}) \right) = (\Sigma \mathbf{g})^{(r)}(\mathbf{l}) = \left. \mathbf{g}^{(r-\mathbf{l})}(\mathbf{l}) - \sigma \left[ \mathbf{g}^{(r)} \right] \right. \tag{7.1}$$

*If* r>p*, then*

$$
\sigma[\mathfrak{g}^{(r)}] = \mathfrak{g}^{(r-1)}(\mathfrak{l}) + \sum\_{k=1}^{\infty} \mathfrak{g}^{(r)}(k).
$$

*Proof* As already observed in the proof of Proposition 7.1, the first claim follows from Propositions 4.12 and 4.16(b). Moreover, we have that g lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} . Let us now prove (7.1). By Proposition 4.12, the function <sup>ϕ</sup><sup>1</sup> <sup>=</sup> (g)(r) is a solution in *<sup>K</sup>*(p−r)<sup>+</sup> to the equation ϕ <sup>=</sup> <sup>g</sup>(r). By the existence Theorem 3.6, the function <sup>ϕ</sup><sup>2</sup> <sup>=</sup> g(r) is also a solution in *<sup>K</sup>*(p−r)+. Thus, by the uniqueness Theorem 3.1, we must have (g)(r) <sup>−</sup> g(r) <sup>=</sup> <sup>c</sup> for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>, and hence we also have (g)(r)(1) <sup>=</sup> <sup>c</sup>.

Now, for any x > 0, using (6.11) we then get

$$\begin{aligned} \left[g^{(r-1)}(1) - \sigma\left[g^{(r)}\right] = g^{(r-1)}(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} \Sigma g^{(r)}(t) \, dt \right. \\ &= c + g^{(r-1)}(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} (\Sigma g)^{(r)}(t) \, dt . \end{aligned}$$

Evaluating the latter integral, we then obtain

$$\begin{aligned} \mathbf{g}^{(r-1)}(\mathbf{l}) - \sigma \mathbf{\bar{g}}^{(r)} &= c + \mathbf{g}^{(r-1)}(\mathbf{x}) - (\Sigma \mathbf{g})^{(r-1)}(\mathbf{x} + \mathbf{l}) + (\Sigma \mathbf{g})^{(r-1)}(\mathbf{x}) \\ &= c + \mathbf{g}^{(r-1)}(\mathbf{x}) - \Delta (\Sigma \mathbf{g})^{(r-1)}(\mathbf{x}) \\ &= c + \mathbf{g}^{(r-1)}(\mathbf{x}) - (\Delta \Sigma \mathbf{g})^{(r-1)}(\mathbf{x}) \\ &= c, \end{aligned}$$

which proves (7.1). Finally, if r>p, then we have that <sup>g</sup>(r−1) lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and that <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> by Proposition 4.16(b). The last part of the statement then follows from applying Proposition 6.14 to the function <sup>g</sup>(r). *Example 7.8* The function g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and all its derivatives lie in *<sup>K</sup>*0. By Theorem 7.5, the function

$$\Sigma g(\mathbf{x}) = \sum\_{k=0}^{\infty} \left( \frac{1}{k+1} - \frac{1}{\mathbf{x} + k} \right) \\ = H\_{\mathbf{x}-1} \\ = \psi(\mathbf{x}) + \chi$$

lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*∞. Moreover, the series can be differentiated term by term infinitely many times and hence, for any <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗, we have

$$(\Sigma g)^{(r)}(\mathbf{x}) = \sum\_{k=0}^{\infty} (-1)^{r+1} \frac{r!}{(\mathbf{x} + k)^{r+1}} = \psi\_r(\mathbf{x}).$$

By Proposition 7.7, we also have

$$\sigma[g^{(r)}] = -(-1)^r (r-1)! + (-1)^r r! \sum\_{k=1}^{\infty} \frac{1}{k^{r+1}}$$

$$= (-1)^r (r-1)! \left( r \xi(r+1) - 1 \right),$$

where s → ζ (s) is the Riemann zeta function. ♦

In the next proposition we show the remarkable fact that the asymptotic equivalence (6.31) still holds if we differentiate both sides.

**Proposition 7.9** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*, and let* <sup>a</sup> <sup>≥</sup> <sup>0</sup>*. When* <sup>D</sup>rg *vanishes at infinity, we also assume that*

$$D^r \Sigma \mathbf{g}(n+1) \sim D^r \Sigma \mathbf{g}(n) \qquad \text{as } n \to \infty.$$

*Then we have*

$$D^r \Sigma \mathbf{g}(\mathbf{x} + a) \sim D\_\chi^r \int\_\chi^{\chi + 1} \Sigma \mathbf{g}(t) \, dt \, = \| \mathbf{g}^{(r - 1)}(\mathbf{x}) \Big| \qquad a \ge \infty.$$

*Proof* By Proposition 7.7, we have that <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)<sup>+</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)+. Moreover, for any x > 0 we have

$$D^r \Sigma g(\mathbf{x} + a) \ = c + \Sigma g^{(r)}(\mathbf{x} + a),$$

and, using (6.11),

$$\begin{aligned} D\_\chi^r \int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt &= \mathbf{g}^{(r-1)}(\mathbf{x}) \, \ &= \int\_{\chi}^{\chi+1} (\Sigma \mathbf{g})^{(r)}(t) \, dt, \\ &= c + \int\_{\chi}^{\chi+1} \Sigma \mathbf{g}^{(r)}(t) \, dt, \end{aligned}$$

where <sup>c</sup> <sup>=</sup> <sup>g</sup>(r−1) (1) <sup>−</sup> <sup>σ</sup>[g(r)]. The result then immediately follows from applying Proposition 6.20 to the function <sup>g</sup>(r).

*Example 7.10* Applying Proposition 7.9 to the function g(x) = ln x, for any a ≥ 0 we obtain the equivalences

$$
\ln \Gamma(\mathbf{x} + a) \sim \mathbf{x} \ln \mathbf{x} \,, \qquad \psi(\mathbf{x} + a) \sim \ln \mathbf{x} \qquad \text{as } \mathbf{x} \to \infty,
$$

and for any <sup>ν</sup> <sup>∈</sup> <sup>N</sup>,

$$
\psi\_{\nu+1}(x+a) \sim (-1)^{\nu} \frac{\nu!}{x^{\nu+1}} \qquad \text{as } x \to \infty. \tag{6}
$$

In the next two propositions, we mainly investigate how the convergence results in (6.4) and (6.21) are modified when the function g is replaced with one of its higher order derivatives. The second proposition can be regarded as the "integrated" version of the first one, and hence it naturally involves the generalized Binet function.

**Proposition 7.11** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*, and let* a ≥ 0*. The following assertions hold.*


$$D\_\chi^r \rho\_\chi^{q+1} [\Sigma \mathfrak{g}](a) \;= \rho\_\chi^{q+1} [\Sigma \mathfrak{g}^{(r)}](a) .$$

*(c) We have that* <sup>ρ</sup>(p−r)++<sup>1</sup> <sup>x</sup> [g(r)](a) <sup>→</sup> <sup>0</sup> *and* <sup>D</sup><sup>r</sup> xρ<sup>p</sup>+<sup>1</sup> <sup>x</sup> [g](a) <sup>→</sup> <sup>0</sup> *as* <sup>x</sup> → ∞*.*

*Proof* By Proposition 7.7, the function <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)<sup>+</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)+. This immediately proves assertion (a). Now, using (1.7) and then (7.1) we get

$$\begin{aligned} [D\_\chi^r \rho\_\chi^{q+1}] \Sigma \mathbf{g})(a) &= \Sigma \mathbf{g}^{(r)}(\mathbf{x} + a) - \Sigma \mathbf{g}^{(r)}(\mathbf{x}) - \sum\_{j=1}^q \binom{a}{j} \Delta^{j-1} \mathbf{g}^{(r)}(\mathbf{x}), \\ &= \rho\_\chi^{q+1} [\Sigma \mathbf{g}^{(r)}](a), \end{aligned}$$

which proves assertion (b). Assertion (c) follows from assertions (a) and (b) and the fact that *<sup>R</sup>*(p−r)++<sup>1</sup> <sup>R</sup> <sup>⊂</sup> *<sup>R</sup>*p+<sup>1</sup> <sup>R</sup> .

**Proposition 7.12** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*. The following assertions hold.*

*(a) For any* <sup>q</sup> <sup>∈</sup> <sup>N</sup>*, the function* <sup>J</sup> <sup>q</sup>+1[g] *lies in <sup>C</sup>*<sup>r</sup> *and we have*

$$[D^r J^{q+1} [\Sigma g] \ = J^{q+1} [\Sigma g^{(r)}].$$

*In particular, we have* <sup>σ</sup>[g(r)]=−DrJ <sup>1</sup>[g](1)*.*


$$D\_\chi^r \int\_0^1 \rho\_\chi^{p+1} [\Sigma \mathcal{g}](t) \, dt \, = \int\_0^1 D\_\chi^r \rho\_\chi^{p+1} [\Sigma \mathcal{g}](t) \, dt.$$

*Proof* Using (6.18) and (7.1), we get

$$\begin{aligned} \, \_0D^r J^{q+1} [\Sigma g](\mathbf{x}) &= \Sigma g^{(r)}(\mathbf{x}) - \sigma [\mathbf{g}^{(r)}] - \int\_1^\chi \mathbf{g}^{(r)}(t) \, dt + \sum\_{j=1}^q G\_j \, \Delta^{j-1} \mathbf{g}^{(r)}(\mathbf{x}), \\ &= J^{q+1} [\Sigma g^{(r)}](\mathbf{x}), \end{aligned}$$

which proves assertion (a). Now, setting q = p in these equations we obtain

$$[D^r J^{p+1} [\Sigma g](\mathbf{x}) = J^{(p-r)+1} [\Sigma g^{(r)}](\mathbf{x}) + \sum\_{j=(p-r)\_++1}^p G\_j \, \Delta^{j-1} g^{(r)}(\mathbf{x}).$$

Since <sup>g</sup>(r) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*(p−r)<sup>+</sup> <sup>∩</sup> *<sup>K</sup>*(p−r)+, this latter expression vanishes at infinity. This proves assertion (b). Finally, using Proposition 7.11 and assertion (a) we get

$$\int\_0^1 D\_\mathbf{x}^r \rho\_\mathbf{x}^{p+1} [\Sigma \mathbf{g}](t) \, dt = \int\_0^1 \rho\_\mathbf{x}^{p+1} [\Sigma \mathbf{g}^{(r)}](t) \, dt = -J^{p+1} [\Sigma \mathbf{g}^{(r)}](\mathbf{x})$$

$$= -D^r J^{p+1} [\Sigma \mathbf{g}](\mathbf{x}) = D\_\mathbf{x}^r \int\_0^1 \rho\_\mathbf{x}^{p+1} [\Sigma \mathbf{g}](t) \, dt,$$

which proves assertion (c).

Assertion (c) of Proposition 7.11 reveals a very important fact. It shows that the convergence result in (6.4) still holds if we replace <sup>g</sup> with <sup>g</sup>(r) and <sup>p</sup> with (p <sup>−</sup> r)+. But it also says that this new result can also be obtained by differentiating r times both sides of (6.4) and then removing the terms that vanish at infinity.

Similarly, assertion (b) of Proposition 7.12 shows that this property also applies to the generalized Stirling formula (6.21).

*Example 7.13* The function g(x) <sup>=</sup> ln <sup>x</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and its derivative g (x) <sup>=</sup> <sup>1</sup> <sup>x</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*∞. For any <sup>a</sup> <sup>≥</sup> 0, the limit in (6.4) reduces to

$$
\ln \Gamma(\mathbf{x} + a) - \ln \Gamma(\mathbf{x}) - a \ln \mathbf{x} \to 0 \qquad \text{as } \mathbf{x} \to \infty.
$$

If we replace g with g and set p = 0 in (6.4), we get

$$
\psi(x+a) - \psi(x) \to 0 \qquad \text{as } x \to \infty.
$$

However, this latter limit can also be obtained by differentiating both sides of the previous limit and then removing the term (−<sup>a</sup> <sup>x</sup> ) that vanishes at infinity.

Now, applying the generalized Stirling formula (6.21) to the function g(x) = ln x, we clearly retrieve the classical Stirling formula

$$
\ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) + \mathbf{x} - \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.
$$

Proceeding similarly as above, we then obtain

$$
\psi(\mathbf{x}) - \ln \mathbf{x} \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty,
$$

which is actually the analogue of Stirling's formula for the digamma function. ♦

*Remark 7.14* To emphasize the similarities between Propositions 7.11 and 7.12, we could for instance extend our formalism a bit further as follows. For any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any S ∈ {N, <sup>R</sup>}, let *<sup>J</sup>* <sup>p</sup> <sup>S</sup> denote the set of continuous functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that

$$J^p[\mathfrak{g}](t) \to \begin{array}{c} 0 \end{array} \quad \text{as } t \to\_{\mathbb{S}} \infty.$$

This new definition enables one to formalize some results more easily. For instance, using (6.17) we clearly obtain that

$$\mathcal{J}\_{\mathbb{S}}^{p} \cap \mathcal{D}\_{\mathbb{S}}^{p} = \mathcal{J}\_{\mathbb{S}}^{p+1} \cap \mathcal{D}\_{\mathbb{S}}^{p}$$

and this identity could be used to establish assertion (b) of Proposition 7.12 from assertion (a). To give another example, we can see that (6.22) actually means that

$$\mathcal{C}^0 \cap \mathcal{D}^p \cap \mathcal{K}^p \subset \mathcal{J}\_\mathbb{R}^p.$$

Note also that the generalized Stirling formula simply states that g lies in *<sup>J</sup>* <sup>p</sup>+<sup>1</sup> R whenever <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p. ♦ **Taylor Series Expansion of** *g* Suppose that <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We know from Proposition 7.12 that

$$
\sigma[\mathbf{g}^{(k)}] = -D^k J^\mathbf{[}\Sigma\mathbf{g}\,\mathbf{l}(\mathbf{l}), \qquad k \in \mathbb{N}.
$$

Thus, the exponential generating function (see, e.g., Graham et al. [41, Chapter 7]) for the sequence <sup>n</sup> → <sup>σ</sup>[g(n)] is defined by the equation

$$\sum\_{k=1}^{\infty} \sigma \lg^{(k)} \text{l} \frac{\mathbf{x}^k}{k!} = -J^1 \text{l} \Sigma \mathbf{g} \mathbf{l} (\mathbf{x} + \mathbf{l}) \tag{7.2}$$

$$= \sigma \lg \mathbf{g} + \int\_1^{\mathbf{x} + \mathbf{l}} \mathbf{g}(t) \, dt - \Sigma \mathbf{g}(\mathbf{x} + \mathbf{l}) .$$

Denoting this exponential generating function by egf<sup>σ</sup> [g](x), the previous equation reduces to

$$\text{egf}\_{\sigma}[\mathfrak{g}](\mathfrak{x}) \;=\; -J^{\mathsf{I}}[\Sigma \mathfrak{g}](\mathfrak{x} + \mathfrak{l})\;.$$

If the function <sup>J</sup> <sup>1</sup>[g] is real analytic at 1, then the series in (7.2) converges in some neighborhood of x = 0. Similarly, if the function g is real analytic at 1, then the following Taylor series expansion

$$
\Sigma \mathbf{g}(\mathbf{x} + \mathbf{l}) = \sum\_{k=1}^{\infty} (\Sigma \mathbf{g})^{(k)}(\mathbf{l}) \frac{\mathbf{x}^{k}}{k!} \tag{7.3}
$$

holds in some neighborhood of <sup>x</sup> <sup>=</sup> 0, where the numbers (g)(k)(1) for <sup>k</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> can also be computed through (7.1).

*Example 7.15* Consider again the functions g(x) = ln x and g(x) = ln -(x). We know from Example 7.6 that

$$D\ln\Gamma(1) = \psi(1) = \lim\_{n\to\infty} \left(\ln n - \sum\_{k=1}^{n} \frac{1}{k}\right) = -\gamma \cdot \frac{1}{n}$$

and that for any integer k ≥ 2

$$D^k \ln \Gamma(\mathbf{l}) = \psi\_{k-1}(\mathbf{l}) = (-\mathbf{l})^k (k-1)! \zeta(k).$$

We then obtain the following Taylor series expansion

$$\ln \Gamma(\mathbf{x} + \mathbf{l}) \, \mathbf{l} = \, -\gamma \mathbf{x} + \sum\_{k=2}^{\infty} (-1)^k \frac{\xi(k)}{k} \mathbf{x}^k, \qquad |\mathbf{x}| < 1.$$

The values of the sequence <sup>n</sup> → <sup>σ</sup>[g(n)] can be obtained using (7.1) or (7.2). We get

$$
\sigma[\mathfrak{g}] = -1 + \frac{1}{2} \ln(2\pi), \qquad \sigma[\mathfrak{g'}] = \mathcal{Y},
$$

and for any integer k ≥ 2

$$
\sigma[\mathfrak{g}^{(k)}] = (-1)^k (k-2)! \left( 1 - (k-1)\zeta(k) \right). \qquad \phi
$$

**Analogues of Euler's Series Representation of** *γ* Integrating both sides of (7.3) on (0, 1) (assuming that the series can be integrated term by term), we obtain the identity

$$\sigma[g] = \sum\_{k=1}^{\infty} (\Sigma g)^{(k)}(1) \frac{1}{(k+1)!} \,. \tag{7.4}$$

Similarly, integrating both sides of (7.2) on (0, 1) (assuming again that the series can be integrated term by term), we obtain the identity

$$\sum\_{k=0}^{\infty} \sigma \lg^{(k)} \lg \frac{1}{(k+1)!} = \int\_{1}^{2} (2-t) \, \mathcal{g}(t) \, dt. \tag{7.5}$$

Taking for instance g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> in (7.4), we immediately retrieve Euler's series representation of γ (see, e.g., Srivastava and Choi [93, p. 272])

$$\lambda := \sum\_{k=2}^{\infty} (-1)^k \frac{\zeta(k)}{k}.$$

This formula can also be obtained taking g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> in (7.5) and using the straightforward identity

$$
\sigma[g^{(k)}] = \left.(-1)^k k! \left(\zeta(k+1) - \frac{1}{k}\right) \right. \\
\qquad k \in \mathbb{N}^\*.
$$

Considering different functions g(x) in (7.4) and (7.5) enables one to derive various interesting identities. A few applications are given in the following example.

*Example 7.16* Taking g(x) = ψ(x) in (7.5) and using the straightforward identity

$$
\sigma[g^{(k)}] = \sigma[\psi\_k] = (-1)^{k-1}(k-1)(k-1)!\zeta(k) \qquad k \in \mathbb{N}, \ k \ge 2,
$$

we obtain

$$\sum\_{k=2}^{\infty} (-1)^k \frac{k-1}{k(k+1)} \zeta(k) \, = \, 2 - \ln(2\pi) \, .$$

Similarly, taking g(x) = ln x and then g(x) = ln -(x) in (7.4) and (7.5) we obtain the identities

$$\sum\_{k=2}^{\infty}(-1)^{k}\frac{1}{k(k+1)}\xi(k) = \frac{1}{2}\mathcal{Y} - 1 + \frac{1}{2}\ln(2\pi),$$

$$\sum\_{k=2}^{\infty}(-1)^{k}\frac{1}{(k+1)(k+2)}\xi(k) = \frac{1}{2} + \frac{1}{6}\mathcal{Y} - 2\ln A,$$

$$\sum\_{k=2}^{\infty}(-1)^{k}\frac{k-1}{k(k+1)(k+2)}\xi(k) = \frac{5}{4} - \frac{1}{4}\ln(2\pi) - 3\ln A,$$

where A is Glaisher-Kinkelin's constant; see also Srivastava and Choi [93, Section 3.4]. ♦

#### **7.3 Finding Solutions from Derivatives**

Given <sup>r</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and a function <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*r, a solution <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> to the equation f <sup>=</sup> <sup>g</sup> can sometimes be found more easily by first searching for an appropriate solution <sup>ϕ</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> to the equation ϕ <sup>=</sup> <sup>g</sup>(r) and then calculating <sup>f</sup> as an <sup>r</sup>th antiderivative of ϕ.

Let us first examine a very simple example to illustrate to which extent this approach can be easily and usefully applied.

*Example 7.17* Let <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be defined by the equation

$$\lg(\mathbf{x}) = \int\_{1}^{\chi} \ln t \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

Suppose that we search for a simple expression for the indefinite sum g. We can apply Proposition 7.7 and observe that <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and hence that <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*∞. Moreover, we have

$$(\Sigma \mathbf{g})'(\mathbf{x}) = c + \Sigma \mathbf{g}'(\mathbf{x}) = c + \ln \Gamma(\mathbf{x})$$

for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>. Thus, we obtain

$$
\Sigma g(\mathbf{x}) = c(\mathbf{x} - \mathbf{l}) + \int\_1^\mathbf{x} \ln \Gamma(t) \, dt.
$$

To find the value of c, we then observe that

$$0 = \lg(\mathfrak{l}) = \Delta \Sigma \lg(\mathfrak{l}) = c + \int\_{\mathfrak{l}}^{2} \ln \Gamma(t) \, dt$$

and hence <sup>c</sup> <sup>=</sup> <sup>1</sup> <sup>−</sup> <sup>1</sup> <sup>2</sup> ln(2π ) (see Example 6.5). Alternatively, this value can also be obtained directly from (7.1); we have

$$c = \left[\mathfrak{g}(\mathfrak{l}) - \sigma\mathfrak{l}\mathfrak{g}'\right] = \left[-\sigma\mathfrak{l}\mathfrak{g}'\right] = \left[1 - \frac{1}{2}\ln(2\pi)\right].$$

Thus, this approach amounts to first searching for a simple expression for g , and then computing g using an antiderivative of g .

Finally, we get

$$
\Delta \mathbf{g}(\mathbf{x}) = \left. -1 + \left( 1 - \frac{1}{2} \ln(2\pi) \right) \mathbf{x} + \boldsymbol{\psi}\_{-2}(\mathbf{x}), \right|
$$

where <sup>ψ</sup>−<sup>2</sup> is the polygamma function <sup>ψ</sup>−2(x) <sup>=</sup> <sup>x</sup> <sup>0</sup> ln -(t) dt. ♦

The approach described in Example 7.17 is rather simple and can sometimes be very efficient. We will refer to this technique as *the elevator method*. In very basic terms, to find g one proceeds as follows.


$$\begin{aligned} \Delta f &= \text{g} & \quad f &= \text{Eg} \\ \downarrow & & \uparrow \\ \Delta \varphi &= \text{g}^{(r)} & \rightarrow \varphi & = \text{Eg}^{(r)} \end{aligned}$$

To our knowledge, this trick was investigated thoroughly by Krull [55] and then by Dufresnoy and Pisot [34].

In the next theorem we provide a general result based on this idea. This result is actually very general: it applies to any function <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*r, even if g is not defined (e.g., g(x) <sup>=</sup> <sup>2</sup>x).

We first observe that if <sup>ϕ</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> is a solution to the equation ϕ <sup>=</sup> <sup>g</sup>(r), then the map

$$\mathbf{x} \mapsto \int\_{\mathbf{x}}^{\mathbf{x}+1} \varphi(t) \, dt - \mathbf{g}^{(r-1)}(\mathbf{x})$$

has a zero derivative and hence it is constant on <sup>R</sup>+. In particular, it has a finite right limit at x = 0.

**Theorem 7.18 (The Elevator Method)** *Let* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*,* a > <sup>0</sup>*,* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*r*, and let* <sup>ϕ</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be a continuous solution to the equation* ϕ <sup>=</sup> <sup>g</sup>(r)*. Then there exists a solution* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> *to the equation* f <sup>=</sup> <sup>g</sup> *such that* <sup>f</sup> (r) <sup>=</sup> <sup>ϕ</sup> *if and only if*

$$\int\_{a}^{a+1} \varphi(t) \, dt \, = \text{ g}^{(r-1)}(a). \tag{7.6}$$

*If any of these equivalent conditions holds, then* f *is uniquely determined (up to an additive constant) by*

$$f(\mathbf{x}) = f(a) + \sum\_{k=1}^{r-1} c\_k \frac{(\mathbf{x} - a)^k}{k!} + \int\_a^\mathbf{x} \frac{(\mathbf{x} - t)^{r-1}}{(r-1)!} \varphi(t) \, dt,\tag{7.7}$$

*where, for* k = 1,...,r − 1*,*

$$c\_k = \sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \left( g^{(j+k-1)}(a) - \int\_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!} \varphi(t) \, dt \right). \tag{7.8}$$

*Proof* Condition (7.6) is clearly necessary. Indeed, we have

$$\int\_{a}^{a+1} \varphi(t) \, dt \, = \, f^{(r-1)}(a+1) - f^{(r-1)}(a) \, = \, g^{(r-1)}(a).$$

Let us show that it is sufficient. Since <sup>ϕ</sup> is continuous, there exists <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> such that <sup>f</sup> (r) <sup>=</sup> <sup>ϕ</sup>. Taylor's theorem then provides the expansion formula (7.7) with arbitrary parameters ck <sup>=</sup> <sup>f</sup> (k)(a) for <sup>k</sup> <sup>=</sup> <sup>1</sup>,...,r <sup>−</sup> 1. Now we need to determine the parameters c1,...,ck for f to be a solution to the equation f = g. To this extent, we need the following claim.

*Claim* The function <sup>f</sup> satisfies the equation f <sup>=</sup> <sup>g</sup> if and only if <sup>f</sup> (r) satisfies the equation f (r) <sup>=</sup> <sup>g</sup>(r) and f (j )(a) <sup>=</sup> <sup>g</sup>(j )(a) for <sup>j</sup> <sup>=</sup> <sup>0</sup>,...,r <sup>−</sup> 1.

*Proof of the Claim* The condition is clearly necessary. To see that it is sufficient, we simply show by decreasing induction on <sup>j</sup> that f (j ) <sup>=</sup> <sup>g</sup>(j ). Clearly, this is true for j = r. Suppose that it is true for some integer j satisfying 1 ≤ j ≤ r. For any x > 0 we have

$$\begin{aligned} \Delta f^{(j-1)}(\mathbf{x}) - \Delta f^{(j-1)}(a) &= \int\_a^\chi \Delta f^{(j)}(t) \, dt = \int\_a^\chi g^{(j)}(t) \, dt \\ &= g^{(j-1)}(\mathbf{x}) - g^{(j-1)}(a) = g^{(j-1)}(\mathbf{x}) - \Delta f^{(j-1)}(a), \end{aligned}$$

which shows that the result still holds for j − 1.

By the claim, <sup>f</sup> satisfies the equation f <sup>=</sup> <sup>g</sup> if and only if f (j )(a) <sup>=</sup> <sup>g</sup>(j )(a) for j = 0,...,r − 1. When j = r − 1, the latter condition is nothing other than condition (7.6) and hence it is satisfied. Applying Taylor's theorem to f (j ), we obtain

$$f^{(j)}(a+1) - f^{(j)}(a) = \sum\_{k=1}^{r-j-1} \frac{1}{k!} f^{(j+k)}(a) + \int\_a^{a+1} \frac{(a+1-t)^{r-j-1}}{(r-j-1)!} \varphi(t) \, dt \, \ldots$$

and hence we see that the remaining r − 1 conditions are

$$\sum\_{k=1}^{r-j-1} \frac{1}{k!} c\_{j+k} = d\_j, \qquad j = 0, \ldots, r-2, 0$$

where

$$\begin{aligned} d\_j &= g^{(j)}(a) - \int\_a^{a+1} \frac{(a+1-t)^{r-j-1}}{(r-j-1)!} \varphi(t) \, dt, \qquad j = 0, \ldots, r-2, \\\ c\_k &= f^{(k)}(a), \qquad k = 1, \ldots, r-1. \end{aligned}$$

It is not difficult to see that these r−1 conditions form a consistent triangular system of r − 1 linear equations in the r − 1 unknowns c1,...,cr−1. This establishes the uniqueness of f up to an additive constant.

Let us now show that formula (7.8) holds. For k = 1,...,r − 1, we have

$$\sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} d\_{j+k-1} = \sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \sum\_{i=1}^{r-j-k} \frac{1}{i!} c\_{i+j+k-1} \cdots$$

Replacing i with i − j − k + 1 and then permuting the resulting sums, the latter expression reduces to

$$\sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \sum\_{l=j+k}^{r-1} \frac{1}{(i-j-k+1)!} c\_l = \sum\_{l=k}^{r-1} \frac{c\_l}{(i-k+1)!} \sum\_{j=0}^{l-k} \binom{i-k+1}{j} \mathcal{B}\_j,$$

that is, using (6.40),

$$\sum\_{i=k}^{r-1} \frac{c\_i}{(i-k+1)!} 0^{i-k} = c\_k.$$

This completes the proof of the theorem.

Adding an appropriate constant to ϕ if necessary in Theorem 7.18, we can always assume that condition (7.6) holds. More precisely, the function <sup>ϕ</sup> <sup>=</sup> <sup>ϕ</sup> <sup>+</sup> <sup>C</sup>, where

$$C = \left. g^{(r-1)}(a) - \int\_a^{a+1} \varphi(t) \, dt \right|\_a$$

satisfies

$$\int\_{a}^{a+1} \varphi^\star(t) \, dt \, = \ g^{(r-1)}(a).$$

*Example 7.19* Let us see how we can apply Theorem 7.18 to somewhat generalize Example 7.17. Let <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*0, let <sup>G</sup> <sup>∈</sup> *<sup>C</sup>*<sup>1</sup> be defined by the equation

$$G(\mathbf{x}) = \int\_{1}^{\mathbf{x}} \mathbf{g}(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0},$$

and let <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> be any solution to the equation f <sup>=</sup> <sup>g</sup>. To find a solution <sup>F</sup> to the equation F = G such that F = f , we just need to apply Theorem 7.18 to the function G with r = 1 and a = 1. Defining the function

$$f^\star = f - \int\_1^2 f(t) \, dt \,,$$

we then obtain that the function <sup>F</sup> <sup>∈</sup> *<sup>C</sup>*<sup>1</sup> defined by the equation

$$F(\mathbf{x}) = \int\_{1}^{\chi} f^{\star}(t) \, dt = \int\_{1}^{\chi} f(t) \, dt - (\mathbf{x} - \mathbf{l}) \int\_{1}^{2} f(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0},$$

is the unique (up to an additive constant) solution to the equation F = G such that F = f . For similar results, see Krull [55, p. 254] and Kuczma [58, Section 2]. ♦

The next corollary particularizes the elevator method when the function g lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} for some <sup>p</sup> <sup>∈</sup> <sup>N</sup> and <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗. We omit the proof, since it immediately follows from Theorem 7.5, Proposition 7.7, and Theorem 7.18.

**Corollary 7.20 (The Elevator Method)** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*. Then* g *lies in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} *and we have*

$$(\Sigma g)^{(r)} - \Sigma g^{(r)} = g^{(r-1)}(1) - \sigma [g^{(r)}].$$

*(This latter value reduces to* −<sup>∞</sup> <sup>k</sup>=<sup>1</sup> <sup>g</sup>(r)(k) *if* r>p*.) Moreover, for any* a > <sup>0</sup>*, we have*

$$
\Sigma g \, = \, f\_a - f\_a(1),
$$

*where* fa <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> *is defined by*

$$f\_a(\mathbf{x}) = \sum\_{k=1}^{r-1} c\_k(a) \frac{(\mathbf{x} - a)^k}{k!} + \int\_a^\chi \frac{(\mathbf{x} - t)^{r-1}}{(r-1)!} (\Sigma \mathbf{g})^{(r)}(t) \, dt$$

*and, for* k = 1,...,r − 1*,*

$$c\_k(a) := \sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \left( \mathbf{g}^{(j+k-1)}(a) - \int\_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!} (\Sigma \mathbf{g})^{(r)}(t) \, dt \right).$$

Corollary 7.20 has an important practical value. It provides an explicit integral expression for g from an explicit expression for g(r). Setting <sup>a</sup> <sup>=</sup> 1 in this result, we simply obtain

$$\Sigma g(\mathbf{x}) = \sum\_{k=1}^{r-1} c\_k \frac{(\mathbf{x} - \mathbf{1})^k}{k!} + \int\_1^\chi \frac{(\mathbf{x} - \mathbf{t})^{r-1}}{(r-1)!} (\Sigma g)^{(r)}(\mathbf{t}) \, d\mathbf{t},$$

with, for k = 1,...,r − 1,

$$c\_k = \sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \left( g^{(j+k-1)}(1) - \int\_1^2 \frac{(2-t)^{r-j-k}}{(r-j-k)!} (\Sigma g)^{(r)}(t) \, dt \right).$$

The following three examples illustrate the use of Corollary 7.20. In the first one, we revisit Example 7.17.

*Example 7.21* The function

$$\mathbf{g}(\mathbf{x}) = \int\_{1}^{\mathbf{x}} \ln t \, dt$$

lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*∞. Choosing <sup>r</sup> <sup>=</sup> 1 and <sup>a</sup> <sup>=</sup> 1 in Corollary 7.20, we get

$$\begin{aligned} \mathbf{g}'(\mathbf{x}) &= \ln \mathbf{x} \,, \\ \boldsymbol{\Sigma} \mathbf{g}'(\mathbf{x}) &= \ln \boldsymbol{\Gamma}(\mathbf{x}) \,, \\ (\boldsymbol{\Sigma} \mathbf{g})'(\mathbf{x}) &= \ln \boldsymbol{\Gamma}(\mathbf{x}) + 1 - \frac{1}{2} \ln(2\pi) , \end{aligned}$$

and

$$\Sigma g(\mathbf{x}) = \left(1 - \frac{1}{2} \ln(2\pi)\right) (\mathbf{x} - \mathbf{l}) + \int\_{1}^{\chi} \ln \Gamma(t) \, dt. \qquad \phi$$

*Example 7.22* The function

$$\mathbf{g}(\mathbf{x}) = \int\_0^\chi (\mathbf{x} - t) \ln t \, dt$$

lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>3</sup> <sup>∩</sup> *<sup>K</sup>*∞. Choosing <sup>r</sup> <sup>=</sup> 2 and <sup>a</sup> <sup>=</sup> 0 (as a limiting value) in Corollary 7.20, we get

$$\begin{aligned} \text{g}''(\mathbf{x}) &= \ln \mathbf{x} \,, \\ \Sigma \text{g}''(\mathbf{x}) &= \ln \Gamma(\mathbf{x}) \,, \\ \left(\Sigma \text{g}\right)''(\mathbf{x}) &= \ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) ,\end{aligned}$$

and

$$
\Delta \mathbf{g}(\mathbf{x}) = -\left(\ln A\right)\mathbf{x} - \frac{1}{4}\ln(2\pi)\,\mathbf{x}^2 + \int\_0^\chi (\mathbf{x} - t)\ln \Gamma(t) \,dt,
$$

where A is Glaisher-Kinkelin's constant and the integral is the polygamma function <sup>ψ</sup>−3(x). (Here we use the identity <sup>ψ</sup>−3(1) <sup>=</sup> ln <sup>A</sup> <sup>+</sup> <sup>1</sup> <sup>4</sup> ln(2π ).)

We can also investigate the asymptotic properties of g using our results. For instance, using the generalized Stirling formula (6.21), we also obtain the following asymptotic behavior of g

$$\begin{aligned} \left(\Sigma g(\mathbf{x}) + \frac{1}{72} (22\mathbf{x}^3 - 27\mathbf{x}^2 + 9\mathbf{x}) - \frac{1}{48} \mathbf{x}^2 (8\mathbf{x} - 15) \ln \mathbf{x} \\ -\frac{1}{12} (\mathbf{x} + 1)^2 \ln(\mathbf{x} + 1) + \frac{1}{48} (\mathbf{x} + 2)^2 \ln(\mathbf{x} + 2) &\rightarrow \frac{\xi(3)}{8\pi^2} \end{aligned} \quad \text{as } \mathbf{x} \to \infty. \,\big\vert \,\mathbf{x}$$

*Example 7.23* The function g(x) <sup>=</sup> arctan(x) lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup>*D*<sup>1</sup> <sup>∩</sup>*K*∞. Choosing <sup>r</sup> <sup>=</sup> 1 and a = 0 (as a limiting value) in Corollary 7.20, we get (see also Example 5.10)

$$\begin{aligned} \mathbf{g}'(\mathbf{x}) &= (\mathbf{x}^2 + 1)^{-1} = -\Re(\mathbf{x} + i)^{-1}, \\ \Sigma \mathbf{g}'(\mathbf{x}) &= \Im \psi (1 + i) - \Im \psi (\mathbf{x} + i), \\ \Sigma \mathbf{g})'(\mathbf{x}) &= c - \Im \psi (\mathbf{x} + i), \end{aligned}$$

for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>, and hence

$$
\Sigma \mathbf{g}(\mathbf{x}) = c \left( \mathbf{x} - 1 \right) + \Im \ln \Gamma(1 + i) - \Im \ln \Gamma(\mathbf{x} + i).
$$

Applying the operator to both sides of this identity and then setting x = 1, we obtain <sup>c</sup> <sup>=</sup> <sup>π</sup> <sup>2</sup> . Thus, we have

$$
\Sigma g(\mathbf{x}) = \frac{\pi}{2} (\mathbf{x} - \mathbf{l}) + \Im \ln \Gamma(1 + i) - \Im \ln \Gamma(\mathbf{x} + i).
$$

Some properties of g can be investigated. For instance, using Corollary 6.12 together with the identity

$$\int\_{1}^{\chi} \arctan(t) \, dt = \left\| x \arctan(x) - \frac{1}{2} \ln(x^2 + 1) - \frac{\pi}{4} + \frac{1}{2} \ln 2 \right\|$$

we obtain the inequality

$$\begin{aligned} \left| \Sigma g(\mathbf{x}) - \left( \mathbf{x} - \frac{1}{2} \right) \arctan(\mathbf{x}) + \frac{1}{2} \ln(\mathbf{x}^2 + 1) - 1 + \frac{\pi}{4} - \Im \ln \Gamma(1+i) \right| \\ &\leq \frac{1}{2} \arctan \frac{1}{\mathbf{x}^2 + \mathbf{x} + 1} \end{aligned}$$

and hence the left side approaches zero as x → ∞, which provides the asymptotic behavior of the function g for large values of its argument. ♦

#### **7.4 An Alternative Uniqueness Result**

The following theorem provides a uniqueness result for higher order differentiable solutions to the equation f = g. These solutions can be computed from their derivatives using Theorem 7.18. We first state a surprising and useful fact.

**Fact 7.24** *A periodic function* <sup>ω</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is constant if and only if it lies in <sup>K</sup>*0*. In particular, if* <sup>ϕ</sup>1, ϕ<sup>2</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *are two solutions to the equation* ϕ <sup>=</sup> <sup>g</sup> *such that* <sup>ϕ</sup><sup>1</sup> <sup>−</sup> <sup>ϕ</sup><sup>2</sup> *lies in <sup>K</sup>*0*, then* <sup>ϕ</sup><sup>1</sup> <sup>−</sup> <sup>ϕ</sup><sup>2</sup> *is constant.*

**Theorem 7.25 (Uniqueness)** *Let* <sup>r</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*r*, and assume that there exists* <sup>ϕ</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> *such that* ϕ <sup>=</sup> <sup>g</sup> *and* <sup>ϕ</sup>(r) <sup>∈</sup> *<sup>R</sup>*<sup>0</sup> <sup>N</sup> *. Then, the following assertions hold.*

*(a) For each* x > 0*, the series* ∞ <sup>k</sup>=<sup>0</sup> <sup>g</sup>(r)(x <sup>+</sup> k) *converges and we have*

$$\varphi^{(r)}(\mathbf{x}) \, \, = \, - \sum\_{k=0}^{\infty} \mathbf{g}^{(r)}(\mathbf{x} + k) \, \,.$$

*(b) For any* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>K</sup>*r−<sup>1</sup> *such that* f <sup>=</sup> <sup>g</sup>*, we have* <sup>f</sup> <sup>=</sup> <sup>c</sup> <sup>+</sup> <sup>ϕ</sup> *for some* <sup>c</sup> <sup>∈</sup> <sup>R</sup>*.*

*Proof* Assertion (a) follows immediately from (3.2). Now, let <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>K</sup>*r−<sup>1</sup> be such that f <sup>=</sup> <sup>g</sup>. By Lemma 2.6(c), <sup>f</sup> (r) must lie in *<sup>K</sup>*<sup>−</sup>1. Setting <sup>ω</sup> <sup>=</sup> <sup>f</sup> <sup>−</sup> <sup>ϕ</sup> and using (3.2) again, we then obtain

$$
\omega^{(r)}(\mathbf{x}) = f^{(r)}(\mathbf{x}) - \varphi^{(r)}(\mathbf{x}) \ = \lim\_{n \to \infty} f^{(r)}(\mathbf{x} + n),
$$

which shows that <sup>ω</sup>(r) also lies in *<sup>K</sup>*<sup>−</sup>1. By Lemma 2.6(d), <sup>ω</sup> lies in *<sup>K</sup>*r−<sup>1</sup> <sup>⊂</sup> *<sup>K</sup>*<sup>0</sup> and, since it is 1-periodic, it must be constant by Fact 7.24. This proves assertion (b).

*Example 7.26* The assumptions of Theorem 7.25 hold if g(x) = ln x, ϕ(x) = ln -(x), and r = 2. It then follows that all solutions to the equation f = g that lie in *<sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> are of the form f (x) <sup>=</sup> <sup>c</sup> <sup>+</sup> ln -(x), where <sup>c</sup> <sup>∈</sup> <sup>R</sup>. We thus easily retrieve Bohr-Mollerup's theorem with the additional assumption that <sup>f</sup> lies in *<sup>C</sup>*2. It is remarkable that this latter result can be obtained here from a very elementary theorem that relies only on Lemma 2.6 and Fact 7.24. ♦

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 8 Further Results**

As discussed in the first chapter, the main objective of our work is to generalize Krull-Webster's theory to multiple log --type functions and explore the properties of these functions that are analogues of classical properties of the gamma function.

In the previous chapters, we have presented and discussed several results related to these functions, including their differentiation and integration properties as well as important results on their asymptotic behaviors.

We are now in a position to explore further properties of multiple log --type functions. More precisely, in this chapter we provide for these functions analogues of *Euler's infinite product*, *Euler's reflection formula*, *Gauss' multiplication formula*, *Gautschi's inequality*, *Raabe's formula*, *Wallis's product formula*, *Webster's functional equation*, and *Weierstrass' infinite product* for the gamma function. We also discuss analogues of *Fontana-Mascheroni's series* and *Gauss' digamma theorem* and provide a Gregory's formula-based series representation, a general asymptotic expansion formula, and a few related results.

#### **8.1 Eulerian Form**

Let <sup>g</sup> lie in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. As we already observed in Chap. 1, the representation of g as the pointwise limit of the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] is the analogue of Gauss' limit for the gamma function. Using identity (3.8), we immediately see that this form of g can be translated into a series, namely

$$\Sigma g(\mathbf{x}) = \left. f\_1^p[\mathbf{g}](\mathbf{x}) - \sum\_{k=1}^{\infty} \rho\_k^{p+1}[\mathbf{g}](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}. \tag{8.1}$$

© The Author(s) 2022 J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0\_8

111

It is a simple exercise to see that, when g(x) = ln x and p = 1, this latter formula reduces to the following series representation of the log-gamma function

$$\ln \Gamma(\mathbf{x}) = -\ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( \ln(\mathbf{x} + k) - \ln k - \mathbf{x} \ln \left( 1 + \frac{1}{k} \right) \right). \tag{8.2}$$

Its multiplicative version is nothing other than the classical Eulerian form (or Euler's product form) of the gamma function (see, e.g., Srivastava and Choi [93, p. 3]). We recall this form in the following proposition.

**Proposition 8.1 (Eulerian Form of the Gamma Function)** *The following identity holds*

$$\Gamma(\mathbf{x}) \, \, = \, \frac{1}{\mathbf{x}} \prod\_{k=1}^{\infty} \frac{(1 + 1/k)^{\mathbf{x}}}{1 + \mathbf{x}/k}, \qquad \mathbf{x} > \mathbf{0}.$$

We thus see that, for any multiple log --type function, the series representation (8.1) is the analogue of the Eulerian form of the gamma function in the additive notation. Moreover, we have shown in Theorem 7.5 that this series can be differentiated term by term on <sup>R</sup>+. We have also shown in Proposition 5.18 that this series can be integrated term by term on any bounded interval of [0,∞). Let us state these important facts in the following theorem.

**Theorem 8.2 (Eulerian Form)** *Let* <sup>g</sup> *lie in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. The following assertions hold.*

*(a) For any* x > 0 *we have*

$$\Delta \mathbf{g}(\mathbf{x}) = \left. -\mathbf{g}(\mathbf{x}) + \sum\_{j=1}^{p} \binom{\mathbf{x}}{j} \Delta^{j-1} \mathbf{g}(\mathbf{l}) - \sum\_{k=1}^{\infty} \left( \mathbf{g}(\mathbf{x} + k) - \sum\_{j=0}^{p} \binom{\mathbf{x}}{j} \Delta^{j} \mathbf{g}(k) \right) \right|$$

*and the series converges uniformly on any bounded subset of* [0,∞)*.*


*Proof* Assertion (a) follows from identity (3.8) and the existence Theorem 3.6 (see also Remark 3.7). Assertion (b) follows from Proposition 5.18, especially its assertion (c2), and Remark 5.19. Assertion (c) follows from Theorem 7.5. *Example 8.3* Let us apply Theorem 8.2 to g(x) = ln x and p = 1. We immediately retrieve identity (8.2). Upon differentiation, we also obtain

$$\psi(x) = -\frac{1}{x} - \sum\_{k=1}^{\infty} \left( \frac{1}{x+k} - \ln\left(1 + \frac{1}{k}\right) \right)$$

and, for any <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗,

$$\psi\_r(\mathbf{x}) = (-1)^{r+1} r! \sum\_{k=0}^{\infty} \frac{1}{(\mathbf{x} + k)^{r+1}} = (-1)^{r+1} r! \zeta(r+1, \mathbf{x}).$$

Integrating on (0,x), we obtain

$$\psi\_{-2}(\mathbf{x}) = \left. \mathbf{x} - \mathbf{x} \ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k) \ln \left( 1 + \frac{\mathbf{x}}{k} \right) - \mathbf{x} - \frac{\mathbf{x}^2}{2} \ln \left( 1 + \frac{1}{k} \right) \right) \right|\_{\mathbf{x} = \mathbf{x}^2}$$

Integrating once more on (0,x), we obtain

$$\begin{aligned} \psi\_{-3}(\mathbf{x}) &= \frac{1}{4} \mathbf{x}^2 (3 - 2 \ln \mathbf{x}) \\ &- \sum\_{k=1}^{\infty} \left( \frac{1}{2} (\mathbf{x} + k)^2 \ln \left( 1 + \frac{\mathbf{x}}{k} \right) - \frac{k}{2} \mathbf{x} - \frac{3}{4} \mathbf{x}^2 - \frac{1}{6} \mathbf{x}^3 \ln \left( 1 + \frac{1}{k} \right) \right) . \end{aligned}$$

We can actually integrate both sides on (0,x) repeatedly as we wish. ♦

#### **8.2 Weierstrassian Form**

In the following proposition, we recall an alternative infinite product representation of the gamma function, which was proposed by Weierstrass. This representation is usually called the *Weierstrass factorization* of the gamma function or the *Weierstrass canonical product form* of the gamma function (see Artin [11, pp. 15– 16] and Srivastava and Choi [93, p. 1]).

**Proposition 8.4 (Weierstrassian Form of the Gamma Function)** *The following identity holds*

$$\Gamma(x) = \frac{e^{-\gamma x}}{x} \prod\_{k=1}^{\infty} \frac{e^{\frac{x}{k}}}{1 + \frac{x}{k}}, \qquad x > 0. \tag{8.3}$$

We now show that this factorization can be generalized to any logp-type function that is of class *<sup>C</sup>*p. This new result is presented in the following two theorems, which deal with the cases p = 0 and p ≥ 1 separately. We observe that the special case when p = 1 was previously established by John [49, Theorem B'] and in the multiplicative notation by Webster [98, Theorem 7.1].

It is important to note that, just as in Theorem 8.2, the partial sums that define the series of the theorems below are nothing other than the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x). Thus, these series can be integrated and differentiated term by term.

**Theorem 8.5 (Weierstrassian Form When deg** *<sup>g</sup>* **= −1)** *Let* <sup>g</sup> *lie in <sup>C</sup>*0∩*D*0∩*K*0*. The following assertions hold.*


$$\Sigma \mathbf{g}(\mathbf{x}) = \sigma[\mathbf{g}] - \mathbf{g}(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( \mathbf{g}(\mathbf{x} + k) - \int\_{k}^{k+1} \mathbf{g}(t) \, dt \right)^{\mathbf{x}}$$

*and the series converges uniformly on any bounded subset of* [0,∞)*.*


*Proof* Assertion (a) follows from Proposition 6.36. Assertion (b) follows from Theorem 8.2 and identity (6.43). Assertions (c) and (d) follow from Theorem 8.2.

To establish the second theorem (the case when deg g ≥ 0), we need the following technical lemma.

**Lemma 8.6** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗*. Then*

$$
\Delta \mathbf{g}(\mathbf{x}) - \sum\_{j=0}^{p-2} G\_j \Delta^j \mathbf{g}'(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.
$$

*If, in addition,* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*p−1*, then*

$$\Delta^{p-1}g(\mathbf{x}) - g^{(p-1)}(\mathbf{x}) \to 0 \qquad a \ge \mathbf{x} \to \infty.$$

*Proof* By Proposition 4.12, we have that <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*p−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p−1. The first convergence result then follows immediately from the application of (6.22) to g . That is,

$$J^{p-1}[\mathfrak{g'}](x) \to 0 \qquad \text{as } x \to \infty.$$

Let us now assume that <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*p−1. By Propositions 4.11 and 4.12, for every <sup>i</sup> <sup>∈</sup> {0,...,p − 2} the function

$$\mathbf{g}\_{l} = \Delta^{l} \mathbf{g}^{(p-2-l)}$$

lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*<sup>2</sup> and hence, applying the first result to gi, we obtain that

$$
\Delta \mathbf{g}\_l(\mathbf{x}) - \mathbf{g}\_l'(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.
$$

Summing these limits for i = 0,...,p − 2, we obtain the claimed limit.

**Theorem 8.7 (Weierstrassian Form When deg** *g* **<sup>≥</sup> 0)** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>p</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> *with* deg <sup>g</sup> <sup>=</sup> <sup>p</sup> <sup>−</sup> <sup>1</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗*. The following assertions hold.*


$$\begin{aligned} \Sigma g(\mathbf{x}) &= \sum\_{j=1}^{p-1} \binom{\boldsymbol{\chi}}{j} \Delta^{j-1} g(\mathbf{1}) + \binom{\boldsymbol{\chi}}{p} (\Sigma g)^{(p)}(\mathbf{1}) \\ &- g(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( g(\mathbf{x} + k) - \sum\_{j=0}^{p-1} \binom{\boldsymbol{\chi}}{j} \Delta^{j} g(k) - \binom{\boldsymbol{\chi}}{p} g^{(p)}(k) \right) \end{aligned}$$

*and the series converges uniformly on any bounded subset of* [0,∞)*.*


*Proof* By Proposition 4.12, we have that <sup>g</sup>(p) lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. Assertion (a) then follows from Propositions 6.36 and 7.7. Now, using (6.43) we get

$$\mathbb{Y}[\mathfrak{g}^{(p)}] = \sum\_{k=1}^{\infty} (\mathfrak{g}^{(p)}(k) - \Delta \mathfrak{g}^{(p-1)}(k)).$$

Using Theorem 8.2, we then obtain

$$\begin{split} \boldsymbol{\Sigma}\boldsymbol{g}(\boldsymbol{x}) &= \sum\_{j=1}^{p-1} \binom{\boldsymbol{x}}{j} \, \Delta^{j-1} \boldsymbol{g}(1) + \binom{\boldsymbol{x}}{p} \left( \boldsymbol{g}^{(p-1)}(1) - \boldsymbol{\chi} \left[ \boldsymbol{g}^{(p)} \right] \right) \\ &- \boldsymbol{g}(\boldsymbol{x}) - \lim\_{n \to \infty} \sum\_{k=1}^{n-1} \left( \boldsymbol{g}(\boldsymbol{x} + k) - \sum\_{j=0}^{p-1} \binom{\boldsymbol{x}}{j} \, \Delta^{j} \boldsymbol{g}(k) - \binom{\boldsymbol{x}}{p} \, \boldsymbol{g}^{(p)}(k) \right) \\ &+ \lim\_{n \to \infty} \binom{\boldsymbol{x}}{p} \left( \Delta^{p-1} \boldsymbol{g}(n) - \boldsymbol{g}^{(p-1)}(n) \right), \end{split}$$

where the latter limit is zero by Lemma 8.6. This proves assertion (b). Assertions (c) and (d) follow from Theorem 8.2.

*Example 8.8* Let us apply Theorem 8.7 to g(x) = ln x and p = 1. We immediately get

$$
\ln \Gamma(\mathbf{x}) = \left[ -\gamma x - \ln x - \sum\_{k=1}^{\infty} \left( \ln(x+k) - \ln k - \frac{x}{k} \right) \right],
$$

which is the additive version of the Weierstrassian form (8.3) of the gamma function. It is remarkable that we can now retrieve this formula in an effortless way. Upon differentiation, we also obtain (see, e.g., Srivastava and Choi [93, p. 24])

$$\psi(\mathbf{x}) = \left. -\gamma - \frac{1}{\mathbf{x}} - \sum\_{k=1}^{\infty} \left( \frac{1}{\mathbf{x} + k} - \frac{1}{k} \right) \right\rangle$$

Integrating on (0,x), we obtain

$$\psi\_{-2}(\mathbf{x}) = -\mathbf{y}\,\frac{\mathbf{x}^2}{2} + \mathbf{x} - \mathbf{x}\ln\mathbf{x} - \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k)\ln\left(1 + \frac{\mathbf{x}}{k}\right) - \mathbf{x} - \frac{\mathbf{x}^2}{2k} \right).$$

Integrating once more on (0,x), we obtain

$$\begin{aligned} \psi\_{-3}(\boldsymbol{\chi}) &= \frac{1}{12} \boldsymbol{x}^2 (9 - 2\boldsymbol{\chi}\boldsymbol{x} - 6\ln \boldsymbol{x}) \\ &- \sum\_{k=1}^{\infty} \left( \frac{1}{2} (\boldsymbol{\chi} + k)^2 \ln \left( 1 + \frac{\boldsymbol{x}}{k} \right) - \frac{k}{2} \boldsymbol{x} - \frac{3}{4} \boldsymbol{x}^2 - \frac{\boldsymbol{x}^3}{6k} \right). \end{aligned}$$

Just as in Example 8.3, we can integrate both sides on (0,x) repeatedly as we wish. ♦

Let us end this section with an aside about some potential consequences of the technical Lemma 8.6.

*Remark 8.9* If <sup>g</sup> lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗, then by Propositions 4.8 and 4.12 we have <sup>g</sup> <sup>∈</sup> *<sup>R</sup>*p−<sup>1</sup> <sup>R</sup> . That is, for any a ≥ 0

$$\mathbf{g}'(\mathbf{x} + \mathbf{a}) - \sum\_{j=0}^{p-2} \binom{\mathbf{a}}{j} \Delta^j \mathbf{g}'(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.$$

Combining this result with the first part of Lemma 8.6, we can derive surprising limits. For instance, we obtain for any p ∈ {1, 2, 3}

$$\Delta \mathbf{g}(\mathbf{x}) - \mathbf{g}'\left(\mathbf{x} + \frac{1}{2}\right) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.$$

This latter limit has the following interpretation. The mean value theorem tells us that g(x) = g (x + ξx ) for some ξx ∈ (0, 1). The limit above then says that

$$g'(x + \xi\_x) - g'(x + \frac{1}{2}) \to 0 \quad \text{as } x \to \infty.$$

In particular, if <sup>g</sup> lies in *<sup>C</sup>*<sup>2</sup> and for instance eventually satisfies <sup>g</sup> (x) <sup>≥</sup> <sup>c</sup> for some c > 0, then

$$\begin{aligned} \left| c \left| \xi\_{\chi} - \frac{1}{2} \right| \le \left| \int\_{\frac{1}{2}}^{\xi\_{\chi}} g''(x + t) \, dt \right| \\ &= \left| g'(x + \xi\_{\chi}) - g'(x + \frac{1}{2}) \right| \to 0 \quad \text{as } x \to \infty, \end{aligned}$$

which shows that ξx <sup>→</sup> <sup>1</sup> <sup>2</sup> as x → ∞. ♦

#### **8.3 Gregory's Formula-Based Series Representation**

The following proposition provides series expressions for g and σ[g] in terms of Gregory's coefficients (see also Proposition D.2 in Appendix D). This proposition follows from the next lemma, which in turn immediately follows from Corollary 6.12.

**Lemma 8.10** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> *for some* p, q <sup>∈</sup> <sup>N</sup> *such that* <sup>p</sup> <sup>≤</sup> <sup>q</sup>*. Let* x > 0 *be so that for* k = p, . . . , q *the function* g *is* k*-convex or* k*-concave on* [x,∞)*. Then we have*

$$\left|J^{k+1}[\Sigma \underline{\mathbf{g}}](\mathbf{x})\right| \le \overline{G}\_k \left|\Delta^k \underline{\mathbf{g}}(\mathbf{x})\right|, \qquad k = p, \dots, q.$$

**Proposition 8.11** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Let* x > <sup>0</sup> *be so that for every integer* q ≥ p *the function* g *is* q*-convex or* q*-concave on* [x,∞)*. Suppose also that the sequence* <sup>q</sup> → qg(x) *is bounded. Then we have*

$$J^{q+1}[\Sigma g](\mathbf{x}) \to \begin{array}{c} 0 \\ \end{array} \quad \text{as } q \to\_{\mathbb{N}} \infty,$$

*that is,*

$$\Sigma \mathbf{g}(\mathbf{x}) = \sigma \mathbf{[g]} + \int\_{1}^{\chi} \mathbf{g}(t) \, dt - \sum\_{n=1}^{\infty} G\_{n} \Delta^{n-1} \mathbf{g}(\mathbf{x}).\tag{8.4}$$

*In particular, if the assumptions above are satisfied for* x = 1*, then we have*

$$\sigma[\mathbf{g}] = \sum\_{n=1}^{\infty} G\_n \Delta^{n-1} \mathbf{g}(\mathbf{l}).\tag{8.5}$$

*Proof* This result is an immediate consequence of Lemma 8.10 and the fact that the sequence n → Gn decreases to zero. Identity (8.4) then follows from (6.18).

*Example 8.12* Applying Proposition 8.11 to the function g(x) = ln x with p = 1, we obtain the following series representation of the log-gamma function for x > 0

$$\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \mathbf{x} \ln \mathbf{x} - \sum\_{n=0}^{\infty} G\_{n+1} \,\Delta^n \ln \mathbf{x} \tag{8.6}$$

$$= \frac{1}{2} \ln(2\pi) - \mathbf{x} + \mathbf{x} \ln \mathbf{x} - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(\mathbf{x} + k),$$

where we have used the classical identity (see, e.g., Graham et al. [41, p. 188])

$$\Delta^n f(\mathbf{x}) = \sum\_{k=0}^n (-1)^{n-k} \binom{n}{k} f(\mathbf{x} + k).$$

Equivalently, using the Binet function J (x), identity (8.6) can take the form

$$J(\mathbf{x}) = \ -\sum\_{n=1}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(\mathbf{x} + k), \qquad \mathbf{x} > \mathbf{0},$$

where, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, the inner sum also reduces to the following integral (see, e.g., [41, p. 192])

$$(-1)^n \Delta^n \ln x = \ -\int\_0^\infty \frac{e^{-\chi t}}{t} \left(1 - e^{-t}\right)^n dt, \qquad n \in \mathbb{N}^\*.$$

In particular,

$$|\Delta^n \ln x| \le \int\_0^\infty \frac{e^{-\lambda t}}{t} \left(1 - e^{-t}\right) dt = \Delta \ln x = \ln \left(1 + \frac{1}{x}\right).$$

In the multiplicative notation, identity (8.6) takes the following form

$$\Gamma(\mathbf{x}) = \sqrt{2\pi} \, e^{-\mathbf{x}} \, x^{\mathbf{x} - \frac{1}{2}} \left( \frac{\mathbf{x} + 1}{\mathbf{x}} \right)^{\frac{1}{12}} \left( \frac{(\mathbf{x} + 2)\mathbf{x}}{(\mathbf{x} + 1)^2} \right)^{-\frac{1}{24}}$$

$$\times \left( \frac{(\mathbf{x} + 3)(\mathbf{x} + 1)^3}{(\mathbf{x} + 2)^3 \mathbf{x}} \right)^{\frac{19}{220}} \dots \dots$$

Further infinite product representations and approximations of the gamma function can be found for instance in Feng and Wang [36]. ♦

#### **8.4 Analogue of Fontana-Mascheroni's Series**

Interestingly, when g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> and p = 0, identity (8.5) reduces to the well-known formula

$$\lambda \;= \sum\_{n=1}^{\infty} \frac{|G\_n|}{n} \;,$$

where γ is Euler's constant and the series is called *Fontana-Mascheroni's series* (see, e.g., Blagouchine [20, p. 379]). Thus, the series representation of the asymptotic constant σ[g] given in (8.5) provides the analogue of Fontana-Mascheroni's series for any function g satisfying the assumptions of Proposition 8.11.

*Example 8.13* The analogue of Fontana-Mascheroni's series for the function g(x) = ln x can be obtained by setting x = 1 in (8.6). We obtain

$$\sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(k+1) \, = \, -1 + \frac{1}{2} \ln(2\pi),$$

or equivalently (see Example 8.12),

$$\sum\_{n=0}^{\infty} |G\_{n+1}| \int\_0^{\infty} \frac{e^{-t}}{t} \left(1 - e^{-t}\right)^n dt = 1 - \frac{1}{2} \ln(2\pi). \tag{8}$$

The following proposition provides a way to construct a function g(x) that has a prescribed associated asymptotic constant σ[g] given in the form (8.5).

**Proposition 8.14** *Suppose that the series*

$$\mathcal{S} = \sum\_{n=1}^{\infty} G\_n \,\mathrm{s}\_n$$

*converges for a given real sequence* <sup>n</sup> → sn *and let* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be such that*

$$\log(n) \;=\sum\_{k=1}^{n} \binom{n-1}{k-1} s\_k \;, \qquad n \in \mathbb{N}^\*.\tag{8.7}$$

*If* g *satisfies the assumptions of Proposition 8.11 with* x = 1*, then the following assertions hold.*

*(a)* S = σ[g]*. (b)* g(n) <sup>=</sup> <sup>n</sup>−<sup>1</sup> k=1 <sup>n</sup>−<sup>1</sup> k sk *for any* <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗*. (c)* sn <sup>=</sup> n−1g(1) <sup>=</sup> ng(1) *for any* <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗*.*

*Proof* Identity (8.7) can take the following alternative form

$$\lg(n+1) \;=\sum\_{k=0}^{n} \binom{n}{k} \,\, s\_{k+1} \;, \qquad n \in \mathbb{N}.$$

Using the classical inversion formula (Graham et al. [41, p. 192]), we then obtain

$$s\_{n+1} = \sum\_{k=0}^{n} (-1)^{n-k} \binom{n}{k} \operatorname{g}(k+1) = \Delta^n \operatorname{g}(1), \qquad n \in \mathbb{N}.$$

This establishes assertion (c) and then assertion (a) by Proposition 8.11. Assertion (b) is straightforward using (5.2).

*Example 8.15* Let us apply Proposition 8.14 to the series

$$\mathcal{S} = \sum\_{n=1}^{\infty} \frac{|G\_n|}{n^2},$$

that is,

$$S = \sum\_{n=1}^{\infty} G\_n s\_n \qquad \text{with} \quad s\_n = (-1)^{n-1} \frac{1}{n^2}.$$

Let <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be a function such that

$$\lg(n) = \sum\_{k=1}^{n} (-1)^{k-1} \binom{n-1}{k-1} \frac{1}{k^2}, \qquad n \in \mathbb{N}^\*,$$

or equivalently (see Graham et al. [41, p. 281] or Merlini et al. [72, Lemma 4.1]),

$$\lg(n) \;= \frac{1}{n} H\_n \;, \qquad n \in \mathbb{N}^\*.$$

We naturally take g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> Hx , from which we can derive (see, e.g., Graham et al. [41, p. 280])

$$\Sigma g(\mathbf{x}) = \frac{\pi^2}{12} - \frac{1}{2}\psi\_1(\mathbf{x}) + \frac{1}{2}H\_{\mathbf{x}-1}^2 \dots$$

Thus, we have S = σ[g]. Combining this result with the definition of σ[g], we derive the surprising identity (compare with Blagouchine and Coppo [22, pp. 469– 470])

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{n^2} = \frac{\pi^2}{12} - \frac{1}{2} + \frac{1}{2} \int\_0^1 H\_t^2 \, dt \, dt$$

Proceeding similarly, with a bit of computation one also finds

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{n^3} = \frac{1}{3}\xi(3) + \frac{\pi^2}{12}\mathcal{Y} - \frac{\mathfrak{F}}{12} + \frac{1}{6}\int\_0^1 H\_t^3 \,dt\,\ldots$$

Those formulas are worth comparing with the well-known identities (see Sect. 10.2)

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{n} = \left. \boldsymbol{\nu} \right| = \int\_0^1 H\_{\boldsymbol{l}} \, dt \, \boldsymbol{\lambda}$$

For similar formulas, see also Blagouchine and Coppo [22]. ♦ *Example 8.16* Let us apply Proposition 8.14 to the series

$$S = \sum\_{n=1}^{\infty} \frac{|G\_n|}{n+a},$$

where a > 0. For this series, we can take

$$\mathbf{g}(\mathbf{x}) \equiv \mathbf{B}(\mathbf{x}, a+\mathbf{l}) \qquad \text{and} \qquad \Sigma \mathbf{g}(\mathbf{x}) \equiv \frac{1}{a} - \mathbf{B}(\mathbf{x}, a),$$

where (x, y) → B(x, y) is the beta function. We then derive the identity

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{n+a} = \frac{1}{a} - \int\_0^1 \mathbf{B}(\mathbf{x}+1, a) \, d\mathbf{x} \dots$$

Using the definition of the beta function as an integral, this identity also reads

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{n+a} = \frac{1}{a} + \int\_0^1 \frac{x^a}{\ln(1-x)} dx.$$

Setting <sup>a</sup> <sup>=</sup> <sup>1</sup> <sup>2</sup> for instance, we obtain

$$\sum\_{n=1}^{\infty} \frac{|G\_n|}{2n+1} = 1 + \frac{1}{2} \int\_0^1 \frac{\sqrt{x}}{\ln(1-x)} dx...$$

We also observe that the decimal expansion of the latter integral is the sequence A094691 in the OEIS [90]. ♦

#### **8.5 Analogue of Raabe's Formula**

Recall that Raabe's formula yields, for any x > 0, a simple explicit expression for the integral of the log-gamma function over the interval (x, x + 1). We state this result in the following proposition (see Example 6.5). For recent references on Raabe's formula, see, e.g., Cohen and Friedman [30, p. 366] and Srivastava and Choi [93, p. 29].

**Proposition 8.17 (Raabe's Formula)** *The following identity holds*

$$\int\_{\mathbf{x}}^{\mathbf{x}+1} \ln \Gamma(t) \, dt \, = \frac{1}{2} \ln(2\pi) + \mathbf{x} \ln \mathbf{x} - \mathbf{x} \,, \qquad \mathbf{x} > \mathbf{0}. \tag{8.8}$$

Clearly, identities (6.10) and (6.11) provide the analogue of Raabe's formula for any continuous multiple log --type function g. We recall this important and useful formula in the next proposition.

**Proposition 8.18 (Analogue of Raabe's Formula)** *For any function* g *lying in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom()*, we have*

$$\int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt \, = \,\sigma[\mathbf{g}] + \int\_{1}^{\chi} \mathbf{g}(t) \, dt, \qquad \chi > 0,\tag{8.9}$$

*where* σ[g] *is the asymptotic constant associated with* g *and defined by the equation*

$$
\sigma[\!g\!\!/] = \int\_0^1 \Sigma g(t+1) \, dt \,. \tag{8.10}
$$

The challenging part in this context is to find a nice expression for σ[g]. For instance, setting x = 1 in Raabe's formula (8.8), we obtain the identity

$$\sigma[\ln] = \int\_0^1 \ln \Gamma(t+1) \, dt = -1 + \frac{1}{2} \ln(2\pi) \, dt$$

However, in general such a closed-form expression for σ[g] is not easy to derive.

An expression for σ[g] as a limit can be obtained using Proposition 5.18(c2). Specifically, if <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then we have

$$\begin{aligned} \sigma[\mathbf{g}] &= \lim\_{n \to \infty} \int\_0^1 (f\_n^{\mathbf{p}}[\mathbf{g}](t) + \mathbf{g}(t)) \, dt \\ &= \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}(k) - \int\_1^n \mathbf{g}(t) \, dt + \sum\_{j=1}^p G\_j \Delta^{j-1} \mathbf{g}(n) \right), \end{aligned} \tag{8.11}$$

which is nothing other than the restriction of the generalized Stirling formula (6.21) to the natural integers.

Series expressions for σ[g] can also be obtained by integrating on the interval (0, 1) the series representations of g + g given in Theorems 8.2 and 8.7. For instance, we have

$$\sigma[\mathbf{g}] = \sum\_{j=1}^{p} G\_j \, \Delta^{j-1} \mathbf{g}(1) - \sum\_{k=1}^{\infty} \left( \int\_{k}^{k+1} \mathbf{g}(t) \, dt - \sum\_{j=0}^{p} G\_j \, \Delta^j \mathbf{g}(k) \right). \tag{8.12}$$

Note also that, under certain assumptions, the latter series converges to zero as p →<sup>N</sup> ∞. In this case, (8.12) reduces to the analogue of Fontana-Mascheroni's series; see Proposition 8.11.

*Example 8.19* Applying (8.11) and (8.12) to g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> and p = 0, we obtain

$$\sigma[g] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \frac{1}{k} - \ln n \right) \\ = \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \ln \left( 1 + \frac{1}{k} \right) \right),$$

which is Euler's constant γ . Identity (8.9) then immediately provides the following analogue of Raabe's formula

$$\int\_{\chi}^{\chi+l} \psi(t) \, dt \, = \, \ln x \,, \qquad x > 0. \tag{6}$$

The following proposition provides interesting identities that involve the antiderivative of g, where <sup>g</sup> is any function lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(). It also yields a formula for G, where G is the antiderivative of g. This result is worth comparing with Example 7.19.

**Proposition 8.20** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and define the function* <sup>G</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation*

$$G(\mathbf{x}) = \int\_{1}^{\mathbf{x}} \mathbf{g}(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

*Then* <sup>G</sup> *lies in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p+1*. Moreover, for any* x > <sup>0</sup> *we have*

$$\Sigma G(\mathbf{x}) = \int\_{1}^{\chi} \Sigma \mathbf{g}(t) \, dt - \sigma \lg(\mathbf{g} \, | \, \mathbf{x} - \mathbf{l} \, \mathbf{x})$$

*and*

$$\Sigma\_{\chi} \int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt \, = \int\_{1}^{\chi} \Sigma \mathbf{g}(t) \, dt \, \, \,$$

*Proof* We have that <sup>G</sup> lies in *<sup>C</sup>*1∩*D*p+1∩*K*p+<sup>1</sup> by Proposition 4.12. We then obtain

$$(\Sigma G)' = |\Sigma g - \sigma[g]|$$

by Proposition 7.7. This establishes the first formula. Combining it with (8.9), we obtain

$$
\Sigma\_{\chi} \int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt \, = \sigma \left[ \mathbf{g} \right](\chi - 1) + \Sigma G(\chi) \, = \int\_{1}^{\chi} \Sigma \mathbf{g}(t) \, dt,
$$

that is, the second formula.

*Example 8.21* Apply Proposition 8.20 to the function g(x) = ln x with p = 1, we obtain

$$\Sigma\_{\chi} \int\_{\mathfrak{X}}^{\chi+1} \ln \Gamma(t) \, dt \, = \int\_{1}^{\chi} \ln \Gamma(t) \, dt \, = \,\psi\_{-2}(\mathfrak{x}) - \psi\_{-2}(1).$$

Using Raabe's formula (8.8) in the left-hand side, we finally obtain

$$
\frac{1}{2}\ln(2\pi)(\mathbf{x}-\mathbf{l}) + \Sigma\_{\mathbf{x}}(\mathbf{x}\ln\mathbf{x}) - \begin{pmatrix} \mathbf{x} \\ \mathbf{2} \end{pmatrix} = \begin{vmatrix} \boldsymbol{\upmu}\_{-2}(\mathbf{x}) - \boldsymbol{\upmu}\_{-2}(\mathbf{l}), \mathbf{1} \end{vmatrix}
$$

from which we immediately derive a closed-form expression for x(x ln x); see also Sect. 12.5. ♦

We now present a proposition, immediately followed by a corollary that provides interesting characterizations of multiple --type functions based on the analogue of Raabe's formula. Example 8.24 below illustrates this characterization in the special case of the log-gamma function.

$$\square$$

**Proposition 8.22** *Let* <sup>h</sup> *lie in <sup>C</sup>*<sup>1</sup> <sup>∩</sup>*D*p+<sup>1</sup> <sup>∩</sup>*K*p+<sup>1</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be a function. Then* <sup>f</sup> *lies in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *and satisfies the equation*

$$\int\_{\mathbf{x}}^{\mathbf{x}+1} f(t) \, dt \, = \begin{array}{c} h(\mathbf{x}), \\ \end{array} \quad \text{x} > 0, \tag{8.13}$$

*if and only if* f = (h) *.*

*Proof* The sufficiency is trivial. Let us prove the necessity. Differentiating both sides of (8.13), we obtain f = h . Using the existence Theorem 3.6 and then Proposition 7.7, we then see that <sup>f</sup> <sup>=</sup> <sup>c</sup> <sup>+</sup> (h) for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>. Using (8.13) again, we then see that c must be 0.

**Corollary 8.23 (A Characterization Result)** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *and let* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be a function. Then* <sup>f</sup> *lies in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *and satisfies the equation*

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \,\sigma[g] + \int\_{1}^{\chi} g(t) \, dt, \qquad x > 0,$$

*if and only if* f = g*.*

*Proof* The sufficiency is trivial by (8.9). Let us prove the necessity. Define the function <sup>h</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$h(\mathbf{x}) := \sigma[\mathbf{g}] + \int\_{\mathbf{l}}^{\mathbf{x}} \mathbf{g}(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

Then, <sup>h</sup> clearly lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p+1. Using Proposition 8.22 and then Proposition 8.20, we immediately obtain that f = (h) = g.

*Example 8.24* Applying Corollary 8.23 to the function g(x) = ln x with p = 1, we obtain the following alternative characterization of the gamma function. A function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and satisfies the equation

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \frac{1}{2} \ln(2\pi) + x \ln x - x \, , \qquad x > 0, 1$$

if and only if f (x) = ln -(x). ♦

#### **8.6 Analogue of Gauss' Multiplication Formula**

In the following proposition, we recall the *Gauss multiplication formula* for the gamma function, also called *Gauss' multiplication theorem* (see Artin [11, p. 24]).

**Proposition 8.25 (Gauss' Multiplication Formula)** *For any integer* m ≥ 1*, we have the following identity*

$$\prod\_{j=0}^{m-1} \Gamma\left(\frac{\mathbf{x} + j}{m}\right) = \frac{\Gamma(\mathbf{x})}{m^{\frac{1}{2} - \frac{1}{2}}} (2\pi)^{\frac{m-1}{2}}, \qquad \mathbf{x} > \mathbf{0}. \tag{8.14}$$

When m = 2, identity (8.14) reduces to *Legendre's duplication formula*

$$
\Gamma\left(\frac{\chi}{2}\right)\Gamma\left(\frac{\chi+1}{2}\right) \\
= \frac{\Gamma(\chi)}{2^{\chi-1}}\sqrt{\pi}, \qquad \chi > 0.
$$

*Remark 8.26* For any fixed m ≥ 2, the Gauss multiplication formula (8.14) enables one to retrieve easily the value of the asymptotic constant associated with the function g(x) = ln x. In particular, this value can be retrieved from Legendre's duplication formula. Indeed, taking the logarithm of both sides of (8.14) and then integrating on x ∈ (0, 1), we obtain

$$\sum\_{j=0}^{m-1} \int\_0^1 \ln \Gamma\left(\frac{\chi+j}{m}\right) dx \;= \frac{m-1}{2} \ln(2\pi) + \int\_0^1 \ln \Gamma(\chi) \, dx.$$

Using the change of variable <sup>t</sup> <sup>=</sup> <sup>x</sup>+<sup>j</sup> <sup>m</sup> in the left-hand integral, we then obtain almost immediately the following identity

$$\int\_0^1 \ln \Gamma(t) \, dt \,\,=\,\frac{1}{2} \ln(2\pi).$$

Combining this result with (8.9), we retrieve <sup>σ</sup>[ln]=−<sup>1</sup> <sup>+</sup> <sup>1</sup> <sup>2</sup> ln(2π ). ♦

Webster [98, Theorem 5.2] showed how an analogue of Gauss' multiplication formula can be partially constructed for any --type function. His proof is very short and essentially relies on the uniqueness and existence theorems in the special case when p = 1. We now show how Webster's approach can be further extended to all multiple --type functions. As usual, we use the additive notation.

**Theorem 8.27 (Analogue of Gauss' Multiplication Formula)** *Let* g *lie in* dom() *and let* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*. Define also the function* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation*

$$\mathbf{g}\_m(\mathbf{x}) \, \, = \, \mathbf{g}\left(\frac{\mathbf{x}}{m}\right) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

*Then we have*

$$\sum\_{j=0}^{m-1} \Sigma g\left(\frac{\mathbf{x} + j}{m}\right) \ = \sum\_{j=1}^{m} \Sigma g\left(\frac{j}{m}\right) + \Sigma g\_m(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}, \tag{8.15}$$

*and*

$$\Sigma \mathcal{g}\_m(m) := \sum\_{j=1}^{m-1} \mathcal{g}\left(\frac{j}{m}\right).$$

*Proof* Let <sup>g</sup> lie in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. Then gm also lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> by Corollary 4.21. Now, we can readily check that the function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by

$$f(\mathbf{x}) = \sum\_{j=0}^{m-1} \Sigma \mathbf{g}\left(\frac{\mathbf{x} + j}{m}\right) - \sum\_{j=1}^{m} \Sigma \mathbf{g}\left(\frac{j}{m}\right).$$

is a solution to the equation f <sup>=</sup> gm that lies in *<sup>K</sup>*<sup>p</sup> and such that f (1) <sup>=</sup> 0. By the uniqueness Theorem 3.1, it follows that f = gm. This establishes (8.15). The last identity follows immediately.

Theorem 8.27 actually provides a partial solution to the problem of finding the analogue of Gauss' multiplication formula. A more complete result would also provide a closed-form expression for the right-hand side of identity (8.15).

Unfortunately, no general method to provide simple or compact expressions for gm seems to be known. However, such expressions can sometimes be found.

For instance, when g(x) = ln x, we obtain

$$\mathbf{g}\_m(\mathbf{x}) = \ln \mathbf{x} - \ln m \qquad \text{and} \qquad \Sigma \mathbf{g}\_m(\mathbf{x}) = \ln \Gamma(\mathbf{x}) - (\mathbf{x} - 1)\ln m.$$

Substituting this latter expression in identity (8.15), we immediately obtain the formula

$$\sum\_{j=0}^{m-1} \ln \Gamma \left( \frac{\mathbf{x} + j}{m} \right) \\ \quad = \sum\_{j=1}^{m} \ln \Gamma \left( \frac{j}{m} \right) + \ln \Gamma(\mathbf{x}) - (\mathbf{x} - \mathbf{l}) \ln m \,, \tag{8.16}$$

that is, in the multiplicative notation,

$$\prod\_{j=0}^{m-1} \Gamma\left(\frac{\chi+j}{m}\right) \, = \frac{\Gamma(\chi)}{m^{\chi-1}} \prod\_{j=1}^m \Gamma\left(\frac{j}{m}\right), \qquad \chi > 0.$$

It remains to find a nice expression for the latter product, and more generally for the right-hand sum of identity (8.15). On this issue, we have the following useful result.

**Proposition 8.28** *Let* <sup>g</sup> *lie in <sup>C</sup>*0∩dom() *and let* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*. Define also the function* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation* gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> ) *for* x > 0*. Then we have*

$$\sum\_{j=1}^{m} \Sigma \mathbf{g}\left(\frac{j}{m}\right) = m \,\boldsymbol{\sigma}[\mathbf{g}] - \int\_{m}^{m+1} \Sigma \mathbf{g}\_{m}(t) \, dt$$

$$= m \,\boldsymbol{\sigma}[\mathbf{g}] - \boldsymbol{\sigma}[\mathbf{g}\_{m}] - m \int\_{1/m}^{1} \mathbf{g}(t) \, dt.$$

*Proof* The first identity can be proved simply by integrating both sides of (8.15) on <sup>x</sup> <sup>∈</sup> (m, m <sup>+</sup> <sup>1</sup>). Indeed, using the change of variable <sup>t</sup> <sup>=</sup> <sup>x</sup>+<sup>j</sup> <sup>m</sup> and identity (8.10), the left-hand side reduces to

$$m\sum\_{j=0}^{m-1} \int\_{1+\frac{j}{m}}^{1+\frac{j+1}{m}} \Sigma \mathbf{g}(t) \,dt \, = \, m \int\_{1}^{2} \Sigma \mathbf{g}(t) \,dt \, = \, m \, \sigma[\mathbf{g}].$$

The second identity then follows from a simple application of (8.9).

*Example 8.29* Let us apply Proposition 8.28 to the function g(x) = ln x. We obtain

$$\sum\_{j=1}^{m} \ln \Gamma \left( \frac{j}{m} \right) \, = \, -\frac{1}{2} \ln m + \frac{1}{2} \left( m - 1 \right) \ln(2\pi) \,.$$

Substituting this expression in (8.16) and then translating the resulting formula into the multiplicative notation, we retrieve Gauss' multiplication formula (8.14). ♦

In the following proposition, we provide a convergence result for the function defined in the left-hand side of (8.15), which does not require the computation of gm. This result simply reduces to the generalized Stirling formula when m = 1.

**Proposition 8.30** *Let* <sup>g</sup> *lie in <sup>C</sup>*0∩*D*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*. Define also the function* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation* gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> ) *for* x > 0*. Then we have*

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g}\left(\frac{\mathbf{x} + j}{m}\right) - \int\_1^\chi \mathbf{g}\_m(t) \, dt + \sum\_{j=1}^p G\_j \, \Delta^{j-1} \mathbf{g}\_m(\mathbf{x}) \to \; m \, \sigma\_m[\mathbf{g}]$$

*as* x → ∞*, where*

$$
\sigma\_m[\mathbf{g}] = \sigma[\mathbf{g}] - \int\_{1/m}^1 \mathbf{g}(t) \, dt.
$$

*Proof* Theorem 8.27 and Proposition 8.28 provide the following identity

$$
\left[\Sigma \mathbf{g}\_{\mathfrak{m}}(\mathbf{x}) - \sigma \mathbf{[g}\_{\mathfrak{m}}\right] = \sum\_{j=0}^{m-1} \Sigma \mathbf{g}\left(\frac{\mathbf{x} + j}{m}\right) - m \,\sigma\_{\mathfrak{m}}[\mathbf{g}] \qquad \mathbf{x} > \mathbf{0}.
$$

The result is then an immediate application of the generalized Stirling formula (Theorem 6.13) to the function gm (recall that gm lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p).

We end this section with three corollaries. Corollaries 8.31 and 8.32 yield properties of the derivatives and antiderivatives of the function g in the context of the analogue of Gauss' multiplication formula. Corollary 8.33 shows how the antiderivative of g can be expressed as a limit involving the function gm.

**Corollary 8.31** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and* <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗*. Let also* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and define the function* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the* gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> )*. Then the equation obtained by replacing* g *with* g(r) *in* (8.15) *can also be obtained by differentiating* r *times both sides of* (8.15)*.*

*Proof* Differentiating r times both sides of (8.15), multiplying through by mr, and then using (7.1), we obtain

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g}^{(r)} \left( \frac{\mathbf{x} + j}{m} \right) + m(\Sigma \mathbf{g})^{(r)}(\mathbf{1}) = m^r \, \Sigma \mathbf{g}\_m^{(r)}(\mathbf{x}) + m^r (\Sigma \mathbf{g}\_m)^{(r)}(\mathbf{1}).$$

Setting x = 1, we then get

$$\sum\_{j=1}^{m} \Sigma \mathbf{g}^{(r)}\left(\frac{j}{m}\right) + m(\Sigma \mathbf{g})^{(r)}(1) = m^r (\Sigma \mathbf{g}\_m)^{(r)}(1).$$

Subtracting this latter equation from the former one, we finally get

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g}^{(r)} \left( \frac{\mathbf{x} + j}{m} \right) \\ = \sum\_{j=1}^{m} \Sigma \mathbf{g}^{(r)} \left( \frac{j}{m} \right) + m^r \Sigma \mathbf{g}\_m^{(r)}(\mathbf{x}),$$

which is precisely the equation obtained by replacing <sup>g</sup> with <sup>g</sup>(r) in (8.15).

**Corollary 8.32** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*,* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*,* <sup>c</sup> <sup>∈</sup> <sup>R</sup>*, and* <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p*. Define also the functions* G, gm, Gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equations*

$$G(\mathbf{x}) = c + \int\_1^\mathbf{x} g(t) \, dt, \quad g\_m(\mathbf{x}) = g\left(\frac{\mathbf{x}}{m}\right), \quad G\_m(\mathbf{x}) = G\left(\frac{\mathbf{x}}{m}\right) \quad \text{for } \mathbf{x} > \mathbf{0}.$$

*Then both functions* <sup>G</sup> *and* Gm *lie in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*p+1*. Moreover, for any* x > <sup>0</sup> *we have*

$$
\Sigma G\_m(\mathbf{x}) = \frac{1}{m} \int\_1^\chi \Sigma g\_m(t) \, dt + (\mathbf{x} - \mathbf{l}) \left( c - \frac{1}{m} \int\_m^{m+1} \Sigma g\_m(t) \, dt \right).
$$

*Proof* The first part follows immediately from Proposition 8.20 and Corollary 4.21. Now, by definition of Gm we have

$$G\_m(\mathbf{x}) = c + \frac{1}{m} \int\_m^\chi \mathbf{g}\_m(t) \, dt \, = \left. c + \frac{1}{m} \left( \int\_1^\chi \mathbf{g}\_m(t) \, dt - \int\_1^m \mathbf{g}\_m(t) \, dt \right) \right|$$

The claimed identity can then be established easily using Proposition 8.20 and then applying identity (8.9).

**Corollary 8.33** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom()*. Define also the functions* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> (m <sup>∈</sup> <sup>N</sup>∗) *by the equation* gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> ) *for* x > 0*. Then we have*

$$\lim\_{m \to \infty} \frac{\Sigma g\_m(mx) - \Sigma g\_m(m)}{m} = \int\_1^\chi g(t) \, dt, \qquad x > 0.$$

*Moreover, if* g *is integrable at* 0*, then*

$$\lim\_{m \to \infty} \frac{1}{m} \Sigma\_{\mathcal{G}\_m}(m\mathbf{x}) \; = \int\_0^\chi \mathbf{g}(t) \, dt \; , \qquad \mathbf{x} > \mathbf{0}.$$

*Proof* Replacing x with mx in (8.15) and dividing through by m, we obtain

$$\frac{1}{m}\operatorname{\Sigma}\operatorname{g}\_{m}(m\boldsymbol{x}) \;= \frac{1}{m}\sum\_{j=0}^{m-1} \operatorname{\Sigma}\operatorname{g}\left(\boldsymbol{x} + \frac{j}{m}\right) - \frac{1}{m}\sum\_{j=1}^{m} \operatorname{\Sigma}\operatorname{g}\left(\frac{j}{m}\right) \cdot \frac{1}{m}$$

Letting m →<sup>N</sup> ∞ in this identity and using (8.9), we see that the first Riemann sum on the right side converges to

$$\int\_0^1 \Sigma \mathbf{g}(\mathbf{x} + t) \, dt \, = \sigma \mathbf{[g]} + \int\_1^\chi \mathbf{g}(t) \, dt$$

while the second one converges (if g is integrable at 0) to

$$\int\_0^1 \Sigma \mathbf{g}(t) \, dt \, = \,\, \sigma[\mathbf{g}] - \int\_0^1 \mathbf{g}(t) \, dt.$$

This establishes the corollary.

#### **8.7 Asymptotic Expansions and Related Results**

In this section, we provide and investigate asymptotic expansions of (higher order differentiable) multiple log --type functions. We also establish and discuss some important consequences of these expansions, including a variant of the generalized Stirling formula and an extension of the so-called Liu formula to multiple log --type functions.

To begin with, let us first recall the asymptotic expansion of the log-gamma function (see, e.g., Gel'fond [39, p. 342] and Srivastava and Choi [93, p. 7]).

**Proposition 8.34** *For any* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*, we have the following asymptotic expansion as* x → ∞

$$\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1)\mathbf{x}^{k}} + O\left(\mathbf{x}^{-q-1}\right). \tag{8.17}$$

For instance, setting q = 4 in equation (8.17), we obtain

$$
\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \frac{1}{12\pi} - \frac{1}{360\mathbf{x}^3} + O\left(\mathbf{x}^{-5}\right).
$$

We now provide a generalization of this result to multiple log --type functions. Even more generally, in the next proposition we provide for any integer <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> an asymptotic expansion of the function

$$\chi \leftrightarrow \frac{1}{m} \sum\_{j=0}^{m-1} \Sigma g\left(x + \frac{j}{m}\right). \tag{8.18}$$

#### **Proposition 8.35**

*(a) Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,1} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then, for any* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and any* x > 0*, we have*

$$\frac{1}{m}\sum\_{j=0}^{m-1}\Sigma\mathbf{g}\left(\mathbf{x}+\frac{j}{m}\right) \\ = \int\_{\boldsymbol{\chi}}^{\boldsymbol{\chi}+1} \Sigma\mathbf{g}(t) \, dt - \frac{1}{2m}\mathbf{g}(\mathbf{x}) + R\_m(\boldsymbol{\chi})\,,$$

*with*

$$R\_m(\mathbf{x}) = \frac{1}{m} \int\_0^1 B\_1(\{mt\}) \left(\Sigma g\right)'(\mathbf{x} + t) \, dt$$

*and*

$$|R\_m(\mathbf{x})| \le \frac{1}{2m} \int\_0^1 |(\Sigma \mathbf{g})'(\mathbf{x} + t)| \, dt \dots$$

*For large* x *the latter integral reduces to* |g(x)|*.*

*(b) If* <sup>g</sup> *lie in <sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,2q} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and some* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*. Then, for any* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and any* x > <sup>0</sup>*, we have*

$$
\frac{1}{m} \sum\_{j=0}^{m-1} \Sigma \mathbf{g} \left( \mathbf{x} + \frac{j}{m} \right) \\
= \int\_{\mathbf{x}}^{\mathbf{x} + 1} \Sigma \mathbf{g}(t) \, dt - \frac{1}{2m} \mathbf{g}(\mathbf{x}),
$$

$$
+ \sum\_{k=1}^{q} \frac{1}{m^{2k}} \frac{B\_{2k}}{(2k)!} \mathbf{g}^{(2k-1)}(\mathbf{x}) + R\_m^q(\mathbf{x}) \,,
$$

*with*

$$R\_m^q(\chi) = -\frac{1}{m^{2q}} \int\_0^1 \frac{B\_{2q}(\{mt\})}{(2q)!} (\Sigma g)^{(2q)}(\chi + t) \, dt$$

*and*

$$|R\_m^q(\chi)| \le \frac{1}{m^{2q}} \frac{|B\_{2q}|}{(2q)!} \int\_0^1 |(\Sigma g)^{(2q)}(\chi + t)| \, dt \dots$$

*For large* <sup>x</sup> *the latter integral reduces to* <sup>|</sup>g(2q−1) (x)|*.*

*Proof* Let us prove assertion (b) first. The first part follows from a straightforward application of Euler-Maclaurin's formula (Proposition 6.31) to f = g, with a = <sup>x</sup>, <sup>b</sup> <sup>=</sup> <sup>x</sup> <sup>+</sup> 1, and <sup>N</sup> <sup>=</sup> <sup>m</sup>. Now, we see that the function (g)(2q) lies in *<sup>K</sup>*(p−2q)<sup>+</sup> by Proposition 4.12, and hence also in *<sup>K</sup>*−<sup>1</sup> by Proposition 4.7. Thus, for sufficiently large x we obtain

$$\begin{aligned} \left| \int\_0^1 |(\Sigma g)^{(2q)}(\mathbf{x} + t)| \, dt &= \left| \int\_0^1 (\Sigma g)^{(2q)}(\mathbf{x} + t) \, dt \right| \\ &= \left| (\Sigma g)^{(2q - 1)}(\mathbf{x} + 1) - (\Sigma g)^{(2q - 1)}(\mathbf{x}) \right| .\end{aligned}$$

By Proposition 7.7, the latter expression reduces to

$$\left| \left| \Sigma g^{(2q-1)}(x+1) - \Sigma g^{(2q-1)}(x) \right| \right| \\
= \left| g^{(2q-1)}(x) \right|.$$

Assertion (a) can be proved similarly. Here we observe that (g) lies in *<sup>K</sup>*(p−1)<sup>+</sup> and hence also in *<sup>K</sup>*<sup>−</sup>1. Thus, for sufficiently large <sup>x</sup> we obtain

$$\int\_0^1 |(\Sigma g)'(\chi + t)| \, dt \, = \left| \int\_0^1 (\Sigma g)'(\chi + t) \, dt \right| \, = |g(\chi)|.$$

This completes the proof.

Setting m = 1 in Proposition 8.35, we derive immediately an asymptotic expansion of the function g in terms of its trend and the higher order derivatives of g. As this special case is very important for the applications, we state it in the next proposition (in which we also use (8.9) to evaluate the integral of g on (x, x +1)).

**Proposition 8.36** *The following assertions hold.*

*(a) Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,1} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then, for any* x > <sup>0</sup> *we have*

$$
\Sigma \mathbf{g}(\mathbf{x}) = \sigma \mathbf{[g]} + \int\_1^\chi \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) + R\_1(\mathbf{x}) \,, \mathbf{x}
$$

*with*

$$\mathcal{R}\_{\mathbb{I}}(\mathbf{x}) := \int\_0^1 B\_{\mathbb{I}}(t) \, (\Sigma \mathbf{g})'(\mathbf{x} + t) \, dt$$

*and*

$$|\mathcal{R}\_1(\mathbf{x})| \le \frac{1}{2} \int\_0^1 |(\Sigma \mathbf{g})'(\mathbf{x} + t)| \, dt \, \mathbf{x}$$

*For large* x *the latter integral reduces to* |g(x)|*.*

*(b) If* <sup>g</sup> *lie in <sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,2q} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and some* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*. Then, for any* x > 0 *we have*

$$\Sigma g(\mathbf{x}) = \sigma \mathbf{[g]} + \int\_{1}^{\mathbf{x}} \mathbf{g(t)} \, dt - \frac{1}{2} \, \mathbf{g(x)} + \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} \, \mathbf{g^{(2k-1)}}(\mathbf{x}) + R\_{1}^{q}(\mathbf{x}) \,. \tag{8.19}$$

*with*

$$R\_1^q(\chi) = -\int\_0^1 \frac{B\_{2q}(t)}{(2q)!} (\Sigma g)^{(2q)}(\chi + t) \, dt$$

*and*

$$|R\_1^q(\mathbf{x})| \le \frac{|B\_{2q}|}{(2q)!} \int\_0^1 |(\Sigma \mathbf{g})^{(2q)}(\mathbf{x} + t)| \, dt \, \ldots$$

*For large* <sup>x</sup> *the latter integral reduces to* <sup>|</sup>g(2q−1) (x)|*.*

*Example 8.37* Taking g(x) = ln x and p = 1 in (8.19), we retrieve immediately the asymptotic expansion given in (8.17). The following equivalent, but more concise, formulation of this expansion is given in terms of Binet's function. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗, we have

$$J(\mathbf{x}) = \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1)\,\mathbf{x}^{k}} + O\left(\mathbf{x}^{-q-1}\right) \qquad \text{as } \mathbf{x} \to \infty. \tag{8}$$

*Remark 8.38* The following alternative asymptotic expansion of the Riemann sum (8.18) can be immediately obtained using the general form of Gregory's formula (Proposition 6.30). If <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup> and if it is <sup>q</sup>-convex or q-concave on [x,∞) for every integer q ≥ p, then we have

$$\int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt = \frac{1}{m} \sum\_{j=0}^{m-1} \Sigma \mathbf{g}\left(\mathbf{x} + \frac{j}{m}\right) + \frac{1}{m} \sum\_{k=1}^{q} G\_k \, \Delta^{k-1} \mathbf{g}\_m(m\mathbf{x}) + R,$$

where

$$||R| \le \frac{1}{m} \overline{G}\_q \left| \Delta^q g\_m(m\chi) \right| \qquad \text{and} \qquad g\_m(\chi) = g\left(\frac{\chi}{m}\right).$$

(Compare with Proposition 8.30.) If we set m = 1 in this latter expansion, then we immediately retrieve the inequality of Lemma 8.10 as well as the Gregory formulabased series expression for g given in (8.4). It is then important to note that the asymptotic expansion (8.19) often leads to divergent series, contrary to its "cousin" formula (8.4), as already observed in Remark 6.32. For instance, setting x = 1 in (8.17) leads to a divergent series whereas setting x = 1 in the "cousin" formula (8.6) leads to an analogue of Fontana-Mascheroni's series. In this regard, we observe that the Gregory coefficients have the asymptotic behavior

$$|G\_n| \sim \frac{1}{n(\ln n)^2} \qquad \text{as } n \to \infty,$$

while the Bernoulli numbers satisfy

$$|B\_{2n}| = \frac{2(2n)!}{(2\pi)^{2n}}\xi(2n) \sim 4\sqrt{\pi n} \left(\frac{n}{\pi e}\right)^{2n} \qquad \text{as } n \to \infty;$$

see, e.g., Graham et al. [41, p. 286]. ♦

**A Variant of the Generalized Stirling Formula** Interestingly, from Proposition 8.35 we can easily derive the following variant of the generalized Stirling formula.

**Proposition 8.39 (A Variant of the Generalized Stirling Formula)** *Let* g *lie in <sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*2<sup>q</sup> *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> ∪ { <sup>1</sup> <sup>2</sup> } *and some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *satisfying* <sup>p</sup> <sup>≤</sup> <sup>2</sup><sup>q</sup> <sup>−</sup> <sup>1</sup>*. For any* <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *we have*

$$\frac{1}{m}\sum\_{j=0}^{m-1} \Sigma \mathbf{g}\left(\mathbf{x} + \frac{j}{m}\right) - \int\_1^\mathbf{x} \mathbf{g}(t) \, dt - \sum\_{k=1}^p \frac{B\_k}{m^k k!} \mathbf{g}^{(k-1)}(\mathbf{x}) \to \ \sigma[\mathbf{g}] \qquad \text{as } \mathbf{x} \to \infty.$$

*In particular,*

$$\left(\Sigma \mathbf{g}(\mathbf{x}) - \int\_{1}^{\mathbf{x}} \mathbf{g}(t) \, dt - \sum\_{k=1}^{p} \frac{B\_{k}}{k!} \mathbf{g}^{(k-1)}(\mathbf{x}) \to \sigma \mathbf{[g]} \qquad \text{as } \mathbf{x} \to \infty. \tag{8.20}$$

*Proof* For every <sup>k</sup> ∈ {p, . . . , <sup>2</sup>q} we clearly have that <sup>g</sup> lies in *<sup>D</sup>*<sup>k</sup> <sup>∩</sup> *<sup>K</sup>*<sup>k</sup> and hence g(k) vanishes at infinity by Theorem 4.14(b). The result then follows from Proposition 8.35. The particular case is obtained by setting m = 1.

It is clear that the convergence result (8.20) coincides with the generalized Stirling formula (6.21) whenever p = 0 or p = 1. Thus, it does not bring anything new in these cases.

Now, we observe that if <sup>g</sup> lies in *<sup>C</sup>*max{2q,r} <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{2q,r} for some <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> ∪ { <sup>1</sup> <sup>2</sup> } and some <sup>p</sup> <sup>∈</sup> <sup>N</sup> satisfying <sup>p</sup> <sup>≤</sup> <sup>2</sup><sup>q</sup> <sup>−</sup> 1, then the convergence result in (8.20) still holds if we replace <sup>g</sup> with <sup>g</sup>(r) and <sup>p</sup> with (p <sup>−</sup> r)+. Moreover, this modified result can also be obtained by differentiating r times both sides of (8.20) and then removing the terms that vanish at infinity. This important fact can be easily proved similarly as for the generalized Stirling formula (see Proposition 7.12 and the comment that follows it).

*Remark 8.40* We now see that the generalized Stirling formula (6.21) could also be established similarly as its variant (8.20), i.e., using the Gregory formula-based asymptotic expansion of g as discussed in Remark 8.38. However, formula (6.21) is a very elementary consequence of Lemma 2.7, as commented in Remark 6.16. Its proof is elementary, elegant, and leads to the whole Theorem 6.11, which is a strong result that also provides inequalities. ♦

The restriction of the limit (8.20) to the natural integers provides the following alternative formula to compute the asymptotic constant σ[g]. Under the assumptions of Proposition 8.39, we have

$$\sigma[\mathbf{g}] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}(k) - \int\_1^n \mathbf{g}(t) \, dt - \sum\_{k=1}^p \frac{B\_k}{k!} \mathbf{g}^{(k-1)}(n) \right). \tag{8.21}$$

**Analogue of Liu's Formula** Liu [64] (see also Mortici [75]) established the following formula. For any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have

$$n! = \ \Gamma(n+1) = \sqrt{2\pi n} \ \left(\frac{n}{e}\right)^n \exp\left(\int\_n^\infty \frac{\frac{1}{2} - \{t\}}{t} dt\right).$$

This formula provides an exact (as opposed to asymptotic) expression for the gamma function with an integer argument.

We now propose a generalization of this identity to multiple log --type functions with real arguments. We call it the *generalized Liu formula*. Recall first the following Dirichlet test for convergence of improper integrals (see, e.g., Titchmarsh [96, p. 21]).

**Lemma 8.41 (Dirichlet's Test)** *Let* <sup>a</sup> <sup>≥</sup> <sup>0</sup> *and let* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be so that the function* <sup>x</sup> → <sup>x</sup> <sup>a</sup> f (t) dt *is bounded on* [a,∞)*. Let also* <sup>g</sup> *lie in <sup>C</sup>*1∩*D*<sup>0</sup> <sup>∩</sup>*K*0*. Then the improper integral*

$$\int\_{a}^{\infty} f(t)g(t) \, dt$$

*converges.*

#### **Proposition 8.42 (Generalized Liu's Formula)**

*(a) If* <sup>g</sup> *lies in <sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*2*, then for any* x > <sup>0</sup> *we have*

$$\Sigma \mathbf{g}(\mathbf{x}) = \sigma \mathbf{f}[\mathbf{g}] + \int\_{1}^{\chi} \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) + \int\_{0}^{\infty} \left(\frac{1}{2} - \{t\}\right) \mathbf{g}'(\mathbf{x} + t) \, dt.$$

*(b) If* <sup>g</sup> *lies in <sup>C</sup>*2q+<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*2q+<sup>1</sup> *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*, then for any* x > <sup>0</sup> *we have*

$$\Sigma g(\mathbf{x}) = \sigma[\mathbf{g}] + \int\_1^\chi \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^q \frac{B\_{2k}}{(2k)!} \, g^{(2k-1)}(\mathbf{x}),$$

$$+ \int\_0^\infty \frac{B\_{2q}(\{t\})}{(2q)!} \, g^{(2q)}(\mathbf{x} + t) \, dt.$$

*Proof* Let us prove assertion (b) first. We apply assertion (b) of Proposition 8.36 to the function <sup>g</sup> with <sup>p</sup> <sup>=</sup> <sup>2</sup>q. Thus, for any x > 0 and any <sup>n</sup> <sup>∈</sup> <sup>N</sup> we have

$$\begin{aligned} R\_1^q(\mathbf{x}) &= \int\_{\mathbf{x}+1}^{\mathbf{x}+n+1} \frac{B\_{2q}(\{t-\mathbf{x}\})}{(2q)!} \left(\Sigma \mathbf{g}\right)^{(2q)}(t) \, dt \\ &- \int\_{\mathbf{x}}^{\mathbf{x}+n+1} \frac{B\_{2q}(\{t-\mathbf{x}\})}{(2q)!} \left(\Sigma \mathbf{g}\right)^{(2q)}(t) \, dt .\end{aligned}$$

By Proposition 7.7, we have

$$(\Sigma g)^{(2q)}(t+1) - (\Sigma g)^{(2q)}(t) = g^{(2q)}(t)$$

and hence we obtain

$$\mathcal{R}\_1^q(\mathfrak{x}) \, \, \, \, \, \, S\_n^q(\mathfrak{x}) + T\_n^q(\mathfrak{x}),$$

where

$$\begin{aligned} S\_n^q(\mathbf{x}) &= \int\_{\mathbf{x}}^{\mathbf{x}+n} \frac{B\_{2q}(\{t-\mathbf{x}\})}{(2q)!} \, g^{(2q)}(t) \, dt \,, \\\ T\_n^q(\mathbf{x}) &= -\int\_{\mathbf{x}+n}^{\mathbf{x}+n+1} \frac{B\_{2q}(\{t-\mathbf{x}\})}{(2q)!} \, (\Sigma g)^{(2q)}(t) \, dt \, . \end{aligned}$$

Now, we observe that the sequence <sup>n</sup> → <sup>S</sup><sup>q</sup> <sup>n</sup> (x) converges by Dirichlet's test (see Lemma 8.41). Indeed, <sup>g</sup>(2q) lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> by Proposition 4.12, and for every u ≥ x we have that

$$\begin{aligned} \left| \int\_{\boldsymbol{x}}^{\boldsymbol{\mu}} \frac{\mathcal{B}\_{2q}(\{t-\boldsymbol{x}\})}{(2q)!} \, dt \right| &= \left| \int\_{0}^{\boldsymbol{\mu}-\boldsymbol{\chi}} \frac{\mathcal{B}\_{2q}(\{t\})}{(2q)!} \, dt \right| \\ &= \left| \int\_{\lfloor \boldsymbol{\mu}-\boldsymbol{\chi} \rfloor}^{\boldsymbol{\mu}-\boldsymbol{\chi}} \frac{\mathcal{B}\_{2q}(\{t\})}{(2q)!} \, dt \right| \leq \frac{|\mathcal{B}\_{2q}|}{(2q)!}, \end{aligned}$$

where we have used the well-known fact that the integral on (0, 1) of the Bernoulli polynomial B2<sup>q</sup> is zero.

Let us now show that the sequence <sup>n</sup> → <sup>T</sup> <sup>q</sup> <sup>n</sup> (x) approaches zero as n → ∞. Using integration by parts, we obtain

$$T\_n^q(\mathbf{x}) = -\int\_0^1 \frac{B\_{2q}(t)}{(2q)!} (\Sigma g)^{(2q)}(\mathbf{x} + n + t) \, dt$$

$$= \int\_0^1 \frac{B\_{2q+1}(t)}{(2q+1)!} (\Sigma g)^{(2q+1)}(\mathbf{x} + n + t) \, dt.$$

Since (g)(2q+1) lies in *<sup>K</sup>*<sup>−</sup>1, for large <sup>n</sup> we obtain

$$\begin{aligned} |T\_n^q(\mathbf{x})| &\le \frac{|B\_{2q+1}|}{(2q+1)!} \left| \int\_0^1 (\Sigma g)^{(2q+1)}(\mathbf{x} + n + t) \, dt \right|,\\ &= \frac{|B\_{2q+1}|}{(2q+1)!} \left| g^{(2q)}(\mathbf{x} + n) \right|, \end{aligned}$$

which approaches zero as n → ∞ by Theorem 4.14(b). This proves assertion (b).

Assertion (a) can be proved similarly by applying assertion (a) of Proposition 8.36 to function <sup>g</sup> with <sup>p</sup> <sup>=</sup> 1. For any x > 0 and any <sup>n</sup> <sup>∈</sup> <sup>N</sup> we have

$$\mathcal{R}\_{\mathbb{I}}(\mathfrak{x}) := \mathcal{S}\_{\mathfrak{n}}(\mathfrak{x}) + T\_{\mathfrak{n}}(\mathfrak{x}),$$

where

$$S\_n(\boldsymbol{x}) = -\int\_{\boldsymbol{x}}^{\boldsymbol{x}+n} B\_1(\{t-\boldsymbol{x}\}) \, \boldsymbol{g}'(t) \, dt$$

$$T\_n(\boldsymbol{x}) = \int\_{\boldsymbol{x}+n}^{\boldsymbol{x}+n+1} B\_1(\{t-\boldsymbol{x}\}) \, (\Sigma \boldsymbol{g})'(t) \, dt \, \boldsymbol{\lambda}$$

We now see that the sequence n → Sn(x) converges by Dirichlet's test. Moreover, the sequence n → Tn(x) approaches zero as n → ∞. Indeed, using integration by parts we obtain

$$\begin{aligned} T\_n(\mathbf{x}) &= \int\_0^1 B\_1(t) \left( \Sigma \mathbf{g} \right)'(\mathbf{x} + n + t) \, dt \\ &= \frac{B\_2}{2} \, \mathbf{g}'(\mathbf{x} + n) - \int\_0^1 \frac{B\_2(t)}{2} \, (\Sigma \mathbf{g})''(\mathbf{x} + n + t) \, dt, \end{aligned}$$

and we conclude the proof as in assertion (b) since <sup>g</sup> lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0.

*Example 8.43* Let us apply assertion (a) of Proposition 8.42 to g(x) = ln x. We obtain

$$
\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \int\_0^\infty \frac{\frac{1}{2} - \{t\}}{t + \mathbf{x}} dt,
$$

or equivalently,

$$J(\mathfrak{x}) = \, \_J ^2[\ln \circ \Gamma](\mathfrak{x}) \, = \, \_0 \int\_0^\infty \frac{\frac{1}{2} - \{t\}}{t + \mathfrak{x}} \, dt,$$

which extends the original Liu formula to a real argument. ♦

*Example 8.44* Applying assertion (a) of Proposition 8.42 to g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> , we obtain the following integral expression for the digamma function

$$\psi(x) = \ln x - \frac{1}{2x} + \int\_0^\infty \frac{\{t\} - \frac{1}{2}}{(t+x)^2} dt.$$

This expression seems to be previously unknown. ♦

Setting x = 1 in Proposition 8.42, we immediately derive an integral representation of the asymptotic constant σ[g]. We state this observation in the following corollary.

#### **Corollary 8.45**

*(a) If* <sup>g</sup> *lies in <sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*2*, then we have*

$$
\sigma[\mathbf{g}] = \frac{1}{2}\mathbf{g}(\mathbf{l}) + \int\_{\mathbf{l}}^{\infty} \left( \{t\} - \frac{1}{2} \right) \mathbf{g}'(t) \, dt \, .
$$

*(b) If* <sup>g</sup> *lies in <sup>C</sup>*2q+<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*2q+<sup>1</sup> *for some* <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗*, then we have*

$$\sigma[\mathbf{g}] = \frac{1}{2}\operatorname{g}(\mathbf{l}) - \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} \operatorname{g}^{(2k-1)}(\mathbf{l}) - \int\_{1}^{\infty} \frac{B\_{2q}(\{t\})}{(2q)!} \operatorname{g}^{(2q)}(t) \, dt \dots$$

*Remark 8.46* Proposition 8.42 and Corollary 8.45 enable one to evaluate certain improper integrals involving polynomial functions of the fractional part of the integration variable. For example, to establish the identity

$$\int\_{1}^{\infty} \frac{\{x\} - \frac{1}{2}}{2x + 1} dx = \, -\frac{3}{4} + \frac{1}{4} \ln 2 + \frac{1}{2} \ln 3$$

(Srivastava and Choi [93, p. 600, Problem 11]), we simply use assertion (a) of Corollary 8.45 with g(x) <sup>=</sup> <sup>1</sup> <sup>2</sup> ln(2x + 1). In this case, we have

$$\Sigma g(\mathbf{x}) = \frac{1}{2} \ln 2 \left( \mathbf{x} - \mathbf{1} \right) + \frac{1}{2} \ln \Gamma \left( \mathbf{x} + \frac{1}{2} \right) - \frac{1}{2} \ln \Gamma \left( \frac{3}{2} \right)$$

and the integral is simply equal to <sup>σ</sup>[g] − <sup>1</sup> <sup>2</sup>g(1). ♦

*Remark 8.47* In Proposition 8.42, we could substitute σ[g] from its expression given in Corollary 8.45. But then, the restriction to the natural integers of the resulting formulas will simply reduce to the application of Euler-Maclaurin's formula (Proposition 6.31) to g, with a = 1, b = n, h = 1, and N = n − 1. ♦

#### **8.8 Analogue of Wallis's Product Formula**

In the following proposition, we recall one of the different versions of Wallis's product formula (see, e.g., Finch [37, p. 21]).

#### **Proposition 8.48 (Wallis's Product Formula)** *The following limit holds*

$$\lim\_{n \to \infty} \frac{1 \cdot 3 \cdots (2n - 1)}{2 \cdot 4 \cdots (2n)} \sqrt{n} = \frac{1}{\sqrt{\pi}}.\tag{8.22}$$

In the additive notation, identity (8.22) becomes

$$\lim\_{n \to \infty} \left( \frac{1}{2} \ln(\pi n) + \sum\_{k=1}^{2n} (-1)^{k-1} \ln k \right) \\ = \ 0.$$

The following proposition gives an analogue of this latter formula for any function <sup>g</sup> lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom().

**Proposition 8.49** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Let* <sup>g</sup>˜ : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be the function defined by the equation* g(x) ˜ <sup>=</sup> <sup>2</sup> g(2x) *for* x > <sup>0</sup>*. Let also* <sup>h</sup>: <sup>N</sup><sup>∗</sup> <sup>→</sup> <sup>R</sup> *be the sequence defined by the equation*

$$\begin{aligned} h(n) &= \sigma[\tilde{g}] - \sigma[g] + \int\_1^2 (g(2n+t) - g(t)) \, dt \\ &+ \sum\_{j=1}^p G\_j \left( \Delta^{j-1} g(2n+1) - \Delta^{j-1} \tilde{g}(n+1) \right) \qquad \text{for } n \in \mathbb{N}^\*. \end{aligned}$$

*Then we have*

$$\lim\_{n \to \infty} \left( h(n) + \sum\_{k=1}^{2n} (-1)^{k-1} g(k) \right) = \text{ } 0. \tag{8.23}$$

*Proof* The function <sup>g</sup>˜ lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> by Corollary 4.21. By (5.2), for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we thus have

$$\sum\_{k=1}^{2n} (-1)^{k-1} g(k) \ = \sum\_{k=1}^{2n} g(k) - \sum\_{k=1}^{n} \tilde{g}(k) \ = \Sigma g(2n+1) - \Sigma \tilde{g}(n+1).$$

Using the discrete version of the generalized Stirling formula (8.11), we get

$$\sigma[\mathbf{g}] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{2n} \mathbf{g}(k) - \int\_1^{2n+1} \mathbf{g}(t) \, dt + \sum\_{j=1}^p G\_j \, \Delta^{j-1} \mathbf{g}(2n+1) \right)$$

$$\sigma[\tilde{g}] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \tilde{g}(k) - \int\_{1}^{n+1} \tilde{g}(t) \, dt + \sum\_{j=1}^{p} G\_j \, \Delta^{j-1} \tilde{g}(n+1) \right).$$

This establishes the claimed formula.

Formula (8.23) actually holds for infinitely many sequences n → h(n). Indeed, if it holds for a sequence h(n), then it also holds for instance for the sequence h(n) <sup>+</sup> <sup>n</sup>−<sup>q</sup> for any <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗. Thus, to obtain an elegant analogue of Wallis's product formula, it is advisable to choose h among the simplest functions. For instance, we could consider the sequence obtained from the series expansion for h(n) about infinity after removing all the summands that vanish at infinity.

*Example 8.50* Let us apply Proposition 8.49 to g(x) = ln x with p = 1. We obtain

$$\begin{aligned} h(n) &= 2n \ln(2n+2) - \left(2n + \frac{1}{2}\right) \ln(2n+1) + \ln(n+1) - 1 + \frac{1}{2} \ln(2\pi) \\ &= \frac{1}{2} \ln(\pi n) + O\left(n^{-2}\right). \end{aligned}$$

Replacing h(n) with <sup>1</sup> <sup>2</sup> ln(πn) in (8.23) as recommended above, we retrieve the original Wallis product formula (8.22). ♦

*Example 8.51* Let us apply Proposition 8.49 to the harmonic number function g(x) = Hx with p = 1. After a bit of calculus we get

$$h(n) = \frac{1}{2}H\_{2n+1} + \frac{1}{2}\ln 2 + \ln(n+1) - \psi(2n+3)$$

$$= \frac{1}{2}(\nu + \ln n) + O\left(n^{-1}\right).$$

We then obtain the following analogue of Wallis's product formula

$$\lim\_{n \to \infty} \left( -\ln n + 2 \sum\_{k=1}^{2n} (-1)^k H\_k \right) = \left\lfloor \mathcal{N} \right\rfloor,$$

which provides an alternative definition of Euler's constant γ . ♦

*Example 8.52* Let us apply Proposition 8.49 to the harmonic number function of order 2

$$g(\mathfrak{x}) := H\_{\mathfrak{x}}^{(2)} = \xi(2) - \xi(2, \mathfrak{x} + 1)$$

and

with p = 1. After some algebra we obtain the following analogue of Wallis's product formula

$$\lim\_{n\to\infty} \sum\_{k=1}^{2n} (-1)^k H\_k^{(2)} = \frac{\pi^2}{24}. \tag{8}$$

*Remark 8.53* Alternative sequences for h(n) may be considered in Proposition 8.49. For instance, if <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then it is easy to see that

$$\sum\_{k=1}^{2n} (-1)^{k-1} g(k) \ = \ -\ \Sigma \tilde{g}(n+1), \qquad n \in \mathbb{N}^\*,$$

where <sup>g</sup>˜ : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is the function defined by the equation g(x) ˜ <sup>=</sup> g(2<sup>x</sup> <sup>−</sup> <sup>1</sup>) for x > 0. Thus, assuming that <sup>g</sup>˜ lies in *<sup>K</sup>*0, identity (8.23) also holds for

$$h(n) := \sigma[\tilde{\mathbf{g}}] + \int\_1^{n+1} \tilde{\mathbf{g}}(t) \, dt - \sum\_{j=1}^{(p-1)\_+} G\_j \, \Delta^{j-1} \tilde{\mathbf{g}}(n+1).$$

Similarly, we can easily see that

$$\sum\_{k=1}^{2n} (-1)^{k-1} g(k) \ = \left. g(1) - g(2n) + \Sigma \widetilde{g}(n) \right|, \qquad n \in \mathbb{N}^\*,$$

where <sup>g</sup>˜ : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is the function defined by the equation g(x) ˜ <sup>=</sup> g(2x) for x > 0. Thus, assuming again that <sup>g</sup>˜ lies in *<sup>K</sup>*0, identity (8.23) also holds for

$$h(n) = \operatorname{g}(2n) - \operatorname{g}(1) - \sigma[\tilde{\mathfrak{g}}] - \int\_1^n \tilde{\mathfrak{g}}(t) \, dt + \sum\_{j=1}^{(p-1)\_+} G\_j \, \Delta^{j-1} \tilde{\mathfrak{g}}(n).$$

It is clear that the most appropriate function h among these possibilities strongly depends on the form of the function g. ♦

*Remark 8.54* Using summation by parts with the classical indefinite sum operator (see, e.g., Graham et al. [41, p. 55]), it is not difficult to show that

$$
\Sigma\_{\mathbf{x}} \mathbf{g}(2\mathbf{x}) = \mathbf{x} \,\mathbf{g}(2\mathbf{x}) - \mathbf{g}(2) - \Sigma\_{\mathbf{x}} \, ((\mathbf{x} + \mathbf{l})(\Delta \mathbf{g}(2\mathbf{x}) + \Delta \mathbf{g}(2\mathbf{x} + \mathbf{l})) \tag{8.24}
$$

(provided both sides exist). More generally, for any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, we can show that

$$\Sigma\_{\mathbf{x}}\mathbf{g}(m\mathbf{x}) = \mathbf{x}\,\mathbf{g}(m\mathbf{x}) - \mathbf{g}(m) - \sum\_{j=0}^{m-1} \Sigma\_{\mathbf{x}}\left( (\mathbf{x} + \mathbf{l})\,\Delta\mathbf{g}(m\mathbf{x} + j) \right).$$

For instance, using (8.24) we obtain

$$\begin{split} \Sigma\_{\mathbf{x}} \psi(2\mathbf{x}) &= \mathbf{x} \,\psi(2\mathbf{x}) - \psi(2) - \Sigma\_{\mathbf{x}} \left( 1 + \frac{1}{2\mathbf{x}} + \frac{1}{4(\mathbf{x} + \frac{1}{2})} \right) \\ &= \mathbf{x} \,\psi(2\mathbf{x}) - \psi(1) - \mathbf{x} - \frac{1}{2} (\psi(\mathbf{x}) + \boldsymbol{\gamma}) - \frac{1}{4} \left( \psi \left( \mathbf{x} + \frac{1}{2} \right) - \boldsymbol{\psi} \left( \frac{3}{2} \right) \right) \\ &= \mathbf{x} \,\psi(2\mathbf{x}) - \frac{1}{2} \,\psi(\mathbf{x}) - \mathbf{x} - \frac{1}{4} \psi \left( \mathbf{x} + \frac{1}{2} \right) + \frac{1}{4} \left( 2 - 2\ln 2 + \boldsymbol{\gamma} \right) . \end{split}$$

As this example demonstrates, formula (8.24) can sometimes be very useful in Proposition 8.49 for the computation of σ[ ˜g]. ♦

#### **8.9 Analogue of Euler's Reflection Formula**

Recall that the identity

$$
\Gamma(z)\Gamma(1-z) = \pi \csc(\pi z) \tag{8.25}
$$

holds for any <sup>z</sup> <sup>∈</sup> <sup>C</sup>\Z. This identity, known by the name *Euler's reflection formula* (see, e.g., Artin [11, p. 26] and Srivastava and Choi [93, p. 3]), can be proved for instance using the Weierstrassian form of the gamma function.

Motivated by this and similar examples, it is then natural to wonder if an analogue of Euler's reflection formula holds for any multiple log --type function, at least on <sup>R</sup> \ <sup>Z</sup>, or even on the interval (0, <sup>1</sup>). However, this question seems rather difficult and reflection formulas as beautiful as (8.25) are relatively exceptional.

Now, if we logarithmically differentiate both sides of (8.25), we obtain the following reflection formula for the digamma function (see [93, p. 25])

$$
\psi(\mathbf{x}) - \psi(1-\mathbf{x}) = -\pi \cot(\pi \mathbf{x})\,. \tag{8.26}
$$

Using an appropriate integration, we also obtain the following reflection formula for the Barnes G-function (see [93, p. 45])

$$
\ln G(1+\mathbf{x}) - \ln G(1-\mathbf{x}) \ = \mathbf{x} \ln(2\pi) - \int\_0^\chi \pi t \cot(\pi t) \, dt \ . \tag{8.27}
$$

These and other examples show that the reflection formulas usually share a common pattern. Their right sides typically include 1-periodic functions or integrals of 1-periodic functions while their left sides are of one the following forms

$$
\Sigma \mathbf{g}(\mathbf{x}) \pm \Sigma \mathbf{g}(\mathbf{l} - \mathbf{x}) \qquad \text{or} \qquad \Sigma \mathbf{g}(\mathbf{l} + \mathbf{x}) \pm \Sigma \mathbf{g}(\mathbf{l} - \mathbf{x}),
$$

for some appropriate functions g.

In this section, we investigate this important topic in the light of our theory. To get straight to the point, we have not found an analogue of Euler's reflection formula that is systematically applicable to any multiple log --type function. We nevertheless present a few interesting results that could hopefully be the starting point of a larger theory.

First of all, due to the presence of the arguments x and 1 − x in most of the reflection formulas, it is important to see how the domain of the functions considered in this work can be extended to a larger set. Since many functions g involved in the difference equation f <sup>=</sup> <sup>g</sup> have singularities at 0 (e.g., g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> ), we suggest extending the domain of all these functions to the set <sup>R</sup> \ {0}. Due to the nature of the difference operator , any solution <sup>f</sup> is then required to be defined on <sup>R</sup>\(−N). The domains of many other associated functions and identities of this theory can be extended likewise. For instance, for any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, the domain of the function f <sup>p</sup> <sup>n</sup> [g] defined in (1.4) can be extended to <sup>R</sup> \ (−N). Similarly, for any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any <sup>a</sup> <sup>∈</sup> <sup>R</sup> \ {0}, the domain of the function <sup>ρ</sup><sup>p</sup> <sup>a</sup> [g] defined in (1.7) can be extended to <sup>R</sup> \ {−a}.

We now have the following important result.

**Lemma 8.55** *Let* <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> *be a function whose restriction* <sup>g</sup>|R<sup>+</sup> *to* <sup>R</sup><sup>+</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup>*K*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then, there exists a unique function* <sup>f</sup> : <sup>R</sup>\(−N) <sup>→</sup> <sup>R</sup> *such that* f = g *and* f |R<sup>+</sup> = (g|R<sup>+</sup> )*. Moreover,*

$$f(\mathbf{x}) = \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x})\,, \qquad \mathbf{x} \in \mathbb{R} \; | \; (-\mathbb{N})\,.$$

*Proof* For any <sup>m</sup> <sup>∈</sup> <sup>N</sup> and any solution <sup>f</sup> : <sup>R</sup> \ (−N) <sup>→</sup> <sup>R</sup> to the equation f <sup>=</sup> <sup>g</sup>, we must have

$$f(\mathbf{x} - m) = f(\mathbf{x}) - \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} - k), \qquad \mathbf{x} \in \mathbb{R}\_+ \nmid \mathbb{N}.\tag{8.28}$$

This clearly establishes the first part of the lemma.

Let us now prove that for any <sup>x</sup> <sup>∈</sup> <sup>R</sup><sup>+</sup> \ <sup>N</sup> and any integers 0 <sup>≤</sup> <sup>m</sup> <sup>≤</sup> <sup>n</sup> we have

$$f\_n^{p}[\mathbf{g}](\mathbf{x}) - \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} - k) \ = f\_n^{p}[\mathbf{g}](\mathbf{x} - m) - \sum\_{k=1}^{m} \rho\_n^{p}[\mathbf{g}](\mathbf{x} - k). \tag{8.29}$$

On the one hand, for j = 1,...,p, we have

$$\sum\_{k=1}^{m} \binom{\mathbf{x} - k}{j - 1} = \sum\_{k=0}^{m-1} \binom{\mathbf{k} + \mathbf{x} - m}{j - 1} = \sum\_{k=0}^{m} \Delta\_k \binom{\mathbf{k} + \mathbf{x} - m}{j} = \binom{\mathbf{x}}{j} - \binom{\mathbf{x} - m}{j}.$$

and hence using (1.7) we obtain

$$\sum\_{k=1}^{m} \rho\_n^p[\mathbf{g}](\mathbf{x} - k) = \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} + n - k) - \sum\_{j=1}^{p} \left( \binom{\mathbf{x}}{j} - \binom{\mathbf{x} - m}{j} \right) \Delta^{j-1} \mathbf{g}(n) \dots$$

On the other hand, using this latter identity and subtracting the right side of (8.29) from the left side, using (1.4) we obtain

$$\sum\_{k=0}^{n-1} (\mathbf{g}(\mathbf{x} - m + k) - \mathbf{g}(\mathbf{x} + k)) - \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} - k) + \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} + n - k),$$

which is identically zero. This establishes (8.29).

Let us now show that the sequence <sup>n</sup> → <sup>ρ</sup><sup>p</sup> <sup>n</sup> [g](x − k) converges to zero for any <sup>x</sup> <sup>∈</sup> <sup>R</sup><sup>+</sup> \<sup>N</sup> and any <sup>k</sup> <sup>∈</sup> <sup>N</sup>. By (2.12) it is actually enough to show that the sequence

$$m \mapsto \text{g}[n, n+1, \dots, n+p-1, n+x-k]$$

converges to zero. However, by Lemma 2.5 this latter sequence can be sandwiched between the sequences

$$n \mapsto \text{ g}[n-k, n+1-k, \dots, n+p-1-k, n+x-k]$$

and

$$n \mapsto \text{ g}[n, n+1, \dots, n+p-1, n+x],$$

which both converge to zero by (2.12).

Finally, let <sup>f</sup> : <sup>R</sup> \ (−N) <sup>→</sup> <sup>R</sup> be the unique function defined in the first part of this lemma. Using (8.28) and (8.29), since <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> we obtain

$$\begin{aligned} f(\mathbf{x} - m) &= \Sigma \mathbf{g}(\mathbf{x}) - \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} - k) \ &= \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x}) - \sum\_{k=1}^{m} \mathbf{g}(\mathbf{x} - k), \\ &= \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x} - m), \end{aligned}$$

which establishes the second part of the lemma.

Lemma 8.55 shows that the domain of the function g can be extended to <sup>R</sup> \ (−N) whenever <sup>g</sup> is defined on <sup>R</sup> \ {0}. We then use the same symbol g for this extended function. Moreover, in this case we have

$$\Sigma \mathbf{g}(\mathbf{x}) = \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x})\,, \qquad \mathbf{x} \in \mathbb{R} \; | \; (-\mathbb{N})$$

and the Eulerian form (8.1) of g extends similarly. Actually, when g is a function of a complex variable, Lemma 8.55 can be easily adapted to extend the function g to an appropriate complex domain.

Let us now establish reflection formulas on <sup>R</sup> \ <sup>Z</sup> for functions g when the restriction of <sup>g</sup> to <sup>R</sup><sup>+</sup> lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. The result is presented in the following two propositions, which deal separately with the cases when g|R\<sup>Z</sup> is odd or even. The proofs of these propositions are similar and we therefore omit the second one.

**Proposition 8.56** *Let* <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> *be such that* <sup>g</sup>|R<sup>+</sup> *lies in <sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> *and let* <sup>ω</sup>: <sup>R</sup> \ <sup>Z</sup> <sup>→</sup> <sup>R</sup> *be the function defined by the equation*

$$
\omega(\mathbf{x}) = \left. \Sigma \mathbf{g}(\mathbf{x}) - \Sigma \mathbf{g}(\mathbf{l} - \mathbf{x}) \right| \quad \text{ for } \mathbf{x} \in \mathbb{R} \text{ } \backslash \mathbb{Z}.
$$

*Then the following assertions are equivalent.*


$$\omega(\mathbf{x}) = \lim\_{N \to \infty} \sum\_{|k| \le N} \mathbf{g}(\mathbf{x} + k), \qquad \mathbf{x} \in \mathbb{R} \ \mathbf{\color{red}{Z}}{\mathbb{Z}}.$$

*Proof* The equivalence (i) ⇔ (ii) is trivial since ω(x) = g(x) + g(−x). Let us prove the implication (iii) ⇒ (ii). We have

$$\Delta\boldsymbol{\alpha}(\mathbf{x}) = \lim\_{N \to \infty} \sum\_{|k| \le N} (\mathbf{g}(\mathbf{x} + k + 1) - \mathbf{g}(\mathbf{x} + k))$$

$$= \lim\_{N \to \infty} (\mathbf{g}(\mathbf{x} + N + 1) - \mathbf{g}(\mathbf{x} - N)) = \mathbf{0}.$$

Finally, let us prove the implication (i) ⇒ (iii). Using Lemma 8.55 we obtain

$$\begin{aligned} \omega(\mathbf{x}) &= \sum\_{k=0}^{\infty} (\mathbf{g}(\mathbf{x} + k) + \mathbf{g}(\mathbf{x} - k - 1)) \\ &= \lim\_{N \to \infty} \left( -\mathbf{g}(\mathbf{x} - N - 1) + \sum\_{|k| \le N} \mathbf{g}(\mathbf{x} + k) \right). \end{aligned}$$

This completes the proof.

**Proposition 8.57** *Let* <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> *be such that* <sup>g</sup>|R<sup>+</sup> *lies in <sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> *and let* <sup>ω</sup>: <sup>R</sup> \ <sup>Z</sup> <sup>→</sup> <sup>R</sup> *be the function defined by the equation*

$$
\rho(\mathbf{x}) = \mathsf{E}\mathbf{g}(\mathbf{x}) + \mathsf{E}\mathbf{g}(\mathbf{l} - \mathbf{x}) \qquad \text{for } \mathbf{x} \in \mathbb{R} \text{ } \mathbb{Z}.
$$

*Then the following assertions are equivalent.*


$$\omega(\mathbf{x}) = -\mathbf{g}(\mathbf{x}) + \lim\_{N \to \infty} \sum\_{1 \le |k| \le N} (\mathbf{g}(k) - \mathbf{g}(\mathbf{x} + k)), \qquad \mathbf{x} \in \mathbb{R} \backslash \mathbb{Z}.$$

*Example 8.58 (The Digamma Function)* Consider the odd function g(x) = 1/x on <sup>R</sup> \ {0} for which we have the identity g(x) <sup>=</sup> ψ(x) <sup>+</sup> <sup>γ</sup> (see Sect. 10.2). This identity actually holds not only on <sup>R</sup><sup>+</sup> but also on <sup>R</sup>\(−N)since by Lemma 8.55 the digamma function ψ extends to this larger domain through the following Eulerian form (see also Srivastava and Choi [93, p. 24])

$$\psi(x) = \left| -\chi - \frac{1}{x} + \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \frac{1}{x+k} \right) \right|, \qquad x \in \mathbb{R} \ ( -\mathbb{N} ).$$

Now, using Proposition 8.56 we immediately obtain the identity

$$\psi(\mathbf{x}) - \psi(\mathbf{l} - \mathbf{x}) = \lim\_{N \to \infty} \sum\_{|k| \le N} \frac{1}{x + k}, \qquad \mathbf{x} \in \mathbb{R} \backslash \mathbb{Z},$$

where the right-hand function is 1-periodic. Finally, it can be proved (see, e.g., Aigner and Ziegler [3, Chapter 26], Berndt [18, p. 4], and Graham et al. [41, Eq. (6.88)]) that this function reduces to −π cot(πx). We then retrieve the reflection formula (8.26) for the digamma function. ♦

*Example 8.59 (A Variant of the Digamma Function)* Consider the even function g(x) <sup>=</sup> <sup>1</sup>/|x<sup>|</sup> on <sup>R</sup>\{0}. Using Lemma 8.55, we then obtain the following expression for g on <sup>R</sup> \ (−N)

$$\Sigma g(\mathbf{x}) \, \, = \sum\_{k=0}^{\infty} \left( \frac{1}{k+1} - \frac{1}{|\mathbf{x} + k|} \right),$$

or equivalently,

$$\Sigma g(\mathbf{x}) = \sum\_{k=0}^{\infty} \left( \frac{1}{k+1} - \frac{1}{\mathbf{x} + k} \right) + \sum\_{k=0}^{\infty} \left( \frac{1}{\mathbf{x} + k} - \frac{1}{|\mathbf{x} + k|} \right),$$

where the first series reduces to ψ(x) + γ . If x > 0, then the second series is zero. If x < 0, it reduces to

$$\begin{aligned} \sum\_{k=0}^{\infty} \min\left\{\frac{2}{\alpha+k}, 0\right\} &= \sum\_{k=0}^{\lfloor -\chi \rfloor} \frac{2}{\alpha+k} = 2 \sum\_{k=0}^{\lfloor -\chi \rfloor} \Delta\_k \psi(\alpha+k), \\ &= 2 \left(\psi(1 - \{-\chi\}) - \psi(\chi)\right). \end{aligned}$$

Using Proposition 8.57, we then obtain that the function

$$
\Sigma g(\mathbf{x}) + \Sigma g(\mathbf{l} - \mathbf{x}) = \ -\frac{1}{|\mathbf{x}|} + \lim\_{N \to \infty} \sum\_{1 \le |k| \le N} \left( \frac{1}{|k|} - \frac{1}{|\mathbf{x} + k|} \right), \qquad \mathbf{x} \in \mathbb{R} \backslash \mathbb{Z},
$$

is 1-periodic. Using the reflection formula for ψ, we also obtain

$$\begin{aligned} \Sigma g(\boldsymbol{\chi}) + \Sigma g(1 - \boldsymbol{\chi}) &= \psi(\{\boldsymbol{\chi}\}) + \psi(1 - \{\boldsymbol{\chi}\}) + 2\boldsymbol{\chi} \\ &= 2\,\psi(\{\boldsymbol{\chi}\}) + \pi\,\cot(\pi\boldsymbol{\chi}) + 2\boldsymbol{\chi} \end{aligned} \qquad \boldsymbol{\chi} \in \mathbb{R} \nmid \mathbb{Z},$$

which provides a closed expression for this periodic function. ♦

*Example 8.60* Consider the function <sup>g</sup> : <sup>R</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(x) := \frac{x+1}{x^2+1} \qquad \text{for } x \in \mathbb{R}.$$

We observe that both functions g(x) and g(x) ˜ <sup>=</sup> g(−x) have restrictions to <sup>R</sup><sup>+</sup> that lie in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*0. However, the function <sup>g</sup> is neither even nor odd. Denoting its even and odd parts by g<sup>+</sup> and g−, respectively, we have

$$\mathbf{g}\_{+}(\mathbf{x}) = \frac{\mathbf{g}(\mathbf{x}) + \mathbf{g}(-\mathbf{x})}{2} = \frac{1}{\mathbf{x}^{2} + 1};$$

$$\mathbf{g}\_{-}(\mathbf{x}) = \frac{\mathbf{g}(\mathbf{x}) - \mathbf{g}(-\mathbf{x})}{2} = \frac{\mathbf{x}}{\mathbf{x}^{2} + 1}.$$

and we can derive a reflection formula for each of these functions.

Now, it is not difficult to see that (see Example 5.10)

$$\Sigma g\_{+}(\mathbf{x}) = \Re(\psi(1+i) - \psi(\mathbf{x}+i));$$

$$\Sigma g\_{-}(\mathbf{x}) = \Re(-\psi(1+i) + \psi(\mathbf{x}+i)).$$

Using Propositions 8.56 and 8.57, we then see that both functions

$$
\Sigma \mathbf{g}\_{+}(\mathbf{x}) + \Sigma \mathbf{g}\_{+}(1-\mathbf{x}) \qquad \text{and} \qquad \Sigma \mathbf{g}\_{-}(\mathbf{x}) - \Sigma \mathbf{g}\_{-}(1-\mathbf{x}) \dots
$$

are 1-periodic. Moreover, their sum g(x) + g(˜ 1 − x) is also 1-periodic. Equivalently, the function

$$
\Re(\psi(\mathbf{x} + i) - \psi(1 - \mathbf{x} + i)) - \Im(\psi(\mathbf{x} + i) + \psi(1 - \mathbf{x} + i))
$$

is 1-periodic. However, we do not have a reflection formula for g or g˜. ♦

Although Propositions 8.56 and 8.57 constitute major steps in the investigation of reflection formulas, they do not provide closed-form expressions for the 1-periodic functions involved in these formulas. For instance, considering the reflection formula for the digamma function (see Example 8.58), we see that Proposition 8.56 does not yield the right-hand side of identity (8.26). Moreover, it seems that such an expression, obtained for example using Herglotz's trick (see Aigner and Ziegler [3, Chapter 26]), is very specific to the case when g(x) = 1/x. Now, finding a closedform expression in the general case remains a very interesting open problem: such a result would provide an analogue of Euler's reflection formula for a wide class of functions. In this regard, we observe that Herglotz's trick uses an analogue of Legendre's duplication formula in the additive notation. Thus, a suitable adaptation of this trick could be helpful to tackle this problem.

Let us now investigate the more general case when the function g|R<sup>+</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We observe that some reflection formulas can be obtained by integrating or differentiating both sides of a given reflection formula. Thus, if <sup>g</sup>|R<sup>+</sup> lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> for instance, we know from Proposition 4.12 that g <sup>|</sup>R<sup>+</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> and we may try to find a reflection formula for g using Propositions 8.56 and 8.57. Since g and (g) differ by a constant by Proposition 7.7, a reflection formula for g can then be obtained by integrating both sides of the reflection formula for g . This approach is inspired from the elevator method (as discussed in Sect. 7.3).

For instance, integrating both sides of (8.26) on ( <sup>1</sup> <sup>2</sup> ,x), where <sup>1</sup> <sup>2</sup> <x< 1, we get the identity

$$
\ln \Gamma(\mathfrak{x}) + \ln \Gamma(1-\mathfrak{x}) = \ln(\mathfrak{x}\,\mathrm{csc}(\mathfrak{x}\,\mathfrak{x})) .
$$

Thus, we retrieve Euler's reflection formula on the interval ( <sup>1</sup> <sup>2</sup> ,x) and this formula can be extended to the complex domain <sup>C</sup> \<sup>Z</sup> by analytic continuation. The identity (8.27) can be obtained similarly, observing that

$$
\ln G(\alpha + 1) = \ln \Gamma(\alpha) + \ln G(\alpha).
$$

Now, let <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> be a function such that <sup>g</sup>|R<sup>+</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. Let also <sup>ω</sup>+[g]: <sup>R</sup>\<sup>Z</sup> <sup>→</sup> <sup>R</sup> and <sup>ω</sup>−[g]: <sup>R</sup>\<sup>Z</sup> <sup>→</sup> <sup>R</sup> be the functions defined by the equation

$$a\_{\pm}[\mathbf{g}](\mathbf{x}) = \,\_2\Sigma \mathbf{g}(\mathbf{x}) \pm \Sigma \mathbf{g}(1-\mathbf{x}) \qquad \text{for } \mathbf{x} \in \mathbb{R} \text{ } \backslash \mathbb{Z}.$$

We then observe that

ω±[g](x) <sup>=</sup> g(x) <sup>∓</sup> g(−x), x <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>.

It follows that ω<sup>+</sup> (resp. ω−) is 1-periodic if and only if g|R\<sup>Z</sup> is even (resp. odd).

The following proposition provides an explicit expression for the function ω±[g] whenever it is 1-periodic. This expression is constructed from the very definition of g.

**Proposition 8.61** *Let* <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> *be such that* <sup>g</sup>|R<sup>+</sup> *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*. Then the following assertions hold.*

*(a) If* g|R\<sup>Z</sup> *is odd, then the function* ω−[g] *is* 1*-periodic and is equal to*

$$\lim\_{n \to \infty} \left( -\sum\_{|k| \le n-1} \mathbf{g}(\mathbf{x} + k) - \mathbf{g}(\mathbf{x} - n) + \sum\_{j=1}^p \left( \binom{\mathbf{x}}{j} - \binom{\mathbf{l} - \mathbf{x}}{j} \right) \Delta^{j-1} \mathbf{g}(n) \right).$$

*(b) If* g|R\<sup>Z</sup> *is even, then the function* ω+[g] *is* 1*-periodic and is equal to*

$$\begin{aligned} \lim\_{n \to \infty} \left( -\operatorname{g}(\mathbf{x}) + \sum\_{1 \le |k| \le n-1} (\operatorname{g}(k) - \operatorname{g}(\mathbf{x} + k)) \right. \\ & \qquad - \operatorname{g}(\mathbf{x} - \mathbf{n}) + \sum\_{j=1}^{p} \left( \binom{\boldsymbol{\chi}}{j} + \binom{\mathbf{l} - \boldsymbol{\chi}}{j} \right) \Delta^{j-1} \operatorname{g}(\mathbf{n}) \Big). \end{aligned}$$

*Proof* Let us prove assertion (a). That ω−[g] is 1-periodic is clear from the discussion above. Now, using Lemma 8.55 we obtain

$$\begin{aligned} \omega\_{-}[g](\mathbf{x}) &= \lim\_{n \to \infty} (f\_n^p[g](\mathbf{x}) + f\_n^p[g](\mathbf{l} - \mathbf{x})) \\ &= \lim\_{n \to \infty} \left( \sum\_{k=0}^{n-1} (g(1 - \mathbf{x} + k) - g(\mathbf{x} + k)) + \sum\_{j=1}^p \left( \binom{\mathbf{x}}{j} - \binom{\mathbf{l} - \mathbf{x}}{j} \right) \Delta^{j-1} \mathbf{g}(\mathbf{n}) \right) . \end{aligned}$$

This proves assertion (a). Assertion (b) can be established similarly.

*Example 8.62* Consider the odd function <sup>g</sup> : <sup>R</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(x) = x - \frac{x}{x^2 + 1} \qquad \text{for } x \in \mathbb{R}.$$

The function <sup>g</sup>|R<sup>+</sup> clearly lies in *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*<sup>2</sup> and we have (see Example 5.10)

$$
\Sigma g(\mathbf{x}) = \binom{\alpha}{2} - \Re(\psi(\mathbf{x} + i)).
$$

$$\square$$

By Proposition 8.61, the function

$$
\Delta \mathbf{g}(\mathbf{x}) - \Sigma \mathbf{g}(1-\mathbf{x}) = \left. \mathfrak{M}(\psi(1-\mathbf{x}+i) - \psi(\mathbf{x}+i)) \right|
$$

is 1-periodic and is equal to the limit

$$\lim\_{n \to \infty} \left( -\sum\_{|k| \le n-1} h(\mathbf{x} + k) - h(\mathbf{x} - n) + (2\mathbf{x} - 1)h(n) \right),$$

where h(x) = g(x) − x. ♦

*Example 8.63 (Euler's Reflection Formula)* Consider the even function <sup>g</sup> : <sup>R</sup> \ {0} defined by the equation g(x) <sup>=</sup> ln <sup>|</sup>x<sup>|</sup> for <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ {0}. The function <sup>g</sup>|R<sup>+</sup> clearly lies in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and, since x ln <sup>|</sup>-(x)| = ln <sup>|</sup>x<sup>|</sup> on <sup>R</sup> \ (−N), we must have

$$\Sigma \mathbf{g}(\mathbf{x}) = \ln \left| \Gamma(\mathbf{x}) \right|, \qquad \mathbf{x} \in \mathbb{R} \; / \; (-\mathbb{N}) .$$

By Proposition 8.61, the function |-(x)-(1−x)<sup>|</sup> on <sup>R</sup>\<sup>Z</sup> is 1-periodic and is equal to

$$\lim\_{n \to \infty} \left| \frac{1}{x} \prod\_{1 \le |k| \le n} \frac{k}{x+k} \right|.$$

Euler's reflection formula then shows that this limit is also |π csc(πx)|, as expected (see Artin [11, p. 27]). ♦

*Remark 8.64* We observe the following interesting link between the analogue of Euler's reflection formula and the logarithm of the generalized Stirling constant (see Definition 6.17). Let <sup>g</sup> : <sup>R</sup> \ {0} → <sup>R</sup> be an even function such that <sup>g</sup>|R<sup>+</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> dom(). Assume also that <sup>g</sup> is integrable at 0. Then, we have

$$\Box \lg|\_{\mathbb{R}\_+}] = \int\_0^1 \Sigma \mathcal{g}(t) \, dt = \frac{1}{2} \int\_0^1 (\Sigma \mathcal{g}(t) + \Sigma \mathcal{g}(1-t)) dt \, dt$$

that is,

$$\left[\overline{\sigma}[\mathbf{g}|\_{\mathbb{R}\_+}]\right] = \frac{1}{2} \int\_0^1 a\_+[\mathbf{g}](t) \, dt.$$

For instance, for the function g(x) = ln |x| (see Example 8.63), we obtain

$$\overline{\sigma}[\mathbf{g}|\_{\mathbb{R}\_+}] = \frac{1}{2} \int\_0^1 \ln(\pi \csc(\pi t)) \, dt$$

and it is not difficult to see that this expression reduces to <sup>1</sup> <sup>2</sup> ln(2π ). ♦

#### **8.10 Analogue of Gauss' Digamma Theorem**

The following formula, due to Gauss, enables one to compute the values of the digamma function <sup>ψ</sup> for rational arguments. If a, b <sup>∈</sup> <sup>N</sup><sup>∗</sup> with a<b, then we have

$$\psi\left(\frac{a}{b}\right) = \left[ -\gamma - \ln(2b) - \frac{\pi}{2}\cot\frac{a\pi}{b} + 2\sum\_{j=1}^{\lfloor (b-1)/2 \rfloor} \cos\left(2j\pi\frac{a}{b}\right) \ln\left(\sin\frac{j\pi}{b}\right) \right] \tag{8.30}$$

(see, e.g., Knuth [53, p. 95] and Srivastava and Choi [93, p. 30]). This formula can be extended to all integers a, b <sup>∈</sup> <sup>N</sup><sup>∗</sup> by means of the difference equation ψ(x <sup>+</sup> 1) − ψ(x) = 1/x.

For instance, we have

$$
\psi\left(\frac{3}{4}\right) = -\gamma + \frac{\pi}{2} - 3\ln 2.
$$

It is natural to wonder if an analogue of formula (8.30) holds for any multiple log --type function. Finding an analogue as beautiful as this formula seems to be hard. However, we have the following partial result.

**Proposition 8.65** *Let* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> *and let* a, b <sup>∈</sup> <sup>N</sup><sup>∗</sup> *with* a<b*. Then*

$$
\Sigma g \left( \frac{a}{b} \right) \;= \frac{1}{b} \sum\_{j=0}^{b-1} \left( 1 - \omega\_b^{-aj} \right) S\_j^b \text{[g]},
$$

*where*

$$w\_b = e^{\frac{2\pi l}{b}} \qquad \text{and} \qquad S\_j^b[\mathbf{g}] = \sum\_{k=1}^{\infty} w\_b^{jk} \, \mathbf{g}\left(\frac{k}{b}\right) \dots$$

*Proof* By definition of the map , we have

$$\begin{aligned} \Sigma g\left(\frac{a}{b}\right) &= \lim\_{n \to \infty} \left(\sum\_{k=1}^{n-1} g\left(\frac{bk}{b}\right) - \sum\_{k=0}^{n-1} g\left(\frac{bk+a}{b}\right)\right) \\ &= \lim\_{n \to \infty} \sum\_{k=1}^{bn-1} \left(u\_b(k) - u\_b(k-a)\right) g\left(\frac{k}{b}\right), \end{aligned}$$

where ub(k) = 1, if b divides k, and ub(k) = 0, otherwise; that is,

$$\mu\_b(k) := \frac{1}{b} \sum\_{j=0}^{b-1} a\_b^{jk} \dots$$

This completes the proof.

Proposition 8.65 provides a first step in the search for an explicit expression for g( <sup>a</sup> <sup>b</sup> ). Depending upon the function g, more computations may be necessary to obtain a useful expression. In this respect, the derivation of formula (8.30) by means of Proposition 8.65 can be found in Marichal [66, p. 13].

*Example 8.66* Let us apply Proposition 8.65 to the function gs(x) = −x−s, where s > 1. This function lies in *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> and we have gs(x) <sup>=</sup> ζ (s, x) <sup>−</sup> ζ (s); see Example 1.7. Let a, b <sup>∈</sup> <sup>N</sup><sup>∗</sup> with a<b. For <sup>j</sup> <sup>=</sup> <sup>0</sup>,...,b <sup>−</sup> 1, we then have

$$\mathcal{S}\_j^b[\mathbf{g}\_s] = -b^s \operatorname{Li}\_s(a\_b^j),$$

where

$$\operatorname{Li}\_s(z) = \sum\_{k=1}^{\infty} \frac{z^k}{k^s}$$

is the polylogarithm function. Using Proposition 8.65, we then obtain

$$\begin{aligned} \xi\left(s,\frac{a}{b}\right) &= \xi\left(s\right) - b^{s-1} \sum\_{j=0}^{b-1} \left(1 - \omega\_b^{-aj}\right) \mathrm{Li}\_s(\omega\_b^j) \\ &= b^{s-1} \sum\_{j=0}^{b-1} \omega\_b^{-aj} \mathrm{Li}\_s(\omega\_b^j). \end{aligned}$$

The inverse conversion formula is simply given by

$$\operatorname{Li}\_{\mathfrak{s}}(\omega\_{b}^{j}) = \; b^{-s} \sum\_{k=1}^{b} \omega\_{b}^{jk} \, \xi \left( s, \frac{k}{b} \right), \qquad j = 1, \dots, b - 1. \tag{8}$$

#### **8.11 Generalized Gautschi's Inequality**

Gautschi [38] showed that the following double inequality holds for any 0 ≤ a ≤ 1

$$e^{(a-1)\,\psi(\mathbf{x}+\mathbf{l})} \le \frac{\Gamma(\mathbf{x}+a)}{\Gamma(\mathbf{x}+\mathbf{l})} \le \|\mathbf{x}^{a-1}\|, \qquad \mathbf{x} > \mathbf{0}.$$

As a consequence, since ψ(x) < ln x for any x > 0, he also obtained that

$$\Gamma(\mathfrak{x}+1)^{a-1} \le \frac{\Gamma(\mathfrak{x}+a)}{\Gamma(\mathfrak{x}+1)} \le x^{a-1}, \qquad \mathfrak{x} > 0,$$

which is also a straightforward consequence of the Wendel inequality (6.5). We refer to these inequalities as the *Gautschi inequality*.

We now provide an analogue of Gautschi's inequality for certain multiple log - type functions and for any a ≥ 0. We call it the *generalized Gautschi's inequality*. As usual, we use the additive notation.

**Proposition 8.67 (Generalized Gautschi's Inequality)** *Suppose that* <sup>g</sup> *lie in <sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,2} *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>a</sup> <sup>≥</sup> <sup>0</sup> *and* x > <sup>0</sup> *be so that* g *is convex on* [x + a,∞)*. Then we have*

$$\begin{aligned} \left( \left( a - \lceil a \rceil \right) \mathfrak{g} \left( \mathfrak{x} + \lceil a \rceil \right) \right) &\leq \left( a - \lceil a \rceil \right) \left( \Sigma \mathfrak{g} \right) \left( \mathfrak{x} + \lceil a \rceil \right) \\ &\leq \Sigma \mathfrak{g} \left( \mathfrak{x} + a \right) - \Sigma \mathfrak{g} \left( \mathfrak{x} + \lceil a \rceil \right) \leq \left( a - \lceil a \rceil \right) \mathfrak{g} \left( \mathfrak{x} + \lfloor a \rfloor \right) . \end{aligned}$$

*(The inequalities are to be reversed if* g *is concave on* [x + a,∞)*.)*

*Proof* We follow the same steps as in Gautschi's proof. We can assume that k ≤ a<k <sup>+</sup> 1 for some fixed <sup>k</sup> <sup>∈</sup> <sup>N</sup>. Let x > 0 be fixed so that g is convex on [<sup>x</sup> <sup>+</sup> k,∞). Let also <sup>f</sup> : [k, k <sup>+</sup> <sup>1</sup>) <sup>→</sup> <sup>R</sup> and <sup>ϕ</sup> : [k, k <sup>+</sup> <sup>1</sup>) <sup>→</sup> <sup>R</sup> be the functions defined by the equations

$$f(a) = \frac{1}{k+1-a} (\Sigma g(\mathbf{x}+a) - \Sigma g(\mathbf{x}+k+1))$$

and

$$\varphi(a) \;=\;(k+1-a)^2 f'(a);$$

for k ≤ a<k + 1. We then observe that

$$(k+1-a)\,\,f'(a) = \,\,f(a) + D\_a\,\,((k+1-a)\,\,f(a)) = \,\,f(a) + (\Sigma g)'(x+a).$$

It then follows that

$$\varphi(a) \;=\;(k+1-a)\left(f(a) + (\Sigma g)'(x+a)\right)$$

and

$$
\varphi'(a) = \left(k+1-a\right) (\Sigma g)''(x+a).
$$

We also have

$$\begin{aligned} \varphi(k) &= \Sigma g(\mathbf{x} + k) - \Sigma g(\mathbf{x} + k + 1) + (\Sigma g)'(\mathbf{x} + k) \\ &= (\Sigma g)'(\mathbf{x} + k) - \mathbf{g}(\mathbf{x} + k), \end{aligned}$$

where

$$g(\mathbf{x} + k) := \int\_0^1 (\Sigma g)'(\mathbf{x} + k + t) \, dt.$$

Since g is convex on [x + k,∞), its derivative is increasing on [x + k,∞), and hence we must have ϕ(k) ≤ 0 and ϕ (a) ≥ 0. Since ϕ(k + 1) = 0, it follows that the function ϕ is nonpositive and hence that the function f is decreasing. Using L'Hospital's rule and the fact that ϕ(k) ≤ 0, we then obtain the following chain of inequalities

$$\begin{aligned} -g(\mathbf{x} + k + 1) &\le - (\Sigma g)'(\mathbf{x} + k + 1) \\ &\le \lim\_{a \to k+1} f(a) \le f(a) \le f(k) = -g(\mathbf{x} + k). \end{aligned}$$

This proves the result.

*Example 8.68* Applying Proposition 8.67 to g(x) = ln x and p = 1, we obtain for any a ≥ 0 and any x > 0

$$(\mathbf{x} + \lceil a \rceil)^{a - \lceil a \rceil} \le e^{(a - \lceil a \rceil)\,\psi(\mathbf{x} + \lceil a \rceil)} \le \frac{\Gamma(\mathbf{x} + a)}{\Gamma(\mathbf{x} + \lceil a \rceil)} \le (\mathbf{x} + \lfloor a \rfloor)^{a - \lceil a \rceil}.$$

If we assume that 0 ≤ a ≤ 1, then we retrieve the original Gautschi inequality. ♦ *Remark 8.69* If we wish to bracket the function g(x + a) − g(x + 1) in Proposition 8.67, we can use the identity

$$
\Sigma g(\mathbf{x} + \lceil a \rceil) = \Sigma g(\mathbf{x} + \mathbf{l}) + \sum\_{k=1}^{\lceil a \rceil - 1} g(\mathbf{x} + k),
$$

which immediately follows from (5.3). For instance, for g(x) = ln x we obtain the double inequality

$$\begin{aligned} \left(e^{(a-\lceil a\rceil)\,\psi(\mathbf{x}+\lceil a\rceil)}(\mathbf{x}+\lceil a\rceil-1)\overline{\underline{\boldsymbol{a}}\mathbf{l}-\mathbf{l}}\leq \frac{\Gamma(\mathbf{x}+\mathbf{a})}{\Gamma(\mathbf{x}+\mathbf{l})}\mathbf{x}\right) \\ \leq (\mathbf{x}+\lfloor a\rfloor)^{a-\lceil a\rceil}(\mathbf{x}+\lceil a\rceil-1)\overline{\underline{\boldsymbol{a}}\mathbf{l}-\mathbf{l}}.\end{aligned}$$

which holds for any a ≥ 0 and any x > 0. ♦

We end this section with the following corollary, which is obtained by integrating on a ∈ (0, 1) the expressions in the generalized Gautschi inequality (Proposition 8.67).

**Corollary 8.70** *Suppose that* <sup>g</sup> *lie in <sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,2} *and let* x > <sup>0</sup> *be so that* g *is convex on* [x,∞)*. Then we have*

$$\begin{aligned} -\frac{1}{2}\operatorname{g}(\mathbf{x}+\mathbf{l}) &\leq -\frac{1}{2}(\Sigma\mathbf{g})'(\mathbf{x}+\mathbf{l})\\ &\leq \int\_{\mathbf{x}}^{\mathbf{x}+\mathbf{l}} \Sigma\mathbf{g}(\mathbf{t}) \, dt - \Sigma\mathbf{g}(\mathbf{x}+\mathbf{l}) \leq -\frac{1}{2}\operatorname{g}(\mathbf{x})\,.\end{aligned}$$

*(The inequalities are to be reversed if* g *is concave on* [x,∞)*.) In particular, the following assertions hold.*

*(a) If* g *is not eventually identically zero and if*

$$\lim\_{\chi \to \infty} \frac{g(\chi)}{\Sigma g(\chi)} = \begin{array}{c} \text{0}, \\ \end{array} \tag{8.31}$$

*then*

$$\lim\_{\chi \to \infty} \frac{(\Sigma g)'(\mathbf{x})}{\Sigma g(\mathbf{x})} = 0 \qquad \text{and} \qquad \Sigma g(\mathbf{x}) \sim \int\_{\chi}^{\chi + 1} \Sigma g(t) \, dt \quad \text{as } \mathbf{x} \to \infty.$$

*(b) If* g *is not eventually identically zero and if*

$$\lim\_{\chi \to \infty} \frac{g(\chi + 1)}{g(\chi)} = |1, 1|$$

*then*

$$\lim\_{\chi \to \infty} \frac{(\Sigma g)'(\mathbf{x})}{g(\mathbf{x})} = 1 \qquad \text{and} \qquad \lim\_{\mathbf{x} \to \infty} \frac{\int\_{\chi}^{\mathbf{x}+1} \Sigma g(t) \, dt - \Sigma g(\mathbf{x})}{g(\mathbf{x})} = \frac{1}{2}.$$

*Proof* The inequalities are obtained by integrating on a ∈ (0, 1) the expressions in the generalized Gautschi inequality. Let us now prove assertion (a); the second one can be established similarly. If g is not eventually identically zero, then it eventually never vanishes since it lies in *<sup>K</sup>*0. If condition (8.31) holds, then we must have

$$\lim\_{\chi \to \infty} \frac{\Sigma g(\chi + 1)}{\Sigma g(\chi)} = \lim\_{\chi \to \infty} \left( 1 + \frac{g(\chi)}{\Sigma g(\chi)} \right) = 1 \quad \text{and} \quad \lim\_{\chi \to \infty} \frac{g(\chi)}{\Sigma g(\chi + 1)} = 0.$$

We then complete the proof by dividing all the expressions in the inequalities by g(x + 1) and letting x → ∞.

#### **8.12 Generalized Webster's Functional Equation**

In the framework of --type functions, Webster [98, Section 8] investigated the multiplicative version of the functional equation

$$f(\mathbf{x}) + f(\mathbf{x} + \frac{1}{2}) = h(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0},$$

and, more generally, of the functional equation

$$\sum\_{j=0}^{m-1} f\left(x + \frac{j}{m}\right) \, = \, h(\mathbf{x}), \qquad x > 0,$$

for any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, where <sup>h</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is a given function satisfying certain conditions.

In this section, we extend Webster's result by considering and solving the more general equation

$$\sum\_{j=0}^{m-1} f(\mathbf{x} + a \, j) = h(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}, \tag{8.32}$$

where a > 0 is also a given parameter. We call it the *generalized Webster functional equation*. For instance, we can prove that the unique monotone solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> R to the equation

$$f(\mathbf{x}) + f(\mathbf{x} + a) \, = \, \frac{1}{x}$$

is given by

$$f(\mathbf{x}) = \frac{1}{2a} \psi \left(\frac{\mathbf{x} + a}{2a}\right) - \frac{1}{2a} \psi \left(\frac{\mathbf{x}}{2a}\right).$$

Our general result is stated in the following theorem, a variant of which was established by Webster [98, Theorem 8.1] in the special case when p = 1 and <sup>a</sup> <sup>=</sup> <sup>1</sup> m.

**Theorem 8.71 (Generalized Webster's Functional Equation)** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*,* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*,* a > <sup>0</sup>*, and* <sup>h</sup> <sup>∈</sup> *<sup>D</sup>*<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> *for some integer* <sup>q</sup> <sup>≥</sup> <sup>p</sup>*. Define also the function* ha : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation*

$$h\_a(\mathbf{x}) = \, h(a\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

*If* ha *lies in <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> *(resp. <sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> *), then there is a unique solution to equation* (8.32) *lying in <sup>K</sup>*p*, namely*

$$f(\mathbf{x}) = \,\,\Sigma h\_{am}\left(\frac{\mathbf{x} + a}{am}\right) - \Sigma h\_{am}\left(\frac{\mathbf{x}}{am}\right).$$

*Moreover, this solution lies in <sup>K</sup>*<sup>p</sup> <sup>−</sup> *(resp. <sup>K</sup>*<sup>p</sup> +*).*

*Proof* Suppose for instance that ha lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> and let <sup>g</sup><sup>m</sup> <sup>a</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be defined by the equation g<sup>m</sup> <sup>a</sup> (x) = ha(mx) for x > 0. By Corollary 4.21, the function g<sup>m</sup> <sup>a</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> . Suppose that <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is a solution to equation (8.32). Then necessarily

$$g\_a^m(\mathbf{x}) = h(am\mathbf{x} + a) - h(am\mathbf{x}) = \sum\_{j=0}^{m-1} \Delta\_j f(am\mathbf{x} + aj) = \Delta\_\mathbf{x} f(am\mathbf{x}).$$

If <sup>f</sup> lies in *<sup>K</sup>*p, then by the uniqueness and existence theorems we have that

$$f(amx) = f(am) + \Sigma g\_a^m(x).$$

and <sup>f</sup> must lie in *<sup>K</sup>*<sup>p</sup> <sup>−</sup>. Since both <sup>g</sup><sup>m</sup> <sup>a</sup> and <sup>h</sup> lie in *<sup>D</sup>*<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*<sup>q</sup> , by Propositions 5.7 and 5.8 we then have

$$f(amx) = f(am) + \Sigma\_x h(amx + a) - \Sigma h(amx)$$

$$= f(am) + \Sigma\_x h\_{am}\left(x + \frac{1}{m}\right) - \Sigma h\_{am}(\mathbf{x})$$

$$= c + \Sigma h\_{am}\left(\mathbf{x} + \frac{1}{m}\right) - \Sigma h\_{am}(\mathbf{x}),$$

or equivalently,

$$f(\mathbf{x}) = c + \Sigma h\_{am} \left(\frac{\mathbf{x} + a}{am}\right) - \Sigma h\_{am} \left(\frac{\mathbf{x}}{am}\right) \tag{8.33}$$

for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>. But the function <sup>f</sup> specified by (8.33) satisfies (8.32) if and only if c = 0; indeed, we then have

$$\sum\_{j=0}^{m-1} f(\mathbf{x} + aj) = mc + \sum\_{j=0}^{m-1} \Delta\_j \,\Sigma h\_{am} \left( \frac{\mathbf{x} + aj}{am} \right)$$

$$= mc + \Delta \Sigma h\_{am} \left( \frac{\mathbf{x}}{am} \right) \\ \quad = mc + h(\mathbf{x}).$$

This completes the proof.

*Example 8.72* Theorem 8.71 shows that the unique eventually monotone or eventually log-convex solution to the functional equation

$$f(\mathbf{x})f(\mathbf{x}+a)\,\mathbf{x}^p = 1, \qquad \mathbf{x} > 0, \, a > 0, \, p > 0,$$

is the function

$$f(\mathbf{x}) = \left(\frac{\Gamma(\frac{\mathbf{x}}{2a})}{\sqrt{2a}\,\Gamma(\frac{\mathbf{x}+a}{2a})}\right)^p \,\mathrm{s}$$

This result was established by Thielman [95] (see also Anastassiadis [5]). The special case when p = 1 was previously shown by Mayer [70]. ♦

Combining both Theorems 8.27 and 8.71, we can derive immediately the following corollary, which in a sense provides yet another characterization of multiple --type functions. For a similar result on the gamma function, see Artin [11, p. 35].

**Corollary 8.73** *Let* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*,* <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗*, and* <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p+1*. Define also the function* gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *by the equation* gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> ) *for* x > 0*. Then the function* f = g *is the unique solution lying in <sup>K</sup>*<sup>p</sup> *to the equation*

$$\sum\_{j=0}^{m-1} f\left(\frac{\mathbf{x} + j}{m}\right) \ = \sum\_{j=1}^{m} \Sigma g\left(\frac{j}{m}\right) + \Sigma g\_m(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

*Example 8.74* For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> the gamma function is the unique log-convex solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> to the equation

$$\prod\_{j=0}^{m-1} f\left(\frac{\mathbf{x} + j}{m}\right) \;= \frac{\Gamma(\mathbf{x})}{m^{\frac{\mathbf{x}}{2} - \frac{1}{2}}} (2\pi)^{\frac{m-1}{2}}, \qquad \mathbf{x} > \mathbf{0}.$$

Equivalently, for any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> the gamma function is the unique log-convex solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> to the equation

$$\prod\_{j=0}^{m-1} f\left(\frac{\mathbf{x} + j}{m}\right) = \prod\_{j=0}^{m-1} \Gamma\left(\frac{\mathbf{x} + j}{m}\right), \qquad \mathbf{x} > \mathbf{0}. \tag{8}$$

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 9 Summary of the Main Results**

Now that we have collected a number of relevant results on multiple log - type functions, we naturally look forward to applying them on various examples, including not only special functions related to the gamma function but also many other useful functions of mathematical analysis. Such applications will be discussed in the next three chapters. But first and foremost, it is time to take stock of the new theory we have developed and summarize what we have found and learned thus far.

This chapter is devoted to a review of the most interesting and useful results that we have established in the previous chapters. These results are presented here as a step-by-step plan in order to perform a systematic and efficient investigation of the multiple log --type functions. We have tried to be as self-contained as possible, so that the reader can skip Chaps. 2–8 and make direct use of the summary given in this chapter.

*Remark 9.1* At many places in this book (e.g., in Proposition 5.18), we have made the assumption that the function <sup>g</sup> (resp. <sup>g</sup>(r) for some <sup>r</sup> <sup>∈</sup> <sup>N</sup>∗) is continuous to ensure the existence of certain integrals. Although we can often relax this condition by simply requiring that g (resp. g(r)) is locally integrable, we have kept this continuity assumption for simplicity and consistency with similar results where higher order differentiability is assumed. ♦

#### **9.1 Basic Definitions**

Let us recall a few useful concepts introduced in the previous chapters. For any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any S ∈ {N, <sup>R</sup>}, we let *<sup>D</sup>*<sup>p</sup> <sup>S</sup> denote the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> having the asymptotic property that

pg(x) <sup>→</sup> 0 as <sup>x</sup> <sup>→</sup><sup>S</sup> <sup>∞</sup>.

© The Author(s) 2022

161

J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0\_9

For any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, we also let *<sup>C</sup>*<sup>p</sup> denote the set of <sup>p</sup> times continuously differentiable functions from <sup>R</sup><sup>+</sup> to <sup>R</sup> and we let *<sup>K</sup>*<sup>p</sup> denote the set of functions from <sup>R</sup><sup>+</sup> to <sup>R</sup> that are eventually p-convex or eventually p-concave, that is, p-convex or p-concave (see Definition 2.2) in a neighborhood of infinity. Recall also that the sets *<sup>D</sup>*<sup>p</sup> <sup>S</sup> 's are increasingly nested while the sets *<sup>C</sup>*p's and *<sup>K</sup>*p's are decreasingly nested, that is,

$$\mathcal{D}\_{\mathbb{S}}^{p} \subset \mathcal{D}\_{\mathbb{S}}^{p+1}, \qquad \mathcal{K}^{p+1} \subset \mathcal{K}^{p}, \qquad \text{and} \quad \mathcal{C}^{p+1} \subset \mathcal{C}^{p} \qquad \text{for any } p \in \mathbb{N}.$$

We have also proved in Proposition 4.8 that

$$\mathcal{D}\_{\mathbb{N}}^p \cap \mathcal{K}^p \;= \; \mathcal{D}\_{\mathbb{R}}^p \cap \mathcal{K}^p$$

and we denote this common intersection simply by *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p.

In Chap. 5, we have introduced the map that carries any function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lying in the set

$$\text{dom}(\Sigma) = \bigcup\_{p \ge 0} (\mathcal{D}^p \cap \mathcal{K}^p)$$

into the unique solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> that arises from Theorem 1.4 and satisfies f (1) = 0. That is,

$$\Sigma \mathbf{g}(\mathbf{x}) = \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}.$$

The class of functions that are equal (up to an additive constant) to g is called the *principal indefinite sum* of g (see Definition 5.4 and Example 5.5). A function f lying in the range of the map is also called a *multiple* log --*type function*.

In the previous chapters, we have established and discussed several properties of the multiple log --type functions, many of which are counterparts of classical properties of the gamma function. For instance, we have proved that every multiple log --type function satisfies an analogue of Gauss' multiplication formula for the gamma function. In the rest of this chapter, we provide a summary of these properties. The reader can use them for a systematic investigation of any multiple log --type function.

#### **9.2 ID Card and Main Characterization**

The first step in this investigation is to choose a function <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> (for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>) for which we wish to study its principal indefinite sum g. For instance, if we consider the function g(x) <sup>=</sup> <sup>x</sup> ln <sup>x</sup>, which lies in *<sup>D</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*2, then the function g is the logarithm of the hyperfactorial function K(x) (see Sect. 12.5), that is

$$
\Sigma g(\mathbf{x}) = \ln K(\mathbf{x}) = (\mathbf{x} - 1) \ln \Gamma(\mathbf{x}) - \ln G(\mathbf{x}),
$$

where G is the Barnes G-function. Our results will then enable us to study this function through several of its properties.

Alternatively, we can start from a given function <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*<sup>p</sup> (for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>) that we wish to investigate and whose difference g = f is a function that lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p. For instance, we may want to investigate the <sup>n</sup>th degree Bernoulli polynomial f (x) = Bn(x) by first observing that the function

$$g(\mathbf{x}) = \Delta f(\mathbf{x}) = n\mathbf{x}^{n-1}$$

lies in *<sup>D</sup>*<sup>n</sup> <sup>∩</sup> *<sup>K</sup>*n. We then have

$$
\Sigma \mathcal{g}(\mathfrak{x}) \, \, \, \, \, \, B\_{\mathfrak{n}}(\mathfrak{x}) \, \, \, \, B\_{\mathfrak{l}}(\mathfrak{l}) \, \, \,.
$$

*Remark 9.2* To investigate a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> through our results, it is not enough to check that the difference <sup>g</sup> <sup>=</sup> f lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We also need to make sure that <sup>f</sup> also lies in *<sup>K</sup>*p. For instance, both functions

$$f\_1(\mathbf{x}) = \mathbf{x} + \sin(2\pi\mathbf{x}) \qquad \text{and} \qquad f\_2(\mathbf{x}) = \mathbf{x} + \theta\_3(\pi\mathbf{x}, 1/2),$$

where θ3(u, q) is the Jacobi theta function defined by the equation

$$\theta\_{\mathfrak{Z}}(u,q) := 1 + 2\sum\_{n=1}^{\infty} q^{n^2} \cos(2nu),$$

have the same difference <sup>g</sup> <sup>=</sup> f<sup>1</sup> <sup>=</sup> f<sup>2</sup> <sup>=</sup> 1 in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> (and we have g(x) <sup>=</sup> <sup>x</sup> <sup>−</sup> 1). However, neither <sup>f</sup><sup>1</sup> nor <sup>f</sup><sup>2</sup> lies in *<sup>K</sup>*1. ♦

**ID Card** It is convenient to start our investigation of the function g by collecting some basic properties of the function g, thus establishing a kind of ID card for that function.

Thus, we first consider a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>. We then determine its asymptotic degree

$$\begin{aligned} \deg \mathcal{g} &= -1 + \min \{ q \in \mathbb{N} : \mathcal{g} \in \mathcal{D}\_{\mathbb{R}}^q \} \\ &= -1 + \min \{ q \in \mathbb{N} : \Delta^q \mathcal{g}(\mathbf{x}) \to 0 \text{ as } \mathbf{x} \to \infty \} . \end{aligned}$$

If deg <sup>g</sup> = ∞ (e.g., when g(x) <sup>=</sup> <sup>2</sup>x) or if g /<sup>∈</sup> *<sup>K</sup>*<sup>p</sup> for all <sup>p</sup> <sup>≥</sup> <sup>1</sup> <sup>+</sup> deg <sup>g</sup> (e.g., g(x) <sup>=</sup> <sup>x</sup> <sup>+</sup> <sup>1</sup> <sup>x</sup> sin x), then the function g does not exist and the investigation stops here. Otherwise, the functions <sup>g</sup> and g lie in *<sup>D</sup>*p∩*K*<sup>p</sup> and *<sup>D</sup>*p+1∩*K*p, respectively, where p = 1 + deg g.

If deg <sup>g</sup> = −1, it is important to check whether <sup>g</sup> also lies in the set *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> for which the sequence <sup>n</sup> → g(n) is summable. In this case, by Proposition 6.14 we have that

$$\lim\_{\chi \to \infty} \Sigma g(\chi) \, = \sum\_{k=1}^{\infty} g(k).$$

It is also useful to determine the integer <sup>r</sup> <sup>∈</sup> <sup>N</sup>, if any, for which <sup>g</sup> lies in *<sup>C</sup>*<sup>r</sup> <sup>∩</sup> *<sup>K</sup>*max{p,r} . In this case, we know from Theorem 7.5 that g lies also in this set. Moreover, many functions of mathematical analysis lie in both

$$\mathcal{C}^{\infty} = \bigcap\_{r \ge 0} \mathcal{C}^r \qquad \text{and} \qquad \mathcal{K}^{\infty} = \bigcap\_{p \ge 0} \mathcal{K}^p.$$

If <sup>g</sup> lies in these sets, then we can write <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*∞.

It may be also useful to determine the domain on which g is p-convex or pconcave. For instance, the function g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> ln x is 0-concave on [e,∞), 1-convex on [e3/2,∞), etc. (see Example 5.13).

Note that, at this stage, we may not yet have any simple expression for g. Limit and series representations will later emerge anyway from our investigation.

**Analogue of Bohr-Mollerup's Theorem** The following characterization result constitutes the analogue of Bohr-Mollerup's theorem for the function g and follows immediately from the uniqueness Theorem 3.1.

*If* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *is a solution to the equation* f <sup>=</sup> <sup>g</sup>, *then it lies in <sup>K</sup>*<sup>p</sup> *if and only if* <sup>f</sup> <sup>=</sup> <sup>c</sup> <sup>+</sup> g *for some* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

This characterization sometimes enables one to establish alternative expressions for the function g. For instance, if g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> , then we have

g(x) = ψ(x) + γ.

Using the characterization above, we can easily establish the following Gauss representation (see, e.g., Srivastava and Choi [93, p. 26])

$$
\psi(\mathbf{x}) + \boldsymbol{\gamma} = \int\_0^\infty \frac{e^{-t} - e^{-\chi t}}{1 - e^{-t}} dt, \qquad \boldsymbol{\chi} > 0.
$$

Indeed, both sides of this identity vanish at x = 1 and are eventually increasing solutions to the equation f <sup>=</sup> <sup>g</sup>. Hence, by uniqueness they must coincide on <sup>R</sup>+.

Note also that, in addition to the analogue of Bohr-Mollerup's theorem above, we also have an alternative characterization of g given in Proposition 3.9.

#### **9.3 Extended ID Card**

We now complement the ID card of the function g by considering some additional related constants and mappings. From now on, we assume that g is at least continuous on <sup>R</sup>+. More precisely, we assume that

$$\mathbf{g} \in \mathcal{C}^r \cap \mathcal{D}^p \cap \mathcal{K}^{\max\{p,r\}}$$

for <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> deg <sup>g</sup> and some <sup>r</sup> <sup>∈</sup> <sup>N</sup>.

Recall also that, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the symbols Gn and Bn denote the <sup>n</sup>th Gregory coefficient and the nth Bernoulli number, respectively. We also let

$$\overline{G}\_n = |1 - \sum\_{j=1}^n |G\_j|.$$

and we let Bn(x) denote the nth degree Bernoulli polynomial (see Sects. 6.3, 6.4, and 6.7).

**Asymptotic Constant** Recall that the asymptotic constant associated with g (see (6.10)) is the number

$$
\sigma[\mathbf{g}] = \int\_0^1 \Sigma \mathbf{g}(t+1) \, dt = \int\_1^2 \Sigma \mathbf{g}(t) \, dt.
$$

If g is integrable at 0, we also define the generalized Stirling constant (see Definition 6.17) as the number exp(σ[g]), where

$$
\lfloor \overline{\sigma} \lg \rfloor = \lfloor \sigma \lg \rfloor - \int\_0^1 \lg(t) \, dt \, = \int\_0^1 \Sigma \lg(t) \, dt.
$$

Since this latter constant does not always exist (e.g., when g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> ), we do not use it much in our investigation.

The asymptotic constant σ[g] has the following limit, series, and integral representations (see identities (8.11), (8.12), (8.21), and Corollary 8.45).

(a) If <sup>g</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, then we have

$$\sigma[\mathbf{g}] = \sum\_{j=1}^{p} G\_j \, \Delta^{j-1} \mathbf{g}(\mathbf{l}) - \sum\_{k=1}^{\infty} \left( \int\_{k}^{k+1} \mathbf{g}(t) \, dt - \sum\_{j=0}^{p} G\_j \, \Delta^j \mathbf{g}(k) \right)$$

and

$$\sigma[\mathbf{g}] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}(k) - \int\_1^n \mathbf{g}(t) \, dt + \sum\_{j=1}^p G\_j \Delta^{j-1} \mathbf{g}(n) \right).$$

(b) If <sup>g</sup> lies in *<sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*2<sup>q</sup> , where <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> ∪ { <sup>1</sup> <sup>2</sup> } and 0 ≤ p ≤ 2q − 1, then we have

$$\sigma[g] = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} g(k) - \int\_1^n g(t) \, dt - \sum\_{k=1}^p \frac{B\_k}{k!} \, g^{(k-1)}(n) \right).$$

(c) If <sup>g</sup> lies in *<sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*2, then we have

$$
\sigma[\mathbf{g}] = \frac{1}{2}\mathbf{g}(\mathbf{l}) + \int\_{1}^{\infty} \left( \{t\} - \frac{1}{2} \right) \mathbf{g}'(t) \, dt.
$$

(d) If <sup>g</sup> lies in *<sup>C</sup>*2q+<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*2q+1, then we have

$$\sigma[\mathbf{g}] = \frac{1}{2}\mathbf{g}(\mathbf{l}) - \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} \mathbf{g}^{(2k-1)}(\mathbf{l}) - \int\_{1}^{\infty} \frac{B\_{2q}(\{\mathbf{l}\mathbf{l}\})}{(2q)!} \mathbf{g}^{(2q)}(\mathbf{l}) \,\mathrm{d}\mathbf{l}\,\mathrm{d}\mathbf{l}$$

We also know from Proposition 6.14 that if <sup>g</sup> lies in *<sup>C</sup>*0∩*D*−1∩*K*<sup>0</sup> (here *<sup>D</sup>*−<sup>1</sup> stands for *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> ), then g is integrable at infinity and

$$\sigma[\mathbf{g}] = \sum\_{k=1}^{\infty} \mathbf{g}(k) - \int\_{1}^{\infty} \mathbf{g}(t) \, dt.$$

**Analogue of Raabe's Formula** The analogue of Raabe's formula is simply the identity (see (8.9))

$$\int\_{\mathfrak{x}}^{\mathfrak{x}+1} \Sigma \mathbf{g}(t) \, dt \, = \,\sigma[\mathbf{g}] + \int\_{1}^{\mathfrak{x}} \mathbf{g}(t) \, dt$$

and we know by Proposition 8.20 that any of these integrals lies in *<sup>C</sup>*0∩*D*p+1∩*K*p+1.

Recall also from Corollary 8.23 that a function <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> and satisfies the equation

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \, \sigma[\mathbf{g}] + \int\_{1}^{\chi} \mathbf{g}(t) \, dt, \qquad \chi > 0,$$

if and only if f = g. This provides an alternative characterization of g.

**Generalized Binet's Function** For any <sup>q</sup> <sup>∈</sup> <sup>N</sup>, the generalized Binet function associated with <sup>g</sup> and <sup>q</sup> is the function <sup>J</sup> <sup>q</sup> [g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation (see (6.16))

$$J^q[\mathbf{g}](\mathbf{x}) = \sum\_{j=0}^{q-1} G\_j \Delta^j \mathbf{g}(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} \mathbf{g}(t) \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

In particular, we also have (see (6.18))

$$J^{q+1}[\Sigma g](\mathbf{x}) = \left. \Sigma g(\mathbf{x}) - \sigma[\mathbf{g}] - \int\_1^\chi g(t) \, dt + \sum\_{j=1}^q G\_j \Delta^{j-1} g(\mathbf{x}) \, dt \right|$$

Note that several objects and formulas of our theory can be usefully expressed in terms of this latter function.

**Generalized Euler's Constant** Recall that the generalized Euler constant associated with the function g is the number

$$\mathcal{Y}[\mathfrak{g}] = -J^{p+1}[\Sigma \mathfrak{g}](\mathfrak{l}),$$

where p = 1 + deg g (see Definition 6.34).

Note that, contrary to the asymptotic constant σ[g], the generalized Euler constant γ [g] is not invariant if we replace p with a higher value. Besides, by definition of γ [g] both quantities are related through the following identity

$$
\sigma \text{[g]} \;=\; \mathcal{Y} \text{[g]} + \sum\_{j=1}^{p} G\_j \; \Delta^{j-1} \text{g}(\mathbf{l}),
$$

where p = 1 + deg g (see Proposition 6.36). In particular, we have γ [g] = σ[g] whenever deg g = −1.

We also have the following integral representations

$$\gamma[\mathbf{g}] = \int\_{1}^{\infty} \left( \sum\_{j=0}^{p} G\_{j} \Delta^{j} \mathbf{g}(\lfloor t \rfloor) - \mathbf{g}(t) \right) dt$$

and

$$\mathcal{Y}[\mathbf{g}] = \int\_1^{\infty} \left( \overline{P}\_p[\mathbf{g}](t) - \mathbf{g}(t) \right) dt,$$

where

$$\overline{P}\_p[\![g](\mathbf{x}) \!] = \sum\_{j=0}^p \binom{\{\mathbf{x}\}}{j} \Delta^j \mathbf{g}(\lfloor \mathbf{x} \rfloor), \qquad \mathbf{x} \ge \mathbf{1}, \mathbf{y}$$

is the piecewise polynomial function whose restriction to any interval (k, k + 1), with <sup>k</sup> <sup>∈</sup> <sup>N</sup>∗, is the interpolating polynomial of <sup>g</sup> with nodes at k, k <sup>+</sup> <sup>1</sup>,...,k <sup>+</sup> <sup>p</sup> (see Proposition 6.37 and Eqs.. (6.38) and (6.41)).

If g is p-convex or p-concave on [1,∞), then the graph of g is always over or always under that of Pp[g] on [1,∞) and |γ [g]| is the surface area between both graphs. In this case, we also have (see (6.45) and (6.46))

$$|\mathcal{Y}[\mathbf{g}]| \le \overline{G}\_p |\Delta^p \mathbf{g}(1)|$$

and, if p ≥ 1,

$$|\mathcal{Y}\{\mathbf{g}\}| \le \int\_0^1 \left| \binom{t-1}{p} \right| \left| \Delta^{p-1} \mathbf{g}(t+1) - \Delta^{p-1} \mathbf{g}(1) \right| \, dt.$$

#### **9.4 Inequalities**

Recall that, for any a > 0, the function ρ<sup>p</sup> <sup>a</sup> [g]: [0,∞) <sup>→</sup> <sup>R</sup> is defined by the equation (see (1.7))

$$\rho\_a^p[\mathbf{g}](\mathbf{x}) = \mathbf{g}(\mathbf{x} + a) - \sum\_{j=0}^{p-1} \binom{\boldsymbol{\chi}}{j} \Delta^j \mathbf{g}(a) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

In particular, we have

$$\rho\_a^{p+1}[\Sigma \mathbf{g}](\mathbf{x}) = \Sigma \mathbf{g}(\mathbf{x} + a) - \Sigma \mathbf{g}(a) - \sum\_{j=1}^p \binom{\chi}{j} \Delta^{j-1} \mathbf{g}(a) \dots$$

**Generalized Wendel's Inequality (Symmetrized Version)** Let a ≥ 0 and let x > 0 be so that g is p-convex or p-concave on [x,∞). Then we have (see Corollary 6.2)

$$\left|\rho\_{\boldsymbol{x}}^{p+1}\{\Sigma\boldsymbol{g}\}(a)\right| \leq \lceil a\rceil \left|\binom{a-1}{p}\right| \left|\Delta^{p}\boldsymbol{g}(\boldsymbol{x})\right| \;.$$

If p ≥ 1, we also have the following tighter inequality

$$\left|\rho\_{\mathbf{x}}^{p+1}[\Sigma\mathbf{g}](a)\right| \leq \left|\binom{a-1}{p}\right| \left|\Delta^{p-1}\mathbf{g}(\mathbf{x}+a) - \Delta^{p-1}\mathbf{g}(\mathbf{x})\right|.$$

This latter inequality is referred to as the symmetrized version of the generalized Wendel inequality (see Corollary 6.2). Both inequalities reduce to equalities when a ∈ {0, 1,...,p}.

Now, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have (see (5.4))

$$
\rho\_n^{p+1}[\Sigma \mathfrak{g}](\mathfrak{x}) = \Sigma \mathfrak{g}(\mathfrak{x}) - f\_n^p[\mathfrak{g}](\mathfrak{x}), \qquad \mathfrak{x} \succ 0.
$$

Using this identity, we immediately derive the following discrete version of the inequalities above. If g is p-convex or p-concave on [n,∞), then

$$\left|\nabla \mathbf{g}(\mathbf{x}) - f\_n^p[\mathbf{g}](\mathbf{x})\right| \le \left\lceil \mathbf{x} \right\rceil \left| \binom{\mathbf{x} - 1}{p} \right| \left| \Delta^p \mathbf{g}(\mathbf{n}) \right|, \qquad \mathbf{x} > \mathbf{0},$$

and if p ≥ 1,

$$\left| \left| \Sigma g(\mathbf{x}) - f\_n^p[\mathbf{g}](\mathbf{x}) \right| \right| \le \left| \binom{\mathbf{x} - \mathbf{l}}{p} \right| \left| \Delta^{p-1} g(\mathbf{n} + \mathbf{x}) - \Delta^{p-1} g(\mathbf{n}) \right|, \qquad \mathbf{x} > \mathbf{0}.$$

If <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , then (see Proposition 6.14)

$$\mathsf{E}\mathfrak{g}(\mathsf{x}) \to \mathsf{E}\mathfrak{g}(\infty) = \sum\_{k=1}^{\infty} \mathfrak{g}(k) \qquad \text{as } \mathsf{x} \to \infty.$$

We then have the following additional inequality (see Theorem 3.13). If g is increasing or decreasing on [n,∞), then

$$\left|\sum\_{k=n}^{\infty} \mathbf{g}(\mathbf{x} + k)\right| = \left|\Sigma \mathbf{g}(\mathbf{x} + n) - \Sigma \mathbf{g}(\infty)\right| \le \left|\Sigma \mathbf{g}(n) - \Sigma \mathbf{g}(\infty)\right|, \qquad \mathbf{x} > \mathbf{0}.$$

**Generalized Stirling's Formula-Based Inequality (Symmetrized Version)** If x > 0 is so that g is p-convex or p-concave on [x,∞), then we have the inequality (see Corollary 6.12)

$$\left| J^{p+1}[\Sigma \mathfrak{g}](\mathfrak{x}) \right| \le \overline{G}\_p \left| \Delta^p \mathfrak{g}(\mathfrak{x}) \right|.$$

If p ≥ 1, we also have the following tighter inequality

$$\left| J^{p+1}[\Sigma g](\mathbf{x}) \right| \le \left| \int\_0^1 \binom{t-1}{p} (\Delta^{p-1} g(\mathbf{x} + t) - \Delta^{p-1} g(\mathbf{x})) \, dt \right|.$$

Moreover, if p = 0 or p = 1, then (see Proposition 6.19)

$$\left| \left\lfloor \Sigma \mathbf{g} \left( \mathbf{x} + \frac{1}{2} \right) - \sigma \mathbf{[g]} - \int\_{1}^{\chi} \mathbf{g}(t) \, dt \right| \right| \le \left| J^{p+1} [\Sigma \mathbf{g}](\mathbf{x}) \right|.$$

**Generalized Gautschi's Inequality** Suppose that <sup>g</sup> lies in *<sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>K</sup>*2. Let <sup>a</sup> <sup>≥</sup> <sup>0</sup> and let x > 0 be so that g is convex on [x + a,∞). Then we have (see Proposition 8.67)

$$\begin{aligned} \left(a - \lceil a \rceil \right) g(\mathbf{x} + \lceil a \rceil ) &\leq \left(a - \lceil a \rceil \right) \left(\Sigma g \right) \left(\mathbf{x} + \lceil a \rceil \right) \\ &\leq \Sigma g(\mathbf{x} + a) - \Sigma g(\mathbf{x} + \lceil a \rceil) \leq \left(a - \lceil a \rceil \right) g(\mathbf{x} + \lfloor a \rfloor). \end{aligned}$$

(The inequalities are to be reversed if g is concave on [x + a,∞).)

#### **9.5 Asymptotic Analysis**

In this section, we gather the main results related to the asymptotic behaviors of multiple log --type functions, including the generalized Stirling formula.

**Generalized Wendel's Inequality-Based Limit** The following convergence result immediately follows from the generalized Wendel inequality (see Theorem 6.1). For any a ≥ 0, we have

$$
\rho\_\times^{p+1}[\Sigma \mathfrak{g}](a) \to 0 \qquad \text{as } \mathfrak{x} \to \infty,
$$

or equivalently,

$$
\Delta \mathbf{g}(\mathbf{x} + \mathbf{a}) - \Sigma \mathbf{g}(\mathbf{x}) - \sum\_{j=1}^{p} \binom{a}{j} \Delta^{j-1} \mathbf{g}(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty \dots
$$

This convergence result still holds if we differentiate r times the left-hand side.

**Generalized Stirling's Formula** We have (see Theorem 6.13)

$$J^{p+1}[\Sigma g](x) \to 0 \qquad \text{as } x \to \infty,$$

or equivalently,

$$
\Delta \mathbf{g}(\mathbf{x}) - \int\_1^\chi \mathbf{g}(t) \, dt + \sum\_{j=1}^p G\_j \Delta^{j-1} \mathbf{g}(\mathbf{x}) \to \sigma \mathbf{[g]} \qquad \text{as } \mathbf{x} \to \infty.
$$

If <sup>g</sup> lies in *<sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*2<sup>q</sup> , where <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> ∪ { <sup>1</sup> <sup>2</sup> } and 0 ≤ p ≤ 2q − 1, then we also have (see Proposition 8.39)

$$
\Delta \mathbf{g}(\mathbf{x}) - \int\_1^\chi \mathbf{g}(t) \, dt - \sum\_{k=1}^p \frac{B\_k}{k!} \, \mathbf{g}^{(k-1)}(\mathbf{x}) \to \sigma \, \text{[g]} \qquad \text{as } \mathbf{x} \to \infty.
$$

If p = 0 or p = 1, we also have the following analogue of Burnside's formula, which provides a better approximation than the generalized Stirling formula (see Proposition 6.19)

$$
\Sigma \mathbf{g}(\mathbf{x}) - \int\_1^{\mathbf{x} - \frac{1}{2}} \mathbf{g}(t) \, dt \to \sigma \mathbf{[g]} \qquad \text{as } \mathbf{x} \to \infty \dots
$$

All the convergence results above still hold if we differentiate r times both sides. In particular, the function <sup>D</sup>rJ <sup>p</sup>+1[g] vanishes at infinity.

**Asymptotic Equivalences** For any <sup>a</sup> <sup>≥</sup> 0 and any <sup>c</sup> <sup>∈</sup> <sup>R</sup>, we have (see Proposition 6.20)

$$c + \Sigma g(\mathbf{x} + a) \sim c + \int\_{\mathbf{x}}^{\mathbf{x} + 1} \Sigma g(t) \, dt \qquad \text{as } \mathbf{x} \to \infty$$

(under the assumption that c + g(n + 1) ∼ c + g(n) as n →<sup>N</sup> ∞ whenever <sup>c</sup> <sup>+</sup> g vanishes at infinity). If <sup>g</sup> does not lie in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , then we also have

$$
\Sigma \mathbf{g}(\mathbf{x} + a) \sim c + \int\_1^\chi \mathbf{g}(t) \, dt \qquad \text{as } \mathbf{x} \to \infty.
$$

These equivalences still hold if we differentiate r times both sides; that is,

$$D^r \Sigma \mathbf{g}(\mathbf{x} + a) \sim \mathbf{g}^{(r-l)}(\mathbf{x}) \qquad \text{ as } \mathbf{x} \to \infty$$

(under the assumption that <sup>D</sup>rg(n+1) <sup>∼</sup> <sup>D</sup>rg(n) as <sup>n</sup> <sup>→</sup><sup>N</sup> <sup>∞</sup> whenever <sup>D</sup>rg vanishes at infinity).

**Asymptotic Expansions** We have the following asymptotic expansions (see Proposition 8.36).

(a) If <sup>g</sup> lies in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,1} , then for large x we have

$$
\Sigma g(\mathbf{x}) = \sigma[\mathbf{g}] + \int\_1^\chi g(t) \, dt - \frac{1}{2} \, \mathbf{g}(\mathbf{x}) + R\_{\Gamma}(\mathbf{x}) \,, \, \mathbf{g}
$$

where

$$|\mathcal{R}\_1(\mathbf{x})| \le \frac{1}{2} |\mathbf{g}(\mathbf{x})|.$$

(b) If <sup>g</sup> lies in *<sup>C</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*max{p,2q} for some <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗, then for large <sup>x</sup> we have

$$
\Delta \mathbf{g}(\mathbf{x}) = \sigma[\mathbf{g}] + \int\_1^\mathbf{x} \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^q \frac{B\_{2k}}{(2k)!} \mathbf{g}^{(2k-1)}(\mathbf{x}) + \mathcal{R}\_l^q(\mathbf{x}) \,, ,
$$

where

$$|R\_1^q(\mathbf{x})| \le \frac{|B\_{2q}|}{(2q)!} |\mathbf{g}^{(2q-1)}(\mathbf{x})| \,.$$

Asymptotic expansions of the more general function

$$\infty \mapsto \frac{1}{m} \sum\_{j=0}^{m-1} \Sigma g\left(x + \frac{j}{m}\right),$$

for any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, are also provided in Proposition 8.35.

**Generalized Liu's Formula** The following assertions hold (see Proposition 8.42).

(a) If <sup>g</sup> lies in *<sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*2, then we have

$$\Sigma \mathbf{g}(\mathbf{x}) = \sigma \mathbf{f}[\mathbf{g}] + \int\_{1}^{\chi} \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) - \int\_{0}^{\infty} \left( \{t\} - \frac{1}{2} \right) \mathbf{g}'(\mathbf{x} + t) \, dt.$$

(b) If <sup>g</sup> lies in *<sup>C</sup>*2q+<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*2<sup>q</sup> <sup>∩</sup> *<sup>K</sup>*2q+<sup>1</sup> for some <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗, then we have

$$\begin{aligned} \Sigma \mathbf{g}(\mathbf{x}) &= \sigma \mathbf{f}[\mathbf{g}] + \int\_1^\chi \mathbf{g}(t) \, dt - \frac{1}{2} \mathbf{g}(\mathbf{x}) + \sum\_{k=1}^q \frac{B\_{2k}}{(2k)!} \mathbf{g}^{(2k-1)}(\mathbf{x}) \\ &+ \int\_0^\infty \frac{B\_{2q}(\{t\})}{(2q)!} \mathbf{g}^{(2q)}(\mathbf{x} + t) \, dt. \end{aligned}$$

#### **9.6 Limit, Series, and Integral Representations**

We now recall the different representations of multiple log --type functions that we established in this work as well as the way we can generate further identities by integration and differentiation.

Note that, in the special case when <sup>g</sup> lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , both the Eulerian and Weierstrassian forms coincide with the analogue of Gauss' limit, i.e., we have

g(x) <sup>=</sup> <sup>∞</sup> k=1 g(k) <sup>−</sup> <sup>∞</sup> k=0 g(x + k),

and the second series converges uniformly on <sup>R</sup><sup>+</sup> (and tends to zero as <sup>x</sup> → ∞).

**Analogue of Gauss' Limit** By definition of g, we have

$$\Sigma \mathbf{g}(\boldsymbol{\lambda}) := \lim\_{n \to \infty} f\_n^p[\mathbf{g}](\boldsymbol{\lambda}), \qquad \boldsymbol{x} > \mathbf{0}.$$

This is precisely the analogue of Gauss' limit for the gamma function. We have also established that the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subset of <sup>R</sup><sup>+</sup> to g (see our existence Theorem 3.6).

More generally, we have shown that the sequence <sup>n</sup> → <sup>D</sup>rf <sup>p</sup> <sup>n</sup> [g] converges uniformly on any bounded subset of <sup>R</sup><sup>+</sup> to <sup>D</sup>rg (see Theorem 7.5). In particular, both sides of the identity above can be differentiated r times (i.e., the limit and the derivative operator commute).

Moreover, the function f <sup>p</sup> <sup>n</sup> [g](x) − g(x) can be (repeatedly) integrated on any bounded interval of [0,∞) and the integral converges to zero as n → ∞ (see Proposition 5.18 and Remark 5.19).

**Eulerian and Weierstrassian Forms** We have the following Eulerian form (see Theorem 8.2)

$$\Delta \mathbf{g}(\mathbf{x}) = \left. -\mathbf{g}(\mathbf{x}) + \sum\_{j=1}^{p} \binom{\mathbf{x}}{j} \Delta^{j-1} \mathbf{g}(\mathbf{1}) - \sum\_{k=1}^{\infty} \left( \mathbf{g}(\mathbf{x} + k) - \sum\_{j=0}^{p} \binom{\mathbf{x}}{j} \Delta^{j} \mathbf{g}(k) \right) \mathbf{.}$$

We also have the following Weierstrassian forms if <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>p</sup> (see Theorems 8.5 and 8.7).

(a) If p = 1 + deg g = 0, then

$$
\Delta \mathbf{g}(\mathbf{x}) = \sigma \mathbf{[g]} - \mathbf{g}(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( \mathbf{g}(\mathbf{x} + k) - \int\_{k}^{k+1} \mathbf{g}(t) \, dt \right).
$$

(b) If p = 1 + deg g ≥ 1, then

$$\begin{aligned} \Sigma g(\mathbf{x}) &= \sum\_{j=1}^{p-1} \binom{\mathbf{x}}{j} \Delta^{j-1} g(\mathbf{1}) + \binom{\mathbf{x}}{p} (\Sigma g)^{(p)}(\mathbf{1}) \\ &- \mathbf{g}(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( g(\mathbf{x} + k) - \sum\_{j=0}^{p-1} \binom{\mathbf{x}}{j} \Delta^{j} g(k) - \binom{\mathbf{x}}{p} g^{(p)}(k) \right), \end{aligned}$$

where (g)(p)(1) <sup>=</sup> <sup>g</sup>(p−1) (1) <sup>−</sup> <sup>σ</sup>[g(p)].

Each of the series above converges uniformly on any bounded subset of [0,∞) and can be repeatedly integrated term by term on any bounded interval of [0,∞). It can also be differentiated term by term up to r times.

**Gregory's Formula-Based Series Representation** We also have the following series representation (see Proposition 8.11). Suppose that g lies in *K*<sup>∞</sup> and let x > 0 be so that for every integer q ≥ p the function g is q-convex or q-concave on [x,∞). Suppose also that the sequence <sup>q</sup> → qg(x) is bounded. Then we have

$$
\Sigma g(\mathbf{x}) = \sigma[\mathbf{g}] + \int\_1^\mathbf{x} g(t) \, dt - \sum\_{n=1}^\infty G\_n \, \Delta^{n-1} g(\mathbf{x}) .
$$

Moreover, if these latter assumptions are satisfied for x = 1, then we also have the following analogue of Fontana-Mascheroni's series representation of γ

$$\sigma[\mathbf{g}] \, \, = \sum\_{n=1}^{\infty} G\_n \, \Delta^{n-1} \mathbf{g}(\mathbf{l}).$$

**Integral Representation** We have seen that an integral expression for g can sometimes be obtained by first finding an expression for g(r) when r > 1. This is the elevator method (see Corollary 7.20).

We have

$$(\Sigma g)^{(r)} - \Sigma g^{(r)} = \left. g^{(r-1)}(1) - \sigma [g^{(r)}] \right|$$

and, if r>p,

$$
\sigma[g^{(r)}] = \left. g^{(r-1)}(1) + \sum\_{k=1}^{\infty} g^{(r)}(k) \right\vert\_{r}
$$

Moreover, for any a > 0, we have

$$
\Sigma \mathfrak{g} \;= \; f\_a - f\_a(1),
$$

where fa <sup>∈</sup> *<sup>C</sup>*<sup>r</sup> is defined by

$$\,\_1f\_a(\mathbf{x}) = \sum\_{k=1}^{r-1} c\_k(a) \, \frac{(\mathbf{x} - a)^k}{k!} + \int\_a^\times \frac{(\mathbf{x} - t)^{r-1}}{(r-1)!} \, (\Sigma \mathbf{g})^{(r)}(t) \, dt$$

and, for k = 1,...,r − 1,

$$c\_k(a) := \sum\_{j=0}^{r-k-1} \frac{B\_j}{j!} \left( g^{(j+k-1)}(a) - \int\_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!} (\Sigma g)^{(r)}(t) \, dt \right).$$

#### **9.7 Further Identities and Results**

In this section, we collect the remaining identities and results that may be relevant in our investigation of multiple log --type functions.

**Analogue of Gauss' Multiplication Formula** Let <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and define the function gm : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation gm(x) <sup>=</sup> g( <sup>x</sup> <sup>m</sup> ) for x > 0. Then we have the following analogue of Gauss' multiplication formula (see Sect. 8.6)

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g} \left( \mathbf{x} + \frac{j}{m} \right) \\ \quad = \sum\_{j=1}^{m} \Sigma \mathbf{g} \left( \frac{j}{m} \right) + \Sigma \mathbf{g}\_m(m\mathbf{x}), \qquad \mathbf{x} > \mathbf{0},$$

where

$$\sum\_{j=1}^{m} \Sigma \mathbf{g}\left(\frac{j}{m}\right) = \left[m\,\sigma\,\mathrm{[g]} - \sigma\,\mathrm{[g\_m]} - m\int\_{1/m}^{1} \mathbf{g}(t) \,dt\right]$$

We also have

$$\lim\_{m \to \infty} \frac{\Sigma g\_m(mx) - \Sigma g\_m(m)}{m} = \int\_1^\chi g(t) \, dt, \qquad x > 0,$$

and, if g is integrable at 0,

$$\lim\_{m \to \infty} \frac{1}{m} \Sigma g\_m(mx) = \int\_0^\chi \mathbf{g}(t) \, dt, \qquad x > 0.$$

A related asymptotic result is also given in Proposition 8.30.

**Analogue of Wallis's Product Formula** We present here in a single statement the analogue of Wallis's product formula as given in Proposition 8.49 and Remark 8.53.

Let <sup>g</sup>˜1, <sup>g</sup>˜2, <sup>g</sup>˜<sup>3</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> be the functions defined respectively by the equations

$$
\tilde{\mathbf{g}}\_1(\mathbf{x}) = \Delta \mathbf{g}(2\mathbf{x} - \mathbf{l}), \quad \tilde{\mathbf{g}}\_2(\mathbf{x}) = \Delta \mathbf{g}(2\mathbf{x}), \quad \tilde{\mathbf{g}}\_3(\mathbf{x}) = 2\,\mathbf{g}(2\mathbf{x}), \quad \text{for } \mathbf{x} > \mathbf{0}.
$$

We assume that <sup>g</sup>˜ lies in *<sup>K</sup>*<sup>0</sup> for some ∈ {1, <sup>2</sup>, <sup>3</sup>}.

Let also <sup>θ</sup>1, θ2, θ<sup>3</sup> : <sup>N</sup><sup>∗</sup> <sup>→</sup> <sup>R</sup> be the sequences defined respectively by the equations

$$\begin{aligned} \theta\_1(n) &= \sigma[\tilde{\varrho}\_1] + \int\_1^{n+1} \tilde{\varrho}\_1(t) \, dt - \sum\_{j=1}^{(p-1)\_+} G\_j \, \Delta^{j-1} \tilde{g}\_1(n+1) \,, \\\theta\_2(n) &= g(2n) - g(1) - \sigma[\tilde{\varrho}\_2] - \int\_1^n \tilde{g}\_2(t) \, dt + \sum\_{j=1}^{(p-1)\_+} G\_j \, \Delta^{j-1} \tilde{g}\_2(n) \,, \end{aligned}$$

,

$$\theta\_3(n) = \sigma[\tilde{\mathbf{g}}\_3] - \sigma[\mathbf{g}] + \int\_1^2 (\mathbf{g}(2n+t) - \mathbf{g}(t)) \, dt$$

$$+ \sum\_{j=1}^p G\_j \left( \Delta^{j-1} \mathbf{g}(2n+1) - \Delta^{j-1} \tilde{\mathbf{g}}\_3(n+1) \right)$$

for <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗. Then we have

$$\lim\_{n \to \infty} \left( h(n) + \sum\_{k=1}^{2n} (-1)^{k-1} g(k) \right) \\ = \ 0, 1$$

where h(n) is the function obtained from the series expansion for θ(n) about infinity after removing all the summands that vanish at infinity.

**Restriction to the Natural Integers** The restriction of g to N<sup>∗</sup> is the sum (5.2). This sum can be estimated, e.g., by means of an integral through Gregory's summation formula (6.33) with a bounded remainder (6.37). The representations of g given above can also lead to interesting identities when restricted to the natural integers.

**Analogue of Euler's Series Representation of** *<sup>γ</sup>* When <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>K</sup>*∞, the following series (see (7.4))

$$
\sigma[\mathbf{g}] \, \mathbf{l} = \sum\_{k=1}^{\infty} (\Sigma \mathbf{g})^{(k)}(\mathbf{l}) \, \frac{1}{(k+1)!},
$$

when it converges, provides an analogue of Euler's series representation of γ . It is obtained by integrating term by term the Taylor series expansion of g(x+1) about x = 0.

**Generalized Webster's Functional Equation** This result can be found in Theorem 8.71.

**Analogues of Euler's Reflection Formula and Gauss' Digamma Theorem** These topics are discussed in Sects. 8.9 and 8.10.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 10 Applications to Some Standard Special Functions**

We now apply our results to certain multiple --type functions and multiple log - type functions that are known to be well-studied special functions, namely: the gamma function, the digamma function, the polygamma functions, the q-gamma function, the Barnes G-function, the Hurwitz zeta function and its higher order derivatives, the generalized Stieltjes constants, and the Catalan number function. For recent background on some of these functions, see, e.g., Srivastava and Choi [93].

Each of these examples is examined and studied systematically by following the steps and results given in the previous chapter. When algebraic computations become tedious, a computer algebra system can be of great assistance in executing the details. Further examples will be discussed in the next two chapters.

In this chapter and the next, we occasionally address and solve some secondary but interesting issues. They are then presented and numbered in a *Project* environment.

Most of the applications we consider in this work illustrate how powerful is our theory to produce formulas and identities methodically. Although many of these formulas and identities are already known, to our knowledge they had never been derived from such a general and unified setting.

#### **10.1 The Gamma Function**

Since the Euler gamma function was the starting point of this theory and therefore also Webster's motivating example in his introduction of the --type functions, it is natural to test our results on this function first.

The following investigation of the gamma function does not reveal quite new formulas. However, it can be regarded as a tutorial that clearly demonstrates how our results can be used to carry out this investigation in a systematic way.

In addition to the remarkable book by Artin [11], the interested reader can also find a very good expository tour of the gamma function in Srinivasan's paper [92].

**ID Card** The following table summarizes the ID card corresponding to the log and log-gamma functions.


**Bohr-Mollerup's Theorem** A characterization of the gamma function is given in Bohr-Mollerup's theorem (see Theorem 1.1 and Example 3.2). In the additive notation, we have the following statement.

*All eventually convex or concave solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \ln \mathbf{x}$$

*are of the form* f (x) = c + ln -(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

Using Proposition 3.9, we can also derive the following alternative characterization of the gamma function (see Example 3.11).

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f(x+1) - f(x) = \ln x$$

*that satisfy the asymptotic condition that, for each* x > 0,

$$f(\mathbf{x} + \mathbf{n}) - f(\mathbf{n}) - \mathbf{x} \ln \mathbf{n} \to \mathbf{0} \qquad \text{as } \mathbf{n} \to \mathbf{y} \cdot \mathbf{x}$$

*are of the form* f (x) = c + ln -(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** The value of σ[g] has been discussed in Example 6.5. More precisely, we also have the following values:


• *Inequality*

$$|\sigma[\mathbf{g}]| \le \ln 4 - \frac{5}{4} \approx 0.14.$$

• *Alternative representations of* σ[g] = γ [g]

$$\begin{aligned} \sigma[\mathbf{g}] &= \int\_1^\infty \left( \{t\} \ln \frac{1 + \lfloor t \rfloor}{t} + (1 - \{t\}) \ln \frac{\lfloor t \rfloor}{t} \right) dt \,, \\ \sigma[\mathbf{g}] &= \lim\_{n \to \infty} \left( \ln n! + n - 1 - \left( n + \frac{1}{2} \right) \ln n \right), \\ \sigma[\mathbf{g}] &= \sum\_{k=1}^\infty \left( 1 - \left( k + \frac{1}{2} \right) \ln \left( 1 + \frac{1}{k} \right) \right), \\ \sigma[\mathbf{g}] &= \int\_1^\infty \left( \frac{1}{2} \ln \left( \lfloor t \rfloor^2 + \lfloor t \rfloor \right) - \ln t \right) dt \,, \\ \sigma[\mathbf{g}] &= \int\_1^\infty \frac{\{t\} - \frac{1}{2}}{t} dt \,, \\ \sigma[\mathbf{g}] &= \int\_0^1 \ln \Gamma(t+1) \, dt. \end{aligned}$$

• *Binet's function*

$$J^2[\ln \circ \Gamma](\mathbf{x}) = J(\mathbf{x}) = \ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) + \mathbf{x} - \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} \,, \qquad \mathbf{x} > \mathbf{0}.$$

• *Raabe's formula*

$$\int\_{\chi}^{\chi+1} \ln \Gamma(t) \, dt \, = \frac{1}{2} \ln(2\pi) + x \ln x - x \, , \qquad x > 0.$$

• *Alternative characterization*. The function f (x) = ln -(x) is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> to the equation

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \frac{1}{2} \ln(2\pi) + x \ln x - x \, , \qquad x > 0.$$

**Inequalities** The following inequalities hold for any x > 0, any a ≥ 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1})

$$\left|\ln\Gamma(\chi+a) - \ln\Gamma(\chi) - a\ln\chi\right| \le |a-1|\ln\left(1 + \frac{a}{\chi}\right),$$

$$\left(1 + \frac{a}{\chi}\right)^{-|a-1|} \le \frac{\Gamma(\chi+a)}{\Gamma(\chi)\chi^a} \le \left(1 + \frac{a}{\chi}\right)^{|a-1|}.$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\left| \ln \Gamma(\mathbf{x}) - \sum\_{k=1}^{n-1} \ln k + \sum\_{k=0}^{n-1} \ln(\mathbf{x} + k) - x \ln n \right| \le |\mathbf{x} - \mathbf{1}| \ln \left( 1 + \frac{\mathbf{x}}{n} \right).$$

$$\left( 1 + \frac{\mathbf{x}}{n} \right)^{-|\mathbf{x} - \mathbf{1}|} \le \Gamma(\mathbf{x}) \frac{\mathbf{x}(\mathbf{x} + 1) \cdots (\mathbf{x} + n - 1)}{(n - 1)! n^{\mathbf{x}}} \le \left( 1 + \frac{\mathbf{x}}{n} \right)^{|\mathbf{x} - \mathbf{1}|}.$$

• *Symmetrized Stirling's formula-based inequality*

$$|J(\mathbf{x})| \le \frac{(\mathbf{x} + 1)^2}{2} \ln\left(1 + \frac{1}{\mathbf{x}}\right) - \frac{\mathbf{x}}{2} - \frac{3}{4} \le \frac{1}{2} \ln\left(1 + \frac{1}{\mathbf{x}}\right),$$

$$\left(1 + \frac{1}{\mathbf{x}}\right)^{-\frac{1}{2}} \le \frac{\Gamma(\mathbf{x})}{\sqrt{2\pi} \, e^{-\mathbf{x}} \, x^{\mathbf{x} - \frac{1}{2}}} \le \left(1 + \frac{1}{\mathbf{x}}\right)^{\frac{1}{2}}.$$

• *Burnside's formula-based inequality*

$$\left| \ln \Gamma \left( x + \frac{1}{2} \right) - \frac{1}{2} \ln(2\pi) + x - x \ln x \right|\_{} \le \left| J(x) \right| \left. x \right| $$

• *Generalized Gautschi's inequality*

$$(\mathbf{x} + \lceil a \rceil)^{a - \lceil a \rceil} \le e^{(a - \lceil a \rceil)\,\psi(\mathbf{x} + \lceil a \rceil)} \le \frac{\Gamma(\mathbf{x} + a)}{\Gamma(\mathbf{x} + \lceil a \rceil)} \le (\mathbf{x} + \lfloor a \rfloor)^{a - \lceil a \rceil}.$$

**Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalences as x → ∞,

$$\begin{aligned} \ln \Gamma(\mathbf{x} + a) - \ln \Gamma(\mathbf{x}) - a \ln \mathbf{x} &\to \ \mathbf{0}, \\\\ \ln \Gamma(\mathbf{x}) - \frac{1}{2} \ln(2\pi) + \mathbf{x} - \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} &\to \ \mathbf{0}, \\\\ \ln \Gamma\left(\mathbf{x} + \frac{1}{2}\right) - \frac{1}{2} \ln(2\pi) + \mathbf{x} - \mathbf{x} \ln \mathbf{x} &\to \ \mathbf{0}, \\\\ \Gamma(\mathbf{x} + a) \sim \mathbf{x}^a \,\Gamma(\mathbf{x}), \qquad \ln \Gamma(\mathbf{x} + a) &\sim \mathbf{x} \ln \mathbf{x}, \\\\ \Gamma(\mathbf{x}) \sim \sqrt{2\pi} \, e^{-\mathbf{x}} \,\mathbf{x}^{\mathbf{x} - \frac{1}{2}}, \qquad \Gamma(\mathbf{x} + 1) &\sim \sqrt{2\pi \mathbf{x}} \, e^{-\mathbf{x}} \,\mathbf{x}^{\mathbf{x}}. \end{aligned}$$

*Burnside's approximation* (better than Stirling's approximation)

$$
\Gamma(x) \sim \sqrt{2\pi} \left(\frac{x - \frac{1}{2}}{e}\right)^{x - \frac{1}{2}}.
$$

*Further results* (obtained by differentiation)

$$\begin{array}{ccccc} \psi(\mathbf{x} + a) - \psi(\mathbf{x}) \ \to & \mathbf{0}, & \psi(\mathbf{x}) - \ln \mathbf{x} \ \to & \mathbf{0}, & \psi(\mathbf{x} + a) \ \sim & \ln \mathbf{x} \ , \\\\ \psi\_{k}(\mathbf{x} + a) \ \sim & (-1)^{k-1} \frac{(k-1)!}{\mathbf{x}^{k}} \ , & \psi\_{k}(\mathbf{x}) \ \to & \mathbf{0}, & k \in \mathbb{N}^{\*}. \end{array}$$

**Asymptotic Expansions** For any m, q <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\begin{aligned} \frac{1}{m} \sum\_{j=0}^{m-1} \ln \Gamma\left(\mathbf{x} + \frac{j}{m}\right) &= \frac{1}{2} \ln(2\pi) + \mathbf{x} \ln \mathbf{x} - \mathbf{x} - \frac{1}{2m} \ln \mathbf{x} \\ &+ \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1) \ge^{k} m^{k+1}} + O\left(\mathbf{x}^{-q-1}\right) \dots \end{aligned}$$

Setting m = 1 in this formula, we retrieve the known asymptotic expansion of the log-gamma function ln -(x) as x → ∞ (see, e.g., [93, p. 7])

$$\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1)\,\mathbf{x}^{k}} + O\left(\mathbf{x}^{-q-1}\right), \tag{10.1}$$

or equivalently,

$$J(\boldsymbol{\chi}) = \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1)\,\boldsymbol{x}^{k}} + O\left(\boldsymbol{x}^{-q-1}\right).$$

For instance, setting q = 4 in (10.1) we get

$$
\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \frac{1}{12\mathbf{x}} - \frac{1}{360\mathbf{x}^3} + O\left(\mathbf{x}^{-5}\right).
$$

**Generalized Liu's Formula** For any x > 0 we have

$$
\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \left(\mathbf{x} - \frac{1}{2}\right) \ln \mathbf{x} + \int\_0^\infty \frac{\frac{1}{2} - \{t\}}{t + \mathbf{x}} dt,
$$

or equivalently,

$$J(\mathbf{x}) = \int\_0^\infty \frac{\frac{1}{2} - \{t\}}{t + \mathbf{x}} dt.$$

**Limit, Series, and Integral Representations** We now consider various representations of ln -(x), including the Eulerian and Weierstrassian forms.

• *Eulerian form and related identities*. We have

$$
\ln \Gamma(\mathbf{x}) = \left. -\ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( \ln(\mathbf{x} + k) - \ln k - \mathbf{x} \ln \left( 1 + \frac{1}{k} \right) \right) \right|,
$$

$$
\Gamma(\mathbf{x}) = \frac{1}{\varkappa} \prod\_{k=1}^{\infty} \frac{(1 + \frac{1}{k})^{\varkappa}}{1 + \frac{\varkappa}{k}}.
$$

Upon differentiation and integration, we obtain (cf. Example 8.3)

$$
\psi(\mathbf{x}) = -\frac{1}{\mathbf{x}} - \sum\_{k=1}^{\infty} \left( \frac{1}{\mathbf{x} + k} - \ln \left( 1 + \frac{1}{k} \right) \right),
$$

$$
\psi\_k(\mathbf{x}) = (-1)^{k-1} k! \zeta(k+1, \mathbf{x}), \qquad k \in \mathbb{N}^\*,
$$

$$
\psi\_k = \mathbf{x} - \mathbf{x} \ln \mathbf{x}, \qquad \sum\_{k=1}^{\infty} \left( \frac{1}{(\mathbf{x} + k) \ln \left( 1 + \frac{\mathbf{x}}{\mathbf{x}} \right)} - \frac{\mathbf{x}^2}{\mathbf{x}} \ln \left( \frac{1}{\mathbf{x} + \mathbf{x}} \right) \right).
$$

$$\psi\_{-2}(\mathbf{x}) = \left. \mathbf{x} - \mathbf{x} \ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k) \ln \left( 1 + \frac{\mathbf{x}}{k} \right) - \mathbf{x} - \frac{\mathbf{x}^2}{2} \ln \left( 1 + \frac{1}{k} \right) \right) \right|\_{\mathbf{x} = \mathbf{x}^2}$$

• *Weierstrassian form and related identities*. We have

$$
\ln \Gamma(\mathbf{x}) = \left[ -\gamma \mathbf{x} - \ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( \ln(\mathbf{x} + k) - \ln k - \frac{\mathbf{x}}{k} \right) \right],
$$

$$
\Gamma(\mathbf{x}) = \frac{e^{-\gamma \mathbf{x}}}{\mathbf{x}} \prod\_{k=1}^{\infty} \frac{e^{\frac{\mathbf{x}}{k}}}{1 + \frac{\mathbf{x}}{k}}.
$$

Upon differentiation and integration, we obtain (cf. Example 8.8)

$$
\psi(\mathbf{x}) = \left. -\mathbf{y} - \frac{1}{\mathbf{x}} - \sum\_{k=1}^{\infty} \left( \frac{1}{\mathbf{x} + k} - \frac{1}{k} \right) \right.
$$

$$
\psi\_{-2}(\mathbf{x}) = \left. -\mathbf{y} \right.
\frac{\mathbf{x}^2}{2} + \mathbf{x} - \mathbf{x} \ln \mathbf{x} - \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k) \ln \left( 1 + \frac{\mathbf{x}}{k} \right) - \mathbf{x} - \frac{\mathbf{x}^2}{2k} \right) \dots
$$

• *Gauss' limit and related identities*. The Gauss limit is

$$\ln \Gamma(\mathbf{x}) \, := \lim\_{n \to \infty} \left( \ln(n-1)! - \sum\_{k=0}^{n-1} \ln(x+k) + x \ln n \right) \dots$$

Upon differentiation and integration, we obtain

$$\begin{aligned} \psi(\mathbf{x}) &= \lim\_{n \to \infty} \left( \ln n - \sum\_{k=0}^{n-1} \frac{1}{x+k} \right), \\\\ \psi\_k(\mathbf{x}) &= (-1)^{k+1} k! \zeta(k+1, \mathbf{x}), & k \in \mathbb{N}^\*, \\\\ \psi\_{-2}(\mathbf{x}) &= \lim\_{n \to \infty} \left( n\mathbf{x} - \mathbf{x} \ln \mathbf{x} + (\ln n) \frac{\mathbf{x}^2}{2} - \sum\_{k=1}^{n-1} (\mathbf{x} + k) \ln \left( 1 + \frac{\mathbf{x}}{k} \right) \right). \end{aligned} \tag{10.2}$$

The multiplicative version of Gauss' limit reduces to the following formula (just replace <sup>n</sup> with <sup>n</sup> <sup>+</sup> 1 and note that (n <sup>+</sup> <sup>1</sup>)<sup>x</sup> <sup>∼</sup> nx as <sup>n</sup> → ∞)

$$\Gamma(\mathbf{x}) = \lim\_{n \to \infty} \frac{n! n^{\mathbf{x}}}{\mathbf{x}(\mathbf{x}+1) \cdots (\mathbf{x}+n)}$$

as stated in (1.6). We also have the following alternative form of Gauss' limit, which immediately follows from the Weierstrassian form

$$\Gamma(\mathbf{x}) = \frac{e^{-\mathbf{y}\cdot\mathbf{x}}}{\mathbf{x}} \lim\_{n \to \infty} \prod\_{k=1}^{n} \frac{e^{\frac{\mathbf{x}}{\mathbf{x}}}}{1 + \frac{\mathbf{x}}{\mathbf{x}}} = \lim\_{n \to \infty} \frac{n! \, e^{\mathbf{x}\psi(n)}}{\mathbf{x}(\mathbf{x}+1) \cdots (\mathbf{x}+n)}.$$

This latter limit can also be derived immediately from Gauss' limit and the wellknown fact that ψ(x) − ln x → 0 as x → ∞.

• *Integral representation*. Considering the antiderivative of the digamma function ϕ = ψ as the solution to the equation ϕ = g (using the elevator method), we obtain

$$\ln \Gamma(\mathbf{x}) \, \, = \, \, \psi\_{-1}(\mathbf{x}) \, \, = \, \int\_{1}^{\chi} \psi(t) \, dt.$$

• *Gregory's formula-based series representation*. For any x > 0 we have the series representation (see Example 8.12)

$$\ln \Gamma(\mathbf{x}) = \frac{1}{2} \ln(2\pi) - \mathbf{x} + \mathbf{x} \ln \mathbf{x} - \sum\_{n=0}^{\infty} G\_{n+1} \Delta^n \ln(\mathbf{x}) \tag{10.3}$$

$$\Gamma = \frac{1}{2}\ln(2\pi) - x + x\ln x - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(x+k).$$

Setting x = 1 in this identity yields the following analogue of Fontana-Mascheroni series

$$\sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(k+1) \, = \, -1 + \frac{1}{2} \ln(2\pi) \,.$$

**Gauss' Multiplication Formula** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\prod\_{j=0}^{m-1} \Gamma\left(\frac{\chi+j}{m}\right) \, = \, (2\pi)^{\frac{m-1}{2}} m^{\frac{1}{2}-\chi} \, \Gamma(\chi) .$$

Corollary 8.33 provides the following asymptotic equivalence for any x > 0

$$
\Gamma(m\boldsymbol{x})^{\frac{1}{m}} \sim \; e^{-\boldsymbol{x}} \boldsymbol{x}^{\boldsymbol{\chi}} m^{\boldsymbol{\chi}} \qquad \text{as } m \to \infty,
$$

which also follows from Stirling's formula.

**Wallis's Product Formula** We have the following limits

$$\lim\_{n \to \infty} \frac{1 \cdot 3 \cdots (2n - 1)}{2 \cdot 4 \cdots (2n)} \sqrt{n} = \frac{1}{\sqrt{\pi}},$$

$$\lim\_{n \to \infty} \left(\frac{1}{2} \ln(\pi n) + \sum\_{k=1}^{2n} (-1)^{k-1} \ln k\right) = 0.$$

**Restriction to the Natural Integers** We have the well-known identity

$$
\Gamma(n+1) = n! \,, \qquad n \in \mathbb{N}.
$$

Gregory's formula states that for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup> we have

$$\ln n! = 1 - n + (n+1)\ln n - \sum\_{j=1}^{q} G\_j \left( (\Delta^{j-1} \ln)(n) - (\Delta^{j-1} \ln)(\mathbf{l}) \right) - R\_n^q \dots$$

with

$$|\mathcal{R}\_n^q| \le \overline{G}\_q \left| (\Delta^q \ln)(n) - (\Delta^q \ln)(1) \right|.$$

Moreover, Eq. (10.1) yields the following asymptotic expansion as x → ∞. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup>∗, we have

$$\ln n! = \frac{1}{2}\ln(2\pi n) - n + n\ln n + \sum\_{k=1}^{q} \frac{B\_{k+1}}{k(k+1)n^k} + O\left(n^{-q-1}\right).$$

Similarly, Eq. (10.3) yields the following series representation

$$\ln n! = \frac{1}{2}\ln(2\pi) - n + (n+1)\ln n - \sum\_{k=0}^{\infty} G\_{k+1} \Delta^k \mathbf{g}(n), \quad n \in \mathbb{N}^\*.$$

We also have Liu's formula

$$\ln n! = \frac{1}{2}\ln(2\pi n) - n + n\ln n + \int\_n^\infty \frac{\frac{1}{2} - \{t\}}{t} dt \dots$$

Many other representations of ln n! can be derived from, e.g., the limit and series representations of the log-gamma function described above.

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any a > 0, there is a unique solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> to the equation

$$\prod\_{j=0}^{m-1} f(x+aj) = x$$

such that ln <sup>f</sup> lies in *<sup>K</sup>*<sup>0</sup> (or in *<sup>K</sup>*1), namely

$$f(\mathbf{x}) = (am)^{\frac{1}{m}} \frac{\Gamma(\frac{\mathbf{x} + a}{am})}{\Gamma(\frac{\mathbf{x}}{am})} .$$

**Analogue of Euler's Series Representation of** *γ* The Taylor series expansion of ln -(x + 1) about x = 0 is

$$\ln \Gamma(\mathbf{x} + \mathbf{l}) = -\gamma \mathbf{x} + \sum\_{k=2}^{\infty} \frac{\xi(k)}{k} (-\mathbf{x})^k, \qquad |\mathbf{x}| < 1.$$

Integrating both sides of this equation on (0, 1), we obtain (see Example 7.16)

$$\sum\_{k=2}^{\infty} (-1)^k \frac{1}{k(k+1)} \zeta(k) \, \, = \, \frac{1}{2} \, \nu - 1 + \frac{1}{2} \ln(2\pi) \, \, .$$

**Reflection Formula** For any <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>, we have -(x)-(1 − x) = π csc(πx).

#### **10.2 The Digamma and Harmonic Number Functions**

Let us now see what we get if we apply our results to both the digamma function x → ψ(x) and the harmonic number function x → Hx. Recall first that the identity

$$H\_{\ge -l} = \psi(\chi) + \chi$$

holds for any x > 0.

**ID Card** We have the following data about the functions 1/x and ψ(x):


**Analogue of Bohr-Mollerup's Theorem** The digamma function can be characterized as follows.

*All eventually monotone solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \frac{1}{x}$$

*are of the form* f (x) <sup>=</sup> <sup>c</sup> <sup>+</sup> ψ(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

It is noteworthy that this characterization immediately follows from the basic version when p = 0 of our Theorem 1.4, which was established by John [49].

Interestingly, this characterization enables us to establish almost instantly the following identities for every x > 0,

$$H\_{\chi-1} = \psi(\chi) + \chi \;= \int\_0^1 \frac{1 - t^{\chi - 1}}{1 - t} dt \;.$$

Indeed, each of the three expressions above vanishes at x = 1 and is an eventually increasing solution to the equation f (x + 1) − f (x) = 1/x. Hence, they must coincide on <sup>R</sup>+. We can actually prove many other representations similarly; for instance, the following Gauss and Dirichlet integral representations (see, e.g., [93, p. 26])

$$
\psi(\mathbf{x}) = \int\_0^\infty \left( \frac{e^{-t}}{t} - \frac{e^{-\chi t}}{1 - e^{-t}} \right) dt, \qquad \mathbf{x} > \mathbf{0},
$$

$$
\psi(\mathbf{x}) = \int\_0^\infty \left( e^{-t} - \frac{1}{(t+1)^\chi} \right) \frac{dt}{t}, \qquad \mathbf{x} > \mathbf{0}.
$$

Kairies [51] obtained a variant of the characterization of the digamma function above by replacing the eventual monotonicity with the convexity property. This variant is also immediate from our results since <sup>g</sup> also lies in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*1.

Using Proposition 3.9, we can also derive the following alternative characterization of the digamma function.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \frac{1}{x}$$

*that satisfy the asymptotic condition that, for each* x > 0,

$$f(\mathbf{x} + \mathbf{n}) - f(\mathbf{n}) \to \mathbf{0} \qquad \text{as } \mathbf{n} \to \mathbb{N} \text{ } \infty$$

*are of the form* f (x) <sup>=</sup> <sup>c</sup> <sup>+</sup> ψ(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** We already know that σ[g] = γ (see Example 8.19). Hence we have the following table:


• *Alternative representations of* σ[g] = γ [g] = γ

$$\begin{aligned} \label{eq:SDAR} \gamma &= \lim\_{n \to \infty} \left( \sum\_{k=1}^n \frac{1}{k} - \ln n \right) &= \sum\_{k=1}^\infty \left( \frac{1}{k} - \ln \left( 1 + \frac{1}{k} \right) \right), \\\label{eq:SDAR} \gamma &= \int\_1^\infty \left( \frac{1}{\lfloor t \rfloor} - \frac{1}{t} \right) dt &= \frac{1}{2} - \int\_1^\infty \frac{\{t\} - \frac{1}{2}}{t^2} dt \,, \\\label{eq:SDAR} \gamma &= \int\_0^1 H\_{\mathcal{I}} \, dt \, . \end{aligned}$$

• *Generalized Binet's function*. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{q+1}[\psi](\mathbf{x}) = \psi(\mathbf{x}) - \ln \mathbf{x} + \sum\_{j=1}^{q} |G\_j| \mathbf{B}(\mathbf{x}, j),$$

where (x, y) → B(x, y) is the beta function.

• *Analogue of Raabe's formula* (see Example 8.19)

$$\int\_{\infty}^{\chi+1} \psi(t) \, dt \, = \, \ln x \,, \qquad x > 0.$$

• *Alternative characterization*. The function f = ψ is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> to the equation

$$\int\_{\infty}^{\chi+1} f(t) \, dt \, = \, \ln x \,, \qquad x > 0.$$

**Inequalities** The following inequalities hold for any x > 0, any a ≥ 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1})

$$|\psi(\mathfrak{x} + a) - \psi(\mathfrak{x})| \le \lceil a \rceil \frac{1}{\mathfrak{x}}.$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\left| \psi(\mathbf{x}) + \boldsymbol{\gamma} - \sum\_{k=1}^{n-1} \frac{1}{k} + \sum\_{k=0}^{n-1} \frac{1}{\boldsymbol{\gamma} + k} \right| \le \left\lceil \boldsymbol{\gamma} \right\rceil \frac{1}{n}.$$

• *Symmetrized Stirling's and Burnside's formulas-based inequalities*

$$\left|\psi\left(x+\frac{1}{2}\right)-\ln x\right| \le \left|\psi(x)-\ln x\right| \le \frac{1}{x}.$$

Considering for instance the value p = 1 in Corollary 6.12, we see that the latter inequality can be refined into

$$\frac{1}{2(\varkappa+1)} - \frac{1}{\varkappa} \le \psi(\varkappa) - \ln \varkappa \le -\frac{1}{2(\varkappa+1)}.$$

• *Generalized Gautschi's inequality*

$$\frac{a - \lceil a \rceil}{\lfloor x + \lfloor a \rfloor} \le \psi(x + a) - \psi(x + \lceil a \rceil) \le (a - \lceil a \rceil)\psi\_1(x + \lceil a \rceil) \le \frac{a - \lceil a \rceil}{x + \lceil a \rceil}.$$

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalence as x → ∞,

$$
\psi(\mathbf{x} + a) - \psi(\mathbf{x}) \to 0, \qquad \psi(\mathbf{x}) - \ln \mathbf{x} \to 0, \qquad \psi(\mathbf{x} + a) \sim \ln \mathbf{x}.
$$

*Burnside-like approximation* (better than Stirling-like approximation)

$$
\psi(\mathbf{x}) - \ln\left(\mathbf{x} - \frac{1}{2}\right) \to \mathbf{0}.
$$

*Further results* (obtained by differentiation)

$$
\psi\_k(\mathbf{x} + a) \sim (-1)^{k-1} \frac{(k-1)!}{\mathbf{x}^k}, \qquad \psi\_k(\mathbf{x}) \to 0, \qquad k \in \mathbb{N}^\*.
$$

**Asymptotic Expansions** For any m, q <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\frac{1}{m}\sum\_{j=0}^{m-1}\psi\left(\mathbf{x} + \frac{j}{m}\right) \ = \ln \mathbf{x} + \sum\_{k=1}^{q} \frac{(-1)^{k-1}}{k \ (m\mathbf{x})^{k}} + O\left(\mathbf{x}^{-q-1}\right). \tag{10.4}$$

Setting m = 1 in this formula, we retrieve the known asymptotic expansion of ψ(x) as x → ∞ (see, e.g., [93, p. 36])

$$
\psi(\mathbf{x}) = \ln \mathbf{x} + \sum\_{k=1}^{q} \frac{(-1)^{k-1}}{k \, \mathbf{x}^{k}} + O\left(\mathbf{x}^{-q-1}\right),
$$

or equivalently,

$$J^1[\psi](\mathbf{x}) = \sum\_{k=1}^q \frac{(-1)^{k-1} \, \_B\mathbf{B}\_k}{\_k \mathbf{x}^k} + O\left(\mathbf{x}^{-q-1}\right).$$

For instance, setting q = 5 we get

$$\psi(x) = \ln x - \frac{1}{2x} - \frac{1}{12x^2} + \frac{1}{120x^4} + O\left(x^{-6}\right).$$

**Generalized Liu's Formula** For any x > 0 we have

$$\psi(x) = \ln x - \frac{1}{2x} + \int\_0^\infty \frac{\{t\} - \frac{1}{2}}{(t+x)^2} dt.$$

**Limit and Series Representations** Let us now examine the main limit and series representations of the digamma function that we obtain from our results.

• *Eulerian and Weierstrassian forms*. We have

$$\begin{aligned} \psi(\mathbf{x}) &= \left| -\mathbf{y} - \frac{1}{\mathbf{x}} + \sum\_{k=1}^{\infty} \left( \frac{1}{k} - \frac{1}{\mathbf{x} + k} \right) \right|, \\\\ \psi(\mathbf{x}) &= \left| -\frac{1}{\mathbf{x}} + \sum\_{k=1}^{\infty} \left( \ln \left( 1 + \frac{1}{k} \right) - \frac{1}{\mathbf{x} + k} \right) \right|. \end{aligned}$$

Upon differentiation, we obtain

$$\psi\_k(\mathbf{x}) = (-1)^{k-1} k! \, \zeta(k+1, \mathbf{x}), \qquad k \in \mathbb{N}^\*.$$

Moreover, integrating the Eulerian (resp. Weierstrassian) form of the digamma function on (0,x), we retrieve the Weierstrassian (resp. Eulerian) form of the log-gamma function.


$$\psi(x) = \ln x - \sum\_{n=1}^{\infty} |G\_n| \, \mathbf{B}(x, n) = \ln x - \sum\_{n=1}^{\infty} \frac{|G\_n|}{n \binom{x + n - 1}{n}}.$$

Setting x = 1 in this identity, we retrieve the Fontana-Mascheroni series (see, e.g., Blagouchine [20, p. 379])

$$\nu = \sum\_{n=1}^{\infty} \frac{|G\_n|}{n} \dots$$

Setting x = 2, we get

$$1 - \ln 2 = \sum\_{n=1}^{\infty} \frac{|G\_n|}{n+1},$$

which is consistent with the identities given in Example 8.16.

**Analogue of Gauss' Multiplication Formula** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have (see, e.g., Berndt [18, p. 5])

$$\sum\_{j=0}^{m-1} \psi \left( \mathbf{x} + \frac{j}{m} \right) \;= \; m(\psi(m\mathbf{x}) - \ln m) \tag{10.5}$$

and

$$\sum\_{j=0}^{m-1} H\_{\chi+j/m} = \left( m(H\_{m\chi+m-1} - \ln m) \right).$$

Corollary 8.33 provides the following formula for any x > 0

$$\lim\_{m \to \infty} (H\_{m\chi - 1} - H\_{m - 1}) = \ln x.$$

**Analogue of Wallis's Product Formula** The analogue of Wallis's formula reduces to the classical identity

$$\sum\_{k=1}^{\infty} (-1)^{k-1} \frac{1}{k} = \begin{array}{c} \ln 2 \dots \end{array}$$

*Project 10.1* Find the analogue of Wallis's formula for the function g(x) = ψ(x). We apply our method (see Sect. 9.7) to the function

$$
\tilde{g}(x) = \Delta g(2x) = \frac{1}{2x}.
$$

Thus, we get

$$h(\mathbf{x}) = \psi(2n) - \psi(1) - \frac{1}{2}\gamma - \frac{1}{2}\ln n \\ = \frac{1}{2}(\gamma + \ln(4n)) + O\left(n^{-1}\right),$$

and the analogue of Wallis's formula for g(x) = ψ(x) is

$$\lim\_{n \to \infty} \left( -\ln(4n) + 2\sum\_{k=1}^{2n} (-1)^k \psi(k) \right) \\ = \mathcal{N} \cdot \mathcal{J}$$

This provides yet another formula to define Euler's constant γ . ♦

**Restriction to the Natural Integers** For any <sup>n</sup> <sup>∈</sup> <sup>N</sup> we have

$$H\_n = \sum\_{k=1}^n \frac{1}{k} \cdot$$

Gregory's formula states that for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup> we have

$$H\_{n-1} = \ln n - \sum\_{j=1}^{q} |G\_j| \left( \mathbf{B}(n, j) - \frac{1}{j} \right) - R\_n^q \ ,$$

with

$$|R\_n^q| \le \left| \overline{G}\_q \right| \mathbf{B}(n, q+1) - \frac{1}{q} \Big| \, .$$

Many representations of Hn can be derived from, e.g., the limit and series representations of the digamma function described above. For instance, using the generalized Liu formula, we get (see also Remark 8.47)

$$H\_n = \ln n + \nu + \frac{1}{2n} + \int\_n^\infty \frac{\{t\} - \frac{1}{2}}{t^2} dt = \ln n + \frac{1}{2} + \frac{1}{2n} - \int\_1^n \frac{\{t\} - \frac{1}{2}}{t^2} dt \dots$$

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any a > 0, there is a unique eventually monotone solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$\sum\_{j=0}^{m-1} f(x+aj) = \frac{1}{x},$$

namely

$$f(\mathbf{x}) = \frac{1}{am}\psi\left(\frac{\mathbf{x} + a}{am}\right) - \frac{1}{am}\psi\left(\frac{\mathbf{x}}{am}\right).$$

**Analogue of Euler's Series Representation of** *γ* We have ψ(1) = −<sup>γ</sup> and

$$\psi\_k(1) = \left(-1\right)^{k-1} k! \,\zeta(k+1) \,, \qquad k \in \mathbb{N}^\*.$$

Thus, the Taylor series expansion of ψ(x + 1) about x = 0 is

$$H\_{\chi} = \psi(\mathbf{x} + 1) + \boldsymbol{\gamma} \ = \sum\_{k=1}^{\infty} (-1)^{k-1} \zeta(k+1) \boldsymbol{x}^{k}, \qquad |\mathbf{x}| < 1.$$

Integrating both sides of this equation on (0, 1), we retrieve Euler's series representation of γ

$$\lambda := \sum\_{k=2}^{\infty} (-1)^k \frac{\zeta(k)}{k} \dots$$

**Analogue of the Reflection Formula** For any <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>, we have

$$
\psi(\mathfrak{x}) - \psi(1-\mathfrak{x}) = -\pi \cot(\pi \mathfrak{x}) .
$$

#### **10.3 The Polygamma Functions**

We now investigate the polygamma functions ψν for any <sup>ν</sup> <sup>∈</sup> <sup>Z</sup>. In this context, our results will prove to be particularly interesting when ν ≤ −2, that is, when the function ψν has a strictly positive asymptotic degree.

For any <sup>ν</sup> <sup>∈</sup> <sup>Z</sup>, we set gν <sup>=</sup> ψν ; hence we have <sup>g</sup> <sup>ν</sup> = gν+<sup>1</sup> and ψ <sup>ν</sup> = ψν+1. It follows immediately that

$$
\Sigma \mathcal{g}\_{\boldsymbol{\nu}}(\boldsymbol{x}) \;= \; \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \psi\_{\boldsymbol{\nu}}(\boldsymbol{1}) .
$$

(The cases ν = 0 and ν = −1 correspond to the functions ψ(x) and ln -(x), respectively, and have been already considered in the previous sections.) We will often deal with the cases ν ≥ 1 and ν ≤ −1 separately. In the latter case, we will often consider the value ν = −2 for simplicity and brevity.

**ID Card When** *ν* **<sup>≥</sup> <sup>1</sup>** Here we clearly have

$$g\_{\boldsymbol{\nu}}(\boldsymbol{x}) \;=\; D\_{\boldsymbol{\chi}}^{\boldsymbol{\nu}}\frac{1}{\boldsymbol{x}} = \; (-1)^{\boldsymbol{\nu}}\frac{\boldsymbol{\nu}!}{\boldsymbol{x}^{\boldsymbol{\nu}+1}}$$

and (see Example 7.6)

$$
\psi\_\nu(1) = (-1)^{\nu+1} \nu! \zeta(\nu+1).
$$

Hence we have the following table.


**ID Card When** *ν* **≤ −<sup>1</sup>** Using (8.9), we obtain the following recurrence to compute the functions gν . For any integer ν ≤ −1, we have

$$\begin{aligned} \mathbf{g}\_{\boldsymbol{\vartheta}-1}(\mathbf{x}) &= \int\_{\boldsymbol{\chi}}^{\boldsymbol{\chi}+1} \boldsymbol{\psi}\_{\boldsymbol{\vartheta}}(t) \, dt \ &= \int\_{0}^{\boldsymbol{\chi}} \mathbf{g}\_{\boldsymbol{\vartheta}}(t) \, dt + \int\_{0}^{1} \boldsymbol{\psi}\_{\boldsymbol{\vartheta}}(t) \, dt \\ &= \int\_{0}^{\boldsymbol{\chi}} \mathbf{g}\_{\boldsymbol{\vartheta}}(t) \, dt + \boldsymbol{\psi}\_{\boldsymbol{\vartheta}-1}(1). \end{aligned}$$

In particular,

$$\lim\_{\lambda \to 0} \varrho\_{\mathbb{V}^{-1}}(\mathbf{x}) \;= \; \psi\_{\mathbb{V}^{-1}}(\mathbf{l}) \;= \int\_0^1 \psi\_{\mathbb{V}}(t) \, dt \;.$$

Unfolding this recurrence, we obtain g−1(x) = ln x and, for any integer ν ≤ −1,

$$g\_{\mathbb{V}^{-1}}(\mathbf{x}) = \int\_0^\chi \frac{(\mathbf{x} - t)^{-\nu - 1}}{(-\nu - 1)!} \ln t \, dt + \sum\_{j=0}^{-\nu - 1} \psi\_{\mathbb{V} + j - 1}(1) \frac{\mathbf{x}^j}{j!},\tag{10.6}$$

which is precisely the (−ν − 1)th order Taylor expansion of gν−1(x).

Thus, we have

$$\begin{aligned} g\_{-1}(\mathbf{x}) &= \ln \mathbf{x} \,, \\ g\_{-2}(\mathbf{x}) &= \mathbf{x} \ln \mathbf{x} - \mathbf{x} + \frac{1}{2} \ln(2\pi) \,, \\ g\_{-3}(\mathbf{x}) &= \frac{1}{2} \mathbf{x}^2 \ln \mathbf{x} - \frac{3}{4} \mathbf{x}^2 + \left(\frac{1}{2} \mathbf{x} + \frac{1}{4}\right) \ln(2\pi) + \ln A .\end{aligned}$$

Hence the following ID card


**Analogue of Bohr-Mollerup's Theorem** The function ψν can be characterized as follows.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* f (x <sup>+</sup> <sup>1</sup>) <sup>−</sup> f (x) <sup>=</sup> gν (x) *that lie in <sup>K</sup>*(−ν)<sup>+</sup> *are of the form* f (x) <sup>=</sup> cν <sup>+</sup> ψν (x), *where* cν <sup>∈</sup> <sup>R</sup>.

When ν ≥ 1, this characterization enables us to prove easily the following integral representation of ψν

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) \;=\;(-1)^{\boldsymbol{\nu}-1} \int\_{0}^{\infty} \frac{t^{\boldsymbol{\nu}}e^{-\boldsymbol{\chi}t}}{1-e^{-t}}dt\;, \qquad \boldsymbol{x} > 0.$$

Indeed, both sides of this identity coincide at x = 1 and are eventually monotone solutions to the equation f <sup>=</sup> gν . Hence they must coincide on <sup>R</sup>+.

**Extended ID Card** The asymptotic constant σ[gν ] satisfies the following identity

$$
\sigma[\mathcal{g}\_\mathbb{V}] = \int\_0^1 \psi\_\mathbb{V}(t+1) \, dt - \psi\_\mathbb{V}(\mathbb{I}) \, = \, \mathcal{g}\_{\mathbb{V}-1}(\mathbb{I}) - \psi\_\mathbb{V}(\mathbb{I}).
$$

Moreover, if ν ≥ 1 we also have

$$\sigma[\mathbf{g}\_{\mathbb{V}}] = \left. \mathbb{Y}[\mathbf{g}\_{\mathbb{V}}] = \sum\_{k=1}^{\infty} \mathbf{g}\_{\mathbb{V}}(k) - \int\_{1}^{\infty} \mathbf{g}\_{\mathbb{V}}(t) \, dt \right|$$

$$= (-1)^{\mathbb{V}} \Gamma(\mathbb{v}) \left( \mathbb{v} \, \xi(\mathbb{v} + 1) - 1 \right)$$

and hence the following values


For ν ≤ −1 we have the values


For instance we have

$$
\lfloor \sigma \lceil g\_{-2} \rceil \rfloor = \ln A - \frac{1}{4} \ln(2\pi), \qquad \sigma \lceil g\_{-2} \rceil = \ln A + \frac{1}{4} \ln(2\pi) - \frac{3}{4},
$$

and

$$\gamma[g\_{-2}] = \ln A + \frac{1}{6}\ln 2 - \frac{1}{3} \dots$$

We also have the following identities.

• *Alternative representations of* σ[g]

$$\begin{split} \sigma[\mathbf{g}\_{\boldsymbol{\nu}}] &= \sum\_{j=1}^{(-\boldsymbol{\nu})\_{+}} G\_{j} \, \Delta^{j-1} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{l}) - \sum\_{k=1}^{\infty} \left( \Delta \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{k}) - \sum\_{j=0}^{(-\boldsymbol{\nu})\_{+}} G\_{j} \, \Delta^{j} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{k}) \right), \\ \sigma[\mathbf{g}\_{\boldsymbol{\nu}}] &= \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{k}) + \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{1}) - \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{n}) + \sum\_{j=1}^{(-\boldsymbol{\nu})\_{+}} G\_{j} \, \Delta^{j-1} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{n}) \right), \\ \sigma[\mathbf{g}\_{\boldsymbol{\nu}}] &= \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{k}) + \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{1}) - \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{n}) - \sum\_{j=1}^{(-\boldsymbol{\nu})\_{+}} \frac{B\_{j}}{j!} \, \mathbf{g}\_{\boldsymbol{\nu}+j-1}(\mathbf{n}) \right). \end{split}$$

If ν ≥ 1, then

$$
\sigma[g\_\upsilon] = \left(-1\right)^\upsilon \upsilon! \left(\frac{1}{2} - (\upsilon + 1) \int\_1^\infty \frac{\{t\} - \frac{1}{2}}{t^{\upsilon + 2}} dt\right) .
$$

If ν ≤ −1, then for any integer q ≥ 
−ν/2,

$$\sigma[\mathbf{g}\_{\boldsymbol{\nu}}] = \frac{1}{2}\mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{l}) - \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} \mathbf{g}\_{\boldsymbol{\nu}+2k-1}(\mathbf{l}) - \int\_{\mathbf{l}}^{\infty} \frac{B\_{2q}(\{t\})}{(2q)!} \mathbf{g}\_{\boldsymbol{\nu}+2q}(t) \, dt \dots$$

• *Representations of* γ [g]

$$\begin{aligned} \mathcal{Y}[\mathbf{g}\_{\boldsymbol{\nu}}] &= \sigma[\mathbf{g}\_{\boldsymbol{\nu}}] - \sum\_{j=1}^{(-\boldsymbol{\nu})\_{+}} G\_{j} \Delta^{j-1} \mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{l}) \;, \\ \mathcal{Y}[\mathbf{g}\_{\boldsymbol{\nu}}] &= \int\_{1}^{\infty} \left( \sum\_{j=0}^{(-\boldsymbol{\nu})\_{+}} G\_{j} \Delta^{j} \mathbf{g}\_{\boldsymbol{\nu}}(\lfloor \boldsymbol{t} \rfloor) - \mathbf{g}\_{\boldsymbol{\nu}}(\boldsymbol{t}) \right) d\boldsymbol{t} \;, \\ \mathcal{Y}[\mathbf{g}\_{\boldsymbol{\nu}}] &= \int\_{1}^{\infty} \left( \sum\_{j=0}^{(-\boldsymbol{\nu})\_{+}} \binom{\lfloor \boldsymbol{t} \rfloor}{j} \Delta^{j} \mathbf{g}\_{\boldsymbol{\nu}}(\lfloor \boldsymbol{t} \rfloor) - \mathbf{g}\_{\boldsymbol{\nu}}(\boldsymbol{t}) \right) d\boldsymbol{t} \;. \end{aligned}$$

• *Generalized Binet's function*. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{q+1}[\psi\_\nu](\mathbf{x}) = \psi\_\nu(\mathbf{x}) - \mathbf{g}\_{\nu-1}(\mathbf{x}) + \sum\_{j=1}^q G\_j \, \Delta^{j-1} \mathbf{g}\_\nu(\mathbf{x}) .$$

For instance,

$$J^3[\psi\_{-2}](\mathbf{x}) = \psi\_{-2}(\mathbf{x}) - \frac{1}{12}(\mathbf{x} + 1)\ln(\mathbf{x} + 1) + \frac{1}{12}(3\mathbf{x} - 1)^2$$

$$-\frac{1}{12}\mathbf{x}(6\mathbf{x} - 7)\ln\mathbf{x} - \frac{1}{2}\mathbf{x}\ln(2\pi) - \ln A.$$

• *Analogue of Raabe's formula*

$$\int\_{\mathfrak{x}}^{\mathfrak{x}+1} \psi\_{\mathbb{V}}(t) \, dt \;= \left. \operatorname{g}\_{\mathbb{V}-1}(\mathfrak{x}) \right. \qquad x > 0.$$

• *Alternative characterization*. The function f = ψν is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*(−ν)<sup>+</sup> to the equation

$$\int\_{\mathfrak{x}}^{\mathfrak{x}+1} f(t) \, dt \, = \left. \mathrm{g}\_{\mathbb{V}-1}(\mathfrak{x}) \right., \qquad \mathfrak{x} > 0.$$

**Inequalities When** *ν* **<sup>≥</sup> <sup>1</sup>** The following inequalities hold for any x > 0, any <sup>a</sup> <sup>≥</sup> 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1})

$$|\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}+\boldsymbol{a}) - \psi\_{\boldsymbol{\nu}}(\boldsymbol{x})| \le \lceil \boldsymbol{a} \rceil \frac{\nu!}{\boldsymbol{x}^{\boldsymbol{\nu}+1}} \dots$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\left| \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \psi\_{\boldsymbol{\nu}}(1) - \sum\_{k=1}^{n-1} \mathbf{g}\_{\boldsymbol{\nu}}(k) + \sum\_{k=0}^{n-1} \mathbf{g}\_{\boldsymbol{\nu}}(\boldsymbol{x} + k) \right| \leq \left\lceil \boldsymbol{x} \right\rceil \frac{\boldsymbol{\nu}!}{n^{\boldsymbol{\nu}+1}} \cdot \boldsymbol{x}$$

• *Symmetrized Stirling's and Burnside's formulas-based inequalities*

$$\left|\psi\_{\boldsymbol{\nu}}\left(\mathbf{x} + \frac{1}{2}\right) - \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{x})\right| \leq \left|\psi\_{\boldsymbol{\nu}}(\mathbf{x}) - \mathbf{g}\_{\boldsymbol{\nu}-1}(\mathbf{x})\right| \leq \left|\mathbf{g}\_{\boldsymbol{\nu}}(\mathbf{x})\right|.$$

Considering for instance the value p = 1 in Corollary 6.12, we see that the latter inequality can be refined into

$$\left|\psi\_{\mathbb{V}}(\mathbf{x}) - \mathbf{g}\_{\mathbb{V}-1}(\mathbf{x}) + \frac{1}{2}\mathbf{g}\_{\mathbb{V}}(\mathbf{x})\right| \le \frac{1}{2} \left|\Delta \mathbf{g}\_{\mathbb{V}}(\mathbf{x})\right|.$$

• *Additional inequality*

$$|\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}+\boldsymbol{n})| = \left| \sum\_{k=n}^{\infty} g\_{\boldsymbol{\nu}}(\boldsymbol{x}+k) \right| \le \left| \psi\_{\boldsymbol{\nu}}(\boldsymbol{n}) \right|.$$

• *Generalized Gautschi's inequality*

$$\begin{aligned} \left((-1)^{\upsilon-1}(a-\lceil a\rceil)\,\psi\_{\upsilon+1}(\mathbf{x}+\lceil a\rceil) &\leq (-1)^{\upsilon-1}(\psi\_{\upsilon}(\mathbf{x}+a)-\psi\_{\upsilon}(\mathbf{x}+\lceil a\rceil)) \\ &\leq (-1)^{\upsilon-1}(a-\lceil a\rceil)\,\varrho\_{\upsilon}(\mathbf{x}+\lfloor a\rfloor) .\end{aligned}$$

**Inequalities When** *ν* **≤ −<sup>1</sup>** The following inequalities hold for any x > 0, any <sup>a</sup> <sup>≥</sup> 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1,..., −ν})

$$\begin{aligned} & \left| \psi\_{\boldsymbol{\vartheta}}(\boldsymbol{x} + \boldsymbol{a}) - \psi\_{\boldsymbol{\vartheta}}(\boldsymbol{x}) - \sum\_{j=1}^{-\nu} \binom{\boldsymbol{a}}{j} \Delta^{j-1} \boldsymbol{g}\_{\boldsymbol{\vartheta}}(\boldsymbol{x}) \right| \\ & \leq \left| \binom{a-1}{-\nu} \right| \left| \Delta^{-\nu-1} \boldsymbol{g}\_{\boldsymbol{\vartheta}}(\boldsymbol{x} + \boldsymbol{a}) - \Delta^{-\nu-1} \boldsymbol{g}\_{\boldsymbol{\vartheta}}(\boldsymbol{x}) \right| \\ & \leq \left\| \boldsymbol{a} \right\| \left| \binom{a-1}{-\nu} \right| \left| \Delta^{-\nu} \boldsymbol{g}\_{\boldsymbol{\vartheta}}(\boldsymbol{x}) \right| . \end{aligned}$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\begin{aligned} \left| \left| \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \psi\_{\boldsymbol{\nu}}(\boldsymbol{1}) - f\_{\boldsymbol{n}}^{-\boldsymbol{\nu}}[\![\![g\_{\boldsymbol{\nu}}]\!](\boldsymbol{x}) \right| \leq \left| \binom{\boldsymbol{\kappa}-1}{-\boldsymbol{\nu}} \right| \left| \Delta^{-\boldsymbol{\nu}-1} g\_{\boldsymbol{\nu}}(\boldsymbol{x}+\boldsymbol{n}) - \Delta^{-\boldsymbol{\nu}-1} g\_{\boldsymbol{\nu}}(\boldsymbol{n}) \right| \\ \leq \left\lceil \boldsymbol{\kappa} \right\rceil \left| \binom{\boldsymbol{\kappa}-1}{-\boldsymbol{\nu}} \right| \left| \Delta^{-\boldsymbol{\nu}} g\_{\boldsymbol{\nu}}(\boldsymbol{n}) \right|, \end{aligned}$$

where

$$\mathbb{I}f\_n^{-\nu}[\mathbf{g}\_\boldsymbol{\boldsymbol{\nu}}](\mathbf{x}) = \sum\_{k=1}^{n-1} \mathbf{g}\_\boldsymbol{\boldsymbol{\nu}}(k) - \sum\_{k=0}^{n-1} \mathbf{g}\_\boldsymbol{\boldsymbol{\nu}}(\mathbf{x} + k) + \sum\_{j=1}^{-\nu} \left( \overset{\times}{\text{\textquotedbl{}}}\right) \Delta^{j-1} \mathbf{g}\_\boldsymbol{\boldsymbol{\nu}}(\mathbf{n}) \dots$$

• *Symmetrized Stirling's formula-based inequality*

$$\begin{aligned} & \left| \psi\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) - \boldsymbol{g}\_{\boldsymbol{\nu}-1}(\boldsymbol{\chi}) + \sum\_{j=1}^{-\boldsymbol{\nu}} G\_{j} \Delta^{j-1} \boldsymbol{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) \right| \\ & \leq \int\_{0}^{1} \left| \binom{t-1}{-\boldsymbol{\nu}} \right| \left| \Delta^{-\boldsymbol{\nu}-1} \boldsymbol{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi}+t) - \Delta^{-\boldsymbol{\nu}-1} \boldsymbol{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) \right| dt \\ & \leq \overline{G}\_{-\boldsymbol{\nu}} \left| \Delta^{-\boldsymbol{\nu}} \boldsymbol{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) \right| \end{aligned}$$

• *Generalized Gautschi's inequality* Considering the function ψ−2, we obtain

$$\begin{aligned} \left(a - \lceil a \rceil \right) \psi\_{-1}(\mathbf{x} + \lceil a \rceil) &\leq \psi\_{-2}(\mathbf{x} + a) - \psi\_{-2}(\mathbf{x} + \lceil a \rceil) \\ &\leq (a - \lceil a \rceil) \operatorname{g}\_{-2}(\mathbf{x} + \lfloor a \rfloor), \end{aligned}$$

for any x + a ≥ x0, where x<sup>0</sup> = 1.461 ... is the unique positive zero of the digamma function.

**Generalized Stirling's and Related Formulas When** *ν* **<sup>≥</sup> <sup>1</sup>** For any <sup>a</sup> <sup>≥</sup> 0, we have the following limit and asymptotic equivalence as x → ∞,

$$\psi\_{\boldsymbol{\nu}}(\mathbf{x} + \boldsymbol{a}) \sim \mathcal{g}\_{\boldsymbol{\nu} - \mathbf{l}}(\mathbf{x}) = (-\mathbf{l})^{\boldsymbol{\nu} - \mathbf{l}} \frac{(\boldsymbol{\nu} - \mathbf{l})!}{\boldsymbol{\chi}^{\boldsymbol{\nu}}}, \qquad \psi\_{\boldsymbol{\nu}}(\mathbf{x}) \to \mathbf{0}.$$

*Burnside-like approximation* (better than Stirling-like approximation)

$$\psi\_{\upsilon}(\mathbf{x}) - \mathbf{g}\_{\upsilon - 1}(\mathbf{x} - \frac{1}{2}) \to \mathbf{0} \dots$$

**Generalized Stirling's and Related Formulas When** *ν* **≤ −<sup>1</sup>** For any <sup>a</sup> <sup>≥</sup> 0, we have the following limits and asymptotic equivalence as x → ∞,

$$\begin{aligned} \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}+\boldsymbol{a}) - \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \sum\_{j=1}^{-\boldsymbol{\nu}} \binom{\boldsymbol{a}}{j} \Delta^{j-1} g\_{\boldsymbol{\nu}}(\boldsymbol{x}) & \to & 0, \\\\ \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - g\_{\boldsymbol{\nu}-1}(\boldsymbol{x}) + \sum\_{j=1}^{-\boldsymbol{\nu}} G\_{j} \Delta^{j-1} g\_{\boldsymbol{\nu}}(\boldsymbol{x}) & \to & 0, \\\\ \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \sum\_{k=0}^{-\boldsymbol{\nu}} \frac{B\_{k}}{k!} g\_{\boldsymbol{\nu}+k-1}(\boldsymbol{x}) & \to & 0, \end{aligned}$$

$$
\psi\_\nu(\mathfrak{x} + a) \sim \mathfrak{g}\_{\nu - 1}(\mathfrak{x}) \sim \frac{1}{( - \nu)!} \mathfrak{x}^{-\nu} \ln \mathfrak{x}.
$$

When ν = −2 for instance, these limits reduce to

$$\int\_{\chi}^{x+a} \ln \Gamma(t) \, dt - a \ln \left( \sqrt{2\pi} \frac{x^{\chi}}{e^{\chi}} \right) - \left( \frac{a}{2} \right) \ln \left( \frac{(x+1)^{\chi+1}}{e \cdot x^{\chi}} \right) \to 0,$$

$$\psi\_{-2}(\mathbf{x}) - \frac{1}{12} (\mathbf{x} + 1) \ln(\mathbf{x} + 1) + \frac{1}{12} (3\mathbf{x} - 1)^2$$

$$- \frac{1}{12} \mathbf{x} (6\mathbf{x} - 7) \ln \mathbf{x} - \frac{1}{2} \mathbf{x} \ln(2\pi) \to \ln A,$$

$$\psi\_{-2}(\mathbf{x}) - \frac{1}{12} (6\mathbf{x}^2 - 6\mathbf{x} + 1) \ln \mathbf{x} + \frac{1}{4} (3\mathbf{x} - 2) \mathbf{x} - \frac{1}{2} \mathbf{x} \ln(2\pi) \to \ln A,$$

$$\psi\_{-2}(\mathbf{x} + a) \sim \frac{1}{2} \mathbf{x}^2 \ln \mathbf{x} \,.$$

**Asymptotic Expansions** For any m, q <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\frac{1}{m}\sum\_{j=0}^{m-1}\psi\_{\boldsymbol{\nu}}\left(\mathbf{x}+\frac{j}{m}\right) \\ = \sum\_{k=0}^{q}\frac{B\_k}{m^k k!}\mathbf{g}\_{\boldsymbol{\nu}+k-1}(\mathbf{x}) + O\left(\mathbf{g}\_{\boldsymbol{\nu}+q}(\mathbf{x})\right) \\ \dots$$

Setting m = 1 in this formula, we obtain

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) \;= \sum\_{k=0}^{q} \frac{B\_k}{k!} \, \boldsymbol{g}\_{\boldsymbol{\nu}+k-1}(\boldsymbol{\chi}) + O\left(\boldsymbol{g}\_{\boldsymbol{\nu}+q}(\boldsymbol{\chi})\right) \;.$$

For instance the asymptotic expansion of ψ−<sup>2</sup> is

$$
\psi\_{-2}(\mathbf{x}) = \frac{1}{12}(6\mathbf{x}^2 - 6\mathbf{x} + 1)\ln\mathbf{x} - \frac{1}{4}(3\mathbf{x} - 2)\mathbf{x} + \frac{1}{2}\mathbf{x}\ln(2\pi) + \ln A
$$

$$
+ \frac{1}{720\mathbf{x}^2} + \mathcal{O}\left(\mathbf{x}^{-4}\right).
$$

**Generalized Liu's Formula** For any ν ≥ 1 and any x > 0 we have

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) = (-1)^{\boldsymbol{\nu}-1} \, \Gamma(\boldsymbol{\nu}) \left( \frac{2\boldsymbol{x} + \boldsymbol{\nu}}{2\boldsymbol{x}^{\boldsymbol{\nu}+1}} + \boldsymbol{\nu}(\boldsymbol{\nu} + 1) \int\_0^\infty \frac{\frac{1}{\boldsymbol{\Delta}} - \{t\}}{(t+\boldsymbol{x})^{\boldsymbol{\nu}+2}} \, dt \right) \dots$$

For ν = −2 and any x > 0 we have

$$\begin{split} \psi\_{-2}(\mathbf{x}) &= \frac{1}{12} (6\mathbf{x}^2 - 6\mathbf{x} + 1) \ln \mathbf{x} - \frac{1}{4} (3\mathbf{x} - 2)\mathbf{x} + \frac{1}{2} \mathbf{x} \ln(2\pi) + \ln A \\ &+ \int\_0^\infty \frac{B\_2(\{t\})}{2(\mathbf{x} + t)} dt. \end{split}$$

**Limit and Series Representations When** *ν* **<sup>≥</sup> <sup>1</sup>** The Eulerian and Weierstrassian forms of ψν reduce to

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) = \ -\sum\_{k=0}^{\infty} g\_{\boldsymbol{\nu}}(\boldsymbol{x} + k) \ = \ (-1)^{\boldsymbol{\nu}-1} \ \boldsymbol{\nu}! \xi(\boldsymbol{\nu}+1, \boldsymbol{x})$$

and this series converges uniformly on <sup>R</sup>+.

**Limit and Series Representations When** *ν* **≤ −<sup>1</sup>** The analogue of Gauss' limit is

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) = \psi\_{\boldsymbol{\nu}}(\boldsymbol{1}) + \lim\_{n \to \infty} f\_n^{-\boldsymbol{\nu}}[\mathcal{g}\_{\boldsymbol{\nu}}](\boldsymbol{x}),$$

and both sides can be integrated on any bounded subset of [0,∞) (the limit and the integral commute). They can also be differentiated infinitely many times (the limit and the derivative operator commute).

For instance, when ν = −2 we obtain

$$\psi\_{-2}(\mathbf{x}) = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} k \ln k - \sum\_{k=0}^{n-1} (\mathbf{x} + k) \ln(\mathbf{x} + k) + \mathbf{x} \left( n \ln n + \frac{1}{2} \ln(2\pi) \right) \right)$$

$$+ \binom{\mathbf{x}}{2} \left( (n+1) \ln \left( 1 + \frac{1}{n} \right) + \ln n - 1 \right)).$$

Comparing this formula with that of (10.2), we see that the latter is less complicated, since it was produced from less terms in its polynomial part. Now, differentiating the formula above, we obtain a limit representation for ln -(x), but the Gauss limit is less complicated. In this context, finding the simplest limit representations seems to be an interesting problem.

The Eulerian and Weistrassian representations of ψν take the following forms

$$\begin{aligned} \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \psi\_{\boldsymbol{\nu}}(1) &= -g\_{\boldsymbol{\nu}}(\boldsymbol{x}) + \sum\_{j=1}^{-\boldsymbol{\nu}} \binom{\boldsymbol{\nu}}{j} \Delta^{j-1} g\_{\boldsymbol{\nu}}(1) \\ &+ \sum\_{k=1}^{\infty} \left( -g\_{\boldsymbol{\nu}}(\boldsymbol{x} + k) + \sum\_{j=0}^{-\boldsymbol{\nu}} \binom{\boldsymbol{\nu}}{j} \Delta^{j} g\_{\boldsymbol{\nu}}(k) \right) \end{aligned}$$

and

$$\begin{aligned} \psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) - \psi\_{\boldsymbol{\nu}}(1) &= -g\_{\boldsymbol{\nu}}(\boldsymbol{x}) + \sum\_{j=1}^{-\nu-1} \binom{\boldsymbol{x}}{j} \Delta^{j-1} g\_{\boldsymbol{\nu}}(1) - \boldsymbol{\nu} \binom{\boldsymbol{x}}{-\boldsymbol{\nu}} \\ &+ \sum\_{k=1}^{\infty} \left( -g\_{\boldsymbol{\nu}}(\boldsymbol{x} + k) + \sum\_{j=0}^{-\nu-1} \binom{\boldsymbol{x}}{j} \Delta^{j} g\_{\boldsymbol{\nu}}(k) + \binom{\boldsymbol{x}}{-\boldsymbol{\nu}} \frac{1}{k} \right), \end{aligned}$$

respectively. These series can be integrated term by term on any bounded subset of [0,∞). They can also be differentiated term by term infinitely many times.

For instance, when ν = −2, both identities above reduce to

$$\psi\_{-2}(\mathbf{x}) = \ln \left( \frac{(2\pi)^{\frac{1}{2}\mathbf{x}} (\frac{4}{e})^{\binom{\mathbf{x}}{2}}}{\mathbf{x}^{\mathbf{x}}} \prod\_{k=1}^{\infty} \frac{(1+2/k)^{(k+2)\binom{\mathbf{x}}{2}}}{(1+\mathbf{x}/k)^{\mathbf{x}+k} \left(1+1/k\right)^{(k+1)\mathbf{x}\left(\mathbf{x}-2\right)}} \right)$$

and

$$\psi\_{-2}(\boldsymbol{x}) = \ln \left( \frac{(2\pi)^{\frac{1}{2}\chi} e^{-\chi\left(\frac{\boldsymbol{x}}{2}\right)}}{\chi^{\chi}} \prod\_{k=1}^{\infty} \frac{e^{\frac{1}{k}\binom{\chi}{2}} (1 + 1/k)^{(k+1)\chi}}{(1 + \chi/k)^{\chi+k}} \right) .$$

Integrating both the Eulerian and Weierstrassian forms of ln -(x), we obtain the following representations (which are simpler than the previous ones since less terms are involved; see also Examples 8.3 and 8.8)

$$\psi\_{-2}(\mathbf{x}) = \ln \left( \frac{e^{\mathbf{x}}}{\mathbf{x}^{\mathbf{x}}} \prod\_{k=1}^{\infty} \frac{e^{\mathbf{x}} (1 + 1/k)^{\mathbf{x}^2/2}}{(1 + \mathbf{x}/k)^{\mathbf{x} + k}} \right)$$

and

$$\psi\_{-2}(\mathbf{x}) = \ln \left( e^{-\mathbf{y}\boldsymbol{\chi}^2/2} \frac{e^{\boldsymbol{\chi}}}{\boldsymbol{\chi}^{\boldsymbol{\chi}}} \prod\_{k=1}^{\infty} \frac{e^{\boldsymbol{\chi} + \boldsymbol{\chi}^2/(2k)}}{(1 + \boldsymbol{\chi}/k)^{\boldsymbol{\chi} + k}} \right).$$

Here again, finding the simplest Eulerian and Weierstrassian forms remains an interesting problem.

**Integral Representation** For any <sup>ν</sup> <sup>∈</sup> <sup>Z</sup>, we have

$$
\psi\_\mathbb{v}(\mathbf{x}) = \psi\_\mathbb{v}(\mathbf{l}) + \int\_1^\chi \psi\_{\mathbb{v}+1}(\mathbf{t}) \, d\mathbf{t}.
$$

If ν ≥ 1, then ψν is not integrable at x = 0 (since gν is not). If ν ≤ −1, then ψν is integrable at 0 by definition and we have

$$\psi\_{\upsilon-1}(\mathbf{x}) = \int\_0^\mathbf{x} \psi\_{\upsilon}(t) \, dt \, = \int\_0^\mathbf{x} \frac{(\mathbf{x} - t)^{-\upsilon - 1}}{(-\upsilon - 1)!} \ln \Gamma(t) \, dt \,.$$

**Gregory's Formula-Based Series Representation** Proposition 8.11 gives the following series representation: for any x > 0 we have

$$\begin{aligned} \psi\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) &= \mathcal{g}\_{\boldsymbol{\nu}-1}(\boldsymbol{\chi}) - \sum\_{n=0}^{\infty} G\_{n+1} \, \Delta^{n} \mathcal{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi}) \\ &= \mathcal{g}\_{\boldsymbol{\nu}-1}(\boldsymbol{\chi}) - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^{k} \binom{n}{k} \mathcal{g}\_{\boldsymbol{\nu}}(\boldsymbol{\chi} + k) \ . \end{aligned}$$

Setting x = 1 in this identity yields the analogue of Fontana-Mascheroni series. For instance, taking ν = 1, we derive the identity (see, e.g., Merlini et al. [72, p. 1920])

$$\sum\_{n=1}^{\infty} |G\_n| \frac{H\_n}{n} = \frac{\pi^2}{6} - 1 \dots$$

Taking ν = 2, we obtain

$$\sum\_{n=1}^{\infty} |G\_n| \frac{\psi\_1(n+1) - H\_n^2}{n} = \left| 1 - 2\,\xi(3) + \gamma \,\frac{\pi^2}{6} \right.$$

**Analogue of Gauss' Multiplication Formula** Assume first that ν ≥ 1. Differentiating repeatedly both sides of the multiplication formula (10.5) for the digamma function <sup>ψ</sup>, we obtain the following formula. For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \psi\_\nu \left( \frac{x+j}{m} \right)\_+ = \left. m^{\nu+1} \right\psi\_\nu(x).$$

Moreover, Corollary 8.33 provides the following limit

$$\lim\_{m \to \infty} m^{\mathbb{v}} \psi\_{\mathbb{v}}(m\mathbb{x}) \, = \, \text{g}\_{\mathbb{v}-1}(\mathbb{x}), \qquad \text{x} > 0.$$

Assume now that ν ≤ −1. Applying Theorem 8.27 to the function gν , we obtain that for any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > <sup>0</sup>

$$\sum\_{j=0}^{m-1} \psi\_{\boldsymbol{\nu}} \left( \frac{\boldsymbol{x} + j}{m} \right) \\ = \sum\_{j=1}^{m-1} \psi\_{\boldsymbol{\nu}} \left( \frac{j}{m} \right) + \psi\_{\boldsymbol{\nu}}(1) + \Sigma\_{\boldsymbol{x}} \, \mathop{\,} \boldsymbol{g}\_{\boldsymbol{\nu}} \left( \frac{\boldsymbol{x}}{m} \right) \\ \dots$$

Let us expand this formula in the special case when ν = −2. First, we have

$$\log\_{-2}\left(\frac{\chi}{m}\right) = \frac{1}{m}\lg\_{-2}(\chi) - x\frac{\ln m}{m} + \frac{m-1}{m}\psi\_{-2}(1)$$

and hence

$$\Delta\_{\mathbf{x}} g\_{-2} \left( \frac{\mathbf{x}}{m} \right) \\ = \frac{1}{m} \psi\_{-2}(\mathbf{x}) - \binom{\mathbf{x}}{2} \frac{\ln m}{m} + \left( \frac{m-1}{m} \mathbf{x} - 1 \right) \psi\_{-2}(1) .$$

Using Proposition 8.28, after some algebra we also obtain

$$\sum\_{j=1}^{m-1} \psi\_{-2} \left( \frac{j}{m} \right) \\ = \left( 1 - \frac{1}{m} \right) \ln A - \frac{\ln m}{12 \, m} + (m - 1) \ln((2\pi)^{\frac{1}{4}} A) \dots$$

Now, collecting terms, we finally get the following multiplication formula for ψ−<sup>2</sup>

$$\begin{aligned} \sum\_{j=0}^{m-1} \psi\_{-2} \left( \frac{\mathbf{x} + j}{m} \right) &= \frac{1}{m} \,\psi\_{-2}(\mathbf{x}) - \frac{1}{12m} \left( 6\mathbf{x}^2 - 6\mathbf{x} + 1 \right) \ln m \\ &+ (m - 1) \ln(2\pi) \left( \frac{\mathbf{x}}{2m} + \frac{1}{4} \right) + \left( m - \frac{1}{m} \right) \ln A .\end{aligned}$$

Setting m = 2 in the formula above, we obtain the following analogue of Legendre's duplication formula

$$\begin{split} \psi\_{-2}\left(\frac{\chi}{2}\right) + \psi\_{-2}\left(\frac{\chi+1}{2}\right) &= \frac{1}{2}\psi\_{-2}(\chi) - \frac{1}{24}\left(6\chi^2 - 6\chi + 1\right)\ln 2 \\ &+ \frac{1}{4}\ln(2\pi)\left(\chi+1\right) + \frac{3}{2}\ln A. \end{split}$$

Taking x = 0 in this latter identity, we obtain

$$
\psi\_{-2}\left(\frac{1}{2}\right) \\
= \frac{5}{24}\ln 2 + \frac{3}{2}\ln A + \frac{1}{4}\ln \pi \\
\dots
$$

Moreover, Corollary 8.33 provides the following limit

$$\lim\_{m \to \infty} \left( \frac{1}{m^2} \psi\_{-2}(mx) - \frac{x^2}{2} \ln m \right) \\ = \frac{1}{2} x^2 \ln x - \frac{3}{4} x^2, \qquad x > 0.$$

**Analogue of Wallis's Product Formula** If ν ≥ 1, then the analogue of Wallis's formula is simply

$$\sum\_{k=1}^{\infty} (-1)^{k-1} g\_{\boldsymbol{\nu}}(k) = (-1)^{\boldsymbol{\nu}} (1 - 2^{-\boldsymbol{\nu}}) \,\boldsymbol{\nu}! \,\xi(\boldsymbol{\nu} + 1),$$

or equivalently,

$$\sum\_{k=1}^{\infty} (-1)^{k-1} g\_{\mathbb{V}}(k) \;=\; (-1)^{\mathbb{V}} \nu! \; \eta(\nu+1),$$

where η is Dirichlet's eta function. In the case when ν = −2, after a bit of calculus we obtain the following analogue of Wallis's formula

$$\lim\_{n \to \infty} \left( h(n) + \sum\_{k=1}^{2n} (-1)^{k-1} g\_{-2}(k) \right) \\ = \frac{1}{12} \ln 2 - 3 \ln A.$$

where

$$h(n) \;=\; \left(n + \frac{1}{4}\right) \ln n - n(1 - \ln 2).$$

*Project 10.2* Find the analogue of Wallis's formula for the function g(x) = ψ−2(x). After some algebra, we obtain

$$\lim\_{n \to \infty} \left( h(n) + \sum\_{k=1}^{2n} (-1)^{k-1} \psi\_{-2}(k) \right) \\ = \ln A - \frac{1}{12} \ln 2 \dots$$

where

$$h(n) := n^2 \ln(2n) - \frac{3}{2}n^2 + \frac{1}{2}n \ln(2\pi) - \frac{1}{12} \ln n.$$

This formula is a little harder to obtain than the former one; it requires the computation of both functions ψ−2(x) and 2 xψ−2(2x) using the elevator method (Corollary 7.20) with r = 2. That is,

$$\begin{split} \Sigma \psi\_{-2}(\mathbf{x}) &= -\frac{1}{12} \mathbf{x}(\mathbf{x} - \mathbf{1})(2\mathbf{x} - \mathbf{1}) + \frac{1}{4} \mathbf{x}(\mathbf{x} + \mathbf{1}) \ln(2\pi) \\ &+ 2\mathbf{x} \ln A + (\mathbf{x} - \mathbf{1}) \,\psi\_{-2}(\mathbf{x}) - 2 \,\psi\_{-3}(\mathbf{x}) \end{split} \tag{10.7}$$

and

$$2\operatorname{\Sigma}\_{\mathbf{x}}\psi\_{-2}(2\mathbf{x}) = -\frac{1}{6}\operatorname{\mathbf{x}}(2\mathbf{x}-1)(4\mathbf{x}-1) + (4\mathbf{x}+3)\ln A$$

$$+\frac{1}{12}\left(-24\mathbf{x}^2 + 48\mathbf{x} + 5\right)\ln 2 - 4\,\psi\_{-2}(\mathbf{x})$$

$$+2\operatorname{\mathbf{x}}\,\psi\_{-2}(2\mathbf{x}) - 2\,\psi\_{-2}\left(\mathbf{x} + \frac{1}{2}\right) - 2\,\psi\_{-3}(2\mathbf{x}).$$

These formulas can also be verified using the difference operator. ♦

**Restriction to the Natural Integers When** *<sup>ν</sup>* **<sup>≥</sup> <sup>1</sup>** For any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, we have

$$
\psi\_\nu(n) - \psi\_\nu(1) = \sum\_{k=1}^{n-1} g\_\nu(k) \\
= (-1)^\nu \nu! \sum\_{k=1}^{n-1} \frac{1}{k^{\nu+1}}.
$$

In particular,

$$\psi\_{\boldsymbol{\upsilon}}(1) = \ -\sum\_{k=1}^{\infty} g\_{\boldsymbol{\upsilon}}(k).$$

Gregory's formula states that for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup> we have

$$\begin{aligned} \sum\_{k=1}^{n-1} g\_{\boldsymbol{\upnu}}(k) &= g\_{\boldsymbol{\upnu}-1}(n) - g\_{\boldsymbol{\upnu}-1}(1) \\ &- \sum\_{j=1}^{q} G\_j \left( \Delta^{j-1} g\_{\boldsymbol{\upnu}}(n) - \Delta^{j-1} g\_{\boldsymbol{\upnu}}(1) \right) - R\_n^q \dots \end{aligned}$$

with

$$|R\_n^q| \le \left| \overline{G}\_q \left| \Delta^q g\_\mathbb{v}(n) - \Delta^q g\_\mathbb{v}(1) \right| \right|.$$

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, there is a unique solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$\sum\_{j=0}^{m-1} f\left(x + \frac{j}{m}\right) = \lg\_{\mathbb{V}}(x)$$

that lies in *<sup>K</sup>*(−ν)+, namely

$$f(\mathbf{x}) := \psi\_{\mathbb{V}}\left(\mathbf{x} + \frac{1}{m}\right) - \psi\_{\mathbb{V}}(\mathbf{x})\,.$$

**Analogue of Euler's Series Representation of** *γ* Assume first that <sup>ν</sup> <sup>≥</sup> 1. In this case, for any <sup>k</sup> <sup>∈</sup> <sup>N</sup> we have

$$
\psi\_{\nu}^{(k)}(1) = \psi\_{\nu+k}(1) = (-1)^{\nu+k-1} (\nu+k)! \zeta(\nu+k+1).
$$

Thus, the Taylor series expansion of ψν (x + 1) about x = 0 is

$$\psi\_{\boldsymbol{\nu}}(\mathbf{x}+\mathbf{l}) = \sum\_{k=0}^{\infty} (-1)^{\boldsymbol{\nu}+k-1} \frac{(\boldsymbol{\nu}+k)!}{k!} \xi(\boldsymbol{\nu}+k+1) \, \boldsymbol{x}^{k}, \qquad |\boldsymbol{x}| < 1.$$

Integrating both sides of this equation on (0, 1), we obtain the identity

$$g\_{\upsilon-1}(1) = \sum\_{k=0}^{\infty} (-1)^{\upsilon+k-1} \frac{(\upsilon+k)!}{(k+1)!} \xi(\upsilon+k+1) \dots$$

We proceed similarly when ν ≤ −1. To keep the computations simple, let us assume that ν = −2. We then have

$$
\psi\_{-2}(\mathbf{l}) = \frac{1}{2} \ln(2\pi), \quad \psi\_{-2}'(\mathbf{l}) = \psi\_{-1}(\mathbf{l}) = 0, \quad \psi\_{-2}''(\mathbf{l}) = \psi\_0(\mathbf{l}) = -\gamma,
$$

and for any integer k ≥ 3,

$$
\psi\_{-2}^{(k)}(1) = \psi\_{k-2}(1) = (-1)^{k-1}(k-2)!\zeta(k-1).
$$

Thus, the Taylor series expansion of ψ−2(x + 1) about x = 0 is

$$\psi\_{-2}(\mathbf{x}+\mathbf{l}) = \frac{1}{2}\ln(2\pi) - \chi\frac{\mathbf{x}^2}{2} + \sum\_{k=3}^{\infty}(-1)^{k-1}\frac{\xi(k-1)}{(k-1)k}\mathbf{x}^k, \qquad |\mathbf{x}| < 1.$$

Integrating both sides of this equation on (0, 1), we obtain

$$\sum\_{k=2}^{\infty} (-1)^k \frac{\xi(k)}{k(k+1)(k+2)} = \frac{1}{6}\mathcal{Y} - \frac{3}{4} + \frac{1}{4}\ln(2\pi) + \ln A \dots$$

**Analogue of the Reflection Formula** Assume first that ν ≥ 1. Differentiating the reflection formula for ψ repeatedly, we obtain the following formula. For any <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>, we have

$$
\psi\_\vee(\mathbf{x}) - (-1)^\upsilon \psi\_\vee(1-\mathbf{x}) = -\pi \, D^\vee \cot(\pi \mathbf{x}) .
$$

When ν ≤ −1, a reflection formula on (0, 1) can be obtained by integrating both sides of the identity

$$
\ln \Gamma(\chi) + \ln \Gamma(1-x) = \ln \pi - \ln \sin(\pi x).
$$

For example, for any x ∈ (0, 1) we have

$$
\psi\_{-2}(\mathbf{x}) - \psi\_{-2}(1-\mathbf{x}) = \mathbf{x} \ln \pi - \frac{1}{2} \ln(2\pi) - \int\_0^\chi \ln \sin(\pi t) \, dt \dots
$$

As a byproduct, we obtain

$$\int\_0^{\frac{1}{2}} \ln \sin(\pi t) \, dt \, = \, -\, \frac{1}{2} \ln 2 \dots$$

## **10.4 The** *q***-Gamma Function**

For any 0 <q< 1, the q-gamma function <sup>q</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> is defined by the equation (see, e.g., [93, p. 490])

$$\Gamma\_q(\mathbf{x}) = (1-q)^{1-\chi} \prod\_{k=0}^{\infty} \frac{1-q^{k+1}}{1-q^{\chi+k}} = (1-q)^{1-\chi} \frac{(q;q)\_{\infty}}{(q^{\chi};q)\_{\infty}} \qquad \text{for } \mathbf{x} > \mathbf{0}. \tag{10.8}$$

Here we use the standard notation

$$(a;q)\_\infty = \prod\_{k=0}^\infty \left(1 - aq^k\right).$$

Note that these functions should not to be confused with the multiple gamma functions discussed in Sect. 5.2 (although the same symbols are used).

The function fq (x) = ln <sup>q</sup> (x) is a convex solution satisfying fq (1) = 0 to the equation fq <sup>=</sup> gq on <sup>R</sup>+, where gq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is the function defined by the equation

gq (x) = ln <sup>1</sup> <sup>−</sup> <sup>q</sup><sup>x</sup> 1 − q for x > 0.

Since gq lies in *<sup>D</sup>*1∩*K*<sup>1</sup> (and deg gq <sup>=</sup> 0), by the uniqueness theorem we must have

$$
\ln \Gamma\_q(\mathbf{x}) = \,\, \Sigma \mathbf{g}\_q(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0}. \tag{10.9}
$$

Askey [13] proved an analogue of the Bohr-Mollerup theorem for q . However, as Webster [98, p. 615] already observed, this is actually an immediate consequence of the uniqueness Theorem 3.1 in the special case when p = 1.

Let us now investigate this function in the light of our results.

*Remark 10.3* When q > 1, the q-gamma function <sup>q</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> is also defined by Eq. (10.9). In this case, using L'Hospital's rule we can readily see that gq(x) → ln q as x → ∞, and hence deg gq = 1. An analogue of the Bohr-Mollerup characterization for q was established by Moak [74]. We can see now that this characterization is a trivial consequence of our uniqueness Theorem 3.1 in the special case when p = 2. The complete analysis of q through our results is similar to the case when 0 <q< 1 and is left to the reader. ♦ **ID Card** As discussed above, the function <sup>q</sup> is a --type function and we immediately derive the following basic information.


**Analogue of Bohr-Mollerup's Theorem** The q-gamma function can be characterized as follows.

*All eventually convex or concave solutions* fq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_q(\mathbf{x} + \mathbf{l}) - f\_q(\mathbf{x}) \ = \ln \frac{1 - q^{\mathbf{x}}}{1 - q}$$

*are of the form* fq (x) = cq + ln <sup>q</sup> (x), *where* cq <sup>∈</sup> <sup>R</sup>.

Using Proposition 3.9, we can also derive the following alternative characterization of the q-gamma function.

*All solutions* fq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_q(\mathbf{x} + 1) - f\_q(\mathbf{x}) \ = \ln \frac{1 - q^{\mathbf{x}}}{1 - q}$$

*that satisfy the asymptotic condition that, for each* x > 0,

$$f\_q(\mathbf{x} + \mathbf{n}) - f\_q(\mathbf{n}) - \mathbf{x} \ln \frac{1 - q^n}{1 - q} \to 0 \qquad \text{as } \mathbf{n} \to \mathbb{N} \text{ } \infty$$

*are of the form* fq (x) = cq + ln <sup>q</sup> (x), *where* cq <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** Interestingly, El Bachraoui [35] recently established the following analogue of Raabe's formula

$$\int\_{\chi}^{\chi+1} \ln \Gamma\_q(t) \, dt \, = \left(\frac{1}{2} - x\right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^x) + \ln(q; q)\_{\infty}, \qquad x \ge 0, 1$$

where

$$\operatorname{Li}\_s(z) := \sum\_{k=1}^{\infty} \frac{z^k}{k^s}$$

is the polylogarithm function. This formula provides immediately the following values

$$
\lfloor \overline{\sigma} \lg q \rfloor = \frac{1}{2} \ln(1 - q) - \frac{\zeta(2)}{\ln q} + \ln(q; q)\_{\infty}, \tag{10.10}
$$

$$\sigma[\!g\_q] = -\frac{1}{2}\ln(1-q) - \frac{1}{\ln q} \text{Li}\_2(q) + \ln(q;q)\_\infty \,\,, \tag{10.11}$$

and the integral

$$\int\_{1}^{\chi} \mathbf{g}\_{q}(\mathbf{t}) \, dt \, = \left( 1 - \mathbf{x} \right) \ln(1 - q) - \frac{1}{\ln q} \left( \text{Li}\_{2}(q^{\chi}) - \text{Li}\_{2}(q) \right) \,.$$

We then have the following values


• *Alternative representations of* σ[gq ] = γ [gq ]

$$\begin{aligned} \sigma[\mathbf{g}\_q] &= \int\_0^1 \ln \Gamma\_q(t+1) \, dt \,, \\ \sigma[\mathbf{g}\_q] &= \log[q] \int\_1^\infty \left(\frac{1}{2} - \{t\}\right) \frac{q^t}{1 - q^t} \, dt \,, \\ \sigma[\mathbf{g}\_q] &= \int\_1^\infty \ln \frac{(1 - q^{\lfloor t \rfloor})^{1/2} (1 - q^{\lfloor t + 1 \rfloor})^{1/2}}{1 - q^t} \, dt \,, \\ \sigma[\mathbf{g}\_q] &= \frac{1}{2} \sum\_{k=1}^\infty \ln \left( (1 - q^k)(1 - q^{k+1}) \right) - \frac{1}{\ln q} \operatorname{Li}\_2(q) \, . \end{aligned}$$

• *Generalized Binet's function*

$$J^2[\ln \diamond \Gamma\_q](\mathbf{x}) = \ln \Gamma\_q(\mathbf{x}) + (\mathbf{x} - \mathbf{l})\ln(\mathbf{l} - q) + \frac{1}{\ln q} \text{Li}\_2(q^{\mathbf{x}}) + \frac{1}{2} \ln(\mathbf{l} - q^{\mathbf{x}}),$$

$$-\ln(q; q)\_{\infty}.$$

• *Alternative characterization*. The function fq (x) = ln <sup>q</sup> (x) is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> to the equation

$$\int\_{\chi}^{\chi+1} f\_q(t) \, dt \, = \left(\frac{1}{2} - x\right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^{\chi}) + \ln(q; q)\_{\infty}, \qquad x > 0.$$

**Inequalities** The following inequalities hold for any x > 0 and any a ≥ 0.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1})

$$\begin{aligned} \left| \ln \Gamma\_q(\mathbf{x} + a) - \ln \Gamma\_q(\mathbf{x}) - a \operatorname{\mathbf{g}}\_q(\mathbf{x}) \right| &\leq |a - 1| \left| \operatorname{\mathbf{g}}\_q(\mathbf{x} + a) - \operatorname{\mathbf{g}}\_q(\mathbf{x}) \right| \\ &\leq \lceil a \rceil \left| a - 1 \right| \left| \operatorname{\mathbf{A}} \operatorname{\mathbf{g}}\_q(\mathbf{x}) \right|, \end{aligned}$$

$$\left(\frac{1 - q^{\chi + a}}{1 - q^{\chi}}\right)^{-|a - 1|} \le \frac{\Gamma\_q(\chi + a)}{\Gamma\_q(\chi) \left(\frac{1 - q^{\chi}}{1 - q}\right)^a} \le \left(\frac{1 - q^{\chi + a}}{1 - q^{\chi}}\right)^{|a - 1|}.$$

• *Symmetrized Stirling's formula-based inequality*

$$|J^2[\ln \circ \Gamma\_q](\mathbf{x})| \le \frac{1}{2} \left( \mathbf{g}\_q(\mathbf{x} + \mathbf{l}) - \mathbf{g}\_q(\mathbf{x}) \right),$$

$$\left( \frac{1 - q^{\mathbf{x} + \mathbf{l}}}{1 - q^{\mathbf{x}}} \right)^{-\frac{1}{2}} \le \frac{\Gamma\_q(\mathbf{x}) \left( 1 - q \right)^{\mathbf{x} - 1} (1 - q^{\mathbf{x}})^{\frac{1}{2}}}{(q; q)\_{\infty} \exp \left( - \frac{1}{\ln q} \text{Li}\_2(q^{\mathbf{x}}) \right)} \le \left( \frac{1 - q^{\mathbf{x} + 1}}{1 - q^{\mathbf{x}}} \right)^{\frac{1}{2}}$$

ln q

• *Burnside's formula-based inequality*

$$\begin{aligned} \left| \ln \Gamma\_q \left( x + \frac{1}{2} \right) + \left( x - \frac{1}{2} \right) \ln(1 - q) + \frac{1}{\ln q} \text{Li}\_2(q^{\chi}) - \ln(q; q)\_{\infty} \right| \\ &\leq |J^2 [\ln \circ \Gamma\_q](x)|. \end{aligned}$$

• *Generalized Gautschi's inequality*

$$e^{(a-\lceil a\rceil)\psi\_{q,0}(\chi+\lceil a\rceil)} \leq \frac{\Gamma\_q(\chi+a)}{\Gamma\_q(\chi+\lceil a\rceil)} \leq \left(\frac{1-q^{\chi+\lfloor a\rfloor}}{1-q}\right)^{a-\lceil a\rceil}$$

where ψq,0(x) = D ln <sup>q</sup> (x).

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalences as x → ∞,

$$
\ln \Gamma\_q(\mathfrak{x} + a) - \ln \Gamma\_q(\mathfrak{x}) \to -a \ln(1 - q),
$$

$$\begin{aligned} \frac{\Gamma\_q(\mathbf{x})}{\Gamma\_q(\mathbf{x}+a)} &\sim (1-q)^a, & \ln \Gamma\_q(\mathbf{x}+a) &\sim -\mathbf{x} \ln(1-q) \,, \\\\ \ln \Gamma\_q(\mathbf{x}) + (\mathbf{x}-\mathbf{l}) \ln(\mathbf{l}-q) - \ln(q;q)\_\infty &\to \; 0 \,, \\\\ \Gamma\_q(\mathbf{x}) &\sim (q;q)\_\infty (1-q)^{1-\chi} \,. \end{aligned}$$

.

.

,

The generalized Stirling formula simply shows that ln <sup>q</sup> (x) has the oblique asymptote

$$y = (1 - x)\ln(1 - q) + \ln(q; q)\_{\infty}.$$

*Burnside-like approximation* (better than Stirling-like approximation)

$$\Gamma\_q(\mathbf{x}) \sim (q;q)\_\infty (1-q)^{1-\chi} \exp\left(-\frac{1}{\ln q} \text{Li}\_2(q^{\chi-\frac{1}{2}})\right).$$

*Further results* (obtained by differentiation). For any 0 <q< 1 and any <sup>ν</sup> <sup>∈</sup> <sup>N</sup>, let the function ψq,ν : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> denote the <sup>q</sup>-polygamma function defined by the equation

$$\psi\_{q,\upsilon}(\mathbf{x}) := D^{\upsilon+1} \ln \Gamma\_q(\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

We then have the following limits and asymptotic equivalences as x → ∞,

$$\begin{array}{ccccc} \psi\_{q,0}(\mathbf{x}+\mathbf{a}) - \psi\_{q,0}(\mathbf{x}) \to & \mathbf{0}, & \psi\_{q,0}(\mathbf{x}) \to & -\ln(1-q), \\\\ \psi\_{q,0}(\mathbf{x}+\mathbf{a}) \sim & -\ln(1-q), & \psi\_{q,\nu}(\mathbf{x}) \to & \mathbf{0}, & \nu \in \mathbb{N}^\*. \end{array}$$

*Project 10.4* Find the generalized Stirling formula when q > 1. In the case when q > 1, we have deg gq = 1 and hence the generalized Stirling formula is

$$
\ln \Gamma\_q(\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x} + 1} \ln \Gamma\_q(t) \, dt + \frac{1}{2} \mathbf{g}\_q(\mathbf{x}) - \frac{1}{12} \Delta \mathbf{g}\_q(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty,
$$

where gq (x) → ln q as x → ∞. However, here the integral takes the following more complicated form (see El Bachraoui [35] and the references therein)

$$\int\_{x}^{x+1} \ln \Gamma\_q(t) \, dt = \ln C\_q - \frac{1}{2q^{\chi} \ln q} \left( \frac{1 - q^{\chi}}{1 - q^{-\chi}} (2 \operatorname{Li}\_2(q^{-\chi}) + (\ln(1 - q^{-\chi}))^2) \right)$$

$$- 2 \frac{1 - q^{\chi}}{1 - q^{-\chi}} \ln \frac{1 - q^{\chi}}{1 - q} \ln(1 - q^{-\chi}) - q^{\chi} \left( \ln \frac{1 - q^{\chi}}{1 - q} \right)^2$$

where

$$C\_q \, = \, q^{-\frac{1}{12}} (q-1)^{\frac{1}{2} - \frac{\ln(q-1)}{2\ln q}} (q^{-1}; q^{-1})\_\infty \, .$$

This is the analogue of Raabe's formula for ln <sup>q</sup> (x) when q > 1. ♦

**Asymptotic Expansions** For any m, r <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\begin{aligned} \frac{1}{m} \sum\_{j=0}^{m-1} \ln \Gamma\_q \left( \mathbf{x} + \frac{j}{m} \right) &= \left( \frac{1}{2} - \mathbf{x} \right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^x) + \ln(q; q)\_{\infty} \\ &+ \sum\_{k=1}^r \frac{B\_k}{m^k \, k!} \operatorname{g}\_q^{(k-1)}(\mathbf{x}) + O\left( \operatorname{g}\_q^{(r)}(\mathbf{x}) \right). \end{aligned}$$

Setting m = 1 in this formula, we obtain the expansion of the log-q-gamma function

$$\begin{aligned} \ln \Gamma\_q(\mathbf{x}) &= \left(\frac{1}{2} - \mathbf{x}\right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^\times) + \ln(q; q)\_\infty \\ &+ \sum\_{k=1}^r \frac{B\_k}{k!} \operatorname{g}\_q^{(k-1)}(\mathbf{x}) + O\left(\operatorname{g}\_q^{(r)}(\mathbf{x})\right). \end{aligned}$$

**Generalized Liu's Formula** For any x > 0, we have

$$\ln \Gamma\_q(\mathbf{x}) = \left(\frac{1}{2} - x\right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^\times) + \ln(q; q)\_\infty$$

$$- \frac{1}{2} \ln \frac{1 - q^\times}{1 - q} + (\ln q) \int\_0^\infty \left(\{t\} - \frac{1}{2}\right) \frac{q^{\times + t}}{1 - q^{\times + t}} dt.$$

**Limit and Series Representations** It is not difficult to see that both the Eulerian form of gq (x) and the analogue of Gauss's limit reduce to the definition of the q-gamma function given in Eq. (10.8). Let us now examine the other series representations.

• *Weierstrassian form*. For any x > 0, we have

$$\ln \Gamma\_q(\mathbf{x}) = -\ln \frac{1 - q^{\mathbf{x}}}{1 - q} + \psi\_{q,0}(\mathbf{l}) \ge -\sum\_{k=1}^{\infty} \left( \ln \frac{1 - q^{\mathbf{x} + \mathbf{k}}}{1 - q^k} + (\ln q) \frac{q^k}{1 - q^k} \ge \right).$$

Differentiating this series term by term, we obtain

$$
\psi\_{q,0}(\mathbf{x}) = \left(\ln q\right) \frac{q^{\chi}}{1 - q^{\chi}} + \psi\_{q,0}(1) + \left(\ln q\right) \sum\_{k=1}^{\infty} \left(\frac{1}{1 - q^{\chi+k}} - \frac{1}{1 - q^k}\right).
$$

• *Gregory's formula-based series representation*. For any x > 0 we have the series representation

$$\ln \Gamma\_q(\mathbf{x}) = \left(\frac{1}{2} - \mathbf{x}\right) \ln(1 - q) - \frac{1}{\ln q} \operatorname{Li}\_2(q^\mathbf{x}) + \ln(q; q)\_\infty$$

$$- \sum\_{n=0}^\infty |G\_{n+1}| \sum\_{k=0}^n (-1)^k \binom{n}{k} \mathbf{g}\_q(\mathbf{x} + k).$$

Setting x = 1 in this identity yields the following analogue of Fontana-Mascheroni series

$$\sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \mathfrak{g}\_q(k+1) = -\frac{1}{2} \ln(1-q) - \frac{1}{\ln q} \mathrm{Li}\_2(q) + \ln(q;q)\_{\infty}.$$

**Analogue of Gauss' Multiplication Formula** After first noting that

$$\mathcal{g}\_q\left(\frac{\chi}{m}\right) := \mathcal{g}\_{q\frac{1}{m}}(\chi) + \mathcal{g}\_q\left(\frac{1}{m}\right), \qquad \chi > 0,$$

we immediately obtain the following identity

$$\sum\_{j=0}^{m-1} \ln \Gamma\_q \left( \mathbf{x} + \frac{j}{m} \right) \\ \quad = \sum\_{j=1}^m \ln \Gamma\_q \left( \frac{j}{m} \right) + \ln \Gamma\_{q^{\frac{1}{m}}}(m\mathbf{x}) + (m\mathbf{x} - 1) \, \mathbf{g}\_q \left( \frac{1}{m} \right).$$

Now, using Proposition 8.28, we also obtain

$$\sum\_{j=1}^{m} \ln \Gamma\_q \left( \frac{j}{m} \right) \;= \frac{m-1}{2} \ln(1-q) + m \ln(q;q)\_\infty - \ln \left( q^{\frac{1}{m}}; q^{\frac{1}{m}} \right)\_\infty \dots$$

Thus, we get the following multiplication formula

$$\prod\_{j=0}^{m-1} \Gamma\_q \left( \mathbf{x} + \frac{j}{m} \right)^{\frac{1}{2}} = (1-q)^{\frac{m-1}{2}} \frac{(q;q)\_{\infty}^m}{\left( q^{\frac{1}{m}}; q^{\frac{1}{m}} \right)\_{\infty}} \Gamma\_{q^{\frac{1}{m}}}(m\mathbf{x}) \left( \frac{1-q^{\frac{1}{m}}}{1-q} \right)^{m\mathbf{x}-1}$$

,

.

or equivalently, replacing q with qm,

$$\prod\_{j=0}^{m-1} \Gamma\_{q^m} \left( \mathbf{x} + \frac{j}{m} \right) \\ = (1 - q^m)^{\frac{m-1}{2}} \frac{(q^m; q^m)\_{\infty}^m}{(q; q)\_{\infty}} \Gamma\_q(m \mathbf{x}) \left( \frac{1 - q}{1 - q^m} \right)^{m \times -1}$$

(See also, e.g., Srivastava and Choi [93, p. 494] and Webster [98, p. 617].) For instance, when m = 2, we obtain the following analogue of Legendre's duplication formula

$$\left(\Gamma\_{q^2}(\mathbf{x})\,\Gamma\_{q^2}\left(\mathbf{x}+\frac{1}{2}\right)\right) \\ = \left(1-q^2\right)^{\frac{1}{2}} \frac{\left(q^2;q^2\right)\_\infty^2}{(q;q)\_\infty} \frac{\Gamma\_q(2\mathbf{x})}{(1+q)^{2\chi-1}}.$$

**Analogue of Wallis's Product Formula** Using Proposition 8.49 with

$$
\tilde{\mathbf{g}}\_q(\mathfrak{x}) \, \, = \, 2\mathbf{g}\_q(2\mathfrak{x}) \, \, = \, 2(\mathbf{g}\_{q^2}(\mathfrak{x}) + \mathbf{g}\_q(\mathfrak{Z})),
$$

we obtain

$$\begin{aligned} h(n) &= \Sigma \tilde{\mathbf{g}}\_q(n+1) - \Sigma \mathbf{g}\_q(2n+1) \\ &= 2 \ln \Gamma\_{q^2}(n+1) + 2 \mathbf{g}\_2(2) n - \ln \Gamma\_q(2n+1) .\end{aligned}$$

Using the generalized Stirling formula, we then have

$$\lim\_{n \to \infty} h(n) = \mathcal{Z} \ln(q^2; q^2)\_{\infty} - \ln(q; q)\_{\infty}.$$

Finally, we obtain the following analogue of Wallis's formula

$$\lim\_{n \to \infty} \sum\_{k=1}^{2n} (-1)^{k-1} \ln \frac{1 - q^k}{1 - q} = \ln \frac{(q; q)\_{\infty}}{(q^2; q^2)\_{\infty}^2} \dots$$

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any a > 0, there is a unique solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> to the equation

$$\prod\_{j=0}^{m-1} f(\mathbf{x} + aj) \;= \frac{1 - q^{\mathbf{x}}}{1 - q}$$

such that ln <sup>f</sup> lies in *<sup>K</sup>*<sup>0</sup> (or in *<sup>K</sup>*1), namely

$$f(\mathbf{x}) = \frac{\Gamma\_{q^{am}}(\frac{\chi+a}{am})}{\Gamma\_{q^{am}}(\frac{\chi}{am})} \left(\frac{1-q^{am}}{1-q}\right)^{\frac{1}{m}}.$$

## **10.5 The Barnes** *G***-Function**

The Barnes function <sup>G</sup>: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> is the function <sup>G</sup> <sup>=</sup> <sup>1</sup>/ -<sup>2</sup> as defined in Sect. 5.2. Hence, it can be defined by the equations

$$
\ln G(\mathbf{x}) = \; \Sigma \ln \Gamma(\mathbf{x}) \; = \; \Sigma \psi\_{-1}(\mathbf{x}) \qquad \text{for } \mathbf{x} > \mathbf{0}.
$$

**ID Card** We have the following basic information about the Barnes G-function:


**Analogue of Bohr-Mollerup's Theorem** The function G can be characterized in the multiplicative notation as follows.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> *to the equation* f (x <sup>+</sup>1) <sup>=</sup> -(x)f (x) *for which* ln <sup>f</sup> *lies in <sup>K</sup>*<sup>2</sup> *are of the form* f (x) = c G(x), *where* c > 0.

Interestingly, this characterization enables one to establish the following identity

$$\ln G(\mathbf{x}) = -\left(\frac{\mathbf{x}}{2}\right) + (\mathbf{x} - 1)\ln \Gamma(\mathbf{x}) + \frac{1}{2}\ln(2\pi)\mathbf{x} - \psi\_{-2}(\mathbf{x}).\tag{10.12}$$

Indeed, both sides vanish at x = 1 and are eventually 2-convex solutions to the equation

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \ln \Gamma(\mathbf{x}).$$

Hence, they must coincide on <sup>R</sup>+.

Using Proposition 3.9, we can also derive the following alternative characterization of the Barnes G-function.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> *to the equation* f (x+1) <sup>=</sup> -(x)f (x) *that satisfy the asymptotic condition that, for each* x > 0,

$$f(x+n) \sim \Gamma(n)^x n^{\binom{x}{2}} f(n) \qquad \text{ as } n \to \infty$$

*are of the form* f (x) = c G(x), *where* c > 0.

**Extended ID Card** The value of the asymptotic constant σ[g] can be derived for instance from identity (10.12). One can show that (see, e.g., [93, p. 53])

$$
\sigma[g] = \int\_0^1 \ln G(t+1) \, dt \, = \frac{1}{12} + \frac{1}{4} \ln(2\pi) - 2\ln A \approx 0.045... 
$$

We then have the following values:


• *Inequality*

$$|\sigma \text{[g]}| \le \frac{7}{3} \ln 2 - \frac{109}{72} \approx 0.10 \text{ .}$$

• *Alternative representations of* σ[g] = γ [g]

$$\begin{split} \sigma[g] &= \frac{1}{2} \ln(2\pi) + \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \ln \Gamma(k) - \psi\_{-2}(n) - \frac{1}{2} \ln \Gamma(n) - \frac{1}{12} \ln n \right), \\ \sigma[g] &= \frac{1}{2} \ln(2\pi) + \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \ln \Gamma(k) - \psi\_{-2}(n) - \frac{1}{2} \ln \Gamma(n) - \frac{1}{12} \psi(n) \right), \\ \sigma[g] &= \int\_{1}^{\infty} \left( \ln \frac{\Gamma(\lfloor t \rfloor)}{\Gamma(t)} + \langle t \rangle \ln \lfloor t \rfloor + \binom{\lfloor t \rfloor}{2} \ln \left( 1 + \frac{1}{\lfloor t \rfloor} \right) \right) dt \,, \\ \sigma[g] &= \int\_{1}^{\infty} \left( \ln \frac{\Gamma(\lfloor t \rfloor)}{\Gamma(t)} + \ln \frac{\lfloor t \rfloor^{7/12}}{\lfloor t \rfloor + 1} \right) dt \,, \\ \sigma[g] &= \frac{1}{12} \gamma - \frac{1}{2} \int\_{1}^{\infty} B\_{2}(\langle t \rangle) \, \psi\_{1}(t) \, dt \,, \\ \sigma[g] &= \ln \left( \prod\_{k=1}^{\infty} \frac{\Gamma(k) \, \epsilon^{k} \sqrt{k}}{\left( 1 + \frac{1}{k} \right)^{k} \, k^{k} \sqrt{2\pi}} \right). \end{split}$$

• *Generalized Binet's function*. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{q+1}[\ln \circ G](\mathbf{x}) = \ln G(\mathbf{x}) - \psi\_{-2}(\mathbf{x}) - \overline{\sigma}[\mathbf{g}] + \sum\_{j=1}^{q} G\_j \, \Delta^{j-1} \ln \Gamma(\mathbf{x}) .$$

For instance,

$$J^{\beta}[\ln \diamond G](\mathbf{x}) = \ln G(\mathbf{x}) - \psi\_{-2}(\mathbf{x}) - \overline{\sigma}[\mathbf{g}] + \frac{1}{2} \ln \Gamma(\mathbf{x}) - \frac{1}{12} \ln \mathbf{x} \dots$$

• *Analogue of Raabe's formula*

$$\int\_{\mathbf{x}}^{\mathbf{x}+\mathbf{l}} \ln G(t) \, dt \, = \; \overline{\sigma}[\mathbf{g}] + \psi\_{-2}(\mathbf{x}) \,, \qquad \mathbf{x} > \mathbf{0}. \tag{10.13}$$

• *Alternative characterization*. The function f (x) = ln G(x) is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>2</sup> to the equation

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \,\overline{\sigma}[g] + \psi\_{-2}(\chi) \,, \qquad x > 0.$$

*Project 10.5* Find a closed-form expression for the integral

$$\int\_{1}^{\infty} \ln G(t) \, dt.$$

We apply Proposition 8.20. Using (10.13) and then (10.7) we obtain

$$\int\_{1}^{\chi} \ln G(t) \, dt = \Sigma\_{\chi} \int\_{\chi}^{\chi+1} \ln G(t) \, dt = \overline{\sigma}[g] \left( \chi - 1 \right) + \Sigma \psi\_{-2}(\chi)$$

$$= 2 \ln A + \frac{1}{4} \left( \chi^2 + 1 \right) \ln(2\pi) - \frac{1}{12} \left( 2\chi + 1 \right) \left( \chi - 1 \right)^2$$

$$+ \left( \chi - 1 \right) \psi\_{-2}(\chi) - 2 \left. \psi\_{-3}(\chi) \right.$$

This expression could have been obtained also by integrating both sides of (10.12). ♦

**Inequalities** The following inequalities hold for any x > 0, any a ≥ 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1, 2})

$$\left|\ln G(\mathbf{x} + a) - \ln G(\mathbf{x}) - a \ln \Gamma(\mathbf{x}) - \binom{a}{2} \ln \mathbf{x} \right| \le \left| \binom{a-1}{2} \right| \ln \left( 1 + \frac{a}{\mathbf{x}} \right),$$

$$\left(1 + \frac{a}{\mathbf{x}}\right)^{-\left| \binom{a-1}{2} \right|} \le \frac{G(\mathbf{x} + a)}{G(\mathbf{x}) \, \Gamma(\mathbf{x})^a \, \underline{\mathcal{X}}^{\binom{a}{2}}} \le \left(1 + \frac{a}{\mathbf{x}}\right)^{\left| \binom{a-1}{2} \right|}.$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\begin{aligned} \left| \ln G(\mathbf{x}) - \sum\_{k=1}^{n-1} \ln \Gamma(k) + \sum\_{k=0}^{n-1} \ln \Gamma(\mathbf{x} + k) - \mathbf{x} \ln \Gamma(n) - \binom{\mathbf{x}}{2} \ln n \right| \\ &\leq \left| \binom{\mathbf{x} - 1}{2} \right| \left| \ln \left( 1 + \frac{\mathbf{x}}{n} \right) \right| , \\\left| 1 + \frac{\mathbf{x}}{n} \right| \stackrel{\left| \binom{\mathbf{x} - 1}{2} \right|}{\left| \Gamma(1) \Gamma(2) \cdots \Gamma(n - 1) \Gamma(n)^{x} \Gamma(2)} \leq \left( 1 + \frac{\mathbf{x}}{n} \right)^{\left| \binom{\mathbf{x} - 1}{2} \right|} . \end{aligned}$$

• *Symmetrized Stirling's formula-based inequality*

$$\begin{aligned} \left| J^3[\ln \diamond G](\mathbf{x}) \right| &\leq \frac{1}{12} (\mathbf{x} + 1)^2 (2\mathbf{x} + 5) \ln \left( 1 + \frac{1}{\mathbf{x}} \right) - \frac{1}{72} (12\mathbf{x}^2 + 48\mathbf{x} + 49) \\ &\leq \frac{5}{12} \ln \left( 1 + \frac{1}{\mathbf{x}} \right), \\\\ \left( 1 + \frac{1}{\mathbf{x}} \right)^{-5/12} &\leq \frac{G(\mathbf{x}) \, \Gamma(\mathbf{x})^{1/2}}{\mathbf{x}^{1/12} e^{\Psi - 2\left(\mathbf{x}\right) + \overline{\sigma} \left[ \mathbf{g} \right]}} \leq \left( 1 + \frac{1}{\mathbf{x}} \right)^{5/12}. \end{aligned}$$

• *Generalized Gautschi's inequality*

$$\Gamma(\mathbf{x} + \lceil a \rceil)^{a - \lceil a \rceil} \le e^{(a - \lceil a \rceil)D \ln G(\mathbf{x} + \lceil a \rceil)} \le \frac{G(\mathbf{x} + a)}{G(\mathbf{x} + \lceil a \rceil)} \le \Gamma(\mathbf{x} + \lceil a \rceil)^{a - \lfloor a \rfloor}.$$

(These inequalities are valid only if x + a ≥ x0, where x<sup>0</sup> = 1.92 ... is the unique positive zero of the function D<sup>2</sup> ln G(x).)

*Remark 10.6* It is not difficult to see that the first inequality in Proposition 6.19 does not hold for large values of x when g(x) = ln -(x). This shows that the analogue of Burnside's formula does not hold in general when deg g ≥ 1. ♦

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalences as x → ∞,

> ln G(x + a) − ln G(x) − a ln -(x) <sup>−</sup> <sup>a</sup> 2 ln x → 0,

$$
\ln G(\mathbf{x}) - \psi\_{-2}(\mathbf{x}) + \frac{1}{2} \ln \Gamma(\mathbf{x}) - \frac{1}{12} \ln \mathbf{x} \to \overline{\sigma} \lg \mathbf{l},
$$

$$
\ln G(\mathbf{x}) - \psi\_{-2}(\mathbf{x}) + \frac{1}{2} \ln \Gamma(\mathbf{x}) - \frac{1}{12} \psi(\mathbf{x}) \to \overline{\sigma} \lg \mathbf{l},
$$

♦

$$\begin{array}{rcl}G(\mathbf{x}+\mathbf{a}) \sim G(\mathbf{x})\,\Gamma(\mathbf{x})^{a}\,\mathbf{x}^{\left[\frac{a}{2}\right]}, & \quad \ln G(\mathbf{x}+\mathbf{a}) \sim \; \psi\_{-2}(\mathbf{x}),\\\\G(\mathbf{x}) \sim \; \exp(\psi\_{-2}(\mathbf{x}) + \overline{\sigma}[\mathbf{g}])\,\Gamma(\mathbf{x})^{-\frac{1}{2}}\,\mathbf{x}^{\frac{1}{12}}.\end{array}$$

*Further results* (obtained by differentiation)

$$\begin{array}{c} \text{x } \psi(\mathbf{x} + a) - \mathbf{x} \text{ } \psi(\mathbf{x}) \to a, \quad \mathbf{x} \,\psi\_{1}(\mathbf{x}) \to 1, \quad \mathbf{x} \,\psi(\mathbf{x} + a) \sim \, \ln \Gamma(\mathbf{x}), \\\\ \ln \Gamma(\mathbf{x}) - \left(\mathbf{x} - \frac{1}{2}\right) \psi(\mathbf{x}) + \mathbf{x} \to \frac{1}{2} \left(1 + \ln(2\pi)\right). \end{array}$$

*Remark 10.7* Using one of the asymptotic equivalences above, we get

$$G(\mathbf{x} + \mathbf{l}) \sim \exp(\psi\_{-2}(\mathbf{x}) + \overline{\sigma} \|\mathbf{g}\|) \Gamma(\mathbf{x})^{\frac{1}{2}} \mathbf{x}^{\frac{1}{2}} \qquad \text{as } \mathbf{x} \to \infty.$$

Combining this latter equivalence with identity (10.12) and the Stirling formula for the gamma function, we also obtain the following simpler form

$$G(\varkappa+1) \sim A^{-1} \varkappa^{\frac{1}{2}\varkappa^2 - \frac{1}{2}} (2\pi)^{\frac{1}{2}} e^{-\frac{3}{4}\varkappa^2 + \frac{1}{2}} \qquad \text{as } \varkappa \to \infty.$$

**Asymptotic Expansions** For any m, q <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\frac{1}{m}\sum\_{j=0}^{m-1}\ln G\left(\mathbf{x} + \frac{j}{m}\right) \\ \\ = \overline{\sigma} \|\mathbf{g}\| + \sum\_{k=0}^{q} \frac{B\_k}{m^k k!} \psi\_{k-2}(\mathbf{x}) + O\left(\psi\_{q-1}(\mathbf{x})\right). \tag{10.14}$$

Setting m = 1 in this formula, we obtain

$$\ln G(\mathbf{x}) = \left| \overline{\sigma} \mathbf{[g]} \right| + \sum\_{k=0}^{q} \frac{B\_k}{k!} \left| \psi\_{k-2}(\mathbf{x}) + O\left(\psi\_{q-1}(\mathbf{x})\right) \right|,$$

or equivalently, if q ≥ 2,

$$J^{\mathfrak{J}}[\ln \diamond G](\mathbf{x}) = \frac{1}{12} \left( \psi(\mathbf{x}) - \ln \mathbf{x} \right) + \sum\_{k=3}^{q} \frac{B\_k}{k!} \psi\_{k-2}(\mathbf{x}) + O(\psi\_{q-1}(\mathbf{x})) \text{ .}$$

Setting q = 4 for instance, we obtain the following expansion

$$
\ln G(\mathbf{x}) = \overline{\sigma}[\mathbf{g}] + \psi\_{-2}(\mathbf{x}) - \frac{1}{2}\psi\_{-1}(\mathbf{x}) + \frac{1}{12}\psi(\mathbf{x}) - \frac{1}{720}\psi\_{2}(\mathbf{x}) + O\left(\mathbf{x}^{-4}\right).
$$

**Generalized Liu's Formula** For any x > 0 we have

$$\ln G(\mathbf{x}) = \overline{\sigma}[\mathbf{g}] + \psi\_{-2}(\mathbf{x}) - \frac{1}{2}\psi\_{-1}(\mathbf{x}) + \frac{1}{12}\psi(\mathbf{x}) + \frac{1}{2}\int\_0^\infty B\_2(\{t\})\,\psi\_1(\mathbf{x} + t)\,dt$$

or equivalently,

$$J^{\mathfrak{J}}[\ln \diamond G](\mathbf{x}) = \frac{1}{12} (\psi(\mathbf{x}) - \ln \mathbf{x}) + \frac{1}{2} \int\_0^\infty B\_2(\{t\}) \, \psi\_1(\mathbf{x} + t) \, dt.$$

**Limit, Series, and Integral Representations** Let us now determine the main representations of the function ln G(x).

• *Eulerian form and related identities*. We have

$$\ln G(\mathbf{x}) = -\ln \Gamma(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( \ln \Gamma(\mathbf{x} + k) - \ln \Gamma(k) - \mathbf{x} \ln k - \binom{\mathbf{x}}{2} \ln \left( 1 + \frac{1}{k} \right) \right),$$

$$G(\mathbf{x}) = \frac{1}{\Gamma(\mathbf{x})} \prod\_{k=1}^{\infty} \frac{\Gamma(k)}{\Gamma(\mathbf{x} + k)} k^{\mathbf{x}} (1 + 1/k)^{\binom{\mathbf{x}}{2}}.$$

Upon differentiation, we obtain

$$\begin{aligned} \text{for } \psi(\mathbf{x}) &= \mathbf{x} - \frac{1}{2} (1 + \ln(2\pi)) - \sum\_{k=1}^{\infty} \left( \psi(\mathbf{x} + k) - \ln k - \left( \mathbf{x} - \frac{1}{2} \right) \ln \left( 1 + \frac{1}{k} \right) \right), \\\\ \psi(\mathbf{x}) + \mathbf{x} \, \psi\_1(\mathbf{x}) &= 1 - \sum\_{k=1}^{\infty} \left( \psi\_1(\mathbf{x} + k) - \ln \left( 1 + \frac{1}{k} \right) \right), \\\\ (r+1) \, \psi\_r(\mathbf{x}) + \mathbf{x} \, \psi\_{r+1}(\mathbf{x}) &= - \sum\_{k=1}^{\infty} \psi\_{r+1}(\mathbf{x} + k), \qquad r \in \mathbb{N}^\*. \end{aligned}$$

• *Weierstrassian form and related identities*. We have

$$\begin{aligned} \ln G(\mathbf{x}) &= \left(-1 - \boldsymbol{\gamma}\right)\left(\frac{\mathbf{x}}{2}\right) - \ln \Gamma(\mathbf{x}) \\ &- \sum\_{k=1}^{\infty} \left(\ln \Gamma(\mathbf{x} + k) - \ln \Gamma(k) - \mathbf{x} \ln k - \binom{\mathbf{x}}{2} \,\psi\_1(k)\right), \\\\ G(\mathbf{x}) &= \frac{e^{\left(-\boldsymbol{\gamma} - \mathbf{1}\right)\binom{\mathbf{x}}{2}}}{\Gamma(\mathbf{x})} \prod\_{k=1}^{\infty} \frac{\Gamma(k)}{\Gamma(\mathbf{x} + k)} k^{\boldsymbol{x}} e^{\psi\_1(k)\left(\frac{\mathbf{x}}{2}\right)}, \end{aligned}$$

Upon differentiation, we obtain

$$\begin{aligned} \text{tr}\,\psi(\mathbf{x}) + \left(\mathbf{x} - \frac{1}{2}\right)\boldsymbol{\upchi} + \frac{1}{2}\ln(2\pi) &= -\sum\_{k=1}^{\infty} \left(\psi(\mathbf{x} + k) - \left(\mathbf{x} - \frac{1}{2}\right)\boldsymbol{\uppsi}\_1(k) - \ln k\right), \\\\ \boldsymbol{\uppsi}(\mathbf{x}) + \boldsymbol{\upchi}\boldsymbol{\uppsi}\_1(\mathbf{x}) + \boldsymbol{\upchi} &= -\sum\_{k=1}^{\infty} \left(\psi\_1(\mathbf{x} + k) - \psi\_1(k)\right). \end{aligned}$$

• *Analogue of Gauss' limit and related identities*. The analogue of Gauss' limit is

$$\ln G(\mathbf{x}) = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} \ln \Gamma(k) - \sum\_{k=0}^{n-1} \ln \Gamma(\mathbf{x} + k) + \mathbf{x} \ln \Gamma(n) + \binom{\alpha}{2} \ln n \right),$$

$$G(\mathbf{x}) = \lim\_{n \to \infty} \frac{\Gamma(1)\Gamma(2) \cdots \Gamma(n)}{\Gamma(\mathbf{x})\Gamma(\mathbf{x} + 1) \cdots \Gamma(\mathbf{x} + n)} n!^{\chi} n^{\binom{\chi}{2}}.$$

Upon differentiation, we obtain

$$\begin{aligned} \left( (\mathbf{x} - \mathbf{1}) \,\psi(\mathbf{x}) - \mathbf{x} + \frac{1}{2} (1 + \ln(2\pi)) \right) \\ &= \lim\_{n \to \infty} \left( -\sum\_{k=0}^{n-1} \psi(\mathbf{x} + k) + \ln \Gamma(n) + \left( \mathbf{x} - \frac{1}{2} \right) \ln n \right), \\\\ \left( (\mathbf{x} - \mathbf{1}) \,\psi\_1(\mathbf{x}) + \psi(\mathbf{x}) - 1 &= \lim\_{n \to \infty} \left( \ln n - \sum\_{k=0}^{n-1} \psi\_1(\mathbf{x} + t) \right). \end{aligned}$$

• *Integral representations*. Using the elevator method on one and two levels, we obtain the following representations

$$\ln G(\mathbf{x}) = -\frac{1}{2} \left( \mathbf{x} - \mathbf{l} \right) (\mathbf{x} - \ln(2\pi)) + \int\_{1}^{\chi} (t - \mathbf{l}) \, \psi(t) \, dt$$

and

$$\ln G(\mathbf{x}) = -\frac{1}{2}(\mathbf{x} - \mathbf{l})(\mathbf{x} - \ln(2\pi)) + \int\_{1}^{\mathbf{x}} (\mathbf{x} - \mathbf{t})(\psi(t) + (t - \mathbf{l})\,\psi\_{\mathbf{l}}(t)) \, dt.$$

Each of these representations actually leads to identity (10.12).

• *Gregory's formula-based series representation*. For any x > 0 we have the series representation

$$\begin{aligned} \ln G(\mathbf{x}) &= \psi\_{-2}(\mathbf{x}) + \overline{\sigma}[\mathbf{g}] - \frac{1}{2} \ln \Gamma(\mathbf{x}) - \sum\_{n=0}^{\infty} G\_{n+2} \Delta^{n+1} \mathbf{g}(\mathbf{x}) \\\\ &= \psi\_{-2}(\mathbf{x}) + \overline{\sigma}[\mathbf{g}] - \frac{1}{2} \ln \Gamma(\mathbf{x}) - \sum\_{n=0}^{\infty} |G\_{n+2}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(\mathbf{x} + k) .\end{aligned}$$

Setting x = 1 in this identity yields the analogue of Fontana-Mascheroni series

$$\lfloor \overline{\sigma} \lg \rfloor = \ -\frac{1}{2} \ln(2\pi) + \sum\_{n=0}^{\infty} |G\_{n+2}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \ln(k+1) \,.$$

Note that the Eulerian and Weierstrassian forms above can also be integrated term by term on any bounded interval of [0,∞). For instance, integrating on (1,x) provides series representations for the integral of ln G(x) as defined in Project 10.5.

**Analogue of Gauss' Multiplication Formula** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \ln G\left(\frac{\chi+j}{m}\right) \, = \sum\_{j=1}^{m} \ln G\left(\frac{j}{m}\right) + \Sigma\_{\chi} \ln \Gamma\left(\frac{\chi}{m}\right) \,.$$

For instance, setting m = 2 in this identity, we obtain

$$
\ln G\left(\frac{\chi+1}{2}\right) + \ln G\left(\frac{\chi}{2}\right) \ = \ln G\left(\frac{1}{2}\right) + \Sigma\_{\chi} \ln \Gamma\left(\frac{\chi}{2}\right) \ .
$$

However, to make this multiplication formula interesting and usable, we need to find a simple expression for its right side. In particular, we need a closed-form expression for the function x ln -( x <sup>m</sup> ). Such a result would be most welcome.

We can nevertheless investigate the asymptotic behavior of the function

$$\chi \iff \sum\_{j=0}^{m-1} \ln G\left(\frac{x+j}{m}\right).$$

In addition to the asymptotic expansion given in (10.14), Proposition 8.30 yields the following convergence result. We have

$$\sum\_{j=0}^{m-1} \ln G\left(\frac{\mathbf{x} + j}{m}\right) - m \,\psi\_{-2}\left(\frac{\mathbf{x}}{m}\right) + \frac{1}{2} \ln \Gamma\left(\frac{\mathbf{x}}{m}\right)$$

$$- \frac{1}{12} \left(\ln \Gamma\left(\frac{\mathbf{x} + 1}{m}\right) - \ln \Gamma\left(\frac{\mathbf{x}}{m}\right)\right) \to \, m \,\overline{\sigma} \|\mathbf{g}\|\qquad\text{as } \mathbf{x} \to \infty.$$

**Analogue of Wallis's Product Formula** Using Legendre's duplication formula for the gamma function, we obtain

$$\begin{aligned} \Sigma\_{\boldsymbol{\lambda}} \ln \Gamma(2\boldsymbol{\lambda}) &= \ln G(\boldsymbol{\alpha}) + \ln G(\boldsymbol{\alpha} + \frac{1}{2}) - \ln G(\frac{1}{2}), \\ &+ (\boldsymbol{\alpha}^2 + 1)\ln 2 - \frac{\boldsymbol{\chi}}{2}\ln(16\pi). \end{aligned}$$

Using this identity with Proposition 8.49, we can derive the surprising analogue of Wallis's formula

$$\lim\_{n \to \infty} \frac{\Gamma(1)\Gamma(3)\cdots\Gamma(2n-1)}{\Gamma(2)\Gamma(4)\cdots\Gamma(2n)} \left(\frac{2n}{e}\right)^n = \frac{1}{\sqrt{2}}\dots$$

Note that a shorter proof of this formula can be obtained using the second sequence described in Remark 8.53.

*Project 10.8* Find the analogue of Wallis's formula for the function g(x) = ln G(x). After some algebra, we obtain

$$\lim\_{n \to \infty} \frac{G(1)G(3)\cdots G(2n-1)}{G(2)G(4)\cdots G(2n)} \frac{n^{n^2 - \frac{1}{2}n - \frac{1}{24}} 2^{n^2 - \frac{7}{24}} \pi^{\frac{1}{2}n}}{e^{\frac{3}{2}n^2 - \frac{1}{24}n - \frac{1}{24}}} =: A^{\frac{1}{2}}\text{ .}$$

This latter formula is a little harder to obtain than the former one. Using Proposition 8.49 requires the computation of both functions ln G(x) and 2 x ln G(2x) using the elevator method (Corollary 7.20) with r = 1. That is,

$$\Sigma \ln G(\mathbf{x}) = -\frac{1}{8}\mathbf{x}(\mathbf{x} - \mathbf{1})(2\mathbf{x} - \mathbf{5}) + \frac{1}{4}\mathbf{x}(\mathbf{x} - \mathbf{3})\ln(2\pi) - \mathbf{x}\ln A$$

$$+\frac{1}{2}\left(\mathbf{x} - \mathbf{1}\right)(\mathbf{x} - \mathbf{2})\ln\Gamma(\mathbf{x}) - \frac{1}{2}\left(2\mathbf{x} - \mathbf{3}\right)\psi\_{-2}(\mathbf{x}) + \psi\_{-3}(\mathbf{x})$$

and

$$2\,\Sigma\_{\lambda}\ln G(2\mathbf{x}) = -\frac{1}{4}\mathbf{x}(2\mathbf{x}-1)(4\mathbf{x}-7) - 2\mathbf{x}\ln A$$

$$+\frac{1}{2}\left(2\mathbf{x}^2 - 3\mathbf{x} - 1\right)\ln 2 + \mathbf{x}(\mathbf{x}-2)\ln \pi$$

$$\begin{aligned} &+\frac{1}{2}\ln\Gamma(\mathbf{x}) + \frac{1}{2}(2\mathbf{x}-1)(2\mathbf{x}-\mathbf{3})\ln\Gamma(2\mathbf{x}) \\ &-2(\mathbf{x}-1)\,\psi\_{-2}(2\mathbf{x}) + \psi\_{-3}(2\mathbf{x}).\end{aligned}$$

Here again, a shorter proof of the limit above can be obtained using the second sequence described in Remark 8.53. ♦

**Restriction to the Natural Integers** For any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have

$$G(n) \ = \prod\_{k=0}^{n-2} k! \ .$$

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, there is a unique solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> to the equation

$$\prod\_{j=0}^{m-1} f\left(x + \frac{j}{m}\right) = \left.\Gamma(x)\right|$$

such that ln <sup>f</sup> lies in *<sup>K</sup>*1, namely

$$f(\mathbf{x}) = \frac{G(\mathbf{x} + \frac{1}{m})}{G(\mathbf{x})}.$$

**Analogue of Euler's Series Representation of** *γ* The Taylor series expansion of ln G(x + 1) about x = 0 is (see, e.g., [93, p. 311])

$$\ln G(\mathbf{x} + \mathbf{l}) = \frac{1}{2} \left( \ln(2\pi) - \mathbf{l} \right) \mathbf{x} - \frac{\mathbf{y} + \mathbf{l}}{2} \mathbf{x}^2 - \sum\_{k=2}^{\infty} \frac{\xi(k)}{k+1} (-\mathbf{x})^{k+1}, \qquad |\mathbf{x}| < 1.$$

Integrating both sides of this equation on (0, 1), we obtain the identity

$$\sum\_{k=2}^{\infty} (-1)^k \frac{\xi(k)}{(k+1)(k+2)} = \frac{1}{2} + \frac{1}{6}\gamma - 2\ln A.$$

Also, the exponential generating function for the sequence <sup>n</sup> → <sup>σ</sup>[g(n)] is

$$\text{logf}\_{\sigma}[g](\mathbf{x}) = \ln G(\mathbf{x} + \mathbf{l}) - \psi\_{-2}(\mathbf{x} + \mathbf{l}) + \frac{1}{4}\ln(2\pi) - \frac{1}{12} + 2\ln A$$

Integrating both sides of this equation on (0, 1) (i.e., we use (7.5)), after some algebra we obtain

$$\sum\_{k=2}^{\infty} (-1)^k \frac{k-1}{k(k+1)(k+2)} \zeta(k) \, = \, \frac{5}{4} - 3 \ln A - \frac{1}{4} \ln(2\pi) \, .$$

**Analogue of the Reflection Formula** A reflection formula for the Barnes Gfunction is given in (8.27); see, e.g., [93, p. 45].

#### **10.6 The Hurwitz Zeta Function**

For any x > 0, the Hurwitz zeta function s → ζ (s, x) is defined as an analytic continuation to <sup>C</sup> \ {1} of the series (see, e.g., [93, p. 155])

$$\sum\_{k=0}^{\infty} (x+k)^{-s} = \frac{1}{\Gamma(s)} \int\_0^{\infty} \frac{t^{s-1} e^{-\chi t}}{1 - e^{-t}} dt, \qquad \Re(s) > 1.$$

It is known (see, e.g., [93, p. 159–160]) that this function satisfies the identity

$$D\_\chi^k \zeta(s, x) := \left( -s \right)^k \zeta(s + k, \chi), \qquad k \in \mathbb{N},$$

and the difference equation

$$
\zeta(s, x+1) - \zeta(s, x) = -x^{-s}.\tag{10.15}
$$

For any fixed <sup>s</sup> <sup>∈</sup> <sup>R</sup> \ {1}, define the function gs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$\mathbf{g}\_s(\mathbf{x}) := \, -\mathbf{x}^{-s} \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

We then have gs <sup>∈</sup> *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>K</sup>*∞. If s > 0 and <sup>s</sup> = 1, then gs <sup>∈</sup> *<sup>D</sup>*<sup>0</sup> <sup>N</sup>. If s > 1, then gs <sup>∈</sup> *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> . If <sup>−</sup>p<s< 1 for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then gs <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>N</sup>, and hence we can consider

$$p \;=\; 1 + \deg \mathbf{g}\_s \;=\; \lfloor 1 - s \rfloor .$$

In all cases, we have

$$
\Delta \mathbf{g}\_s(\mathbf{x}) = \boldsymbol{\zeta}(\mathbf{b}, \mathbf{x}) - \boldsymbol{\zeta}(\mathbf{b}),
$$

where s → ζ (s) = ζ (s, 1) is the Riemann zeta function.

**ID Card** The basic information about the Hurwitz zeta function is summarized in the following table.


*Project 10.9* Find a closed-form expression for g, where

$$\mathbf{g}(\mathbf{x}) = \frac{\mathbf{x}^2}{\sqrt{\mathbf{x}+1}} \cdot$$

Expanding <sup>x</sup><sup>2</sup> <sup>=</sup> (x <sup>+</sup> <sup>1</sup> <sup>−</sup> <sup>1</sup>)2, we obtain

$$g(\mathbf{x}) = (\mathbf{x} + \mathbf{l})^{\frac{3}{2}} - 2(\mathbf{x} + \mathbf{l})^{\frac{1}{2}} + (\mathbf{x} + \mathbf{l})^{-\frac{1}{2}}$$

and hence

$$\Sigma g(\mathbf{x}) = c - \xi(-\frac{3}{2}, \mathbf{x} + 1) + 2\xi(-\frac{1}{2}, \mathbf{x} + 1) - \xi(\frac{1}{2}, \mathbf{x} + 1)$$

for some <sup>c</sup> <sup>∈</sup> <sup>R</sup>. ♦

**Analogue of Bohr-Mollerup's Theorem** The function ζ (s, x) can be characterized as follows.

*All solutions* fs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_s(\mathbf{x} + 1) - f\_s(\mathbf{x}) = -\mathbf{x}^{-s}$$

*that lie in <sup>K</sup>*<sup>1</sup>−s<sup>+</sup> *are of the form* fs(x) <sup>=</sup> cs <sup>+</sup> ζ (s, x), *where* cs <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** The asymptotic constant σ[gs] satisfies the following identity

$$
\sigma[\mathfrak{g}\_s] = \int\_0^1 \xi(\mathfrak{s}, t+1) \, dt - \xi(\mathfrak{s}) \, \, \, \frac{1}{\mathfrak{s}-1} - \xi(\mathfrak{s}) \,.
$$

Hence we have the following values


We also have the following identities.

• *Alternative representations of* σ[gs]

$$\begin{split} \sigma[g\_s] &= \lim\_{n \to \infty} \left( \frac{1 - n^{1-s}}{s - 1} - \sum\_{k=1}^{n-1} k^{-s} + \sum\_{j=1}^{\lfloor 1-s \rfloor + 1} G\_j \, \Delta^{j-1} g\_s(n) \right), \\ \sigma[g\_s] &= \lim\_{n \to \infty} \left( \frac{1}{s - 1} - \sum\_{k=1}^{n-1} k^{-s} + \frac{1}{1 - s} \sum\_{j=0}^{\lfloor 1-s \rfloor + 1} \binom{1-s}{j} \frac{B\_j}{n^{s+j-1}} \right), \\ \sigma[g\_s] &= \sum\_{j=1}^{\lfloor 1-s \rfloor + 1} G\_j \, \Delta^{j-1} g\_s(1) \\ &+ \sum\_{k=1}^{\infty} \left( \frac{k^{1-s} - (k+1)^{1-s}}{s - 1} + \sum\_{j=0}^{\lfloor 1-s \rfloor + 1} G\_j \, \Delta^j g\_s(k) \right). \end{split}$$

If s > −1, then

$$\sigma[\mathbf{g}\_s] = -\frac{1}{2} + s \int\_1^\infty \frac{\{t\} - \frac{1}{2}}{t^{s+1}} \, dt \,.$$

If s ≤ −1, then for any integer q ≥ (1 − s)/2,

$$\sigma[\mathbf{g}\_s] = -\frac{1}{2} + \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} (-s)^{\underline{2k-1}} + \frac{(-s)^{\underline{2q}}}{(2q)!} \int\_1^{\infty} \frac{B\_{2q}(\{t\})}{t^{s+2q}} \, dt \dots$$

• *Representations of* γ [gs]

$$\begin{aligned} \mathcal{Y}[\mathcal{g}\_{\mathcal{S}}] &= \sigma[\mathcal{g}\_{\mathcal{S}}] - \sum\_{j=1}^{\lfloor 1-s \rfloor\_{+}} G\_{j} \, \Delta^{j-1} \mathcal{g}\_{\mathcal{S}}(1) \, \,, \\ \mathcal{Y}[\mathcal{g}\_{\mathcal{S}}] &= \int\_{1}^{\infty} \left( \sum\_{j=0}^{\lfloor 1-s \rfloor\_{+}} G\_{j} \, \Delta^{j} \mathcal{g}\_{\mathcal{S}}(\lfloor t \rfloor) - \mathcal{g}\_{\mathcal{S}}(t) \right) dt \, \,, \\ \mathcal{Y}[\mathcal{g}\_{\mathcal{S}}] &= \int\_{1}^{\infty} \left( \sum\_{j=0}^{\lfloor 1-s \rfloor\_{+}} \binom{\{t\}}{j} \Delta^{j} \mathcal{g}\_{\mathcal{S}}(\lfloor t \rfloor) - \mathcal{g}\_{\mathcal{S}}(t) \right) dt \, \, \,, \end{aligned}$$

• *Generalized Binet's function*. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{q+1}[\Sigma g\_s](x) = \xi(s,x) - \frac{x^{1-s}}{s-1} + \sum\_{j=1}^{q} G\_j \, \Delta^{j-1} g\_s(x).$$

• *Analogue of Raabe's formula*

$$\int\_{\chi}^{\chi+1} \xi(s,t) \, dt \, = \, \frac{\chi^{1-s}}{s-1}, \qquad x > 0.$$

• *Alternative characterization*. The function fs(x) = ζ (s, x) is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup>−s<sup>+</sup> to the equation

$$\int\_{\chi}^{\chi+1} f\_s(t) \, dt \, = \, \frac{\chi^{1-s}}{s-1}, \qquad x > 0.$$

**Inequalities** The following inequalities hold for any x > 0, any a > 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1,..., 1 − s+})

$$\begin{aligned} & \left| \xi(\mathbf{s}, \mathbf{x} + a) - \xi(\mathbf{s}, \mathbf{x}) - \sum\_{j=1}^{\lfloor 1-s \rfloor\_{+}} \binom{a}{j} \Delta^{j-1} \mathbf{g}\_{s}(\mathbf{x}) \right| \\ & \leq \left\lceil \mathbf{a} \right\rceil \left| \binom{a-1}{\lfloor 1-s \rfloor\_{+}} \right| \left| \Delta^{\lfloor 1-s \rfloor\_{+}} \mathbf{g}\_{s}(\mathbf{x}) \right|. \end{aligned}$$

If s ≤ 0, then

$$\begin{aligned} & \left| \xi(s, x + a) - \xi(s, x) - \sum\_{j=1}^{\lfloor 1 - s \rfloor} \binom{a}{j} \Delta^{j-1} g\_s(\mathbf{x}) \right| \\ & \le \left| \binom{a - 1}{\lfloor 1 - s \rfloor} \right| \left| \Delta^{\lfloor -s \rfloor} g\_s(\mathbf{x} + a) - \Delta^{\lfloor -s \rfloor} g\_s(\mathbf{x}) \right|. \end{aligned}$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$\left| \left| \zeta(s, x) - \zeta(s) - f\_n^{\lfloor 1 - s \rfloor\_+} \mathfrak{l} g\_s \mathfrak{l}(x) \right| \right| \le \lceil \pi \rceil \left| \binom{\lfloor 1 - s \rfloor\_+}{\lfloor 1 - s \rfloor\_+} \right| \left| \Delta^{\lfloor 1 - s \rfloor\_+} g\_s(n) \right| \dots$$

If s ≤ 0, then

$$\left| \left\langle \zeta(\mathbf{s}, \mathbf{x}) - \zeta(\mathbf{s}) - f\_n^{\lfloor 1-s \rfloor} \mathbf{[g\_s]}(\mathbf{x}) \right\rangle \right| \le \left| \binom{\mathbf{x} - \mathbf{l}}{\lfloor 1-s \rfloor} \right| \left| \Delta^{\lfloor -s \rfloor} \mathbf{g\_s}(\mathbf{x} + \mathbf{n}) - \Delta^{\lfloor -s \rfloor} \mathbf{g\_s}(\mathbf{n}) \right|.$$

Here

$$f\_n^{\lfloor \lfloor 1-s \rfloor + \lfloor \lg s \rfloor \rfloor}(\lg)(x) = \sum\_{k=0}^{n-1} (x+k)^{-s} - \sum\_{k=1}^{n-1} k^{-s} - \sum\_{j=1}^{\lfloor \lg -s \rfloor\_+ } \binom{\chi}{j} \Delta\_n^{j-1} n^{-s}.$$

• *Symmetrized Stirling's formula-based inequality*

$$\left| J^{\lfloor \lfloor 1-s \rfloor\_+ + 1} [\Sigma \mathfrak{g}\_s](\mathbf{x}) \right| \; \leq \overline{G}\_{\lfloor 1-s \rfloor\_+} \left| \Delta^{\lfloor 1-s \rfloor\_+} \mathfrak{g}\_s(\mathbf{x}) \right| \; . $$

If s ≤ 0, then

$$\left| J^{\lfloor 2-s \rfloor} [\Sigma \mathbf{g}\_s](\mathbf{x}) \right| \leq \int\_0^1 \left| \binom{t-1}{\lfloor 1-s \rfloor} \right| \left| \Delta^{\lfloor -s \rfloor} \mathbf{g}\_s(\mathbf{x}+t) - \Delta^{\lfloor -s \rfloor} \mathbf{g}\_s(\mathbf{x}) \right| dt.$$

• *Burnside's formula-based inequality if* s > −1

$$\left| \zeta \left( s, x + \frac{1}{2} \right) - \frac{x^{1-s}}{s-1} \right| \le \left| J^{\lfloor 1-s \rfloor + +1} [\Sigma g\_s](x) \right|.$$

• *Additional inequality if* s > 1.

$$0 \le \xi(s, \chi + n) = \sum\_{k=n}^{\infty} (\chi + k)^{-s} \le \xi(s, n).$$

• *Generalized Gautschi's inequality* If s ≥ 0, s = 1,

$$\begin{aligned} \left( ([a] - a)(\chi + [a])^{-s} \le s([a] - a)\,\,\xi(s+1, \chi + [a]) \right) \\ \le \xi(s, \chi + a) - \xi(s, \chi + [a]) \le \left( [a] - a \right) (\chi + [a])^{-s}. \end{aligned}$$

If s ≤ 0, then these inequalities must be reversed and they are valid only if the Hurwitz zeta function is concave on [x + a,∞).

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalences as x → ∞,

$$\begin{aligned} \xi(s, x + a) - \xi(s, x) - \sum\_{j=1}^{\lfloor 1 - s \rfloor + } \binom{a}{j} \Delta^{j-1} g\_s(\mathbf{x}) &\to 0, \\\\ \xi(s, x) - \frac{\mathbf{x}^{1 - s}}{s - 1} + \sum\_{j=1}^{\lfloor 1 - s \rfloor + } G\_j \Delta^{j-1} g\_s(\mathbf{x}) &\to 0, \end{aligned}$$

$$
\zeta(s, x) + \frac{1}{1 - s} \sum\_{j = 0}^{\lfloor 1 - s \rfloor + 1} \binom{1 - s}{j} \frac{B\_j}{x^{s + j - 1}} \to 0,
$$

$$
\zeta(s, x + a) \sim \frac{x^{1 - s}}{s - 1}.
$$

In particular, if s > 1, then ζ (s, x) → 0 as x → ∞.

For instance, setting <sup>s</sup> = −<sup>3</sup> <sup>2</sup> in these latter two asymptotic formulas, we obtain

$$
\xi\left(-\frac{3}{2}, x\right) + \frac{2}{5}x^{5/2} - \frac{7}{12}x^{3/2} + \frac{1}{12}(x+1)^{3/2} \to 0,
$$

$$
\xi\left(-\frac{3}{2}, x\right) + \frac{2}{5}x^{5/2} - \frac{1}{2}x^{3/2} + \frac{1}{8}x^{1/2} \to 0.
$$

If s > −1, then we have the analogue of Burnside's formula

$$
\zeta(s, x) - \frac{1}{s - 1} \left( x - \frac{1}{2} \right)^{1 - s} \to 0, \qquad \text{as } x \to \infty,
$$

which provides a better approximation of ζ (s, x) than the generalized Stirling formula.

**Asymptotic Expansions** For any m, q <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\frac{1}{m} \sum\_{j=0}^{m-1} \xi\left(s, x + \frac{j}{m}\right) \\ = \frac{1}{s-1} \sum\_{k=0}^{q} \binom{1-s}{k} \frac{B\_k}{m^k x^{s+k-1}} + O\left(x^{-q-s}\right).$$

Setting m = 1 in this formula, we obtain

$$\zeta(s,\boldsymbol{x}) = \frac{1}{s-1} \sum\_{k=0}^{q} \binom{1-s}{k} \frac{B\_k}{\boldsymbol{\chi}^{s+k-1}} + O\left(\boldsymbol{\chi}^{-q-s}\right).$$

In particular, this clearly shows that ζ (s, x) is a (1−s)-degree polynomial whenever 1 − s is a positive integer. More precisely, we have

$$
\zeta(1-n,x) = \ -\frac{1}{n} \sum\_{k=0}^{n} \binom{n}{k} B\_k \ge^{n-k}, \qquad n \in \mathbb{N}^\*,
$$

that is,

$$\zeta(\mathbf{l}-n,\mathbf{x}) = -\frac{1}{n}B\_{\mathfrak{n}}(\mathbf{x}), \qquad n \in \mathbb{N}^\*. \tag{10.16}$$

**Generalized Liu's Formula** We have the following formulas for x > 0.

• If s > −1, then

$$
\zeta(s,x) := \frac{x^{1-s}}{s-1} + \frac{1}{2} \ge^{-s} - s \int\_0^\infty \frac{\{t\} - \frac{1}{2}}{(x+t)^{s+1}} dt.
$$

• If s ≤ −1, then for any integer q ≥ (1 − s)/2,

$$\zeta(s,x) = \frac{x^{1-s}}{s-1} + \frac{1}{2}x^{-s} - \sum\_{k=1}^{q} \frac{B\_{2k}}{(2k)!} \frac{(-s)^{2k-1}}{x^{s+2k-1}} - \frac{(-s)^{2q}}{(2q)!} \int\_{0}^{\infty} \frac{B\_{2q}(\{t\})}{(x+t)^{s+2q}} dt.$$

**Limit and Series Representations When** *s >* **<sup>1</sup>** We simply have

$$\xi(s,x) := \sum\_{k=0}^{\infty} (x+k)^{-s}$$

and this series converges uniformly on <sup>R</sup>+. In particular, we retrieve the identity

$$\psi\_{\boldsymbol{\nu}}(\boldsymbol{x}) := (-1)^{\boldsymbol{\nu}+1} \boldsymbol{\nu}! \zeta(\boldsymbol{\nu}+1, \boldsymbol{x}) \,, \qquad \boldsymbol{\nu} \in \mathbb{N}^\*.$$

**Limit and Series Representations When** *s <* **<sup>1</sup>** We have the following Eulerian form

$$\begin{aligned} \xi(s, \mathbf{x}) - \xi(s) &= -\mathsf{g}\_s(\mathbf{x}) + \sum\_{j=0}^{\lfloor -s \rfloor} \binom{\mathsf{x}}{j+1} \Delta^j \mathsf{g}\_s(\mathbf{1}) \\ &+ \sum\_{k=1}^{\infty} \left( -\mathsf{g}\_s(\mathbf{x} + k) + \sum\_{j=0}^{\lfloor \lfloor -s \rfloor \rfloor} \binom{\mathsf{x}}{j} \Delta^j \mathsf{g}\_s(k) \right), \end{aligned}$$

and the Weierstrassian form can be obtained similarly. The associated series converge uniformly on any bounded subset of [0,∞).

For instance, we have

$$\begin{aligned} & \quad \xi\left(-\frac{3}{2}, x\right) - \xi\left(-\frac{3}{2}\right) \\ &= \ & x^{\frac{3}{2}} + \lim\_{n \to \infty} \left(\sum\_{k=1}^{n-1} \left( (x+k)^{\frac{3}{2}} - k^{\frac{3}{2}} \right) - x \, n^{\frac{3}{2}} - \binom{x}{2} \Delta\_n n^{\frac{3}{2}} \right) \end{aligned}$$

$$\begin{split} \mathbf{x} &= \mathbf{x}^{\frac{3}{2}} - \mathbf{x} - (2\sqrt{2} - 1)\binom{\mathbf{x}}{2} + \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k)^{\frac{3}{2}} - k^{\frac{3}{2}} - \mathbf{x} \Delta\_{k} k^{\frac{3}{2}} - \binom{\mathbf{x}}{2} \Delta\_{k}^{\frac{3}{2}} k^{\frac{3}{2}} \right) \\ &= \mathbf{x}^{\frac{3}{2}} - \mathbf{x} + \frac{3}{4} \zeta\left(\frac{1}{2}\right) \binom{\mathbf{x}}{2} + \sum\_{k=1}^{\infty} \left( (\mathbf{x} + k)^{\frac{3}{2}} - k^{\frac{3}{2}} - \mathbf{x} \Delta\_{k} k^{\frac{3}{2}} - \frac{3}{4} \binom{\mathbf{x}}{2} k^{-\frac{1}{2}} \right). \end{split}$$

The analogue of Gauss' limit is

$$\xi(s,x) := \xi(s) + \lim\_{n \to \infty} f\_n^{\lfloor 1-s \rfloor} [\mathfrak{g}\_s](x), \qquad x > 0.$$

where

$$\, \_r f\_n^{\lfloor 1-s \rfloor} \, \_l g\_s \rfloor (x) \, = \sum\_{k=0}^{n-1} (x+k)^{-s} - \sum\_{k=1}^{n-1} k^{-s} - \sum\_{j=1}^{\lfloor 1-s \rfloor} \binom{x}{j} \, \Delta\_n^{j-1} n^{-s} \,.$$

**Gregory's Formula-Based Series Representation** For any x > 0 we have

$$\begin{aligned} \zeta(s,\boldsymbol{x}) &= \frac{\boldsymbol{x}^{1-s}}{s-1} - \sum\_{n=0}^{\infty} G\_{n+1} \Delta^n g\_s(\boldsymbol{x}) \\ &= \frac{\boldsymbol{x}^{1-s}}{s-1} + \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^n (-1)^k \binom{n}{k} (\boldsymbol{x} + k)^{-s} \boldsymbol{\lambda} \end{aligned}$$

Setting x = 1 in this identity yields a known series expression for ζ (s) that is the analogue of Fontana-Mascheroni series

$$\zeta(s) \ = \frac{1}{s-1} + \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} (k+1)^{-s} \ . $$

**Analogue of Gauss' Multiplication Formula** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \zeta\left(s, \frac{x+j}{m}\right)^{\cdot} = m^s \zeta(s, x).$$

Corollary 8.33 provides the following limits for any x > 0

$$\lim\_{m \to \infty} m^{s-1} \xi(s, mx) = \frac{x^{1-s}}{s-1}, \qquad s < 1,$$

$$\lim\_{m \to \infty} m^{s-1} (\xi(s, mx) - \xi(s, m)) = \frac{x^{1-s} - 1}{s-1}, \qquad s \neq 1.$$

#### **Analogue of Wallis's Product Formula** If s > 1, then we have

$$\sum\_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^s} = (1 - 2^{1-s})\,\xi(s) \, = \,\, \eta(s),\tag{10.17}$$

where s → η(s) is Dirichlet's eta function. When s < 1, the form of the formula strongly depends upon the value of <sup>s</sup>. When <sup>s</sup> = −<sup>3</sup> <sup>2</sup> for instance, we obtain

$$\lim\_{n \to \infty} \left( h(n) + \sum\_{k=1}^{2n} (-1)^k k^{\frac{3}{2}} \right) \\ = (4\sqrt{2} - 1)\xi(-\frac{3}{2}).$$

where h(n) = −8n+<sup>3</sup> 4 !n 2 .

**Restriction to the Natural Integers** For any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have

$$
\xi(s, n) - \xi(s) = \ -\sum\_{k=1}^{n-1} k^{-s} \qquad \text{and} \qquad \xi(s, n) = \sum\_{k=n}^{\infty} k^{-s}.
$$

Gregory's formula states that for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any <sup>q</sup> <sup>∈</sup> <sup>N</sup> we have

$$\sum\_{k=1}^{n-1} k^{-s} = \frac{1 - n^{1-s}}{s - 1} + \sum\_{j=1}^{q} G\_j \left( \Delta^{j-1} g\_s(n) - \Delta^{j-1} g\_s(1) \right) + R\_{s,n}^q \dots$$

with

$$|\mathcal{R}\_{s,n}^q| \le \overline{G}\_q \left| \Delta^q \mathcal{g}\_s(n) - \Delta^q \mathcal{g}\_s(\mathbf{l}) \right|.$$

Many other representations of this sum can be derived from, e.g., the limit and series representations of the Hurwitz zeta function.

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any a > 0, there is a unique solution fs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$\sum\_{j=0}^{m-1} f\_s \begin{pmatrix} x+a \ j \end{pmatrix} = \begin{vmatrix} -x^{-s} \end{vmatrix}$$

that lies in *<sup>K</sup>*−s<sup>+</sup> , namely

$$f\_s(x) = \frac{1}{(am)^s} \xi\left(s, \frac{x+a}{am}\right) - \frac{1}{(am)^s} \xi\left(s, \frac{x}{am}\right) \dots$$

## **Analogue of Euler's Series Representation of** *γ* We have

$$(\Sigma \mathfrak{g}\_s)^{(k)}(\mathfrak{l}) = \ (-s)^{\underline{k}} \zeta(s+k), \qquad k \in \mathbb{N}^\*.$$

Thus, the Taylor series expansion of ζ (s, x + 1) about x = 0 is

$$\zeta(s, x+1) = \sum\_{k=0}^{\infty} \binom{-s}{k} \zeta(s+k) x^k, \qquad |x| < 1.$$

Integrating both sides of this equation on (0, 1), we obtain the identity

$$\sum\_{k=1}^{\infty} \binom{1-s}{k} \zeta(s+k-1) = \begin{cases} -1, & s < 2, \ s \notin \mathbb{Z} \dots \\ \end{cases}$$

(When s > 2, the summand in the series above does not approach zero as k increases.)

**Analogue of the Reflection Formula** A reflection formula can be derived when s is an integer. Recall that we have the following special values for any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup>

$$
\zeta(1+n,x) = \left.(-1)^{n-1}\frac{1}{n!}\psi\_n(x)\right|
$$

and

$$
\zeta(1-n,x) = -\frac{1}{n}B\_n(x).
$$

It follows that for any <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>, we have

$$
\xi(\mathbf{s},\mathbf{x}) + (-\mathbf{l})^{s}\xi(\mathbf{s},\mathbf{l}-\mathbf{x}) = \begin{cases}
\frac{(-1)^{s-1}}{(s-1)!}\pi \, D^{s-1}\cot(\pi\mathbf{x}), & \text{if } s-1 \in \mathbb{N}^{\*}, \\
0, & \text{if } -s \in \mathbb{N}.
\end{cases}
$$

#### **10.7 The Generalized Stieltjes Constants**

Recall that the *generalized Stieltjes constants* are the numbers γn(x) that occur in the Laurent series expansion of the Hurwitz zeta function

$$\zeta(s,x) := \frac{1}{s-1} + \sum\_{n=0}^{\infty} \frac{(-1)^n}{n!} \gamma\_n(x) (s-1)^n. \tag{10.18}$$

Recall also that the numbers γn <sup>=</sup> γn(1), where <sup>n</sup> <sup>∈</sup> <sup>N</sup>, are called the *Stieltjes constants*. The Stieltjes constants and generalized Stieltjes constants are known to satisfy the relations

$$
\lambda^0(\boldsymbol{\lambda}) = \boldsymbol{-\psi}(\boldsymbol{\chi}) \qquad \text{and} \qquad \lambda^0 = \boldsymbol{\lambda}
$$

as well as the following identities for every <sup>q</sup> <sup>∈</sup> <sup>N</sup>

$$\begin{aligned} \gamma\_q &= \lim\_{n \to \infty} \left( \sum\_{k=1}^n \frac{(\ln k)^q}{k} - \frac{(\ln n)^{q+1}}{q+1} \right), \\\gamma\_q(\mathbf{x}) &= \lim\_{n \to \infty} \left( \sum\_{k=0}^n \frac{(\ln(\mathbf{x}+k))^q}{\mathbf{x}+k} - \frac{(\ln(\mathbf{x}+n))^{q+1}}{q+1} \right). \end{aligned}$$

For recent background on these constants, see, e.g., Blagouchine [19, 20] and Blagouchine and Coppo [22] (see also Nan-Yue and Williams [80]).

Here we naturally restrict the values of <sup>x</sup> to the set <sup>R</sup>+. Interestingly, the generalized Stieltjes constants also satisfy the difference equation

$$
\gamma\_q(\mathfrak{x}+1) - \gamma\_q(\mathfrak{x}) = \mathfrak{g}\_q(\mathfrak{x}),
$$

where gq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is the function defined by the equation

$$g\_q(\mathbf{x}) := -\frac{1}{\mathbf{x}} (\ln \mathbf{x})^q \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

Thus, our theory is particularly suitable for the investigation of these constants. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup>, the function gq lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and is increasing on [e<sup>q</sup> ,∞). By uniqueness of gq , it follows that

$$
\Sigma \mathcal{g}\_q(\mathfrak{x}) \, \, \, \, \, \, \nu\_q(\mathfrak{x}) \, \, \, \, \nu\_q \, \, \,.
$$

**ID Card** The introduction above enables us to provide the following basic information about the generalized Stieltjes constants.


**Analogue of Bohr-Mollerup's Theorem** The function γq can be characterized as follows.

*All eventually monotone solutions* fq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_q(x+1) - f\_q(x) = -\frac{1}{x}(\ln x)^q$$

*are of the form* fq (x) <sup>=</sup> cq <sup>+</sup> γq (x), *where* cq <sup>∈</sup> <sup>R</sup>.

Using Proposition 3.9, we can also derive the following alternative characterization of the function γq .

*All solutions* fq : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_q(x+1) - f\_q(x) = -\frac{1}{x}(\ln x)^q$$

*that satisfy the asymptotic condition that, for each* x > 0,

$$f\_q(\mathfrak{x} + \mathfrak{n}) - f\_q(\mathfrak{n}) \to \begin{array}{c} 0 \ \end{array} \quad \text{as } \mathfrak{n} \to \mathbb{N} \ \infty$$

*are of the form* fq (x) <sup>=</sup> cq <sup>+</sup> γq (x), *where* cq <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** Using identity (8.11), we can immediately make the remarkable observation that the asymptotic constant σ[gq ] is exactly the opposite of the Stieltjes constant γq. We then have the following values


• *Alternative representations of* σ[gq ] = γ [gq ]

$$\begin{aligned} \gamma\_q &= \sum\_{k=1}^{\infty} \left( \frac{(\ln k)^q}{k} - \frac{(\ln(k+1))^{q+1} - (\ln(k))^{q+1}}{q+1} \right), \\ \gamma\_q &= \int\_1^{\infty} \frac{\{t\} - \frac{1}{2}}{t^2} (\ln t)^{q-1} (q - \ln t) \, dt \qquad (q \ge 1), \\ \gamma\_q &= \int\_1^{\infty} \left( \frac{(\ln \lfloor t \rfloor)^q}{\lfloor t \rfloor} - \frac{(\ln t)^q}{t} \right) dt \,. \end{aligned}$$

• *Generalized Binet's function*. For any <sup>r</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{r+1}[\chi\_q](\mathbf{x}) = \chi\_q(\mathbf{x}) + \frac{(\ln \mathbf{x})^{q+1}}{q+1} + \sum\_{j=1}^r G\_j \, \Delta^{j-1} \mathbf{g}\_q(\mathbf{x}) .$$

• *Analogue of Raabe's formula*

$$\int\_{\chi}^{\chi+1} \chi\_q(t) \, dt \, = \, -\frac{(\ln x)^{q+1}}{q+1}, \qquad x > 0. \tag{10.19}$$

• *Alternative characterization*. The function f (x) = γq (x) is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> to the equation

$$\int\_{\infty}^{\infty+1} f(t) \, dt \, = \, -\frac{(\ln x)^{q+1}}{q+1}, \qquad x > 0.$$

**Inequalities** The following inequalities hold for any x > 0, any a > 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1}) If <sup>x</sup> <sup>≥</sup> <sup>e</sup><sup>q</sup> , we have

$$\left|\gamma\_q(\mathbf{x}+a) - \gamma\_q(\mathbf{x})\right| \le \lceil a \rceil \left|\frac{(\ln x)^q}{x}\right|.$$

• *Symmetrized generalized Wendel's inequality* (discrete version) If <sup>n</sup> <sup>≥</sup> <sup>e</sup><sup>q</sup> , we have

$$\left|\gamma\_q(\mathbf{x}) - \gamma\_q - \frac{(\ln x)^q}{x} - \sum\_{k=1}^{n-1} \left( \frac{(\ln(x+k))^q}{x+k} - \frac{(\ln k)^q}{k} \right) \right| \le \lceil \mathbf{x} \rceil \left| \frac{(\ln n)^q}{n} \right|.$$

• *Symmetrized Stirling's and Burnside's formulas-based inequalities* If <sup>x</sup> <sup>≥</sup> <sup>e</sup><sup>q</sup> , we have

$$\left|\gamma\_q\left(x+\frac{1}{2}\right)+\frac{(\ln x)^{q+1}}{q+1}\right| \le \left|\gamma\_q(x)+\frac{(\ln x)^{q+1}}{q+1}\right| \le \left|\frac{(\ln x)^q}{x}\right|.$$

• *Further inequalities*. For 0 < x ≤ 1, we use the following approximations (see Nan-Yue and Williams [80, p. 148])

$$\left|\eta\_0(x) - \frac{1}{x}\right| \le \gamma$$

and

$$\left|\nu\_q(\mathbf{x}) - \frac{(\ln \mathbf{x})^q}{\mathbf{x}}\right|\_\mathbf{x} \le \frac{(3 + (-1)^q)(2q)!}{q^{q+1}(2\pi)^q}, \quad q \in \mathbb{N}^\*.$$

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalence as x → ∞,

$$\begin{aligned} \label{eq:1} \chi\_q(\mathbf{x} + a) - \chi\_q(\mathbf{x}) &\to & 0, & \quad \chi\_q(\mathbf{x}) + \frac{(\ln x)^{q+1}}{q+1} &\to & 0, \\\\ \chi\_q(\mathbf{x} + a) &\sim & -\frac{(\ln x)^{q+1}}{q+1} .\end{aligned}$$

*Burnside-like approximation* (better than Stirling-like approximation)

$$
\gamma\_q(\alpha) + \frac{1}{q+1} \left( \ln \left( x - \frac{1}{2} \right) \right)^{q+1} \to 0.
$$

*Further results* (obtained by differentiation)

$$\begin{array}{ccccc} \chi\_q'(\mathbf{x}) + \frac{(\ln \mathbf{x})^q}{\mathbf{x}} \rightarrow & \mathbf{0}, & \quad \chi\_q'(\mathbf{x} + a) \sim & -\frac{(\ln \mathbf{x})^q}{\mathbf{x}}. \end{array}$$

For any <sup>r</sup> <sup>∈</sup> <sup>N</sup>,

$$\begin{aligned} \chi\_q^{(r)}(\mathbf{x} + a) - \chi\_q^{(r)}(\mathbf{x}) &\to 0, & D\_x^r \left( \chi\_q(\mathbf{x}) + \frac{(\ln x)^{q+1}}{q+1} \right) &\to 0. \end{aligned}$$

$$D\_x^r \left( \chi\_q(\mathbf{x}) + \frac{1}{q+1} \left( \ln \left( \mathbf{x} - \frac{1}{2} \right) \right)^{q+1} \right) &\to 0.$$

**Asymptotic Expansions** For any m, r <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as x → ∞

$$\frac{1}{m}\sum\_{j=0}^{m-1}\gamma\_q\left(\mathbf{x}+\frac{j}{m}\right) \\ = -\frac{(\ln x)^{q+1}}{q+1} + \sum\_{k=1}^r \frac{B\_k}{m^k k!} \mathbf{g}\_q^{(k-1)}(\mathbf{x}) + O\left(\mathbf{g}\_q^{(r)}(\mathbf{x})\right).$$

Setting m = 1 in this latter formula, we obtain

$$\gamma\_q(\mathbf{x}) = \left. -\frac{(\ln x)^{q+1}}{q+1} + \sum\_{k=1}^r \frac{B\_k}{k!} \mathbf{g}\_q^{(k-1)}(\mathbf{x}) + O\left(\mathbf{g}\_q^{(r)}(\mathbf{x})\right) \right\}.$$

Let us detail this expansion when q = 1. We first observe that

$$g\_1^{(k-1)}(\mathbf{x}) = (-1)^k (k-1)! \frac{\ln x - H\_{k-1}}{\mathbf{x}^k}, \qquad k \in \mathbb{N}^\*.$$

Using (10.4), we then obtain

$$\frac{1}{m}\sum\_{j=0}^{m-1}\eta\_1\left(\mathbf{x}+\frac{j}{m}\right)+(\ln\mathbf{x})\frac{1}{m}\sum\_{j=0}^{m-1}\psi\left(\mathbf{x}+\frac{j}{m}\right)$$

$$=\frac{(\ln\mathbf{x})^2}{2}+\sum\_{k=1}^r\frac{(-1)^{k-1}\begin{array}{c}B\_k \ H\_{k-1} \\ k(m\mathbf{x})^k\end{array}+O\left(\mathbf{x}^{-r-1}\right).$$

Setting m = 1 in this latter formula, we get

$$\gamma\_1(\mathbf{x}) = \frac{(\ln \mathbf{x})^2}{2} - \psi(\mathbf{x}) \ln \mathbf{x} + \sum\_{k=1}^r \frac{(-1)^{k-1} \operatorname{B}\_k \operatorname{H}\_{k-1}}{k \, \mathbf{x}^k} + O\left(\mathbf{x}^{-r-1}\right).$$

Setting r = 5 for instance, we obtain

$$\gamma\_1(\mathbf{x}) = \frac{(\ln \mathbf{x})^2}{2} - \psi(\mathbf{x}) \ln \mathbf{x} - \frac{1}{12\mathbf{x}^2} + \frac{11}{720\mathbf{x}^4} + O\left(\mathbf{x}^{-6}\right).$$

**Generalized Liu's Formula** For any q ≥ 1 and any x > 0 we have

$$\gamma\_q(\mathbf{x}) = -\frac{(\ln \mathbf{x})^{q+1}}{q+1} + \frac{(\ln \mathbf{x})^q}{2\mathbf{x}} + \int\_0^\infty \frac{\{t\} - \frac{1}{2}}{(\mathbf{x}+t)^2} (\ln(\mathbf{x}+t))^{q-1} (q - \ln(\mathbf{x}+t)) \, dt.$$

**Series Representations** Since the function gq (x) lies in *<sup>D</sup>*−<sup>1</sup> <sup>N</sup> , we only have the following series representations of γq (x).

• *Eulerian and Weierstrassian forms*. We have

$$\begin{aligned} \chi\_q(\mathbf{x}) &= \chi\_q + \frac{(\ln \mathbf{x})^q}{\mathbf{x}} + \sum\_{k=1}^{\infty} \left( \frac{(\ln(\mathbf{x} + k))^q}{\mathbf{x} + k} - \frac{(\ln k)^q}{k} \right), \\\\ \chi\_q(\mathbf{x}) &= \frac{(\ln \mathbf{x})^q}{\mathbf{x}} + \sum\_{k=1}^{\infty} \left( \frac{(\ln(\mathbf{x} + k))^q}{\mathbf{x} + k} - \frac{(\ln(k+1))^{q+1} - (\ln k)^{q+1}}{q+1} \right). \end{aligned}$$

The series can be differentiated term by term infinitely many times. For instance, we get

$$\gamma\_q'(\mathbf{x}) = \sum\_{k=0}^{\infty} \frac{(\ln(\mathbf{x} + k))^{q-1}}{(\mathbf{x} + k)^2} (q - \ln(\mathbf{x} + k)).$$

• The analogue of Gauss' limit coincides with the Eulerian form.

• *Gregory's formula-based series representation*. For any x > 0 satisfying the assumptions of Proposition 8.11, we obtain

$$\begin{aligned} \left(\gamma\_q(\boldsymbol{x}) + \frac{(\ln \boldsymbol{x})^{q+1}}{q+1} = \sum\_{n=0}^{\infty} G\_{n+1} \Delta\_x^n \frac{(\ln \boldsymbol{x})^q}{\boldsymbol{x}} \\ = \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^n (-1)^k \binom{n}{k} \frac{(\ln(\boldsymbol{x}+k))^q}{\boldsymbol{x}+k} . \end{aligned}$$

Setting x = 1 in this identity (provided that x = 1 satisfies the assumptions of Proposition 8.11), we obtain the Fontana-Mascheroni's series expression for γq

$$\gamma\_q \ = \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \frac{(\ln(k+1))^q}{k+1} \ .$$

This latter expression can be found in Blagouchine [20, p. 383] and the references therein.

**Analogue of Gauss' Multiplication Formula** The following analogue of Gauss' multiplication formula was previously known (see also Blagouchine [19, p. 542]) but it can be derived straightforwardly from our results.

For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \chi\_q \left( \frac{x+j}{m} \right) = \ -\frac{m}{q+1} \left( \ln \frac{1}{m} \right)^{q+1} + m \sum\_{j=0}^q \binom{q}{j} \left( \ln \frac{1}{m} \right)^j \chi\_{q-j}(x) \dots$$

In particular,

$$\sum\_{j=1}^m \chi\_q\left(\frac{j}{m}\right) = \left. -\frac{m}{q+1} \left( \ln \frac{1}{m} \right)^{q+1} + m \sum\_{j=0}^q \binom{q}{j} \left( \ln \frac{1}{m} \right)^j \chi\_{q-j} \right\vert\_{\frac{1}{q}}$$

Corollary 8.33 provides the following limits for x > 0

$$\lim\_{m \to \infty} \sum\_{j=0}^{q} \binom{q}{j} \left( \ln \frac{1}{m} \right)^j \left( \chi\_{q-j}(mx) - \chi\_{q-j}(m) \right) = -\frac{(\ln x)^{q+1}}{q+1},$$

$$\lim\_{m \to \infty} \left( -\frac{1}{q+1} \left( \ln \frac{1}{m} \right)^{q+1} + \sum\_{j=0}^{q} \binom{q}{j} \left( \ln \frac{1}{m} \right)^j \chi\_{q-j}(mx) \right) = -\frac{(\ln x)^{q+1}}{q+1}.$$

For instance, setting q = 1 in these formulas yields

$$\lim\_{m \to \infty} \left( \left\langle \boldsymbol{\gamma}\_{\mathrm{I}}(mx) - \boldsymbol{\gamma}\_{\mathrm{I}}(m) + (\ln m)(\boldsymbol{\psi}(mx) - \boldsymbol{\psi}(m)) \right\rangle = -\frac{1}{2} (\ln x)^2 \right).$$

$$\lim\_{m \to \infty} \left( \left\langle \boldsymbol{\gamma}\_{\mathrm{I}}(mx) - \frac{1}{2} (\ln m)^2 + \boldsymbol{\psi}(mx) \ln m \right\rangle = -\frac{1}{2} (\ln x)^2 \right).$$

Now, setting m = 2 in the multiplication formula, we obtain the following analogue of Legendre's duplication formula

$$\chi\_q\left(\frac{\chi}{2}\right) + \chi\_q\left(\frac{\chi+1}{2}\right) = \ -\frac{2}{q+1}\left(\ln\frac{1}{2}\right)^{q+1} + 2\sum\_{j=0}^q \binom{q}{j} \left(\ln\frac{1}{2}\right)^j \chi\_{q-j}(\chi)\,.$$

When q = 0 and q = 1, the multiplication formula reduces to the known formulas

$$\begin{aligned} \sum\_{j=0}^{m-1} \psi \left( \frac{\mathbf{x} + j}{m} \right) &= m \left( \psi(\mathbf{x}) - \ln m \right), \\ \sum\_{j=0}^{m-1} \gamma\_1 \left( \frac{\mathbf{x} + j}{m} \right) &= -\frac{m}{2} (\ln m)^2 + m (\ln m) \left. \psi(\mathbf{x}) + m \left. \gamma\_1(\mathbf{x}) \right|. \end{aligned}$$

**Analogue of Wallis's Product Formula** The analogue of Wallis's formula for the function gq (x) is

$$\sum\_{k=1}^{\infty} (-1)^k \frac{(\ln k)^q}{k} = \ -\frac{(\ln 2)^{q+1}}{q+1} + \sum\_{j=0}^{q-1} \binom{q}{j} (\ln 2)^{q-j} \mathcal{Y}\_j \,. \tag{10.20}$$

This formula was established by Briggs and Chowla [25, Eq. (8)]. For q = 1, it reduces to

$$\sum\_{k=1}^{\infty} (-1)^k \frac{\ln k}{k} = \ -\frac{(\ln 2)^2}{2} + \mathcal{y} \ln 2 \dots$$

For q = 2, we obtain

$$\sum\_{k=1}^{\infty} (-1)^k \frac{(\ln k)^2}{k} = -\frac{(\ln 2)^3}{3} + \chi \left(\ln 2\right)^2 + 2\chi\_1 \ln 2 \dots$$

These latter two formulas were also established by Hardy [47].

As an aside, let us establish conversion formulas between the sequences q → γq and <sup>q</sup> → <sup>η</sup>(q)(1), where η(s) is the Dirichlet eta function introduced in (10.17) and <sup>η</sup>(q)(1) stands for the limiting value of <sup>η</sup>(q)(s) as <sup>s</sup> <sup>→</sup> 1. To ease the computations, let us instead consider the conversion formulas between the sequences q → γq and q → λq , where

$$
\lambda\_q := \frac{1}{q+1} \left( \ln 2 \right)^{q+1} + (-1)^{q+1} \eta^{(q)}(1), \qquad q \in \mathbb{N}.
$$

Using (10.20), we can readily derive the following equations

$$\lambda\_q = \sum\_{k=0}^{q-1} \binom{q}{k} (\ln 2)^{q-k} \,\, \varkappa \,\, , \qquad q \in \mathbb{N}. \tag{10.21}$$

These equations actually consist of an infinite consistent triangular system. Solving this system provides the following conversion formula

$$\gamma\_q = \sum\_{k=0}^{q} \binom{q}{k} \frac{B\_{q-k}}{k+1} (\ln 2)^{q-k-1} \lambda\_{k+1}, \qquad q \in \mathbb{N}, \tag{10.22}$$

that is,

$$\gamma\_q = \ -\frac{B\_{q+1}}{q+1} (\ln 2)^{q+1} + \sum\_{k=0}^{q} (-1)^k \binom{q}{k} \frac{B\_{q-k}}{k+1} (\ln 2)^{q-k-1} \eta^{(k+1)}(1), \quad q \in \mathbb{N}.$$

Indeed, plugging (10.22) in the right side of (10.21) we obtain for any <sup>q</sup> <sup>∈</sup> <sup>N</sup>

$$\begin{aligned} \sum\_{k=0}^{q-1} \binom{q}{k} (\ln 2)^{q-k} \, \varkappa\_k &= \sum\_{k=0}^{q-1} \binom{q}{k} (\ln 2)^{q-k} \sum\_{j=0}^k \binom{k}{j} \frac{B\_{k-j}}{j+1} (\ln 2)^{k-j-1} \, \lambda\_{j+1}, \\ &= \sum\_{j=0}^{q-1} \binom{q}{j} (\ln 2)^{q-j-1} \frac{\lambda\_{j+1}}{j+1} \sum\_{k=j}^{q-1} \binom{q-j}{k-j} \, B\_{k-j} \, \end{aligned}$$

where the inner sum reduces to 0q−j−1. The latter quantity then reduces to λq , as expected.

*Remark 10.10* The conversion formulas (10.21) and (10.22) are not quite new. In essence, they were established by Liang and Todd [63, Eq. (3.6)] and Nan-Yue and Williams [80, Eqs. (1.9) and (7.1)]. ♦ **Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any a > 0, there is a unique eventually monotone solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$\sum\_{j=0}^{m-1} f\left(x+a\,j\right)^{-} = -\frac{1}{x}(\ln x)^{q}\,,$$

namely

$$f(\mathbf{x}) = S\_{q,am} \left( \frac{\mathbf{x} + a}{am} \right) - S\_{q,am} \left( \frac{\mathbf{x}}{am} \right) \dots$$

where

$$S\_{q,am}(\mathbf{x}) = \frac{1}{am} \sum\_{j=0}^{q} \binom{q}{j} (\ln(am))^j \,\chi\_{q-j}(\mathbf{x}) .$$

For instance, the unique eventually monotone solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$f(\mathbf{x}) + f(\mathbf{x} + \mathbf{l}) \ = \ -\frac{1}{\mathbf{x}} \ln \mathbf{x}$$

is

$$f(\mathbf{x}) = \left. \mathcal{Y}(\mathbf{x}) - \mathcal{Y}\left(\frac{\mathbf{x}}{2}\right) + (\ln 2)\,\psi(\mathbf{x}) + \frac{1}{2}(\ln 2)^2 \,\mathrm{J} \right.$$

**Rational Arguments Theorem** Let us apply Proposition 8.65 to the function gq (x). For any a, b <sup>∈</sup> <sup>N</sup><sup>∗</sup> with a<b and any <sup>j</sup> ∈ {0,...,b <sup>−</sup> <sup>1</sup>} we have

$$\left[S\_j^b\right][q\_q] = \left. b\left(-1\right)^{q+1} \sum\_{i=0}^q \binom{q}{i} (\ln b)^{q-i} \left. D\_s^{i} \operatorname{Li}\_s(z) \right|\_{\left(s,z\right) = \left(1, \alpha\_b^j\right)^q},$$

where Lis(z) is the polylogarithm function. Hence, we have

$$\left(\chi\_q\left(\frac{a}{b}\right) - \chi\_q = (-1)^{q+1} \sum\_{i=0}^q \binom{q}{i} (\ln b)^{q-i} \sum\_{j=0}^{b-1} (1 - \omega\_b^{-aj}) D\_s^j \operatorname{Li}\_s(\mathbf{z})\big|\_{(s,\mathbf{z}) = (1,\omega\_b^j)}\right)$$

We note that a more practical formula was derived in the special case when q = 1 by Blagouchine [19] as a generalization of Gauss' digamma theorem.

#### **10.8 Higher Order Derivatives of the Hurwitz Zeta Function**

Let <sup>s</sup> <sup>∈</sup> <sup>R</sup> \ {1} and <sup>q</sup> <sup>∈</sup> <sup>N</sup>. Differentiating <sup>q</sup> times both sides of (10.15) we obtain

$$
\zeta^{(q)}(s,x+1) - \zeta^{(q)}(s,x) = \left(-1\right)^{q+1} x^{-s} (\ln x)^q, \qquad x > 0,
$$

where ζ (q)(s, x) stands for D<sup>q</sup> <sup>s</sup> ζ (s, x). This equation shows that the investigation of the higher order derivatives of the Hurwitz zeta function can be carried out using our results. To keep our presentation simple, we will focus on some selected results only.

The interested reader can find an earlier study of these functions in Ramanujan's second notebook [18, p. 36 *et seq.*].

**ID Card** The following basic information can be easily derived.


We observe that this investigation can be regarded as a simultaneous generalization of the studies of the Hurwitz zeta function and the generalized Stieltjes constants. For the latter, we observe that

$$(-1)^q \lim\_{s \to 1} \mathcal{g}\_{s,q}(\mathfrak{x}) = -\frac{1}{\mathfrak{x}} (\ln \mathfrak{x})^q.$$

Setting s = 0 in our results may also be very informative as it produces formulas involving the well-studied quantities <sup>ζ</sup> (q)(0) and <sup>ζ</sup> (q)(0,x)−<sup>ζ</sup> (q)(0) for any <sup>q</sup> <sup>∈</sup> <sup>N</sup>.

*Project 10.11* Find a closed-form expression for the integral

$$\int\_{1}^{\chi} \chi\_{q}(t) \, dt.$$

We apply Proposition 8.20 to gq (x) = −<sup>1</sup> <sup>x</sup> (ln x)<sup>q</sup> . Using (10.19) we obtain

$$\int\_{1}^{\chi} \chi\_{q}(t) \, dt = \Sigma\_{\chi} \int\_{\chi}^{\chi+1} \chi\_{q}(t) \, dt \, = \begin{array}{c} -\frac{1}{q+1} \, \Sigma(\ln x)^{q+1} \\\\ = \frac{(-1)^{q+1}}{q+1} \, \Sigma g\_{0,q+1}(\chi) \end{array}$$

♦

that is,

$$\int\_1^\chi \chi\_q(t) \, dt \, = \frac{(-1)^{q+1}}{q+1} \left( \xi^{(q+1)}(0, \chi) - \xi^{(q+1)}(0) \right).$$

In particular,

$$\gamma\_q(\mathbf{x}) = \frac{(-1)^{q+1}}{q+1} D\_{\mathbf{x}} \zeta^{(q+1)}(0, \mathbf{x}) \,.$$

**Analogue of Bohr-Mollerup's Theorem** The function ζ (q)(s, x) can be characterized as follows.

*All solutions* fs,q : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_{\mathfrak{k},q}(\mathfrak{x}+1) - f\_{\mathfrak{k},q}(\mathfrak{x}) \, = \, \mathfrak{g}\_{\mathfrak{k},q}(\mathfrak{x}),$$

*that lie in <sup>K</sup>*<sup>1</sup>−s<sup>+</sup> *are of the form* fs,q (x) <sup>=</sup> cs,q <sup>+</sup> <sup>ζ</sup> (q)(s, x), *where* cs,q <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** The asymptotic constant σ[gs,q] satisfies the identity

$$
\sigma[\mathfrak{g}\_{s,q}] = \int\_0^1 \zeta^{(q)}(s, t+1) \, dt - \zeta^{(q)}(s) \, \, \frac{-q!}{(1-s)^{q+1}} - \zeta^{(q)}(s) \, .
$$

Hence we have the following values


• *Alternative representations of* σ[gs,q]

$$\begin{split} \sigma[g\_{s,q}] &= \lim\_{n \to \infty} \left( \sum\_{k=1}^{n-1} g\_{s,q}(k) - \int\_{1}^{n} g\_{s,q}(t) \, dt + \sum\_{j=1}^{\lfloor 1-s \rfloor\_{+}} G\_{j} \Delta^{j-1} g\_{s,q}(n) \right), \\ \sigma[g\_{s,q}] &= \sum\_{j=1}^{\lfloor 1-s \rfloor\_{+}} G\_{j} \Delta^{j-1} g\_{s,q}(1) \\ &- \sum\_{k=1}^{\infty} \left( \int\_{k}^{k+1} g\_{s,q}(t) \, dt - \sum\_{j=0}^{\lfloor 1-s \rfloor\_{+}} G\_{j} \Delta^{j} g\_{s,q}(k) \right). \end{split}$$

Setting s = 0 in the previous formulas, we obtain

$$(-1)^{q}(q! + \xi^{(q)}(0)) = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} (\ln k)^{q} - \int\_{1}^{n} (\ln t)^{q} \, dt - \frac{1}{2} (\ln n)^{q} \right).$$

$$= \sum\_{k=1}^{\infty} \left( \frac{1}{2} (\ln k)^{q} - \int\_{k}^{k+1} (\ln t)^{q} \, dt \right).$$

The left-hand quantity can actually be related to the Stieltjes constants in a very simple way. Indeed, on differentiating both sides of (10.18), we obtain the following surprising identity

$$(-1)^q (q! + \zeta^{(q)}(0)) \ = \sum\_{n=0}^{\infty} \frac{\varkappa\_{n+q}}{n!} \ .$$

• *Generalized Binet's function*. For any <sup>r</sup> <sup>∈</sup> <sup>N</sup> and any x > <sup>0</sup>

$$J^{r+1}[\Sigma g\_{s,q}](\mathbf{x}) = \xi^{(q)}(\mathbf{s}, \mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} \xi^{(q)}(\mathbf{s}, t) \, dt + \sum\_{j=1}^{r} G\_j \, \Delta^{j-1} g\_{s,q}(\mathbf{x}) \, \mathbf{x}$$

• *Analogue of Raabe's formula*. We have

$$\int\_{1}^{\chi} g\_{s,q}(t) \, dt \, = \, \frac{q! - \Gamma(q+1, (s-1)\ln x)}{(1-s)^{q+1}}, \quad x > 0, 1$$

and hence the analogue of Raabe's formula is

$$\begin{aligned} \int\_{\chi}^{\chi+1} \xi^{(q)}(s,t) \, dt &= -\frac{\Gamma(q+1,(s-1)\ln x)}{(1-s)^{q+1}} \\ &= -q! \frac{\chi^{1-s}}{(1-s)^{q+1}} \sum\_{j=0}^{q} \frac{((s-1)\ln x)^j}{j!}, \quad x > 0. \end{aligned}$$

**Generalized Stirling's and Related Formulas** For any a ≥ 0 we have

$$
\xi^{(q)}(s, x + a) - \xi^{(q)}(s, x) - \sum\_{j=1}^{\lfloor 1 - s \rfloor\_+ } \binom{a}{j} \Delta^{j-1} g\_{s, q}(\mathbf{x}) \to 0 \qquad \text{as } x \to \infty,
$$

with equality if a ∈ {0, 1,..., 1 − s+}. Also, we have the following analogue of Stirling's formula

$$\zeta^{(q)}(\mathbf{s},\mathbf{x}) - \int\_{\mathbf{x}}^{\mathbf{x}+1} \zeta^{(q)}(\mathbf{s},t) \, dt + \sum\_{j=1}^{\lfloor 1-s \rfloor\_+ } G\_j \, \Delta^{j-1} g\_{\mathbf{s},q}(\mathbf{x}) \to \mathbf{0} \qquad \text{as } \mathbf{x} \to \infty.$$

Setting s = 0 in this latter formula and then simplifying the resulting expression, we obtain

$$\xi^{(q)}(0,x) + \Gamma(q+1,-\ln x) + \frac{1}{2}(-1)^{q+1}(\ln x)^q \to 0 \qquad \text{as } x \to \infty.$$

We also have

$$
\zeta^{(q)}(\mathbf{s}, \mathbf{x} + a) \sim \int\_{\mathbf{x}}^{\mathbf{x} + 1} \zeta^{(q)}(\mathbf{s}, t) \, dt \qquad \text{as } \mathbf{x} \to \infty.
$$

Finally, if s > −1, then we have the following analogue of Burnside's formula

$$
\zeta^{(q)}(s,x) - \int\_{\varkappa-\frac{1}{2}}^{\varkappa+\frac{1}{2}} \zeta^{(q)}(s,t) \, dt \to 0 \,, \qquad \text{as } x \to \infty,
$$

which provides a better approximation of ζ <sup>q</sup> (s, x) than the analogue of Stirling's formula.

**Eulerian and Weierstrassian Forms** If s > 1, then for any x > 0, we simply have

$$\left(\zeta^{(q)}(s,x)\right) = -\sum\_{k=0}^{\infty} \mathfrak{g}\_{s,q}(x+k)^k$$

and this series converges uniformly on <sup>R</sup><sup>+</sup> and can be integrated and differentiated term by term. If s < 1, then for any x > 0, we obtain the following Eulerian form

$$\begin{aligned} \xi^{(q)}(s,\boldsymbol{x}) - \xi^{(q)}(\boldsymbol{s}) &= -g\_{s,q}(\boldsymbol{\alpha}) + \sum\_{j=0}^{\lfloor -s \rfloor} \binom{\boldsymbol{x}}{j+1} \Delta^{j} g\_{s,q}(\boldsymbol{1}) \\ &+ \sum\_{k=1}^{\infty} \left( -g\_{s,q}(\boldsymbol{x}+k) + \sum\_{j=0}^{\lfloor 1-s \rfloor} \binom{\boldsymbol{x}}{j} \Delta^{j} g\_{s,q}(\boldsymbol{k}) \right) \end{aligned}$$

and the Weierstrassian form can be obtained similarly. Both associated series converge uniformly on any bounded subset of [0,∞) and can be integrated and differentiated term by term. Note that the case where (s, q) = (0, 2) can be found in Ramanujan's second notebook [18, p. 26–27].

**Gregory's Formula-Based Series Representation** For any x > 0 satisfying the assumptions of Proposition 8.11, we have

$$\begin{aligned} \xi^{(q)}(s,\mathbf{x}) &= \int\_{\boldsymbol{x}}^{\boldsymbol{x}+1} \xi^{(q)}(s,\mathbf{t}) \, dt - \sum\_{n=0}^{\infty} G\_{n+1} \Delta^{n} g\_{s,q}(\mathbf{x}) \\ &= \int\_{\boldsymbol{x}}^{\boldsymbol{x}+1} \xi^{(q)}(s,\mathbf{t}) \, dt - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^{k} \binom{n}{k} \, g\_{s,q}(\mathbf{x}+k) \,. \end{aligned}$$

Setting x = 1 in this identity (provided that x = 1 satisfies the assumptions of Proposition 8.11) yields a series expression for ζ (q)(s) that is the analogue of Fontana-Mascheroni series

$$\zeta^{(q)}(s) = \frac{-q!}{(1-s)^{q+1}} - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \, \_g\zeta\_{s,q}(k+1)\dots$$

which can also be obtained differentiating the analogue of Fontana-Mascheroni series for the Hurwitz zeta function. For instance, we have

$$\xi''(0) = \left. -2 + \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \left( \ln(k+1) \right)^2 \right| $$

and this latter value is also known to be (see, e.g., Berndt [18, p. 25])

$$
\frac{1}{2}\gamma^2 - \frac{\pi^2}{24} - \frac{1}{2}(\ln(2\pi))^2 + \gamma\_1 \dots
$$

**Analogue of Gauss' Multiplication Formula** Upon differentiating the analogue of Gauss' multiplication formula for the Hurwitz zeta function, we immediately obtain the following multiplication formula. For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \xi^{(q)}\left(s, \frac{x+j}{m}\right) \\ = \left. m^s \sum\_{j=0}^q \binom{q}{j} (\ln m)^{q-j} \, \xi^{(j)}(s, x) \right\vert\_{\xi}$$

Moreover, Corollary 8.33 provides the following limit for any x > 0 and any s < 1

$$\lim\_{m \to \infty} \sum\_{j=0}^{q} \binom{q}{j} (\ln m)^{q-j} \frac{\zeta^{(j)}(s, mx)}{m^{1-s}} = \quad - \frac{\Gamma(q+1, (s-1)\ln x)}{(1-s)^{q+1}}.$$

Also, for any s = 1, we have

$$\lim\_{m \to \infty} \sum\_{j=0}^{q} \binom{q}{j} (\ln m)^{q-j} \frac{\zeta^{(j)}(s, m\mathbf{x}) - \zeta^{(j)}(s, m)}{m^{1-s}} = \frac{q! - \Gamma(q+1, (s-1)\ln x)}{(1-s)^{q+1}} \dots$$

**Analogue of Wallis's Product Formula** When s < 1, the form of the analogue of Wallis's product formula strongly depends upon the value of s. If s > 1, then we have

$$\begin{aligned} \eta^{(q)}(s) &= \sum\_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^s} \left(-\ln k\right)^q \\ &= \xi^{(q)}(s) - 2^{1-s} \sum\_{j=0}^q \binom{q}{j} \left(\ln \frac{1}{2}\right)^{q-j} \xi^{(j)}(s), \end{aligned}$$

where s → η(s) is Dirichlet's eta function. Just as we did for the formulas (10.21) and (10.22), we can easily establish the following conversion formulas for s > 1

$$\begin{aligned} \mu\_q(s) &= \sum\_{k=0}^{q-1} \binom{q}{k} \left(\ln\frac{1}{2}\right)^{q-k} \zeta^{(k)}(s) \,, & q \in \mathbb{N}, \\\zeta^{(q)}(s) &= \sum\_{k=0}^{q} \binom{q}{k} \frac{B\_{q-k}}{k+1} \left(\ln\frac{1}{2}\right)^{q-k-1} \mu\_{k+1}(s) \,, & q \in \mathbb{N}, \end{aligned}$$

where

$$\mu\_q(\mathfrak{s}) := \mathcal{Z}^{s-1}(\zeta^{(q)}(\mathfrak{s}) - \eta^{(q)}(\mathfrak{s})) - \zeta^{(q)}(\mathfrak{s})\,, \qquad q \in \mathbb{N}.$$

#### **10.9 The Catalan Number Function**

The Catalan number function is the restriction to <sup>R</sup><sup>+</sup> of the map <sup>x</sup> → Cx defined on (−<sup>1</sup> <sup>2</sup> ,∞) by

$$C\_{\chi} := \frac{1}{x+1} \binom{2x}{x} \cdot \frac{1}{x}$$

This function satisfies the equation

$$C\_{\ge +1} := \left(4 - \frac{6}{\ge +2}\right) C\_{\ge -2}$$

The additive version of this equation reads f = g, where the function g is the logarithm of a rational function. We observe that such equations have been thoroughly investigated by Anastassiadis [7, p. 41] (see also Kuczma [57]).

The equation above shows that the Catalan number function can be investigated using our results. Let us briefly study this function.

**ID Card** The function Cx is clearly a --type function and we immediately derive the following basic information.


**Analogue of Bohr-Mollerup's Theorem** The function Cx can be characterized as follows.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> *to the equation*

$$(\mathbf{x} + \mathbf{2})f(\mathbf{x} + \mathbf{1}) = (4\mathbf{x} + \mathbf{2})f(\mathbf{x})$$

*for which* ln <sup>f</sup> *lies in <sup>K</sup>*<sup>1</sup> *are of the form* f (x) <sup>=</sup> c Cx , *where* c > 0.

**Extended ID Card** We have the following values:


We also have the inequality

$$|\wp[\lg]| \le \frac{25}{8}\ln 5 + \frac{39}{8}\ln 3 - 16\ln 2 + \frac{3}{4} \approx 0.04$$

and the following representations

$$\begin{aligned} \nu[\mathbf{g}] &= \int\_1^\infty \frac{\Im(\{t\} - \frac{1}{2})}{(t+2)(2t+1)} \, dt \,, \\\sigma[\mathbf{g}] &= \int\_0^1 \ln C\_{l+1} \, dt .\end{aligned}$$

Moreover, the analogue of Raabe's formula is

$$\int\_{\chi}^{\chi+1} \ln C\_I \, dt \, = \ln \left( \frac{e^{\frac{3}{2}} (4\chi + 2)^{\chi + \frac{1}{2}}}{\sqrt{\pi} \, (\chi + 2)^{\chi + 2}} \right), \qquad \chi > 0.$$

#### **Generalized Stirling's and Related Formulas** For any a ≥ 0, we have

$$\frac{C\_{\ge +a}}{C\_{\ge}} \sim 4^a \qquad \text{and} \qquad C\_{\ge} \sim \frac{4^\times}{\ge^{3/2} \sqrt{\pi}} \qquad \text{as } \ge \infty.$$

Also, the analogue of Burnside's formula gives

$$\ln C\_{\chi} - \ln \left( \frac{e^{\frac{3}{2}} (4\chi)^{\chi}}{\sqrt{\pi} \left(\chi + \frac{3}{2}\right)^{\chi + \frac{3}{2}}} \right) \to 0 \qquad \text{as } \chi \to \infty.$$

**Restriction to the Natural Integers** For any <sup>n</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have

$$C\_n = \frac{1}{n+1} \binom{2n}{n}.$$

**Eulerian and Weierstrassian Forms** For any x > 0, we have

$$C\_{\chi} = \frac{\chi + 2}{4\chi + 2} 2^{\chi} \prod\_{k=1}^{\infty} \frac{\left(2 - \frac{3}{k+3}\right)^{\chi}}{\left(2 - \frac{3}{k+2}\right)^{\chi - 1} \left(2 - \frac{3}{\chi + k + 2}\right)}$$

and

$$C\_{\chi} = \frac{\chi + 2}{4\chi + 2} e^{-\frac{\chi}{2}} \prod\_{k=1}^{\infty} \frac{1 + \frac{\chi}{k+2}}{1 + \frac{2\chi}{2k+1}} e^{\frac{3\chi}{(k+2)(2k+1)}} \dots$$

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

#### **Chapter 11 Defining New Multiple log** *-***-Type Functions**

In the previous chapter, we tested our results on some multiple log--type functions that are well-known special functions. It is clear, however, that there are many other multiple log--type functions that are still to be introduced and investigated, simply as principal indefinite sums of standard functions.

In this chapter, we introduce and investigate the following functions (we use the acronym PIS for "principal indefinite sum")


The latter two examples are examined here in a broad way. A deeper investigation of these examples can be carried out simply by following all the steps and recipes given in Chap. 9.

#### **11.1 The PIS of the Digamma Function**

Let us see what our theory tells us when g(x) = ψ(x) is the digamma function. We first observe that <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*∞.

Using summation by parts, we can easily see that

$$
\Sigma \psi(x) = (x - 1)(\psi(x) - 1)\dots
$$

Moreover, from the identity Hx−<sup>1</sup> = ψ(x) + γ , we obtain immediately

$$
\Sigma\_{\underline{x}} H\_{\underline{x}-1} = (\underline{x}-1)(H\_{\underline{x}-1}-1)\dots
$$

This example may seem very basic at first glance, but since Hx is the discrete analogue of the function ln x, we expect an important analogy between ψ(x) and ln x = ln -(x), at least in terms of asymptotic behaviors. Actually, the analogue of Burnside's formula shows that the function

$$
\ln \Gamma \left( x - \frac{1}{2} \right) + \frac{1}{2} (1 - \ln(2\pi))
$$

is a very good approximation of ψ(x).

Interestingly, using (10.12) we can easily derive the following additional identity

$$
\Delta \psi(\mathbf{x}) = \frac{1}{2} (1 - \ln(2\pi)) + D \ln G(\mathbf{x}), \qquad \mathbf{x} > \mathbf{0},
$$

where G is the Barnes G-function (see Sect. 10.5).

*Project 11.1* Find a closed-form expression for the function xψ<sup>2</sup>(x). Using again summation by parts, we obtain

$$
\Delta\_{\mathbf{x}} \psi^2(\mathbf{x}) = (\mathbf{x} - \mathbf{1})\,\psi^2(\mathbf{x}) - (2\mathbf{x} - \mathbf{1})\,\psi(\mathbf{x}) + 2\mathbf{x} - 2 - \mathbf{y}.
$$

We also note that the function <sup>ψ</sup>2(x) lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup>*D*1∩*K*∞, just as does the function ψ(x). The investigation of this new function in the light of our results is left to the reader. ♦

**ID Card** The following basic information about the functions ψ(x) and ψ(x) follows trivially from the discussion above.


**Analogue of Bohr-Mollerup's Theorem** The function ψ(x) can be characterized as follows.

*All eventually convex or concave solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f(\mathbf{x} + \mathbf{l}) - f(\mathbf{x}) = \psi(\mathbf{x})$$

*are of the form* f (x) <sup>=</sup> <sup>c</sup> <sup>+</sup> ψ(x), *where* <sup>c</sup> <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** It is not difficult to see that

$$
\sigma[\mathfrak{g}] = \int\_0^1 \Sigma \psi(t+1) \, dt \, = \frac{1}{2} (1 - \ln(2\pi)).
$$

Hence we have the values


• *Alternative representations of* σ[g]

$$\begin{aligned} \sigma[\mathbf{g}] &= -\frac{1}{2}\mathcal{Y} - \sum\_{k=1}^{\infty} \left( \ln k - \psi(k) - \frac{1}{2k} \right), \\ \sigma[\mathbf{g}] &= -\frac{1}{2}\mathcal{Y} + \int\_{1}^{\infty} \left( \{t\} - \frac{1}{2} \right) \psi\_{1}(t) \, dt, \\ \sigma[\mathbf{g}] &= \lim\_{n \to \infty} \left( \left( n - \frac{1}{2} \right) \psi(n) - \ln \Gamma(n) - n + 1 \right). \end{aligned}$$

• *Alternative representations of* γ [g]

$$
\sigma[\mathbf{g}] = \int\_1^\infty \left( \psi(\lfloor t \rfloor) - \psi(t) + \frac{1}{2\lfloor t \rfloor} \right) dt,
$$

$$
\sigma[\mathbf{g}] = \int\_1^\infty \left( \psi(\lfloor t \rfloor) - \psi(t) + \frac{\{t\}}{\lfloor t \rfloor} \right) dt.
$$

• *Generalized Binet's function*. For any <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0,

$$\begin{aligned} J^{q+1}[\Sigma \psi](\mathbf{x}) &= \Sigma \psi(\mathbf{x}) - \frac{1}{2} (1 - \ln(2\pi)) - \ln \Gamma(\mathbf{x}) + \frac{1}{2} \,\psi(\mathbf{x}), \\ &+ \sum\_{j=0}^{q-2} G\_{j+2}(-1)^{j} \,\mathbf{B}(j+1, \mathbf{x}), \end{aligned}$$

where (x, y) → B(x, y) is the beta function.

• *Analogue of Raabe's formula*

$$\int\_{\chi}^{\chi+1} \Sigma \psi(t) \, dt \, = \frac{1}{2} (1 - \ln(2\pi)) + \ln \Gamma(\chi), \qquad \chi > 0.$$

• *Alternative characterization*. The function f = ψ is the unique solution lying in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> to the equation

$$\int\_{\chi}^{\chi+1} f(t) \, dt \, = \frac{1}{2} (1 - \ln(2\pi)) + \ln \Gamma(\chi), \qquad \chi > 0.$$

.

**Inequalities** The following inequalities hold for any x > 0, any a ≥ 0, and any <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗.

• *Symmetrized generalized Wendel's inequality* (equality if a ∈ {0, 1})

$$\begin{split} \left| \Sigma \psi(\mathbf{x} + a) - \Sigma \psi(\mathbf{x}) - a \psi(\mathbf{x}) \right| &\leq \left| a - 1 \right| \left| \psi(\mathbf{x} + a) - \psi(\mathbf{x}) \right| \\ &\leq \left\lceil a \right\rceil \frac{\left| a - 1 \right|}{\chi} . \end{split}$$

• *Symmetrized generalized Wendel's inequality* (discrete version)

$$|\left|\Sigma\psi(\mathbf{x}) - f\_n^{\mathbf{I}}[\psi](\mathbf{x})\right| \le |\mathbf{x} - \mathbf{I}| \left|\psi(n + \mathbf{x}) - \psi(n)\right| \le \left\lceil \mathbf{x} \right\rceil \frac{|\mathbf{x} - \mathbf{I}|}{n},$$

where

$$[f\_n^{\mathbb{I}}[\psi](\mathbf{x}) = (n+\mathbf{x}-\mathbf{l})(\psi(n) - \psi(\mathbf{x}+\mathbf{n})) + (\mathbf{x}-\mathbf{l})\,\psi(\mathbf{x}) + \mathbf{l}.$$

• *Symmetrized Stirling's formula-based inequalities*

$$\left|\Sigma\psi\left(\mathbf{x} + \frac{1}{2}\right) - \frac{1}{2}(1 - \ln(2\pi)) - \ln\Gamma(\mathbf{x})\right|$$

$$\leq \left|\Sigma\psi(\mathbf{x}) - \frac{1}{2}(1 - \ln(2\pi)) - \ln\Gamma(\mathbf{x}) + \frac{1}{2}\psi(\mathbf{x})\right|$$

$$\leq x\ln\mathbf{x} - \ln\Gamma(\mathbf{x}) - \frac{1}{2}\psi(\mathbf{x}) - \mathbf{x} + \frac{1}{2}\ln(2\pi) \leq \frac{1}{2\pi}$$

• *Generalized Gautschi's inequality*

$$(a - \lceil a \rceil) \,\psi(\mathbf{x} + \lceil a \rceil) \le (a - \lceil a \rceil) \,(\Sigma \psi)'(\mathbf{x} + \lceil a \rceil)$$

$$\le (\Sigma \psi)(\mathbf{x} + a) - (\Sigma \psi)(\mathbf{x} + \lceil a \rceil)$$

$$\le (a - \lceil a \rceil) \,\psi(\mathbf{x} + \lfloor a \rfloor).$$

**Generalized Stirling's and Related Formulas** For any a ≥ 0, we have the following limits and asymptotic equivalence as x → ∞,

$$
\Sigma \psi(\mathbf{x} + a) - \Sigma \psi(\mathbf{x}) - a \psi(\mathbf{x}) \to 0, \qquad \Sigma \psi(\mathbf{x} + a) \sim \ln \Gamma(\mathbf{x}),
$$

$$
\Sigma \psi(\mathbf{x}) - \ln \Gamma(\mathbf{x}) + \frac{1}{2} \psi(\mathbf{x}) \to \frac{1}{2} (1 - \ln(2\pi)),
$$

$$
\Sigma \psi(\mathbf{x}) - \ln \Gamma \left(\mathbf{x} - \frac{1}{2}\right) \to \frac{1}{2} (1 - \ln(2\pi)).
$$

**Asymptotic Expansions** For any <sup>q</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have the following expansion as <sup>x</sup> <sup>→</sup> ∞

$$\Sigma \psi(\mathbf{x}) = \frac{1}{2} (1 - \ln(2\pi)) + \sum\_{k=0}^{q} \frac{B\_k}{k!} \psi\_{k-1}(\mathbf{x}) + O(\psi\_q(\mathbf{x})).$$

Setting q = 3 for instance, we get

$$
\Delta\psi(\mathbf{x}) = \frac{1}{2}(1 - \ln(2\pi)) + \ln\Gamma(\mathbf{x}) - \frac{1}{2}\psi(\mathbf{x}) + \frac{1}{12}\psi\_1(\mathbf{x}) + O(\mathbf{x}^{-3}).
$$

**Generalized Liu's Formula** For any x > 0, we have

$$\Sigma \psi(\mathbf{x}) = \frac{1}{2} (1 - \ln(2\pi)) + \ln \Gamma(\mathbf{x}) - \frac{1}{2} \psi(\mathbf{x}) - \int\_0^\infty \left( \{t\} - \frac{1}{2} \right) \psi\_\mathbf{l}(\mathbf{x} + t) \, dt.$$

**Limit and Series Representations** Let us briefly examine the main limit and series representations of ψ(x). The additional representations obtained by differentiation and integration are left to the reader.

• *Eulerian and Weierstrassian forms*. We have

$$\begin{aligned} \Sigma \psi(\mathbf{x}) &= \left( -\gamma \mathbf{x} - \psi(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( \psi(\mathbf{x} + k) - \psi(k) - \frac{\mathbf{x}}{k} \right), \\\\ \Sigma \psi(\mathbf{x}) &= \left( -(1+\gamma)\mathbf{x} - \psi(\mathbf{x}) - \sum\_{k=1}^{\infty} \left( \psi(\mathbf{x}+k) - \psi(k) - \mathbf{x} \,\psi\_1(k) \right), \dots \right) \end{aligned}$$

• *Analogue of Gauss' limit*. We have

$$
\Delta \tilde{\boldsymbol{\psi}}(\mathbf{x}) := (\mathbf{x} - \mathbf{1})\,\boldsymbol{\psi}(\mathbf{x}) + \mathbf{1} + \lim\_{n \to \infty} (n + \mathbf{x} - \mathbf{1})(\boldsymbol{\psi}(n) - \boldsymbol{\psi}(\mathbf{x} + n)).
$$

**Gregory's Formula-Based Series Representation** For any x > 0 we have

$$\Sigma \psi(\mathbf{x}) = \frac{1}{2} (\mathbf{l} - \ln(2\pi)) + \ln \Gamma(\mathbf{x}) - \frac{1}{2} \psi(\mathbf{x}) + \sum\_{n=0}^{\infty} |G\_{n+2}| \mathbf{B}(n+1, \mathbf{x}) \dots$$

Setting x = 1 in this identity yields the following analogue of Fontana-Mascheroni's series

$$\sum\_{n=2}^{\infty} \frac{|G\_n|}{n-1} = \left| -\frac{1}{2} + \frac{1}{2}\ln(2\pi) - \frac{1}{2}\mathcal{Y}\_n \right| $$

and the right-hand value is precisely the generalized Euler constant γ [ψ] associated with the digamma function. We also observe that this latter identity was obtained by Kowalenko [52, p. 431].

**Analogue of Gauss' Multiplication Formula** Since we do not have any simple expression for the function xψ( <sup>x</sup> <sup>m</sup> ), it seems difficult to find a usable multiplication formula here. We had the same difficulty in the investigation of the Barnes Gfunction (see Sect. 10.5). However, we can use Proposition 8.30 to derive the following convergence result. For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> we have

$$\sum\_{j=0}^{m-1} \Sigma \psi \left( \frac{\mathbf{x} + j}{m} \right) - m \ln \Gamma \left( \frac{\mathbf{x}}{m} \right) + \frac{1}{2} \psi \left( \frac{\mathbf{x}}{m} \right) \to \frac{m}{2} \left( 1 - \ln(2\pi) \right) \quad \text{as } \mathbf{x} \to \infty.$$

**Analogue of Wallis's Product Formula** The following analogue of Wallis's formula was already found in Project 10.1

$$\lim\_{n \to \infty} \left( -\ln(4n) + 2\sum\_{k=1}^{2n} (-1)^k \psi(k) \right) \\ = \mathcal{N} \cdot \mathcal{J}$$

**Generalized Webster's Functional Equation** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, there is a unique eventually monotone solution <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> to the equation

$$\sum\_{j=0}^{m-1} f\left(x + \frac{j}{m}\right) = \psi(x)$$

namely

$$f(\mathbf{x}) = \Sigma \psi \left(\mathbf{x} + \frac{1}{m}\right) - \Sigma \psi(\mathbf{x}).$$

**Analogue of Euler's Series Representation of** *γ* We have (ψ) (1) = −1 − γ and

$$(\Sigma \psi)^{(k)}(1) \ = k \,\psi\_{k-1}(1) \ = (-1)^k k! \,\zeta(k), \qquad k \ge 2 \dots$$

The Taylor series expansion of ψ(x + 1) about x = 0 is

$$
\Sigma \psi(\mathbf{x} + \mathbf{l}) = (-1 - \gamma)\mathbf{x} + \sum\_{k=2}^{\infty} \xi(k)(-\mathbf{x})^k, \qquad |\mathbf{x}| < 1.
$$

Integrating both sides of this equation on (0, 1), we obtain

$$\sum\_{k=2}^{\infty} (-1)^k \frac{\xi(k)}{k+1} = \left[ 1 + \frac{1}{2} (\wp - \ln(2\pi)) \right]$$

**Analogue of the Reflection Formula** For any <sup>x</sup> <sup>∈</sup> <sup>R</sup> \ <sup>Z</sup>, we have

$$
\Sigma \psi(1+\mathfrak{x}) + \Sigma \psi(1-\mathfrak{x}) = |1 - \pi \ge \cot(\pi \mathfrak{x}) .
$$

#### **11.2 The PIS of the Hurwitz Zeta Function**

In this section we apply our theory to investigate the function

$$\text{ax} \mapsto \zeta\_2(\mathbf{s}, \mathbf{x}) \stackrel{\text{def}}{=} \Sigma\_{\mathbf{x}} \zeta(\mathbf{s}, \mathbf{x}),$$

for any fixed <sup>s</sup> <sup>∈</sup> <sup>R</sup> \ {1}.

Using summation by parts, we observe that if s = 2 we have

$$
\zeta\_2(\mathbf{s}, \mathbf{x}) = (\mathbf{x} - \mathbf{l})\,\zeta(\mathbf{s}, \mathbf{x}) - \zeta(\mathbf{s} - \mathbf{l}, \mathbf{x}) + \zeta(\mathbf{s} - \mathbf{l}).
$$

If s = 2, then

$$
\zeta\_2(2, \mathbf{x}) = \, ^\flat \Sigma\_\mathbf{x} \psi\_\mathbf{l}(\mathbf{x}) \, = \, ^\flat (\mathbf{x} - \mathbf{l}) \, \psi\_\mathbf{l}(\mathbf{x}) + \psi(\mathbf{x}) + \mathbf{y} \,.
$$

To keep this investigation simple, here we focus on some selected results only and we restrict ourselves to the case when s > 2, for which the sequence n → ζ (s, n) is summable. In this case, by (6.23) we obtain immediately the following surprising identity (see also Paris [83])

$$\sum\_{k=1}^{\infty} \xi(s,k) \, = \, \xi(s-1).$$

We also have

$$\int\_{1}^{\infty} \zeta(s, t) \, dt \, = \frac{\zeta(s - 1)}{s - 1} \,.$$

**ID Card** We can easily summarize the basic information as follows:


**Analogue of Bohr-Mollerup's Theorem** The function ζ2(s, x) can be characterized as follows.

*All eventually monotone solutions* fs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation*

$$f\_{\mathbf{s}}(\mathbf{x} + \mathbf{l}) - f\_{\mathbf{s}}(\mathbf{x}) = \xi(\mathbf{s}, \mathbf{x})$$

*are of the form* fs(x) <sup>=</sup> cs <sup>+</sup> <sup>ζ</sup>2(s, x), *where* cs <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** We immediately have

$$\sigma[\mathbf{g}\_s] = \sum\_{k=1}^{\infty} \xi(s,k) - \int\_1^{\infty} \xi(s,t) \, dt \, = \frac{s-2}{s-1} \xi(s-1) \dots$$

Hence we have the values


• *Alternative representations of* σ[gs] = γ [gs]

$$
\sigma[\mathbf{g}\_s] = \int\_0^1 \xi\_2(\mathbf{s}, t+1) \, dt \, = \int\_1^\infty \left( \xi(\mathbf{s}, \lfloor t \rfloor) - \xi(\mathbf{s}, t) \right) \, dt \, \,.
$$

$$
\sigma[\mathbf{g}\_s] = \frac{1}{2} \xi(\mathbf{s}) + s \int\_1^\infty \left( \frac{1}{2} - \{t\} \right) \xi(\mathbf{s}+1, t) \, dt \,.
$$

• *Analogue of Raabe's formula*

$$\int\_{\chi}^{\chi+1} \xi\_2(s,t) \, dt \, = \,\, \xi(s-1) - \frac{\xi(s-1,x)}{s-1}, \qquad x > 0.$$

**Inequalities and Asymptotic Analysis** For any a ≥ 0 and any x > 0, we have

$$\begin{aligned} \left| \xi\_2(s, x + a) - \xi\_2(s, x) \right| &\le \lceil a \rceil \xi(s, x), \\\left| \xi\_2(s, x) - \xi(s - 1) + \frac{\xi(s - 1, x)}{s - 1} \right| &\le \xi(s, x). \end{aligned}$$

In particular, we have

$$
\xi\_2(s, x) \to \xi(s - 1) \qquad \text{as } x \to \infty.
$$

**Generalized Liu's Formula** For any x > 0 we have

$$\begin{aligned} \xi\_2(s,x) &= \xi(s-1) - \frac{\xi(s-1,x)}{s-1} - \frac{1}{2}\xi(s,x) \\ &+ s \int\_0^\infty \left( \{t\} - \frac{1}{2} \right) \xi(s+1, x+t) \, dt. \end{aligned}$$

**Eulerian and Weierstrassian Forms** For any x > 0, we have

$$
\xi\_2(s, x) := \xi(s - 1) - \sum\_{k=0}^{\infty} \xi(s, x + k).
$$

and this series converges uniformly on <sup>R</sup><sup>+</sup> and can be integrated and differentiated term by term.

#### **Gregory's Formula-Based Series Representation** For any x > 0 we have

$$\begin{aligned} \xi\_2(s,x) &= \xi(s-1) - \frac{\xi(s-1,x)}{s-1} - \sum\_{n=0}^{\infty} G\_{n+1} \, \Delta\_x^n \xi(s,x) \\ &= \xi(s-1) - \frac{\xi(s-1,x)}{s-1} - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^n (-1)^k \binom{n}{k} \xi(s,x+k) \dots \end{aligned}$$

Setting x = 1 in this identity yields the analogue of Fontana-Mascheroni series

$$\sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \zeta(s, k+1) \ = \frac{s-2}{s-1} \zeta(s-1) \dots$$

#### **Analogue of Wallis's Product Formula** The analogue of Wallis's formula is

$$\sum\_{k=1}^{\infty} (-1)^{k-1} \zeta(s, k) = (2 - 2^{1-s})\zeta(s) + (1 - 2^{1-s})\zeta(s - 1).$$

$$-2^{1-s} \sum\_{k=0}^{\infty} \zeta\left(s, k + \frac{1}{2}\right).$$

This formula is actually obtained by combining Proposition 6.7 with the duplication formula for the Hurwitz zeta function

$$2\xi(s,2x) = 2^{1-s}\xi(s,x) + 2^{1-s}\xi\left(s,x+\frac{1}{2}\right).$$

On the other hand we also have (see Paris [83])

$$\sum\_{k=1}^{\infty} (-1)^{k-1} \zeta(s,k) = \left(1 - 2^{-s}\right) \zeta(s), \qquad s > 1.$$

Combining this formula with the analogue of Wallis's formula, we derive the following identity

$$\sum\_{k=0}^{\infty} \xi\left(s, k + \frac{1}{2}\right) \\ = \left(2^{s-1} - 2^{-1}\right)\xi(s) + \left(2^{s-1} - 1\right)\xi(s-1).$$

**Taylor Series Expansion** We have

$$(\Sigma g\_s)^{(k)}(1) = -k! \binom{-s}{k} \zeta(s+k-1), \qquad k \in \mathbb{N}^\*.$$

The Taylor series expansion of ζ2(s, x + 1) about x = 0 is

$$\xi\_2(s, x+1) = -\sum\_{k=1}^{\infty} \binom{-s}{k} \xi(s+k-1) x^k, \qquad |x| < 1.$$

#### **11.3 The PIS of the Generating Function for the Gregory Coefficients**

Let us investigate the function hp for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗, where hp : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> is defined by the equation

$$h\_p(\mathbf{x}) = \frac{\mathbf{x}^p}{\ln(\mathbf{x} + 1)} = \|\mathbf{x}^p\| \mathbf{i}'(\mathbf{x} + 1) \qquad \text{for } \mathbf{x} > \mathbf{0}$$

and li(x) is the logarithmic integral function defined for all positive real numbers x = 1 by the integral

$$\operatorname{li}(x) = \int\_0^x \frac{1}{\ln t} \, dt \dots$$

Incidentally, when p = 1, this function reduces to the ordinary generating function for the sequence n → Gn. That is,

$$h\_1(\mathbf{x}) = \sum\_{n=0}^{\infty} G\_n \mathbf{x}^n, \qquad |\mathbf{x}| < 1.$$

More generally, hp(x) <sup>=</sup> <sup>x</sup>p−1h1(x) is the ordinary generating function for the right-shifted sequence n → Gn−p+1, that is the sequence

$$\{0, \dots, 0, G\_0, G\_1, G\_2, \dots\}$$

with p − 1 leading 0's.

We also note that the function hp has the following integral representation

$$h\_p(\mathfrak{x}) \, \, = \, \, x^{p-1} \int\_0^1 (\mathfrak{x} + 1)^s \, ds.$$

This latter representation actually suggests introducing, for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗, the function gp : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g\_p(\mathbf{x}) = \int\_0^1 (\mathbf{x} + 1)^{s + p - 1} \, ds = \frac{\mathbf{x}(\mathbf{x} + 1)^{p - 1}}{\ln(\mathbf{x} + 1)} \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

The conversion formulas between the h ps and the g ps are simply given by the following equations

$$\begin{aligned} \mathbf{g}\_p(\mathbf{x}) &= \sum\_{k=1}^p \binom{p-1}{k-1} h\_k(\mathbf{x}) \; , \\ h\_p(\mathbf{x}) &= \sum\_{k=1}^p (-1)^{p-k} \binom{p-1}{k-1} \mathbf{g}\_k(\mathbf{x}) \; . \end{aligned}$$

In particular, we have g<sup>1</sup> = h1.

Since the function gp has a nicer integral form than hp, for the sake of simplicity we will investigate the function gp for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗. By Proposition 5.7, the function hp can then be obtained by applying the operator to both sides of the second conversion formula above.

*Remark 11.2* We observe that the function gp is also the ordinary generating function for the sequence n → ψn(p − 1), where ψn is the nth degree Bernoulli polynomial of the second kind (see Sect. 12.8). ♦

**ID Card** It is not difficult to see that both gp and hp lie in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and hence also in *<sup>K</sup>*p. We also have deg gp <sup>=</sup> deg hp <sup>=</sup> <sup>p</sup> <sup>−</sup> 1.

From the integral form of gp above, we can easily derive the following explicit form of gp (after replacing 1 − s with s in the integral)

$$
\Sigma g\_p(\mathbf{x}) = \int\_0^1 \xi(\mathbf{s} - p, \mathbf{2}) \, d\mathbf{s} - \int\_0^1 \xi(\mathbf{s} - p, \mathbf{x} + \mathbf{1}) \, d\mathbf{s},
$$

♦

that is,

$$
\Sigma \mathbf{g}\_p(\mathbf{x}) = \mathbf{r}\_p - \int\_0^1 \xi(\mathbf{s} - p, \mathbf{x} + \mathbf{l}) \, d\mathbf{s} \,, .
$$

with

$$
\pi\_p := -1 + \int\_0^1 \zeta(s - p) \, ds\,, ,
$$

where ζ (s, x) is the Hurwitz zeta function.

*Remark 11.3* For any integer n ≥ 2, the *harmonic number function of order* n is defined on (−1,∞) by

$$x \mapsto H\_x^{(n)} = \zeta(n) - \zeta(n, x+1),$$

see, e.g., Srivastava and Choi [93, p. 266]. Extending this definition to noninteger orders by writing

$$H\_{\chi}^{(s)} = \xi(s) - \xi(s, x + 1), \qquad s \in \mathbb{R} \backslash \{1\},$$

we obtain the following very compact integral representation

$$\Sigma g\_p(\mathbf{x}) = -1 + \int\_0^1 H\_\chi^{(s-p)} \, ds \,, \qquad \mathbf{x} > 0.$$

**Analogue of Bohr-Mollerup's Theorem** Thus defined, hp is a log p-type function that lies in *<sup>C</sup>*∞∩*D*p+1∩*K*∞. This function can be characterized as follows.

*All solutions* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* f <sup>=</sup> hp *that lie in <sup>K</sup>*<sup>p</sup> *are of the form*

$$f(\mathbf{x}) = c\_p + \sum\_{k=1}^p (-1)^{p-k} \binom{p-1}{k-1} \, \Sigma g\_k(\mathbf{x}) \,,$$

*where* cp <sup>∈</sup> <sup>R</sup>.

**Extended ID Card** Let us compute the asymptotic constant associated with the function gp. We have

$$\begin{aligned} \sigma[\mathbf{g}\_p] &= \int\_0^1 \Sigma \mathbf{g}\_p(t+1) \, dt \, = \,\, \tau\_p - \int\_0^1 \int\_0^1 \xi(s-p, t+2) \, dt \, ds \\ &= \tau\_p + \int\_0^1 \frac{2^{s+p}}{s+p} \, ds \,. \end{aligned}$$

Using the change of variable <sup>u</sup> <sup>=</sup> <sup>2</sup>s+p, we finally obtain

$$
\sigma[\mathfrak{g}\_p] = \mathfrak{r}\_p + \int\_{2^p}^{2^{p+1}} \frac{1}{\ln t} dt = \mathfrak{r}\_p + \text{li}(2^{p+1}) - \text{li}(2^p).
$$

Now, we have

$$\begin{aligned} \int\_1^\chi \mathbf{g}\_p(t) \, dt &= \int\_0^1 \frac{(\chi+1)^{s+p} - 2^{s+p}}{s+p} \, ds \\ &= \text{li}((\chi+1)^{p+1}) - \text{li}((\chi+1)^p) - \text{li}(2^{p+1}) + \text{li}(2^p) \end{aligned}$$

and hence the analogue of Raabe's formula is

$$\int\_{\mathfrak{x}}^{\mathfrak{x}+1} \Sigma \mathbf{g}\_{p}(t) \, dt \;= \mathfrak{x}\_{p} + \text{li}((\mathfrak{x}+1)^{p+1}) - \text{li}((\mathfrak{x}+1)^{p}), \qquad \mathfrak{x} > 0.$$

**Generalized Stirling's and Related Formulas When** *p* **<sup>=</sup> <sup>1</sup>** For any <sup>a</sup> <sup>≥</sup> 0, we have the following limits and asymptotic equivalence as x → ∞,

$$
\Sigma g\_1(\mathbf{x} + a) - \Sigma g\_1(\mathbf{x}) - a \frac{\mathbf{x}}{\ln(\mathbf{x} + 1)} \to 0,
$$

$$
\Sigma g\_1(\mathbf{x}) - \text{li}((\mathbf{x} + 1)^2) + \text{li}(\mathbf{x} + 1) + \frac{\mathbf{x}}{2\ln(\mathbf{x} + 1)} \to \ \mathbf{r}\_1,
$$

$$
\Sigma g\_1(\mathbf{x} + a) \sim \text{li}((\mathbf{x} + 1)^2) - \text{li}(\mathbf{x} + 1).
$$

Upon differentiation,

$$D\Sigma g\_1(\mathbf{x}) - \frac{\mathbf{x} - \frac{1}{2}}{\ln(\mathbf{x} + \mathbf{l})} \to 0, \qquad D^{k+1}\Sigma g\_1(\mathbf{x}) \to 0, \quad k \in \mathbb{N}^\*,$$

$$D\Sigma g\_1(\mathbf{x} + a) \sim \frac{\mathbf{x}}{\ln(\mathbf{x} + \mathbf{l})},$$

where

$$D\Sigma g\_1(\mathbf{x}) = \int\_0^1 (\mathbf{s} - \mathbf{l}) \, \xi(\mathbf{s}, \mathbf{x} + \mathbf{l}) \, d\mathbf{s}.$$

**Limit and Series Representations When** *p* **<sup>=</sup> <sup>1</sup>** The Eulerian and Weierstrassian forms are

$$\Delta \mathbf{g}\_1(\mathbf{x}) = \left[ -\mathbf{g}\_1(\mathbf{x}) + \mathbf{x} \, \mathbf{g}\_1(\mathbf{l}) - \sum\_{k=1}^{\infty} (\mathbf{g}\_1(\mathbf{x} + k) - \mathbf{g}(k) - \mathbf{x} \, \Delta\_k \mathbf{g}\_1(k)) \right]$$

and

$$\Delta \mathbf{g}\_1(\mathbf{x}) = \left[ -\mathbf{g}\_1(\mathbf{x}) + \mathbf{x} \, D \Sigma \mathbf{g}\_1(\mathbf{l}) - \sum\_{k=1}^{\infty} \left( \mathbf{g}\_1(\mathbf{x} + k) - \mathbf{g}(k) - \mathbf{x} \, \mathbf{g}\_1'(k) \right), \mathbf{x} \right]$$

where

$$D\Sigma g\_1(\mathbf{l}) = \int\_0^\mathbf{l} (\mathbf{s} - \mathbf{l}) \, \xi(\mathbf{s}, \mathbf{2}) \, d\mathbf{s} \, \, = \,\frac{1}{2} - \int\_0^\mathbf{l} \, \mathbf{s} \, \xi(\mathbf{l} - \mathbf{s}) \, d\mathbf{s} \, \, \xi$$

**Gregory's Formula-Based Series Representation When** *p* **<sup>=</sup> <sup>1</sup>** Proposition 8.11 provides the following series representation: for any x > 0 we have

$$\begin{aligned} \nabla g\_1(\mathbf{x}) &= \tau\_1 + \operatorname{li}((\mathbf{x} + \mathbf{l})^2) - \operatorname{li}(\mathbf{x} + \mathbf{l}) - \sum\_{n=0}^{\infty} G\_{n+1} \, \Delta^n g(\mathbf{x}) \\ &= \tau\_1 + \operatorname{li}((\mathbf{x} + \mathbf{l})^2) - \operatorname{li}(\mathbf{x} + \mathbf{l}) - \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^n (-1)^k \binom{n}{k} \frac{\mathbf{x} + \mathbf{k}}{\operatorname{ln}(\mathbf{x} + \mathbf{k} + 1)} . \end{aligned}$$

Setting x = 1 in this identity, we obtain the following analogue of Fontana-Mascheroni's series

$$
\sigma[g\_1] = \tau\_1 + \text{li}(4) - \text{li}(2) \\
= \sum\_{n=0}^{\infty} |G\_{n+1}| \sum\_{k=0}^{n} (-1)^k \binom{n}{k} \frac{k+1}{\ln(k+2)}.
$$

**Analogue of Gauss' Multiplication Formula** For any <sup>m</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and any x > 0, we have

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g}\_p \left( \mathbf{x} + \frac{j}{m} \right) \\ = m \, \mathsf{r}\_p - \int\_0^1 \sum\_{j=0}^{m-1} \xi \left( \mathbf{s} - p, \mathbf{x} + 1 + \frac{j}{m} \right) d\mathbf{s}.$$

Using the multiplication formula for the Hurwitz zeta function, we then obtain the following analogue of Gauss' multiplication formula

$$\sum\_{j=0}^{m-1} \Sigma \mathbf{g}\_p \left( \mathbf{x} + \frac{j}{m} \right) \\ = m \, \mathsf{r}\_p - \int\_0^1 m^{s-p} \, \xi(\mathbf{s} - \mathbf{p}, m\mathbf{x} + m) \, d\mathbf{s} \,.$$

Now, using (8.15) we obtain

$$\Sigma\_{\chi} \operatorname{g}\_{\mathcal{P}} \left( \frac{\chi}{m} \right) = \sum\_{j=0}^{m-1} \Sigma \operatorname{g}\_{\mathcal{P}} \left( \frac{\chi + j}{m} \right) - \sum\_{j=1}^{m} \Sigma \operatorname{g}\_{\mathcal{P}} \left( \frac{j}{m} \right)$$

$$= \int\_{0}^{1} m^{s-p} \left( \zeta(s-p, m+1) - \zeta(s-p, x+m) \right) ds.$$

Corollary 8.33 then tells us that the sequences

$$m \mapsto \int\_0^1 m^{s-p-1} \left( \zeta(s-p, 2m) - \zeta(s-p, m\ge m) \right) ds$$

and

$$m \mapsto \int\_0^1 m^{s-p-1} \left( \zeta(s-p, m+1) - \zeta(s-p, m\chi+m) \right) ds$$

converge to the integrals

$$\int\_{1}^{\chi} \mathbf{g}\_{p}(t) \, dt \qquad \text{and} \qquad \int\_{0}^{\chi} \mathbf{g}\_{p}(t) \, dt \,, \,\chi$$

respectively.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 12 Further Examples**

The scope of applications of our theory is very wide since it applies to any function lying in the domain of the map . In Chap. 10, we made a thorough study of some standard special functions. In Chap. 11, we defined and investigated new functions as principal indefinite sums of known functions. In the present chapter, we briefly discuss further examples that the reader may want to explore in more detail.

#### **12.1 The Multiple Gamma Functions**

The multiple gamma functions introduced in Sect. 5.2 can also be studied through the sequence of functions G0, G1,..., defined by (see Srivastava and Choi [93, p. 56])

$$G\_p(\mathfrak{x}) := \Gamma\_p(\mathfrak{x})^{(-1)^{p-1}}, \qquad p \in \mathbb{N}.$$

Equivalently, we have G0(x) = x and

$$\ln G\_p(\mathbf{x}) \, := \, \Sigma \ln G\_{p-1}(\mathbf{x}) \qquad \text{for all } p \in \mathbb{N}^\*.$$

Clearly, the function ln Gp−1(x) lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup>*D*<sup>p</sup> <sup>∩</sup>*K*<sup>∞</sup> and we have deg(ln ◦Gp) <sup>=</sup> p. Moreover, this sequence of functions can naturally be extended to p = −1 by defining

$$G\_{-\mathbf{l}}(\mathbf{x}) := \mathbf{l} + \frac{1}{\mathbf{x}} \mathbf{l}$$

271

Just as for the gamma function and the Barnes G-function, we can derive the following asymptotic equivalence: for any a ≥ 0,

$$G\_p(\mathbf{x} + a) \sim \prod\_{j=0}^p G\_{p-j}(\mathbf{x})^{\binom{a}{j}} \qquad \text{as } \mathbf{x} \to \infty,$$

with equality if a ∈ {0, 1,...,p}. We also have the following product representation

$$G\_p(\mathbf{x}) = \frac{1}{G\_{p-1}(\mathbf{x})} \prod\_{k=1}^{\infty} \frac{G\_{p-1}(k)}{G\_{p-1}(\mathbf{x}+k)} G\_{p-2}(k)^{\chi} G\_{p-3}(k)^{\binom{k}{2}} \cdots G\_{-1}(k)^{\binom{k}{p}}$$

and the recurrence formula

$$\ln G\_p(\mathbf{x}) = \left[ - (\mathbf{x} - \mathbf{l}) \,\sigma [D \ln \diamond G\_{p-1}] + \int\_1^\chi \Sigma \, D \ln G\_{p-1}(t) \, dt \, dt \right]$$

For example, one can show that

$$\ln G\_3(\mathbf{x}) = -\frac{1}{8}\mathbf{x}(\mathbf{x}-1)(2\mathbf{x}-\mathbf{5}) + \frac{1}{4}\mathbf{x}(\mathbf{x}-2)\ln(2\pi) + \binom{\mathbf{x}-1}{2}\ln\Gamma(\mathbf{x})$$

$$-\frac{1}{2}(2\mathbf{x}-\mathbf{3})\,\psi\_{-2}(\mathbf{x}) + \psi\_{-3}(\mathbf{x}) - \mathbf{x}\,\psi\_{-3}(\mathbf{l}).$$

This latter formula can also be established using the characterization of G<sup>3</sup> as a 3-convex solution to the equation f (x) = ln G2(x).

#### **12.2 The Regularized Incomplete Gamma Function**

Consider the 2-variable function Q(x, s) = -(x, s)/ -(x) on R<sup>2</sup> <sup>+</sup>, where -(x, s) is the upper incomplete gamma function. Thus defined, the function Q(x, s) satisfies the difference equation

$$|\mathcal{Q}(\mathfrak{x}+1,\mathfrak{s}) - \mathcal{Q}(\mathfrak{x},\mathfrak{s})| = \frac{e^{-\mathfrak{s}}\mathfrak{s}^{\chi}}{\Gamma(\mathfrak{x}+1)}.$$

For any s > 0, we define the function gs : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by

$$\mathbf{g}\_{\mathbf{s}}(\mathbf{x}) \;= \; \frac{e^{-s}\mathbf{s}^{\alpha}}{\Gamma(\mathbf{x}+1)} \; .$$

This function lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup>*D*−<sup>1</sup> <sup>∩</sup>*K*<sup>∞</sup> and has the property that gs(x) <sup>=</sup> Q(x, s)<sup>−</sup> e−s. We also note that the Eulerian form of Q(x, s) is

$$\begin{aligned} \mathcal{Q}(\mathbf{x}, \mathbf{s}) &= 1 - \sum\_{k=0}^{\infty} g\_{\mathbf{s}}(\mathbf{x} + k) \ &= 1 - \frac{e^{-s} \mathbf{s}^{\mathbf{x}}}{\Gamma(\mathbf{x} + 1)} \sum\_{k=0}^{\infty} \frac{\Gamma(\mathbf{x} + 1)}{\Gamma(\mathbf{x} + k + 1)} s^{k} \\ &= 1 - \frac{e^{-s} \mathbf{s}^{\mathbf{x}}}{\Gamma(\mathbf{x} + 1)} \sum\_{k=0}^{\infty} x^{-k} s^{k} \ &, \end{aligned}$$

where <sup>x</sup>−<sup>k</sup> <sup>=</sup> -(x + 1)/ -(x <sup>+</sup> <sup>k</sup> <sup>+</sup> <sup>1</sup>) for any <sup>k</sup> <sup>∈</sup> <sup>N</sup>.

#### **12.3 The Error Function**

Recall that the Gauss error function erf(x) is defined by the equation

$$\text{erf}(\mathbf{x}) = \frac{2}{\sqrt{\pi}} \int\_0^\chi e^{-t^2} \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

To study this function, we could for instance work with the function g(x) = erf(x). Instead, let us consider the function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(\mathbf{x}) = \frac{2}{\sqrt{\pi}} e^{-\mathbf{x}^2} \qquad \text{for } \mathbf{x} > \mathbf{0}.$$

It clearly lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*∞. Thus, the Eulerian form of g is given by the identity

$$\Sigma \mathbf{g}(\mathbf{x}) = \frac{2}{\sqrt{\pi}} \sum\_{k=0}^{\infty} (e^{-(k+1)^2} - e^{-(k+\chi)^2}) .$$

The generalized Stirling formula yields the following limit

$$\operatorname{erf}(\mathbf{x}) + \frac{2}{\sqrt{\pi}} \sum\_{k=0}^{\infty} e^{-(k+\mathbf{x})^2} \to -1 \qquad \text{as } \mathbf{x} \to \infty.$$

Incidentally, the analogue of Legendre's duplication formula provides the surprising identity

$$\sum\_{k=0}^{\infty} (e^{-(k+1)^2} - e^{-(k+\frac{x}{2})^2} - e^{-(k+\frac{x+1}{2})^2} + e^{-(k+\frac{1}{2})^2} - e^{-(\frac{k+1}{2})^2} + e^{-(\frac{k+x}{2})^2}) = 0.1$$

#### **12.4 The Exponential Integral**

Recall that the exponential integral E1(x) is defined by the equation

$$E\_1(\chi) = \int\_{\chi}^{\infty} \frac{e^{-t}}{t} dt \qquad \text{for } x > 0.$$

Similarly to the previous example, let us consider the function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation

$$g(x) := \frac{e^{-x}}{x} \qquad \text{for } x > 0.$$

It lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*−<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*∞. Thus, the Eulerian form of g is given by the identity

$$\Sigma g(\mathbf{x}) = \sum\_{k=0}^{\infty} \left( \frac{e^{-(k+1)}}{k+1} - \frac{e^{-(k+\mathbf{x})}}{k+\mathbf{x}} \right).$$

The generalized Stirling formula easily provides the following convergence result

$$E\_1(\chi) - \sum\_{k=0}^{\infty} \frac{e^{-(k+\chi)}}{k+\chi} \to 0 \qquad \text{as } \chi \to \infty.$$

Moreover, the analogue of Raabe's formula is

$$\int\_{\chi}^{\chi+1} \Sigma \mathbf{g}(t) \, dt \, = \left. 1 - \ln(e - 1) - E\_1(\chi) \right|\_{\chi}, \qquad \chi > 0.$$

#### **12.5 The Hyperfactorial Function**

The hyperfactorial function (or <sup>K</sup>-function) is the function <sup>K</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup><sup>+</sup> defined by the equation ln <sup>K</sup> <sup>=</sup> g, where the function g(x) <sup>=</sup> <sup>x</sup> ln <sup>x</sup> lies in *<sup>C</sup>*∞∩*D*2∩*K*∞. Since we also have

$$\mathbf{g}(\mathbf{x}) = \mathbf{x} + \Delta \boldsymbol{\psi}\_{-2}(\mathbf{x}) - \boldsymbol{\psi}\_{-2}(\mathbf{l}),$$

we immediately derive (see also Example 8.21)

$$
\ln K(\mathbf{x}) = \Sigma \mathbf{g}(\mathbf{x}) = \begin{pmatrix} \mathbf{x} \\ \mathbf{2} \end{pmatrix} + \boldsymbol{\psi}\_{-2}(\mathbf{x}) - \boldsymbol{\chi}\,\boldsymbol{\psi}\_{-2}(\mathbf{l}) \\
= (\mathbf{x} - \mathbf{l})\ln \Gamma(\mathbf{x}) - \ln G(\mathbf{x}).
$$

Actually, g also corresponds to the special case when (s, q) = (−1, 1) of the function gs,q investigated in Sect. 10.8. Thus, we also have

$$
\Sigma \mathcal{g}(\mathfrak{x}) \, \, = \, \zeta'(-1, \mathfrak{x}) - \zeta'(-1),
$$

where ζ (−1) <sup>=</sup> <sup>1</sup> <sup>12</sup> − ln A. Finally, we note that the integer sequence n → K(n) is the sequence A002109 in the OEIS [90].

#### **12.6 The Hurwitz-Lerch Transcendent**

The Hurwitz-Lerch transcendent (z, s, a) is a generalization of the Hurwitz zeta function defined as an analytic continuation of the series

$$\sum\_{k=0}^{\infty} z^k (a+k)^{-s}$$

when <sup>|</sup>z<sup>|</sup> <sup>&</sup>lt; 1 and <sup>a</sup> <sup>∈</sup> <sup>C</sup> \ (−N) (see, e.g., Srivastava and Choi [93, p. 194]). It satisfies the difference equation

$$
\Phi(z, s, a+1) - z^{-1} \Phi(z, s, a) = -z^{-1} a^{-s}.
$$

It follows that the modified function

$$\overline{\Phi}(z,s,a) = -z^a \Phi(z,s,a)$$

satisfies the difference equation

$$
\overline{\Phi}(z, s, a+1) - \overline{\Phi}(z, s, a) \, = \, z^a a^{-s} \,.
$$

Thus, for certain real values of <sup>z</sup> and <sup>s</sup>, the restriction to <sup>R</sup><sup>+</sup> of the map <sup>a</sup> → (z, s, a) fits the assumptions of our theory. Its investigation is left to the reader.

#### **12.7 The Bernoulli Polynomials**

Recall that, for any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the <sup>n</sup>th degree Bernoulli polynomial Bn(x) is defined by the equation

$$B\_n(\boldsymbol{\chi}) := \sum\_{k=0}^n \binom{n}{k} B\_{n-k} \boldsymbol{\chi}^k \qquad \text{for } \boldsymbol{\chi} \in \mathbb{R},$$

where Bk is the kth Bernoulli number. This polynomial satisfies the difference equation

$$B\_n(x+1) - B\_n(x) \ = \ n x^{n-1}.$$

Thus, the function gn : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation gn(x) <sup>=</sup> n xn−<sup>1</sup> for x > <sup>0</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>n</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and has the property that

$$
\Sigma \mathcal{g}\_n(\mathfrak{x}) \;= \; B\_n(\mathfrak{x}) - B\_n(\mathfrak{l}),
$$

that is, in view of (10.16)

$$\Delta \mathbf{g}\_n(\mathbf{x}) = n\zeta(1-n) - n\zeta(1-n,\mathbf{x})\,, \qquad n \in \mathbb{N}^\*.$$

Thus, the nth degree Bernoulli polynomial can be characterized as follows.

*All solutions* fn : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* fn(x <sup>+</sup> <sup>1</sup>) <sup>−</sup> fn(x) <sup>=</sup> n xn−<sup>1</sup> *that lie in <sup>K</sup>*<sup>n</sup> *are of the form* fn(x) <sup>=</sup> cn <sup>+</sup> Bn(x), *where* cn <sup>∈</sup> <sup>R</sup>.

Using the generalized Webster functional equation (Theorem 8.71), we can also easily characterize the nth degree Euler polynomial En(x), which is defined by the equation

$$E\_n(\chi) := \frac{2^{n+1}}{n+1} \left( B\_{n+1} \left( \frac{\chi+1}{2} \right) - B\_{n+1} \left( \frac{\chi}{2} \right) \right).$$

We then obtain the following statement.

*All solutions* fn : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* fn(x <sup>+</sup> <sup>1</sup>) <sup>+</sup> fn(x) <sup>=</sup> <sup>2</sup> <sup>x</sup><sup>n</sup> *that lie in <sup>K</sup>*<sup>n</sup> *are of the form* fn(x) <sup>=</sup> cn <sup>+</sup> En(x), *where* cn <sup>∈</sup> <sup>R</sup>.

Finally, we also easily retrieve the multiplication formula:

$$\sum\_{j=0}^{m-1} B\_n \left( \frac{\chi + j}{m} \right) \, = \, \frac{1}{m^{n-1}} \, B\_n(\boldsymbol{\chi}) \qquad \boldsymbol{\chi} > \mathbf{0}.$$

#### **12.8 The Bernoulli Polynomials of the Second Kind**

For any <sup>n</sup> <sup>∈</sup> <sup>N</sup>, the <sup>n</sup>th degree Bernoulli polynomial of the second kind is defined by the equation

$$
\psi\_n(\mathbf{x}) = \int\_{\mathbf{x}}^{\mathbf{x}+1} \binom{l}{n} \, dt \qquad \text{for } \mathbf{x} > \mathbf{0}.
$$

In particular, we have ψn(0) = Gn. Also, these polynomials satisfy the difference equation

$$
\psi\_{n+1}(\mathbf{x}+1) - \psi\_{n+1}(\mathbf{x}) = \psi\_n(\mathbf{x}).
$$

Thus, the function gn : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> defined by the equation gn(x) <sup>=</sup> ψn(x) for x > <sup>0</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*n+<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>∞</sup> and has the property that

$$
\Sigma \mathfrak{g}\_n(\mathfrak{x}) = \psi\_{n+1}(\mathfrak{x}) - \psi\_{n+1}(\mathfrak{l}).
$$

Thus, the Bernoulli polynomials of the second kind can be characterized as follows.

*All solutions* fn : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to the equation* fn(x <sup>+</sup> <sup>1</sup>) <sup>−</sup> fn(x) <sup>=</sup> ψn(x) *that lie in <sup>K</sup>*n+<sup>1</sup> *are of the form* fn(x) <sup>=</sup> cn <sup>+</sup> ψn+1(x), *where* cn <sup>∈</sup> <sup>R</sup>.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 13 Conclusion**

Krull-Webster's theory offered an elegant extension of Bohr-Mollerup's theorem and has proved to be a very nice and useful contribution to the resolution of the difference equation f <sup>=</sup> <sup>g</sup> on the real half-line <sup>R</sup>+. In this book, we have provided a significant generalization of Krull-Webster's theory by considerably relaxing the asymptotic condition imposed on the function g, and we have demonstrated through various examples how this generalization provides a unified framework to investigate the properties of many functions. This framework has indeed enabled us to derive several general formulas that now constitute a powerful toolbox and even a genuine Swiss Army knife to investigate a large variety of functions.

The key point of this generalization was the discovery of expression (1.4) for the sequence <sup>n</sup> → <sup>f</sup> <sup>p</sup> <sup>n</sup> [g](x) for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We also observe that our uniqueness and existence results strongly rely on Lemma 2.7 together with identities (3.3) and (3.8). These results actually constitute the common core and even the fundamental cornerstone of all the subsequent formulas that we derived in this book. For instance, the generalized Stirling formula (6.21) has been obtained almost miraculously by merely integrating both sides of the inequality given in Lemma 2.7 (see Remark 6.16). Similarly, Gregory's summation formula (6.33) has been derived instantly by integrating both sides of identity (3.8), and we have shown how its remainder can be controlled using Lemma 2.7 again.

Our results clearly shed light on the way many of the classical special functions, such as the polygamma functions and the higher order derivatives of the Hurwitz zeta function, can be systematically studied, sometimes by deriving identities and formulas almost mechanically.

Beyond this systematization aspect, our theory has enabled us to introduce a number of new important and useful objects. For instance, the map itself is a new concept that appears to be as fundamental as the basic antiderivative operation (cf. Definition 5.4). Both concepts are actually strongly related through, e.g., Propositions 6.19, 6.20, and 8.18. Other concepts such as the asymptotic constant and the generalized Binet function also appear to be new fundamental

J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0\_13

objects that merit further study. For instance, it is remarkable that the asymptotic constant appears not only in the generalized Stirling formula (Theorem 6.13), but also in many other important formulas, such as the generalized Euler constant (Proposition 6.36), the Weierstrassian form (Theorem 8.7), the analogue of Raabe's formula (Proposition 8.18), the analogue of Gauss' multiplication formula (Proposition 8.28), the asymptotic expansion (Proposition 8.36), and the generalized Liu formula (Proposition 8.42).

Our work has also revealed how important and natural are the higher order convexity properties. Although these properties seem to be still poorly used in mathematical analysis, they actually constitute an essential and highly useful ingredient in the development of our theory and therefore also merit further investigation (see, e.g., Proposition 4.14 and Remark 4.15).

In conclusion, the results that we have obtained as well as the new concepts that we have introduced and explored show that this area of investigation is very rich and intriguing. We have just skimmed the surface, and there are a lot of questions that emerge naturally. We now list some below.


$$f(\mathfrak{x} + 1) - a \, f(\mathfrak{x}) \, = \, \mathfrak{g}(\mathfrak{x}),$$

where a is a given constant. Consider also linear difference equations of any order. Partial results along this line can be found, e.g., in John [49, Theorem C].

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Appendix A Higher Order Convexity Properties**

*We establish a number of basic facts about higher order convexity and concavity properties with the aim of proving Lemma 2.6.*

Lemma 2.6 is a fundamental element of our theory. It can be derived from more general results established by Kuczma [61, Chapter 15]. However, this derivation is not immediate and actually requires considerable attention. In this appendix, we prove Lemma 2.6 almost from scratch and using elementary means only.

Let I be an arbitrary nonempty open real interval. We first observe that for any functions f, g : <sup>I</sup> <sup>→</sup> <sup>R</sup> and any system <sup>x</sup><sup>0</sup> < x<sup>1</sup> <sup>&</sup>lt; ··· < xn of <sup>n</sup> <sup>+</sup> 1 points in <sup>I</sup> , we have

$$f(f+\mathbf{g})[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] = f[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] + \mathbf{g}[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n].$$

Moreover, for any <sup>c</sup> <sup>∈</sup> <sup>R</sup>, if the function <sup>h</sup>: <sup>I</sup> <sup>−</sup> <sup>c</sup> <sup>→</sup> <sup>R</sup> is defined by the equation h(x) = f (x + c) for x ∈ I − c, then

$$h[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] = f[\mathbf{x}\_0 + c, \mathbf{x}\_1 + c, \dots, \mathbf{x}\_n + c].$$

These properties are immediate consequences of identity (2.4).

We now present a proposition and an immediate corollary. Let [h] denote the forward difference operator with step h.

**Proposition A.1** *For any* <sup>n</sup> <sup>∈</sup> <sup>N</sup>*, any system* <sup>x</sup><sup>0</sup> < x<sup>1</sup> <sup>&</sup>lt; ··· < xn *of* <sup>n</sup> <sup>+</sup> <sup>1</sup> *points in* <sup>I</sup> *, any function* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup>*, and any* <sup>h</sup> <sup>∈</sup> <sup>R</sup> \ {0} *such that* <sup>x</sup><sup>0</sup> <sup>+</sup> h, xn <sup>+</sup> <sup>h</sup> <sup>∈</sup> <sup>I</sup> *, we have*

$$\frac{1}{h}(\Delta\_{\{h\}}f)[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] \ = \sum\_{k=0}^n f[\mathbf{x}\_0, \dots, \mathbf{x}\_k, \mathbf{x}\_k + h, \dots, \mathbf{x}\_n + h].$$

283

*Proof* Using a telescoping sum, we obtain

$$\frac{1}{h}(\Delta\_{\{h\}}f)[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] = \frac{1}{h} \left( f[\mathbf{x}\_0 + h, \mathbf{x}\_1 + h, \dots, \mathbf{x}\_n + h] - f[\mathbf{x}\_0, \mathbf{x}\_1, \dots, \mathbf{x}\_n] \right)$$

$$= \frac{1}{h} \sum\_{k=0}^n \left( f[\mathbf{x}\_0, \dots, \mathbf{x}\_k + h, \mathbf{x}\_{k+1} + h, \dots, \mathbf{x}\_n + h] \right)$$

$$- \left( f[\mathbf{x}\_0, \dots, \mathbf{x}\_k, \mathbf{x}\_{k+1} + h, \dots, \mathbf{x}\_n + h] \right).$$

We then conclude the proof using the recurrence relation (2.3).

**Corollary A.2** *Let* <sup>f</sup> *lie in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and let* <sup>h</sup> <sup>∈</sup> <sup>R</sup> \ {0}*. If the function* <sup>1</sup> <sup>h</sup> [h]<sup>f</sup> *is defined on* <sup>I</sup> *, then it lies in <sup>K</sup>*p−<sup>1</sup> <sup>+</sup> (I )*.*

We can now readily see that Lemma 2.6(b) is an immediate consequence of Corollary A.2 (just take h = 1).

The next result establishes Lemma 2.6(c). Let us first observe that a pointwise limit of functions lying in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) also lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ). This fact can be proved straightforwardly using identity (2.4).

**Corollary A.3** *If* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> *is differentiable and lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, then the derivative* <sup>f</sup> *lies in <sup>K</sup>*p−<sup>1</sup> <sup>+</sup> (I )*.*

*Proof* It is clear that the derivative f is the pointwise limit of the sequence n → fn, where, for each <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, the function fn : <sup>I</sup> <sup>→</sup> <sup>R</sup> is defined by the equation

$$f\_n := n\Delta\_{\left[1/n\right]}f \qquad \text{ for } n \in \mathbb{N}^\*.$$

We then conclude the proof using Corollary A.2.

We now have the following corollary, which follows from Proposition 2.1. It immediately establishes Lemma 2.6(d).

**Corollary A.4** *If* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> *is differentiable and* <sup>f</sup> *lies in <sup>K</sup>*p−<sup>1</sup> <sup>+</sup> (I ) *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, then* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I )*.*

*Proof* This result is an immediate consequence of Proposition 2.1 (just use identity (2.7) with n = p + 1).

It remains to establish Lemma 2.6(a). To this end, we present the following technical lemma, which provides a test for differentiability of real functions on I .

**Lemma A.5** *Let* <sup>n</sup> <sup>∈</sup> <sup>N</sup>*, let* a, x1,...,xn *be* <sup>n</sup> <sup>+</sup> <sup>1</sup> *pairwise distinct points in* <sup>I</sup> *and let* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup>*. If the limit*

$$\lim\_{h \to 0} f[a, a+h, x\_1, \dots, x\_n]$$

*exists and is finite, then* f *is differentiable at* a*.*

*Proof* This result can be easily proved by induction on n using the recurrence relation (2.3). To simplify the computations, let us consider the first two cases only. For n = 0, we have trivially

$$\lim\_{h \to 0} f[a, a+h] = \lim\_{h \to 0} \frac{f(a+h) - f(a)}{h}$$

and f is clearly differentiable at a if this limit exists and is finite. For n = 1, we get

$$f[a, a+h, x\_1] = \frac{f[a, x\_1] - f[a, a+h]}{x\_1 - a - h}$$

and hence

$$\lim\_{h \to 0} f[a, a+h] = f[a, \ge\_l] - \lim\_{h \to 0} (\ge\_l - a - h) \, f[a, a+h, \ge\_l]$$

and this limit is finite if so is the right-hand limit. The induction process is now clear.

We now have the following proposition.

**Proposition A.6** *If* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *for some integer* <sup>p</sup> <sup>≥</sup> <sup>2</sup>*, then* <sup>f</sup> *is differentiable on* I *.*

*Proof* Let a ∈ I and let J be a compact subinterval of I whose interior contains a. Let *<sup>I</sup>*p+<sup>1</sup> denote the set of tuples of <sup>I</sup>p+<sup>1</sup> whose components are pairwise distinct. By Lemma 2.5, the restriction of the map

$$(z\_0, \dots, z\_p) \mapsto f[z\_0, \dots, z\_p]$$

to *<sup>I</sup>*p+<sup>1</sup> is increasing in each place, hence this map is bounded on *<sup>I</sup>*p+<sup>1</sup> <sup>∩</sup> <sup>J</sup> <sup>p</sup>+1.

Let x1,...,xp−<sup>2</sup> be p − 2 pairwise distinct points in J , and distinct from a, and let h1, h<sup>2</sup> be sufficiently small distinct nonzero real numbers such that a+h1, a+h<sup>2</sup> lie in J . Using (2.3), we get

 $f[a, a + h\_1, a + h\_2, \mathbf{x\_1}, \dots, \mathbf{x\_{p-2}}]$ 
$$= \frac{f[a, a + h\_2, \mathbf{x\_1}, \dots, \mathbf{x\_{p-2}}] - f[a, a + h\_1, \mathbf{x\_1}, \dots, \mathbf{x\_{p-2}}]}{h\_2 - h\_2}$$

Thus, there exists CJ > 0 such that

 <sup>f</sup> [a, a <sup>+</sup> <sup>h</sup>2, x1,...,xp−2] − <sup>f</sup> [a, a <sup>+</sup> <sup>h</sup>1, x1,...,xp−2] <sup>≤</sup> CJ <sup>|</sup>h<sup>2</sup> <sup>−</sup> <sup>h</sup>1|. It follows that for any sequence n → hn converging to zero, the sequence

$$n \mapsto f[a, a + h\_n, x\_1, \dots, x\_{p-2}]$$

is a Cauchy sequence whose limit does not depend on the sequence n → hn. Therefore, the limit

$$\lim\_{h \to 0} f[a, a+h, x\_1, \dots, x\_{p-2}]$$

exists and is finite. By Lemma A.5, f is differentiable at a.

We are now in a position to prove Lemma 2.6(a).

**Proposition A.7** *If* <sup>f</sup> *lies in <sup>K</sup>*p+<sup>1</sup> <sup>+</sup> (I ) *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, then* <sup>f</sup> *lies in <sup>C</sup>*p(I )*.*

*Proof* We proceed by induction on p. The case when p = 0 is folklore and can be found, e.g., in Artin [11, Theorem 1.5]. Suppose that the result holds for some <sup>p</sup> <sup>∈</sup> <sup>N</sup> and let us show that it still holds for <sup>p</sup> <sup>+</sup> 1. Let <sup>f</sup> lie in *<sup>K</sup>*p+<sup>2</sup> <sup>+</sup> (I ). By Proposition A.6 and Corollary A.3, <sup>f</sup> is differentiable on <sup>I</sup> and <sup>f</sup> lies in *<sup>K</sup>*p+<sup>1</sup> <sup>+</sup> (I ). Using our induction hypothesis, we see that <sup>f</sup> lies in *<sup>C</sup>*p(I ), and hence <sup>f</sup> lies in *<sup>C</sup>*p+1(I ).

Let us end this study with an interesting generalization of Lemma 2.5. It is an immediate corollary of the following proposition.

**Proposition A.8** *Let* n, m <sup>∈</sup> <sup>N</sup>*, let* <sup>x</sup>0,...,xn−1, y0,...,ym *be* <sup>n</sup>+m+<sup>1</sup> *pairwise distinct points in* <sup>I</sup> *, let* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup>*, and let* <sup>g</sup> : <sup>I</sup> \ {x0,...,xn−1} → <sup>R</sup> *be defined by the equation*

$$g(\mathbf{x}) = \,\_1f[\mathbf{x}\_0, \dots, \mathbf{x}\_{n-1}, \mathbf{x}] \qquad for \, \mathbf{x} \in I \, \backslash \{\mathbf{x}\_0, \dots, \mathbf{x}\_{n-1}\}.$$

*Then we have*

$$\lg[\mathbf{y}\_0, \dots, \mathbf{y}\_m] = f[\mathbf{x}\_0, \dots, \mathbf{x}\_{n-1}, \mathbf{y}\_0, \dots, \mathbf{y}\_m].$$

*Proof* This result can be easily proved by induction on m for any fixed value of n, simply by using the recurrence relation (2.3). To simplify the computations, let us consider the first two cases only. For m = 0, we have trivially

$$\text{lg[yol] } = \text{g(yo) } = f[\text{x}\_0, \dots, \text{x}\_{n-1}, \text{yol.}]$$

For m = 1, we have

$$\begin{aligned} \text{g[y\_0, y\_1]} &= \frac{\mathbf{g}(\mathbf{y\_1}) - \mathbf{g}(\mathbf{y\_0})}{\mathbf{y\_1} - \mathbf{y\_0}} = \frac{f[\mathbf{x\_0}, \dots, \mathbf{x\_{n-1}}, \mathbf{y\_1}] - f[\mathbf{x\_0}, \dots, \mathbf{x\_{n-1}}, \mathbf{y\_0}]}{\mathbf{y\_1} - \mathbf{y\_0}} \\ &= f[\mathbf{x\_0}, \dots, \mathbf{x\_{n-1}}, \mathbf{y\_0}, \mathbf{y\_1}]. \end{aligned}$$

The induction process is now obvious.

**Corollary A.9** *Let* j,p <sup>∈</sup> <sup>N</sup>*, with* <sup>j</sup> <sup>≤</sup> <sup>p</sup>*, and let <sup>I</sup>*j+<sup>1</sup> *denote the set of tuples of* <sup>I</sup> <sup>j</sup>+<sup>1</sup> *whose components are pairwise distinct. A function* <sup>f</sup> : <sup>I</sup> <sup>→</sup> <sup>R</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup>(I ) *if and only if the restriction of the map*

$$(z\_0, \dots, z\_j) \mapsto f[z\_0, \dots, z\_j]$$

*to I*j+<sup>1</sup> *is* (p − j )*-convex in each place.*

## **Appendix B On Krull-Webster's Asymptotic Condition**

*We show that our uniqueness and existence results fully generalize a recent attempt by Rassias and Trif [86] to solve the particular case when* p = 2*.*

Recall that the original asymptotic condition imposed by Krull and Webster on the function g is that, for each x > 0,

$$\mathbf{g}(\mathbf{x} + t) - \mathbf{g}(t) \to \mathbf{0} \qquad \text{as } t \to \infty \text{ ;} \mathbf{f}$$

see Eq. (1.2). Using our notation, this means that the function <sup>g</sup> lies in *<sup>R</sup>*<sup>1</sup> R. Geometrically, this condition also means that the chord to the graph of g on any fixed length interval has an asymptotic zero slope. Only fixed length intervals whose left endpoints are integers are to be considered if the condition reduces to requiring that <sup>g</sup> <sup>∈</sup> *<sup>R</sup>*<sup>1</sup> <sup>N</sup>. The restriction of our uniqueness and existence results to the case when <sup>p</sup> <sup>=</sup> 1 shows that this condition can actually be relaxed into <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>1</sup> <sup>N</sup>, which means that the chord to the graph of <sup>g</sup> on any interval of the form [n, n <sup>+</sup> <sup>1</sup>], <sup>n</sup> <sup>∈</sup> <sup>N</sup>∗, has an asymptotic zero slope. The function g(x) = ln x is a typical example that shows, just as does every function whose derivative vanishes at infinity, that those functions need not behave asymptotically like constant functions.

It remains, however, that Krull-Webster's asymptotic condition is rather restrictive. As already mentioned in Chap. 1, this condition is not satisfied by the functions g(x) = x ln x and g(x) = ln -(x). To overcome this restriction, Rassias and Trif [86] proposed a modification of Webster's results by considering solutions lying in *<sup>K</sup>*<sup>2</sup> and replacing the asymptotic condition with a more appropriate one. Specifically, they considered any function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> for which there exists a number a > <sup>0</sup> such that

$$\lim\_{t \to \infty} \left. g(x+t) - g(t) - x \ln t \right| = x \ln a, \qquad \text{for all } x > 0. \tag{B.1}$$

289

It turns out that both functions g(x) = x ln x and g(x) = ln -(x) satisfy this alternative condition. However, the identity function g(x) = x does not.

Let us now show that our asymptotic condition that <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>2</sup> <sup>R</sup> generalizes not only Rassias and Trif's (B.1) but also many other similar conditions.

**Proposition B.1** *Let* <sup>ϕ</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *and suppose that* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *has the property that, for each* x > 0*,*

$$\mathbf{g}(\mathbf{x} + t) - \mathbf{g}(t) - \mathbf{x}\,\boldsymbol{\varphi}(t) \to \mathbf{0} \qquad \text{as } t \to \infty.$$

*Then* <sup>g</sup> *lies in <sup>R</sup>*<sup>2</sup> <sup>R</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>2</sup> <sup>R</sup>*. In particular, <sup>R</sup>*<sup>2</sup> <sup>R</sup> *contains all the functions that satisfy Rassias and Trif's condition.*

*Proof* For any t > 0 and any <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, define the function <sup>ρ</sup><sup>ϕ</sup> <sup>t</sup> [g]: [0,∞) <sup>→</sup> <sup>R</sup> by the equation

$$
\rho\_t^\varphi \text{[g]}(\mathbf{x}) \;=\; \mathbf{g}(\mathbf{x} + t) - \mathbf{g}(t) - \mathbf{x} \,\varphi(t) \qquad \text{for } \mathbf{x} > \mathbf{0}.
$$

Let also *<sup>R</sup>*<sup>ϕ</sup> <sup>R</sup> be the set of functions <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> with the property that, for each x > 0, ρ<sup>ϕ</sup> <sup>t</sup> [g](x) → 0 as t → ∞. Then we immediately see that

$$
\rho\_t^2[\mathbf{g}](\mathbf{x}) = \rho\_t^\varphi[\mathbf{g}](\mathbf{x}) - \mathbf{x} \rho\_t^\varphi[\mathbf{g}](\mathbf{l}),
$$

which shows that *<sup>R</sup>*<sup>ϕ</sup> <sup>R</sup> <sup>⊆</sup> *<sup>R</sup>*<sup>2</sup> <sup>R</sup>. Now, if g satisfies Rassias and Trif's condition, then it lies in the set <sup>∪</sup>a>0*R*ϕa <sup>R</sup> , where ϕa(x) <sup>=</sup> ln(ax), and hence it also lies in *<sup>R</sup>*<sup>2</sup> <sup>R</sup>.

Proposition B.1 can be generalized to *<sup>R</sup>*<sup>p</sup> <sup>R</sup> for any value of p ≥ 2 as follows.

**Proposition B.2** *Let* <sup>p</sup> <sup>≥</sup> <sup>2</sup> *be an integer, let* <sup>ϕ</sup>1,...,ϕp−<sup>1</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>*, and suppose that* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *has the property that, for each* x > <sup>0</sup>*,*

$$\left(\mathbf{g}(\mathbf{x}+t) - \mathbf{g}(t) - \sum\_{j=1}^{p-1} \binom{\boldsymbol{\chi}}{j} \boldsymbol{\varphi}\_j(t) \right) \to 0 \qquad \text{as } t \to \infty.$$

*Then* <sup>g</sup> *lies in <sup>R</sup>*<sup>p</sup> <sup>R</sup> <sup>⊂</sup> *<sup>D</sup>*<sup>p</sup> R*.*

*Proof* For any t > 0 and any <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, define the function <sup>ρ</sup>*<sup>ϕ</sup>* <sup>t</sup> [g]: [0,∞) <sup>→</sup> <sup>R</sup> by the equation

$$\rho\_l^\# [\mathbf{g}](\mathbf{x}) = \mathbf{g}(\mathbf{x} + t) - \mathbf{g}(t) - \sum\_{j=1}^{p-1} \binom{\mathbf{x}}{j} \varphi\_j(t).$$

Define also the functions ψ*ϕ*,<sup>1</sup> <sup>t</sup> [g],...,ψ*ϕ*,p <sup>t</sup> [g]: [0,∞) <sup>→</sup> <sup>R</sup> recursively by the equations <sup>ψ</sup>*ϕ*,<sup>1</sup> <sup>t</sup> [g] = <sup>ρ</sup>*<sup>ϕ</sup>* <sup>t</sup> [g] and

$$\left[\psi\_{l}^{\mathfrak{p},j+1}\text{[g]}\right] = \left.\psi\_{l}^{\mathfrak{p},j}\text{[g]} - \left(^{\chi}\_{j}\right)\psi\_{l}^{\mathfrak{p},j}\text{[g]} (j), \qquad j = 1, \ldots, p-1.$$

Then, it is not difficult to see that

$$\psi\_l^{\mathfrak{P},j}[\mathfrak{g}](\mathfrak{x}) = \rho\_l^{\mathfrak{P}}[\mathfrak{g}](\mathfrak{x}) - \sum\_{l=1}^{j-1} \binom{\mathfrak{x}}{l} (\Delta^l \mathfrak{g}(t) - \varphi\_l(t))^l$$

and hence ψ*ϕ*,p <sup>t</sup> [g] = <sup>ρ</sup><sup>p</sup> <sup>t</sup> [g]. Thus, if the function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> has the property that, for each x > 0, ρ*ϕ* <sup>t</sup> [g](x) <sup>→</sup> 0 as <sup>t</sup> → ∞, then it lies in *<sup>R</sup>*<sup>p</sup> <sup>R</sup>.

## **Appendix C On a Question Raised by Webster**

*We discuss conditions on the function* g *to ensure both the uniqueness (up to an additive constant) and existence of solutions to the equation* f <sup>=</sup> <sup>g</sup> *that lie in <sup>K</sup>*p*.*

A natural question raised by Webster [98, p. 606], and that we now extend to any value of <sup>p</sup> <sup>∈</sup> <sup>N</sup>, is the following.

*Find necessary and sufficient conditions on the function* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *to ensure both the uniqueness (up to an additive constant) and existence of solutions lying in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *(resp. <sup>K</sup>*<sup>p</sup> −*) to the equation* f = g.

Lemma 2.6(b) shows that a necessary condition for this to occur is that <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*p−<sup>1</sup> + (resp. <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*p−<sup>1</sup> <sup>−</sup> ). Also, our uniqueness and existence results show that a sufficient condition is that <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*p∩*K*<sup>p</sup> <sup>−</sup> (resp. <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*p∩*K*<sup>p</sup> <sup>+</sup>). It is tempting to believe that this latter condition is also necessary. The following two examples support this idea.

(a) Both functions

$$
\ln \Gamma(\mathbf{x}) \qquad \text{and} \qquad \ln \Gamma(\mathbf{x}) + \ln \left( 1 + \frac{1}{2} \sin(2\pi \mathbf{x}) \right)
$$

are solutions to the equation f <sup>=</sup> <sup>g</sup> that lie in *<sup>K</sup>*<sup>0</sup> <sup>+</sup>, where g(x) = ln x does not lie in *<sup>D</sup>*<sup>0</sup> <sup>∪</sup> *<sup>K</sup>*<sup>0</sup> <sup>−</sup> (see Example 3.2).

(b) Both functions

<sup>2</sup><sup>x</sup> and 2<sup>x</sup> <sup>+</sup> sin(2πx)

are solutions to the equation f <sup>=</sup> <sup>g</sup> that lie in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>, where g(x) <sup>=</sup> <sup>2</sup><sup>x</sup> does not lie in *<sup>D</sup>*<sup>p</sup> <sup>∪</sup> *<sup>K</sup>*<sup>p</sup> −.

Nevertheless, the following proposition shows that in general the condition above is not necessary.

293

**Proposition C.1** *There exists a function* <sup>f</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> *such that*

*(a)* f *does not lie in <sup>D</sup>*<sup>0</sup> <sup>∪</sup> *<sup>K</sup>*0*, and*

*(b) for any function* <sup>ϕ</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> *satisfying* ϕ <sup>=</sup> f *we have that* <sup>f</sup> <sup>−</sup> <sup>ϕ</sup> *is constant.*

*Proof* Let <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup> be the function whose graph is the polygonal line through the points (4n, <sup>4</sup>n) and (4n+2, <sup>4</sup>n+4) for all <sup>n</sup> <sup>∈</sup> <sup>N</sup>. Thus the sequence <sup>n</sup> → f (n) is the 4-periodic sequence 2, 0, 0, 2, 2, 0, 0, 2,... and hence condition (a) holds. Now, let <sup>ϕ</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> be such that ϕ <sup>=</sup> f . Clearly, we must have <sup>ϕ</sup> <sup>∈</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup>. For the sake of a contradiction, suppose that the 1-periodic function ω = f − ϕ is not constant. That is, there exist 0 <x<y ≤ 1 such that ω(x) = ω(y). There are two exclusive cases to consider.

(a) Suppose that ω(x) < ω(y). For large integer n, we then have

$$0 \le \varphi(\mathbf{y} + 4n + 2) - \varphi(\mathbf{x} + 4n + 2) = w(\mathbf{x}) - w(\mathbf{y}) < 0.$$

(b) Suppose that ω(x) > ω(y). For large integer n, we then have

$$0 \le \varphi(\mathbf{x} + 4n + 3) - \varphi(\mathbf{y} + 4n + 2) = w(\mathbf{y}) - w(\mathbf{x}) < 0.$$

In both cases we reach a contradiction, and hence condition (b) holds.

We note that the function f arising from Proposition C.1 is such that g = f does not lie in *<sup>D</sup>*0∪*K*0. The following proposition shows that if the equation f <sup>=</sup> <sup>g</sup> has a unique solution (up to an additive constant) and if <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>, then necessarily <sup>g</sup> <sup>∈</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> (see also Corollary 4.18).

**Proposition C.2** *Let* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *and* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, and suppose that the sequence* <sup>n</sup> → pg(n) *is eventually decreasing. Suppose also that there exists a unique (up to an additive constant) function* <sup>f</sup> <sup>∈</sup> *<sup>K</sup>*<sup>p</sup> <sup>+</sup> *satisfying the equation* f <sup>=</sup> <sup>g</sup>*. Then* <sup>g</sup> *lies in <sup>D</sup>*<sup>p</sup> N*.*

*Proof* For the sake of a contradiction, suppose that the assumptions are satisfied and that the sequence <sup>n</sup> → pg(n) does not approach zero. Since this sequence is eventually nonnegative (because we eventually have pg <sup>=</sup> p+1<sup>f</sup> <sup>≥</sup> 0), it must converge to a value C > 0. It follows that the function g(x) ˜ = g(x) − C x p lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> <sup>−</sup> and hence g˜ lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup>. Now, for any 0 < τ < C/(2π )p, the functions

$$\begin{aligned} f(\mathbf{x}) &= \Sigma \tilde{\mathbf{g}}(\mathbf{x}) + C \binom{\boldsymbol{\chi}}{p+1}, \\ \varphi(\mathbf{x}) &= \Sigma \tilde{\mathbf{g}}(\mathbf{x}) + C \binom{\boldsymbol{\chi}}{p+1} + \mathfrak{r} \sin(2\pi \mathbf{x}), \end{aligned}$$

lie in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> by Lemma 2.6(d); indeed, we have

$$D^{p+1}(C\binom{x}{p+1} + \tau \sin(2\pi x)) \ge C - \tau (2\pi)^p > 0.$$

Moreover, these functions are solutions to the equation f = g and satisfy ϕ(1) = f (1). This contradicts the uniqueness assumption.

*Remark C.3* We observe that if f and ϕ are solutions to f = g, then for any x > 0 and any <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗, we have pf (x) <sup>≥</sup> 0 if and only if pϕ(x) <sup>≥</sup> 0. Indeed, suppose on the contrary that pf (x) <sup>≥</sup> 0 and pϕ(x) < 0 for some x > 0. Then

$$0 \le \Delta^p f(\mathbf{x}) = \Delta^{p-1} \mathbf{g}(\mathbf{x}) = \Delta^p \boldsymbol{\varphi}(\mathbf{x}) < 0,$$

a contradiction. ♦

Thus, Webster's question still remains a very interesting open problem whose solution would certainly shed light on the theory developed in this book.

Regarding uniqueness issues only, the following two results (John [49]) are also worth mentioning. Generalizations of these results to higher convexity properties would be welcome.

**Proposition C.4 (See [49])** *Let* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *have the property that*

$$\inf\_{\mathbf{x}\in\mathbb{R}\_{+}}\mathcal{g}(\mathbf{x}) = \mathbf{0}.$$

*Then there is at most one (up to an additive constant) solution* f *to the equation* f = g *that is increasing.*

**Proposition C.5 (See [49])** *Let* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *have the property that*

$$\liminf\_{x \to \infty} \frac{g(x)}{x} = 0.$$

*Then there is at most one (up to an additive constant) solution* f *to the equation* f = g *that is convex.*

## **Appendix D Asymptotic Behaviors and Bracketing**

*We show that by considering higher and higher values of* p *in Corollary 6.12 we obtain closer and closer bounds for the generalized Binet function* <sup>J</sup> <sup>p</sup>+1[g]*.*

We have seen in Example 6.15 that the inequalities

$$\left(1+\frac{1}{x}\right)^{-\frac{1}{2}} \le \frac{\Gamma(x)}{\sqrt{2\pi}\,e^{-x}\,\alpha^{x-\frac{1}{2}}} \le \left(1+\frac{1}{x}\right)^{\frac{1}{2}}$$

hold for any x > 0 and that tighter inequalities can also be obtained by using different values of the integer p ≥ 1 in Corollary 6.12. In this appendix we show that and how this feature applies in general to multiple log --type functions.

Let <sup>g</sup> lie in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, where <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> deg <sup>p</sup>. By Corollary 6.12, for any x > 0 such that g is p-convex or p-concave on [x,∞) we have the inequalities

$$-\overline{G}\_p \left| \Delta^p \mathbf{g}(\mathbf{x}) \right| \; \leq \; J^{p+1}[\Sigma \mathbf{g}](\mathbf{x}) \; \leq \; \overline{G}\_p \left| \Delta^p \mathbf{g}(\mathbf{x}) \right|.$$

Let us now show how tighter inequalities can be obtained. For any <sup>r</sup> <sup>∈</sup> <sup>N</sup>, define the functions αr[g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> and βr[g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> respectively by the equations

$$\begin{aligned} \alpha\_r[\Sigma g](\mathbf{x}) &= -\overline{G}\_{p+r} \left| \Delta^{p+r} g(\mathbf{x}) \right| - \sum\_{j=p+1}^{p+r} G\_j \Delta^{j-1} g(\mathbf{x}) \,, \\\ \beta\_r[\Sigma g](\mathbf{x}) &= \overline{G}\_{p+r} \left| \Delta^{p+r} g(\mathbf{x}) \right| - \sum\_{j=p+1}^{p+r} G\_j \Delta^{j-1} g(\mathbf{x}) \,, \end{aligned}$$

for x > 0.

© The Author(s) 2022 J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0

297

We immediately see that the equality

$$\alpha\_r[\Sigma \mathfrak{g}](\mathfrak{x}) := \beta\_r[\Sigma \mathfrak{g}](\mathfrak{x}),$$

holds if and only if p+rg(x) <sup>=</sup> 0. Moreover, by Corollary 6.12, if <sup>g</sup> <sup>∈</sup> *<sup>K</sup>*p+<sup>r</sup> and if x > 0 is so that g is (p + r)-convex or (p + r)-concave on [x,∞), then the following inequalities hold:

$$
\alpha\_r \mathsf{[\Sigma g]}(\mathsf{x}) \le J^{p+1} \mathsf{[\Sigma g]}(\mathsf{x}) \le \beta\_r \mathsf{[\Sigma g]}(\mathsf{x}) .
$$

The following proposition shows that these inequalities get tighter and tighter as the value of r increases.

**Proposition D.1** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p+r+<sup>1</sup> *for some* <sup>r</sup> <sup>∈</sup> <sup>N</sup>*, where* <sup>p</sup> <sup>=</sup> 1 + deg g*. Let* x > 0 *be so that* g|[x,∞) *lies in*

$$
\mathcal{K}^{p+r}([\mathfrak{x},\infty)) \cap \mathcal{K}^{p+r+1}([\mathfrak{x},\infty)).
$$

*Then, we have*

$$
\alpha\_r [\Sigma \mathfrak{g}](\mathfrak{x}) \le \alpha\_{r+1} [\Sigma \mathfrak{g}](\mathfrak{x}) \le \beta\_{r+1} [\Sigma \mathfrak{g}](\mathfrak{x}) \le \beta\_r [\Sigma \mathfrak{g}](\mathfrak{x}).
$$

*These inequalities are strict if* p+rg(x <sup>+</sup> <sup>1</sup>) = <sup>0</sup>*.*

*Proof* We already know that the central inequality holds. Now, using Corollary 4.19, we can assume that g is (p + r)-convex and (p + r + 1)-concave on [x,∞); the other case can be dealt with similarly. By Lemma 2.5, it follows that p+rg <sup>≤</sup> 0 and p+r+1<sup>g</sup> <sup>≥</sup> 0 on [x,∞). Let us show that the first inequality holds; the third one can be established similarly.

We have two exclusive cases to consider.

• If Gp+r+<sup>1</sup> < 0, then

$$\Delta\_r \alpha\_r [\Sigma g](\mathbf{x}) = -\overline{G}\_{p+r+1} \left( \Delta^{p+r+1} g(\mathbf{x}) + \Delta^{p+r} g(\mathbf{x}) \right),$$
 
$$= -\overline{G}\_{p+r+1} \Delta^{p+r} g(\mathbf{x} + \mathbf{l}).$$

• If Gp+r+<sup>1</sup> > 0, then

$$
\Delta\_r a\_r [\Sigma g](\mathbf{x}) = -\overline{G}\_{p+r} \Delta^{p+r} g(\mathbf{x} + \mathbf{l}) + G\_{p+r+1} \left( \Delta^{p+r+1} g(\mathbf{x}) - \Delta^{p+r} g(\mathbf{x}) \right).
$$

In both cases, we can see that r αr[g](x) ≥ 0. Moreover, we have r αr[g](x) <sup>&</sup>gt; 0 if p+rg(x <sup>+</sup> <sup>1</sup>) = 0.

It is natural to wonder how the inequalities in Proposition D.1 behave as r →<sup>N</sup> ∞. The following proposition, which is a reformulation of Proposition 8.11, answers this question and provides a series representation for <sup>J</sup> <sup>p</sup>+1[g].

**Proposition D.2** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*∞*, where* <sup>p</sup> <sup>=</sup> <sup>1</sup> <sup>+</sup> degg*. Let* x > <sup>0</sup> *be so that for every integer* q ≥ p *the function* g *is* q*-convex or* q*-concave on* [x,∞)*. Suppose also that the sequence* <sup>q</sup> → qg(x) *is bounded. Then the following assertions hold.*


$$\Sigma g(\mathbf{x}) = \sigma[\mathbf{g}] + \int\_1^\chi \mathbf{g}(t)dt - \sum\_{j=1}^\infty G\_j \Delta^{j-1} \mathbf{g}(\mathbf{x}).$$

*Equivalently,*

$$J^{p+1}[\Sigma g](\mathbf{x}) = \ -\sum\_{j=p+1}^{\infty} G\_j \Delta^{j-1} \mathbf{g}(\mathbf{x}).$$

## **Appendix E Generalized Webster's Inequality**

*Webster [98] provided bounds for* <sup>ρ</sup>p+<sup>1</sup> <sup>x</sup> [g](a) *in the special case when* <sup>p</sup> <sup>=</sup> <sup>1</sup>*. We generalize Webster's bounds to any integer* <sup>p</sup> <sup>∈</sup> <sup>N</sup> *and use integration to provide new bounds for* <sup>J</sup> <sup>p</sup>+1[g](x) *that are tighter than those given in Theorem 6.11.*

As we mentioned in Sect. 6.4, one can show that if <sup>g</sup> lies in *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>1</sup> and if x > 0 and a > 0 are so that g is concave on [x + a,∞), then the following double inequality holds

$$\begin{aligned} &\sum\_{k=0}^{\lfloor a \rfloor} \operatorname{g}(\mathbf{x} + k) + (\{a\} - 1)\operatorname{g}(\mathbf{x} + a) - a\operatorname{g}(\mathbf{x}) \ \leq \rho\_{\mathbf{x}}^{2} [\Sigma \operatorname{g}](a) \\ &\leq \sum\_{k=0}^{\lfloor a \rfloor} \operatorname{g}(\mathbf{x} + k) - \operatorname{g}(\mathbf{x} + a) + \{a\}\operatorname{g}(\mathbf{x} + \lfloor a \rfloor + 1) - a\operatorname{g}(\mathbf{x}). \end{aligned}$$

This result was proved in the multiplicative notation by Webster [98, Eq. (6.4)] to establish the limit (6.4) in the case when p = 1. In the following proposition, we generalize this inequality to any value of <sup>p</sup> <sup>∈</sup> <sup>N</sup>. We call it the *generalized Webster inequality*.

**Proposition E.1 (Generalized Webster's Inequality)** *Let* <sup>f</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *and* <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> *be functions such that* f <sup>=</sup> <sup>g</sup> *on* <sup>R</sup>+*. Let also* x > <sup>0</sup> *and* <sup>a</sup> <sup>≥</sup> <sup>0</sup>*. The following assertions hold.*

301

#### 302 E Generalized Webster's Inequality

*(a) If* f *is monotone on* [x + a,∞)*, then*

$$0 \le \pm \left( \rho\_{\chi}^{\mathbb{I}}[f](a) + \mathfrak{g}(\mathfrak{x} + a) - \sum\_{k=0}^{\lfloor a \rfloor} \mathfrak{g}(\mathfrak{x} + k) \right) \le \pm \mathfrak{g}(\mathfrak{x} + \lfloor a \rfloor + 1),$$

*where* <sup>±</sup> *stands for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>f</sup> *lies in <sup>K</sup>*<sup>0</sup> <sup>+</sup> *or <sup>K</sup>*<sup>0</sup> −*. (b) If* <sup>f</sup> *is* <sup>p</sup>*-convex or* <sup>p</sup>*-concave on* [<sup>x</sup> <sup>+</sup> a,∞) *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗*, then*

$$\begin{aligned} 0 &\le \pm \varepsilon\_{p+1}(\{a\}) \, \rho^{p+1}\_{\chi+\lfloor a \rfloor+1}[f](\{a\})\\ &\le \pm \varepsilon\_{p+1}(\{a\}) \, \frac{\{a\}}{p} \, \rho^{p}\_{\chi+\lfloor a \rfloor+1}[g](\{a\}-1), \end{aligned}$$

*where* εp+1({a}) <sup>=</sup> <sup>0</sup>*, if* <sup>a</sup> <sup>∈</sup> <sup>N</sup>*, and* εp+1({a}) <sup>=</sup> (−1)p*, otherwise, and* <sup>±</sup> *stands for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>f</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *or <sup>K</sup>*<sup>p</sup> <sup>−</sup>*. Moreover, we have*

$$\begin{aligned} \rho\_{\boldsymbol{x}+\lfloor\boldsymbol{a}\rfloor+1}^{p+1}[f](\{\boldsymbol{a}\}) &= \rho\_{\boldsymbol{\lambda}}^{p+1}[f](\boldsymbol{a}) + \boldsymbol{g}(\boldsymbol{\lambda}+\boldsymbol{a}) \\ &+ \sum\_{j=1}^{p} \left( \binom{\boldsymbol{a}}{j} - \binom{\lfloor \boldsymbol{a} \rfloor}{j} \right) \boldsymbol{\Delta}^{j-1} \boldsymbol{g}(\boldsymbol{\lambda}) - \sum\_{j=0}^{p} \binom{\lfloor \boldsymbol{a} \rfloor}{j} \sum\_{k=0}^{\lfloor \boldsymbol{a} \rfloor} \boldsymbol{\Delta}^{j} \boldsymbol{g}(\boldsymbol{x}+k). \end{aligned}$$

*Proof* Let us first prove assertion (a). Using monotonicity of f , we get

$$
\pm f(\mathbf{x} + \lfloor a \rfloor + 1) \le \pm f(\mathbf{x} + a + 1) \le \pm f(\mathbf{x} + \lfloor a \rfloor + 2),
$$

or equivalently, using (3.2),

$$\begin{aligned} \pm \left( f(\mathbf{x}) + \sum\_{k=0}^{\lfloor a \rfloor} g(\mathbf{x} + k) \right) &\leq \pm \left( f(\mathbf{x} + a) + g(\mathbf{x} + a) \right) \\ &\leq \pm \left( f(\mathbf{x}) + \sum\_{k=0}^{\lfloor a \rfloor + 1} g(\mathbf{x} + k) \right). \end{aligned}$$

This proves assertion (a). Let us now prove assertion (b). The first inequality immediately follows from Lemma 2.7. To see that the second inequality holds, we first observe that

$$\begin{aligned} \{a\} & \xrightarrow{p+1} f\left[\mathbf{x} + \lfloor a \rfloor + 1, \dots, \mathbf{x} + \lfloor a \rfloor + p, \mathbf{x} + a, \mathbf{x} + a + 1\right] \\ & = (\{a\} - p)\left\{a\right\} \underline{\boldsymbol{f}}\left[\mathbf{x} + \lfloor a \rfloor + 1, \dots, \mathbf{x} + \lfloor a \rfloor + p, \mathbf{x} + a + 1\right] \\ & \quad - \{a\} \left(\{a\} - 1\right) \underline{\boldsymbol{f}}\left[\mathbf{x} + \lfloor a \rfloor + 1, \dots, \mathbf{x} + \lfloor a \rfloor + p, \mathbf{x} + a\right] \qquad \text{(by (2.3))} \end{aligned}$$

$$= (\{a\} - p) \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [f](\{a\}) - \{a\} \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [f](\{a\} - 1) \qquad \text{(by (2.12))}$$

$$= \{a\} \, \rho^{p - 1}\_{\underline{x} + \lfloor a \rfloor + 1} [g](\{a\} - 1) - p \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [f](\{a\}) \qquad \text{(by (4.3))}$$

$$= \{a\} \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [g](\{a\} - 1) - p \, \binom{\lfloor a \rfloor}{p} \, \Delta^{p} f(\mathfrak{x} + \lfloor a \rfloor + 1)$$

$$\qquad - p \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [f](\{a\}) \qquad \text{(by (1.7))}$$

$$= \{a\} \, \rho^{p}\_{\underline{x} + \lfloor a \rfloor + 1} [g](\{a\} - 1) - p \, \rho^{p + 1}\_{\underline{x} + \lfloor a \rfloor + 1} [f](\{a\}) \qquad \text{(by (1.7))}$$

Now, since f is p-convex or p-concave on [x + a,∞), we have

$$0 \le \pm \varepsilon\_{p+1}(\{a\}) \{a\} \underline{\underline{p+1}} f \{\underline{x} + \lfloor a \rfloor + 1, \dots, \underline{x} + \lfloor a \rfloor + p, \underline{x} + a, \underline{x} + a + 1\},$$

and hence

$$0 \le \pm \varepsilon\_{p+1}(\{a\}) \left( \frac{\{a\}}{p} \rho^p\_{\underline{\chi+\lfloor a \rfloor+1}}[\![g](\{a\}-1) - \rho^{p+1}\_{\underline{\chi+\lfloor a \rfloor+1}}[\![f](\{a\}) ) \right).$$

This proves the second inequality. Finally, using (1.7) and then (3.2) we obtain

$$\begin{aligned} &\rho\_{\boldsymbol{x}+\lfloor a\rfloor+1}^{p+1}[f](\{a\})-\rho\_{\boldsymbol{x}}^{p+1}[f](a) \\ &=f(\boldsymbol{x}+a+1)-\sum\_{j=0}^{p}\binom{[a]}{j}\Delta^{j}f(\boldsymbol{x}+\lfloor a\rfloor+1)-f(\boldsymbol{x}+a)+\sum\_{j=0}^{p}\binom{a}{j}\Delta^{j}f(\boldsymbol{x}) \\ &=g(\boldsymbol{x}+a)+\sum\_{j=1}^{p}\left(\binom{a}{j}-\binom{[a]}{j}\right)\Delta^{j}f(\boldsymbol{x})-\sum\_{j=0}^{p}\binom{[a]}{j}\sum\_{k=0}^{[a]}\Delta^{j}g(\boldsymbol{x}+k). \end{aligned}$$

This completes the proof.

The generalized Webster inequality applies to multiple log--type functions simply by taking <sup>f</sup> <sup>=</sup> g in Proposition E.1, provided that <sup>g</sup> lies in *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup>. This inequality then provides bounds for the quantity <sup>ρ</sup>p+<sup>1</sup> <sup>x</sup> [g](a).

We now show how narrow bounds for <sup>J</sup> <sup>p</sup>+1[g](x) can be derived by "integrating" the generalized Webster inequality. We also show that these new bounds are narrower than the generalized Stirling's formula-based inequalities given in Theorem 6.11 and Corollary 6.12.

Let us begin with the special case when <sup>p</sup> <sup>=</sup> 0. Thus, let <sup>g</sup> lie in *<sup>C</sup>*0∩*D*0∩*K*<sup>0</sup> and let x > 0 be so that g is monotone on [x,∞). Corollary 6.12 provides the following bounds for <sup>J</sup> <sup>1</sup>[g](x)

$$-|\mathbf{g}(\mathbf{x})| \le J^{\mathsf{I}}[\Sigma \mathbf{g}](\mathbf{x}) \le |\mathbf{g}(\mathbf{x})|.$$

The following proposition provides a finer approximation of <sup>J</sup> <sup>1</sup>[g](x), where the absolute error is bounded at x by |g(x + 1)|.

**Proposition E.2** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> *and let* x > <sup>0</sup> *be so that* <sup>g</sup> *is monotone on* [x,∞)*. Then we have*

$$0 \le \pm \left( g(\mathbf{x}) - \int\_0^1 g(\mathbf{x} + t) \, dt \right) \le \pm \left( -1 \right) J^1[\Sigma g](\mathbf{x})$$

$$\le \pm \left( g(\mathbf{x}) + g(\mathbf{x} + \mathbf{l}) - \int\_0^1 g(\mathbf{x} + t) \, dt \right) \le \pm g(\mathbf{x}),$$

*where* <sup>±</sup> *stands for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* g *lies in <sup>K</sup>*<sup>0</sup> <sup>+</sup> *or <sup>K</sup>*<sup>0</sup> −*.*

*Proof* Negating <sup>g</sup> is necessary, we can assume that it lies in *<sup>K</sup>*<sup>0</sup> <sup>−</sup>, which means that g lies in *<sup>K</sup>*<sup>0</sup> <sup>+</sup>. This immediately establishes the first and the last inequalities. The two inner inequalities can then be obtained by integrating the expressions in assertion (a) of Proposition E.1 on a ∈ (0, 1).

*Example E.3* Let us apply Proposition E.2 to g(x) <sup>=</sup> <sup>1</sup> <sup>x</sup> . For any x > 0, we have the following inequalities

$$
\ln x - \frac{1}{x} \le \ln(x+1) - \frac{1}{x} - \frac{1}{x+1} \le \psi(x) \le \ln(x+1) - \frac{1}{x} \le \ln x.
$$

The inner approximation has an absolute error that is bounded at any x > 0 by the quantity <sup>1</sup> <sup>x</sup>+<sup>1</sup> . ♦

Let us now assume that <sup>p</sup> <sup>≥</sup> 1. Thus, let <sup>g</sup> lie in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> for some <sup>p</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> and let x > 0 be so that g is p-convex or p-concave on [x,∞). Then we have seen in Theorem 6.11 that the following inequalities hold

$$0 \le \pm (-1)^p J^{p+1} [\Sigma g](\mathbf{x}) \le \pm (-1)^{p+1} B^p [g](\mathbf{x}),$$

where <sup>±</sup> stands for 1 or <sup>−</sup>1 according to whether <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> or in *<sup>K</sup>*<sup>p</sup> <sup>−</sup>, and

$$B^p[\mathbf{g}](\mathbf{x}) = \int\_0^1 \binom{t-1}{p} \left(\Delta^{p-1}\mathbf{g}(\mathbf{x}+t) - \Delta^{p-1}\mathbf{g}(\mathbf{x})\right) dt$$

$$= \int\_0^1 \binom{t-1}{p} \Delta^{p-1}\mathbf{g}(\mathbf{x}+t) \, dt - (-1)^p \overline{G}\_p \, \Delta^{p-1}\mathbf{g}(\mathbf{x}) .$$

In the following proposition, we give finer bounds for <sup>J</sup> <sup>p</sup>+1[g](x). To this end, we introduce the quantity

$$A^p[\mathbf{g}](\boldsymbol{\chi}) \ = \ J^{p+1}[\mathbf{g}](\boldsymbol{\chi}) + \frac{1}{p} \int\_0^1 t \, \rho^p\_{\boldsymbol{\chi}+1}[\mathbf{g}](t-1) \, dt \ .$$

It is not difficult to see that this quantity can be rewritten as follows

$$A^p[\mathbf{g}](\mathbf{x}) = \left. J^{p+1}[\mathbf{g}](\mathbf{x}) + \frac{1}{p} \int\_0^1 t \, \mathbf{g}(\mathbf{x} + t) \, dt - \frac{1}{p} \sum\_{j=1}^p j G\_j \, \Delta^{j-1} \mathbf{g}(\mathbf{x} + \mathbf{l}) \, \Delta^j$$

Indeed, using (1.7) we clearly have

$$\int\_0^1 t \, \rho\_{x+1}^p[\mathbf{g}](t-1) \, dt \, = \int\_0^1 t \, \mathbf{g}(\mathbf{x}+t) \, dt - \sum\_{j=0}^{p-1} \int\_0^1 t \binom{t-1}{j} \, dt \, \Delta^j \mathbf{g}(\mathbf{x}+1),$$

where

$$\int\_0^1 t \binom{t-1}{j} \, dt \, = \left. (j+1) \int\_0^1 \binom{t}{j+1} \, dt \right| \, = \left. (j+1) \, G\_{j+1} \right|.$$

We also observe that <sup>A</sup>1[g] = <sup>B</sup>1[g].

**Proposition E.4** *Let* <sup>g</sup> *lie in <sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for some* <sup>p</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> *and let* x > <sup>0</sup> *be so that* g *is* p*-convex or* p*-concave on* [x,∞)*. Then, we have*

$$\begin{aligned} 0 \le \pm (-1)^{p+1} J^{p+1} [\mathfrak{g}](\mathbf{x}) &\le \pm (-1)^p J^{p+1} [\Sigma \mathbf{g}](\mathbf{x}) \\ &\le \pm (-1)^{p+1} A^p [\mathbf{g}](\mathbf{x}) \le \pm (-1)^{p+1} B^p [\mathbf{g}](\mathbf{x}), \end{aligned}$$

*where* <sup>±</sup> *stands for* <sup>1</sup> *or* <sup>−</sup><sup>1</sup> *according to whether* <sup>g</sup> *lies in <sup>K</sup>*<sup>p</sup> <sup>+</sup> *or in <sup>K</sup>*<sup>p</sup> − *.*

*Proof* Recall that if <sup>g</sup> lies in *<sup>K</sup>*<sup>p</sup> <sup>+</sup> (resp. *<sup>K</sup>*<sup>p</sup> <sup>−</sup>), then g lies in *<sup>K</sup>*<sup>p</sup> <sup>−</sup> (resp. *<sup>K</sup>*<sup>p</sup> <sup>+</sup>). The first inequality is then clear. The second and third inequalities are obtained by integrating the expressions in assertion (b) of Proposition E.1 on a ∈ (0, 1). To establish the fourth inequality, we first prove the following claim.

*Claim* For any <sup>g</sup> <sup>∈</sup> <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup>, any <sup>p</sup> <sup>∈</sup> <sup>N</sup>∗, any x > 0, and any 0 <t< 1, we have

$$\binom{l-1}{p}(\Delta^{p-1}g(\mathbf{x}+t)-\Delta^{p-1}g(\mathbf{x}))+\rho\_{\mathbf{x}}^{p+1}[\mathbf{g}](t)-\frac{t}{p}\rho\_{\mathbf{x}+1}^{p}[\mathbf{g}](t-1)$$

$$=\frac{1}{p}t\underbrace{\frac{p+1}{p}}\_{j=1}\sum\_{j=1}^{p-1}\mathbf{g}[\underbrace{\mathbf{x}+j,\ldots,\mathbf{x}+p-1}\_{p-j \text{ places}},\underbrace{\mathbf{x}+t,\ldots,\mathbf{x}+t+j}\_{j+1 \text{ places}}].$$

*Proof of the Claim* Using (1.7), it is easy to see that the claimed identity holds when p = 1, in which case the right-hand side is identically zero. Hence, we can assume that p ≥ 2. Using (2.3), we then obtain

$$\frac{1}{p}\operatorname{tr}\frac{p+1}{p}\sum\_{j=1}^{p-1}\operatorname{g}[\mathbf{x}+j,\dots,\mathbf{x}+p-1,\mathbf{x}+t,\dots,\mathbf{x}+t+j]$$

$$=\frac{1}{p}\frac{\mathbf{r}\mathbf{p}+\mathbf{l}}{\mathbf{t}}\sum\_{j=1}^{p-1}\left(\operatorname{g}[\mathbf{x}+j+1,\dots,\mathbf{x}+p-1,\mathbf{x}+t,\dots,\mathbf{x}+t+j]\right)$$

$$-\operatorname{g}[\mathbf{x}+j,\dots,\mathbf{x}+p-1,\mathbf{x}+t,\dots,\mathbf{x}+t+j-1]),$$

where the latter sum telescopes to

$$\mathbf{g}\mathbf{g}[\mathbf{x}+t,\dots,\mathbf{x}+t+p-1] - \mathbf{g}[\mathbf{x}+1,\dots,\mathbf{x}+p-1,\mathbf{x}+t].$$

Thus, using (2.12) we see that the right-hand side of the claimed identity reduces to

$$\left(\binom{t-1}{p}\Delta^{p-1}\mathbf{g}(\mathbf{x}+t) - \frac{t-p}{p}\,\rho^{p-1}\_{\mathbf{x}+1}\mathbf{[g]}(t-1)\right)$$

Now, subtracting the left-hand side of the claimed identity from this latter expression, we get

$$\frac{p-t}{p}\rho\_{\ge+1}^{p-1}[\mathbf{g}](t-1) + \frac{t}{p}\rho\_{\ge+1}^{p}[\mathbf{g}](t-1) - \rho\_{\ge}^{p+1}[\mathbf{g}](t) + \binom{r-1}{p}\Delta^{p-1}\mathbf{g}(\mathbf{x}).$$

Using identities (1.7), (3.5), and the trivial identity <sup>t</sup> p <sup>t</sup>−<sup>1</sup> p−1 <sup>=</sup> <sup>t</sup> p , it follows that the latter expression becomes

$$- \left( \begin{matrix} l \\ p \end{matrix} \right) \Delta^{p-1} \mathbf{g}(\mathbf{x} + \mathbf{l}) + \sum\_{j=0}^{p} \binom{l}{j} \Delta^{j} \mathbf{g}(\mathbf{x}) - \sum\_{j=0}^{p-2} \binom{l-1}{j} \Delta^{j} \mathbf{g}(\mathbf{x} + \mathbf{l}) + \binom{l-1}{p} \Delta^{p-1} \mathbf{g}(\mathbf{x}).$$

Substituting g(x) + g(x) for g(x + 1) in this latter expression, we obtain

$$-\left(\binom{t}{p}\,\Delta^{p-1}\mathbf{g}(\mathbf{x}) + \binom{t}{p}\,\Delta^{p}\mathbf{g}(\mathbf{x})\right) + \left(\binom{t}{p}\,\Delta^{p}\mathbf{g}(\mathbf{x}) + \binom{t}{p-1}\,\Delta^{p-1}\mathbf{g}(\mathbf{x}) + \sum\_{j=0}^{p-2} \binom{t}{j}\Delta^{j}\mathbf{g}(\mathbf{x})\right)$$

$$-\left(\sum\_{j=0}^{p-2} \binom{t-1}{j}\Delta^{j}\mathbf{g}(\mathbf{x}) + \sum\_{j=0}^{p-2} \binom{t-1}{j}\Delta^{j+1}\mathbf{g}(\mathbf{x})\right) + \binom{t-1}{p}\Delta^{p-1}\mathbf{g}(\mathbf{x}).$$

Collecting terms, this latter expression reduces to

$$\begin{aligned} \left( \begin{smallmatrix} \mathbf{z}' \\ p-1 \end{smallmatrix} \right) \Delta^{p-1} \mathbf{g}(\mathbf{x}) - \left( \begin{smallmatrix} \mathbf{z}' \\ p-1 \end{smallmatrix} \right) \Delta^{p-1} \mathbf{g}(\mathbf{x}) + \sum\_{j=1}^{p-2} \left( \begin{smallmatrix} \mathbf{z}' \\ j \end{smallmatrix} \right) - \left( \begin{smallmatrix} \mathbf{z}' \\ j \end{smallmatrix} \right) \Delta^{j} \mathbf{g}(\mathbf{x}) - \sum\_{j=1}^{p-1} \binom{\mathbf{z}'-1}{j-1} \Delta^{j} \mathbf{g}(\mathbf{x}) \\\ = \left( \begin{smallmatrix} \mathbf{z}'-1 \\ p-2 \end{smallmatrix} \right) \Delta^{p-1} \mathbf{g}(\mathbf{x}) + \sum\_{j=1}^{p-2} \binom{\mathbf{z}'-1}{j-1} \Delta^{j} \mathbf{g}(\mathbf{x}) - \sum\_{j=1}^{p-1} \binom{\mathbf{z}'-1}{j-1} \Delta^{j} \mathbf{g}(\mathbf{x}) = \mathbf{0}. \end{aligned}$$

This completes the proof of the claim.

Let us now establish the fourth inequality. Negating g if necessary, we can assume that it lies in *<sup>K</sup>*<sup>p</sup> <sup>−</sup>. Using the claim, we have immediately that

$$B^p[\mathbf{g}](\mathbf{x}) - A^p[\mathbf{g}](\mathbf{x}) = \sum\_{j=1}^{p-1} \int\_0^1 \underbrace{\frac{t^{p+1}}{p}}\_{} \mathbf{g}[\mathbf{x} + j, \dots, \mathbf{x} + p - 1, \mathbf{x} + t, \dots, \mathbf{x} + t + j] \, dt,$$

where the divided difference of g has p + 1 arguments and is therefore nonnegative since g is (p − 1)-convex by Corollary 4.19. This completes the proof.

*Example E.5 (The Gamma Function)* Let us apply Proposition E.4 to the function g(x) <sup>=</sup> ln <sup>x</sup> with <sup>p</sup> <sup>=</sup> 1 (recall here that <sup>A</sup>1[g] = <sup>B</sup>1[g]). We obtain the following inequalities for x > 0

$$0 \le \frac{1}{2}(2x+1)\ln\left(1+\frac{1}{x}\right) - 1 \le J(x) \le \frac{1}{2}(x+1)^2\ln\left(1+\frac{1}{x}\right) - \frac{x}{2} - \frac{3}{4}\pi$$

This provides an approximation of Binet's function J (x) with an absolute error that is bounded at any x > 0 by

$$
\frac{\chi^2}{2}\ln\left(1+\frac{1}{\chi}\right)-\frac{\chi}{2}+\frac{1}{4},
$$

that is, <sup>1</sup> <sup>6</sup><sup>x</sup> <sup>−</sup> <sup>1</sup> <sup>8</sup>x<sup>2</sup> <sup>+</sup> O(x−3) as <sup>x</sup> → ∞. In the multiplicative notation, we obtain

$$1 \le e^{-1} \left(1 + \frac{1}{x}\right)^{x + \frac{1}{2}} \le \frac{\Gamma(x)}{\sqrt{2\pi} \, e^{-x} \ge^{x - \frac{1}{2}}} \le e^{-\frac{x}{2} - \frac{3}{4}} \left(1 + \frac{1}{x}\right)^{\frac{1}{2}(x+1)^2}$$

thus retrieving (6.28). In turn, these inequalities provide an approximation of the log-gamma function with the same absolute error. ♦

,

*Example E.6 (The Barnes* G*-Function, see Sect. 10.5)* Let us apply Proposition E.4 to the function g(x) = ln -(x) with p = 2. After some calculus we obtain the following inequalities for x > 0

$$\begin{aligned} 0 &\le \ln \Gamma(\mathbf{x}) + x - \left(\mathbf{x} - \frac{1}{2}\right) \ln x - \frac{1}{12} \ln\left(1 + \frac{1}{\mathbf{x}}\right) - \frac{1}{2} \ln(2\pi) \\ &\le -\ln G(\mathbf{x}) + \psi\_{-2}(\mathbf{x}) - \frac{1}{2} \ln \Gamma(\mathbf{x}) + \frac{1}{12} \ln \mathbf{x} + \frac{1}{12} - \frac{1}{4} \ln(2\pi) - 2 \ln A \\ &\le \frac{1}{2} \psi\_{-2}(\mathbf{x}) + \frac{3}{4} \ln \Gamma(\mathbf{x}) - \frac{1}{12} (3\mathbf{x}^2 + 6\mathbf{x} - 4) \ln \mathbf{x} + \frac{3}{8} \mathbf{x}^2 + \frac{1}{2} \mathbf{x} \\ &\quad - \frac{1}{8} (2\mathbf{x} + 3) \ln(2\pi) - \frac{1}{2} \ln A \\ &\le \frac{1}{12} (\mathbf{x} + 1)^2 (2\mathbf{x} + 5) \ln\left(1 + \frac{1}{\mathbf{x}}\right) - \frac{1}{72} (12\mathbf{x}^2 + 48\mathbf{x} + 49). \end{aligned}$$

Here, the absolute error is bounded by <sup>1</sup> <sup>16</sup><sup>x</sup> <sup>−</sup> <sup>59</sup> <sup>1440</sup>x<sup>2</sup> <sup>+</sup> O(x−3) as <sup>x</sup> → ∞. ♦

*Remark E.7 (Bounds for the Generalized Euler Constant)* If <sup>g</sup> lies in *<sup>C</sup>*0∩*D*p∩*K*<sup>p</sup> for p = 1 + deg g and if g is p-convex or p-concave on [1,∞), then (6.45) and (6.46) provide bounds for the generalized Euler constant (see Definition 6.34)

$$\mathcal{Y}[\mathfrak{g}] = -J^{p+1}[\Sigma \mathfrak{g}](\mathfrak{l}).$$

Finer bounds can now be obtained as follows. Under the assumptions of Proposition E.4, we have

$$\pm (-1)^p J^{p+1}[\mathfrak{g}](\mathfrak{l}) \le \pm (-1)^{p+1} \mathcal{Y}[\mathfrak{g}] \le \pm (-1)^p A^p[\mathfrak{g}](\mathfrak{l}).$$

For instance, when g(x) = ln -(x), we obtain

$$1 - \frac{7}{12}\ln 2 - \frac{1}{2}\ln \pi \le \varkappa \{ \ln \diamond \Gamma \} \\ = \sigma \{ \ln \diamond \Gamma \} \\ \le \frac{7}{8} - \frac{1}{2}\ln A - \frac{3}{8}\ln(2\pi).$$

Thus, γ [ln ◦-] ≈ 0.045 lies in the interval [0.023, 0.062], with amplitude < 0.039. ♦

**Searching for Finer Approximations** We now end this appendix with an interesting observation about the approximations of <sup>J</sup> <sup>p</sup>+1[g](x) (or equivalently g(x)) given in Propositions E.2 and E.4.

For any <sup>p</sup> <sup>∈</sup> <sup>N</sup> and any <sup>g</sup> <sup>∈</sup> *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*p, define the function <sup>ε</sup>p[g]: <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by the equation

$$\varepsilon^{p}[\mathfrak{g}](\mathfrak{x}) = \begin{cases} |\mathfrak{g}(\mathfrak{x}+1)|, & \text{if } p = 0, \\ |A^{p}[\mathfrak{g}](\mathfrak{x}) - J^{p+1}[\mathfrak{g}](\mathfrak{x})|, & \text{if } p \ge 1. \end{cases}$$

Let us show that, if <sup>g</sup> is <sup>p</sup>-convex or <sup>p</sup>-concave on [x,∞), then the function <sup>ε</sup>p[g] decreases to zero on [x,∞). This is clear if p = 0, so we can assume that p ≥ 1. We know from Theorem 6.11 that the function <sup>|</sup>Bp[g]| vanishes at infinity, and hence so does the function <sup>ε</sup>p[g] by Proposition E.4. On the other hand, using (2.12) we see that

$$\begin{aligned} \varepsilon^p[\mathbf{g}](\mathbf{x}) &= \left| \int\_0^1 \frac{t}{p} \rho\_{\mathbf{x}+1}^p[\mathbf{g}](t-1) \, dt \right| \\ &= \left| \int\_0^1 \frac{t^{\frac{p+1}{p}}}{p} \mathbf{g}[\mathbf{x}+1, \dots, \mathbf{x}+p, \mathbf{x}+t] \, dt \right|, \end{aligned}$$

and this function is monotone by Lemma 2.5.

In terms of approximations of g(x) given in Propositions E.2 and E.4, this observation shows that, for any <sup>m</sup> <sup>∈</sup> <sup>N</sup>, the approximation of g(x <sup>+</sup> m) is finer than that of g(x) and it is actually finer and finer as m increases.

Thus, finer approximations of g(x) can be obtained using the following procedure.

Step 1. Replace x with x + m in Propositions E.2 and E.4.

Step 2. Use the substitution (cf. (5.3))

$$
\Sigma \mathbf{g}(\mathbf{x} + m) = \Sigma \mathbf{g}(\mathbf{x}) + \sum\_{k=0}^{m-1} \mathbf{g}(\mathbf{x} + k)
$$

in the expression of <sup>J</sup> <sup>p</sup>+1[g](x <sup>+</sup> m).

Note that we already used this trick when we investigated the generalized Gautschi inequality (see Remark 8.69).

*Example E.8 (The Gamma Function)* Let <sup>m</sup> <sup>∈</sup> <sup>N</sup>. Replacing <sup>x</sup> with <sup>x</sup> <sup>+</sup> <sup>m</sup> in the following approximation of the gamma function (see Example E.5)

$$e^{-1} \left(1 + \frac{1}{\chi}\right)^{\chi + \frac{1}{2}} \le \frac{\Gamma(\chi)}{\sqrt{2\pi} \, e^{-\chi} \, \chi^{\chi - \frac{1}{2}}} \le \, e^{-\frac{\chi}{2} - \frac{3}{4}} \left(1 + \frac{1}{\chi}\right)^{\frac{1}{2}(\chi + 1)^2}.$$

and then using the substitution

$$
\Gamma(\mathfrak{x} + m) \, := \, (\mathfrak{x} + m - 1) \underline{\mathop{\rm m}} \Gamma(\mathfrak{x}),
$$

.

we finally obtain

$$\begin{aligned} e^{-1} \left( 1 + \frac{1}{\chi + m} \right)^{\chi + m + \frac{1}{2}} &\leq \frac{(\chi + m - 1)\#\Gamma(\chi)}{\sqrt{2\pi}} \\ &\leq e^{-\frac{\chi + m}{2} - \frac{3}{4}} \left( 1 + \frac{1}{\chi + m} \right)^{\frac{1}{2}(\chi + m + 1)^2} \end{aligned}$$

This double inequality provides an approximations of the log-gamma function with an absolute error that is bounded by <sup>1</sup> <sup>6</sup>(x+m) <sup>−</sup> <sup>1</sup> <sup>8</sup>(x+m)<sup>2</sup> <sup>+</sup> O(x−3) as <sup>x</sup> → ∞. ♦

## **Appendix F On the Differentiability of** *g*

*We establish Proposition 7.3, which states that, for every* <sup>p</sup> <sup>∈</sup> <sup>N</sup>*, there exists a function* <sup>g</sup> *lying in <sup>C</sup>*p+<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>p</sup> <sup>∩</sup> *<sup>K</sup>*<sup>p</sup> *for which* g *does not lie in <sup>C</sup>*p+1*.*

To establish Proposition 7.3, we first show that it is enough to consider the special case when <sup>p</sup> <sup>=</sup> 0. Suppose that there exists a function <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> lying in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> such that g does not lie in *<sup>C</sup>*1. By Proposition 4.12, its antiderivative

$$G(\mathbf{x}) = \int\_{1}^{\mathbf{x}} \mathbf{g}(t) \, dt$$

clearly lies in *<sup>C</sup>*<sup>2</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*1. By Proposition 8.20, we also have

$$D\Sigma G(\mathbf{x}) := \Sigma \mathbf{g}(\mathbf{x}) - \sigma \mathbf{[g]}, \qquad \mathbf{x} > \mathbf{0}.$$

Since we assumed that g does not lie in *<sup>C</sup>*1, it follows that G cannot lie in *<sup>C</sup>*2. Iterating this process, we obtain that the statement is true for any <sup>p</sup> <sup>∈</sup> <sup>N</sup>.

We now construct a function <sup>g</sup> lying in *<sup>C</sup>*<sup>1</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> (and even in *<sup>C</sup>*∞) and such that the function g does not lie in *<sup>C</sup>*1.

Consider first the function : <sup>R</sup> <sup>→</sup> <sup>R</sup> defined by

$$\Psi(\mathbf{x}) = \begin{cases} \alpha \exp\left(1 - \frac{1}{1 - 4\alpha^2}\right), & \text{if } \mathbf{x} \in \left(-\frac{1}{2}, \frac{1}{2}\right), \\ 0, & \text{otherwise}, \end{cases}$$

where

$$\frac{1}{\alpha} = \int\_{-1/2}^{1/2} \exp\left(1 - \frac{1}{1 - 4\alpha^2}\right) d\alpha \dots$$

© The Author(s) 2022

311

J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0

Thus defined, is a bump function that is of class *C*<sup>∞</sup> with the compact support

$$\text{supp}(\Psi) \, = \left[ -\frac{1}{2}, \frac{1}{2} \right].$$

For every <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, define the function m : <sup>R</sup> <sup>→</sup> <sup>R</sup> by the equation

$$\Psi\_m(\mathfrak{x}) := \Psi(\mathcal{Z}^m(\mathfrak{x} - m)) \quad \text{for } \mathfrak{x} \in \mathbb{R}\_+$$

We clearly have that

$$\text{supp}(\Psi\_m) = \left[ m - \frac{1}{2^{m+1}}, m + \frac{1}{2^{m+1}} \right],\tag{F.1}$$

$$\int\_{\mathbb{R}\_+} \Psi\_m(\mathbf{x}) \, d\mathbf{x} \, = \,\frac{1}{2^m}, \quad \text{and} \quad \Psi\_m(m) \, = \,\alpha\,.$$

Now, define the functions : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> and <sup>g</sup> : <sup>R</sup><sup>+</sup> <sup>→</sup> <sup>R</sup> by

$$\overline{\Psi}(\mathbf{x}) = \sum\_{m=1}^{\infty} \Psi\_m(\mathbf{x}).$$

and

$$g(x) := -1 + \int\_0^x \overline{\Psi}(t) \, dt.$$

Then, we can easily see that the function <sup>g</sup> lies in *<sup>C</sup>*<sup>∞</sup> <sup>∩</sup> *<sup>D</sup>*<sup>0</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> <sup>+</sup>, and hence the function g exists and lies in *<sup>C</sup>*<sup>0</sup> <sup>∩</sup> *<sup>D</sup>*<sup>1</sup> <sup>∩</sup> *<sup>K</sup>*<sup>0</sup> −.

We now have the following claim, which establishes Proposition 7.3.

*Claim* For any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, the function g is not differentiable at <sup>m</sup>. More precisely, we have

$$\lim\_{h \to 0} \frac{\Sigma g(m+h) - \Sigma g(m)}{h} = -\infty.$$

*Proof* Since g lies in *C*<sup>∞</sup> and satisfies the equation g(x + 1) = g(x) + g(x), it is enough to prove the claim for m = 1. For any h > 0, we have

$$\frac{1}{h} \left( \Sigma g(1+h) - \Sigma g(1) \right) = \frac{1}{h} \Sigma g(1+h) = -\frac{1}{h} \sum\_{k=1}^{\infty} (g(k+h) - g(k))$$

$$= -\sum\_{k=1}^{\infty} g[k, k+h].$$

Now, for any <sup>k</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> the function <sup>g</sup> is increasing and concave on [k, k<sup>+</sup> <sup>1</sup> <sup>2</sup> ) (because its derivative g | [k,k+<sup>1</sup> <sup>2</sup> ) <sup>=</sup> k<sup>|</sup> [k,k+<sup>1</sup> <sup>2</sup> ) is nonnegative and decreasing). We then see that the function

$$h \mapsto \text{ g}[k, k+h]$$

is nonnegative and continuously decreases (by Lemma 2.5) on [0, <sup>1</sup> <sup>2</sup> ) with maximum value g[k, k] = g (k) = k(k) = α. It follows that, for any integers 1 ≤ k ≤ m, there exists 0 < δk,m < <sup>1</sup> <sup>2</sup> such that

$$\frac{\alpha}{2} \le \text{ g}[k, k + h] \le \alpha \qquad \text{for all } h \in (0, \delta\_{k, m}).$$

Thus, for any <sup>m</sup> <sup>∈</sup> <sup>N</sup>∗, there exists

$$0 \ll h\_m < \min\_{k=1,\dots,m} \delta\_{k,m},$$

such that

$$\frac{\alpha}{2} \le \lg[k, k + h\_m] \le \alpha \qquad k = 1, \dots, m.$$

Thus, we have

$$\frac{1}{h\_m}\Sigma\mathbf{g}(1+h\_m) = \left| -\sum\_{k=1}^{\infty} \mathbf{g}[k, k+h\_m] \right| \le \left| -\sum\_{k=1}^{m} \mathbf{g}[k, k+h\_m] \right| \le \left| -m\frac{\alpha}{2} \right|.$$

which shows that the function g cannot be right-differentiable at 1.

Now, since the function

$$h \mapsto \frac{1}{h} \operatorname{\Sigma} \operatorname{g}(1+h) = -\sum\_{k=1}^{\infty} \operatorname{g}[k, k+h]$$

is increasing on [0, <sup>1</sup> <sup>2</sup> ), we can easily see that

$$\lim\_{h \to 0^+} \frac{1}{h} \Sigma g(1+h) = -\infty.$$

Similarly, we obtain the same limit when h → 0−.

Thus, we have shown that g is a continuous and decreasing function that is not differentiable at each positive integer. Let us now establish the interesting fact that g is of class *<sup>C</sup>*<sup>∞</sup> on <sup>R</sup><sup>+</sup> \ <sup>N</sup>.

*Claim* The function g is of class *<sup>C</sup>*<sup>∞</sup> on <sup>R</sup><sup>+</sup> \ <sup>N</sup>.

*Proof* Since g lies in *C*<sup>∞</sup> and satisfies the equation g(x +1) = g(x)+g(x), it is enough to show that g is of class *C*<sup>∞</sup> on (0, 1), or equivalently, on every compact interval [a, b], with 0 <a<b< 1.

By the existence Theorem 3.6, the sequence <sup>n</sup> → <sup>f</sup> <sup>0</sup> <sup>n</sup> [g], with

$$(f\_n^0[\mathbf{g}](\mathbf{x}) \, : \, \sum\_{k=1}^{n-1} \mathbf{g}(k) - \sum\_{k=0}^{n-1} \mathbf{g}(\mathbf{x} + k),$$

converges uniformly to g on [a, b]. Let us now show that the sequence n → Df <sup>0</sup> <sup>n</sup> [g], with

$$D f\_n^0[\mathbf{g}](\mathbf{x}) \ = \ -\sum\_{k=0}^{n-1} \overline{\Psi}(\mathbf{x} + k),$$

converges uniformly on [a, b]. In view of identity (F.1), it is then clear that there exists <sup>k</sup><sup>0</sup> <sup>∈</sup> <sup>N</sup><sup>∗</sup> for which

supp(k) ∩ [<sup>a</sup> <sup>+</sup> k, b <sup>+</sup> <sup>k</sup>] ∩ supp(k+1) <sup>=</sup> <sup>∅</sup> for every <sup>k</sup> <sup>≥</sup> <sup>k</sup>0.

Thus, for any integer k ≥ k<sup>0</sup> and any x ∈ [a, b], we have (x + k) = 0. Therefore, we have

$$D f\_n^0[\mathbf{g}](\mathbf{x}) = \ -\sum\_{k=0}^{k\_0 - 1} \overline{\Psi}(\mathbf{x} + k), \qquad \mathbf{x} \in [a, b], \ n \ge k\_0.$$

It follows that the sequence <sup>n</sup> → Df <sup>0</sup> <sup>n</sup> [g]|[a,b] is eventually constant and hence uniformly convergent on [a, b]. Using the classical result on uniform convergence and differentiation, we obtain that g is of class *<sup>C</sup>*<sup>1</sup> on [a, b]. An immediate adaptation of this proof shows that g is of class *C*<sup>∞</sup> on [a, b].

## **Appendix G Analogues of Properties of the Gamma Function**


## **References**


© The Author(s) 2022

J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0


## **Index**

#### **Symbols**


#### **A**

Asymptotic constant, **63**, 63–64, 68, 69, 73, 87, 119, 122, 126, 135, 139, 165 Asymptotic degree, **45**, 50, 163 Asymptotic expansion, 131–139, 171

#### **B**

Barnes's G-function, **5**, 30, 74, 143, 163, 218–228, 256, 272, 308 Bernoulli numbers, 7, **84**, 134 Bernoulli polynomials, **84**, 137, 163, 275 of the second kind, 265, **276** Binet's function, **65**, 66, 82, 118, 134, 307 generalized, **65**, 65–66, 71, 78, 88, 97, 166, 297 Bohr-Mollerup theorem, **1**, 22, 23, 30, 110, 180 analogue, 164 Burnside's formula, 73 analogue, 73–75, 171

#### **C**

Catalan number function, 252–254

#### **D**

Digamma function, 5, **9**, 29, 99, 138, 143, 147, 149, 188–194, 200, 205 Dirichlet representation, 189 Gauss' digamma theorem, 152 analogue, 152–153 Gauss representation, 189 principal indefinite sum, 255–261 Dirichlet's eta function, 206, **236**, 245, 252 Dirichlet test for convergence of improper integrals, 136 Divided difference, 13–15, 17, 307

#### **E**

Elevator method, **103**, 104, 106, 149, 174, 185, 207, 224, 226 Eulerian form, **112**, 111–113, 172, 173 of the gamma function, 112 Euler-Maclaurin summation formula, 84 Euler polynomials, 276 Euler's constant, 5, **10**, 35, 73, 87, 88, 119, 123, 141, 193 generalized, **86**, 85–89, 167, 260, 308 integral form, 88, 167 in terms of the asymptotic constant, 88, 167 Euler's reflection formula, 143 analogue, 143–152 Euler's series representation of γ , 101 analogue, 101, 176 Existence theorem, 3, **25** alternative form, 27 when g(n) is summable, 28

© The Author(s) 2022

J.-L. Marichal, N. Zenaïdi, *A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions*, Developments in Mathematics 70, https://doi.org/10.1007/978-3-030-95088-0

#### **F**

Fontana-Mascheroni's series, 119, 192 analogue, 119–122, 174

#### **G**

Gamma function, 179–188 Gauss error function, 273 Gauss' limit, 4 analogue, 4, 111, 173 Gauss' multiplication formula, 126 analogue, **126**, 125–130, 175 Gautschi's inequality, 154 generalized, 153–156, 170 Glaisher-Kinkelin's constant, **5**, 102, 108 Gregory coefficients, 7, **65**, 134, 264 Gregory's summation formula, 78 general form, 83 geometric interpretation, 80 as a quadrature formula, 79, 81

#### **H**

Harmonic number function, **9**, 141, 188–194 of order 2, 141 of order n, 266 Higher order convexity and concavity, 15–17, 283–287 Hurwitz-Lerch transcendent, 275 Hurwitz zeta function, **6**, 94, 228–237 higher order derivatives, 247–252 principal indefinite sum, 261–264 Hyperfactorial function, 162, **274**

#### **I**

Interpolating polynomial, **14**, 18, 33, 41, 80, 167 Interpolation error, **14**, 18

#### **J**

Jacobi theta function, 163

#### **L**

Legendre's duplication formula, **126**, 149 Liu's formula, 136 generalized, 136, 172 Logarithmic integral function, 264 Log <sup>p</sup>-type function, 4, **52**

#### **M**

Multiple log --type function, 4, **52**, 52, 162 integration, 53–55 Multiple --type function, 4, **52** Multiple gamma function, **51**, 271

#### **N**

Newton interpolation formula, **14**, 18

#### **P**

Polygamma functions, **9**, 30, 103, 108, 195–209 Polylogarithm function, **153**, 212, 246 Principal indefinite sum, **46**, 162 of the digamma function, 255–261 of the generalized Binet function, 88 of the generating function for the Gregory coefficients, 264–269 of the Hurwitz zeta function, 261–264

#### **R**

Raabe's formula, 64, 122 analogue, **122**, 122–125, 166 Regularized incomplete gamma function, 272 Riemann zeta function, 7, 96, 228

#### **S**

Stieltjes constants, 238 generalized Stieltjes constants, 50, 237–246 Stirling's constant, 72 generalized, **72**, 151, 165 Stirling's formula, 59 generalized, **68**, 66–73, 86, 123, 170 a variant, 135 improvements, 71 Stolz-Cesàro theorem, 55

#### **T**

Trigamma function, 29

#### **U**

Uniqueness theorem, 3, **22** alternative forms, 27, 109 when g(n) is summable, 28 Index 323

#### **W**

Wallis's product formula, 140 analogue, **140**, 139–143, 175 Webster's functional equation, 157 generalized, 157–159 Webster's inequality, 72, 301 generalized, 301–310

Weierstrassian form, **114**, **115**, 113–117, 172, 173 of the gamma function, 113 Wendel's inequality, 60 generalized, **61**, 60–63, 168 discrete version, 62