
Title:
Asymptotic methods of enumeration and applications to Markov chain models
Authors:
Drmota, Michael
Source:
Matrix-analytic methods in stochastic models: theory, algorithms, and applications. Stochastic Models 21(2-3):343-375
Publisher Information:
Philadelphia, PA: Taylor & Francis, 2005.
Publication Year:
2005
Physical Description:
print, 15 ref
Original Material:
INIST-CNRS
Subject Terms:
Control theory, operational research; Mathematics; Exact sciences and technology; Sciences and techniques of general use; Probability and statistics; Probability theory and stochastic processes; Stochastic processes; Stochastic analysis; Markov processes; Numerical analysis, scientific computation; Numerical analysis; Numerical methods in probability and statistics; Application; Asymptotic approximation; Markov chain; Asymptotic behavior; Distribution function; Markov model; Stochastic model; Brownian motion; Analytical method; Birth death process; Birth process; Stationary process; Recurrence relation; Power series; Probability theory; 60E05; 60G10; 60J10; 60J65; 60J80; 62E17; 65C40; Stationary distribution; Analytic methods; Birth and death process; Generating functions; Limiting distributions; Primary 60J80; Secondary 05A16
Document Type:
Conference Paper
File Description:
text
Language:
English
Author Affiliations:
Institut für Diskrete Mathematik und Geometrie, Wien, Austria
ISSN:
1532-6349
Rights:
Copyright 2005 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Mathematics
Accession Number:
edscal.16924108
Database:
PASCAL Archive


Asymptotic Methods of Enumeration and Applications to Markov Chain Models. 

The purpose of this article is to present analytic methods for determining the asymptotic behaviour of the coefficients of power series that can be applied to homogeneous discrete quasi birth and death processes. It turns out that there are in principle only three types of asymptotic behaviour. The process either converges to the stationary distribution, or it can be approximated in terms of a reflected Brownian motion, or by a Brownian motion. In terms of Markov chains these cases correspond to positive recurrence, to null recurrence, and to non recurrence. The same results hold for the continuous case, too.

Keywords: Analytic methods; Birth and death process; Generating functions; Limiting distributions; Primary 60J80; Secondary 05A16

1. INTRODUCTION

Let

$y(x) = \sum_{n \ge 0} y_n x^n$

be a power series (or generating function) of non-negative real numbers y<subs>n</subs> ≥ 0 that represents an analytic function for |x| < R, where

$R = \Bigl(\limsup_{n \to \infty} y_n^{1/n}\Bigr)^{-1}$

is the radius of convergence. It is well known that y(x) is singular at x<subs>0</subs> = R and thus the smallest singularity (or equivalently the radius of convergence) provides first information on the asymptotic behaviour of y<subs>n</subs>. In particular, for every ϵ > 0 there exists C<subs>1</subs> > 0 and C<subs>2</subs> > 0 such that

$y_n \le C_1 \Bigl(\tfrac{1}{R} + \epsilon\Bigr)^{n}$

for all n ≥ 0 and

$y_n \ge C_2 \Bigl(\tfrac{1}{R} - \epsilon\Bigr)^{n}$

for infinitely many n ≥ 0. Furthermore, if one knows the behaviour of y(x) for x → R− one can even be a little more precise. Since y<subs>n</subs>r<sups>n</sups> ≤ y(r) for 0 < r < R one also gets

$y_n \le \min_{0 < r < R} \frac{y(r)}{r^{n}}$

Already these simple observations indicate that the knowledge of the analytic behaviour of the complex function y(x), that is, the location of the singularities and the local behaviour of y(x) around them, provides information on the asymptotic behaviour of the coefficients y<subs>n</subs>. Of course, if one knows y(x) exactly then y<subs>n</subs> can be recovered by Cauchy's formula[1]

$y_n = \frac{1}{2\pi i} \oint_{|x| = r} \frac{y(x)}{x^{n+1}}\, dx \qquad (0 < r < R).$

Thus, the main problem in this context is to observe how much information on y(x) is needed to expand this contour integral as precisely as possible.
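
As a concrete illustration of the last point, the following small numerical sketch (not part of the original article; the test function y(x) = 1/(1 − x − x²) and all parameter values are merely assumed) recovers the first coefficients y<subs>n</subs> from samples of y(x) on a circle of radius r < R by discretizing Cauchy's formula.

    import numpy as np

    def coeffs_via_cauchy(y, n_max, r=0.25, m=4096):
        # Trapezoidal discretization of Cauchy's formula on the circle |x| = r:
        # y_n = (1/(2*pi*i)) * integral y(x)/x^(n+1) dx  ~  (1/m) * sum_k y(x_k) * x_k^(-n)
        theta = 2.0 * np.pi * np.arange(m) / m
        x = r * np.exp(1j * theta)
        vals = y(x)
        return np.array([np.mean(vals * x ** (-n)).real for n in range(n_max + 1)])

    # Assumed test function y(x) = 1/(1 - x - x^2); its coefficients are the
    # Fibonacci numbers 1, 1, 2, 3, 5, 8, ... and its radius of convergence is
    # R = (sqrt(5) - 1)/2, so r = 0.25 < R is admissible.
    print(np.round(coeffs_via_cauchy(lambda x: 1.0 / (1.0 - x - x ** 2), 10), 6))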

In what follows (see section 2) we will make this statement more precise. We will collect some known facts on transfer principles (singularity analysis, saddle point method), where the kind of singularity of y(x), resp. the precise growth properties of y(x), is reflected in the asymptotic behaviour of the coefficients y<subs>n</subs>.

These kinds of methods are widely applied to asymptotic enumeration problems (see Bender[1]; Odlyzko[12]). In this context the coefficients y<subs>n</subs> are certain (combinatorial) numbers of interest (e.g. number of combinations, partitions, permutations etc. with some properties), where the recursive structure of the combinatorial problem can be translated into proper explicit or implicit relations for the corresponding generating functions, compare with Vitter[15]. There are also many applications in the combinatorial analysis of data structures and algorithms, see Sedgewick[14].

In this survey we will apply this asymptotic enumeration procedure to some special Markov processes X<subs>n</subs>, namely to homogeneous quasi birth and death processes (see Latouche[9]; Neuts[10][11]). These processes can be viewed as random walks on properly defined "periodic" infinite graphs, and the probabilities of the distribution of the random walk can be interpreted as a weighted counting problem for paths on that graph. With the help of this combinatorial interpretation we derive (more or less) explicit representations for the generating functions of the corresponding probabilities. At this stage we can apply proper transfer principles to obtain asymptotic representations for the probabilities in question.

It turns out that there are three typical cases, the positive recurrent one, the null recurrent one and the non-recurrent one. In the first case X<subs>n</subs> has a discrete limiting distribution, in the second one X<subs>n</subs> can be approximated in terms of a reflected Brownian motion, and in the third case in terms of a Brownian motion. In the continuous case we get (in principle) the same results but we have to use the Laplace transform instead of generating functions. These results are only partly new, e.g. the positive recurrent case is very well studied, see Latouche[9]. However, it seems that this approach is novel for these kinds of problems. The advantage of this approach is that one gets very precise information on the distribution of X<subs>n</subs>. The discrete case (Theorems 4.1 and 4.2) has already been discussed by Drmota[5] in terms of random walks on graphs; the continuous case is new in this context.

The structure of the paper is the following: in Section 2 we collect some results (with proofs) on asymptotic expansions for coefficients of generating functions. In Section 3 we present a combinatorial approach to homogeneous quasi birth and death processes (QBD's) with the help of generating functions (and Laplace transforms). Finally, in Section 4 we apply the methods of Section 2 to these QBD's to obtain the above mentioned limiting relations for X<subs>n</subs>.

2. ANALYTIC METHODS

2.1. Singularity Analysis

We start with the analysis of algebraic singularities. There exist very general transfer principles that are due to Flajolet and Odlyzko[6] and can be considered as analytic Tauberian theorems. The main difference to (real) Tauberian theorems is that one assumes that the analytic function y(x) can be analytically continued to a region Δ that is a little bit larger than the circle of convergence. A combination of Lemmas 1 and 2 is then a very powerful tool to obtain asymptotic expansions of the coefficients of analytic functions with effective error terms in an almost automatic way.

Lemma 1

Let

$y(x) = \sum_{n \ge 0} y_n x^n$

be analytic in a region

$\Delta = \{x \in \mathbb{C} : |x| < x_0 + \eta,\ x \ne x_0,\ |\arg(x - x_0)| > \delta\},$

in which x<subs>0</subs> and η are positive real numbers and 0 < δ < π/2. Furthermore suppose that there exists a real number α such that

$y(x) = \mathcal{O}\bigl(|1 - x/x_0|^{-\alpha}\bigr) \qquad (x \in \Delta),$

then

$y_n = [x^n]\, y(x) = \mathcal{O}\bigl(x_0^{-n}\, n^{\alpha - 1}\bigr).$

Proof

One uses Cauchy's formula

$y_n = \frac{1}{2\pi i} \int_{\gamma} \frac{y(x)}{x^{n+1}}\, dx,$

where γ is a suitably chosen path of integration around the origin. In particular one can use γ = γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> ∪ γ<subs>4</subs>, where

Graph

Graph

Graph

Graph

It is easy to show that the bound |y(z)| ≤ C|1 − z/x<subs>0</subs>|<sups>−α</sups> directly proves that

$\frac{1}{2\pi i} \int_{\gamma_1 \cup \gamma_2 \cup \gamma_3} \frac{y(x)}{x^{n+1}}\, dx = \mathcal{O}\bigl(x_0^{-n}\, n^{\alpha - 1}\bigr),$

whereas the integral over γ<subs>4</subs> is exponentially smaller: 𝒪((x<subs>0</subs> + η)<sups>−n</sups>).

Remark

Note that the same proof method also shows that

Graph

implies

Graph

as n → ∞.

Furthermore, it would have been sufficient to use a varying η = η(n) of the form η = (log n)<sups>2</sups>/n. Then the integral over γ<subs>4</subs> is not exponentially smaller but of order 𝒪(x<subs>0</subs><sups>−n</sups> e<sups>−c(log n)²</sups>), which is better than any polynomial decay.

Lemma 1 is complemented by the following well known property on the asymptotic expansion of binomial coefficients. We include a proof since the same contour integration (that is also very close to the proof of Lemma 1) will be used in section 2.3.

Lemma 2

Suppose that y(x) = (1 − x)<sups>−α</sups>. Then

$y_n = \binom{n + \alpha - 1}{n} = \frac{n^{\alpha - 1}}{\Gamma(\alpha)} \Bigl(1 + \mathcal{O}\bigl(\tfrac{1}{n}\bigr)\Bigr).$

Proof

We again use Cauchy's formula for the contour γ = γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> ∪ γ<subs>4</subs>, where

Graph

Graph

Graph

Graph

As above, the integral over γ<subs>4</subs> is negligible.

For the remaining part we approximate x<sups>−n−1</sups> by e<sups>−t</sups>(1 + 𝒪(t<sups>2</sups>/n)). Hence, the integral over γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> is given by

Graph

where γ' = {t | |t| = 1, ℜt ≤ 0} ∪ {t | 0 < ℜt ≤ log<sups>2</sups> n, ℑt = ± 1}. Now I<subs>1</subs> approximates 1/Γ(α) (by Hankel's integral representation) in the following way:

Graph

Of course, this completes the proof of Lemma 2.

Remark

These two lemmas can be now used to transfer an asymptotic series expansion of y(x) to an asymptotic series expansion for y<subs>n</subs>. Suppose that a function y(x) is analytic in a region of the form Δ and that it has an expansion of the form

Graph

where β < α. Then we have

Graph

Note that the binomial coefficient $\binom{n+\alpha-1}{n}$ has a full asymptotic series expansion of the form

Graph

where

Graph

with

Graph

This and some related formulae can be found in Flajolet[6].
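
A quick numerical check of Lemma 2 (an illustrative sketch only, not taken from the article; the value α = 1/2 is an assumed example): the exact coefficient [x<sups>n</sups>](1 − x)<sups>−α</sups> is a binomial coefficient, and its ratio to the leading term n<sups>α−1</sups>/Γ(α) tends to 1.

    from math import gamma

    def coeff(n, alpha):
        # exact coefficient [x^n](1 - x)^(-alpha) = binom(n + alpha - 1, n)
        #                                         = prod_{j=1..n} (alpha + j - 1)/j
        c = 1.0
        for j in range(1, n + 1):
            c *= (alpha + j - 1.0) / j
        return c

    alpha = 0.5  # assumed sample value
    for n in (10, 100, 1000, 10000):
        exact, leading = coeff(n, alpha), n ** (alpha - 1.0) / gamma(alpha)
        print(n, exact, leading, exact / leading)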

Finally we want to mention that polar singularities are even easier to handle. (The proof is immediate.)

Lemma 3

Suppose that y(x) is a meromorphic function that is analytic at x = 0 and has polar singularities at the points q<subs>1</subs>,..., q<subs>r</subs> in the circle |x| < R, that is, y(x) has a representation of the form

Graph

where T(x) is analytic in the region |x| < R. Then for every ϵ > 0

Graph

2.2. Systems of Functional Equations

Next, we will show that squareroot singularities naturally appear in a system of functional equations. Lemma 4 will also be crucial for the proof of the null recurrent case of Theorem 4.2. Our presentation is very close to Drmota[4].

Let <bold>F</bold>(x, <bold>y</bold>) = (F<subs>1</subs>(x, <bold>y</bold>),..., F<subs>N</subs>(x, <bold>y</bold>))' be a vector[2] of functions F<subs>j</subs>(x, <bold>y</bold>), 1 ≤ j ≤ N, with complex variables x, and <bold>y</bold> = (y<subs>1</subs>,..., y<subs>N</subs>)' which are analytic around 0 and satisfy F<subs>j</subs>(0, <bold>0</bold>) = 0, 1 ≤ j ≤ N. We will be interested in the analytic solution <bold>y</bold> = <bold>y</bold>(x) = (y<subs>1</subs>(x),..., y<subs>N</subs>(x))' of the functional equation

Graph

with <bold>y</bold>(0) = <bold>0</bold>, that is, the (unknown) functions y<subs>j</subs> = y<subs>j</subs>(x), 1 ≤ j ≤ N satisfy the system of functional equations

Graph

It is immediately clear that y<subs>j</subs>(x) can be represented as power series in x with non-negative coefficients [x<sups>n</sups>]y<subs>j</subs>(x) ≥ 0. One only has to consider (2) as a fixed point equation and to observe that the (unique analytic) solution can be obtained by an iteration procedure with initial value <bold>0</bold>. Of course, non-negativity of the coefficients is preserved in every iteration step.
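
This iteration can be carried out directly with truncated power series. The sketch below (an illustration with the assumed example F(x, y) = x(1 + y)², which satisfies F(0, y) = 0; it is not part of the article) iterates y ← F(x, y) starting from y ≡ 0; every step fixes at least one further coefficient, and the resulting coefficients 1, 2, 5, 14, 42, ... are Catalan numbers.

    import numpy as np

    n_trunc = 12  # work with power series modulo x^12

    def mul(a, b):
        # product of two truncated power series (coefficient arrays)
        return np.convolve(a, b)[:n_trunc]

    # solve y = F(x, y) with the assumed example F(x, y) = x*(1 + y)^2
    x = np.zeros(n_trunc); x[1] = 1.0
    y = np.zeros(n_trunc)
    for _ in range(n_trunc):
        one_plus_y = y.copy(); one_plus_y[0] += 1.0
        y = mul(x, mul(one_plus_y, one_plus_y))
    print(y)  # 0, 1, 2, 5, 14, 42, 132, ... (Catalan numbers)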

Finally, we adjoin a dependency graph G<subs><bold>F</bold></subs> = (V, E) to such a system of functional equations <bold>y</bold> = <bold>F</bold>(x, <bold>y</bold>), where V = {y<subs>1</subs>, y<subs>2</subs>,..., y<subs>N</subs>} and (y<subs>i</subs>, y<subs>j</subs>) is contained in E if and only if F<subs>i</subs>(x, <bold>y</bold>) really depends on y<subs>j</subs>.

Lemma 4

Let <bold>F</bold>(x, <bold>y</bold>) = (F<subs>1</subs>(x, <bold>y</bold>),..., F<subs>N</subs>(x, <bold>y</bold>))′ be analytic functions around x = 0 and <bold>y</bold> = (y<subs>1</subs>,..., y<subs>N</subs>)′ = <bold>0</bold> such that all Taylor coefficients are non-negative, that <bold>F</bold>(0, <bold>y</bold>) ≡ <bold>0</bold>, that <bold>F</bold>(x, <bold>0</bold>) ≢ <bold>0</bold>, and that there exists j with <bold>F</bold><subs>yjyj</subs>(x, <bold>y</bold>) ≢ <bold>0</bold>. Furthermore assume that the region of convergence of <bold>F</bold> is large enough such that there exists a non-negative solution x = x<subs>0</subs>, <bold>y</bold> = <bold>y</bold><subs>0</subs> of the system of equations

Graph

Graph

inside it.

If the dependency graph G<subs><bold>F</bold></subs> = (V,E) of the system

Graph

in the unknown functions

Graph

is strongly connected then x<subs>0</subs> is the common radius of convergence of y<subs>1</subs>(x),..., y<subs>N</subs>(x) and we have a representation of the form

Graph

locally around x = x<subs>0</subs>, where g<subs>j</subs>(x) and h<subs>j</subs>(x) are analytic around x = x<subs>0</subs> and satisfy

Graph

with the unique solution <bold>b</bold> = (b<subs>1</subs>,..., b<subs>N</subs>)' > <bold>0</bold> of

Graph

If we further assume that [x<sups>n</sups>] y<subs>j</subs>(x) > 0 for n ≥ n<subs>0</subs> and 1 ≤ j ≤ N then x = x<subs>0</subs> is the only singularity of y<subs>j</subs>(x) on the circle |x| = x<subs>0</subs> and we obtain an asymptotic expansion for [x<sups>n</sups>] y<subs>j</subs>(x) of the form

Graph

Proof

First, we consider the case of N = 1 equation:

Graph

As mentioned above it is clear that the coefficients of the (unique) solution y(x) (that is obtained by iteration) are non-negative.

It is also possible to use the implicit function theorem. Since

Graph

there exists a solution y = y(x) of (7) which is analytic around 0.

However, it is useful to know that all Taylor coefficients of y(x) are non-negative. Namely, it follows that if y(x) is regular at x′ (that is real and positive) then y(x) is regular for all x with |x| ≤ x′.

Let x<subs>0</subs> denote the radius of convergence of y(x). Then x<subs>0</subs> is a singularity of y(x). The mapping

Graph

is strictly increasing for real and non-negative x as long as y(x) is regular. Note that F<subs>y</subs>(0, y(0)) = 0. As long as F<subs>y</subs>(x, y(x)) < 1 it follows from the implicit function theorem that y(x) is regular even in a neighbourhood of x. Hence there exists a finite limit point x<subs>0</subs> such that lim<subs>x→x0−</subs> y(x) = y<subs>0</subs> is finite and satisfies F<subs>y</subs>(x<subs>0</subs>, y<subs>0</subs>) = 1. If y(x) were regular at x = x<subs>0</subs> then

Graph

would imply F<subs>x</subs>(x<subs>0</subs>, y(x<subs>0</subs>)) = 0 which is surely not true. Thus, y(x) is singular at x = x<subs>0</subs> (that is, x<subs>0</subs> is the radius of convergence) and y(x<subs>0</subs>) is finite.

Now, let us consider the equation y − F(x, y) = 0 around x = x<subs>0</subs> and y = y<subs>0</subs>. We have 1 − F<subs>y</subs>(x<subs>0</subs>, y<subs>0</subs>) = 0 but −F<subs>yy</subs>(x<subs>0</subs>, y<subs>0</subs>) ≠ 0. Hence by the Weierstrass preparation theorem (see Kaup[7]) there exist functions H(x, y), p(x), q(x) which are analytic around x = x<subs>0</subs> and y = y<subs>0</subs> and satisfy H(x<subs>0</subs>, y<subs>0</subs>) ≠ 0, p(x<subs>0</subs>) = q(x<subs>0</subs>) = 0 and

Graph

locally around x = x<subs>0</subs> and y = y<subs>0</subs>. Since F<subs>x</subs>(x<subs>0</subs>, y<subs>0</subs>) ≠ 0 we also have q<subs>x</subs>(x<subs>0</subs>) ≠ 0. This means that any analytic function y = y(x) which satisfies y(x) = F(x, y(x)) in a subset of a neighbourhood of x = x<subs>0</subs> with x<subs>0</subs> on its boundary is given by

Graph

Since p(x<subs>0</subs>) = 0 and q<subs>x</subs>(x<subs>0</subs>) ≠ 0 we have

Graph

too. Thus there exists an analytic function K(x) such that K(x<subs>0</subs>) ≠ 0 and

Graph

locally around x = x<subs>0</subs>. This finally leads to a local representation of y = y(x) of the kind

Graph

in which g(x) and h(x) are analytic around x = x<subs>0</subs> and satisfy g(x<subs>0</subs>) = y<subs>0</subs> and h(x<subs>0</subs>) < 0.

In order to calculate h(x<subs>0</subs>) we use Taylor's theorem

Graph

By comparing the coefficients of (x − x<subs>0</subs>) we immediately obtain

Graph

We now want to apply the transfer lemma (Lemma 1). For this purpose we have to show that y(x) can be analytically continued to a region of the form Δ. The representation (8) provides such an analytic continuation for x in a neighborhood of x<subs>0</subs>. Now suppose that |x<subs>1</subs>| = x<subs>0</subs> and |arg(x<subs>1</subs>)| ≥ δ. Then the assumption y<subs>n</subs> > 0 for n ≥ n<subs>0</subs> implies that |y(x<subs>1</subs>)| < y(|x<subs>1</subs>|) = y(x<subs>0</subs>) and consequently

Graph

Thus, F<subs>y</subs>(x<subs>1</subs>, y(x<subs>1</subs>)) ≠ 1 and the implicit function theorem shows that there exists an analytic solution y = y(x) in a neighborhood of x<subs>1</subs>. For |x| < x<subs>0</subs> this solution equals the power series y(x) and for |x| ≥ x<subs>0</subs> it provides an analytic continuation to a region of the form Δ (by compactness it is sufficient to consider finitely many x<subs>1</subs> with |x<subs>1</subs>| = x<subs>0</subs> and |arg(x<subs>1</subs>)| ≥ δ). So finally we can apply Lemma 1 (resp. (1) with α = − 1/2 and β = − 3/2; the analytic part of g(x) provides exponentially smaller contributions.) This completes the proof of (6).

Finally we mention that the general case N > 1 can be reduced to a single equation by an elimination procedure (compare to Drmota[4]). We will indicate this procedure for N = 2 equations: y<subs>1</subs> = F<subs>1</subs>(x, y<subs>1</subs>, y<subs>2</subs>), y<subs>2</subs> = F<subs>2</subs>(x, y<subs>1</subs>, y<subs>2</subs>). First, we tackle the second equation and consider x and y<subs>1</subs> as variables. This gives y<subs>2</subs> = y<subs>2</subs>(x, y<subs>1</subs>) as a power series in x and y<subs>1</subs> with non-negative coefficients. Next we use this "solution" and insert it into the first equation. Thus, we obtain a single equation

Graph

that can be treated as above (all assumptions are satisfied). The singularity x<subs>0</subs> of y<subs>1</subs>(x) is determined by the equation

Graph

or by

Graph

Of course, this procedure only applies if y<subs>2</subs>(x, y<subs>1</subs>) is not singular, respectively F<subs>2, y2</subs> < 1. Since we assume that the dependency graph is strongly connected we surely have F<subs>1, y2</subs> ≠ 0 and F<subs>2, y1</subs> ≠ 0. Thus, we must have F<subs>2, y2</subs> < 1 at the singular point of y<subs>1</subs>(x), in other words y<subs>2</subs>(x, y<subs>1</subs>) is regular there and does not "affect" the singularity of y<subs>1</subs>(x). However, we can now use y<subs>1</subs>(x) to "recover" the "real solution" y<subs>2</subs>(x) = y<subs>2</subs>(x, y<subs>1</subs>(x)) that has the same kind of singular behaviour as y<subs>1</subs>(x).
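
For a single equation the singular point can thus be computed numerically from the two conditions y<subs>0</subs> = F(x<subs>0</subs>, y<subs>0</subs>) and F<subs>y</subs>(x<subs>0</subs>, y<subs>0</subs>) = 1. The following sketch (continuing the assumed example F(x, y) = x(1 + y)² from above; not part of the article) solves this system with scipy and recovers x<subs>0</subs> = 1/4, y<subs>0</subs> = 1.

    from scipy.optimize import fsolve

    def F(x, y):            # assumed example; F_y(x, y) = 2*x*(1 + y)
        return x * (1.0 + y) ** 2

    def characteristic_system(v):
        x, y = v
        return [F(x, y) - y,                 # y_0 = F(x_0, y_0)
                2.0 * x * (1.0 + y) - 1.0]   # F_y(x_0, y_0) = 1

    x0, y0 = fsolve(characteristic_system, [0.3, 0.5])
    print(x0, y0)   # approximately 0.25 and 1.0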

2.3. Small Powers of Functions

The next lemma shows how small powers of functions with a squareroot singularity can be handled, see also Drmota[3].

Lemma 5

Let y(x) = ∑<subs>n≥0</subs>y<subs>n</subs>x<sups>n</sups> be a power series with non-negative coefficients such that there is only one singularity on the circle of convergence |x| = x<subs>0</subs> > 0 and that y(x) can be locally represented as

Graph

where g(x) and h(x) are analytic functions around x<subs>0</subs> with g(x<subs>0</subs>) > 0 and h(x<subs>0</subs>) > 0, and that y(x) can be continued analytically to |x| < x<subs>0</subs> + δ, x ∉ [x<subs>0</subs>, x<subs>0</subs> + δ) (for some δ > 0). Furthermore, let ρ(x) be another power series with non-negative coefficients with radius of convergence greater than x<subs>0</subs>.

Then we have

Graph

uniformly for , where C > 0 is an arbitrary constant.

Proof

For simplicity we will assume that the radius of convergence x<subs>0</subs> = 1. We will use Cauchy's formula

Graph

where the path of integration γ = γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> ∪ γ<subs>4</subs> is the same as in the proof of Lemma 2.

Let . Then with we have for x ∈ γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs>

Graph

Let . Then (with help of the substitution u<sups>2</sups> = t) it is an easy exercise to obtain

Graph

where γ' = {t | |t| = 1, ℜt ≤ 0} ∪ {t | 0 < ℜt ≤ log<sups>2</sups>n, ℑt = ± 1} is the same path of integration as used in Lemma 2.

Furthermore, for every fixed L we have

Graph

All these estimates are also uniform if λ varies in a bounded interval. Finally,

Graph

This completes the proof of Lemma 5.

The next lemma is very similar to Lemma 5. The only difference is that the leading factor ρ(x) also has a special squareroot singularity. In particular, this lemma will be used for null recurrent QBD's (the second parts of Theorems 4.1 and 4.2).

Lemma 6

Let y(x) = ∑<subs>n≥0</subs>y<subs>n</subs>x<sups>n</sups> be as in Lemma 5 and ρ(x) another power series that has the same radius of convergence x<subs>0</subs>. Assume further that it can be continued analytically to the same region as y(x), and that it has a local (singular) representation as

Graph

where (x) and (x) are analytic functions around x<subs>0</subs> with (x<subs>0</subs>) > 0.

Then we have

Graph

uniformly for , where C > 0 is an arbitrary constant.

Proof

The proof is almost the same as that of Lemma 5. The only difference is that one has to use the formula

Graph

We leave the details to the reader.

2.4. Large Powers of Functions

Finally we consider large powers of power series (see Drmota[2][3]).

We start with a general formula.

Lemma 7

Let y(x) = ∑<subs>n≥0</subs>y<subs>n</subs>x<sups>n</sups> be a power series with non-negative coefficients; moreover, assume that there exists n<subs>0</subs> with y<subs>n</subs> > 0 for n ≥ n<subs>0</subs>. Furthermore, let ρ(x) be another power series with non-negative coefficients and suppose that both y(x) and ρ(x) have positive radii of convergence R<subs>1</subs>, R<subs>2</subs>. Set

$\mu(r) = \frac{r\, y'(r)}{y(r)}$

and

Graph

and let h(y) denote the inverse function of μ(r).

Fix a, b with 0 < a < b < min{R<subs>1</subs>, R<subs>2</subs>}, then we have

Graph

uniformly for n, k with μ(a) ≤ n/k ≤ μ(b).

Proof

We again use Cauchy's formula

Graph

where r is defined by

$\mu(r) = \frac{n}{k},$

that is, r = h(n/k). Note that r is exactly the saddle point of the function

Graph

Now a standard saddle point method (see Drmota[3] or Odlyzko [12]) yields

Graph

In detail, if we use the substitution x = re<sups>it</sups> we get (for small with )

Graph

and consequently

Graph

Finally, by the use of the property that y<subs>n</subs> > 0 (for n ≥ n<subs>0</subs>) and the local expansion of y(x) around x = r it follows that there exists a constant c > 0 with

Graph

uniformly for a ≤ r ≤ b and |t| ≤ π. Hence the integral

Graph

is negligible.

We can also obtain a slightly modified version of Lemma 7 (compare with Drmota[3]) if n and k are almost proportional. This can be either derived from Lemma 7 or proved in a completely similar way. We leave the details to the reader.

Lemma 8

Let y(x) and ρ(x) be as in Lemma 7. Then for every 0 < r < min{R<subs>1</subs>, R<subs>2</subs>} we have

Graph

uniformly for n, k with , where C > 0 is an arbitrary constant.

Remark

It should be remarked that it is also possible to derive a complete asymptotic series expansion, see Drmota[3].
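
The saddle point recipe behind Lemmas 7 and 8 can also be tried out numerically. The sketch below is only an illustration: the test case y(x) = 1/(1 − x), ρ(x) = 1 and the generic second-order saddle point approximation are assumptions and do not reproduce the precise constants of Lemma 7. It determines the saddle point r from r y′(r)/y(r) = n/k and compares the resulting estimate with the exact coefficient [x<sups>n</sups>] y(x)<sups>k</sups> = binom(n + k − 1, n).

    import numpy as np
    from math import comb, sqrt, pi
    from scipy.optimize import brentq

    # assumed test case: y(x) = 1/(1 - x), rho(x) = 1
    def log_f(x, n, k):
        # logarithm of the integrand y(x)^k * x^(-n)
        return -k * np.log(1.0 - x) - n * np.log(x)

    def saddle_point_estimate(n, k):
        # saddle point r in (0, 1): k/(1 - r) - n/r = 0
        r = brentq(lambda x: k / (1.0 - x) - n / x, 1e-9, 1.0 - 1e-9)
        # curvature of t -> Re log_f(r*exp(i*t)) at t = 0 by a finite difference
        h = 1e-4
        g = lambda t: log_f(r * np.exp(1j * t), n, k).real
        V = -(g(h) - 2.0 * g(0.0) + g(-h)) / h ** 2
        return np.exp(log_f(r, n, k)) / sqrt(2.0 * pi * V)

    n, k = 80, 40
    print(saddle_point_estimate(n, k), comb(n + k - 1, n))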

3. QUASI BIRTH AND DEATH PROCESSES

3.1. Random Walks on "Periodic Graphs"

We now consider a discrete quasi birth and death process (QBD), that is, a discrete Markov process X<subs>n</subs> on the non-negative integers with transition matrix of the form

$$\mathbf{P} = \begin{pmatrix} \mathbf{B} & \mathbf{A}_0 & & & \\ \mathbf{A}_2 & \mathbf{A}_1 & \mathbf{A}_0 & & \\ & \mathbf{A}_2 & \mathbf{A}_1 & \mathbf{A}_0 & \\ & & \ddots & \ddots & \ddots \end{pmatrix} \qquad (10)$$

where <bold>A</bold><subs>0</subs>, <bold>A</bold><subs>1</subs>, <bold>A</bold><subs>2</subs>, and <bold>B</bold> are square matrices of order m, compare with Latouche[9] and Neuts[10][11].

This kind of process can also be interpreted as a random walk on infinite graphs G of the following type (see Drmota[5]). Let K be the complete di-graph of size m (with loops) and K<subs>0</subs>, K<subs>1</subs>, K<subs>2</subs>,... be copies of K. The set of nodes, V(G), of G is now given by V(K<subs>0</subs>) ∪ V(K<subs>1</subs>) ∪ ⋅⋅⋅. The directed edges of G, E(G), consist first of the edges E(K<subs>0</subs>) ∪ E(K<subs>1</subs>) ∪ ⋅⋅⋅ and second of all edges between K<subs>0</subs> and K<subs>1</subs>, between K<subs>1</subs> and K<subs>2</subs> etc. We denote the matrix of transition probabilities inside K<subs>0</subs> as <bold>B</bold>, inside K<subs>j</subs> (j = 1, 2,...) as <bold>A</bold><subs>1</subs>, between K<subs>j</subs> and K<subs>j+1</subs> as <bold>A</bold><subs>0</subs>, and between K<subs>j</subs> and K<subs>j−1</subs> as <bold>A</bold><subs>2</subs> (see Figure 1).

Graph: FIGURE 1 One-sided "periodic" graph.

In other words, if we set <bold>P</bold> = (p<subs>w, v</subs>)<subs>w, v∈V(G)</subs>, then (for all k ≥ 0)

Graph

It is clear that the powers <bold>P</bold><sups>n</sups> = (p<sups>(n)</sups><subs>w, v</subs>)<subs>w, v∈V(G)</subs> of the infinite matrix <bold>P</bold> contain the probabilities p<sups>(n)</sups><subs>w, v</subs> = <bold>Pr</bold>{X<subs>n</subs> = v|X<subs>0</subs> = w}.
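
To make the block structure of (10) concrete, the following sketch (with assumed numerical values; not part of the article) assembles a finite truncation of <bold>P</bold> and computes Pr{X<subs>n</subs> = v | X<subs>0</subs> = w} by matrix powers; since the walk moves at most one block of levels per step, a truncation level larger than n does not affect the result.

    import numpy as np

    def truncated_qbd_matrix(B, A0, A1, A2, levels):
        # upper-left (levels*m) x (levels*m) corner of the block-tridiagonal
        # transition matrix P of a discrete QBD
        m = B.shape[0]
        P = np.zeros((levels * m, levels * m))
        P[:m, :m], P[:m, m:2 * m] = B, A0
        for k in range(1, levels):
            P[k * m:(k + 1) * m, (k - 1) * m:k * m] = A2
            P[k * m:(k + 1) * m, k * m:(k + 1) * m] = A1
            if k + 1 < levels:
                P[k * m:(k + 1) * m, (k + 1) * m:(k + 2) * m] = A0
        return P

    # assumed 1 x 1 blocks (the random walk of Theorem 4.1) with a0 < a2
    a0, a1, a2 = 0.2, 0.5, 0.3
    B, A0, A1, A2 = (np.array([[1.0 - a0]]), np.array([[a0]]),
                     np.array([[a1]]), np.array([[a2]]))
    n = 500
    P = truncated_qbd_matrix(B, A0, A1, A2, levels=n + 2)
    print(np.linalg.matrix_power(P, n)[0, :6])   # close to (1 - a0/a2)*(a0/a2)**l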

We will now present a combinatorial interpretation of this relation. Let  = (e<subs>1</subs>(), e<subs>2</subs>(),..., e<subs>n</subs>()) denote a path of length n on G (with edges e<subs>j</subs>() = (x<subs>j−1</subs>(), x<subs>j</subs>()) ∈ E(G)). Then we can define a weight (or probability) W() by

Graph

and, of course, we get

Graph

where the sum is taken over all paths of length n with x<subs>0</subs>() = w and x<subs>n</subs>() = v. This means that the calculation of p<sups>(n)</sups><subs>w, v</subs> can be viewed as a combinatorial enumeration problem of weighted paths of length n. In this context it is natural to introduce the power series (or generating functions)

Graph

The (infinite) matrix <bold>M</bold>(x) = (M<subs>w, v</subs>(x))<subs>w, v∈V(G)</subs> can then be viewed as

$\mathbf{M}(x) = \sum_{n \ge 0} \mathbf{P}^n x^n = (\mathbf{I} - x\mathbf{P})^{-1}.$

In what follows we will use the combinatorial interpretation to derive explicit formulae for (the first m rows of) <bold>M</bold>(x).

For this purpose we also define proper submatrices of order m

$\mathbf{M}_{k,\ell}(x) = \bigl(M_{w, v}(x)\bigr)_{w \in V(K_k),\, v \in V(K_\ell)}$

that correspond to the transition from K<subs>k</subs> to K<subs>ℓ</subs>.

Since we are only interested in processes X<subs>n</subs> starting in K<subs>0</subs> we will only state a result for <bold>M</bold><subs>0, ℓ</subs>(x). However, it is an easy exercise to derive formulae for all <bold>M</bold><subs>k, ℓ</subs>(x), compare with the remark after the proof of Lemma 9.

Lemma 9

Let <bold>N</bold>(x) denote the (analytic) solution with <bold>N</bold>(0) = <bold>I</bold> of the matrix equation

$\mathbf{N}(x) = \bigl(\mathbf{I} - x\mathbf{A}_1 - x^2\, \mathbf{A}_0\, \mathbf{N}(x)\, \mathbf{A}_2\bigr)^{-1}. \qquad (11)$

Then

$\mathbf{M}_{0,0}(x) = \bigl(\mathbf{I} - x\mathbf{B} - x^2\, \mathbf{A}_0\, \mathbf{N}(x)\, \mathbf{A}_2\bigr)^{-1}, \qquad \mathbf{M}_{0,\ell}(x) = \mathbf{M}_{0,0}(x)\, \bigl(x\, \mathbf{A}_0\, \mathbf{N}(x)\bigr)^{\ell} \quad (\ell \ge 0). \qquad (12)$

Proof

In order to make the situation simpler we will first consider the case m = 1, that is, the corresponding graph is just the line of non-negative integers (and the transition matrix <bold>P</bold> is a tri-diagonal matrix). We will heavily make use of the combinatorial description with the help of weighted paths.

We start with a property of one-sided paths on the integers. Let Y<subs>n</subs> denote the random walk on the integers (see Figure 2) with Y<subs>0</subs> = 0. For this random walk we consider the generating function of one-sided return probabilities

$N(x) = \sum_{n \ge 0} \Pr\{Y_n = 0,\ Y_j \ge 0 \text{ for } 0 \le j \le n\}\, x^n.$

Of course, the probabilities can be interpreted as sums of weights. First we show that N(x) satisfies the functional equation

$N(x) = \frac{1}{1 - a_1 x - a_0 a_2 x^2 N(x)}. \qquad (14)$

This equation is immediately clear by writing it in a slightly different way:

$N(x) = 1 + a_1 x\, N(x) + a_0 x\, N(x)\, a_2 x\, N(x)$

and by looking at the following recursive description. First, 1 is related to the case n = 0. Next, if the first step of the path is a loop (with probability a<subs>1</subs>) then the remaining part is just a non-negative path from 0 to 0; the corresponding contribution is a<subs>1</subs>x N(x). If the first step goes to the right (with probability a<subs>0</subs>) then we decompose the path into four parts: the first step from 0 to the right, a part from 1 to 1 that is followed by the first step back from 1 to 0, this step back as the third part, and finally the last part, which is again a non-negative path from 0 to 0. Hence, in terms of generating functions this case contributes a<subs>0</subs>x N(x) a<subs>2</subs>x N(x). This completes the proof of (14).

Graph: FIGURE 2 Random walk on the integers.

With help of the same reasoning one gets the relation

Graph

that proves the proposed representation for M<subs>0, 0</subs>(x).

Next we have M<subs>0, 1</subs>(x) = M<subs>0, 0</subs>(x)a<subs>0</subs>xN(x). Here we have to divide all paths from 0 to 1 into three parts. The first part is just the path from 0 to 0 that is followed by the last step from 0 to 1. This step is the second part, and the third part is a non-negative path from 1 to 1. In a similar way we also obtain the recurrence M<subs>0, ℓ+1</subs>(x) = M<subs>0, ℓ</subs>(x)a<subs>0</subs>xN(x).

The proof in the general case (m > 1) is exactly the same and already appears (for a special case) in Prodinger[13], compare also with Kuich[8]. We first consider a two-sided infinite periodic graph (containing components K<subs>ℓ</subs>, ℓ ∈ ℤ, and random walks Y<subs>n</subs> that start in K<subs>0</subs> and do not go to the negative part). This leads to a matrix generating function <bold>N</bold>(x) that satisfies (11). Note that (11) has a unique analytic solution with <bold>N</bold>(0) = <bold>I</bold>. With the help of <bold>N</bold>(x) we directly get representations for <bold>M</bold><subs>0, ℓ</subs>(x):

Graph

and

Graph

Of course, this proves (12).
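
For the scalar case m = 1 the functional equation (14) derived above can be checked numerically: the following sketch (with assumed values of a<subs>0</subs>, a<subs>1</subs>, a<subs>2</subs>; not part of the article) computes the coefficients of N(x) from N = 1 + a<subs>1</subs>xN + a<subs>0</subs>a<subs>2</subs>x²N² by iteration and compares them with the one-sided return probabilities obtained by dynamic programming.

    import numpy as np

    # assumed parameter values, for illustration only
    a0, a1, a2 = 0.3, 0.4, 0.3
    n_max = 15

    # power-series solution of N = 1 + a1*x*N + a0*a2*x^2*N^2 by iteration
    N = np.zeros(n_max + 1)
    for _ in range(n_max + 1):
        xN = np.concatenate(([0.0], N[:-1]))                                 # x*N(x)
        x2N2 = np.concatenate(([0.0, 0.0], np.convolve(N, N)[:n_max - 1]))   # x^2*N(x)^2
        N = a1 * xN + a0 * a2 * x2N2
        N[0] += 1.0

    # one-sided return probabilities Pr{Y_n = 0, Y_j >= 0 for 0 <= j <= n}
    p = np.zeros(n_max + 2); p[0] = 1.0
    returns = [p[0]]
    for _ in range(n_max):
        q = np.zeros_like(p)
        q[0] = a1 * p[0] + a2 * p[1]
        q[1:-1] = a0 * p[:-2] + a1 * p[1:-1] + a2 * p[2:]
        returns.append(q[0]); p = q

    print(np.allclose(N, returns))   # True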

Remark

As already indicated it is also possible to get explicit formulae for all matrices <bold>M</bold><subs>k, ℓ</subs>(x). We just state corresponding recurrences from which these representations can be derived:

Graph

or

Graph

further

Graph

and

Graph

Remark

Lemma 9 is contained in Drmota[5] in a slightly more general form.

Note further that the generating function N(x) is closely related to the generating functions U(x), G(x), and R(x) presented in Latouche[9] (pp. 96). We have N(x) = 1/(1 − U(x)), G(x) = N(x) a<subs>2</subs>x, and R(x) = a<subs>0</subs>x N(x).

Moreover, for x = 1 the matrix <bold>N</bold> = <bold>N</bold>(1) is also related to the matrices <bold>U</bold>, <bold>G</bold>, and <bold>r</bold> of Latouche[9] (pp. 137), in particular, <bold>N</bold> = (<bold>I</bold> − <bold>U</bold>)<sups>−1</sups>, <bold>G</bold> = <bold>N</bold><bold>D</bold>, and <bold>r</bold> = <bold>C</bold><bold>N</bold>.

3.2. Continuous Quasi Birth and Death Processes

There exists a continuous analogue of the discrete QBD. Here we have a continuous time Markov process X(t) on G with infinitesimal generator <bold>Q</bold> of the same form

$$\mathbf{Q} = \begin{pmatrix} \mathbf{B} & \mathbf{A}_0 & & & \\ \mathbf{A}_2 & \mathbf{A}_1 & \mathbf{A}_0 & & \\ & \mathbf{A}_2 & \mathbf{A}_1 & \mathbf{A}_0 & \\ & & \ddots & \ddots & \ddots \end{pmatrix} \qquad (15)$$

where <bold>A</bold><subs>0</subs>, <bold>A</bold><subs>1</subs>, <bold>A</bold><subs>2</subs>, and <bold>B</bold> are square matrices of order m such that <bold>A</bold><subs>0</subs> and <bold>A</bold><subs>2</subs> are non-negative and the matrices <bold>B</bold> and <bold>A</bold><subs>1</subs> have non-negative off-diagonal elements whereas the diagonal elements are strictly negative so that the row sums are all equal to zero, that is, (<bold>B</bold> + <bold>A</bold><subs>0</subs>)<bold>1</bold> = <bold>0</bold> and (<bold>A</bold><subs>0</subs> + <bold>A</bold><subs>1</subs> + <bold>A</bold><subs>2</subs>)<bold>1</bold> = <bold>0</bold>.

By definition it is clear that the matrix e<sups><bold>Qt</bold></sups> = ()<subs>w, v∈V(G)</subs> contains the probabilities

Graph
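
Numerically, these transition probabilities can be approximated by exponentiating a sufficiently large truncation of <bold>Q</bold>. The sketch below (assumed 1 × 1 blocks, i.e. the process of Theorem 4.3 with q<subs>0</subs> < q<subs>2</subs>; not part of the article) uses scipy's matrix exponential; the truncation error is negligible as long as the truncation level is far beyond the levels the process typically reaches by time t.

    import numpy as np
    from scipy.linalg import expm

    # assumed 1 x 1 blocks (Theorem 4.3) with q0 < q2 (positive recurrent case):
    # truncate the generator Q and read Pr{X(t) = l | X(0) = 0} from the first
    # row of exp(Q*t); the defective last row is harmless at this truncation level
    q0, q2 = 0.4, 0.6
    levels, t = 150, 200.0
    Q = (np.diag([q0] * (levels - 1), 1)                    # up-rates (A0)
         + np.diag([q2] * (levels - 1), -1)                 # down-rates (A2)
         + np.diag([-q0] + [-q0 - q2] * (levels - 1)))      # diagonal (B and A1)
    dist = expm(Q * t)[0]
    print(dist[:5])   # close to (1 - q0/q2)*(q0/q2)**l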

Similarly to the discrete case we now consider the Laplace transforms

Graph

and the infinite matrix . By the above property we can rewrite as

Graph

This means that has almost the same representation as <bold>M</bold>(x) in the discrete case. Consequently we also get (almost) the same explicit representation for

Graph

Lemma 10

Let <bold>N</bold>(x) denote the (analytic) solution with <bold>N</bold>(0) = <bold>I</bold> of the matrix equation

Graph

Then

Graph

Remark

Note that can be also characterized by

Graph

with and <bold>M</bold><subs>0, ℓ</subs>(s) is then given by

Graph

Proof

We can proceed in a formal way. First we define weights W() for paths on G with help of the infinite generator matrix <bold>Q</bold> and obtain representations for generating functions that "count" weighted paths or, equivalently, explicit representations of the first m rows of the matrix <bold>M</bold>(x) = (<bold>I</bold> − x<bold>Q</bold>)<sups>−1</sups>. However, the Laplace transform of interest is given by

Graph

Consequently, . Thus, Lemma 10 follows (formally) from Lemma 9.

4. THE ASYMPTOTIC BEHAVIOUR OF HOMOGENEOUS QBD's

In this section we apply the asymptotic enumeration methods of section 2 to homogeneous QBD's. The discrete case has already been treated by the author in Drmota[5]. Furthermore, some parts of the results are definitely not new, e.g. the positive recurrent case is very well studied, see Latouche[9]. Nevertheless, it seems that this kind of approach is novel.

4.1. One-Dimensional Discrete QBD's

We start with the simplest example, namely with one-dimensional QBD's that can be interpreted as random walks on the non-negative integers.

Theorem 4.1

Suppose that a<subs>0</subs>, a<subs>1</subs>, a<subs>2</subs> and b are positive numbers with a<subs>0</subs> + a<subs>1</subs> + a<subs>2</subs> = b + a<subs>0</subs> = 1; and let X<subs>n</subs> be the discrete QBD on the non-negative integers with transition matrix

$$\mathbf{P} = \begin{pmatrix} b & a_0 & & & \\ a_2 & a_1 & a_0 & & \\ & a_2 & a_1 & a_0 & \\ & & \ddots & \ddots & \ddots \end{pmatrix}$$

1. If a<subs>0</subs> < a<subs>2</subs> then we have

$\lim_{n \to \infty} \Pr\{X_n = \ell\} = \Bigl(1 - \frac{a_0}{a_2}\Bigr) \Bigl(\frac{a_0}{a_2}\Bigr)^{\ell} \qquad (\ell \ge 0),$

that is, X<subs>n</subs> is positive recurrent. The distribution of X<subs>n</subs> converges to the stationary distribution.

2. If a<subs>0</subs> = a<subs>2</subs> then X<subs>n</subs> is null recurrent and converges weakly to the absolute normal distribution. In particular, we have, as n → ∞, and uniformly for all (where C > 0 is an arbitrary constant)

Graph

3. If a<subs>0</subs> > a<subs>2</subs> then X<subs>n</subs> is non recurrent and

Graph

converges weakly to the standard normal distribution. We also have, as n → ∞ and uniformly for all ℓ ≥ 0 with (where C > 0 is an arbitrary constant)

Graph

Remark

With a little bit more effort it can be shown that in the case a<subs>0</subs> = a<subs>2</subs> the normalized discrete process

Graph

converges weakly to a reflected Brownian motion as n → ∞; and for a<subs>0</subs> > a<subs>2</subs> the process

Graph

converges weakly to the standard Brownian motion.

We split the proof into several lemmas.

Lemma 11

Let N(x) be given by (14). Then we explicitly have

$N(x) = \frac{1 - a_1 x - \sqrt{(1 - a_1 x)^2 - 4 a_0 a_2 x^2}}{2 a_0 a_2 x^2}.$

The radius of convergence x<subs>0</subs> is given by

$x_0 = \frac{1}{a_1 + 2\sqrt{a_0 a_2}}\ (\ge 1);$

if a<subs>1</subs> > 0 then x<subs>0</subs> is also the only singularity on the circle of convergence |x| = x<subs>0</subs>. Furthermore, N(x) has a local expansion of the form

Graph

around its singularity x = x<subs>0</subs>.

Proof

The proof is immediate.
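
Since the proof is only stated to be immediate, the short computation behind it is sketched here (based on the scalar functional equation (14), N = 1 + a<subs>1</subs>xN + a<subs>0</subs>a<subs>2</subs>x²N², used in the proof of Lemma 9). Equation (14) is a quadratic equation for N = N(x),

$a_0 a_2 x^2 N^2 - (1 - a_1 x) N + 1 = 0,$

and the branch with N(0) = 1 gives the stated explicit formula. The discriminant factors as

$(1 - a_1 x)^2 - 4 a_0 a_2 x^2 = \bigl(1 - (a_1 + 2\sqrt{a_0 a_2})\, x\bigr) \bigl(1 - (a_1 - 2\sqrt{a_0 a_2})\, x\bigr),$

so its smallest positive zero, and hence the radius of convergence, is x<subs>0</subs> = 1/(a<subs>1</subs> + 2√(a<subs>0</subs>a<subs>2</subs>)). Since a<subs>1</subs> + 2√(a<subs>0</subs>a<subs>2</subs>) ≤ a<subs>1</subs> + a<subs>0</subs> + a<subs>2</subs> = 1, with equality exactly when a<subs>0</subs> = a<subs>2</subs>, we always have x<subs>0</subs> ≥ 1. Expanding the square root around x = x<subs>0</subs> yields a local representation of the form N(x) = g(x) − h(x)√(1 − x/x<subs>0</subs>) with g(x), h(x) analytic at x<subs>0</subs> and h(x<subs>0</subs>) > 0; this is the local expansion referred to as (18) in the proof of Lemma 13.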

Lemma 12

Suppose that a<subs>0</subs> < a<subs>2</subs>. Then the radius of convergence of M<subs>0, ℓ</subs>(x) (ℓ ≥ 0) is x<subs>1</subs> = 1, that is also a polar singularity of order 1. Furthermore, we have

Graph

Proof

First note that (for a<subs>0</subs> < a<subs>2</subs>) we have N(1) = 1/a<subs>2</subs> and N′(1) = (1 − a<subs>2</subs> + a<subs>0</subs>)/(a<subs>2</subs>(a<subs>2</subs> − a<subs>0</subs>)). Thus,

Graph

and consequently

Graph

for , where T(x) is an analytic function that has radius of convergence larger than 1. (Note that ) Of course, this directly implies (19), compare also with Lemma 3.

The most interesting case is the case a<subs>0</subs> = a<subs>2</subs>.

Lemma 13

Suppose that a<subs>0</subs> = a<subs>2</subs>. Then the radius of convergence of M<subs>0, ℓ</subs>(x) (ℓ ≥ 0) is x<subs>1</subs> = 1, which is also an algebraic singularity. Here we get, as n → ∞, and uniformly for all (where C > 0 is an arbitrary constant)

Graph

Proof

The essential difference between the present case and that of Lemma 12 is that N(x) is not regular at x = 1. We have to use the singular expansion (18) of Lemma 11 and obtain (around x = 1)

Graph

Furthermore

Graph

Thus, we can directly apply Lemma 6 and obtain (20).

The analysis of the final case a<subs>0</subs> > a<subs>2</subs> is a little bit different from the previous ones. In the first two cases the singular behaviour of M<subs>0, ℓ</subs>(x) around the point x<subs>0</subs> = 1 has governed the asymptotic behaviour of the coefficients. In the third case we will again work around the critical point x<subs>0</subs> = 1 but now with the help of a saddle point method. The radius of convergence is larger than 1.

Lemma 14

Suppose that a<subs>0</subs> > a<subs>2</subs>. Then X<subs>n</subs> satisfies a central limit theorem with mean value <bold>E</bold> X<subs>n</subs> ∼ (a<subs>0</subs> − a<subs>2</subs>)n and <bold>Var</bold> X<subs>n</subs> ∼ (a<subs>0</subs> + a<subs>2</subs> − (a<subs>0</subs> − a<subs>2</subs>)<sups>2</sups>)n. In particular we have the following local limit theorem:

Graph

as n → ∞ and uniformly for all ℓ ≥ 0 with (where C > 0 is an arbitrary constant).

Proof

Note first that M<subs>0, ℓ</subs>(x) = M<subs>0, 0</subs>(x)(a<subs>0</subs>xN(x))<sups>ℓ</sups> and that x<subs>1</subs> = 1 is a regular point of M<subs>0, 0</subs>(x) and N(x). Thus, we can apply Lemma 8 for r = 1 with ρ(x) = M<subs>0, 0</subs>(x) and y(x) = a<subs>0</subs>xN(x). Since N(1) = 1/a<subs>0</subs> and N′(1) = (1 − a<subs>0</subs> + a<subs>2</subs>)/(a<subs>0</subs>(a<subs>0</subs> − a<subs>2</subs>)) we have μ(1) = 1/(a<subs>0</subs> − a<subs>2</subs>). Similarly we get σ<sups>2</sups>(1) = (a<subs>0</subs> + a<subs>2</subs> − (a<subs>0</subs> − a<subs>2</subs>)<sups>2</sups>)/(a<subs>0</subs> − a<subs>2</subs>). This completes the proof of Lemma 14.
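
As a sanity check of Lemma 14, the stated mean and variance can be compared with a Monte Carlo simulation (a sketch with assumed parameter values; not part of the article). Since b = 1 − a<subs>0</subs> = a<subs>1</subs> + a<subs>2</subs>, the chain of Theorem 4.1 coincides with the Lindley-type recursion X<subs>k+1</subs> = max(X<subs>k</subs> + ξ<subs>k+1</subs>, 0) for i.i.d. steps ξ ∈ {+1, 0, −1} with probabilities (a<subs>0</subs>, a<subs>1</subs>, a<subs>2</subs>), so that X<subs>n</subs> = S<subs>n</subs> − min<subs>0≤k≤n</subs> S<subs>k</subs> can be simulated without an explicit loop.

    import numpy as np

    # Monte Carlo check of Lemma 14 (a0 > a2): E X_n ~ (a0 - a2)*n and
    # Var X_n ~ (a0 + a2 - (a0 - a2)^2)*n.  Parameter values are assumed.
    rng = np.random.default_rng(1)
    a0, a1, a2 = 0.5, 0.2, 0.3
    n, trials = 1000, 5000

    xi = rng.choice([1, 0, -1], size=(trials, n), p=[a0, a1, a2])
    S = np.concatenate([np.zeros((trials, 1)), np.cumsum(xi, axis=1)], axis=1)
    X_n = S[:, -1] - S.min(axis=1)       # reflected walk via the Lindley identity

    print(X_n.mean(), (a0 - a2) * n)                      # both ~ 200
    print(X_n.var(), (a0 + a2 - (a0 - a2) ** 2) * n)      # both ~ 760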

4.2. General Homogeneous Discrete QBD's

The next step is to generalize Theorem 4.1 to m-dimensional discrete QBD's. In the formulation of the theorem we will also use the interpretation of X<subs>n</subs> as a random walk on the "one-sided periodic graph" G. Recall that the vertices V(G) of G consist of infinitely many copies V(K<subs>0</subs>), V(K<subs>1</subs>),... of the vertices of the complete di-graph K of m nodes. If v ∈ V(K<subs>ℓ</subs>) then v′ will denote the corresponding node in K.

Theorem 4.2

Let <bold>A</bold><subs>0</subs>, <bold>A</bold><subs>1</subs>, <bold>A</bold><subs>2</subs> and <bold>B</bold> be square matrices of order m with non-negative elements such that (<bold>B</bold> + <bold>A</bold><subs>0</subs>)<bold>1</bold> = <bold>1</bold> and (<bold>A</bold><subs>0</subs> + <bold>A</bold><subs>1</subs> + <bold>A</bold><subs>2</subs>)<bold>1</bold> = <bold>1</bold>, that is, the infinite matrix <bold>P</bold> from (10) is a transition matrix of a Markov process. Furthermore suppose that the matrix <bold>B</bold> is primitive irreducible, that no row of <bold>A</bold><subs>0</subs> is zero, and that <bold>A</bold><subs>2</subs> is non-zero.

Let X<subs>n</subs> denote a corresponding QBD with X<subs>0</subs> ∈ K<subs>0</subs> (resp. a random walk on G with X<subs>0</subs> ∈ K<subs>0</subs>) and let x<subs>0</subs> denote the radius of convergence of the entries of <bold>N</bold>(x) and x<subs>1</subs> that of <bold>M</bold><subs>0, 0</subs>(x).

1. If x<subs>0</subs> > 1 and x<subs>1</subs> = 1 then X<subs>n</subs> is positive recurrent and for all v ∈ V(G) = V(K<subs>0</subs>) ∪ V(K<subs>1</subs>) ∪ ⋅⋅⋅ we have

Graph

where (p<subs>v</subs>)<subs>v∈V(G)</subs> is the (unique) stationary distribution on G. Furthermore, the matrix <bold>r</bold> = <bold>A</bold><subs>0</subs> <bold>N</bold>(1) (where all eigenvalues have moduli < 1) satisfies

Graph

in which <bold>P</bold><subs>ℓ</subs> = (p<subs>v</subs>)<subs>v∈Kℓ</subs>.

2. If x<subs>0</subs> = x<subs>1</subs> = 1 then X<subs>n</subs> is null recurrent and there exist ρ<subs>v′</subs> > 0 (v′ ∈ V(K)) and η > 0 such that, as n → ∞, and (uniformly for all with an arbitrary constant C > 0)

Graph

3. If x<subs>1</subs> > 1 then X<subs>n</subs> is non recurrent and there exist τ<subs>v′</subs> > 0 (v′ ∈ V(K)), μ > 0 and σ > 0 such that, as n → ∞ and uniformly for all ℓ ≥ 0,

Graph

Remark

Theorem 4.2 is, of course, a direct generalization of Theorem 4.1. As above the second and the third case can be generalized to functional limit theorems in the following sense. For v ∈ V(K<subs>ℓ</subs>) set v̂ := ℓ. Then X̂<subs>n</subs> is a process on the non-negative integers and after a proper scaling X̂<subs>n</subs> can be approximated by a reflected Brownian motion or by a Brownian motion. Note further that the matrix <bold>r</bold> = <bold>A</bold><subs>0</subs> <bold>N</bold>(1) in the first part of Theorem 4.2 is the classical <bold>r</bold>-matrix for positive recurrent quasi-birth-and-death processes; it also satisfies the equation <bold>r</bold> = <bold>A</bold><subs>0</subs> + <bold>r</bold><bold>A</bold><subs>1</subs> + <bold>r</bold><sups>2</sups><bold>A</bold><subs>2</subs>, compare with Latouche[9] (Theorem 6.2.1).

We also want to mention that the assumption that <bold>B</bold> is primitive irreducible is only used in the second part of the theorem.

Before we start with the proof of Theorem 4.2 we present a proper generalization of Lemma 11.

Lemma 15

Suppose that <bold>B</bold> is a primitive irreducible matrix and let <bold>N</bold>(x) = (N<subs>v, v′</subs>(x))<subs>v, v′∈V(K)</subs> denote the solution of (11). Then all functions N<subs>v, v′</subs>(x) have a common radius of convergence x<subs>0</subs> ≥ 1. Furthermore, x<subs>0</subs> is the only singularity on the circle of convergence |x| = x<subs>0</subs> and there is a local expansion of the form

Graph

around its singularity x = x<subs>0</subs>, where Ñ<subs>1</subs> and Ñ<subs>2</subs> are matrices with positive elements.

Proof

The relation (11) is a system of m<sups>2</sups> algebraic equations for the functions N<subs>v, v′</subs>(x) that can be written in the form <bold>y</bold>(x) = <bold>F</bold>(x, <bold>y</bold>(x)), where <bold>y</bold>(x) is just the vector of functions N<subs>v, v′</subs>(x) and <bold>F</bold>(x, <bold>y</bold>) is a proper (non-linear) polynomial vector function with non-negative coefficients. By assumption <bold>B</bold> is irreducible (and non-negative). Thus, the so-called dependency graph of this system is strongly connected. Consequently, by Lemma 4 all (algebraic) functions N<subs>v, v′</subs>(x) have the same finite radius of convergence x<subs>0</subs> and a squareroot singularity at x = x<subs>0</subs> of the form (24), where all entries of Ñ<subs>1</subs> and Ñ<subs>2</subs> are positive.

The assumption that <bold>B</bold> is primitive implies that all (sufficiently large) coefficients of the power series N<subs>v, v′</subs>(x) are positive. This property ensures that x = x<subs>0</subs> is the only singularity on the circle of convergence |x| = x<subs>0</subs> (compare with the proof of Lemma 4).

Finally, we surely have x<subs>0</subs> ≥ 1. For, if x<subs>0</subs> < 1 then the coefficients of N<subs>v, v′</subs>(x) are unbounded. However, the coefficients of N<subs>v, v′</subs>(x) are probabilities (compare also with (13)) and thus bounded. This completes the proof of the lemma.

Proof

(Theorem 4.2) First, let us consider the case x<subs>0</subs> > 1 and x<subs>1</subs> = 1. By assumption, x = 1 is a regular point of <bold>N</bold>(x) and, thus, the function

$f(x) = \det\bigl(\mathbf{I} - x\mathbf{B} - x^2\, \mathbf{A}_0\, \mathbf{N}(x)\, \mathbf{A}_2\bigr)$

is regular at x = 1 and satisfies f(1) = 0. Equivalently, 1 is an eigenvalue of the matrix <bold>B</bold> + <bold>A</bold><subs>0</subs><bold>N</bold>(1)<bold>A</bold><subs>2</subs>. Since the matrix <bold>B</bold> + <bold>A</bold><subs>0</subs><bold>N</bold>(1)<bold>A</bold><subs>2</subs> is primitive irreducible, 1 is a simple eigenvalue. Consequently x = 1 is a simple zero of f(x) (and there are no further zeros on the circle |x| = 1). Hence all entries of the inverse matrix (<bold>I</bold> − x<bold>B</bold> − x<sups>2</sups><bold>A</bold><subs>0</subs><bold>N</bold>(x)<bold>A</bold><subs>2</subs>)<sups>−1</sups> have a simple pole at x = 1 (and no other singularities on the circle |x| = 1). Thus, it follows as in the proof of Lemma 12 that the limits

Graph

exist for w ∈ V(L). Similarly we get the existence of the corresponding limits for v ∈ K<subs>ℓ</subs> and (21) with <bold>r</bold> = <bold>A</bold><subs>0</subs><bold>N</bold>(1). Since ∑<subs>vV(G)</subs>p<subs>v</subs> = 1 the moduli of all eigenvalues of <bold>r</bold> have to be smaller than 1.

Next, suppose that x<subs>0</subs> = x<subs>1</subs> = 1. Now <bold>N</bold>(x) is singular at x = 1 and behaves like (24). We also have f(1) = 0 (with f(x) from above) and by using the definition of the determinant it also follows that f(x) has a squareroot singularity of the form

Graph

where c ≠ 0. (if we consider as a new variable then it follows as in the first part of the proof that f(x) = (s) has a simple zero in s. Thus, c ≠ 0.)

Next, consider the powers (x<bold>A</bold><subs>0</subs><bold>N</bold>(x))<sups>ℓ</sups>. By assumption x<bold>A</bold><subs>0</subs><bold>N</bold>(x) has only positive entries (for real x with 0 < x ≤ 1). Hence, there exists a unique positive eigenvalue λ(x) of x<bold>A</bold><subs>0</subs><bold>N</bold>(x) such that the moduli of all other eigenvalues are smaller than λ(x). By continuity this is also true in a neighborhood of the real axis. Thus,

Graph

for some matrix <bold>Q</bold> and some η > 0. Since <bold>N</bold>(x) has a squareroot singularity at x = 1, the eigenvalue λ(x) has the same property:

Graph

Hence, we are in a similar situation as in Lemma 13 and (22) follows; with the only difference that an additional factor c<subs>1</subs><sups>ℓ</sups> = λ(1)<sups>ℓ</sups> appears. However, if c<subs>1</subs> < 1 then the probabilities do not sum up to 1 but the sum is bounded by . On the other hand, if c<subs>1</subs> > 1 then the sum of the probabilities does not converge. This provides c<subs>1</subs> = 1 and completes the proof of the second part of Theorem 4.2.

Finally, suppose that x<subs>1</subs> > 1. Then we also have x<subs>0</subs> > 1. Thus, if we consider <bold>M</bold><subs>0, ℓ</subs>(x) in a neighborhood of x = 1 then all components of <bold>M</bold><subs>0, ℓ</subs>(x) behave (almost) as powers of λ(x) (the largest eigenvalue of x <bold>A</bold><subs>0</subs> <bold>N</bold>(x)) that is now analytic at x = 1. Thus, we can again use the (saddle point) methods of Lemma 8 and obtain (23), however, again with a factor λ(1)<sups>ℓ</sups>. As above it follows that λ(1) = 1 and we are done. Note that μ = 1/λ'(1).

4.3. One-Dimensional Continuous QBD's

As in the discrete case we first study the one-dimensional continuous case. Note that the results have completely the same structure.

Theorem 4.3

Suppose that q<subs>0</subs> and q<subs>2</subs> are positive numbers, q<subs>1</subs> = − q<subs>0</subs> − q<subs>2</subs> and b<subs>0</subs> = − q<subs>0</subs>; and let X(t) be the continuous QBD on the non-negative integers with generator matrix

$$\mathbf{Q} = \begin{pmatrix} -q_0 & q_0 & & & \\ q_2 & -q_0 - q_2 & q_0 & & \\ & q_2 & -q_0 - q_2 & q_0 & \\ & & \ddots & \ddots & \ddots \end{pmatrix}$$

1. If q<subs>0</subs> < q<subs>2</subs> then we have

$\lim_{t \to \infty} \Pr\{X(t) = \ell\} = \Bigl(1 - \frac{q_0}{q_2}\Bigr) \Bigl(\frac{q_0}{q_2}\Bigr)^{\ell} \qquad (\ell \ge 0),$

that is, X(t) is positive recurrent. The distribution of X(t) converges to the stationary distribution.

2. If q<subs>0</subs> = q<subs>2</subs> then X(t) is null recurrent and converges weakly to the absolute normal distribution. In particular, we have, as t → ∞, and uniformly for all (where C > 0 is an arbitrary constant)

Graph

3. If q<subs>0</subs> > q<subs>2</subs> then X(t) is non recurrent and

Graph

converges weakly to the standard normal distribution. We also have, as t → ∞ and uniformly for all ℓ ≥ 0 with (where C > 0 is an arbitrary constant)

Graph

The structure of the proof of Theorem 4.3 is also very similar to the proof of Theorem 4.1. We start with an analogue to Lemma 11.

Lemma 16

Let N(x) be the power series solution of

Graph

where q<subs>1</subs> = − q<subs>0</subs> − q<subs>2</subs> and q<subs>0</subs> and q<subs>2</subs> are positive, and set . Then we explicitly have

Graph

(s) is singular at and

Furthermore, (s) has a local expansion around of the form

Graph

With the help of N̂(s) we can express the Laplace transforms M̂<subs>0, ℓ</subs>(s) of the function t↦<bold>Pr</bold>{X(t) = ℓ|X(0) = 0} by

Graph

Instead of using Cauchy's formula we will apply the inverse Laplace transform to get some asymptotic information on

Graph

Lemma 17

Suppose that q<subs>0</subs> < q<subs>2</subs>. Then the Laplace transform M̂<subs>0, ℓ</subs>(s) (ℓ ≥ 0) converges for ℜ(s) > 0. Furthermore, M̂<subs>0, ℓ</subs>(s) has a polar singularity of order 1 with residue ((q<subs>2</subs> − q<subs>0</subs>)/q<subs>2</subs>) (q<subs>0</subs>/q<subs>2</subs>)<sups>ℓ</sups> at s = 0 and sM̂<subs>0, ℓ</subs>(s) can be analytically continued to ℜ(s) > − η for some η > 0. Consequently,

Graph

Proof

First note that (s) is surely analytic for and, thus, regular for s = 0. Further, (for q<subs>0</subs> < q<subs>2</subs>) we have (0) = 1/q<subs>2</subs> and ′(0) = − 1/(q<subs>0</subs>(q<subs>0</subs> − q<subs>2</subs>)). Thus,

Graph

and consequently

Graph

where T̂(s) is analytic for ℜ(s) > − η for some η > 0. Of course, this directly implies (26).

The case q<subs>0</subs> = q<subs>2</subs> is again quite interesting.

Lemma 18

Suppose that q<subs>0</subs> = q<subs>2</subs>. Then the Laplace transform M̂<subs>0, ℓ</subs>(s) (ℓ ≥ 0) converges for ℜ(s) > 0. There is an algebraic singularity at s = 0 and an analytic continuation to a region of the form ℜ(s) > − η, |arg(s)| < π (for some η > 0). Furthermore, we have, as t → ∞, and uniformly for all (where C > 0 is an arbitrary constant)

Graph

Proof

For q<subs>0</subs> = q<subs>2</subs> we first obtain

Graph

Furthermore

Graph

In fact, we are now almost in the same situation as in Lemma 6. However, instead of Cauchy's formula we use the inverse Laplace transform

Graph

where γ = γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> ∪ γ<subs>4</subs> is given by

Graph

Graph

Graph

Graph

The integral over γ<subs>4</subs> is (as usual) negligible. If we use the substitution s = − z/t for the remaining integral (that is, s ∈ γ<subs>1</subs> ∪ γ<subs>2</subs> ∪ γ<subs>3</subs> corresponds to z ∈ γ') then this integral is asymptotically of the same structure as the integral in Lemma 6. Hence we directly obtain (27).

The analysis of the final case q<subs>0</subs> > q<subs>2</subs> relies again on a saddle point approximation.

Lemma 19

Suppose that q<subs>0</subs> > q<subs>2</subs>. Then X(t) satisfies a central limit theorem with mean value <bold>E</bold> X(t) ∼ (q<subs>0</subs> − q<subs>2</subs>)t and <bold>Var</bold> X(t) ∼ (q<subs>0</subs> + q<subs>2</subs>)(q<subs>0</subs> − q<subs>2</subs>)<sups>−2</sups>t. In particular we have the following local limit theorem:

Graph

as t → ∞ and uniformly for all ℓ ≥ 0 with (where C > 0 is an arbitrary constant).

Proof

Here we have , where s = 0 is a regular point of M̂<subs>0, 0</subs>(s) and N̂(s). The inverse Laplace transform

Graph

is now concentrated around s = 0. In fact, a usual saddle point method (that is completely similar to that of Lemma 7 resp. Lemma 8) leads to the proposed result.

The crucial step is to observe that s = 0 is the saddle point of the function

Graph

if ℓ = (q<subs>0</subs> − q<subs>2</subs>)t (and is also a proper approximation of a saddle point if ℓ is sufficiently close to (q<subs>0</subs> − q<subs>2</subs>)t).

4.4. General Homogeneous Continuous QBD's

Finally we generalize Theorem 4.3 to m-dimensional continuous QBD's. One crucial step in the proof of Theorem 4.4 is to provide an analogue to Lemma 16. Since the matrix <bold>A</bold><subs>1</subs> has positive and negative entries we cannot apply Lemma 4 directly. Nevertheless, it seems that an analogue to Lemma 16 is actually true. For example, in the one-dimensional case we could have applied Lemma 4 by replacing x by −x.

Theorem 4.4

Let <bold>A</bold><subs>0</subs>, <bold>A</bold><subs>1</subs>, <bold>A</bold><subs>2</subs> and <bold>B</bold> be square matrices of order m such that <bold>A</bold><subs>0</subs> and <bold>A</bold><subs>2</subs> are non-negative and the matrices <bold>B</bold> and <bold>A</bold><subs>1</subs> have non-negative off-diagonal elements whereas the diagonal elements are strictly negative so that the row sums are all equal to zero: (<bold>B</bold> + <bold>A</bold><subs>0</subs>)<bold>1</bold> = <bold>0</bold> and (<bold>A</bold><subs>0</subs> + <bold>A</bold><subs>1</subs> + <bold>A</bold><subs>2</subs>)<bold>1</bold> = <bold>0</bold>, that is, the infinite matrix <bold>Q</bold> from (15) is a generator matrix of a homogeneous continuous QBD process. Furthermore suppose that the matrix <bold>B</bold> is primitive irreducible, that no row of <bold>A</bold><subs>0</subs> is zero, that <bold>A</bold><subs>2</subs> is non-zero, and that the system of equations (11) has a solution of the form (24), that is, all entries of <bold>N</bold>(x) have the same radius of convergence and the dominant singularity is of squareroot type.

Let X(t) denote a corresponding QBD process with X(0) ∈ K<subs>0</subs> and let σ<subs>0</subs> denote the abscissa of convergence of the entries of N̂(s) and σ<subs>1</subs> that of M̂<subs>0, 0</subs>(s).

1. If σ<subs>0</subs> < 0 and σ<subs>1</subs> = 0 then X(t) is positive recurrent and for all v ∈ V(G) = V(K<subs>0</subs>) ∪ V(K<subs>1</subs>) ∪ ⋅⋅⋅ we have

Graph

where (p<subs>v</subs>)<subs>vV(G)</subs> is the (unique) stationary distribution on G. Furthermore, the matrix (where all eigenvalues have moduli < 1) satisfies

Graph

in which <bold>P</bold><subs>ℓ</subs> = (p<subs>v</subs>)<subs>v∈Kℓ</subs>.

2. If σ<subs>0</subs> = σ<subs>1</subs> = 0 then X(t) is null recurrent and there exist ρ<subs>v′</subs> > 0 (v′ ∈ V(K)) and η > 0 such that, as t → ∞, and (uniformly for all with an arbitrary constant C > 0)

Graph

3. If σ<subs>1</subs> > 0 then X(t) is non recurrent and there exist τ<subs>v′</subs> > 0 (v′ ∈ V(K)), μ > 0 and σ > 0 such that, as t → ∞ and uniformly for all ℓ ≥ 0,

Graph

Proof

The proof is a direct extension of the proof of Theorem 4.3 by using the ideas of the proof of Theorem 4.2.

Acknowledgments

REFERENCES

1 Bender, E.A. Asymptotic methods in enumeration. SIAM Rev. 1974, 16, 485–515.

2 Drmota, M. Asymptotic distributions and a multivariate Darboux method in enumeration problems. J. Combinat. Theory, Ser. A 1994, 67, 169–184.

3 Drmota, M. A bivariate asymptotic expansion of coefficients of powers of generating functions. Europ. J. Combinatorics 1994, 15, 139–152.

4 Drmota, M. Systems of functional equations. Random Struct. Algorithms 1997, 10, 103–124.

5 Drmota, M. Discrete random walk on one-sided "periodic graphs". Discr. Math. Theor. Comput. Sci. (Discrete Random Walks) 2003, AC, 83–94.

6 Flajolet, P.; Odlyzko, A.M. Singularity analysis of generating functions. SIAM J. Discrete Math. 1990, 3, 216–240.

7 Kaup, L.; Kaup, B. Holomorphic Functions of Several Variables; Studies in Mathematics; de Gruyter: Berlin, 1983.

8 Kuich, W.; Urbanek, F.J. Infinite linear systems and one-counter languages. Theoret. Comput. Sci. 1983, 22, 95–126.

9 Latouche, G.; Ramaswami, V. Introduction to Matrix Analytic Methods in Stochastic Modeling; ASA-SIAM Series on Statistics and Applied Probability; SIAM: Philadelphia, PA, 1999.

10 Neuts, M.F. Matrix-geometric Solutions in Stochastic Models; Johns Hopkins Series in the Mathematical Sciences 2; Johns Hopkins University Press: Baltimore, MD, 1981.

11 Neuts, M.F. Structured Stochastic Matrices of M/G/1 Type and Their Applications; Probability: Pure and Applied 5; Marcel Dekker: New York, NY, 1989.

12 Odlyzko, A.M. Asymptotic enumeration methods. In Handbook of Combinatorics; Elsevier: Amsterdam, 1995; Vol. 1, 2, 1063–1229.

13 Prodinger, H. Further results on a problem of Knödel concerning the analysis of bin-packing. In Number-theoretic Analysis (Vienna, 1988–89); Lecture Notes in Math. 1452; Springer: Berlin, 1990; 193–198.

14 Sedgewick, R.; Flajolet, P. An Introduction to the Analysis of Algorithms; Addison-Wesley: Reading, MA, 1996.

15 Vitter, J.S.; Flajolet, P. Average-case analysis of algorithms and data structures. In Handbook of Theoretical Computer Science; van Leeuwen, J., Ed.; Vol. A: Algorithms and Complexity; Elsevier: Amsterdam, 1990; Ch. 9, 431–524.

Footnotes

<sups>a</sups>We will always use the notation [x<sups>n</sups>]y(x) = y<subs>n</subs> to denote the nth coefficient of a power series.

<sups>b</sups>We always identify a k-dimensional vector a = (a<subs>1</subs>,..., a<subs>k</subs>)' with a (k×1)-matrix, i.e. a column. Furthermore, if A is a matrix then A′ denotes the transposed matrix.

By Michael Drmota
