ISSN: 1736-4337
Journal of Generalized Lie Theory and Applications

# Stability of the Generalized Polar Decomposition Method for the Approximation of the Matrix Exponential

Elham Nobari and S. Mohammad Hosseini*

Department of Mathematics, Tarbiat Modares University, P.O. Box 14115-175, Tehran, Iran

Corresponding Author:
Department of Mathematics
Tarbiat Modares University
P.O. Box 14115-175, Tehran, Iran
Email: hossei [email protected]

Received date: 01 September 2009; Revised date: 03 September 2010; Accepted date: 05 September 2010


#### Abstract

The generalized polar decomposition method (or briefly, GPD method) was introduced by Munthe-Kaas and Zanna [5] to approximate the matrix exponential. In this paper, we investigate the numerical stability of that method with respect to roundoff propagation. The numerical GPD method consists of two parts: splitting a matrix Z ∈ g, where g is a Lie algebra of matrices, and computing exp(Z)v for a vector v. We show that the former is stable provided that Z is not too large, while the latter is not stable in general, except under some restrictions on the entries of the matrix Z and the vector v.

MSC 2010: 65G50, 65L07, 65F30

#### Introduction

Since the matrix exponential appears naturally in the analytical solutions of many differential equations, numerical methods for approximating it are important in applied problems. In many cases where the differential equations are modeled on a matrix Lie group, the approximation is required to remain in the same Lie group. Most of the well-known standard methods [3] do not satisfy this condition, except in some special cases such as 3 × 3 antisymmetric matrices [2].

Recently, Munthe-Kaas and Zanna introduced a geometric numerical method, called the generalized polar decomposition method, or briefly the GPD method [5]. The main advantage of this method over others is that it keeps the approximation in the given Lie group. In the GPD method, the Lie group G and its Lie algebra g are split into simpler ones, such that the computation of exp(Z) becomes easier in those subgroups and subalgebras. The GPD method comprises two numerical algorithms. The first one (Algorithm 1) splits the matrix Z, and the second one (Algorithm 2) computes exp(Z)v, where v is an arbitrary vector.

As far as we know, no proof of the stability of the GPD method with respect to the propagation of roundoff error has appeared in the literature. In this paper, we study the stability of the method for the so-called polar-type splitting order-two algorithm introduced in [5]. We show that Algorithm 1, the simpler of the two, is stable provided the norm ||Z|| = maxi,j |Zi,j| of the matrix Z is not very large, whereas for the more involved Algorithm 2 the stability is controlled by the entries of Z, by exp(||Z||), and by the norm ||v|| = maxi |vi| of the vector v. We introduce a sufficient condition for this purpose. More precisely, given an n × n matrix Z, let

for j = 1, 2, . . . , n − 1 and let

then Algorithm 2 is stable under the following conditions:

||Z|| and ||v|| are not very large, (1.1a)

(1.1b)

where u is the unit roundoff error.

The content of the paper is as follows. In Section 2, some preliminaries concerning the general concepts of error analysis and the GPD method, together with details of the above-mentioned algorithms, are given. Section 3 addresses the error analysis of Algorithm 1, and Section 4 that of Algorithm 2. Section 5 is devoted to the sensitivity analysis of Algorithm 2.

#### Preliminaries

Error analysis

In this section, we recall some standard definitions and lemmas, which are applied in the other sections. Elementary operations such as +, −, ×, and /, as well as √, sin, and cos, obey the floating point model:

fl(x op y) = (x op y)(1 + δ), |δ| ≤ u,

where op is an elementary operation, and u is the unit roundoff error.
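For illustration (a Python sketch of ours, not part of the paper; IEEE double precision with u = 2⁻⁵³ is assumed, while the paper's u is abstract), the model can be checked for a single addition:

```python
import numpy as np

# Assumed: IEEE-754 double precision, whose unit roundoff is u = 2**-53.
u = 2.0 ** -53

# The model fl(x op y) = (x op y)(1 + delta), |delta| <= u, states that each
# elementary operation introduces at most one relative rounding error.
x, y = 0.1, 0.3
computed = x + y                                # fl(x + y) in double precision
exact = np.longdouble(x) + np.longdouble(y)     # higher-precision reference sum
delta = float((np.longdouble(computed) - exact) / exact)
assert abs(delta) <= u
```

Note that `exact` is only a reference for the single operation on the stored operands; the decimal literals 0.1 and 0.3 are themselves already rounded on input.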

Lemma 1 (see [1]). If |δi| ≤ u and ρi = ±1 for 1 ≤ i ≤ n and nu < 1, then

(1 + δ1)^ρ1 (1 + δ2)^ρ2 · · · (1 + δn)^ρn = 1 + θn,

such that

|θn| ≤ γn := nu / (1 − nu).
The following technical properties are satisfied by the sequence {γn}n:

γj + γk + γjγk ≤ γj+k, (j, k ≥ 1), (2.1)

cγn ≤ γcn, (c ≥ 1, n > 1). (2.2)
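With γn := nu/(1 − nu) as in Lemma 1, these properties can be spot-checked numerically (a Python sketch of ours, assuming IEEE double precision):

```python
# Spot check of the properties of gamma_n = n*u / (1 - n*u),
# assuming the IEEE double precision unit roundoff u = 2**-53.
u = 2.0 ** -53

def gamma(n: int) -> float:
    assert n * u < 1
    return n * u / (1 - n * u)

# Property (2.1): gamma_j + gamma_k + gamma_j*gamma_k <= gamma_{j+k}
for j, k in [(1, 1), (3, 5), (10, 20)]:
    assert gamma(j) + gamma(k) + gamma(j) * gamma(k) <= gamma(j + k)

# Property (2.2): c * gamma_n <= gamma_{c*n} for c >= 1
for c, n in [(2, 3), (5, 4)]:
    assert c * gamma(n) <= gamma(c * n)
```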

Denoting the computed value of any variable x by x̂, and writing |A| = [|aij|] for any matrix A = [aij], we have the following lemma.

Lemma 2 (see [1,4]). If y = (c − Σ_{i=1}^{k−1} aibi)/bk is evaluated in floating point arithmetic, then, no matter what the order of evaluation, there exist constants θk(i), i = 0, 1, . . . , k − 1, such that

ŷ bk (1 + θk(0)) = c − Σ_{i=1}^{k−1} aibi (1 + θk(i)), (2.3)

where |θk(i)| ≤ γk for all i. If bk = 1, so that there is no division, then |θk(i)| ≤ γk−1 for all i.
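The summation expression in Lemma 2 is damaged in this copy; assuming the standard form y = (c − Σ_{i=1}^{k−1} aᵢbᵢ)/b_k from [1], the resulting residual bound |b_k ŷ − (c − Σ aᵢbᵢ)| ≤ γk (|b_k ŷ| + Σ |aᵢbᵢ|) can be verified in exact rational arithmetic (a Python sketch of ours):

```python
from fractions import Fraction
import random

random.seed(0)
u = 2.0 ** -53          # IEEE double precision unit roundoff (assumed)
k = 6
a = [random.uniform(-1.0, 1.0) for _ in range(k - 1)]
b = [random.uniform(0.5, 1.5) for _ in range(k)]
c = random.uniform(-1.0, 1.0)

# Floating point evaluation of y = (c - sum a_i b_i) / b_k.
s = c
for i in range(k - 1):
    s -= a[i] * b[i]
y = s / b[k - 1]

# Rearranging Lemma 2 gives
#   |b_k*y - (c - sum a_i b_i)| <= gamma_k * (|b_k*y| + sum |a_i b_i|).
gamma_k = Fraction(k * u / (1 - k * u))
exact_sum = Fraction(c) - sum(Fraction(a[i]) * Fraction(b[i]) for i in range(k - 1))
residual = abs(Fraction(y) * Fraction(b[k - 1]) - exact_sum)
bound = gamma_k * (abs(Fraction(y) * Fraction(b[k - 1]))
                   + sum(abs(Fraction(a[i]) * Fraction(b[i])) for i in range(k - 1)))
assert residual <= bound
```

Using `Fraction` makes the reference quantities exact, so only the floating point loop commits rounding errors.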

Also for matrix multiplication, we have

fl(AB) = AB + Δ, |Δ| ≤ γn|A||B|, A∈ Rm×n, B ∈ Rn×p. (2.4)

For any matrix A = [aij ], let ||A|| = maxi,j |aij |. By this norm, we have

||AB|| ≤ n||A|| ||B|| (2.5)

for any A ∈ Rm×n and B ∈Rn×p.
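A quick numerical check of (2.5) under this max-norm (a Python sketch of ours, not from the paper):

```python
import random

random.seed(1)

def max_norm(M):
    """||A|| = max_{i,j} |a_ij|, the norm used throughout the paper."""
    return max(abs(x) for row in M for x in row)

def matmul(A, B):
    """Plain triple-loop product of an m x n and an n x p matrix."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

m, n, p = 4, 5, 3
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
B = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(n)]

# (2.5): ||AB|| <= n ||A|| ||B||, where n is the inner dimension.
assert max_norm(matmul(A, B)) <= n * max_norm(A) * max_norm(B)
```

The factor n appears because each entry of AB is a sum of n products, each bounded by ||A|| ||B||.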

For any two vectors a, b ∈ Rn, let

(2.6)

So, using this notation, we can write

(2.7)

Similar to Lemma 1, one can prove the following lemma.

Lemma 3. If |ui| ≤ u1 for 1 ≤ i ≤ n and nu1 < 1, then

(1 + u1)(1 + u2) · · · (1 + un) = 1 + hn, (2.8)

such that

|hn| ≤ γn1 := nu1 / (1 − nu1).

GPD method

Suppose G is a finite-dimensional Lie group with Lie algebra g, and let σ : G → G, σ ≠ id, be an involutive automorphism, that is, a one-to-one smooth map such that

σ(x · y) = σ(x) · σ(y), ∀x, y ∈ G, and σ(σ(x)) = x, ∀x ∈ G.

It is well known that for sufficiently small t and for any z = exp(tZ) ∈ G, where Z ∈ g, we can write

z = xy, (2.9)

where σ(x) = x−1 and σ(y) = y. The decomposition (2.9) is called the Generalized Polar Decomposition (GPD) of z. On the Lie algebra g, σ induces an involutive automorphism dσ, its differential at the identity; hence dσ defines a splitting of g into the direct sum of two linear spaces:

g = p ⊕ k,

where k = {Z ∈ g : dσ(Z) = Z} and p = {Z ∈ g : dσ(Z) = −Z}. In fact, k is a Lie subalgebra of g, but p only admits the Lie triple system property:

A, B, C ∈ p =⇒ [A, [B, C]] ∈ p,

where [A, B] is the standard commutator on the Lie algebra g. Now, let

Πp = ½(id − dσ), Πk = ½(id + dσ),

where the maps Πp : g → p and Πk : g → k are the canonical projections. It can be easily verified that every Z ∈ g can be uniquely decomposed as Z = P + K, where

P = Πp(Z) = ½(Z − dσ(Z)), K = Πk(Z) = ½(Z + dσ(Z))

(see [5]). Also, k and p satisfy

[k, k] ⊆ k, [k, p] ⊆ p, [p, p] ⊆ k.
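As a concrete illustration (our own example, not taken from [5]): on g = gl(n, R) with dσ(Z) = −Zᵀ, the subalgebra k consists of the antisymmetric matrices, p of the symmetric ones, and the canonical projections reduce to P = (Z + Zᵀ)/2 and K = (Z − Zᵀ)/2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example involution on gl(n, R): d_sigma(Z) = -Z^T (our choice of sigma).
def d_sigma(Z):
    return -Z.T

Z = rng.standard_normal((4, 4))

# Canonical projections: P = (Z - d_sigma(Z))/2 in p, K = (Z + d_sigma(Z))/2 in k.
P = (Z - d_sigma(Z)) / 2          # = (Z + Z^T)/2, the symmetric part
K = (Z + d_sigma(Z)) / 2          # = (Z - Z^T)/2, the antisymmetric part

assert np.allclose(Z, P + K)              # unique decomposition Z = P + K
assert np.allclose(d_sigma(P), -P)        # P lies in p
assert np.allclose(d_sigma(K), K)        # K lies in k

# Lie triple system property of p: [A, [B, C]] stays in p.
def bracket(A, B):
    return A @ B - B @ A

A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
A, B, C = (M + M.T for M in (A, B, C))    # three symmetric (p) matrices
T = bracket(A, bracket(B, C))
assert np.allclose(d_sigma(T), -T)        # [A, [B, C]] is again symmetric
```

Note that p itself is not closed under the bracket: [B, C] is antisymmetric for symmetric B, C, which is exactly why only the triple bracket returns to p.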

If, in the decomposition (2.9), we take x = exp X(t) and y = exp Y(t), where X(t) ∈ p and Y(t) ∈ k, for sufficiently small t, then X and Y can be expanded as follows:

where the coefficients Xi and Yi can be found in [5]. The first terms in the expansions of X(t) and Y (t) are

(2.10)

(2.11)

Considering a sequence of involutive automorphisms σ1, σ2, . . . on G, σ1 induces a decomposition and approximation:

where X[1] and Y[1] are suitable truncations of (2.10) and (2.11), respectively. Then k1 can be decomposed by σ2, and so we have k1 = p2 ⊕ k2 and

Repeating this process m times, (m ≥ 1) yields the following:

(2.12)

The number m is chosen so that the right-hand side terms can be computed easily. If the expansions (2.10) and (2.11) are truncated at order two, the two following algorithms, based on the iterated generalized polar decomposition (2.12), are obtained. The first algorithm splits the matrix Z, while the second one approximates exp(Z)v.
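The expansions (2.10) and (2.11) are lost from this copy; assuming the standard order-two polar-type truncation X(t) = tP − (t²/2)[P, K] and Y(t) = tK (our reconstruction of one splitting step, consistent with the order-two method of [5]), the O(t³) local error of exp(tZ) ≈ exp(X(t)) exp(Y(t)) can be checked numerically with the symmetric/antisymmetric splitting:

```python
import numpy as np

rng = np.random.default_rng(2)

def expm_taylor(M, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for small ||M||)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

Z = rng.standard_normal((4, 4))
P = (Z + Z.T) / 2                 # symmetric part, in p (with d_sigma(Z) = -Z^T)
K = (Z - Z.T) / 2                 # antisymmetric part, in k

def split_error(t):
    # Assumed order-two truncation: X(t) = tP - (t^2/2)[P, K], Y(t) = tK.
    X = t * P - (t**2 / 2) * (P @ K - K @ P)
    Y = t * K
    return np.max(np.abs(expm_taylor(t * Z) - expm_taylor(X) @ expm_taylor(Y)))

# A third-order local error should shrink by about 8 when t is halved.
ratio = split_error(0.1) / split_error(0.05)
assert 5.0 < ratio < 12.0
```

The BCH series confirms the truncation: matching exp(X) exp(Y) = exp(X + Y + ½[X, Y] + · · ·) with tZ = tP + tK at order t² forces the t² term of X to be −½[P, K] while Y has no t² term, since [P, K] ∈ p.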

Algorithm 1 (Polar-type splitting, order two).

```
% Purpose: 2nd-order approximation of the splitting (2.12)
% In:  n × n matrix Z
% Out: Z overwritten with the nonzero elements of X[i] and Y[m] as:
%      Z(i+1:n, i) = X[i](i+1:n, i),  Z(i, i+1:n) = X[i](i, i+1:n)
%      diag(Z) = diag(Y[m])
for j = 1 : n-1
    aj := Z(j+1:n, j)
    bj := Z(j, j+1:n)^T
end
```

Algorithm 2 (Polar-type approximation).

```
% Purpose: computing the approximant (2.12) applied to a vector v
% In:  v : n-vector
%      Z : n × n matrix containing the nonzero elements of X[i] and Y[m] as:
%      Z(i+1:n, i) = X[i](i+1:n, i),  Z(i, i+1:n) = X[i](i, i+1:n)
%      diag(Z) = diag(Y[m])
% Out: v := exp(X[1]) · · · exp(X[m]) exp(Y[m]) v
for k = 1 : n
    vk := exp(z_k,k) vk
end
for j = n-1 : -1 : 1
    aj := [0; Z(j+1:n, j)]
    bj := [0; Z(j, j+1:n)^T]
    vold := v(j:n)
end
```

#### Error analysis of Algorithm 1

In Algorithm 1, we denote by Z both the input matrix and the split output matrix. In this algorithm, the calculated value of cj (1 ≤ j ≤ n − 1) is

where . From (2.8), we have where |h2| ≤ γ21.

Hence,

where

(3.1)

(3.2)

By applying the norm of the matrix Z this reduces to

Now, similarly for computing dj, we have

where

Therefore, the entries of the matrix Z change as follows:

where and

So from (3.1),

Substituting the value of cj from Algorithm 1 into the above relation, we obtain

and finally

Since

for nu < 1 and from property (2.1), the upper bound of pj will be

The greatest value of the right-hand side occurs at j = 1, hence for all j,

(3.3)

Similarly, if

then

(3.4)

However, since the diagonal entries do not change during this algorithm, (3.3) and (3.4) show that the algorithm is stable whenever ||Z|| is not too large.

#### Error analysis of Algorithm 2

In the GPD method, the matrix Z is split by Algorithm 1, and exp(Z)v is calculated by Algorithm 2. We denote the split matrix of Z by

Error analysis of αj

Let and , where we have assumed kj > 0. If kj is not positive, by considering when kj < 0 and by letting when kj = 0, it is easily seen that all the arguments and upper bounds presented in the paper remain valid.

From Lemma 2, we have

where So we can write

where

(4.1)

Then by assuming ||Z|| ≤ 1 and from condition (1.1), we have

where Hence kj + t is nonnegative and

where δ, with |δ| ≤ u, denotes the floating point arithmetic error. Therefore,

(4.2)

where

Since we have , we have

(4.3)

where

(4.4)

Error analysis of βj

Let us consider

By using the identity , we obtain

where the quantity ε1j is

(4.5)

From (4.1) and (4.3), we have

(4.6)

We now consider the condition

(4.7)

which is clearly a consequence of

for 1 ≤ j ≤ n − 1, which itself is obtained from condition (1.1b) for ||Z|| < 1. Hence, from (4.7) and the monotone increase of the map on the interval (0, 1), we can write

The function

has the elementary property that 1 ≤ f(a) ≤ f(b) whenever |a| ≤ b; from this and (4.5) we deduce , where

(4.8)

Now by using (4.4) and (4.7) in (4.8), we get

This shows that provided that admits an upper bound as suggested by the condition (1.1).

Error analysis of λj

For computing λj, from (4.2) and the definition of the floating point error δ, we have

where

Hence,

So,

where

(4.9)

Also, , where

However,

and therefore

Hence in this case , provided that admits an upper bound like the one suggested by condition (1.1) (note that ). Moreover, by (4.9),

(4.10)

where

(4.11)

which implies oj is of order O(u) under condition (1.1).

Error analysis of ϕ(D)

Now, in calculating ϕ(D), we will have

where

From (4.6) and (4.10), we have

(4.12)

Hence by (4.7) and (4.11), the right-hand side of (4.12) is of order O(u) and

(4.13)

where

(4.14)

(4.15)

(4.16)

Error analysis of w

At first, since in Algorithm 2, we have Now, for computing w, let hence,

where

So from property (2.1), we have

(4.17)

Now from Algorithm 2 and (2.4),

where

(4.18)

(4.19)

Therefore, from (4.13), (4.15), (4.17), (4.18), and (4.19), we have , where

(4.20)

Error analysis of vnew

Let us consider

where

(4.21)

and But

and hence |w1| ≤ l1, where

(4.22)

Also, , such that |w2| ≤ l2, where

So from (4.20), (4.22), and (4.21), we have

Error analysis of v

(4.23)

#### Sensitivity of Algorithm 2

As was shown in the last section and from (4.23), the computational error of exp(Z)v depends on the split matrix computed by Algorithm 1. We therefore write the computed splitting as a perturbation of the exact one, in which the norm of the perturbation E is obtained from (3.3) and (3.4). The relations (4.3) and (4.6) then reduce to the following, where

After substituting ϕj and ψj in the appropriate formulas given in the last section, the error upper bounds are computed in terms of ||E||. Accordingly, the estimate (4.17) becomes

that in turn provides the error bound given by (4.20) as follows:

Now if we consider initial errors influencing the vectors , for j ≥ 2, these vectors can be represented by , where er(1) = 0. Hence, the estimate (4.17) reduces to

(5.1)

in which the quantities ||E|| ||er||, being very small, can be neglected. Obviously, the upper bound given by (5.1) governs the variations of rj and errorj. Let “error” denote the final computational error committed by both Algorithms 1 and 2. In Figure 1, we compare the dependence of ||E|| and ||error|| using 10 × 10 random matrices hZ, with Z normalized so that , and a 10 × 1 random unit vector v. Figure 1 shows the stability of the algorithms with respect to roundoff errors. Moreover, the statistical correlation coefficient between the backward and forward errors is 0.9531.

Figure 1: Backward and forward errors versus h for Algorithms 1 and 2.

#### Conclusion

As expected, the upper bound (4.23) of |errorj| depends strongly on the input data Z and v. More precisely, if k0 ≫ u, then this error bound can be controlled by . Finally, we have shown the stability of the GPD method under condition (1.1).

#### Acknowledgment

The authors express their deep gratitude to the anonymous referee for his/her valuable comments and suggestions, which led to the present form of the paper.

#### References
