ISSN: 1736-4337
Journal of Generalized Lie Theory and Applications

Stability of the Generalized Polar Decomposition Method for the Approximation of the Matrix Exponential

Elham Nobari and S. Mohammad Hosseini*

Department of Mathematics, Tarbiat Modares University, P.O. Box 14115-175, Tehran, Iran

Corresponding Author:
S. Mohammad Hosseini
Department of Mathematics
Tarbiat Modares University
P.O. Box 14115-175, Tehran, Iran
Email: hossei [email protected]

Received date: 01 September 2009; Revised date: 03 September 2010; Accepted date: 05 September 2010


Abstract

The generalized polar decomposition method (briefly, the GPD method) was introduced by Munthe-Kaas and Zanna [5] to approximate the matrix exponential. In this paper, we investigate the numerical stability of that method with respect to roundoff propagation. The numerical GPD method consists of two parts: splitting a matrix Z ∈ g, a Lie algebra of matrices, and computing exp(Z)v for a vector v. We show that the former is stable provided that ||Z|| is not too large, while the latter is not stable in general, except under some restrictions on the entries of the matrix Z and the vector v.

MSC 2010: 65G50, 65L07, 65F30

Introduction

Since the matrix exponential naturally appears in the analytical solutions of many differential equations, it is important to develop numerical methods for approximating it in applied problems. In many cases where the differential equations are modeled on a matrix Lie group, we need the approximation to remain in the same Lie group. Most of the well-known standard methods [3] do not satisfy this condition except in some special cases, such as 3 × 3 antisymmetric matrices [2].

Recently, Munthe-Kaas and Zanna have introduced a geometric numerical method, called the generalized polar decomposition method, or briefly the GPD method [5]. The main advantage of this method over others is that it keeps the approximation in the given Lie group. In the GPD method, the Lie group G and its Lie algebra g are split into simpler ones, such that the computation of exp Z is easier in those subgroups and subalgebras. There are two numerical algorithms in the GPD method. The first one (Algorithm 1) splits the matrix Z, and in the second one (Algorithm 2), exp(Z)v is computed, where v is an arbitrary vector.

As far as we know, no proof of the stability of the GPD method with respect to propagation of roundoff error has appeared in the literature. In this paper, we study the stability of the method for the so-called polar-type splitting order-two algorithm introduced in [5]. We show that Algorithm 1, the simpler of the two, is stable provided that the norm ||Z|| = maxi,j |Zi,j| of the matrix Z is not very large, whereas for the more complicated Algorithm 2 the stability is controlled by the entries of Z, exp ||Z||, and the norm ||v|| = maxi |vi| of the vector v. We introduce a sufficient condition for this purpose. More precisely, given an n × n matrix Z, let

Equation

for j = 1, 2, . . . , n − 1 and let

Equation

then Algorithm 2 is stable under the following conditions:

||Z|| and ||v|| are not very large, (1.1a)

Equation (1.1b)

where u is the unit roundoff error.

The content of the paper is as follows. In Section 2, some preliminaries concerning the general concept of error analysis and the GPD method, as well as details of the above-mentioned algorithms, are given. In Section 3, the error analysis of Algorithm 1 and, in Section 4, the error analysis of Algorithm 2 are addressed. Section 5 is devoted to the sensitivity analysis of Algorithm 2.

Preliminaries

Error analysis

In this section, we recall some standard definitions and lemmas that are used in the following sections. Elementary operations such as +, −, ×, and /, as well as √, sin, and cos, obey the floating point model

fl(x op y) = (x op y)(1 + δ), |δ| ≤ u,

where op is an elementary operation, and u is the unit roundoff error.
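As a small illustration of this model (a sketch assuming IEEE double precision, where u = 2^−53, not a construction from the paper), the relative error of a rounded elementary operation can be checked against the bound |δ| ≤ u using exact rational arithmetic:

```python
# Illustration of the model fl(x op y) = (x op y)(1 + delta), |delta| <= u,
# assuming IEEE double precision with unit roundoff u = 2**-53.
from fractions import Fraction

u = 2.0 ** -53  # unit roundoff for IEEE double precision

def rel_error(x, y):
    """Relative error of the computed sum x + y versus the exact sum."""
    computed = x + y                    # rounded floating point result
    exact = Fraction(x) + Fraction(y)   # exact rational arithmetic
    return abs((Fraction(computed) - exact) / exact)

# The model guarantees |delta| <= u for every rounded elementary operation.
samples = [(0.1, 0.2), (1.0, 1e-16), (3.141592653589793, 2.718281828459045)]
deltas = [rel_error(x, y) for x, y in samples]
assert all(d <= u for d in deltas)
```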

Lemma 1 (see [1]). If |δi| ≤ u and ρi = ±1 for 1 ≤ i ≤ n and nu < 1, then

(1 + δ1)^ρ1 (1 + δ2)^ρ2 · · · (1 + δn)^ρn = 1 + θn,

such that

|θn| ≤ γn := nu/(1 − nu).

The following technical properties are satisfied by the sequence {γn}n:

γj + γk + γjγk ≤ γj+k, (j, k ≥ 1), (2.1)

nu ≤ γcn, (c ≥ 1, n > 1). (2.2)
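Property (2.1) can be verified exactly in rational arithmetic, assuming the standard definition γn = nu/(1 − nu) from rounding error analysis [1]:

```python
# Exact verification of property (2.1) for gamma_n = n*u / (1 - n*u),
# using rational arithmetic so no rounding enters the comparison.
from fractions import Fraction

u = Fraction(1, 2**53)  # unit roundoff for IEEE double precision, exactly

def gamma(n):
    """gamma_n = n*u / (1 - n*u), defined for n*u < 1."""
    assert n * u < 1
    return n * u / (1 - n * u)

# Property (2.1): gamma_j + gamma_k + gamma_j * gamma_k <= gamma_{j+k}.
for j, k in [(1, 1), (3, 5), (100, 200)]:
    assert gamma(j) + gamma(k) + gamma(j) * gamma(k) <= gamma(j + k)
```

The inequality holds exactly because 1 + γn = 1/(1 − nu), so the left-hand side equals 1/((1 − ju)(1 − ku)) − 1 ≤ 1/(1 − (j + k)u) − 1.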

Denoting the computed value of any variable x by x̂ and writing |A| = [|aij|] for any matrix A = [aij], we have the following lemma.

Lemma 2 (see [1,4]). If y = (c − a1b1 − · · · − ak−1bk−1)/bk is evaluated in floating point arithmetic, then, no matter what the order of evaluation, there exist constants θk(i), i = 0, 1, . . . , k − 1, such that

bk ŷ (1 + θk(0)) = c − a1b1(1 + θk(1)) − · · · − ak−1bk−1(1 + θk(k−1)), (2.3)

where |θk(i)| ≤ γk for all i. If bk = 1, so that there is no division, then |θk(i)| ≤ γk−1 for all i.

Also for matrix multiplication, we have

fl(AB) = AB + Δ, |Δ| ≤ γn|A||B|, A∈ Rm×n, B ∈ Rn×p. (2.4)
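The entrywise bound (2.4) can be checked for a single inner product by comparing a floating point dot product against its exact rational value (the matrices below and the constant γn = nu/(1 − nu) are illustrative assumptions):

```python
# Entrywise check of (2.4): fl(AB) = AB + Delta, |Delta| <= gamma_n |A||B|,
# for one dot product, comparing floats with exact rational arithmetic.
from fractions import Fraction

u = Fraction(1, 2**53)                     # unit roundoff, IEEE doubles
A = [[0.1, 0.2, 0.3], [1.5, -2.5, 0.7]]    # 2 x 3 (illustrative data)
B = [[0.4], [-0.6], [1.1]]                 # 3 x 1
n = 3                                      # inner dimension
gamma_n = n * u / (1 - n * u)

for i in range(2):
    for j in range(1):
        computed = sum(A[i][k] * B[k][j] for k in range(n))   # float dot product
        exact = sum(Fraction(A[i][k]) * Fraction(B[k][j]) for k in range(n))
        bound = gamma_n * sum(Fraction(abs(A[i][k])) * Fraction(abs(B[k][j]))
                              for k in range(n))
        # |Delta_ij| <= gamma_n * (|A||B|)_ij
        assert abs(Fraction(computed) - exact) <= bound
```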

For any matrix A = [aij ], let ||A|| = maxi,j |aij |. By this norm, we have

||AB|| ≤ n||A|| ||B|| (2.5)

for any A ∈ Rm×n and B ∈Rn×p.
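A quick sanity check of inequality (2.5) with the entrywise norm ||A|| = maxi,j |aij| (the matrices below are arbitrary illustrative data):

```python
# Check of (2.5): ||AB|| <= n ||A|| ||B||, where ||A|| = max_{i,j} |a_ij|
# and n is the inner dimension of the product.
def max_norm(A):
    return max(abs(x) for row in A for x in row)

def matmul(A, B):
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1.0, -2.0, 3.0],
     [0.5,  4.0, -1.0]]        # 2 x 3
B = [[2.0, 0.0],
     [-1.0, 3.0],
     [0.5, -2.5]]              # 3 x 2
n = 3                          # inner dimension
assert max_norm(matmul(A, B)) <= n * max_norm(A) * max_norm(B)
```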

For any two vectors a, b ∈ Rn, let

Equation (2.6)

So, using this notation, we can write

Equation (2.7)

Similar to Lemma 1, one can prove the following lemma.

Lemma 3. If |ui| ≤ u1 for 1 ≤ i ≤ n and nu1 < 1, then

(1 + u1)(1 + u2) · · · (1 + un) = 1 + hn, (2.8)

such that

|hn| ≤ γn1 := nu1/(1 − nu1).

GPD method

Suppose G is a finite-dimensional Lie group, g is its Lie algebra, and let σ : G → G, σ ≠ id, be an involutive automorphism, that is, a one-to-one smooth map such that:

σ(x · y) = σ(x) · σ(y), ∀x, y ∈ G,

σ(σ(x)) = x, ∀x ∈ G.
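As a concrete illustration (this particular choice of σ is an assumption for the example, not taken from the paper), σ(x) = (x^{−1})^T is an involutive automorphism of GL(2), and both defining properties can be checked numerically:

```python
# Example involutive automorphism sigma(x) = (x^{-1})^T on GL(2):
# sigma(xy) = sigma(x) sigma(y) and sigma(sigma(x)) = x.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    """Inverse of a 2 x 2 matrix by the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sigma(x):
    return transpose(inv2(x))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

x = [[2.0, 1.0], [1.0, 1.0]]
y = [[1.0, 3.0], [0.0, 2.0]]
assert close(sigma(matmul(x, y)), matmul(sigma(x), sigma(y)))  # automorphism
assert close(sigma(sigma(x)), x)                               # involutive
```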

It is well known that for sufficiently small t and for any z = exp(tZ) ∈ G, where Z ∈ g, we can write

z = xy, (2.9)

where σ(x) = x−1 and σ(y) = y. The decomposition (2.9) is called the generalized polar decomposition (GPD) of z. On the Lie algebra g, an involutive automorphism dσ is induced by σ:

dσ(Z) = (d/dt)|t=0 σ(exp(tZ)), Z ∈ g;

hence dσ defines a splitting of g into the direct sum of two linear spaces:

g = p ⊕ k,

where k = {Z ∈ g : dσ(Z) = Z} and p = {Z ∈ g : dσ(Z) = −Z}. In fact, k is a Lie subalgebra of g, but p only admits the Lie triple system property:

A,B,C ∈ p =⇒ [A, [B,C]]∈ p,

where [A,B] is the standard commutator on the Lie algebra g. Now, let

Πp = (id − dσ)/2, Πk = (id + dσ)/2,

where the maps Πp : g → p and Πk : g → k are the canonical projection maps. It can be easily verified that every Z ∈ g can be uniquely decomposed into Z = P + K, where

P = Πp(Z) ∈ p, K = Πk(Z) ∈ k

(see [5]). Also, k and p satisfy

[k, k] ⊆ k, [p, p] ⊆ k, [k, p] ⊆ p.
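Continuing the example σ(x) = (x^{−1})^T (an assumed illustration, not from the paper), dσ(Z) = −Z^T, so k is the set of skew-symmetric and p the set of symmetric matrices; the canonical projections then recover the classical symmetric/skew-symmetric split Z = P + K:

```python
# Splitting Z = P + K for the assumed involution d_sigma(Z) = -Z^T:
# p = symmetric matrices, k = skew-symmetric matrices.
def transpose(A):
    return [list(r) for r in zip(*A)]

def dsigma(Z):
    return [[-Z[j][i] for j in range(len(Z))] for i in range(len(Z))]

def project_p(Z):   # P = (Z - d_sigma(Z)) / 2  -> symmetric part here
    D = dsigma(Z)
    return [[(Z[i][j] - D[i][j]) / 2 for j in range(len(Z))] for i in range(len(Z))]

def project_k(Z):   # K = (Z + d_sigma(Z)) / 2  -> skew-symmetric part here
    D = dsigma(Z)
    return [[(Z[i][j] + D[i][j]) / 2 for j in range(len(Z))] for i in range(len(Z))]

Z = [[1.0, 2.0], [4.0, 3.0]]
P, K = project_p(Z), project_k(Z)
# P is symmetric (P in p), K is skew-symmetric (K in k), and Z = P + K.
assert P == transpose(P)
assert K == [[-x for x in row] for row in transpose(K)]
assert all(P[i][j] + K[i][j] == Z[i][j] for i in range(2) for j in range(2))
```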

If, in the decomposition (2.9), we consider x = exp X(t) and y = exp Y(t), where X(t) ∈ p and Y(t) ∈ k, then for sufficiently small t they can be expanded as follows:

X(t) = X1t + X2t² + X3t³ + · · · , Y(t) = Y1t + Y2t² + Y3t³ + · · · ,

where the coefficients Xi and Yi can be found in [5]. The first terms in the expansions of X(t) and Y (t) are

X(t) = tP − (t²/2)[P,K] + O(t³), (2.10)

Y(t) = tK + O(t³). (2.11)

Considering a sequence σ1, σ2, . . . , σm of involutive automorphisms on G, σ1 induces a decomposition g = p1 ⊕ k1 and the approximation:

exp(tZ) ≈ exp(X[1]) exp(Y[1]),

where X[1] and Y[1] are suitable truncations of (2.10) and (2.11), respectively. Then exp(Y[1]) can be decomposed by σ2, so that k1 = p2 ⊕ k2 and

exp(Y[1]) ≈ exp(X[2]) exp(Y[2]).

Repeating this process m times (m ≥ 1) yields the following:

exp(tZ) ≈ exp(X[1]) exp(X[2]) · · · exp(X[m]) exp(Y[m]). (2.12)

The number m is chosen appropriately such that the right-hand side terms can be easily computed. If the expansions (2.10) and (2.11) are truncated at order two, the two following algorithms, based on the iterated generalized polar decomposition (2.12), are obtained. The first algorithm splits the matrix Z, while the second one approximates exp(Z)v.

Algorithm 1 (Polar-type splitting, order two).

% Purpose: 2nd order approximation of the splitting (2.12)

% In: n × n matrix Z

% Out: Z overwritten with the nonzero elements of X[i] and Y[m] as:

% Z(i + 1 : n, i) = X[i](i + 1 : n, i), Z(i, i + 1 : n) = X[i](i, i + 1 : n)

% diag(Z) = diag(Y [m])

for j = 1 : n − 1

aj := Z(j + 1 : n, j)

bj := Z(j, j + 1 : n)T

Equation

Equation

Equation

Equation

Equation

end

Algorithm 2 (Polar-type approximation).

% Purpose: Computing the approximant (2.12) applied to a vector v

% In: v : n-vector

% Z : n × n matrix containing the nonzero elements of X[i] and Y[m] as:

% Z(i + 1 : n, i) = X[i](i + 1 : n, i), Z(i, i + 1 : n) = X[i](i, i + 1 : n)

% diag(Z) = diag(Y[m])

% Out: v := exp(X[1]) · · · exp(X[m]) exp(Y[m])v

for k = 1 : n

vk := exp(zk,k)vk

end

for j = n − 1 : −1 : 1

aj := [0;Z(j + 1 : n, j)]

bj := [0;Z(j, j + 1 : n)T ]

vold := v(j : n)

Equation

Equation

Equation

Equation

Equation

Equation

end
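Since the intermediate formulas of Algorithm 2 are given in [5], a generic reference evaluation of exp(Z)v by truncated Taylor series (adequate only for small ||Z||) offers a simple consistency check for any implementation; this sketch is not the GPD algorithm itself:

```python
# Generic reference for exp(Z)v by truncated Taylor series; suitable only
# for small ||Z||. Not the GPD algorithm -- a baseline to test against.
def matvec(Z, v):
    return [sum(Z[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def expm_times_v(Z, v, terms=30):
    """Evaluate exp(Z) v by summing Z^k v / k! for k = 0, ..., terms-1."""
    result = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [t / k for t in matvec(Z, term)]   # Z^k v / k!
        result = [r + t for r, t in zip(result, term)]
    return result

Z = [[0.0, 0.1], [-0.1, 0.0]]                 # small skew-symmetric matrix
negZ = [[-x for x in row] for row in Z]
v = [1.0, 2.0]
# Consistency check: exp(-Z) exp(Z) v should return v.
w = expm_times_v(negZ, expm_times_v(Z, v))
assert all(abs(wi - vi) < 1e-12 for wi, vi in zip(w, v))
```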

Error analysis of Algorithm 1

In Algorithm 1, we denote by Z both the input data matrix and the split output matrix. In this algorithm, the computed value of cj (1 ≤ j ≤ n − 1) is

Equation

where Equation. From (2.8), we have Equation where |h2| ≤ γ21.

Hence,

Equation

where

Equation (3.1)

Equation (3.2)

Applying the norm of the matrix Z, this reduces to

Equation

Now, similarly for computing dj, we have

Equation

where

Equation

Therefore, the entries of the matrix Z change as follows:

Equation

where Equation and

Equation

So from (3.1),

Equation

Replacing cj by its value from Algorithm 1 in the above relation, we obtain

Equation

and finally

Equation

Since

Equation

for nu < 1 and from property (2.1), the upper bound of pj will be

Equation

The greatest value of the right-hand side occurs at j = 1, hence for all j,

Equation (3.3)

Similarly, if

Equation

then

Equation (3.4)

Since the diagonal entries do not change during this algorithm, (3.3) and (3.4) show the stability of the algorithm whenever ||Z|| is not too large.

Error analysis of Algorithm 2

In the GPD method, the matrix Z is split by Algorithm 1, and exp(Z)v is computed by Algorithm 2. We denote the split matrix of Z by Equation

Error analysis of αj

Let Equation and Equation; here we have assumed kj > 0. If kj is not a positive number, by considering Equation when kj < 0 and by letting Equation when kj = 0, it is easily seen that all the arguments and upper bounds presented in the paper remain valid.

From Lemma 2, we have

Equation

where Equation So we can write

Equation

where

Equation (4.1)

Then by assuming ||Z|| ≤ 1 and from condition (1.1), we have

Equation

where Equation Hence kj + t is nonnegative and

Equation

where δ, with |δ| ≤ u, denotes the floating point arithmetic error. Therefore,

Equation (4.2)

where

Equation

Since Equation, we have

Equation (4.3)

where

Equation (4.4)

Error analysis of βj

Let us consider

Equation

By using the identity Equation, we obtain

where the quantity ε1j is

Equation (4.5)

From (4.1) and (4.3), we have

Equation (4.6)

We now consider the condition

Equation (4.7)

that is clearly a consequence of

Equation

for 1 ≤ j ≤ n − 1, which itself is obtained from condition (1.1b) for ||Z|| < 1. Hence from (4.7) and by increasing monotonicity of the map Equation on the interval (0, 1), we can write

Equation

The function

Equation

has the elementary property that 1 ≤ f(a) ≤ f(b) whenever |a| ≤ b; from this and (4.5) we deduce Equation, where

Equation (4.8)

Now by using (4.4) and (4.7) in (4.8), we get

Equation

This shows that Equation provided that Equation admits an upper bound as suggested by the condition (1.1).

Error analysis of λj

For computing λj, from (4.2) and the definition of the floating point arithmetic error δ, we have

Equation

where

Equation

Hence,

Equation

So,

Equation

where

Equation (4.9)

Also, Equation, where

Equation

However,

Equation

and therefore

Equation

Hence in this case Equation, provided that Equation admits an upper bound like the one suggested by condition (1.1) (note that Equation ). Moreover, by (4.9),

Equation (4.10)

where

Equation (4.11)

which implies oj is of order O(u) under condition (1.1).

Error analysis of ϕ(D)

Now, in calculating ϕ(D), we will have

Equation

where

Equation

From (4.6) and (4.10), we have

Equation (4.12)

Hence by (4.7) and (4.11), the right-hand side of (4.12) is of order O(u) and

Equation (4.13)

where

Equation (4.14)

Equation (4.15)

Equation (4.16)

Error analysis of w

First, since Equation in Algorithm 2, we have Equation. Now, for computing w, let Equation; hence,

Equation

where

Equation

So from property (2.1), we have

Equation (4.17)

Now from Algorithm 2 and (2.4),

Equation

where

Equation (4.18)

Equation (4.19)

Therefore, from (4.13), (4.15), (4.17), (4.18), and (4.19), we have Equation, where

Equation (4.20)

Error analysis of vnew

Let us consider

Equation

where

Equation (4.21)

and Equation But

Equation

and hence |w1| ≤ l1, where

Equation (4.22)

Also, Equation, such that |w2| ≤ l2, where

Equation

So from (4.20), (4.22), and (4.21), we have

Equation

Error analysis of v

Equation

Equation (4.23)

Sensitivity of Algorithm 2

As shown in the last section, by (4.23) the computational error of exp(Z)v depends on Equation, which is calculated by Algorithm 1. We therefore consider Equation, in which the norm of E is obtained from (3.3) and (3.4). Therefore, relations (4.3) and (4.6) reduce to Equation, where

Equation

Equation

After substituting ϕj and ψj into the appropriate formulas given in the last section, the error upper bounds are expressed in terms of ||E||. Accordingly, estimate (4.17) becomes

Equation

which in turn provides the error bound given by (4.20) as follows:

Equation

Now, if we consider initial errors influencing the vectors Equation, then for j ≥ 2 these vectors can be represented by Equation, where er(1) = 0. Hence, estimate (4.17) reduces to

Equation (5.1)

in which the quantities ||E|| ||er||, being very small, can be neglected. Obviously, the upper bound given by (5.1) governs the variations of rj and errorj. Let "error" denote the final computational error committed by Algorithms 1 and 2 together. In Figure 1, we compare the dependence of ||E|| and ||error|| using 10 × 10 random matrices hZ, with Z normalized so that Equation, and a 10 × 1 random unit vector v. Figure 1 shows the stability of the algorithms with respect to roundoff errors. Moreover, the statistical correlation coefficient between the backward and forward errors is 0.9531.


Figure 1: Backward and forward errors versus h for Algorithms 1 and 2.

Conclusion

As expected, the upper bound (4.23) of |errorj| strongly depends on the input data Z and v. More precisely, if k0 ≫ u, then this error bound can be controlled by Equation. Finally, we have shown the stability of the GPD method under condition (1.1).

Acknowledgment

The authors express their deep gratitude to the anonymous referee for his/her valuable comments and suggestions, which led to the present form of the paper.

References
