
An Alternating Sequence Iteration’s Method for Computing Largest Real Part Eigenvalue of Essentially Positive Matrices: Collatz and Perron-Frobenius’ Approach

Tedja Santanoe Oepomo*

Mathematics Division, Los Angeles Harbor College/West LA College, and School of International Business, California International University, USA

*Corresponding Author:
Oepomo TS
Science, Technology, Engineering, and Mathematics Division
Los Angeles Harbor College/West LA College, and
School of International Business, California International University
1301 Las Riendas Drive Suite: 15, La Habra, CA 90631, USA
Tel: 310-287-4216
E-mail: [email protected]; [email protected]; [email protected]

Received Date: November 10, 2016; Accepted Date: December 23, 2016; Published Date: December 27, 2016

Citation: Oepomo TS (2016) An Alternating Sequence Iteration’s Method for Computing Largest Real Part Eigenvalue of Essentially Positive Matrices: Collatz and Perron-Frobenius’ Approach. J Appl Computat Math 5:334. doi: 10.4172/2168-9679.1000334

Copyright: © 2016 Oepomo TS. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

This paper describes a new numerical method for computing the eigenvalue with the largest real part of an essentially positive matrix. The process consists of computing lower and upper bounds that monotonically approximate this eigenvalue. A numerical discussion derives the required number of arithmetic operations of the new method, and comparisons with several well-known methods, such as the power and QR methods, are given.

Keywords

Collatz’s theorem; Perron-Frobenius’ theorem; Eigenvalue

AMS subject classifications

15A48; 15A18; 15A99; 65F10; 65F15

Introduction

A variety of numerical methods for finding eigenvalues of non-negative irreducible matrices have been reported over the last decades, and the mathematical and numerical aspects of most of these methods are well documented [1-24]. A recent article by Tedja [19] presented the mathematical aspects of Collatz’s eigenvalue inclusion theorem for non-negative irreducible matrices. It is the purpose of this manuscript to present the numerical implementation of [19]. Indeed, there is the hope that developing a new numerical method could lead to discovering properties responsible for better numerical methods for finding and estimating eigenvalues of non-negative irreducible matrices. Birkhoff and Varga [2] observed that the results of the Perron-Frobenius theory, and consequently Collatz’s theorem, can be slightly generalized by allowing the matrices considered to have negative diagonal elements. They introduced the terms “essentially non-negative” for matrices whose off-diagonal elements are non-negative, and “essentially positive” for essentially non-negative, irreducible matrices. The only significant change is that whenever the Perron-Frobenius theory and Collatz’s theorem refer to the spectral radius of a non-negative matrix A, the corresponding quantity for an essentially non-negative matrix Ã is the (real) eigenvalue of maximal real part in the spectrum of Ã, to be denoted by Λ[Ã]. Of course, Λ[Ã] need not be positive, and it is not necessarily dominant among the eigenvalues in the absolute value sense.

Definition

If A is an essentially positive matrix (a real matrix with positive off-diagonal entries), then A + aI has positive entries for sufficiently large positive a, so A + aI has a Perron eigenvalue, p say, with a corresponding eigenvector v having positive entries. Moreover, p is the only eigenvalue of A + aI for which there is a corresponding eigenvector with positive entries. Thus p − a is the only eigenvalue of A with a corresponding eigenvector having all its entries non-negative; p − a is real but need not be positive (since a could be greater than p). In this manuscript the eigenvalue corresponding to a positive eigenvector is therefore real. Related terms already exist in the literature: Z-matrix, tau-matrix, M-matrix, and Metzler matrix all refer to matrices with off-diagonal entries of the same sign, but with extra conditions.
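To illustrate this shift argument, the following is a small numerical sketch (ours, not part of the paper; the function name and the test matrix are illustrative), using NumPy’s dense eigensolver as a reference oracle:

```python
import numpy as np

def largest_real_part_eigenvalue(A):
    """Illustrate Lambda[A] = p - a: shift an essentially positive A so
    that A + aI is entrywise positive, take the Perron root p of the
    shifted matrix, and undo the shift.  Reference computation only;
    the paper's iteration avoids computing the full spectrum."""
    a = max(0.0, -A.diagonal().min()) + 1.0   # any a making A + aI > 0
    B = A + a * np.eye(A.shape[0])
    p = np.linalg.eigvals(B).real.max()       # Perron root of positive B
    return p - a

# Essentially positive: positive off-diagonal entries, arbitrary diagonal.
A = np.array([[-5.0,  2.0,  1.0],
              [ 1.0, -1.0,  3.0],
              [ 2.0,  1.0, -4.0]])
print(largest_real_part_eigenvalue(A))  # eigenvalue of largest real part
```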

Background

Let A be an n × n essentially positive matrix. The new method can be used to numerically determine the eigenvalue λ_A with the largest real part and the corresponding positive eigenvector x[A] of an essentially positive matrix. This numerical method is based on a previous manuscript [16]. A matrix A=(a_ij) is essentially positive if a_ij ≥ 0 for all i ≠ j, 1 ≤ i, j ≤ n, and A is irreducible.

Let x > 0 be any positive vector with n components [19]. Let

z_i(x) = Σ_{j≠i} a_ij x_j, i = 1, 2, …, n,    (1)

f_i(x) = a_ii + z_i(x)/x_i = (Ax)_i/x_i,    (2)

m(x) = min_{1≤i≤n} f_i(x), M(x) = max_{1≤i≤n} f_i(x),    (3)

Δ(x) = M(x) − m(x).    (4)

In Ostrowski [25], m(x) is defined as:

Equation        (5)
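For concreteness, the following sketch (ours; the function name is illustrative) evaluates the quantities (1)-(4) for a positive vector x:

```python
import numpy as np

def collatz_bounds(A, x):
    """Evaluate (1)-(4): the off-diagonal sums z_i, the quotients
    f_i = (Ax)_i / x_i, and the Collatz bounds m(x) <= lambda_A <= M(x)
    together with the accuracy measure Delta(x) = M(x) - m(x)."""
    d = np.diag(A)
    z = A @ x - d * x          # equation (1): z_i = sum_{j != i} a_ij x_j
    f = d + z / x              # equation (2): f_i = a_ii + z_i / x_i
    m, M = f.min(), f.max()    # equation (3)
    return z, f, m, M, M - m   # equation (4): Delta(x)
```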

The following theorem is an application of Corollary 2.3 of [16] to the design of a numerical method using the Perron-Frobenius-Collatz mini-max principle for the calculation of x[A] [10].

Let {x^p} (p=0, 1, 2, …) be a sequence of positive vectors and ξ ≡ x[A].

Theorem 1: If the sequence {x^p} (p=0, 1, 2, …) of positive unit vectors is such that either M(x^p) → λ_A or m(x^p) → λ_A as p → ∞, then x^p → x[A] ≡ ξ. Moreover, the sequences {m(x^i)} and {x^i} are equi-convergent in the sense that an index υ and a constant K > 0 exist such that Equation if i ≥ υ. Similar statements can be expressed if Equation is known ([19], Theorem 2.4).

Numerical Implementation of Theorem 1

We will now define a group of sequences, the “decreasing-sequence.”

Decreasing-sequence

Let Y_r(x) (r=1, 2, 3, …, n) be an n-component vector valued function of the form

Y_r(x) = (x_1, …, x_{r−1}, Ω_r(x), x_{r+1}, …, x_n)^T.    (6)

Here the Ω_r(x) (r=1, 2, 3, …, n) are scalar valued functions with the following properties:

a) Ω_r(x) is a continuous positive valued function which maps the set of positive vectors V+ into the set of real numbers R.

b) Ω_r(x) ≤ x_r.

c) f_r(Y_r(x)) ≤ M(x).     (7)

d) Equality in c) may hold only if Y_r(x) = x; this yields f_r(x) = M(x).

e) If for some x > 0, Ω_r(x) = x_r for each r ∈ N (N = {1, 2, 3, …, n}), then m(x) = M(x). This implies that f_r(x) = λ_A for every r according to Collatz, hence x = x[A]. In Faddeev [25], Equation.

The n-component vector valued function Y_r(x) defined in equation (6) will be referred to as a decreasing function. A sequence {x^p} (p=0, 1, 2, 3, …) of positive n-vectors is constructed which satisfies the conditions of Theorem 1. The terms of the sequence {x^p} are generated by the following recursive formula:

x^{p+1} = Y_{p+1}(x^p),     (8)

where Y_k(x) = Y_{k+n}(x) (k=1, 2, 3, …). If x^0 is given, the sequence {x^p} is completely defined. x^{p+1} and x^p differ only in the rth component, where

r ≡ p + 1 (mod n).      (9)

Such a sequence will be called a decreasing maximum ratio sequence or briefly a decreasing-sequence.
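A minimal sketch of the recursion (8)-(9) follows (ours; `omega` stands for any Ω_r satisfying properties a)-e), such as the concrete choice given in Theorem 2 below):

```python
import numpy as np

def decreasing_sequence(A, x0, omega, sweeps=50):
    """Run the recursion (8): each step replaces only the component
    r = (p + 1) mod n of the current iterate by Omega_r(x), so the
    iterates decrease monotonically toward x[A] (cf. Corollary 1 below)."""
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for p in range(sweeps * n):
        r = p % n              # equation (9): cyclic choice of component
        x[r] = omega(A, x, r)  # Y_{p+1}(x^p) changes only component r
    return x
```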

Corollary 1: Any decreasing-sequence converges to x[A].

Proof: Equation (6) and 0 ≤ Ω_r(x) ≤ x_r imply 0 ≤ Y_r(x) ≤ x for every x ∈ V+. It then follows from the inductive construction in equation (8) that x^0 ≥ x^1 ≥ x^2 ≥ … ≥ 0. The sequence is therefore monotone and bounded below, so the convergence of x^p as p → ∞ is always guaranteed; in particular, 0 ≤ x^p ≤ x^0 implies sup_p ||x^p|| < ∞.

Note: An increasing sequence of real numbers that is bounded from above always has a finite limit: it converges to its supremum. Likewise, a decreasing sequence that is bounded from below converges to its infimum. Here “bounded” must be understood carefully: it means bounded from above and from below. We will now define a second group of sequences, the “increasing-sequence.”

Increasing-sequence

Let y_r(x) (r=1, 2, 3, …, n) be an n-component vector valued function of the form

y_r(x) = (x_1, …, x_{r−1}, ω_r(x), x_{r+1}, …, x_n)^T.     (10)

Here the ω_r(x) (r=1, 2, 3, …, n) are scalar valued functions with the following properties:

a) ω_r(x) is a continuous positive valued function and is bounded on V+.

b) ω_r(x) ≥ x_r.

c) f_r(y_r(x)) ≥ m(x).     (11)

d) Equality in c) may hold only if y_r(x) = x; this yields f_r(x) = m(x).

e) If for some x > 0, ω_r(x) = x_r for each r ∈ N (N = {1, 2, 3, …, n}), then m(x) = M(x). This implies that f_r(x) = λ_A for every r according to Collatz, hence x = x[A]. In Faddeev [25], Equation.

The n-component vector valued function y_r(x) defined in equation (10) will be referred to as an increasing function. A sequence {x^p} (p=0, 1, 2, 3, …) of positive n-vectors is constructed which satisfies the conditions of Theorem 1. The terms of the sequence {x^p} are generated by the following recursive formula:

x^{p+1} = y_{p+1}(x^p),     (12)

where y_k(x) = y_{k+n}(x) (k=1, 2, 3, …) and y(x) is as defined in equation (10). If x^0 is given, the sequence {x^p} is completely defined. Such a sequence will be called an increasing minimum ratio sequence or briefly an increasing-sequence.

Corollary 2: Any increasing-sequence converges to x[A].

Proof: Since ω_r(x) ≥ x_r, we have y_r(x) ≥ x. Starting with x^0 > 0, this implies x^{p+1} = y_{p+1}(x^p) ≥ x^p ≥ … ≥ x^0 > 0, so that x^p automatically converges as long as sup_p ||x^p|| < ∞.

Numerical tests indicate that alternating the application of the decreasing and increasing sequences converges faster than either the decreasing or the increasing sequence applied separately. Therefore, we define a sequence of vectors {x^p} constructed by alternating the decreasing and increasing type functions.

We will describe a sequence of n steps which generates x^{j+1}, …, x^{j+n} (j=0, n, 2n, …) in an iteration. If the decreasing functions (Y_r(x), r=1, 2, …, n) are used to generate the n terms of the sequence {x^p} during an iteration, as defined in equation (8), we say that the iteration is in the decreasing mode. Similarly, the iteration is in the increasing mode if the increasing functions are used, as defined in equation (12). Successive terms of the sequence {x^p} are defined recursively as follows:

x^{k+1} = Y_{k+1}(x^k) for k=0, 1, 2, … or x^{k+1} = y_{k+1}(x^k) for k=0, 1, 2, …     (13)

where k=0 corresponds to the input vector. The first iteration may be in either the increasing or the decreasing mode. We also define the sequences of real numbers {t_ℓ} and {T_ℓ} as follows: t_0 = m(x^0) and T_0 = M(x^0). At the end of each iteration we consider the following inequalities:

t_{ℓ−1} < m(x^{ℓn}),     (14-1)

M(x^{ℓn}) < T_{ℓ−1},     (14-2)

where ℓ (=1, 2, 3, …) indexes the iterations. If inequalities (14-1) and (14-2) are met, we set

t_ℓ = m(x^{ℓn}), T_ℓ = M(x^{ℓn}),      (15)

and the mode of the next, (ℓ+1)st, iteration will be different from that of the ℓth iteration. If either inequality (14-1) or (14-2) is not satisfied, then the mode of the (ℓ+1)st iteration is the same as that of the ℓth iteration, and we set t_ℓ = t_{ℓ−1} and T_ℓ = T_{ℓ−1}.

A sequence having the above mentioned properties will be called the alternating sequence iteration.

Corollary 3: Any alternating sequence iteration converges to x[A].

Proof: If inequalities (14-1) and (14-2) are satisfied, the estimate is bracketed between the upper bound M(x^{ℓn}) and the lower bound m(x^{ℓn}), so no change in the iteration mode is required. Otherwise the mode is switched, since if only one of the inequalities is met, the estimate is not bracketed between M(x^{ℓn}) and m(x^{ℓn}). The conditions stated in inequalities (14-1) and (14-2) ensure that the iterates remain between the upper and lower bounds. As the iterations continue, M(x^{ℓn}) and m(x^{ℓn}) draw closer and closer together, which accelerates the rate of convergence; in the limit, the quantity Δ(x) of equation (4) approaches zero.
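To make the construction concrete, here is a minimal sketch of the alternating iteration (13)-(15) (ours, anticipating the specific component updates of Theorems 2 and 3 below; the midpoint (m(x)+M(x))/2 playing the role of both H_r and h_r is our assumption, as are all names):

```python
import numpy as np

def alternating_iteration(A, x0, tol=1e-10, max_sweeps=1000):
    """Sketch of the alternating sequence iteration (13)-(15): full sweeps
    of n single-component steps, switching between the decreasing mode
    (Theorem 2) and the increasing mode (Theorem 3) whenever both bounds
    improve, per test (14).  Returns m(x), M(x) and the final iterate,
    with m(x) <= lambda_A <= M(x)."""
    diag = np.diag(A)
    x = np.asarray(x0, dtype=float).copy()
    n = x.size

    def bounds(v):
        f = (A @ v) / v
        return f.min(), f.max()

    t, T = bounds(x)                      # t_0 = m(x^0), T_0 = M(x^0)
    decreasing = True                     # start in the decreasing mode
    for _ in range(max_sweeps):
        for r in range(n):                # one iteration = n steps
            m, M = bounds(x)
            target = 0.5 * (m + M)        # our choice of H_r and h_r
            z_r = A[r] @ x - diag[r] * x[r]
            f_r = diag[r] + z_r / x[r]
            if decreasing and f_r < target:
                x[r] = z_r / (target - diag[r])    # Theorem 2 choice
            elif not decreasing and f_r > target > diag[r]:
                x[r] = z_r / (target - diag[r])    # Theorem 3 choice
        m, M = bounds(x)
        if M - m < tol:                   # Delta(x), equation (4)
            break
        if m > t and M < T:               # test (14): both bounds improved
            t, T = m, M                   # update per (15) ...
            decreasing = not decreasing   # ... and switch the mode
    return bounds(x) + (x,)
```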

Corollaries 1, 2, and 3 described above lay the foundation of an iterative method for determining the positive eigenvector of an essentially positive matrix. The choices of the functions Ω_r(x) and ω_r(x) are open, subject to the restrictions specified in connection with the decreasing and increasing sequences. In Theorems 2 and 3, which follow, a possible choice for Ω_r(x) and ω_r(x) is given.

Theorem 2

Let H_r(x) (r=1, 2, 3, …, n) denote continuous, positive valued functions which map the set of positive vectors V+ into the set of real numbers R such that

m(x) ≤ H_r(x) ≤ M(x),   (16)

where equality may hold on either side of equation (16) only if m(x) = M(x) = λ_A.

For r ∈ N (N = {1, 2, 3, …, n}), let

Ω_r(x) = x_r if f_r ≥ H_r, or Ω_r(x) = z_r/(H_r − a_rr) if f_r < H_r,    (17)

where H_r ≡ H_r(x), f_r ≡ f_r(x), and all notations are as defined in equations (1)-(5). Then the functions Ω_r(x) (together with a starting vector x^0) define a decreasing sequence.

Proof: We will first show that if f_r < H_r, the term (H_r − a_rr) in equation (17) is always positive. By the definitions in equations (1) and (2), f_r = a_rr + z_r/x_r; as f_r < H_r, this yields

z_r/x_r < H_r − a_rr.     (18)

For an essentially positive matrix the off-diagonal elements of a row cannot all be zero, and consequently z_r > 0 for any vector x > 0. Therefore (18) becomes

0 < H_r − a_rr.      (19)

It is clear from equations (17) and (19) that Ω_r(x) is a positive valued function. Equation (17) can also be used to derive estimates for the function θ_r ≡ θ_r(x) = f_r(Y_r(x)). With the abbreviations used before,

θ_r = f_r if f_r ≥ H_r, or θ_r = H_r if f_r < H_r.       (20)

The above inequalities are equivalent to

θ_r = max[f_r, H_r].      (21)

θ_r is the maximum of two continuous functions and is therefore continuous. By definition, f_r and H_r are both less than or equal to M; therefore, from equation (21), θ_r ≤ M. Thus Ω_r(x) has property c) in equation (7) of decreasing sequences. From the definitions of θ_r, Ω_r(x), and z_r we get

θ_r = a_rr + z_r/Ω_r(x).      (22)

Equation (20) implies

θ_r ≥ f_r.    (23)

From equations (22) and (23), a_rr + z_r/Ω_r(x) ≥ a_rr + z_r/x_r, and therefore Ω_r(x) ≤ x_r. Thus Ω_r(x) has property b) in equation (7) of decreasing sequences.

Equation (22) is equivalent to

Ω_r(x) = z_r/(θ_r − a_rr).      (24)

As stated before, for an essentially positive matrix z_r is positive, and therefore it is obvious from (22) that θ_r > a_rr; thus the denominator in equation (24) is positive. Therefore, by the established continuity of θ_r(x), Ω_r(x) is also continuous. Thus Ω_r(x) has property a) of equation (7) of decreasing sequences. From equation (21), θ_r = M(x) is possible only if max(f_r, H_r) = M(x). As H_r < M(x) by assumption unless m(x) = M(x), θ_r(x) = M(x) implies f_r(x) = M(x).

Thus Ω_r(x) has property d) in equation (7) of decreasing sequences.

Finally, suppose that Ω_r(x) = x_r for r=1, 2, 3, …, n, and thus θ_r(x) = f_r(x). From equation (20) this is possible only if f_r ≥ H_r for every r. Choosing r such that f_r(x) = m(x) gives m(x) ≥ H_r; since H_r ≥ m(x) by definition, we have H_r = m(x). By assumption this is possible only if m(x) = M(x). Hence Ω_r(x) has property e) in equation (7) of decreasing sequences. This completes the proof of Theorem 2.
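In code, the choice (17) might look as follows (a sketch under our reading of (17); `H` stands for any admissible function satisfying (16), e.g. a midpoint rule):

```python
import numpy as np

def omega_theorem2(A, x, r, H):
    """The decreasing choice (17): keep x_r when f_r >= H_r(x); otherwise
    shrink it to z_r / (H_r - a_rr), which makes the new r-th quotient
    exactly H_r(x).  By (19) the denominator is then positive."""
    z_r = A[r] @ x - A[r, r] * x[r]       # z_r = sum_{j != r} a_rj x_j
    f_r = A[r, r] + z_r / x[r]            # current quotient f_r(x)
    H_r = H(A, x, r)                      # m(x) <= H_r <= M(x), per (16)
    return x[r] if f_r >= H_r else z_r / (H_r - A[r, r])
```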

Theorem 3

Let h_r(x) (r=1, 2, 3, …, n) denote continuous, bounded functions mapping the set of positive vectors V+ into the set of real numbers R such that

m(x) ≤ h_r(x) ≤ M(x),     (25)

where equality may hold on either side of equation (25) only if m(x) = M(x) = λ_A and x = x[A]; the quantity f_r(y_r(x)) was introduced in equation (11). We further define

ω_r(x) = x_r         (26)

if f_r ≤ h_r, and

ω_r(x) = z_r/(h_r − a_rr)     (27)

if f_r > h_r and h_r > a_rr; ω_r(x) = x_r otherwise. Then the functions ω_r (together with an x^0 > 0) define an increasing sequence.

Proof: We will first show that whenever equation (27) is used, the term (h_r − a_rr) is positive, so that ω_r(x) is well defined and positive; this is immediate, since (27) applies only when h_r > a_rr. Moreover, by the definitions in equations (1) and (2), f_r = a_rr + z_r/x_r; as f_r > h_r, this yields

z_r/x_r > h_r − a_rr.      (28)

For an essentially positive matrix the off-diagonal elements of a row cannot all be zero, and consequently z_r > 0 for any vector x > 0; therefore equation (28) gives ω_r(x) = z_r/(h_r − a_rr) > x_r.

Equations (26) and (27) can also be used to derive estimates for the function θ_r ≡ θ_r(x) = f_r(y_r(x)). With the abbreviations used before,

θ_r = f_r if f_r ≤ h_r, or θ_r = h_r if f_r > h_r.       (29)

The above inequalities are equivalent to

θ_r = min[f_r, h_r].       (30)

θ_r is the minimum of two continuous functions and is thus continuous. By definition, f_r and h_r are both greater than or equal to m; therefore, from equation (30), θ_r ≥ m. Thus ω_r(x) has property c) in equation (11) of increasing sequences. From the definitions of θ_r, ω_r(x), and z_r we get

θ_r = a_rr + z_r/ω_r(x).     (31)

Equation (29) implies

θ_r ≤ f_r.    (32)

From equations (31) and (32), a_rr + z_r/ω_r(x) ≤ a_rr + z_r/x_r, and therefore ω_r(x) ≥ x_r. Thus ω_r(x) has property b) in equation (11) of increasing sequences. Equation (31) is equivalent to:

ω_r(x) = z_r/(θ_r − a_rr).     (33)

As stated before, for an essentially positive matrix z_r is positive, and hence it is obvious from (31) that θ_r > a_rr; thus the denominator in equation (33) is positive. Accordingly, by the established continuity of θ_r(x), ω_r(x) is also continuous. Thus ω_r(x) has property a) of equation (11) of increasing sequences. From equation (30), θ_r = m(x) is possible only if min(f_r, h_r) = m(x). As h_r > m(x) by assumption unless m(x) = M(x), θ_r(x) = m(x) implies f_r(x) = m(x). Thus ω_r(x) has property d) in equation (11) of increasing sequences. Finally, suppose that ω_r(x) = x_r for r=1, 2, 3, …, n, and thus θ_r(x) = f_r(x). From equation (29) this is possible only if f_r ≤ h_r for every r. Choosing r such that f_r(x) = M(x) gives M(x) ≤ h_r; since h_r ≤ M(x) by definition, we have h_r = M(x). By assumption this is possible only if m(x) = M(x). Hence ω_r(x) has property e) in equation (11) of increasing sequences. This completes the proof of Theorem 3.
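Analogously, a sketch of the increasing choice (26)-(27) (ours; the placement of the guard h_r > a_rr is our reading of the condition under which (27) applies):

```python
import numpy as np

def omega_theorem3(A, x, r, h):
    """The increasing choice (26)-(27): keep x_r when f_r <= h_r(x);
    otherwise enlarge it to z_r / (h_r - a_rr), which makes the new r-th
    quotient exactly h_r(x).  Applied only when h_r > a_rr, so the result
    stays positive and exceeds x_r by (28)."""
    z_r = A[r] @ x - A[r, r] * x[r]       # z_r = sum_{j != r} a_rj x_j
    f_r = A[r, r] + z_r / x[r]            # current quotient f_r(x)
    h_r = h(A, x, r)                      # m(x) <= h_r <= M(x), per (25)
    if f_r <= h_r or h_r <= A[r, r]:
        return x[r]                       # case (26) / fallback
    return z_r / (h_r - A[r, r])          # case (27)
```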

The Requirements of the Functions H_r(x) and h_r(x)

The functions H_r(x) and h_r(x) can be selected in many ways. The following are a few of the possible choices:

a) Equation, where r ∈ N (N = {1, 2, 3, …, n}), M*(x) = min(m(x), M_1), and M_1 is an upper estimate of the eigenvalue λ_A, e.g., Equation. In [8], m(x) is defined as Equation and M(x) is defined as Equation. However, in Tedja [19], m(x) = ρ_*(y) and M(x) = ρ*(y), while in Collatz [6], m(x) = Equation.

b) For full matrices, a reasonable choice for H_r(x) and h_r(x) is derived from the arithmetic mean of Equation and Equation, where Equation.

c) Another simple choice for h_r(x), H_r(x) is a weighted arithmetic mean of Equation.

υ(x) can also be defined in the following manner: Equation    (34)

New Alternating Iteration Method

A step of the alternating sequence iteration method consists in modifying a single component x_r of x. As a result, z_i, f_i, ||x||, and υ(x) have to be brought up to date at every step. Computing z_i and ||x|| from their definitions in equations (1) and (5) will be referred to as recalculation. A considerable reduction of work can be accomplished if, instead of being recalculated, these quantities are merely updated according to the following steps:

z_i ← z_i + a_ir(x_r′ − x_r) for i = 1, 2, 3, …, n, i ≠ r, with ||x|| and υ(x) updated correspondingly.     (35)

These steps will be referred to as the updating iteration. The updating formulas can be obtained easily from equations (1) through (5). To prevent the accumulation of round-off errors, after a number of iterations the variables have to be recalculated instead of updated. When working in double precision, our previous experience indicates that it is more than sufficient to recalculate after every twenty-five iterations.
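A sketch of the O(n) updating step (35), as opposed to the O(n²) recalculation, follows (the function and variable names are ours):

```python
import numpy as np

def update_z(A, x, z, r, new_xr):
    """Updating per (35): after replacing x_r, refresh every off-diagonal
    sum z_i = sum_{j != i} a_ij x_j in O(n) operations instead of the
    O(n^2) recalculation from equation (1)."""
    delta = new_xr - x[r]
    x[r] = new_xr
    z += A[:, r] * delta       # each z_i with i != r gains a_ir * delta
    z[r] -= A[r, r] * delta    # z_r excludes the diagonal term j = r
    return x, z
```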

Over-Relaxation Method

Among the various choices for the functions H_r(x), h_r(x), and υ(x), the choice indicated in equation (34) seems to give rapid convergence, at least for full matrices. The purpose of this section is to present a variant of equation (35) obtained by introducing the over-relaxation technique. We consider the following equation:

Equation      (36)

Here γ is the over-relaxation factor, with 1 ≤ γ ≤ 2. Equation (36) may be useful in the case of banded matrices. The over-relaxation method contains the following cases:

• γ = 1, the simultaneous over-relaxation method, and

• 1 < γ < 2, the over-relaxation method.

Error measure: in all methods the quantity Δ(x) = M(x) − m(x), as defined in equation (4), is used as the measure of accuracy.

Discussion

Before we go any further, the following issues should be understood. Are both eigenvalues and eigenvectors required, or are eigenvalues alone enough? Is only the eigenvalue of largest modulus of interest? Does the matrix have special characteristics, such as being real symmetric or essentially positive? If all eigenvalues and eigenvectors are required, then this new method cannot be used.

If a matrix A is essentially positive and the positive eigenvector x[A] and the corresponding eigenvalue λ_A are of particular interest, then the new method can be used. Each step of the numerical method requires n² + O(n) computations if the parameters are chosen for the best rate of convergence. It is reasonable to assume that in half the steps practically no computations are needed, resulting in roughly n²/2 + O(n) computations per iteration. As previously stated, after some iterations the variables have to be recalculated instead of updated. Recalculation needs n² + O(n) additional computations. If the computations are performed in double precision, recalculation does not have to be performed often; as a result, recalculations do not increase the total number of computations significantly.

For our numerical comparisons, all three methods (the power method, the new method, and the QR method) were used to compute the eigenvalue of the following matrices:

All three methods were used to estimate the eigenvalue of Hilbert matrices of various orders. Let H_n be a Hilbert matrix of order n, with elements defined by the relation (H_n)_ij = 1/(i + j − 1), 1 ≤ i, j ≤ n.
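For reference, the test matrices can be generated as follows (our sketch):

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H_n with entries 1 / (i + j - 1), 1-based i, j."""
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)
```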

The results of the 3 methods can be seen in Tables 1-3.

Operations Δ(x) LogΔ(x)
1640 1.423 0.3528
4921 1.41 × 10-1 -1.958
8200 1.39 × 10-2 -4.275
11490 1.3441 × 10-3 -6.611
14764 1.296 × 10-4 -8.949
18040 1.25 × 10-5 -11.288
21322 1.208 × 10-6 -13.626
24600 1.116 × 10-7 -15.965
27880 1.123 × 10-8 -18.304

Table 1: Hilbert matrix H40 new method.

Operations Δ(x) LogΔ(x)
4800 1.016 × 10-1 -2.288
9600 3.71 × 10-3 -5.597
14400 8.549 × 10-5 -9.368
19200 3.068 × 10-6 -12.693
24000 7.061 × 10-8 -16.467
28800 2.536 × 10-9 -19.289
29700 9.977 × 10-10 -20.628

Table 2: Hilbert matrix H40 power method.

Operations Δ(x) LogΔ(x)
17400 2.423 × 10-2 -3.21
23400 1.007 × 10-4 -9.202
26600 8.263 × 10-9 -18.614
332000 2.422 × 10-2 -3.722
442000 1.0096 × 10-4 -9.212
506000 8.239 × 10-9 -18.615

Table 3: Hilbert matrix H40 QR method.

• We would like to determine the efficiency of the three numerical methods when a matrix has eigenvalues of nearly the same modulus, so it was decided to pick a matrix of order n that is almost cyclic (C_n). Consider the following matrix:

Equation. The elements of A_{1,2} and A_{2,1} were defined as follows: Equation, where A_{1,2} is a (12, 8) matrix and A_{2,1} is an (8, 12) matrix.

The elements of A_{1,1} and A_{2,2} were defined as follows: Equation, where A_{1,1} is a (12, 12) matrix and A_{2,2} is an (8, 8) matrix.

If the elements of A_{1,1} and A_{2,2} were replaced by zeros, the matrix would be cyclic. For comparison, the results of the three methods can be seen in Tables 4-6.

Operations Δ(x) LogΔ(x)
3780 3.091 × 10-1 -1.174
7980 2.089 × 10-1 -1.566
1188 1.421 × 10-1 -1.950
16390 9.706 × 10-2 -2.334
20580 6.6381 × 10-2 -2.714
24780 4.5461 × 10-2 -3.092
28990 3.111 × 10-2 -3.478
33190 2.1341 × 10-2 -3.848
37380 1.4651 × 10-2 -4.228

Table 4: Almost cyclic matrix C40 power method.

Operations Δ(x) LogΔ(x)
1400 2.76 × 10-2 -3.589
2800 7.607 × 10-4 -7.187
4200 2.293 × 10-5 -10.699
5600 6.3492 × 10-7 -14.277
7000 1.9246 × 10-8 -17.774
8400 5.3279 × 10-10 -21.358

Table 5: Almost cyclic matrix C40 new method.

Operations Δ(x) LogΔ(x)
2400 3.091 × 10-1 -6.622
3260 2.089 × 10-1 -8.308
4000 1.421 × 10-1 -14.178
44680 9.706 × 10-2 -21.798
21340 6.6381 × 10-2 -6.620
29340 4.5461 × 10-2 -8.30892
37300 3.111 × 10-2 -14.178
41340 2.1341 × 10-2 -21.798

Table 6: Almost cyclic matrix C40 QR method.

• Introducing a proper shift of origin can speed up the convergence of the power method [9], so it was decided to test a kind of matrix for which introducing a shift of origin would not help the speed of convergence. Such a matrix of order n (Q_n) is given by the following relations:

Equation, 1 ≤ i, j ≤ n, and Equation, 1 ≤ i ≤ n.

The results of our tests, including a run of the Arnoldi method, are indicated in Tables 7-10.

Operations Δ(x) LogΔ(x)
4200 6.506 × 103 8.788
… 1.0023 × 103 4.608
3780 6.196 1.824
… 1.065 0.0609
7990 2.1967 × 10-1 -1.518
… 4.985 × 10-2 -2.988
12190 1.160 × 10-2 -4.458
… 2.703 × 10-3 -5.916
16390 6.294 × 10-4 -7.372

Table 7: Matrix Q40 power method.

Operations Δ(x) LogΔ(x)
2400 3.150 1.134
3260 2.089 × 10-4 -8.472
4000 4.778 × 10-7 -14.558
44680 3.124 × 10-10

Table 8: Matrix Q40 new method.

Operations Δ(x) LogΔ(x)
4260 1.429 × 101 2.6591
5260 1.322 × 10-2 -4.3429
5870 4.664 × 10-5 -9.9722
41300 1.4294 × 101 2.6588
51300 1.324 × 10-2 -4.3262
55300 4.668 × 10-5 -9.9722

Table 9: Matrix Q40 QR method.

Operations Δ(x) LogΔ(x)
4350 1.439 × 101 2.5591
4500 1.322 × 10-2 -6.473
5000 4.5778 × 10-7 -15.558
54678 4.134 × 10-10  

Table 10: Matrix Q40 Arnoldi method.

We will assume that the positive eigenvector and the corresponding eigenvalue of the essentially positive matrix are of interest. From our trials it is evident that in all three cases the rate of convergence of the new method is better than, or at least as fast as, that of the power method [9]. The QR method [26] converges very slowly in the last two cases, when the separation between the eigenvalues is poor. Consider the results of case b, where the matrix is nearly cyclic. A cyclic matrix has some eigenvalues of equal modulus, so for a matrix that is nearly cyclic it is plausible to assume that the separation between the moduli is very poor. The new method takes about 5,700 multiplications and divisions to reach an accuracy of 8 digits; with about five times as many computations, the power method and the QR method reach an accuracy of only 2 and 4 digits, respectively. We should remember that the QR method is not specifically designed to calculate just one eigenvalue; therefore, a comparable efficiency cannot be expected. Thus, from our experience we can conclude that the new method shows a good speed of convergence even when the separation of the eigenvalues is poor. However, in the case of banded matrices the new method converges slowly. The new method was tried on various banded matrices arising from finite difference approximations to boundary value problems for ordinary differential equations. A computer code was written specially for banded matrices, to take advantage of the large number of zero elements in a banded matrix. We will here summarize the results of our computer runs with the following (20, 20) matrix:

a_ii = −2 for 1 ≤ i ≤ n; a_{i+1,i} = a_{i,i+1} = 1 for 1 ≤ i ≤ n−1; a_ij = 0 otherwise.     (37)
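This test matrix (37) can be generated as follows (our sketch); it is the familiar tridiagonal finite-difference matrix, which is essentially positive in the Birkhoff-Varga sense:

```python
import numpy as np

def test_matrix_37(n=20):
    """Matrix (37): -2 on the diagonal, 1 on the sub- and superdiagonals,
    0 elsewhere; essentially non-negative and irreducible."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
```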

The over-relaxation method, as described in equation (36), was tried on the previously mentioned matrix with values of γ ranging from 1 to 1.99. The speed of convergence did not show a remarkable dependence on γ. Eight digits of accuracy were obtained in 168 iterations for γ=1.73, whereas for full matrices the new method gave 9 digits of accuracy in 21 steps.

We will now return our attention to full matrices. Let R_n be a matrix (of order n) with pseudo-random entries. The new method and the power method were tried on each family of matrices (R_n, C_n, H_n) of order n=20, 40, and 80. The speed of convergence is almost the same for the two methods, remembering that each iteration step of the power method requires about twice as many computations. Within one method, it is somewhat surprising that the number of iteration steps required for a given accuracy hardly depends on the order of the Hilbert matrix at all.

No convergence rates are presented, but the experiments shown indicate a linear convergence comparable to classical methods, such as the power method. Of course, other methods, such as Rayleigh quotient iteration, exhibit faster convergence rates (usually cubic).

The table of computational results is presented for a special class of dense matrices, namely Hilbert matrices and others of similar structure. The newly proposed method is about 10% faster (in number of operations) than the power method (compare 24,000 operations in Table 2 with 27,880 operations in Table 1 for an error of the order of 10-8). For sparse matrices stemming from standard PDE problems, the new method is inferior.

Acknowledgment

The author wishes to thank Prof. Schneider of the Mathematics Department at the University of Wisconsin-Madison for his suggestions and criticism during the writing of the author’s earlier manuscript; his criticism and suggestions led to the development of this method. The author acknowledges his indebtedness to him and to his stimulating comments during the review of the earlier article [19]. The author would also like to thank Professor Tsuyoshi Ando of Hokkaido University for his proofs of the convergence results in Corollaries 1, 2, and 3, which led the author to pursue most of the research collected herein, and for his enthusiasm and help. Lastly, the author would like to express his gratitude to Prof. Thomas Laffey of the School of Mathematical Sciences, University College Dublin, for useful comments regarding the better clarification of essentially positive matrices and the refinement of the proofs of Corollaries 1 and 3.

References
