# An Algorithm for Solving Indefinite Quadratic Programming Problems


**Corresponding Author:** Siddieg AMAEl, Department of Mathematics, Prince Sattam Bin Abdulaziz University, Alkharj Public Library, Sa'ad Ibn Mu'adh, Al Kharj, Saudi Arabia, Tel: +966 11 588 8888, Email: [email protected]

Received Date: Dec 12, 2017 / Accepted Date: Jan 11, 2018 / Published Date: Jan 24, 2018

### Abstract

In this paper we give, in section (1), a compact description of an algorithm for solving general quadratic programming problems (that is, obtaining a local minimum of a quadratic function subject to inequality constraints). In section (2) we give a practical application of the algorithm; we also discuss the computational work performed by the algorithm and try to achieve efficiency and stability as far as possible. In section (3) we show how to update the QR-factors of , when the tableau is complementary, and we give the updating of the *LDL*^{T}-factors of . In section (4) we do not describe a fully detailed method of obtaining an initial feasible point, since the linear programming literature is full of such techniques.

#### Introduction

In this section we give a detailed outline of the algorithm for indefinite quadratic programming problems. It refers to the numbers of equations and conditions that appear in the following equations [1-8]:

(1)

and

(2)

The algorithm assumes the availability of an initial basic feasible point [9-15]. The steps are:

1. Given and η, set *K*=1.

2. Solve (1) for q.

3. If , terminate with ; otherwise solve for P_{1}.

4. If is satisfied, remove q from η and update the basic variables using [16-20] to ; set *K*=*K*+1 and go to step 2. Otherwise remove q from η [21-28] and update the basic variables using .

5. Set r=1, set *K*=*K*+*r*, and add *P_{r}* to η.

6. Solve .

7. If is satisfied, update the basic variables using to ; set *K*=*K*+1 and go to step 2. Otherwise update the basic variables using to ; set r=r+1 and go to step 5.
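As a concrete illustration of the kind of computation each step relies on (the equation bodies are missing from this copy), the step within a fixed active set η can be obtained from the KKT system of an equality-constrained subproblem. This is a hedged sketch, not the author's exact scheme; `solve_eqp` and the convex test data are illustrative:

```python
import numpy as np

def solve_eqp(G, c, A, b):
    """Minimize 0.5*x^T G x + c^T x  subject to  A^T x = b,
    by solving the KKT system  [[G, A], [A^T, 0]] [x; lam] = [-c; b].
    G is n x n; the columns of A (n x m) are the active constraint normals."""
    n, m = A.shape
    K = np.block([[G, A], [A.T, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]           # minimizer x and multipliers lam

# Example: minimize (x1-1)^2 + (x2-2.5)^2 subject to x1 + x2 = 1
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[1.0], [1.0]])
b = np.array([1.0])
x, lam = solve_eqp(G, c, A, b)
print(x)                              # -> [-0.25  1.25]
```

A practical implementation would exploit the factorizations discussed in the following sections rather than forming and solving the dense KKT matrix at every iteration.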

#### Practical Application of the Algorithm

The algorithm presented above represents a general outline of a method for solving indefinite quadratic programming problems rather than an exact definition of a computer implementation. In this section we discuss the computational work performed by the algorithm and try to achieve efficiency and stability as far as possible. In doing so we follow, with slight modifications, the work of Gill and Murray, which has been applied to active set methods from the mid-seventies until now [7-10]. The slight modifications are made to cope with the new forms of the matrices used in the method when G is indefinite. In the case when G is positive (semi-)definite the active set methods are considered to be equivalent, as [20] pointed out; there a detailed description of that equivalence is given, and the equivalence is also mentioned in [6]. The major computational work of the algorithm is in the solution of

(3)

and

(4)

(5)

We do not solve them directly; instead, we make use of the special structure of the matrices involved, using the matrices H, T and U defined in eqn. (5). Accordingly, the solution of eqn. (3) is given by:

(6)

(7)

(8)

(9)

(10)

(11)

H, U and T define the inverse of the upper left partition of the basis matrix when the tableau is complementary. This calls for making them available at every complementary tableau; in other words, they are to be updated from one complementary tableau to the next [12].

Referring to H, T and U are given by:

(12)

where S^{(K)} and Z^{(K)} satisfy

(13)

and (14)

The choice of S^{(K)} and Z^{(K)} to satisfy eqns. (13) and (14), respectively, is generally open. Here we take the choice given by S=Q_{1}R^{-T}, Z=Q_{2}, which, according to K(Z^{T}GZ)≤K(G), is advantageous as far as stability is concerned. For the sake of making this section self-contained, we show how S^{(K)} and Z^{(K)} are obtained in a way suitable to this section. Let:

(15)

represent the QR factorization of , where is n×L_{K}. Thus Q^{(K)} is n×n and R^{(K)} is an L_{K}×L_{K} upper triangular matrix. Partition Q^{(K)} into , where is L_{K}×n and is . Thus, from eqn. (15) we have:

(16)

and (17)

so from eqns. (16) and (17) we define *S*^{(K)} and *Z*^{(K)} by:

(18)

and (19)

where is the identity matrix with its columns reversed. Thus we conclude by saying that the computation is focused on using the QR factorization of (when the k^{th} iteration is complementary), so updating these factors is required at each iteration when the tableau is complementary [25].
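The construction just described, S^{(K)}=Q_{1}R^{-T} and Z^{(K)}=Q_{2} from the QR factors, can be checked numerically. A minimal sketch (the matrix sizes and random data are illustrative; numpy's `qr` with `mode="complete"` supplies the full n×n Q):

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 6, 3
A = rng.standard_normal((n, L))       # stands in for the n x L_K active-constraint matrix

Q, Rfull = np.linalg.qr(A, mode="complete")   # Q is n x n, Rfull is n x L
R = Rfull[:L, :]                      # the L x L upper triangular block
Q1, Q2 = Q[:, :L], Q[:, L:]           # partition Q into its first L and last n-L columns

S = Q1 @ np.linalg.inv(R).T           # S = Q1 R^{-T}, so that S^T A = I
Z = Q2                                # columns of Z span the null space of A^T
```

The identities S^{T}A=I and Z^{T}A=0 are the properties this choice of S and Z provides.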

#### Updating the QR-Factors of

In this section we show how to update the QR-factors of when the tableau is complementary; we also give the updating of the *LDL*^{T}-factors of .

Following the stream of our discussion, two cases are to be considered separately: the case when the (k+1)^{th} iteration results in a complementary tableau, and the case when complementarity is restored at the (k+r+1)^{th} iteration after r successive non-complementary tableaux. In the first case the factors of are updated to give those of , and this is the case when a column, say , is deleted from . In the second case the factors of are used to give those of , and this is the case when one column, say , is deleted from and then r other columns are added to . We follow the same steps carried out in [9], with the appropriate modification in the second case. In the first case, let be the n×(L_{k}-1) matrix obtained by deleting the *q*^{th} column, , from . Suppose the QR-factorization of is given by: ; partition into:

where is n×(q-1) and is n×(L_{K}-q). Let R^{(K)} have the form:

where R_{11} is (q-1)×(q-1) upper triangular, R_{12} is (q-1)×(L_{k}-q), R_{22} is (L_{k}-q)×(L_{k}-q) upper triangular, is a (q-1) vector, β is an (L_{K}-q) vector and γ is a scalar. Then will have the form:

Now, let be the product of the plane rotations which gives:

where R' is (L_{k}-q)×(L_{k}-q) upper triangular. In this case is an (L_{k}-q+1)×(L_{k}-q+1) orthogonal matrix. Thus if

which is orthogonal, then

So we obtain and (20)

Thus, only the rows from the q^{th} to the L_{k}^{th} of Q^{(K)} are altered in obtaining Q^{(K+1)}, so if Q^{(K+1)} is partitioned into

(21)

then , in particular, takes the form:

(22)

Note also that the first q-1 rows of are not changed. This fact might be helpful as far as efficiency is concerned if we want to think of another alternative of choosing *q* in eqn. (1), such an alternative is:

(23)

This increases the number of rows of and R^{(K)} which are unaltered in iteration (*K*+1), which in turn reduces the effort, especially when L_{k} is relatively large. We now consider the second case, when complementarity is restored at the (k+r+1)^{th} iteration. Let . Let be obtained from by removing .

Thus

(24)

Pre-multiplying both sides in eqn. (24) by Q^{(k+1)} (defined in eqn. (21)) we get

(25)

where is defined in eqn. (20)

(26)

Define the QR-factorization of . Here is (n-L_{k}+1)×(n-L_{k}+1) and orthogonal, and is r×r upper triangular. If

Then (27)

Thus we obtain the QR-factorization of A^{(k+r+1)} with

(28)

(29)
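The first case above, deleting the q^{th} column and restoring the triangular form of R by a product of plane rotations, can be sketched as follows (a hedged illustration of the standard Givens update, not the paper's exact code; `qr_delete_col` and the test data are invented):

```python
import numpy as np

def qr_delete_col(Q, R, q):
    """Given A = Q @ R with Q (n x n) orthogonal and R (n x L) upper
    triangular, return QR factors of A with its q-th column deleted,
    restoring triangularity by plane (Givens) rotations on rows q..L-1."""
    n, L = R.shape
    R1 = np.delete(R, q, axis=1)       # has subdiagonal entries from column q onward
    Q1 = Q.copy()
    for j in range(q, L - 1):
        a, b = R1[j, j], R1[j + 1, j]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        G = np.array([[c, s], [-s, c]])         # rotation zeroing R1[j+1, j]
        R1[j:j + 2, :] = G @ R1[j:j + 2, :]
        Q1[:, j:j + 2] = Q1[:, j:j + 2] @ G.T   # keep the product Q1 @ R1 unchanged
    return Q1, R1

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
Q, R = np.linalg.qr(A, mode="complete")
Q2, R2 = qr_delete_col(Q, R, 1)        # delete the 2nd column of A
```

As the text notes, only rows q through L_{k} of the factors are touched, which is why choosing q large leaves most of the factorization unaltered.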

#### Updating the *LDL*^{T}-Factors of


The factors of are updated at each iteration when the tableau is complementary. Near the end of this section we show that is always positive definite (on the assumption that is positive semi-definite). Updating these factors is very stable when is positive definite, as we shall see; this fact counts as one of the good numerical features of the method.

We consider the case when the (k+1)^{th} iteration results in a complementary tableau. Unfortunately, in the other case, when complementarity is restored at the (k+r+1)^{th} iteration, we have not yet found a way of using the factors of in obtaining those of . However, n-L_{k}-r, the dimension of , decreases with r, in which case the effort of re-factorizing might not be so great, especially when n-L_{k} is itself small. This calls for choosing the starting *L*_{1} so that n-L_{1} is small. In the case when the number of constraints is greater than *n*, *L*_{1} is chosen to be equal to n; that is, the initial guess is a vertex. With this choice , and in the second iteration we might expect a constraint to be deleted from the active set (which is the case when the second iteration is complementary); otherwise the third iteration will definitely restore complementarity at another vertex, leaving . In the former case the dimension of is 1. In general the dimension of keeps increasing as constraints are deleted, and updating the factors is straightforward, as will be shown; on the other hand the dimension of keeps decreasing as constraints are added to the active set, and in this case we are faced with re-factorizing.

We return to the case when the (k+1)^{th} iteration is complementary. Here we essentially follow the work of [9]. In this case, as eqn. (25) shows, and using eqn. (19), we have:

(30)

the matrix is given by:

(31)

It can be shown that when a symmetric matrix is augmented by a single row and column, its lower-triangular factor is augmented by a single row. Define:

(32)

If we substitute in eqn. (31) and in eqn.(32) into the identity:

, we obtain and d_{n-L_{K}+1} as the solutions of the equations

(33)

and (34)

The numerical stability of this scheme is based on the fact that, if is positive definite, the element must be positive. In this event, eqn. (34) ensures that arbitrary growth in magnitude cannot occur in the elements of .
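The augmentation scheme of eqns. (33) and (34), a triangular solve for the new row followed by one subtraction for the new diagonal element, can be sketched as follows (toy data; `ldl_append` and the Cholesky-based factorization are illustrative, assuming a positive definite matrix):

```python
import numpy as np

def ldl_from_cholesky(M):
    """LDL^T factors of a symmetric positive definite M via Cholesky."""
    C = np.linalg.cholesky(M)            # M = C C^T, C lower triangular
    d = np.diag(C) ** 2
    L = C / np.diag(C)                   # unit lower triangular factor
    return L, d

def ldl_append(L, d, m, mu):
    """Given M = L diag(d) L^T, return factors of the bordered matrix
    [[M, m], [m^T, mu]]: the new row l solves L diag(d) l = m, and the
    new diagonal entry is mu - l^T diag(d) l."""
    w = np.linalg.solve(L, m)            # forward solve L w = m
    l = w / d                            # then diag(d) l = w
    d_new = mu - l @ (d * l)             # new diagonal element
    n = len(d)
    L2 = np.zeros((n + 1, n + 1))
    L2[:n, :n] = L
    L2[n, :n] = l
    L2[n, n] = 1.0
    return L2, np.append(d, d_new)

M = np.array([[4.0, 2.0], [2.0, 3.0]])
L, d = ldl_from_cholesky(M)
L2, d2 = ldl_append(L, d, m=np.array([1.0, 1.0]), mu=2.0)
```

When the bordered matrix is positive definite the new diagonal element comes out positive, which is exactly the stability property noted above.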

Before ending this section we show that when the k^{th} iteration and the (k+1)^{th} iteration are complementary, must be positive definite.

Let the tableau be complementary at the k^{th} iteration. Let be the matrix whose columns correspond to the active constraints, and . The increase of v_{q} changes *f* according to

(35)

(36)

and changes according to

(37)

For the next tableau (i.e., the (k+1)^{th}) to be complementary, u_{qq} must be negative, and the new value of v_{q} must not violate feasibility.

Thus, using eqn. (35), we have , which reflects the fact that *f* possesses positive curvature along the direction . Now let be obtained from by removing , and let Z^{(k+1)} be defined so that .

Pre-multiply both sides of in eqn. (37) by to get:

(38)

showing that lies in the space spanned by the columns of Z^{(k+1)}, so

(39)

for some (n-L_{k}+1)-vector . Since along at and , *f* attains its minimum along at , where

(40)

We therefore conclude, in the active set methods sense, that the direction solves the equality problem:

minimize

Subject to (41)

Using eqn. (39), , thus solves the problem

minimize ,

from which we conclude that is positive definite.
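The conclusion, that the reduced matrix can be positive definite even though G itself is indefinite, is easy to illustrate numerically (a toy example with invented data; Z is taken as the QR null-space basis used earlier):

```python
import numpy as np

G = np.diag([1.0, -1.0])              # indefinite Hessian: eigenvalues 1 and -1
A = np.array([[0.0], [1.0]])          # single active constraint with normal e2

Q, _ = np.linalg.qr(A, mode="complete")
Z = Q[:, 1:]                          # null-space basis of A^T (spans e1 here)

H_red = Z.T @ G @ Z                   # reduced Hessian on the active constraint
eigs = np.linalg.eigvalsh(H_red)      # all positive: f curves upward along Z
```

Along the feasible direction e1 the quadratic has curvature +1, even though G has a negative eigenvalue in the direction removed by the active constraint.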

#### Finding an Initial Feasible Point

In this section we are not going to describe a fully detailed method of obtaining an initial feasible point, since the linear programming literature is full of such techniques. The problem of finding a feasible point has been resolved in linear programming by a technique known as phase-1 simplex [27]. The basis of the technique is to define an artificial objective function, namely: , where is the set of indices of the constraints violated at the point , and to minimize this function with respect to , subject to the constraints . The function is linear and is known as the sum of infeasibilities. If a feasible point exists, the solution of the artificial problem is such that . In the case when m exceeds n, a non-feasible vertex is available as an initial point for phase 1, and the simplex method is applied to minimize . This process ultimately leads to a feasible vertex [28]. Direct application of this method when m is less than n is not possible since, although a feasible point may exist, a feasible vertex will not. Under these circumstances artificial vertices can be defined by adding simple bounds on the variables, but this could lead either to a poor initial point, since some of these artificial constraints must be active, or to exclusion of the feasible region. A way out of this dilemma is described in [6-9], where a number of methods, including the above one, are given. The method of Gill and Murray is advantageous in that it makes available the QR-factorization of the initial matrix of active constraints, which is then used directly in our algorithm.
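The phase-1 construction, minimizing the sum of infeasibilities over artificial slack variables, can be sketched with an off-the-shelf LP solver. For constraints Ax ≥ b one solves min 1^{T}s subject to Ax + s ≥ b, s ≥ 0; a zero optimum yields a feasible x. A hedged sketch using scipy's `linprog` (the constraint data are invented):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [-1.0, 2.0],
              [1.0, -3.0]])
b = np.array([1.0, -4.0, -6.0])        # we want a point with A x >= b

m, n = A.shape
# Variables are (x, s); minimize the sum of the slacks s.
c = np.concatenate([np.zeros(n), np.ones(m)])
# A x + s >= b  is rewritten as  -A x - s <= -b  for linprog.
A_ub = np.hstack([-A, -np.eye(m)])
b_ub = -b
bounds = [(None, None)] * n + [(0, None)] * m   # x free, s >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_feas = res.x[:n]                     # feasible point when res.fun == 0
```

If the optimal objective is positive, no feasible point exists; otherwise the x-part of the solution is a valid starting point for the main algorithm.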

#### References

- Bazaraa MS, Sherali HD, Shetty CM (1994) Nonlinear Programming: Theory and Algorithms. John Wiley and Sons.
- Coleman LLS (1990) Numerical Optimization. SIAM Books.
- Cottle RW (1990) The principal pivoting method revisited. Mathematical Programming 48: 369-385.
- David GL (2003) Linear and nonlinear programming (2nd edn.), Pearson Education.
- Dennis JE, Schnabel RB (1996) Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Classics in Applied Mathematics, Society for Industrial and Applied Mathematics.
- Fletcher R (2013) Practical Methods of Optimization. (2nd edn.), John Wiley and Sons.
- Gill PE, Murray W (1973) A numerically stable form of the simplex algorithm. Linear Algebra and its Applications 7: 99-138.
- Gill PE, Murray W (1975) Numerical Methods for constrained optimization. Academic press.
- Gill PE, Murray W (1978) Numerically stable methods for quadratic programming. Mathematical Programming 14: 349-372.
- Gill PE, Murray W, Wright MH (1981) Practical Optimization. Academic Press.
- Kunisch K, Rendl F (2003) An infeasible active set method for quadratic problems with simple bounds. SIAM Journal on Optimization 14: 35-52.
- Mohsin HAH (1996) An Extension to the Dantzig-Wolfe Method for general quadratic programming. University of Khartoum.
- Byrd RH, Gilbert JC, Nocedal J (2000) A trust region method based on interior point techniques for nonlinear programming. Mathematical Programming 89: 149-185.
- Boyd S, Vandenberghe L (2004) Convex Optimization. Cambridge University.
- Nocedal J, Wright SJ (2006) Numerical Optimization (2nd edn.). Springer Series in Operations Research and Financial Engineering.
- Nash SG, Sofer A (1996) Linear and Nonlinear Programming. McGraw-Hill, New York.
- Anitescu M (2005) On solving mathematical Programs with complementarity constraints as nonlinear programs. SIAM Journal on Optimization 15: 1203-1236.
- Byrd RH, Hribar ME, Nocedal J (1999) An interior point algorithm for large-scale nonlinear programming. Society for Industrial and Applied Mathematics 9: 877-900.
- Gould NIM, Toint PL (2005) An interactive working set method for large scale nonlinear optimization, Acta Numerica 14: 299-361.
- Gould NIM, Toint PL (2002) An iterative working set method for large-scale non-convex quadratic programming. Applied Numerical Mathematics 43: 109-128.
- Fletcher R, Leyffer S (2002) Nonlinear programming without a penalty function. Mathematical Programming 91: 239-269.
- Gill PE, Murray W, Wright MH (1991) Numerical Linear Algebra and Optimization.
- Kočvara M, Stingl M (2003) PENNON: A code for convex nonlinear and semidefinite programming. Optimization Methods and Software 18: 317-333.
- Vanderbei RJ, Shanno DF (1999) An interior-point algorithm for nonconvex nonlinear programming. Computational Optimization and Applications 13: 231-252.
- Vavasis SA (1990) Quadratic programming is in NP. Information Processing Letters 36: 73-77.
- Gill PE, Murray W, Wright MH (1981) Practical Optimization. Academic Press.
- Vavasis SA (1991) Nonlinear Optimization. Oxford University Press, New York and Oxford.
- Higham NJ (1996) Accuracy and Stability of Numerical Algorithms. SIAM Publications, Philadelphia.

Citation: Siddieg AMAEl (2018) An Algorithm for Solving Indefinite Quadratic Programming Problems. J Appl Computat Math 7: 380. DOI: 10.4172/2168-9679.1000380

Copyright: ©2018 Siddieg AMAEl. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.