ISSN: 2168-9679
Journal of Applied & Computational Mathematics

Development of a Procedure for Finding Active Points of Linear Constraints

Said Choufi*

Department of Mathematics, University of Batna, Route de Biskra, Batna, Algeria

*Corresponding Author:
Said Choufi
Department of Mathematics
University of Batna, Algeria
Tel: +213 33 31 91 34
E-mail: choufi_said@yahoo.fr

Received Date: March 03, 2016; Accepted Date: April 28, 2017; Published Date: May 06, 2017

Citation: Choufi S (2017) Development of a Procedure for Finding Active Points of Linear Constraints. J Appl Computat Math 6: 352. doi: 10.4172/2168-9679.1000352

Copyright: © 2017 Choufi S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

In this paper, we present an iterative method for determining an active point of a set of linear constraints. It is based on two basic operations: addition and permutation of constraints. Relying on a new lemma and a new direction formula, the procedure generates a finite sequence of points whose last point constitutes an active point. The procedure also produces two matrices: the first is formed by the active constraints, which are linearly independent, and the second is a matrix whose columns are the basis vectors of the kernel of the first.

Keywords

Active point; Interior point; Kernel of matrix; Continuous optimization

Introduction

Currently, the domain of optimization is attracting considerable interest from the academic and industrial communities; see, for instance, [1-5]. The various existing techniques for solving a given problem and their efficient algorithmic implementations open up many perspectives and diverse applications in different areas [6-8].

Different optimization methods exist in the literature; among others, we cite the simplex method [1], interior point methods [1,2], and exterior point methods, together with their improved versions [9-13].

In most optimization problems, initialization points are necessary and required by the resolution algorithms for performing numerical implementations [6-8,11,12]. However, the choice of the initialization points is not general, and the values of these points depend strongly on the adopted technique. Furthermore, these points are treated as active or feasible by the applied method.

In this study, we are interested in optimization problems of the CSLP type (constraint satisfaction linear problems), where the constraints are linear and the goal is to determine an active point xact of a set E such that:

E = {x ∈ IRn, subject to Ax ≤ b} (CSLP), where

A is an m×n data matrix, not necessarily of full rank, and b is a given vector in IRm.

The problem to solve is the determination of the active points satisfying all the aforementioned constraints.
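
As an illustration of this setting, the following minimal sketch (in Python, not the author's FORTRAN code) builds the set E = {x ∈ IRn : Ax ≤ b} from a matrix A and a vector b and checks which constraints are active at a given point; the data are hypothetical and serve only as an example.

import numpy as np

def active_indices(A, b, x, tol=1e-8):
    """Return the indices i with a_i^T x = b_i (within a tolerance tol)."""
    residuals = A @ x - b
    return np.where(np.abs(residuals) <= tol)[0]

# Hypothetical data, not taken from the paper's numerical tests.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
x = np.array([1.0, 0.5])
print(active_indices(A, b, x))  # constraints 0 and 2 are active at x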

If the entries of the matrix A and of the vector b are integers, the above problem is discrete and can be solved using the ellipsoid method [4]. However, the continuous case has not been studied in the literature.

This gap has motivated the present investigation, whose purpose is to give a theoretical and practical method for solving this problem.

The method that we propose is based on the construction of an iterative algorithm such that, from any initial point (feasible or not), one produces a better point x and then associates to it two matrices A and Z. The rows of A are the active constraints, which are linearly independent. The columns of Z form a basis of the kernel of A, and are also linearly independent.

This process generates a sequence of points (xk)k∈IN which converges to the point that one searches for (feasible or active). It is important to mention that our method can be applied without knowing whether the domain of constraints is empty or not.

For the numerical implementation, we used the scientific environment FORTRAN 90 under Windows, and the obtained results were very satisfactory.

The rest of this paper is organized as follows. In Sec. 2, we give some definitions and propositions used in this article. In Sec. 3, we present the construction of the kernel matrix. In Sec. 4, we describe the active method and its implementation. In Sec. 5, we state the algorithm of the active method. Finally, we summarize our results in the last section.

We note that:

I={1, 2, 3,..., m} the set of constraint indices.

Xk: The iteration point k.

Equation: The set of indices of constraints with respect to which the point xk lies externally.

Equation: The set of indices of constraints with respect to which the point xk lies internally.

⊥: Orthogonal.

nt: The total number of iterations.

xact: The active point.

xfe: The feasible point.

Δk: The set of active constraints at the point xk.

Ak: The matrix formed by the linearly independent active constraints at the point xk.

Zk: The matrix whose columns form a basis of the kernel of Ak.

np: The total number of permutations resulting from the choice of x0.

zeli: The column of index i deleted from the matrix Z1.

Definitions and Propositions

Definition

The constraint ait x ≤ bi is called active at xk if ait xk = bi [8].

This definition leads to the fact that, for any vector v, we can introduce the set

Equation (3.1)

We then say that the vector v is a regular point of the admissible set

Equation

(or just a regular point of the constraints) if and only if it is a regular point of Equation.

Definition

Let D be a domain of constraints in IRn, defined by [5]

Equation (3.2)

We call candidate constraints any set of constraints, among the

Equation, considered as active constraints of the solution that we search for.

Proposition

A direction d is tangent to X at x ∈ X if and only if there exists a sequence (dn) with limit d, and a sequence (μn) of positive reals with limit zero, such that x + μndn ∈ X [5].

Remarks

Most algorithms fail when they have to solve a problem whose constraints are not qualified at the solution. It is therefore preferable to change the description of the set of constraints before solving the problem.

Proposition (CS, constraint satisfaction)

The constraints of the domain D are qualified at the point x ∈ D if the gradients at x of the active constraints [5]

Equation are linearly independent, where

Equation such that p +q=m.

Construction of Kernel Matrix

This section contains the main results for the numerical calculation of the kernel of an arbitrary matrix, especially in the case where we have many matrices of large size and want to avoid repeating the calculations.

The calculation of the kernel of a matrix or of a vector obeys certain rules.

These results are given in the following:

Lemma

Let v be a vector in Rn which defines a set of constraints as follows:

Δ = {x ∈ IRn : vt x = α}, where α is a real number (4.1)

then v ⊥ Δ, i.e., v is orthogonal to Δ.

Proof

Let x1, x2 ∈ Δ; then vt x1 − α = 0 (1)

vt x2 − α = 0 (2)

Subtracting (2) from (1), it comes: vt (x1 − x2) = 0.

This gives that v ⊥ (x1 − x2) for all x1, x2 ∈ Δ, from which the result follows.

From this lemma, we can construct the sets Δ+ and Δ− which help us to handle the constraint equations in the following two corollaries:

Corollary

Let Δ be a set of constraints of the form: Equation

where v is a vector of IRn and α ∈ R. Then Equation such that a = x − x' with Equation (4.2)

Corollary

Consider the same data as in Corollary 4.2; then Equation (4.3), where −v ∈ Δ.

Lemma

Let v be a nonzero vector in IRn; then its kernel is formed by the following basis Equation, where Equation, for each i = 1, 2, …, n−1 (4.4)

Proof

It suffices to show that the rows of the matrix Equation are linearly independent. Consider scalars Equation satisfying Equation. Multiplying by vt, it follows that Equation, such that Equation, and then λ = 0. From (1), we have Equation for each i = 1, …, n−1, from which the result follows.
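
The explicit basis z1, …, zn−1 of the lemma is not legible in this copy; as a numerical illustration only, the sketch below computes an orthonormal basis of Ker(vt) = {x : vt x = 0} for a nonzero vector v, which spans the same (n−1)-dimensional subspace as any basis of the kernel of v.

import numpy as np

def kernel_basis(v):
    # Columns of the returned matrix form an orthonormal basis of {x : v^T x = 0}.
    v = np.asarray(v, dtype=float).reshape(1, -1)
    _, _, Vt = np.linalg.svd(v)   # right singular vectors 2..n span the kernel
    return Vt[1:].T               # an n x (n-1) matrix

v = np.array([2.0, -1.0, 3.0])
Z = kernel_basis(v)
print(np.allclose(v @ Z, 0.0))    # True: every column is orthogonal to v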

Lemma

Let v1 and v2 be two vectors in Rn, and let the set Equation be a basis of the kernel of the vector v1, where the index im ∈ {1, 2, …, n−1} satisfies

Equation (4.5)

Then the matrix Equation is invertible if and only if Equation.

Proof

The proof is the same as that of Lemma (4.4): the rows of the matrix (v1 v2 z1 … zim−1 … zn−1)t are linearly independent vectors.

Corollary

Keeping the same data as in Lemma (4.4), but here Equation. Then the matrix Equation is of full rank.

Proof

The vectors v1, v2 are linearly dependent.

Lemma

Let v1 and v2 be two vectors in Rn, admitting Equation as a basis of the kernel of v1, and let Equation be the index set of {1, 2, 3, …, n−1} such that Equation for each Equation,

and Equation for each Equation. (4.6)

Then the set Equation forms a common basis of the kernels of the vectors v1 and v2, verifying Equation

Equation

Proof

By hypothesis, Equation is a basis of the kernel of the vector v1.

We will show that Equation (4.7)

forms a common basis of the kernels of v1 and v2, knowing that Equation is a basis of the kernel of v1 such that

Equation

When Equation. For the vector v2, we have: if Equation, it comes

Equation.

And if Equation, it comes Equation, because {i1, i2, …, ik} is the largest subset of {1, 2, …, n−1} satisfying Equation, when i ∈ {i1, i2, …, ik}.

It remains to show that the set (4.7) is linearly independent.

Equation

Because {z1, z2, z3, …, zn−1} are linearly independent, from which the result follows.
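
Since the combination formula of Lemma (4.6) is not legible in this copy, the following sketch only illustrates the object it produces: a basis of the common kernel {x : v1t x = 0 and v2t x = 0}, obtained here directly as the null space of the stacked 2×n matrix, which spans the same subspace as the basis constructed in the lemma.

import numpy as np

def common_kernel_basis(v1, v2, tol=1e-12):
    M = np.vstack([v1, v2]).astype(float)   # 2 x n
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T                      # n x (n - rank)

v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
Z = common_kernel_basis(v1, v2)
print(np.allclose(np.vstack([v1, v2]) @ Z, 0.0))  # True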

Description of the Active Method

We focus in this section on a so-called active point approach.

We construct the iterate xk+1 by the formula xk+1 = xk + αk dk, where αk is the displacement step in the direction dk.

The choice of αk and dk ensures that xk+1 approaches the boundary of the constraints better than xk, and Equation (5.1), where Ak is the matrix of active constraints at the point xk+1.

This process is repeated until the stopping test is satisfied.

Initialization

Location of the starting point x0: Let E be a set of constraints in general form (equalities, inequalities, or mixed), and let x0 be an arbitrary starting point in IRn.

We can distinguish the position of the point x0 with respect to E in one of the following three cases:

Case 1: Point x0 is located within E.

Case 2: The point x0 is located outside of E.

Case 3: The point x0 is located on the boundary of E.

A geometric representation of the position of the point x0 is given in Figure 1.

Adding and permutation of Constraints

Adding a constraint: Let Ak be the matrix of active constraints at the point xk of iteration k. To form iteration k+1, we add to the matrix Ak the constraint Equation resulting from the following two equations:

Equation if xk is the result of the first or third case cited in sub-section (5.1.1).

Indeed, when the point xk is situated in the interior of the domain E (ξ = 1), we seek the constraint nearest to this point xk. In fact, if Equation, it gives Equation for all i. Then we choose the constraint Equation that has a negative max-value Equation. Note that Equation (5.2)

This result is obtained by replacing the point xk in all the constraints of the domain E (Figure 1).


Figure 1: Geometric representation at point x0.

Otherwise, if xk is the result of the second case cited in sub-section (5.1.1), i.e.,

the point xk is situated in the exterior of E, we seek the constraint which is farthest from this point xk. In fact,

if Equation, it gives Equation for all i. Then we choose the constraint Equation that has a positive max-value: Equation

Note that Equation (5.3)

This result is obtained by replacing the point xk in all the constraints of the domain E.
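
Since the exact expressions (5.2) and (5.3) are not legible in this copy, the following sketch only assumes, consistently with the text above, that the residual of constraint i at xk is ri = ait xk − bi: for an interior point (all ri negative) the least negative residual identifies the nearest constraint, and for an exterior point the largest positive residual identifies the farthest violated one; both cases reduce to taking the maximal residual.

import numpy as np

def select_constraint(A, b, xk):
    # Residuals r_i = a_i^T x_k - b_i of all constraints at x_k (assumed form).
    r = A @ xk - b
    ik = int(np.argmax(r))        # nearest constraint (interior) or farthest violated (exterior)
    xi = 1 if r[ik] < 0 else -1   # assumed position indicator: interior +1, exterior -1
    return ik, xi, r[ik]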

Permutation of constraints: Let xk be the point at iteration k, to which two matrices Ak and Zk are associated; ik is the index of the constraint that can be added to Ak to obtain Ak+1, and zk is the column that can be eliminated from the matrix Zk-1 to reach Zk.

We permute the constraint of index ik with another constraint of index i0 resulting from the equality: Equation

if and only if, when the algorithm moves from iteration k to iteration k+1, we meet the condition Equation,

where nk is the number of columns of the matrix Zk

and Equation is the set of columns eliminated from the matrix Z1.

Remarks: (1) The rows of the matrix Ak+1 are linearly independent; they are also active at the point xk+1. (2) From the kernel of the matrix Ak, we can easily determine the matrix Zk+1, whose columns form a basis of the kernel of Ak+1.

Direction of displacement

We consider the matrix Ak composed of the linearly independent active constraints at the point xk; the columns of the matrix Zk form a basis of the kernel of Ak.

Let Equation be the constraint that may be added to the matrix Ak.

To determine the direction dk, we distinguish two alternatives:

If k = 0, we set Equation (5.4)

where ik is the index of the constraint to be added to Ak,

ξ is an indicator of the position of xk, with

ξ = −1 if xk is the result of the second case (§5.1.1), and

ξ = 1 if xk is the result of another case (§5.1.1).

If k ≠ 0, we distinguish two further alternatives:

If Equation, the direction dk is obtained by solving the following linear system:

Equation (5.5)

Equation

and zk is the column that can be eliminated from the matrix Zk-1 to obtain Zk.

If Equation, the direction dk is obtained by solving the following linear system:

Equation(5.6)

where Equation and i0 is the index of the constraint, satisfying Equation, that is concerned by the permutation (by application of Lemma 4.7).
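
The linear systems (5.5) and (5.6) defining dk are not legible in this copy. The sketch below therefore shows only one direction having the two properties the text requires, namely Ak dk = 0 (the constraints already active at xk remain active) and aikt dk with the sign of ξ (the residual of the new constraint is driven toward zero): the projection of ξ aik onto Ker(Ak), computed from an orthonormal kernel basis Zk. It is an illustration, not necessarily the system solved in the paper.

import numpy as np

def candidate_direction(Zk, a_ik, xi):
    # Projection of xi * a_ik onto range(Zk) = Ker(Ak); requires Zk with orthonormal columns.
    return xi * (Zk @ (Zk.T @ a_ik))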

Step of displacement

Let xk be the point of iteration k, and dk the direction of displacement at the point xk.

After finding the constraint associated with iteration (k+1), which is active at the point xk+1, then Equation; from the determination of the direction dk, it comes that Equation, which gives Equation.

We conclude that Equation. So, the step along the direction of displacement dk, denoted by αk, is given by the following expression:

Equation (5.7)

where ik is the index of the constraint to be added to the matrix Ak.
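
Expression (5.7) is not legible in this copy; the step below is reconstructed from the requirement stated above that the constraint ik is active at xk+1, i.e., aikt (xk + αk dk) = bik, which gives αk = (bik − aikt xk) / (aikt dk) whenever aikt dk ≠ 0. Treat this as an inferred reconstruction rather than a quotation of (5.7).

import numpy as np

def step_length(a_ik, b_ik, xk, dk):
    # Step making constraint i_k active: a_ik^T (x_k + alpha d_k) = b_ik.
    slope = a_ik @ dk
    if abs(slope) < 1e-14:
        raise ValueError("the direction is parallel to constraint i_k")
    return (b_ik - a_ik @ xk) / slope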

Remark: The active constraints at xk are also active at the point xk+1.

Theorem of convergence

Let (xk)k∈IN be an iterative sequence defined by Equation, where αk-1 is the displacement step along the direction dk-1, and x0 is a finite starting point in IRn.

Then the sequence Equation formed by the set of directions (dk)k∈IN, for the two cases (§5.1.1), is convergent after a finite number of iterations.

Proof

It suffices to show that the set of directions (dk)k∈IN is linearly independent.

By recurrence, we can write the scalar product

Equation such that Equation, where the di are the directions.

First, we consider λ1 d1 + λ2 d2 = 0 (*) and we prove that λ1 = λ2 = 0.

Applying our newly defined direction, then

Equation

we replace in (*) and obtain Equation.

As we know that d2 is a non-null direction, it follows that λ2 = 0;

now we have λ1 = λ2 = 0.

We suppose the statement is true up to step m, and we prove it for step m+1.

Equation

We have λm = 0 and, since the direction dm+1 is non-null, it follows that λm+1 = 0.

Finally, the set of directions Equation is linearly independent.

Algorithm for the Active Method

Data: The matrix A, the vector b, and the starting point x0.

Output: An active point of E.

1- Choose an arbitrary starting point x0 in IRn; set k = 0.

2- While the stopping criterion is not satisfied (a sketch of this loop in code is given after the steps):

a) Computation of a search direction: calculate dk.

b) Determine the step αk and the new point xk+1 = xk + αk dk, and add the constraint Equation active at xk+1.

c) Test: if Equation, we call the permutation procedure.

d) Construct the active matrix Equation

In the same way, we calculate a basis of Ker Ak+1.

e) k = k + 1 and return to a).
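
The following compact sketch assembles the loop of steps 1-2 using the same operations sketched earlier. The permutation step (2c) and the exact stopping test are simplified (the loop stops as soon as the current point is feasible and at least one constraint is active, or no admissible direction remains); it illustrates the overall flow and is not the author's FORTRAN 90 implementation.

import numpy as np

def find_active_point(A, b, x0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    Ak = np.empty((0, A.shape[1]))           # active constraints collected so far
    Zk = np.eye(A.shape[1])                  # orthonormal basis of Ker(Ak)
    for _ in range(max_iter):
        r = A @ x - b
        if Ak.shape[0] > 0 and np.all(r <= tol):
            return x, Ak, Zk                 # feasible point with active constraints
        ik = int(np.argmax(r))               # constraint to add (nearest or farthest, as described above)
        a_ik, b_ik = A[ik], b[ik]
        xi = 1 if r[ik] < 0 else -1
        d = xi * (Zk @ (Zk.T @ a_ik))        # stay in Ker(Ak), move toward constraint ik
        if np.linalg.norm(d) < tol:
            return x, Ak, Zk                 # no admissible direction left
        alpha = (b_ik - a_ik @ x) / (a_ik @ d)
        x = x + alpha * d                    # constraint ik becomes active at the new point
        Ak = np.vstack([Ak, a_ik])
        _, s, Vt = np.linalg.svd(Ak)         # recompute an orthonormal basis of Ker(Ak)
        rank = int(np.sum(s > tol))
        Zk = Vt[rank:].T
    return x, Ak, Zk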

Remarks:

i) This method determines the active points of a problem (E) without any condition on the constraints, i.e., it does not require the constraints to be made linearly independent.

ii) This method can be applied to any set E defined by linear constraints, even if it is empty.

Numerical Tests

From a practical point of view, our method has remarkable advantages.

This will be shown by the numerical application of this method in the different cases that may exist, varying the number of constraints, the number of variables, and the size of the matrix to be treated.

The obtained results are listed in the following tables:

Case 1: Standard form (Small size, Large size)

a - Let m=11 and n=5

Equation

Table 1 shows the standard form for the small-size and large-size cases [9].

x0 r0 ai1 r1 ai2   ai3 r3 xsol Zxsol
8 -20 2 -7 1 -3.999 -1 -1.826×10-7 1,833 -1-1,49×10-8
8   1   1   -1   3,166 1 0
8   3   0   -1 1   7,5 0,333 0,333
8   0   0   0   4,5 0,333 0,333
8   -1   0       9,333 0 1

Table 1: Standard form (Small size, Large size).

b - Let m=26 and n=12

Equation

Table 2 shows the inequality form in Large size.

x01 x02 r0 xsol1 xsol2 nt Ax Zx r6
8 8 -38 10 1,666 6 A1 A2 z1x -2,885×10-7
8 8   20 8,333     z2x  
8 8   6,66 14,99        
8 8   13,33 25        
8 8   4,99 11,66        
8 8   14,99 18,33        

Table 2: Inequality form (Large size).

Equation

Equation

Case 2: Inequality form (Large size)

Let m=39 and n=10

Equation

Equation

a=1.231059

Equation

Equation

Equation

Table 3 shows the mixed form in large size.

x01 x02 r0 xsol1 xsol2 nt Ax Z1x Z2x
8 8 -700.131 1,64 1,258 9   4,49 ×10-4 -3,95×10-10
8 8   1,258 36,878     0,013 -0,349
8 8   -64,23 36,878     0 -0,349
8 8   0,777 49,999     1,56x10-5 0
8 8   1,639 28,258     4,49x10-4 1

Table 3: Mixed form (large size).

Equation

Case 3: Mixed form (Large size)

a - Let m=23 and n=7

Equation

such that mi =8, i=1,2 and m3 =7

Equation

Equation

Equation

Table 5 shows the mixed form in large size.

x0 r0 ai1 x1 ai2 x2 ai3 x3 ai4 np r4 xsol
8 -2536.12 127.015 2.271 12.2 -62.967 65.346 3.8 -12.2 1 -1600 Is not found
8   -100 8.45 -10 83.179 0 164.64 10      
8   0 8 0 8 0 8 0      
8   0 8 0 8 20.346 -69.388 0      
8   200 -1.02 0 44.14 200 5.815 0      
8   0 8 0 8 0 8 0      
8   0 8 0 8 0 8 0      

Table 4: Standard form (large size).

x01 x02 r0 xsol1 xsol2 nt Ax zx r4
-8 -8 -30 -8 -9.18 4 A1 A2   -1.62×10-6
-8 -8   -9.4 -6,32        
-8 -10   -8 -10        
-8 -10   -2,87 -7.88        
-8 -10   -9,38 -6.17        
-8     -9,41          
-8     -13,27          
-8     -6.25          
-8     -6.81          
-8     -10.82          

Table 5: Mixed form (large size).

The remaining matrix Zx is given in Table 6.

1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0
0 -0,15 0 -1,1 -0,15 1,35 -1,5 1,35 -0,3 -0,4 0
0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 1 0
0 -0,25 0 -1,5 -0,25 -0,75 0,5 2,25 -0,5 0 0
0 0 0 0 0 0 0 0 0 0 1
0 0,05 0 0,7 0,05 -0,45 1,5 -0,45 0.1 0,8 0,4
0 0,65 0 2,1 0,65 0,149 0,5 -2,85   0 0

Table 6: The remaining matrix Zx.

Equation

Discussion

The numerical results obtained in the previous examples show the efficiency of the proposed new method for solving any initialization problem of optimization with linear constraints.

In the first example, with x0 = (8, 8, 8, 8, 8), we remark that the constraint added at the first iteration also appears in the subsequent iterations up to the third iteration. The last iteration gives the active point xact = (1.833, 3.166, 7.5, 4.5, 9.333).

In addition, our method allows us to define two matrices A3 and Z3 satisfying:

Equation

formed by constraints of the domain of optimization; it satisfies Ax·xact = bx,

and its rows are linearly independent.

Equation

which satisfies Ax·Zx = 0.

We observe that xact is feasible with respect to the constraints which do not appear in Ax.
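
As a small check, one can verify numerically the two properties discussed above for the matrices Ax and Zx produced by the method: Ax·xact = bx with linearly independent rows, and Ax·Zx = 0. The sketch below assumes Ax, bx, Zx and xact hold the outputs of the procedure; no data from the tables is reproduced.

import numpy as np

def check_output(Ax, bx, Zx, x_act, tol=1e-6):
    active = np.allclose(Ax @ x_act, bx, atol=tol)            # selected constraints are active
    independent = np.linalg.matrix_rank(Ax) == Ax.shape[0]    # rows linearly independent
    in_kernel = np.allclose(Ax @ Zx, 0.0, atol=tol)           # columns of Zx lie in Ker(Ax)
    return active and independent and in_kernel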

From a numerical point of view, it is difficult to choose the best starting point in IRn, i.e., one that helps us to obtain easily the active point that we search for.

In a numerical application, it is important to verify whether the domain of optimization is empty or not. This problem is very easy to solve with our method.

We can easily determine the state of the domain. This has been illustrated in numerical test case 3 (Table 4).

All the above results show the efficiency of this method for optimization problems where the domains are consecutive and of small size.

For large sizes, the problem is substantially the same; one only has to carry out a larger amount of calculation.

So the discussion is similar to that for domains of small size.

These results are shown in the examples of the three cases.

Conclusion

After an extensive search of the scientific literature, we have not found any method that addresses this type of continuous optimization problem; we have therefore proposed this method, together with a new formula for the direction dk.

In this work, we studied theoretically and algorithmically an active method which determines the extreme points of a set defined by linear constraints. This set is given in the form of equalities, inequalities, or both. These m constraints are linear functions of n variables. Our results can be summarized in the following points:

• Starting from any initial point, it generates points belonging to the set E.

• It is possible to construct from the m constraints two matrices, where the rows of the first are linearly independent and active, and the columns of the second form a basis of the kernel of the first matrix.

• Our method can be applied to matrices of large sizes.

• The active point is determined in at most n+ np iterations.

• The other advantage is the simplification of the computation, because each constraint used appears at most once.

• Our method can be used in other algorithms of resolution of optimization problems to simplify their initializations and to improve their results.

References
