Department of Mathematics, University of Batna, Route de Biskra, Batna, Algeria
Received Date: March 03, 2016; Accepted Date: April 28, 2017; Published Date: May 06, 2017
Citation: Choufi S (2017) Development of a Procedure for Finding Active Points of Linear Constraints. J Appl Computat Math 6: 352. doi: 10.4172/2168-9679.1000352
Copyright: © 2017 Choufi S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
In this paper, we present an iterative method to determine an active point of a set of linear constraints. It is based on two basic operations: addition and permutation of constraints. Using a new lemma and a new direction formula, the procedure generates a finite sequence of points whose last point is an active point. The procedure also produces two matrices: the first is formed by the active constraints, which are linearly independent, and the second is a matrix whose columns are the basis vectors of the kernel of the first.
Active point; Interior point; Kernel of a matrix; Continuous optimization
Currently, the domain of optimization is attracting considerable interest from the academic and industrial communities; see, for instance, [1-5]. The various existing techniques for solving a given problem, together with efficient algorithmic implementations, open up many perspectives and diverse applications in different areas [6-8].
Different optimization methods exist in the literature; among others, we cite the simplex method [1], interior point methods [1,2], and exterior point methods, together with their improved versions [9-13].
In most optimization problems, initialization points are necessary and required by the resolution algorithms for performing numerical implementations [6-8,11,12]. However, the choice of the initialization points is not generic, and the values of these points depend strongly on the adopted technique. Furthermore, these points are considered active or feasible in the applied method.
In this study, we are interested in optimization problems of the CSLP type (constraint satisfaction linear problems), where the set of constraints is linear and the task is to determine an active point x_{act} of a set E such that:
E = {x ∈ IR^{n} : Ax ≤ b} (CSLP), where
A is an m×n data matrix, not necessarily of full rank, and b is a given vector of IR^{m}.
The problem to solve is the determination of the active points satisfying all the aforementioned constraints.
If the entries of the matrix A and of the vector b are integers, the above problem is discrete and can be solved using the ellipsoid method [4]. The continuous case, however, has not been studied in the literature.
This gap has motivated the present investigation, whose purpose is to give a theoretical and practical method for solving this problem.
The method that we propose is based on the construction of an iterative algorithm such that, from any initial point (feasible or not), one produces a better point x and associates to it two matrices A and Z. The rows of A are the active constraints, which are linearly independent. The columns of the matrix Z form a basis of the kernel of A and are also linearly independent.
This process generates a sequence of points (x_{k})_{k∈IN} which converges to the point one searches for (feasible or active). It is important to mention that our method can be applied without knowing whether the domain of constraints is empty or not.
In the numerical implementation, we used the scientific environment Fortran 90 under Windows, and the obtained results were very satisfactory.
The rest of this paper is organized as follows. In Sec. 2, we give some definitions and propositions used in this article. In Sec. 3, we present the construction of a matrix kernel. In Sec. 4, we describe the active method and its implementation, and in Sec. 5 we state the algorithm of the active method. Finally, we summarize our results in the last section.
We note that:
I={1, 2, 3,..., m} the set of constraint indices.
X_{k}: The iteration point k.
The set of indices of constraints with respect to which the point x_{k} is exterior.
The set of indices of constraints with respect to which the point x_{k} is interior.
⊥: Orthogonal.
nt: The total number of iterations.
x_{act}: The active point.
x_{fe}: The feasible point.
Δ_{k}: The set of active constraints at the point x_{k}.
A_{k}: The matrix formed by the linearly independent active constraints at the point x_{k}.
Z_{k}: The kernel of the matrix A_{k}.
np: The total number of permutations resulting from the choice of x_{0}.
zel_{i}: The column of index i deleted from the matrix Z_{1}.
Definition
The constraint a_{i}^{t} x ≤ b_{i} is said to be active at x_{k} if a_{i}^{t} x_{k} = b_{i} [8].
This definition leads to the fact that, for any vector v, we can introduce the set of indices of the constraints active at v:
Δ(v) = {i ∈ I : a_{i}^{t} v = b_{i}} (3.1)
We then say that the vector v is a regular point of the admissible set (or simply a regular point of the constraints) if and only if the gradients of the constraints active at v are linearly independent.
Definition
Let D be a domain of constraints in IR^{n}, defined by [5]
D = {x ∈ IR^{n} : a_{i}^{t} x = b_{i}, i = 1,…,p; a_{j}^{t} x ≤ b_{j}, j = p+1,…,m}. (3.2)
We call candidate constraints any subset of the constraints of D that may be considered active at the solution we search for.
Proposition
A direction d is tangent to X at x ∈ X if and only if there exist a sequence (d_{n}) with limit d and a sequence (μ_{n}) of positive reals with limit zero such that x + μ_{n} d_{n} ∈ X [5].
Remarks
Most algorithms fail when they have to solve a problem whose constraints are not qualified at the solution. It is therefore preferable to change the description of the set of constraints before solving the problem.
Proposition (CS: constraint satisfaction)
The constraints of the domain D are qualified at the point x ∈ D if the gradients at x of the active constraints are linearly independent [5], where D comprises p equality constraints and q inequality constraints such that p + q = m.
This section contains the main results for the numerical computation of the kernel of an arbitrary matrix, especially in the case where many matrices of large size are involved and repeated calculations must be avoided.
The computation of the kernel of a matrix or a vector obeys certain compliance rules.
These results are given in the following:
Lemma
Let v ∈ IR^{n} be a vector which defines a set of constraints as follows:
Δ = {x ∈ IR^{n} : v^{t} x − α = 0}, where α is a real number. (4.1)
Then v ⊥ Δ, i.e., v is orthogonal to Δ.
Proof
Let x_{1}, x_{2} ∈ Δ; then
v^{t} x_{1} − α = 0 (1)
v^{t} x_{2} − α = 0 (2)
Subtracting (2) from (1), it comes: v^{t} (x_{1} − x_{2}) = 0.
This gives that v ⊥ (x_{1} − x_{2}) for each x_{1}, x_{2} ∈ Δ.
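The orthogonality stated in the lemma is easy to check numerically. The following sketch (the vector v, the value α, and the two points of Δ are arbitrarily chosen illustrations) verifies that the difference of any two points of the hyperplane is orthogonal to v:

```python
import numpy as np

# Numerical check of the lemma: for the hyperplane
# Delta = {x : v^t x = alpha}, the difference of any two
# points of Delta is orthogonal to v.
v = np.array([2.0, -1.0, 3.0])
alpha = 5.0

# Two points of Delta (chosen so that v^t x = alpha).
x1 = np.array([1.0, 0.0, 1.0])            # 2 - 0 + 3 = 5
x2 = np.array([0.0, 1.0, 2.0])            # 0 - 1 + 6 = 5

assert abs(v @ x1 - alpha) < 1e-12
assert abs(v @ x2 - alpha) < 1e-12

# v is orthogonal to x1 - x2, as the lemma states.
print(v @ (x1 - x2))                      # 0.0 up to rounding
```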
From this lemma, we can construct the sets Δ_{+} and Δ_{−} that allow us to handle the constraint inequalities in the following two corollaries:
Corollary
Let Δ be a set of constraints of the form (4.1), where v is a vector of IR^{n} and α ∈ IR. Then the relation (4.2) holds, such that a = x − x' with x, x' ∈ Δ.
Corollary
Consider the same data as in Corollary 4.2; then the relation (4.3) holds, where −v ∈ Δ.
Lemma
Let v be a non-zero vector in IR^{n}; then its kernel is formed by the basis {z_{1}, …, z_{n−1}} given by (4.4), for each i = 1, 2, …, n−1.
Proof
It suffices to show that the rows of the matrix (v z_{1} … z_{n−1})^{t} are linearly independent. Consider scalars λ, λ_{1}, …, λ_{n−1} satisfying λ v + Σ_{i} λ_{i} z_{i} = 0 (1). Multiplying by v^{t}, it follows that λ ‖v‖² = 0, since v^{t} z_{i} = 0 for each i, hence λ = 0. Then, from (1), we have λ_{i} = 0 for each i = 1, …, n−1, whence the result.
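Since the explicit formula (4.4) is not reproduced here, the following sketch uses one standard construction of a basis of ker v^{t}, which may differ from the paper's formula: pick a pivot index j with v_{j} ≠ 0 and take z_{i} = e_{i} − (v_{i}/v_{j}) e_{j} for each i ≠ j, which yields n − 1 linearly independent vectors orthogonal to v:

```python
import numpy as np

def kernel_basis(v, tol=1e-12):
    """Return an (n, n-1) matrix whose columns span ker(v^t).

    One standard construction (the paper's exact formula (4.4)
    may differ): pick a pivot j with v_j != 0 and take
    z_i = e_i - (v_i / v_j) e_j for every i != j.
    """
    v = np.asarray(v, dtype=float)
    n = v.size
    j = int(np.argmax(np.abs(v)))          # pivot index with v_j != 0
    if abs(v[j]) < tol:
        raise ValueError("v must be non-zero")
    Z = np.zeros((n, n - 1))
    col = 0
    for i in range(n):
        if i == j:
            continue
        Z[i, col] = 1.0
        Z[j, col] = -v[i] / v[j]           # makes v^t z = 0
        col += 1
    return Z

v = np.array([1.0, 2.0, -1.0, 3.0])
Z = kernel_basis(v)
print(np.max(np.abs(v @ Z)))               # ~0: columns lie in ker v^t
print(np.linalg.matrix_rank(Z))            # n - 1 = 3
```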
Lemma
Let v_{1} and v_{2} be two vectors in IR^{n}, and let {z_{1}, …, z_{n−1}} be a basis of the kernel of the vector v_{1}, where the index i_{m} ∈ {1, 2, …, n−1} satisfies condition (4.5).
Then the matrix (v_{1} v_{2} z_{1} … z_{i_{m}−1} z_{i_{m}+1} … z_{n−1}) is invertible if and only if such an index i_{m} exists.
Proof
The proof is the same as that of Lemma (4.4): the rows of the matrix (v_{1} v_{2} z_{1} … z_{i_{m}−1} … z_{n−1})^{t} are linearly independent vectors.
Corollary
Keeping the same data as in Lemma (4.4), but in the opposite case, the matrix is of full rank.
Proof
The vectors v_{1}, v_{2} are linearly dependent.
Lemma
Let v_{1} and v_{2} be two vectors in IR^{n}, and let {z_{1}, …, z_{n−1}} be a basis of the kernel of v_{1}, with index subset {i_{1}, i_{2}, …, i_{k}} of {1, 2, 3, …, n−1} such that v_{2}^{t} z_{i} = 0 for each i ∈ {i_{1}, …, i_{k}}. (4.6)
Then the set {z_{i_{1}}, …, z_{i_{k}}} forms a common basis of the kernels of the vectors v_{1} and v_{2}.
Proof
Since {z_{1}, …, z_{n−1}} is a basis of the kernel of v_{1}, we will show that the set
{z_{i_{1}}, …, z_{i_{k}}} (4.7)
forms a common basis for v_{1} and v_{2}. This set comes from the kernel of v_{1}, so that v_{1}^{t} z_{i} = 0 for each i ∈ {i_{1}, …, i_{k}}.
For the vector v_{2}, we have: if i ∈ {i_{1}, …, i_{k}}, it comes v_{2}^{t} z_{i} = 0.
And if i ∉ {i_{1}, …, i_{k}}, it comes v_{2}^{t} z_{i} ≠ 0, because {i_{1}, i_{2}, …, i_{k}} is the largest subset of {1, 2, …, n−1} satisfying v_{2}^{t} z_{i} = 0 when i ∈ {i_{1}, i_{2}, …, i_{k}}.
It remains to show that the set (4.7) is linearly independent; this holds because {z_{1}, z_{2}, z_{3}, …, z_{n−1}} are linearly independent, whence the result.
We focus in this section on the so-called active-point approach.
We construct the iterate x_{k+1} by the formula x_{k+1} = x_{k} + α_{k} d_{k}, where α_{k} is the displacement step in the direction d_{k}.
The choice of α_{k} and d_{k} ensures that x_{k+1} approaches the boundary of the constraints better than x_{k}, and satisfies
A_{k} x_{k+1} = b_{k}, (5.1) where A_{k} is the matrix of constraints active at the point x_{k+1}.
This process is repeated until the stopping test is satisfied.
Initialization
Location of the starting point x_{0}: Let E be a set of constraints in general form (equalities, inequalities, or mixed), and let x_{0} be an arbitrary starting point in IR^{n}.
We can distinguish the situation of the point x_{0} with respect to E in one of the following three cases:
Case 1: Point x_{0} is located within E.
Case 2: The point x_{0} is located outside of E.
Case 3: The point x_{0} is located in the boundary of E.
Geometric representation at point x_{0}
Adding and permutation of Constraints
Adding a constraint: Let A_{k} be the matrix of active constraints at the point x_{k} of iteration k. To form iteration k+1, we add to the matrix A_{k} the constraint selected as follows:
If x_{k} is the result of the first or third case cited in subsection 5.1.1, i.e., the point x_{k} is situated in the interior of the domain E (ξ = 1), we seek the constraint nearest to the point x_{k}. In this case all the values a_{i}^{t} x_{k} − b_{i}, i ∈ I, are non-positive, and we choose the constraint with the largest (least negative) value:
i_{k} = argmax_{i∈I} (a_{i}^{t} x_{k} − b_{i}). (5.2)
This result is obtained by substituting the point x_{k} into all the constraints of the domain E (Figure 1).
Otherwise, if x_{k} is the result of the second case cited in subsection 5.1.1, i.e., the point x_{k} is situated in the exterior of E, we seek the constraint farthest from the point x_{k}. In this case some values a_{i}^{t} x_{k} − b_{i}, i ∈ I, are positive, and we choose the constraint with the largest positive value:
i_{k} = argmax_{i∈I} (a_{i}^{t} x_{k} − b_{i}). (5.3)
This result is obtained by substituting the point x_{k} into all the constraints of the domain E.
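Both selection rules (5.2) and (5.3) reduce to evaluating the residuals a_{i}^{t} x_{k} − b_{i} and taking the largest one. A minimal sketch (the matrix, vector, and test points are illustrative):

```python
import numpy as np

def constraint_to_add(A, b, x):
    """Pick the index of the constraint to add at x.

    Both selection rules reduce to the residuals r_i = a_i^t x - b_i:
    if x is interior (all r_i <= 0) we pick the largest, i.e. least
    negative, residual (the nearest constraint); if x is exterior we
    pick the largest positive residual (the most violated constraint).
    In both cases this is argmax_i r_i, a sketch of rules (5.2)/(5.3).
    """
    r = A @ x - b
    return int(np.argmax(r)), r

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.array([4.0, 4.0, -1.0])

# Interior point: every residual is non-positive, nearest constraint wins.
i_int, r_int = constraint_to_add(A, b, np.array([3.0, 1.0]))
# Exterior point: the most violated constraint wins.
i_ext, r_ext = constraint_to_add(A, b, np.array([6.0, 5.0]))
print(i_int, i_ext)
```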
Permutation of constraints: Let x_{k} be the point at iteration k, with which the two matrices A_{k} and Z_{k} are associated; i_{k} is the index of the constraint that can be added to A_{k} to obtain A_{k+1}, and z_{k} is the column that can be eliminated from the matrix Z_{k−1} to reach Z_{k}.
We permute the constraint of index i_{k} with another constraint of index i_{0} resulting from the corresponding equality, if and only if, when the algorithm moves from iteration k to iteration k+1, the permutation condition is met,
where n_{k} is the number of columns of the matrix Z_{k},
and {zel_{i}} is the set of columns eliminated from the matrix Z_{1}.
Remarks: (1) The rows of the matrix A_{k+1} are linearly independent, and they are active at the point x_{k+1}. (2) From the kernel of the matrix A_{k}, we can easily determine the matrix Z_{k+1} whose columns form a basis of the kernel of A_{k+1}.
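Remark (2) can be sketched as follows: if Z_{k} spans ker A_{k}, then ker A_{k+1} is the subspace of ker A_{k} orthogonal to the new row a. One possible update (an assumption for illustration; the paper's update via its kernel lemmas may differ) computes w = Z_{k}^{t} a, drops a pivot column with w_{j} ≠ 0, and corrects the remaining columns:

```python
import numpy as np

def update_kernel(Z, a, tol=1e-10):
    """Shrink a kernel basis when one row a is added to A_k.

    If Z spans ker A_k, then ker A_{k+1} is ker A_k intersected
    with {z : a^t z = 0}. With w = Z^t a, pick a pivot column j
    (w_j != 0), drop it, and correct the others:
    z_i <- z_i - (w_i / w_j) z_j.
    """
    w = Z.T @ a
    j = int(np.argmax(np.abs(w)))
    if abs(w[j]) < tol:
        return Z                     # a is already orthogonal to ker A_k
    zj = Z[:, j]
    cols = [Z[:, i] - (w[i] / w[j]) * zj
            for i in range(Z.shape[1]) if i != j]
    return np.column_stack(cols) if cols else np.zeros((Z.shape[0], 0))

A = np.array([[1.0, 1.0, 0.0, 0.0]])
# An explicit 3-column basis of the kernel of the single row A.
Z = np.array([[1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
a = np.array([0.0, 0.0, 1.0, -1.0])   # new constraint row
Z2 = update_kernel(Z, a)
print(np.max(np.abs(np.vstack([A, a]) @ Z2)))   # ~0
```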
Direction of displacement
We consider the matrix A_{k} composed of the linearly independent active constraints at the point x_{k}; the columns of the matrix Z_{k} form a basis of the kernel of A_{k}, and the constraint of index i_{k} is the one that may be added to the matrix A_{k}.
To determine the direction d_{k}, we distinguish two alternatives:
If k = 0, we set d_{0} according to (5.4),
where i_{k} is the index of the constraint to be added to A_{k},
and ξ indicates the position of x_{k}:
ξ = −1 if x_{k} is the result of the second case (§5.1.1);
ξ = 1 if x_{k} is the result of another case (§5.1.1).
If k ≠ 0, we distinguish two further alternatives:
In the first case, the direction d_{k} results from the solution of the linear system (5.5),
where z_{k} is the column that can be eliminated from the matrix Z_{k−1} to obtain Z_{k}.
In the second case, the direction d_{k} results from the solution of the linear system (5.6),
where i_{0} is the index of the constraint concerned by the permutation (by application of Lemma 4.7).
Step of displacement
Let x_{k} be the point at iteration k, and d_{k} the direction of displacement at the point x_{k}.
After finding the constraint of index i_{k} associated with iteration k+1, which must be active at the point x_{k+1}, we require a_{i_{k}}^{t} x_{k+1} = b_{i_{k}}. Since x_{k+1} = x_{k} + α_{k} d_{k}, it comes a_{i_{k}}^{t} x_{k} + α_{k} a_{i_{k}}^{t} d_{k} = b_{i_{k}}.
We conclude that, provided a_{i_{k}}^{t} d_{k} ≠ 0, the step along the displacement direction d_{k}, denoted α_{k}, is given by the following expression:
α_{k} = (b_{i_{k}} − a_{i_{k}}^{t} x_{k}) / (a_{i_{k}}^{t} d_{k}), (5.7)
where i_{k} is the index of the constraint to be added to the matrix A_{k}.
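Formula (5.7) follows directly from requiring constraint i_{k} to be active at x_{k+1}. A minimal sketch (the constraint data, point, and direction are illustrative):

```python
import numpy as np

def step_length(a_ik, b_ik, x_k, d_k, tol=1e-12):
    """Step alpha_k making constraint i_k active at x_{k+1}.

    From a_{i_k}^t (x_k + alpha_k d_k) = b_{i_k} one gets
    alpha_k = (b_{i_k} - a_{i_k}^t x_k) / (a_{i_k}^t d_k),
    i.e. formula (5.7); d_k must not be orthogonal to a_{i_k}.
    """
    denom = a_ik @ d_k
    if abs(denom) < tol:
        raise ValueError("d_k is orthogonal to the constraint normal")
    return (b_ik - a_ik @ x_k) / denom

a = np.array([1.0, 1.0])
x = np.array([0.0, 0.0])
d = np.array([1.0, 0.0])
alpha = step_length(a, 4.0, x, d)
x_next = x + alpha * d
print(alpha, a @ x_next)                 # the constraint is now active
```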
Remark: The constraints active at x_{k} are also active at the point x_{k+1}.
Theorem of convergence
Let (x_{k})_{k∈IN} be the iterative sequence defined by x_{k} = x_{k−1} + α_{k−1} d_{k−1}, where α_{k−1} is the displacement step along the direction d_{k−1}, and x_{0} is a finite starting point of IR^{n}.
Then the sequence formed by the set of directions (d_{k})_{k∈IN}, for the two cases of §5.1.1, converges after a finite number of iterations.
Proof
It suffices to show that the set of directions (d_{k})_{k∈IN} is linearly independent.
We argue by recurrence, using the scalar products of the directions d_{i}.
First we consider λ_{1} d_{1} + λ_{2} d_{2} = 0 (*) and we prove that λ_{1} = λ_{2} = 0.
Applying our newly defined direction and substituting into (*), and since d_{2} is a non-null direction, it results that λ_{2} = 0; we then obtain λ_{1} = λ_{2} = 0.
We suppose the property true up to step m, and we prove it for step m+1.
We have λ_{m} = 0, and since the direction d_{m+1} is non-null, it results that λ_{m+1} = 0.
Finally, the set of directions is linearly independent.
Data: the matrix A, the vector b, and the starting point x_{0}.
Output: the exact active point.
1- Choose an arbitrary starting point x_{0} in IR^{n}, and set k = 0.
2- While the stopping criterion is not satisfied:
a) Compute the search direction d_{k}.
b) Determine the step α_{k} and the new point x_{k+1} = x_{k} + α_{k} d_{k}, and add the constraint active at x_{k+1}.
c) Test: if the permutation condition holds, call the permutation procedure.
d) Construct the active matrix A_{k+1}; in the same way, compute the basis of Ker A_{k+1}.
e) Set k = k + 1 and return to a).
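The steps above can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the direction is taken along the selected constraint normal, and the permutation step (c) and the kernel matrices Z_{k} are omitted:

```python
import numpy as np

def find_active_point(A, b, x0, max_iter=100, tol=1e-10):
    """Simplified sketch of the active-point iteration.

    Each iteration selects the constraint with the largest residual
    r_i = a_i^t x - b_i, moves along its normal (the paper's
    directions (5.4)-(5.6) also involve the kernel matrix Z_k,
    omitted here), and uses step (5.7) to land exactly on it.
    """
    x = np.asarray(x0, dtype=float)
    active = []
    for _ in range(max_iter):
        r = A @ x - b
        if np.max(r) <= tol and active:
            return x, active               # active (feasible) point found
        i = int(np.argmax(r))
        a_i = A[i]
        d = a_i / np.linalg.norm(a_i)      # move along the normal
        alpha = (b[i] - a_i @ x) / (a_i @ d)
        x = x + alpha * d                  # constraint i is now active
        if i not in active:
            active.append(i)
    return x, active

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([2.0, 2.0, 0.0, 0.0])        # the box [0,2] x [0,2]
x_act, idx = find_active_point(A, b, np.array([5.0, 5.0]))
print(x_act, idx)
```

Starting from the exterior point (5, 5), the sketch lands on the corner of the box where the first two constraints are active.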
Remarks:
i) This method determines the active points of a problem (E) without any condition on the constraints, i.e., it does not require the constraints to be linearly independent.
ii) This method can be applied to any set E, defined by linear constraints, and even if it is empty.
From a practical point of view, our method has remarkable advantages.
This will be shown by numerical applications of the method in the different cases that may occur, varying the number of constraints, the number of variables, and the size of the matrix involved.
The obtained results are listed in the following tables:
Case 1: Standard form (Small size, Large size)
a - Let m=11 and n=5
Table 1 shows the Standard form of small size and large size [9].
x_{0} | r_{0} | a_{i1} | r_{1} | a_{i2} | a_{i3} | r_{3} | xsol | Z_{xsol} | |
---|---|---|---|---|---|---|---|---|---|
8 | -20 | 2 | -7 | 1 | -3.999 | -1 | -1.826×10^{-7} | 1,833 | -1-1,49×10^{-8} |
8 | 1 | 1 | -1 | 3,166 | 1 0 | ||||
8 | 3 | 0 | -1 1 | 7,5 | 0,333 0,333 | ||||
8 | 0 | 0 | 0 | 4,5 | 0,333 0,333 | ||||
8 | -1 | 0 | 9,333 | 0 1 |
Table 1: Standard form (Small size, Large size).
b - Let m=26 and n=12
Table 2 shows the inequality form in Large size.
x_{01} | x_{02} | r_{0} | x_{sol1} | x_{sol2} | nt | A_{x} | Z_{x} | r_{6} |
---|---|---|---|---|---|---|---|---|
8 | 8 | -38 | 10 | 1,666 | 6 | A_{1} A_{2} | z_{1x} | -2,885×10^{-7} |
8 | 8 | 20 | 8,333 | z_{2x} | ||||
8 | 8 | 6,66 | 14,99 | |||||
8 | 8 | 13,33 | 25 | |||||
8 | 8 | 4,99 | 11,66 | |||||
8 | 8 | 14,99 | 18,33 |
Table 2: Inequality form (Large size).
Case 2: Inequality form (Large size)
Let m=39 and n=10
a=1.231059
Table 3 shows the mixed form in large size.
x_{01} | x_{02} | r_{0} | x_{sol1} | x_{sol2} | nt | A_{x} | Z_{1x} | Z_{2x} |
---|---|---|---|---|---|---|---|---|
8 | 8 | -700.131 | 1,64 | 1,258 | 9 | 4,49 ×10^{-4} | -3,95×10^{-10} | |
8 | 8 | 1,258 | 36,878 | 0,013 | -0,349 | |||
8 | 8 | -64,23 | 36,878 | 0 | -0,349 | |||
8 | 8 | 0,777 | 49,999 | 1,56x10^{-5} | 0 | |||
8 | 8 | 1,639 | 28,258 | 4,49x10^{-4} | 1 |
Table 3: Mixed form (large size).
Case 3 Mixed form (Large size)
a - Let m=23 and n=7
such that m_{i} =8, i=1,2 and m_{3} =7
Table 5 shows the mixed form in large size.
x_{0} | r_{0} | ai1 | x_{1} | ai2 | x_{2} | ai3 | x_{3} | ai4 | n_{p} | r_{4} | xsol |
---|---|---|---|---|---|---|---|---|---|---|---|
8 | -2536.12 | 127.015 | 2.271 | 12.2 | -62.967 | 65.346 | 3.8 | -12.2 | 1 | -1600 | Is not found |
8 | -100 | 8.45 | -10 | 83.179 | 0 | 164.64 | 10 | ||||
8 | 0 | 8 | 0 | 8 | 0 | 8 | 0 | ||||
8 | 0 | 8 | 0 | 8 | 20.346 | -69.388 | 0 | ||||
8 | 200 | -1.02 | 0 | 44.14 | 200 | 5.815 | 0 | ||||
8 | 0 | 8 | 0 | 8 | 0 | 8 | 0 | ||||
8 | 0 | 8 | 0 | 8 | 0 | 8 | 0 |
Table 4: Standard form (large size).
x_{01} | x_{02} | r_{0} | x_{sol1} | x_{sol2} | nt | A_{x} | z_{x} | r_{4} |
---|---|---|---|---|---|---|---|---|
-8 | -8 | -30 | -8 | -9.18 | 4 | A1 A2 | -1.62×10^{-6} | |
-8 | -8 | -9.4 | -6,32 | |||||
-8 | -10 | -8 | -10 | |||||
-8 | -10 | -2,87 | -7.88 | |||||
-8 | -10 | -9,38 | -6.17 | |||||
-8 | -9,41 | |||||||
-8 | -13,27 | |||||||
-8 | -6.25 | |||||||
-8 | -6.81 | |||||||
-8 | -10.82 |
Table 5: Mixed form (large size).
The remaining matrix Z_{x} is given in Table 6.
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | -0,15 | 0 | -1,1 | -0,15 | 1,35 | -1,5 | 1,35 | -0,3 | -0,4 | 0 |
0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0 | -0,25 | 0 | -1,5 | -0,25 | -0,75 | 0,5 | 2,25 | -0,5 | 0 | 0 |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0 | 0,05 | 0 | 0,7 | 0,05 | -0,45 | 1,5 | -0,45 | 0.1 | 0,8 | 0,4 |
0 | 0,65 | 0 | 2,1 | 0,65 | 0,149 | 0,5 | -2,85 | 0 | 0 |
Table 6: The remaining matrix Z_{x}.
The numerical results obtained in the previous examples show the efficiency of the proposed method for solving any initialization problem of optimization with linear constraints.
In the first example, with x_{0} = (8, 8, 8, 8, 8), we remark that the constraint added at the first iteration also appears in the subsequent iterations up to the third. The last iteration gives the active point x_{act} = (1.833, 3.166, 7.5, 4.5, 9.333).
In addition, our method allows defining two matrices A_{3} and Z_{3} satisfying the following: A_{x} is formed by constraints of the domain of optimization, it satisfies A_{x} x_{act} = b_{x}, and its rows are linearly independent; Z_{x} satisfies A_{x} Z_{x} = 0_{m_{x}×n}.
We observe that x_{act} is feasible with respect to the constraints which do not appear in A_{x}.
From a numerical point of view, it is difficult to choose the best starting point in IR^{n}, i.e., one that easily leads to the active point we search for.
In a numerical application, it is also important to verify whether the domain of optimization is empty or not. This question is very easy to settle with our method: we can easily determine the state of the domain, as illustrated in numerical test-case 3 (Table 4).
All the above results show the efficiency of this method for optimization problems whose domains are consistent and of small size.
For large sizes, the problem is substantially the same; one only has to perform a larger amount of calculation, so the discussion is similar to that of small-size domains.
These results are shown in the examples of the three cases.
After a long search of the literature, we have not found any method that addresses this type of continuous optimization problem; we have therefore proposed the present method with a new direction formula d_{k}.
In this work, we studied theoretically and algorithmically an active method which determines the extreme points of a set defined by linear constraints. This set is given in the form of equalities, inequalities, or both. These m constraints are linear in n variables. Our results can be summarized in the following points:
• Starting from any initial point, it generates points belonging to the set E.
• It is possible to construct from the m constraints two matrices, where the rows of the first are linearly independent and active, and the columns of the second form a basis of the kernel of the first matrix.
• Our method can be applied to matrices of large sizes.
• The active point is determined in at most n+ np iterations.
• Another advantage is the simplification of the computation, because each constraint used appears at most once.
• Our method can be used in other algorithms of resolution of optimization problems to simplify their initializations and to improve their results.