ISSN: 2168-9679
Journal of Applied & Computational Mathematics

Partitioned Particle Swarm Optimization

Bisheban M1, Mahmoodabadi MJ2* and Bagheri A3

1Young Researchers Club, Bandar Anzali branch, Islamic Azad University, Bandar Anzali, Iran

2Department of Mechanical Engineering, Sirjan University of Technology, Sirjan, Iran

3Department of Mechanical Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran

*Corresponding Author:
Mahmoodabadi M.J
Department of Mechanical Engineering
Sirjan University of Technology, Sirjan, Iran
E-mail: [email protected]

Received April 18, 2013; Accepted July 17, 2013; Published July 20, 2013

Citation: Bisheban M, Mahmoodabadi MJ, Bagheri A (2013) Partitioned Particle Swarm Optimization. J Appl Computat Math 2:133. doi: 10.4172/2168-9679.1000133

Copyright: © 2013 Bisheban M, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

The particle swarm optimization (PSO) algorithm is a population-based optimization method inspired by the flocking behavior of birds and by human social interactions. Numerous modifications of the PSO algorithm have been published, most of which make the method more complex. Several improved PSO versions succeed in keeping the diversity of the particles during the search, but at the expense of convergence speed. This paper aims to increase both the rate of convergence and the diversity of solutions in the population via two simple techniques: (a) applying improved acceleration coefficients, and (b) dividing the search space into blocks. The second technique is particularly effective for functions whose optimal design variables are situated in one block. Hence, instead of proposing a more complex variant of PSO, a simple novel technique, called the Partitioned Particle Swarm Optimizer (PPSO), is proposed. To find optimal coefficients for this method, an extensive set of experiments was conducted. Experimental results and analysis demonstrate that PPSO outperforms nine well-known particle swarm optimization algorithms with regard to global search.

Keywords

Partitioned Particle Swarm Optimization (PPSO); Improved acceleration coefficients; Partitioning of the search space; Subpopulations

Introduction

Particle Swarm Optimization (PSO) is one of the modern heuristic algorithms [1]. It was developed through simulation of simplified social systems and has been found to be robust in solving nonlinear optimization problems [1,2]. PSO can generate good-quality solutions with stable convergence characteristics at a high speed [3,4]. Shi and Eberhart introduced a linearly decreasing inertia weight [5,6]. A fuzzy controller has been used to adapt the inertia weight dynamically [7,8]. Clerc and Kennedy analyzed the particle trajectory and introduced a set of coefficients to control convergence tendencies [9]. Kennedy and Mendes investigated various population topologies and proposed the von Neumann topological structure [10,11]. Parsopoulos and Vrahatis combined local and global best topologies and proposed the Unified PSO (UPSO) [12]. Mendes, Kennedy and Neves proposed the Fully Informed Particle Swarm (FIPS), in which the previous best positions of all of a particle's neighbors are considered when calculating the next positions [13]. By moving particles towards nearby particles with better fitness, Peram et al. proposed the Fitness-Distance-Ratio based PSO (FDR-PSO) [14]. Van den Bergh and Engelbrecht designed the Cooperative Particle Swarm Optimizer (CPSO-H) by using multiple swarms to optimize different components of the solution vector cooperatively [15]. A new method to maintain diversity in a standard generational evolutionary algorithm was proposed by creating subpopulations [16]. The notion of subpopulation has also been applied to a hybrid model based on PSO and the standard genetic algorithm (GA) [17]. The Comprehensive Learning Particle Swarm Optimizer (CLPSO), which performs well on multimodal problems, was introduced in [18]; in CLPSO, each dimension of a particle may learn from the corresponding dimension of a different particle's personal best solution. In the Adaptive PSO (APSO) [19], an elitist learning strategy is introduced, and the inertia weight and acceleration coefficients are controlled by evaluating the population distribution and particle fitness. In Cellular PSO [20], a mechanism is integrated into the velocity update formula to modify the trajectories of particles so that they avoid being trapped in local optima. In order to improve exploration ability, a random velocity term has been added to the velocity update [21]. Inspired by migratory behavior in nature, the population has been randomly partitioned into several sub-swarms, with some particles migrating from one complex to another at periodic stages of the evolution to enhance the diversity of the population [22]. By decomposing the search space into subspaces that are probed by sub-swarms, a master-slave model for parallel cooperative micro-particle swarm optimization was introduced [23]. A PSO with Random Dimension Selection (PSORDS) algorithm, which uses random dimension selection instead of stochastic coefficients, has also been proposed [24].

While these PSO variants improve the original algorithm, they do so at the cost of added complexity. Hence, a simple novel technique is introduced in this paper: to improve exploration of the search space and escape from local minima, the particles are restricted to subspaces, and to increase the convergence rate, the acceleration coefficients are improved. The rest of this paper is organized as follows. The next section gives a brief review of the original PSO algorithm. The section after that introduces PPSO in detail, followed by sections analyzing the parameters used in PPSO. Experimental results and a comparison study verifying the capability of PPSO are then presented, and the final section concludes the paper.

Basic Concept of Particle Swarm Optimization

James Kennedy and Russell C. Eberhart [1] originally proposed the PSO algorithm for optimization. PSO is a population-based search algorithm based on the simulation of the social behavior of birds within a flock. Although PSO was originally adopted for balancing weights in neural networks [25], it soon became a popular global optimizer, mainly for problems in which the decision variables are real numbers [26,27]. In PSO, particles are flown through a hyper-dimensional search space. Changes to the positions of particles within the search space are based on the social-psychological tendency of individuals to emulate the success of others. Let $x_i(t)$ denote the position of particle $i$ at time step $t$. The position of particle $i$ is then changed by adding a velocity $v_i(t+1)$ to the current position:

$x_i(t+1) = x_i(t) + v_i(t+1)$ (1)

The velocity vector reflects the socially exchanged information and, in general, is defined in the following way:

$v_i(t+1) = W\, v_i(t) + C_1 r_1 \left( p_i(t) - x_i(t) \right) + C_2 r_2 \left( p_g(t) - x_i(t) \right)$ (2)

where $r_1$ and $r_2$ are random real values with uniform distribution in [0,1]. $C_1$ is the cognitive learning factor and represents the attraction of a particle toward its own success; $C_2$ is the social learning factor and represents the attraction of a particle toward the success of the entire swarm. $W$ is the inertia weight, which is employed to control the effect of the previous velocity on the current velocity. $p_i(t)$ is the personal best position of particle $i$, and $p_g(t)$ is the position of the best particle in the entire swarm.
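
For concreteness, the update rules (1) and (2) can be written in a few lines of Python. This is a minimal sketch of the standard PSO step, not code from the paper; the function name, array shapes, and the rng argument are our own choices:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w, c1, c2, rng):
    """One PSO iteration per Equations (1)-(2).

    x, v, p_best: arrays of shape (n_particles, n_dims);
    g_best: array of shape (n_dims,), broadcast over the swarm.
    """
    r1 = rng.random(x.shape)              # uniform in [0, 1]
    r2 = rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (p_best - x)     # cognitive component
             + c2 * r2 * (g_best - x))    # social component
    return x + v_new, v_new               # Equation (1)
```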

PPSO

In this section, a novel variant of PSO is proposed that is improved by a new search technique and by modified acceleration coefficients for updating particle positions. Traditional PSO algorithms easily fall into stagnation when no particle is able to discover a better position than its previous best position over several iterations. The aim of introducing subspaces is to restrict particles so as to keep diversity and to attempt to evade suboptimal convergence. The search space is divided into subspaces, and particles are forwarded to them to search for global minima. This strategy allows particles to explore large areas in the initial iterations, and in multimodal landscapes, particles can locate multiple optima simultaneously, quickly and precisely. In effect, the particles forwarded to a region work as a team: each team explores one subspace without sharing information with other teams and finds the best solution there. Each dimension is divided into R parts as follows:

$\Delta x = \dfrac{x_{\max} - x_{\min}}{R}$ (3)

$x_{\min}^{r} = x_{\min} + (r-1)\,\Delta x, \quad r = 1, \ldots, R$ (4)

$x_{\max}^{r} = x_{\min} + r\,\Delta x$ (5)

where R is the number of subspaces. If the search space is defined as $[x_{\min}, x_{\max}]$ in each dimension, the search bounds of the rth subspace are $[x_{\min}^{r}, x_{\max}^{r}]$.
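Under the reconstruction of Equations (3)-(5) given above, the subspace bounds are straightforward to compute. In this sketch each subspace uses the same part index in every dimension, which is our reading of the "blocks" described in the abstract; the names and signature are illustrative only:

```python
def subspace_bounds(x_min, x_max, R):
    """Split [x_min, x_max] into R equal parts (Equations (3)-(5));
    part r is used in every dimension of the rth subspace."""
    delta = (x_max - x_min) / R              # Equation (3)
    return [(x_min + r * delta,              # Equation (4)
             x_min + (r + 1) * delta)        # Equation (5)
            for r in range(R)]

# subspace_bounds(-100.0, 100.0, 4)
# -> [(-100.0, -50.0), (-50.0, 0.0), (0.0, 50.0), (50.0, 100.0)]
```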

Search algorithm

As well as the search space, the population is divided into R subpopulations. The number of members in a group is the integer part of the population size divided by R; if the remainder is not zero, the remaining particles are added randomly to groups. Each group is sent to a particular region. After the initial positions are distributed randomly, the groups start exploring simultaneously for a maximum allowed number of iterations. By calculating the fitness values, the personal best positions $p_i^r$ and the best position $p_g^r$ of each subspace are determined, while there is no communication among groups. Finally, the personal best positions and the best particles are memorized as the $p_i^r$'s and $p_g^r$'s. A possible implementation of this group assignment is sketched below.
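
Whether the leftover particles go to distinct groups is not specified in the text, so the replace=False choice in this sketch is an assumption:

```python
import numpy as np

def group_sizes(pop_size, R, rng):
    """Each group gets pop_size // R members; the remaining
    pop_size % R particles are added to randomly chosen groups."""
    sizes = np.full(R, pop_size // R)
    extras = rng.choice(R, size=pop_size % R, replace=False)
    sizes[extras] += 1
    return sizes

# group_sizes(36, 6, np.random.default_rng()) -> array([6, 6, 6, 6, 6, 6])
```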

In the transition step, all particles are re-initialized with a uniform random distribution over the entire search space, to have another opportunity to find the global optimum. After a predetermined number of iterations, the personal best positions and the best particle are memorized as the $p_i$'s and $p_g$. In both the first and the transition stages, if the global minimum in the rth region is found, or if no better $p_g^r$ can be found after ε iterations, the rth group stops searching; the iteration number of the second stage may then be increased accordingly. Comparing the results of the first and the transition stages, $p_g$ for the second stage is selected. If $p_g$ was obtained by the rth group, this group is called the best group. In the second stage, in order to avoid being trapped in local minima and to discover better positions, particles can fly through the entire search space. Particles are initialized as in the pseudo code shown in Figure 1.


Figure 1: Pseudo code for initializing the personal best positions in the second stage.

Particles in the best group remain in their positions, while the others jump to new positions around the best solution, as follows:

$x_i = p_g + \mathbf{N}(\mu, \sigma)$ (6)

where $\mathbf{N}(\mu, \sigma)$ is a vector randomly generated from a normal distribution with mean parameter μ and standard deviation parameter σ. Here, μ and σ are set to 0 and 1, respectively [19]. The iteration number of the last stage is defined as follows:

$IT_{\text{last}} = \dfrac{FE - \sum_{i=1}^{R} NM(i)\, IT(i)}{PS}$ (7)

where FE denotes the number of function evaluations, NM(i) the number of members of the ith group, IT(i) the number of iterations of the ith group, and PS the population size. The pseudo code of PPSO is shown in Figure 2.
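
The second-stage re-initialization of Equation (6) might look as follows in Python; treating $\mathbf{N}(\mu, \sigma)$ as an independent draw per dimension is our assumption:

```python
import numpy as np

def second_stage_positions(x, group_ids, best_group, p_g, rng,
                           mu=0.0, sigma=1.0):
    """Particles of the best group keep their positions; all others
    jump to points around the global best p_g (Equation (6))."""
    jumps = p_g + rng.normal(mu, sigma, size=x.shape)  # p_g + N(mu, sigma)
    stay = (group_ids == best_group)[:, None]          # best group stays put
    return np.where(stay, x, jumps)
```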


Figure 2: Pseudo code of PPSO.

Acceleration coefficients

In this paper, the acceleration coefficients, the social learning factor ($C_2$) and the cognitive learning factor ($C_1$), are initialized to 2.0 and increased over the iterations as follows [28]:

$C_j(t+1) = C_j(t) + \delta_j, \quad j = 1, 2$ (8)

Reference [28] proposed δj as a random value in the interval [0.01, 0.1]; the interval [0.05, 0.1] is applied for the selection of δj in [19]. In order to find the optimal interval for PPSO, an extensive set of experiments was conducted, and uniformly generated random values in the interval [0.025, 0.80] were chosen. The sum of $C_1$ and $C_2$ should be clamped between 3.0 and 4.0, and each of them should lie between 0 and 4 [28]. If the sum is larger than 4.0, the following expressions are used to reset the coefficients:

$C_1 = \alpha_1 \dfrac{C_1}{C_1 + C_2}$ (9)

$C_2 = \alpha_2 \dfrac{C_2}{C_1 + C_2}$ (10)

If the sum is smaller than 3.0, both coefficients are magnified by:

$C_1 = \alpha_3 \dfrac{C_1}{C_1 + C_2}$ (11)

$C_2 = \alpha_3 \dfrac{C_2}{C_1 + C_2}$ (12)

In [19], α1 = α2 = 4.0 is considered. Experiments with our algorithm reveal that α1 = 0.5, α2 = 0.8 and α3 = 3 lead to good performance on most of the test functions.
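
Putting Equations (8)-(12), as reconstructed above, together gives the following sketch of the per-iteration coefficient update; the interval [0.025, 0.8] and the default α values come from the text, while the function itself is illustrative:

```python
import numpy as np

def update_coefficients(c1, c2, rng, a1=0.5, a2=0.8, a3=3.0):
    """Grow C1 and C2 by random increments (Equation (8)), then rescale
    whenever their sum leaves the admissible band [3.0, 4.0]."""
    c1 += rng.uniform(0.025, 0.8)     # delta_1
    c2 += rng.uniform(0.025, 0.8)     # delta_2
    s = c1 + c2
    if s > 4.0:                       # Equations (9)-(10)
        c1, c2 = a1 * c1 / s, a2 * c2 / s
    elif s < 3.0:                     # Equations (11)-(12)
        c1, c2 = a3 * c1 / s, a3 * c2 / s
    return c1, c2
```

Starting from C1 = C2 = 2.0, the first call drives the sum above 4.0 and rescales the pair to roughly 0.25 and 0.40, matching the initial drop visible in Figure 3.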

Figure 3 illustrates the changes of the acceleration coefficients over 80 iterations. Initially, both coefficients are set equal to 2.0. In the second iteration, $C_2$ and $C_1$ fall substantially to 0.39 and 0.25, respectively, so the relative effect of the inertia weight increases; this can help particles explore new regions and may improve the ability to flee from local minima. After sharply growing to 1.88 and 1.12, respectively, $C_2$ and $C_1$ steadily rise to 2.02 and 1.28, which results in a stronger tendency to search around $p_g$. The black line then declines again to 0.43 and the red line decreases to 0.23, and within two iterations they grow to 2.08 and 1.25, respectively. Although both the social and cognitive learning factors show similar trends, the small gap between them discourages premature convergence; therefore, there is a good balance between the ability to explore new regions and to search a smaller region around $p_g$. Moreover, in the first iterations $C_2$ is greater than $C_1$, so the tendency toward the success of the entire swarm is greater than toward individual success. In contrast, from iteration 16 to 38, except for iteration 22, the pattern changes, with higher maximum values of $C_1$. These variations are another reason for the algorithm's ability to escape from local minima.


Figure 3: Characteristics of the acceleration coefficients over the iterations.

Inertia weight

In this paper, an adaptive inertia weight is used, which is given in Equation (13) [19].

$W(f) = \dfrac{1}{1 + 1.5\, e^{-2.6 f}} \in [0.4, 0.9]$ (13)

where 0.4 and 0.9 are the lower and upper bounds of the inertia weight, respectively. The parameter f, which is called the evolutionary factor, is defined by Equation (14).

$f = \dfrac{d_g - d_{\min}}{d_{\max} - d_{\min}} \in [0, 1]$ (14)

$d_i = \dfrac{1}{N-1} \sum_{j=1,\, j \neq i}^{N} \sqrt{\sum_{k=1}^{D} \left( x_i^k - x_j^k \right)^2}$ (15)

where $d_i$ is the mean distance of particle $i$ to all of the other particles in its group, measured with the Euclidean metric of Equation (15). N and D are the population size of a group and the number of dimensions, respectively. The mean distance of the globally best particle is denoted by $d_g$. By comparing all the $d_i$'s of a group, the maximum and minimum distances $d_{\max}$ and $d_{\min}$ are determined.
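
The following sketch computes the evolutionary factor and the resulting inertia weight; Equation (13) is assumed to be the APSO mapping $W(f) = 1/(1 + 1.5 e^{-2.6 f})$ of [19], consistent with the stated bounds 0.4 and 0.9:

```python
import numpy as np

def inertia_weight(positions, g_idx):
    """positions: (N, D) array of one group; g_idx: index of its best particle."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))      # pairwise Euclidean distances
    n = positions.shape[0]
    d = dist.sum(axis=1) / (n - 1)               # mean distances, Equation (15)
    f = (d[g_idx] - d.min()) / (d.max() - d.min() + 1e-30)  # Equation (14)
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))  # Equation (13), in [0.4, 0.9]
```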

Analysis of PPSO

In order to gain insight into how the parameters influence the performance of PPSO, in the following sections the parameters of PPSO are varied separately to find their best ranges. All parameters of PPSO are set as in Table 1, except where noted otherwise. The mean results of 30 independent trials are presented. An extensive set of experiments was conducted over different test functions, of which a few, selected at random, are shown in this paper.

Number of iterations of the first stage: 500
Dimension of variable-dimension functions: 30
δj: [0.025, 0.8]
α1: 0.5
α2: 0.8
α3: 3
ε: 10
Population size: 36
Total iterations: 5000
Number of subspaces: 6

Table 1: Suggested parameter values for PPSO.

Twenty-one benchmark functions, which represent challenges for optimization methods, are used. The name of each function, the search range of its variables, the location of the global optimum $x_{opt}$, and the corresponding optimum value $f(x_{opt})$ are summarized in Table 2. Note that all of these test functions are to be minimized.

No. | Name | Search range | $x_{opt}$ | $f(x_{opt})$
1 | Sphere | [-100,100]^D | [0, 0, ..., 0] | 0
2 | Rosenbrock | [-2.048,2.048]^D | [1, 1, ..., 1] | 0
3 | Quadric | [-100,100]^D | [0, 0, ..., 0] | 0
4 | Griewank | [-600,600]^D | [0, 0, ..., 0] | 0
5 | Weierstrass | [-0.5,0.5]^D | [0, 0, ..., 0] | 0
6 | Quartic | [-1.28,1.28]^D | [0, 0, ..., 0] | 0
7 | Noncontinuous Rastrigin | [-5.12,5.12]^D | [0, 0, ..., 0] | 0
8 | Rastrigin | [-5.12,5.12]^D | [0, 0, ..., 0] | 0
9 | Beale | [-4.5,4.5]^2 | [3, 0.5] | 0
10 | Bohachevsky1 | [-100,100]^2 | [0, 0] | 0
11 | Bohachevsky2 | [-100,100]^2 | [0, 0] | 0
12 | Bohachevsky3 | [-100,100]^2 | [0, 0] | 0
13 | Booth | [-10,10]^2 | [1, 3] | 0
14 | Branin | [-5,10]×[0,15] | [-π, 12.275], [π, 2.275], [9.42478, 2.475] | 0.397887
15 | Easom | [-100,100]^2 | [π, π] | -1
16 | Hartmann1 | [0,1]^3 | [0.114614, 0.555649, 0.852547] | -3.86278
17 | Hump | [-5,5]^2 | [0.0898, -0.7126], [-0.0898, 0.7126] | -1.0316
18 | Matyas | [-10,10]^2 | [0, 0] | 0
19 | Michalewicz1 | [0, π]^2 | [2.2029055, 1.5707963] | -1.8013
20 | Michalewicz2 | [0, π]^5 | [2.2029055, 1.5707963, 1.2849916, 1.9230585, 1.7204698] | -4.6876582
21 | Needle-in-a-haystack | [-5.12,5.12]^2 | [0, 0] | -3600

Table 2: Test Functions.

The velocity component $v_i^d$ is limited to the range $[-v_{\max}^d, v_{\max}^d]$, where $v_{\max}^d$ is set to 20% of the search range [29]. If the velocity in the dth dimension violates this range, it is replaced by $\mathrm{sign}(v_i^d)\, v_{\max}^d$. Likewise, positions are clamped as $x_i^d = \min\left(\max\left(x_i^d, x_{\min}\right), x_{\max}\right)$ to prevent particles from moving out of the search bounds $[x_{\min}, x_{\max}]$. The entry zero in all of the following tables denotes a number smaller than $10^{-324}$, the smallest positive double-precision value.
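
A sketch of this bound handling follows, with np.clip standing in for the sign-based replacement described above (the two are equivalent here); the 20% fraction is from the text, the rest is illustrative:

```python
import numpy as np

def clamp(x, v, x_min, x_max, frac=0.2):
    """Limit each velocity component to 20% of the search range and
    keep positions inside [x_min, x_max]."""
    v_max = frac * (x_max - x_min)
    v = np.clip(v, -v_max, v_max)   # same effect as sign(v) * v_max on violations
    x = np.clip(x, x_min, x_max)
    return x, v
```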

Advantages of PPSO

In this section, the modified acceleration coefficients and the partitioning of the search space into blocks are each judged on their own merits. The adaptive inertia weight described above is used, with both acceleration coefficients set to 2.05. By partitioning the search space without applying the modified acceleration coefficients, the effect of this technique on the performance of PPSO is revealed, and vice versa. The results are presented in Tables 3 and 4. It can be seen that the simple technique of partitioning the search space into blocks not only makes the search ability stronger but also increases the rate of convergence.

Function | Statistic | Simple variant of PSO | PSO with modified acceleration coefficients only | PSO with partitioned search space only | PPSO
f1 | Best | 5.3980×10^-86 | 0 | 0 | 0
f1 | Worst | 3.2497×10^-87 | 2.3037×10^-137 | 4.8384×10^-94 | 1.5154×10^-136
f1 | Mean | 1.6472×10^-79 | 2.1460×10^-136 | 2.4192×10^-95 | 7.5770×10^-138
f1 | Std. Dev. | 7.2616×10^-79 | 6.0325×10^-136 | 1.0819×10^-94 | 3.3885×10^-137
f2 | Best | 1.8824×10^1 | 7.3931×10^-1 | 1.9104×10^-8 | 6.9124×10^-12
f2 | Worst | 2.5290×10^1 | 2.49751 | 1.2292×10^-4 | 1.5005×10^-6
f2 | Mean | 2.0009×10^1 | 1.6701 | 2.7554×10^-5 | 2.0057×10^-7
f2 | Std. Dev. | 1.6533 | 4.342×10^-1 | 3.2341×10^-5 | 3.8164×10^-7
f3 | Best | 5.2318×10^-4 | 0 | 0 | 0
f3 | Worst | 4.0336×10^-2 | 3.5010×10^-32 | 4.3337×10^-35 | 1.5511×10^-46
f3 | Mean | 6.3514×10^-3 | 4.7657×10^-33 | 4.2690×10^-36 | 7.7560×10^-48
f3 | Std. Dev. | 9.4327×10^-3 | 1.0955×10^-32 | 1.3141×10^-35 | 3.4684×10^-47
f4 | Best | 0 | 0 | 0 | 0
f4 | Worst | 3.2020×10^-2 | 7.3960×10^-3 | 0 | 0
f4 | Mean | 9.6035×10^-3 | 7.3960×10^-4 | 0 | 0
f4 | Std. Dev. | 9.9730×10^-3 | 2.2764×10^-3 | 0 | 0
f5 | Best | 0 | 0 | 0 | 0
f5 | Worst | 2.4984 | 2.6221×10^-1 | 2.8242×10^-5 | 0
f5 | Mean | 6.6790×10^-1 | 6.1906×10^-2 | 1.4121×10^-6 | 0
f5 | Std. Dev. | 8.1019×10^-1 | 8.8027×10^-2 | 6.3151×10^-6 | 0
f6 | Best | 1.2633×10^-3 | 3.2819×10^-5 | 3.0071×10^-5 | 2.8664×10^-5
f6 | Worst | 4.7395×10^-3 | 7.5766×10^1 | 2.9000×10^-3 | 2.5200×10^-3
f6 | Mean | 3.0865×10^-3 | 2.8834×10^1 | 1.1975×10^-3 | 7.7829×10^-4
f6 | Std. Dev. | 9.4352×10^-4 | 2.4067×10^1 | 9.4450×10^-4 | 7.6605×10^-4
f7 | Best | 1.4000×10^1 | 0 | 0 | 0
f7 | Worst | 3.9000×10^1 | 4.2000×10^1 | 0 | 0
f7 | Mean | 2.6000×10^1 | 1.3500×10^1 | 0 | 0
f7 | Std. Dev. | 7.8001 | 1.6116×10^1 | 0 | 0
f8 | Best | 1.5919×10^1 | 0 | 0 | 0
f8 | Worst | 5.4722×10^1 | 4.4773×10^1 | 0 | 0
f8 | Mean | 3.5619×10^1 | 8.1586 | 0 | 0
f8 | Std. Dev. | 1.1788×10^1 | 1.6787×10^1 | 0 | 0

Table 3: Merits of the modified acceleration coefficients and of partitioning the search space on PSO.

Number of subspaces

In this section, the number of subspaces is determined so as to ensure sufficient diversity of particles (Table 4). A greater number of subspaces increases the probability that the algorithm can explore regions without the attraction of local minima in other regions. On the other hand, the rate of convergence becomes slower as the number of subspaces increases. Furthermore, if the dimensions of the global minimum are placed in different regions, increasing the number of subspaces cannot help to reach the global minimum. Therefore, a balance is needed. An exhaustive set of experiments was conducted with a fixed number of particles in order to investigate which number of regions contributes to good results. If the number of regions is increased, the number of particles in each group decreases; hence there should also be a balance between the number of subspaces and the total number of particles to enable good exploration within the subspaces.

Subspaces | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
2 | 4.0312 | 3.0563 | 0.0021984 | 0.0015983
3 | 9.1836E-6 | 1.0948E-5 | 0.0020557 | 0.0023226
6 | 7.6532E-6 | 7.1094E-6 | 0.0029927 | 0.0019425
7 | 1.8902E-6 | 2.5823E-6 | 0.0024364 | 0.0019783
10 | 2.5992E-6 | 3.1969E-6 | 0.0021393 | 0.0016112
11 | 5.9704E-7 | 7.9011E-7 | 0.0026192 | 0.0024394
14 | 2.0540E-7 | 1.9815E-7 | 0.0028811 | 0.0024995
15 | 4.6731E-8 | 6.2346E-8 | 7.7829E-4 | 7.6605E-4
17 | 5.7236E-7 | 6.3662E-7 | 0.0017587 | 0.0013614
18 | 2.4536E-6 | 3.3057E-6 | 0.0018858 | 0.0017758
19 | 3.0786E-6 | 4.9099E-6 | 0.0019014 | 0.0007980

Table 4: Effect of the number of subspaces in PPSO.

Iteration number of first stage

The optimal maximum allowed number of iterations of the first stage is tied to the maximum allowed number of iterations of the second stage.

$FE = NM \times R \times F\_IT + PS \times S\_IT$ (16)

where F_IT and S_IT are the maximum allowed numbers of iterations of the first and second stages, respectively, PS is the population size, NM is the number of members in each group, FE is the number of function evaluations, and R is the number of subspaces. By substituting $NM = PS / R$ in Equation (16), we have:

$FE = PS \times (F\_IT + S\_IT)$ (17)

By choosing the value of F_IT carefully, a configuration can be found that maintains the balance between F_IT and S_IT. The ratio of F_IT to S_IT is defined as N:1. Table 5, which lists the results obtained with different values of the parameter N, makes clear which value should be chosen to achieve better performance.
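
As a worked check under the suggested settings of Table 1, PS = 36 and 5000 total iterations give FE = 36 × 5000 = 180,000 by Equation (17); choosing F_IT = 500 then leaves S_IT = 4500 and N = 500/4500 ≈ 0.111, which corresponds to the second column of Table 5.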

N | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
0.163 | 8.3517E-8 | 1.0213E-7 | 0.0010682 | 8.4955E-4
0.111 | 4.6731E-8 | 6.2346E-8 | 7.7829E-4 | 7.6605E-4
0.081 | 1.00134E-7 | 2.6681E-7 | 0.0015144 | 8.6131E-4
0.053 | 6.3684E-8 | 8.4437E-8 | 0.0013360 | 9.9126E-4
0.020 | 5.2527E-8 | 5.0510E-8 | 0.0013307 | 7.5878E-4
0.010 | 2.1609E-7 | 2.0105E-7 | 0.0014006 | 0.0011257
0.005 | 0.48498 | 0.82835 | 0.0020807 | 0.0012582
0.002 | 0.53811 | 0.75343 | 0.0020584 | 0.0013340

Table 5: Effect of the ratio of the number of first-stage iterations to the number of second-stage iterations.

Table 5 shows that there is not much difference among the larger numbers of first-stage iterations; therefore, ε is set to 10. On the other hand, because of the resulting incomplete search in the subspaces, small numbers of first-stage iterations led to poor performance.

The convergence curves for the Rosenbrock function are shown in Figures 4a and 4b. Even if each dimension of the global minimum is situated in one region, the algorithm cannot fully reach the best solution within the restricted areas alone; in order to find the global minimum, the search area is extended to the entire search space in the second stage. If the search space is divided into 15 regions, the dimensions of the global minima of the Hartmann1 and Michalewicz2 functions fall in different regions. The convergence curves of these functions are shown in Figures 5a and 5b. These figures clearly indicate that the global minimum would not be found by searching only in subspaces.


Figure 4: The median convergence characteristics on Rosenbrock. (a) First and transition stages; (b) second stage.


Figure 5: The median convergence characteristics of the first and transition stages. (a) Hartmann1; (b) Michalewicz2.

Parameters of acceleration coefficients

To assess the sensitivity to δ1,2, different values of δ1,2 were tested. The results presented in Table 6 make it clear that the interval [0.025, 0.8] is suitable. The magnitudes of α1, α2 and α3 are also examined in this section. From Tables 7-9, it is found that the best results are achieved when α1, α2 and α3 are set to 0.5, 0.8, and 3, respectively.

δ1,2 | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
[0.025, 0.05] | 2.0540×10^-7 | 1.9815×10^-7 | 1.2547×10^-3 | 1.0330×10^-3
[0.05, 0.1] | 2.0228×10^-7 | 3.0964×10^-7 | 1.7693×10^-3 | 1.1993×10^-3
[0.1, 0.2] | 3.9130×10^-7 | 6.3112×10^-7 | 1.3519×10^-3 | 1.0320×10^-3
[0.2, 0.4] | 2.4374×10^-7 | 3.9810×10^-7 | 1.0961×10^-3 | 8.1700×10^-4
[0.4, 0.8] | 1.1846×10^-5 | 2.2811×10^-5 | 1.1639×10^-3 | 1.0881×10^-3
[0.8, 1] | 1.1900×10^-4 | 3.0265×10^-4 | 1.2852×10^-3 | 9.9044×10^-4
[0.4, 1] | 8.1805×10^-6 | 2.2520×10^-5 | 1.0922×10^-3 | 7.9003×10^-4
[0.025, 0.1] | 5.9091×10^-7 | 1.1365×10^-6 | 1.2235×10^-3 | 1.2530×10^-3
[0.025, 0.2] | 1.1491×10^-7 | 1.2335×10^-7 | 1.2235×10^-3 | 1.2530×10^-3
[0.025, 0.4] | 7.2238×10^-8 | 8.2123×10^-8 | 1.6870×10^-3 | 9.4670×10^-4
[0.025, 0.8] | 4.6731×10^-8 | 6.2346×10^-8 | 7.7829×10^-4 | 7.6605×10^-4
[0.025, 1] | 1.0050×10^-7 | 1.7269×10^-7 | 1.3320×10^-3 | 1.2088×10^-3

Table 6: Effects of the parameters δ1,2 of the acceleration coefficients.

α1 | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
0.1 | 4.0078×10^-6 | 6.4849×10^-6 | 1.5318×10^-3 | 1.0490×10^-3
0.2 | 1.3823×10^-6 | 3.2109×10^-6 | 1.5440×10^-3 | 9.9581×10^-4
0.3 | 2.3504×10^-7 | 4.0190×10^-7 | 1.0815×10^-3 | 7.1119×10^-4
0.4 | 2.2685×10^-7 | 2.1163×10^-7 | 9.9886×10^-4 | 1.0532×10^-3
0.5 | 4.6731×10^-8 | 6.2346×10^-8 | 7.7829×10^-4 | 7.6605×10^-4
0.6 | 8.9373×10^-8 | 1.6042×10^-7 | 1.3403×10^-3 | 1.2204×10^-3
0.7 | 1.6833×10^-7 | 3.7672×10^-7 | 1.2665×10^-3 | 1.4148×10^-3
0.8 | 5.5808×10^-7 | 1.6830×10^-6 | 1.0240×10^-3 | 9.3906×10^-4
0.9 | 3.0005×10^-6 | 4.5625×10^-6 | 1.8199×10^-3 | 1.1178×10^-3

Table 7: Effects of parameter α1.

α2 | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
0.1 | 2.4489×10^-6 | 2.8687×10^-6 | 1.0179×10^-3 | 6.0484×10^-4
0.2 | 4.7842×10^-6 | 7.5940×10^-6 | 1.2031×10^-3 | 1.0798×10^-3
0.3 | 1.4936×10^-6 | 2.2980×10^-6 | 1.0798×10^-3 | 6.7911×10^-4
0.4 | 1.2971×10^-6 | 2.6730×10^-6 | 1.0281×10^-3 | 6.3793×10^-4
0.5 | 5.0263×10^-7 | 6.0714×10^-7 | 1.2706×10^-3 | 9.6123×10^-4
0.6 | 5.6400×10^-7 | 1.1954×10^-6 | 1.1096×10^-3 | 9.9833×10^-4
0.7 | 1.6270×10^-7 | 2.9078×10^-7 | 1.4781×10^-3 | 1.3642×10^-3
0.8 | 4.6731×10^-8 | 6.2346×10^-8 | 7.7829×10^-4 | 7.6605×10^-4
0.9 | 2.7127×10^-7 | 6.0852×10^-7 | 1.0770×10^-3 | 9.4116×10^-4

Table 8: Effects of parameter α2.

α3 | f2 Mean | f2 Std. Dev. | f6 Mean | f6 Std. Dev.
1.5 | 1.0379×10^-2 | 1.2764×10^-2 | 2.1145×10^-3 | 2.7532×10^-3
2 | 1.2946×10^-4 | 2.3032×10^-4 | 2.2674×10^-3 | 1.9325×10^-3
2.5 | 1.7700×10^-7 | 3.3328×10^-7 | 1.5028×10^-3 | 1.5653×10^-3
3 | 4.6731×10^-8 | 6.2346×10^-8 | 7.7829×10^-4 | 7.6605×10^-4
3.5 | 6.5411×10^-3 | 6.2742×10^-3 | 1.3546×10^-3 | 7.9550×10^-4
4 | 1.3856 | 7.125×10^-1 | 1.9185×10^-3 | 1.9474×10^-3
4.5 | 1.9446 | 1.1841 | 2.9698×10^-3 | 2.6959×10^-3
5 | 1.8021 | 7.843×10^-1 | 2.1282×10^-3 | 2.6912×10^-3

Table 9: Effects of parameter α3.

Numerical Experiments and Discussion

To evaluate the performance of PPSO, nine well-known PSO algorithms are used for comparison, as detailed in Table 10. In all of the tests, the parameter configurations of these PSO variants are set according to their corresponding references, and the configuration of PPSO is as given in Table 1. For the first eight functions of Table 2, which are commonly used in the literature, and for one function chosen at random from the rest of Table 2, the best, worst, mean and standard deviation of the fitness values of the best particles found by the ten algorithms over 30 runs are listed in Tables 11-19 [20]. Two different dimensions (10 and 30) are tested for the first eight functions. Boldface numbers indicate the best results. The number of runs reaching the global optimum is given in brackets. The success rates of the ten PSOs, listed in Tables 17b and 18b, are defined as follows:

$\text{Success rate} = \dfrac{\text{number of runs reaching the global optimum}}{\text{total number of runs}} \times 100\%$ (18)

Algorithm | Year | Reference
PSO-w | 1998 | [5]
PSO-cf | 2002 | [9]
PSO-cf-local | 2002 | [10]
UPSO | 2004 | [12]
FDR | 2003 | [14]
FIPS | 2004 | [13]
CPSO-H | 2004 | [15]
CLPSO | 2006 | [18]
CPSO-outer | 2010 | [20]

Table 10: PSO algorithms used in the comparison.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 6.16×10^-140 | 3.98×10^-131 | 3.52×10^-132 | 9.46×10^-132
10 | PSO-cf | 4.79×10^-6 | 9.96×10^-6 | 8.36×10^-6 | 1.65×10^-6
10 | PSO-cf-local | 3.91×10^-6 | 9.94×10^-6 | 8.37×10^-6 | 1.56×10^-6
10 | UPSO | 8.98×10^-232 | 2.72×10^-225 | 1.97×10^-226 | 0
10 | FDR | 4.69×10^-225 | 1.80×10^-241 | 5.99×10^-243 | 0
10 | FIPS | 7.34×10^-55 | 7.28×10^-52 | 1.33×10^-52 | 1.81×10^-52
10 | CPSO-H | 4.55×10^-29 | 1.02×10^-24 | 1.17×10^-25 | 2.42×10^-25
10 | CLPSO | 3.58×10^-78 | 6.77×10^-75 | 3.98×10^-76 | 1.24×10^-75
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 1.13×10^-33 | 3.45×10^-28 | 1.72×10^-29 | 6.53×10^-29
30 | PSO-cf | 7.37×10^-6 | 9.93×10^-6 | 9.26×10^-6 | 6.63×10^-7
30 | PSO-cf-local | 7.38×10^-6 | 9.94×10^-6 | 9.03×10^-6 | 6.65×10^-7
30 | UPSO | 1.82×10^-89 | 5.00×10^-85 | 3.22×10^-86 | 9.09×10^-89
30 | FDR | 1.41×10^-107 | 1.42×10^-95 | 5.84×10^-97 | 2.63×10^-96
30 | FIPS | 1.59×10^-12 | 2.16×10^-11 | 4.46×10^-12 | 3.66×10^-12
30 | CPSO-H | 4.20×10^-9 | 1.84×10^-7 | 3.98×10^-8 | 3.94×10^-8
30 | CLPSO | 3.27×10^-23 | 1.57×10^-21 | 3.70×10^-22 | 4.31×10^-22
30 | CPSO-outer | 0 (27) | 2.81×10^-69 | 9.48×10^-71 | 5.13×10^-70
30 | PPSO | 0 (27) | 1.51×10^-136 | 7.57×10^-138 | 3.38×10^-137

Table 11: Comparison of results on Sphere test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 1.03×10^-2 | 3.99 | 1.09 | 8.18×10^-1
10 | PSO-cf | 2.05×10^-4 | 3.99 | 2.68×10^-1 | 1.01
10 | PSO-cf-local | 4.70×10^-3 | 4.05×10^-2 | 1.25×10^-2 | 7.41×10^-3
10 | UPSO | 5.70×10^-4 | 4.37×10^-3 | 1.87×10^-3 | 9.53×10^-4
10 | FDR | 5.41×10^-5 | 2.55×10^-2 | 1.1×10^-3 | 4.62×10^-3
10 | FIPS | 4.08×10^-1 | 6.34×10^-1 | 5.16×10^-1 | 5.64×10^-2
10 | CPSO-H | 3.80×10^-4 | 7.24×10^-2 | 3.60×10^-2 | 2.30×10^-2
10 | CLPSO | 1.16×10^-2 | 3.48 | 1.40 | 1.07
10 | CPSO-outer | 0 (26) | 3.99 | 2.66×10^-1 | 1.01
10 | PPSO | 7.64×10^-13 | 5.79×10^-9 | 9.26×10^-10 | 1.75×10^-9
30 | PSO-w | 1.05 | 4.38 | 2.07 | 1.20
30 | PSO-cf | 9.07 | 1.46×10^1 | 1.14×10^1 | 1.95
30 | PSO-cf-local | 2.41×10^1 | 2.94×10^1 | 2.26×10^1 | 2.26
30 | UPSO | 1.40×10^1 | 1.66×10^1 | 1.55×10^1 | 8.96×10^-1
30 | FDR | 5.36 | 8.98 | 6.76 | 1.34
30 | FIPS | 2.38×10^1 | 2.48×10^1 | 2.44×10^1 | 3.10×10^-1
30 | CPSO-H | 1.37×10^-1 | 7.26×10^1 | 1.72×10^1 | 2.16×10^1
30 | CLPSO | 1.22×10^1 | 2.22×10^1 | 1.83×10^1 | 3.59
30 | CPSO-outer | 1.05×10^-4 | 3.46 | 1.01 | 6.65×10^-1
30 | PPSO | 6.04×10^-10 | 2.18×10^-7 | 4.67×10^-8 | 6.23×10^-8

Table 12: Comparison of results on Rosenbrock test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 5.88×10^-136 | 2.12×10^-128 | 8.28×10^-130 | 3.86×10^-129
10 | PSO-cf | 5.14×10^-6 | 9.91×10^-6 | 7.81×10^-6 | 1.34×10^-6
10 | PSO-cf-local | 3.63×10^-6 | 9.74×10^-6 | 8.39×10^-6 | 1.51×10^-6
10 | UPSO | 1.36×10^-234 | 4.80×10^-225 | 1.61×10^-226 | 0
10 | FDR | 1.47×10^-267 | 1.71×10^-258 | 8.44×10^-260 | 0
10 | FIPS | 3.13×10^-53 | 4.85×10^-52 | 3.13×10^-53 | 8.84×10^-53
10 | CPSO-H | 1.57×10^-23 | 4.56×10^-19 | 2.87×10^-20 | 8.40×10^-20
10 | CLPSO | 1.22×10^-78 | 1.08×10^-74 | 4.58×10^-76 | 1.97×10^-75
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 3.16×10^-30 | 3.58×10^-27 | 8.00×10^-28 | 1.42×10^-27
30 | PSO-cf | 7.33×10^-6 | 9.96×10^-6 | 8.99×10^-6 | 1.00×10^-6
30 | PSO-cf-local | 8.06×10^-6 | 9.98×10^-6 | 9.20×10^-6 | 8.75×10^-7
30 | UPSO | 3.27×10^-86 | 7.61×10^-84 | 1.91×10^-84 | 2.82×10^-84
30 | FDR | 1.53×10^-103 | 1.62×10^-92 | 2.70×10^-93 | 6.60×10^-93
30 | FIPS | 1.71×10^-10 | 8.54×10^-10 | 6.02×10^-10 | 2.55×10^-10
30 | CPSO-H | 1.27×10^-6 | 1.02×10^-5 | 5.41×10^-6 | 3.71×10^-6
30 | CLPSO | 4.65×10^-21 | 9.03×10^-20 | 3.41×10^-20 | 2.31×10^-20
30 | CPSO-outer | 0 (12) | 3.67×10^-51 | 8.10×10^-52 | 1.24×10^-51
30 | PPSO | 0 (29) | 1.56×10^-49 | 5.20×10^-51 | 2.84×10^-50

Table 13: Comparison of results on Quadric test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 9.86×10^-3 | 1.50×10^-1 | 5.05×10^-2 | 2.61×10^-2
10 | PSO-cf | 2.46×10^-2 | 2.26×10^-1 | 8.37×10^-2 | 4.47×10^-2
10 | PSO-cf-local | 7.40×10^-3 | 6.89×10^-2 | 3.41×10^-2 | 1.65×10^-2
10 | UPSO | 9.86×10^-3 | 3.20×10^-2 | 2.02×10^-2 | 1.03×10^-2
10 | FDR | 7.40×10^-3 | 1.06×10^-1 | 5.53×10^-2 | 2.28×10^-2
10 | FIPS | 2.77×10^-10 | 2.37×10^-2 | 7.98×10^-3 | 7.86×10^-3
10 | CPSO-H | 9.86×10^-3 | 1.25×10^-1 | 3.95×10^-2 | 2.49×10^-2
10 | CLPSO | 0 (6) | 9.95×10^-11 | 1.5×10^-11 | 3.23×10^-11
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 1.72×10^-2 | 1.35×10^-1 | 7.06×10^-2 | 3.13×10^-2
30 | PSO-cf | 1.72×10^-2 | 1.18×10^-1 | 6.24×10^-2 | 2.39×10^-2
30 | PSO-cf-local | 9.02×10^-6 | 8.61×10^-2 | 3.35×10^-2 | 2.32×10^-2
30 | UPSO | 6.40×10^-4 | 6.23×10^-2 | 2.49×10^-2 | 1.61×10^-2
30 | FDR | 7.40×10^-3 | 1.38×10^-1 | 7.26×10^-2 | 3.36×10^-2
30 | FIPS | 2.55×10^-6 | 4.94×10^-2 | 1.66×10^-2 | 1.38×10^-2
30 | CPSO-H | 9.86×10^-3 | 1.16×10^-1 | 5.71×10^-2 | 2.64×10^-2
30 | CLPSO | 0 (3) | 8.00×10^-10 | 6.65×10^-11 | 1.91×10^-10
30 | CPSO-outer | 0 (8) | 1.17×10^-1 | 1.52×10^-2 | 2.22×10^-2
30 | PPSO | 0 | 0 | 0 | 0

Table 14: Comparison of results on Griewank test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 0 | 0 | 0 | 0
10 | PSO-cf | 2.33×10^-6 | 1.41×10^-1 | 9.89×10^-3 | 3.46×10^-2
10 | PSO-cf-local | 5.48×10^-6 | 9.97×10^-6 | 8.52×10^-6 | 1.13×10^-6
10 | UPSO | 0 (14) | 1.37 | 1.06×10^-1 | 2.79×10^-1
10 | FDR | 0 | 0 | 0 | 0
10 | FIPS | 0 (5) | 4.10×10^-6 | 2.34×10^-5 | 8.15×10^-5
10 | CPSO-H | 2.13×10^-14 | 6.23×10^-9 | 3.98×10^-10 | 1.17×10^-9
10 | CLPSO | 0 | 0 | 0 | 0
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 7.11×10^-15 | 2.25×10^-3 | 9.53×10^-4 | 1.02×10^-3
30 | PSO-cf | 4.21×10^-6 | 1.65×10^-2 | 6.94×10^-4 | 3.07×10^-3
30 | PSO-cf-local | 6.26×10^-6 | 9.98×10^-6 | 8.80×10^-6 | 8.06×10^-6
30 | UPSO | 7.77 | 1.26×10^1 | 9.06 | 1.85
30 | FDR | 7.11×10^-15 | 1.50 | 3.20×10^-1 | 6.63×10^-1
30 | FIPS | 1.25×10^-4 | 1.96×10^-4 | 1.62×10^-4 | 3.00×10^-5
30 | CPSO-H | 7.11×10^-15 | 3.96×10^-9 | 2.48×10^-10 | 7.66×10^-10
30 | CLPSO | 0 | 0 | 0 | 0
30 | CPSO-outer | 1.32×10^-1 | 3.00 | 1.91 | 1.38
30 | PPSO | 0 | 0 | 0 | 0

Table 15: Comparison of results on Weierstrass test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 1.51×10^-4 | 2.02×10^-3 | 7.77×10^-4 | 4.10×10^-4
10 | PSO-cf | 3.20×10^-4 | 2.88×10^-3 | 1.36×10^-3 | 6.15×10^-4
10 | PSO-cf-local | 1.27×10^-4 | 8.27×10^-4 | 3.66×10^-4 | 1.68×10^-4
10 | UPSO | 4.19×10^-4 | 3.39×10^-3 | 1.70×10^-3 | 8.36×10^-4
10 | FDR | 1.07×10^-4 | 1.95×10^-3 | 4.75×10^-4 | 3.79×10^-4
10 | FIPS | 1.36×10^-4 | 1.00×10^-3 | 5.31×10^-4 | 2.25×10^-4
10 | CPSO-H | 7.65×10^-4 | 6.02×10^-3 | 2.97×10^-3 | 1.40×10^-3
10 | CLPSO | 3.37×10^-4 | 1.54×10^-3 | 7.05×10^-4 | 2.61×10^-4
10 | CPSO-outer | 7.35×10^-6 | 7.36×10^-4 | 1.99×10^-4 | 1.87×10^-4
10 | PPSO | 1.67×10^-5 | 7.63×10^-4 | 3.21×10^-4 | 2.57×10^-4
30 | PSO-w | 1.37×10^-2 | 4.41×10^-2 | 2.68×10^-2 | 1.18×10^-2
30 | PSO-cf | 2.09×10^-3 | 9.71×10^-3 | 6.27×10^-3 | 2.62×10^-3
30 | PSO-cf-local | 4.00×10^-3 | 6.89×10^-3 | 5.09×10^-3 | 1.17×10^-3
30 | UPSO | 3.36×10^-3 | 1.59×10^-2 | 1.01×10^-2 | 4.99×10^-3
30 | FDR | 1.70×10^-3 | 5.37×10^-3 | 3.28×10^-3 | 1.07×10^-3
30 | FIPS | 4.02×10^-3 | 5.74×10^-3 | 4.79×10^-3 | 6.35×10^-4
30 | CPSO-H | 9.68×10^-3 | 2.26×10^-2 | 1.44×10^-2 | 4.84×10^-3
30 | CLPSO | 2.51×10^-3 | 8.45×10^-3 | 5.44×10^-3 | 1.91×10^-3
30 | CPSO-outer | 6.05×10^-5 | 4.45×10^-4 | 1.54×10^-4 | 1.10×10^-4
30 | PPSO | 2.86×10^-5 | 2.52×10^-3 | 7.78×10^-4 | 7.66×10^-4

Table 16: Comparison of results on Quartic test function among ten PSOs.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 0 (3) | 3.00 | 3.67×10^-1 | 7.18×10^-1
10 | PSO-cf | 6.12×10^-6 | 9.00 | 4.67 | 2.09
10 | PSO-cf-local | 7.72×10^-8 | 3.00 | 2.00×10^-1 | 7.61×10^-1
10 | UPSO | 0 (4) | 7.00 | 3.19 | 1.67
10 | FDR | 0 (4) | 2.00 | 4.67×10^-1 | 6.81×10^-1
10 | FIPS | 4.65×10^-9 | 2.24 | 6.52×10^-1 | 7.48×10^-1
10 | CPSO-H | 0 | 0 | 0 | 0
10 | CLPSO | 0 | 0 | 0 | 0
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 7.00 | 3.90×10^1 | 1.70×10^1 | 1.18×10^1
30 | PSO-cf | 2.30×10^1 | 7.10×10^1 | 4.15×10^1 | 1.92×10^1
30 | PSO-cf-local | 4.00 | 1.90×10^1 | 1.87×10^1 | 5.20
30 | UPSO | 6.50×10^1 | 1.00×10^2 | 7.96×10^1 | 1.59×10^1
30 | FDR | 7.00 | 1.70×10^1 | 1.08×10^1 | 4.40
30 | FIPS | 5.62×10^1 | 7.61×10^1 | 6.23×10^1 | 7.21
30 | CPSO-H | 3.33×10^-10 | 2.04×10^-8 | 5.39×10^-8 | 7.49×10^-8
30 | CLPSO | 4.80×10^-13 | 7.12×10^-12 | 2.57×10^-12 | 2.61×10^-12
30 | CPSO-outer | 2.50×10^1 | 1.25×10^2 | 7.52×10^1 | 3.52×10^1
30 | PPSO | 0 | 0 | 0 | 0

Table 17a: Comparison of results on Noncontinuous Rastrigin test function among ten PSOs.

Function | Dim | PSO-w | PSO-cf | PSO-cf-local | UPSO | FDR | FIPS | CPSO-H | CLPSO | CPSO-outer | PPSO
Sphere | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 100
Sphere | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 90 | 90
Rosenbrock | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 86.7 | 0
Rosenbrock | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Quadric | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 100
Quadric | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 40 | 96.7
Griewank | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20 | 100 | 100
Griewank | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 26.7 | 100
Weierstrass | 10 | 100 | 0 | 0 | 46.7 | 100 | 16.7 | 0 | 100 | 100 | 100
Weierstrass | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 100
Quartic | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Quartic | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Noncontinuous Rastrigin | 10 | 10 | 0 | 0 | 13.3 | 13.3 | 0 | 100 | 100 | 100 | 100
Noncontinuous Rastrigin | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100
Rastrigin | 10 | 16.7 | 0 | 0 | 0 | 16.7 | 13.3 | 100 | 100 | 100 | 100
Rastrigin | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 100
Total wins | 10 | 1 | 0 | 0 | 0 | 1 | 0 | 2 | 3 | 6 | 6
Total wins | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4

Table 17b: Success rates (%) of the ten PSOs on problems with 10 and 30 dimensions.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
10 | PSO-w | 0 (5) | 2.98 | 1.45 | 8.70×10^-1
10 | PSO-cf | 9.95×10^-1 | 1.29×10^1 | 5.27 | 2.88
10 | PSO-cf-local | 9.95×10^-1 | 9.95 | 3.45 | 1.95
10 | UPSO | 9.95×10^-1 | 8.95 | 5.49 | 2.46
10 | FDR | 0 (5) | 3.98 | 1.72 | 1.11
10 | FIPS | 0 (4) | 9.95×10^-1 | 1.01×10^-1 | 3.03×10^-1
10 | CPSO-H | 0 | 0 | 0 | 0
10 | CLPSO | 0 | 0 | 0 | 0
10 | CPSO-outer | 0 | 0 | 0 | 0
10 | PPSO | 0 | 0 | 0 | 0
30 | PSO-w | 1.29×10^1 | 3.68×10^1 | 2.89×10^1 | 7.54
30 | PSO-cf | 4.68×10^1 | 9.35×10^1 | 2.59×10^1 | 1.70×10^1
30 | PSO-cf-local | 2.89×10^1 | 6.17×10^1 | 4.03×10^1 | 1.20×10^1
30 | UPSO | 5.17×10^1 | 9.95×10^1 | 6.63×10^1 | 1.45×10^1
30 | FDR | 2.09×10^1 | 3.38×10^1 | 2.77×10^1 | 4.30
30 | FIPS | 5.66×10^1 | 8.51×10^1 | 7.30×10^1 | 9.70
30 | CPSO-H | 2.07×10^-8 | 7.12×10^-8 | 1.46×10^-8 | 1.92×10^-8
30 | CLPSO | 1.07×10^-14 | 7.66×10^-13 | 2.12×10^-13 | 2.18×10^-13
30 | CPSO-outer | 0 (3) | 1.08×10^2 | 5.00×10^1 | 2.49
30 | PPSO | 0 | 0 | 0 | 0

Table 18a: Comparison of results on Rastrigin test function among ten PSOs.

Function | PSO-w | PSO-cf | PSO-cf-local | UPSO | FDR | FIPS | CPSO-H | CLPSO | CPSO-outer | PPSO
Beale | 100 | 100 | 100 | 100 | 80 | 100 | 100 | 100 | 100 | 100
Bohachevsky1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Bohachevsky2 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Bohachevsky3 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Booth | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Branin | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Easom | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Hartmann1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Hump | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Matyas | 0 | 100 | 100 | 100 | 100 | 87.5 | 100 | 0 | 100 | 100
Michalewicz1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Michalewicz2 | 90 | 25 | 63.33 | 100 | 33.33 | 90 | 100 | 100 | 88.33 | 100
Needle-in-a-haystack | 6.67 | - | 60 | 33.3 | 40 | 26.67 | 20 | 0 | 0 | 100
Total wins | 10 | 11 | 11 | 12 | 10 | 10 | 12 | 10 | 11 | 13

Table 18b: Success rates (%) of the ten PSOs on problems with a fixed number of dimensions.

Dimension | Algorithm | Best | Worst | Mean | Std. Dev.
2 | PSO-w | -3600 | -2729.1 | -2798.1 | 218.0172
2 | PSO-cf | - | - | - | -
2 | PSO-cf-local | -3600 | -2734.9 | -3269.8 | 417.5431
2 | UPSO | -3600 | -2732 | -3105.3 | 412.3334
2 | FDR | -3600 | -2748.8 | -3089.3 | 424.1387
2 | FIPS | -3600 | -2748.8 | -2975.8 | 382.8577
2 | CPSO-H | -3600 | -2748.8 | -2919 | 346.3078
2 | CLPSO | -3598.9 | -2748.8 | -2968.1 | 355.7935
2 | CPSO-outer | -3600 | -3600 | -3600 | 0
2 | PPSO | -3600 | -3600 | -3600 | 0

Table 19: Comparison of results on the Needle-in-a-haystack test function among ten PSOs.

From Tables 11-19, it can be seen that the results of the proposed method are superior to those of the other nine algorithms, for example on 10-D Sphere, on 30-D Griewank and Rastrigin, and on 10-D and 30-D Rosenbrock. On 10-D and 30-D Quartic, the CPSO-outer method is slightly better than PPSO on average; however, PPSO attains the better best solution. On 30-D Quadric, FDR performs best with high optimization robustness; however, PPSO can find the global optimum with a high success rate. For 10-D Sphere and Quadric, although UPSO and FDR are robust and accurate, PPSO and CPSO-outer converge to the global optimum with the same robustness. Only the PPSO and CPSO-outer methods achieved the best results on Sphere, 10-D Quadric, and Griewank. Although 30-D functions are more difficult than their 10-D counterparts, the results show that PPSO still performs well as the dimension increases. On 30-D Griewank, Rastrigin and Noncontinuous Rastrigin, only PPSO reached the global optimum: although other algorithms find the global optima of the 10-D versions, their performance plummets in the 30-D cases.

Conclusion

In this paper, PPSO, with the simple techniques of partitioning the search space into blocks and the use of improved acceleration coefficients, is shown to escape from local minima and converge to the global minimum robustly. The proposed algorithm consists of three stages. In the first stage, the particles are divided into smaller groups that search small regions. Then, in the transition stage, the particles search the entire search space. Finally, based on the results of these two stages, the particles continue searching around the best solution found so far. Moreover, PPSO performs well even when the dimension of the search space is increased, whereas the performance of some other methods drops considerably. In short, PPSO is simple and yields good results in comparison with other PSO variants.

References
