Baldwinian Learning in Quantum Evolutionary Algorithms for Solving the Fine-Grained Localization Problem in Wireless Sensor Networks

A wireless sensor network (WSN) is a modern communication technology used for sensing and controlling the physical environment. WSNs are often composed of thousands or even hundreds of thousands of cheap, resource-limited sensor nodes that work together to track an object or to monitor a physical event. Although first designed for military purposes, they are now used for crisis management and for monitoring temperature, humidity, vehicular movement and lighting conditions [1].


Introduction
The localization problem is becoming one of the hottest topics in WSNs, since location information is useful for routing [2], coverage [3], boundary detection [4], clustering [5] and topology control [6]. Algorithms for the localization problem in WSNs can be categorized into two classes: range-free algorithms and range-based algorithms [7]. Range-free algorithms use connectivity-based information obtained by radio communication, such as neighbourhood or hop count, to estimate inter-node distances [8]. These algorithms are simple and cheap and impose no requirement for additional hardware; however, they are less accurate than range-based algorithms. Range-based algorithms, on the other hand, utilize range-based information such as location, range or angle to measure pair-wise distances between nodes [9,10]. This information usually comes from Time of Arrival (TOA), Time Difference of Arrival (TDOA) or Received Signal Strength (RSS) measurement techniques [11,12]. Although range-based algorithms require special hardware such as radio signal receivers or antennas to estimate the positions of sensor nodes, they provide pair-wise distances with higher accuracy. The former are called coarse-grained algorithms, and the latter fine-grained algorithms [13].
The geographical locations of sensor nodes in a WSN can be obtained by several approaches. First, they can be acquired through manual configuration. If the number of sensor nodes is small and the area in which they are deployed is limited, this approach is a good option.
However, it is practically impossible when a WSN contains a large number of sensor nodes, which makes the localization process intractable. Second, all sensor nodes can be equipped with GPS receivers. Despite localizing sensor nodes with good accuracy, this method has several problems: it is prohibitively expensive for large-scale WSNs, and it does not work well in places surrounded by large buildings or in indoor and underground sites [14]. Finally, GPS receivers can be embedded in a few nodes, called anchor or reference nodes, and the other nodes, called non-anchor nodes, can locate themselves through these reference nodes [15]. Given this approach, localization algorithms for WSNs can be classified into three classes: multidimensional scaling, relaxation and stochastic techniques [16]. Multidimensional scaling is a connectivity-based technique using distance-based information to estimate the relative positions of nodes. This method works well with RSS measurements; however, all sensor nodes need to be in the vicinity of each other so that each node can estimate its location through the relative positions of the other nodes [17]. Relaxation is another approach, first suggested by Doherty [18]. In this approach, the localization problem in WSNs is turned into a Semi-Definite Programming (SDP) problem, which is easier to solve: geometric constraints among sensor nodes in the network are represented as linear matrix inequalities (LMIs), and these LMIs are then joined together to form a single semidefinite program, which is solved to create a bounding region for each node. Although this method is promising for large-scale network localization problems, it has two drawbacks. First, it cannot provide high-accuracy estimates for sensor nodes.
In some applications, such as fire detection in forests, it is necessary to know exactly where sensor nodes are in order to respond to an event immediately. Second, not all geometrical constraints in the network can be formulated as LMIs; only constraints that form convex regions can be transformed into LMIs [19].
The stochastic technique is another approach, originally proposed by Kannan [20], who showed that the simulated annealing (SA) algorithm is a promising technique for node localization in WSNs. It is easy to implement and requires a small amount of computational effort, but its performance plummets when the flip ambiguity problem occurs. This problem crops up when three or more neighbours of the node being localized are almost collinear, so that its location cannot be determined uniquely. The incorrect information then propagates to the entire network, or a large region of it, causing mass confusion in finding the true locations of other sensor nodes. This phenomenon is shown in Figure 1, where three anchor nodes G, B and C are placed around a hypothetical line, so the location of the non-anchor node D cannot be estimated correctly. To surmount this, Kannan proposed a new version of SA (SAL), which uses a refinement phase to mitigate its effects [21]. Treating the fine-grained localization problem as a two-objective optimization task, reference [22] proposed a two-objective evolutionary approach called the Pareto Archived Evolution Strategy (PAES), which attempts to address both the localization error and the flip ambiguity problem simultaneously. They showed that the PAES algorithm can successfully deal with the node localization problem and that, in comparison with the SAL algorithm, it provides solutions of higher quality.
Population-based algorithms are another group of optimizers, using a set of search agents (solutions) to localize sensor nodes. For instance, Liu and Yang [23] proposed a genetic algorithm named Genetic Algorithm-based Localization (GAL), which employs two genetic operators, the single-vertex-neighbourhood mutation and the descend-based arithmetic crossover, to localize sensor nodes in a WSN effectively. In another work, Mao and Fidan [24] proposed a Particle Swarm Optimization Localization (PSOL) algorithm that applies a swarm of search agents working cooperatively to find good locations for sensor nodes.
In general, optimization techniques for the fine-grained localization problem in WSNs fall into two groups. The first group uses only a stochastic technique, such as SAL [16], PSOL [25] or GAL [26]. The second group uses not only a stochastic optimizer but also an approximation stage, such as multi-trilateration [27], or a priori knowledge, such as node-categorizing information [28], to find initial locations for sensor nodes.
QEA is a nature-inspired, population-based algorithm inspired by quantum computation [29,30]. Since it was proposed, QEA has been used in a variety of optimization problems, such as transient identification in nuclear power plants [31], the vehicle routing problem [32], watermarking [33], economic load dispatch [34], VAR planning [35] and unit commitment [36], and many researchers have tried to improve its performance [37,38].
Both local search procedures and global search (population-based) algorithms have advantages and weaknesses. Local search algorithms are efficient and promising when applied to simpler problems (those with a small number of local optima), but for problems with many local optima they may easily get trapped [39,40]. Population-based algorithms, on the other hand, are effective for more difficult problems (those with a large number of local optima) but less efficient on simpler ones [41]. Combining the two in a synergistic way can therefore yield a better algorithm: the local search covers the weakness of the population-based algorithm, namely its inability to focus on a particular region of the search space, and the global search covers the weakness of the local search, namely its inability to escape the basin of attraction of the local optimum it is trapped in.
By making a trade-off between exploration and exploitation, Memetic Algorithms (MAs) have been proposed to solve not only small-scale but also large-scale optimization problems [42]. A memetic algorithm is an evolutionary algorithm assisted by one or several local search procedures, which help the MA locate local optima more quickly [43]. Memetic algorithms are inspired by the Neo-Darwinian view of natural evolution and by Dawkins' unit of cultural evolution, the "meme". In Dawkins' cultural evolution, a meme is the smallest unit of knowledge that can be reproduced, changed or improved. If a meme is an interesting one, it will be distributed with high probability within the entire population; if not, it will probably disappear in the next generations. In memetic algorithms, a term first coined by Moscato, a meme is a local learning procedure that improves individuals in a population of solutions. These algorithms have recently drawn the attention of many researchers for solving a wide range of real-world problems, including the quadratic assignment problem, flow shop scheduling [22], the capacitated arc routing problem, DNA sequence compression, and university course timetabling.
Proposed by James Baldwin [3], Baldwinian learning (BL) [9], also called Baldwinian evolution [8] or the Baldwin effect [41], suggests that individuals with a higher level of adaptability to changes in the environment have a better chance to live and survive longer than their competitors in the population. In MAs, likewise, individuals with greater fitness values remain alive longer in the population by being selected into the next generations. Baldwinian-based MAs have been used for solving a range of optimization problems, including terminal assignment in communication networks [40], numerical functions [9,41], feature weighting in K-means-based algorithms [8] and describing continuous-valued problem spaces [37].

Figure 1: The flip ambiguity phenomenon in wireless sensor networks. The nodes G, B and C are anchor nodes roughly located on a straight line; therefore, the non-anchor node D cannot be localized correctly.
In this paper, we propose a new memetic algorithm that uses QEA as the global search and a Baldwinian local search as the local search. The Baldwinian local search helps the algorithm boost its exploitation ability and thus mitigates the tendency to stagnate when solving the localization problem in WSNs. A binary-to-real mapping procedure makes the algorithm suitable for the real-valued localization problem in WSNs. The proposed algorithm also incorporates the MT procedure, which has been claimed to be very efficient in providing good starting locations for sensor nodes [28]. To the best of our knowledge, this is the first time a memetic algorithm has been used for solving the localization problem in WSNs. To test the proposed algorithm (QEA+MT+BL), it is compared with the proposed algorithm without BL (QEA+MT) and with GAL [43], PSO [5], ICA [30], TSA [28] and PAES [34] on ten randomly created network topologies and four different connectivity ranges. The results show that the proposed algorithm performs best on the localization problem in WSNs.
The remainder of this paper is structured as follows. In Section 2, the fine-grained localization problem is described. The QEA is presented in Section 3. The proposed algorithm is introduced in Section 4. In Section 5, we compare the proposed algorithm with two variants of QEA and six optimization techniques based on simulation results. Finally, the paper is concluded in Section 6.

Problem Definition
In this section, we define the fine-grained localization problem in WSNs, focusing on the system model, the objective function used to evaluate solutions during the optimization, and the metric used to assess the performance of the algorithm after the optimization.

System model
A wireless sensor network can be considered as a network consisting of anchor nodes and non-anchor nodes. Anchor nodes are fully aware of their positions; this knowledge comes from their GPS receivers or their individual records. Non-anchor nodes, on the other hand, do not know their positions. The aim is to find the positions of the non-anchor nodes by using the geographical information of the anchor nodes. All sensor nodes have an equal connectivity range r, and they are distributed uniformly in a two-dimensional square region [0, 1] × [0, 1] ⊂ R². We use RSSI measurement to estimate the inter-node distances d̂ij, since it has been shown to provide measurements with good accuracy at a low hardware cost [24]. We assume that d̂ij is computed as

d̂ij = dij (1 + γ · NF),   (1)

where d̂ij and dij are the measured and the true distance between the i-th and j-th nodes, respectively, γ is Gaussian noise with mean 0 and standard deviation 1, added because of measurement error, and NF is the noise factor. We assume that the measurement errors are distributed uniformly across the network. For network communication we use the simple disk model typically used in the literature [16,28,34]: sensor nodes can communicate with each other as long as the actual distance between them is less than the communication range. For instance, node i can communicate with node j if dij < r.
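Under these assumptions, the measurement and connectivity models can be sketched as follows; the function and parameter names are ours, not the paper's:

```python
import math
import random

def measured_distance(p_i, p_j, nf=0.1, rng=random):
    """Formula 1, as reconstructed above: perturb the true distance with
    zero-mean, unit-variance Gaussian noise scaled by the noise factor NF."""
    true_d = math.dist(p_i, p_j)
    return true_d * (1.0 + rng.gauss(0.0, 1.0) * nf)

def can_communicate(p_i, p_j, r):
    """Simple disk model: two nodes communicate iff their true distance
    is less than the connectivity range r."""
    return math.dist(p_i, p_j) < r
```

With nf=0 the measurement reduces to the true Euclidean distance, which makes the model easy to sanity-check.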

Objective function and performance evaluation
In this paper, we use the following objective function [28] for evaluating solutions during the optimization process. The objective CX of a candidate solution is

CX = Σ_{k=1}^{m} Σ_{j∈N(k)} ( d̂kj − ||ak − x̂j|| )² + Σ_{i=1}^{n} Σ_{j∈N(i), j>i} ( d̂ij − ||x̂i − x̂j|| )²,   (2)

where m and n are the numbers of anchor and non-anchor nodes, respectively; ak is the real position of anchor node k; x̂i and x̂j are the estimated positions of non-anchor nodes i and j; N(·) denotes the set of neighbouring non-anchor nodes; d̂kj is the measured distance between anchor node k and non-anchor node j; and d̂ij is the measured distance between non-anchor nodes i and j. As mentioned, all distances are measured through Formula 1.
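A minimal sketch of this objective, assuming CX simply accumulates squared residuals over all measured anchor/non-anchor and non-anchor/non-anchor pairs; the dictionary-based interface is our own:

```python
import math

def cx(est_pos, anchors, meas_anchor, meas_pair):
    """Sketch of Formula 2: sum of squared mismatches between measured and
    estimated inter-node distances.

    est_pos     : {j: (x, y)} estimated non-anchor positions
    anchors     : {k: (x, y)} known anchor positions
    meas_anchor : {(k, j): d} measured anchor/non-anchor distances
    meas_pair   : {(i, j): d} measured non-anchor pair distances
    """
    total = 0.0
    for (k, j), d in meas_anchor.items():
        total += (d - math.dist(anchors[k], est_pos[j])) ** 2
    for (i, j), d in meas_pair.items():
        total += (d - math.dist(est_pos[i], est_pos[j])) ** 2
    return total
```

When every estimate matches the measurements exactly, the objective is zero; each unit of residual distance contributes its square.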
In order to evaluate the performance of the proposed algorithm after the optimization, we use a metric that measures the distance between the estimated and the real positions of the non-anchor nodes in the network:

Error = (1/n) Σ_{i=1}^{n} ||x̂i − xi||,

where x̂i and xi are the estimated and real positions of non-anchor node i, respectively.
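A sketch of such a metric; normalizing the mean error by the connectivity range r is a common convention in this literature and is assumed here rather than taken from the text:

```python
import math

def localization_error(est, real, r):
    """Mean distance between estimated and true non-anchor positions,
    normalized by the connectivity range r (normalization assumed)."""
    n = len(real)
    return sum(math.dist(est[i], real[i]) for i in real) / (n * r)
```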

QEA
QEA is a problem-solving technique that uses a set of probabilistic individuals to discover promising regions in the search space, where i = 1, 2, ..., n indexes the individuals, n is the population size, and t is the current generation. Even when using a small number of individuals, QEA preserves diversity in the population for longer. It is inspired by quantum computation, and its superposition of states is based on the q-bit, the 'brick' or smallest unit of information stored in a two-state quantum computer. A q-bit is described as

|ψ⟩ = α|0⟩ + β|1⟩,

where α and β are complex numbers representing the appearance probabilities of the corresponding states, subject to the constraint |α|² + |β|² = 1. One of the advantages of this probabilistic representation is that m q-bits can represent 2^m states simultaneously. At each observation, a q-bit quantum state collapses to a single state as determined by its corresponding probabilities. The i-th individual in the t-th generation, for instance, is defined as an m-q-bit string

q_i^t = [ α_1 | α_2 | ... | α_m ; β_1 | β_2 | ... | β_m ],

where the j-th column holds the q-bit encoding the j-th bit of the solution.
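The observation (collapse) step can be sketched as follows, keeping α and β real for simplicity; here `q_ind` is a list of (alpha, beta) pairs:

```python
import random

def observe(q_ind, rng=random):
    """Collapse an m-q-bit individual into a binary string: bit j becomes 1
    with probability beta_j^2 (alpha and beta are kept real here)."""
    return [1 if rng.random() < b * b else 0 for (_a, b) in q_ind]
```

With alpha = beta = 1/sqrt(2), every bit comes out 0 or 1 with equal probability, which is exactly the fully mixed initial state used by QEA.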

QEA structure
In the initialization step of QEA, [α, β]^T of every q-bit in Q(0) is initialized to [1/√2, 1/√2]^T, so that all states are superposed with equal probability. The next step makes a set of binary instants X(t) by observing the states of Q(t) = {q_1^t, q_2^t, ..., q_n^t}. Each binary instant x_i^t of length m is formed by selecting each bit using the probabilities of the corresponding q-bit of the q-bit population: the j-th bit is set to 1 with probability |β_j|². The binary instants are then evaluated to give some measure of their fitness, and the initial best solution is selected and stored from among the binary instants of X(t). Then, in the 'update Q(t)' step, quantum gates U update this set of q-bit individuals, as discussed below. This process is repeated in a while loop until convergence is achieved. The appropriate quantum gate is usually designed in accordance with the problem under consideration; Table 1 gives the lookup table of ∆θ for the rotation gate.
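The loop above can be sketched end to end as follows. This is a minimal illustration only: a fixed rotation angle stands in for the Table 1 lookup (which the text does not reproduce), and a simple "rotate toward the best bits" rule replaces the full gate-assignment logic:

```python
import math
import random

def run_qea(fitness, m, n_pop=10, max_iter=200, dtheta=0.01 * math.pi,
            rng=None):
    """Minimal QEA loop: initialize every q-bit to [1/sqrt(2), 1/sqrt(2)],
    observe binary solutions, keep the best one found so far, and rotate
    each q-bit toward the corresponding bit of the best solution."""
    rng = rng or random.Random()
    inv = 2 ** -0.5
    Q = [[[inv, inv] for _ in range(m)] for _ in range(n_pop)]
    best_x, best_f = None, float("inf")
    for _ in range(max_iter):
        for q in Q:
            # observation: bit j is 1 with probability beta_j^2
            x = [1 if rng.random() < b * b else 0 for _a, b in q]
            f = fitness(x)
            if f < best_f:
                best_x, best_f = x, f
            for j, (a, b) in enumerate(q):
                # rotate toward the j-th bit of the best solution so far
                sign = 1.0 if best_x[j] == 1 else -1.0
                c, s = math.cos(sign * dtheta), math.sin(sign * dtheta)
                q[j] = [c * a - s * b, s * a + c * b]
    return best_x, best_f
```

On a toy onemax-style fitness this skeleton converges toward the all-ones string, illustrating how the probabilities, not the individuals themselves, are steered.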

Quantum gates assignment
Several quantum perturbation operators [2,11,20,23] have been proposed for steering the quantum individuals during the optimization. These operators act like the movement operator in the particle swarm optimization (PSO) algorithm: as in PSO, previously identified good solutions serve as a guideline for the current individuals to adjust their positions in the search space. However, unlike PSO, in which the best individual in the population directly leads the other individuals, in QEA the recently explored good solutions steer the individuals by increasing or decreasing their probabilities. More specifically, in the migration operator the values of all individuals are replaced by those of the best individual, and then in the update operator each individual adjusts its probabilities using these recently obtained good values. Here, we use the rotation gate as the procedure for updating quantum individuals. Specifically, the j-th q-bit of individual q_i^t is updated as

[α'_j ; β'_j] = [cos(∆θ), −sin(∆θ) ; sin(∆θ), cos(∆θ)] [α_j ; β_j],

where ∆θ is the rotation angle controlling the speed of convergence, determined from Table 1. Reference [10] shows that these values of ∆θ give better performance.
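The gate itself is just a two-dimensional rotation; a small helper, with the ∆θ value assumed to come from the Table 1 lookup:

```python
import math

def rotate_qbit(alpha, beta, dtheta):
    """Apply the rotation Q-gate R(dtheta) = [[cos, -sin], [sin, cos]]
    to one q-bit; the rotation preserves alpha^2 + beta^2 = 1."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return c * alpha - s * beta, s * alpha + c * beta
```

Because the gate is unitary, repeated updates never break the probability constraint, which is why QEA can apply it every generation without renormalizing.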

The Proposed Algorithm
The proposed algorithm is a memetic algorithm that hybridizes the quantum evolutionary algorithm with the multi-trilateration (MT) procedure [17] and a Baldwinian local search for solving the fine-grained localization problem in WSNs. It also uses a mapping procedure to convert the binary solutions obtained by the algorithm into non-anchor node positions. First, we look at the solution representation in the proposed algorithm and then describe the algorithm in detail.

Solution representation
Sensor node locations in the proposed network model are encoded as real values, so because the proposed algorithm works with binary solutions, we cannot apply it directly to the localization problem; we need to convert its binary solutions into real-coded solutions. To do this, we propose a binary-to-real mapping procedure that converts the binary solutions obtained by the algorithm into their corresponding real-coded solutions, which can then be evaluated. In particular, each 16-bit substring of a binary solution encodes one coordinate (the x or y position) of a non-anchor node. In general, the algorithm is composed of four parts: the MT procedure, the real-to-binary and binary-to-real mapping procedures, the quantum evolutionary search and the local search procedure. First, in order to give the quantum individuals a good location guideline that can be used during the search process, the algorithm employs the MT procedure, applied to the best personally observed solutions, B. The MT procedure is an approximation process attempting to provide good initial locations for the non-anchor nodes. Second, to make the binary observed solutions suitable for the real-domain localization problem in WSNs, the algorithm applies the real-to-binary procedure to convert real solutions to binary, and the binary-to-real procedure to do the opposite. The algorithm also utilizes the Solis-Wets local search (SW-LS) [26] in a Baldwinian scheme, applied to the best observed solution at specific generations. We call it Baldwinian because, like the Baldwin effect in genetic algorithms, it does not directly modify the genotypes of the individuals; instead, it improves the observed solutions indirectly by sending the values improved by the local search back to the population, which then steers the individuals through the rotation gate.
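A sketch of the two mapping procedures, assuming each coordinate lies in [0, 1] and is encoded as 16 bits with plain linear scaling; the scaling rule is our assumption, as the paper only fixes the 16-bit length:

```python
def binary_to_real(bits, lo=0.0, hi=1.0):
    """Decode a 16-bit string into a coordinate in [lo, hi] by linear
    scaling of its integer value (scaling rule assumed)."""
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

def real_to_binary(x, lo=0.0, hi=1.0, n_bits=16):
    """Inverse mapping used by the real-to-binary procedure."""
    value = round((x - lo) / (hi - lo) * (2 ** n_bits - 1))
    return [int(b) for b in format(value, f"0{n_bits}b")]
```

With 16 bits per coordinate the round-trip quantization error is at most about 1/65535 of the region width, far below the localization errors of interest.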
The algorithm, finally, uses the rotation gate as a Q-gate for updating quantum individuals in Q t . The framework of the proposed algorithm is represented in Figure 3.

Algorithm 1
Begin
  t = 0
  1. initialize Q0
  2. make X0 by observing the states of Q0
  3. make C0 using the MT procedure
  4. initialize B0 using the real-to-binary procedure
  5. while t < MI do: t = t + 1
  6.   make Xt by observing the states of Q(t−1)
  7.   make real-coded solutions Et using the binary-to-real procedure
  8.   evaluate Et
  9.   among Xt and B(t−1), store the best individuals in Bt
  10.  store the best individual among Bt in bt
  11.  update Qt using the rotation gate
  12.  if the local-search period is reached then
  13.    perform the SW-LS on bt
  14.  perform migration
End

Note that the proposed algorithm maintains a population of two-dimensional quantum individuals Qt = {q_1^t, ..., q_n^t}, where t is the current iteration and n is the population size. It also uses the binary populations Xt and Bt (the best observed solutions), their real-coded counterparts Et and Ct, and the global best solution bt together with its real-coded form ct. The steps of the algorithm are described in the following.
1. The quantum population Q0 is initialized: [α, β]^T of every q-bit is set to [1/√2, 1/√2]^T.
2. The binary population X0 is made by observing the states of Q0, as described in Section 3.
3. In order to find good initial locations for the non-anchor nodes, the algorithm uses the MT procedure and copies the results into C0, the real-coded form of B0. The MT procedure is performed as follows. First, all nodes are divided into two sets: the set of anchor nodes, A, and the set of non-anchor nodes, F. Then, any non-anchor node in F that has at least three neighbours in A is localized by the trilateration technique [17], which requires at least three anchor neighbours, and is moved to the set A. This is iterated until no non-anchor node in F with at least three localized neighbours remains.
4. In order to apply the estimated locations of the non-anchor nodes given by the MT procedure to B0, we need to convert the positions of the sensor nodes to their corresponding binary form. To do this, we use a real-to-binary procedure that simply turns the positions in C0 into binary solutions in B0. Since each 4-digit real number in Ct, representing the x or y position of a node, is turned into a binary string of length 16, the binary population Bt is 16 times larger than Ct; the size of Bt is therefore 16 × 2 × n × m, where n is the population size and m is the number of non-anchor nodes (the factor 2 accounts for the two coordinates of each node).
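The trilateration step inside the MT procedure can be sketched with the standard linearized least-squares construction; this is our formulation, as the paper does not give its exact equations:

```python
def trilaterate(anchors, dists):
    """Estimate a 2-D position from >= 3 anchors (x, y) and measured
    distances by subtracting the first circle equation from the others,
    which yields a linear system solved here via normal equations."""
    (x0, y0), d0 = anchors[0], dists[0]
    rows, rhs = [], []
    for (x, y), d in zip(anchors[1:], dists[1:]):
        rows.append((2 * (x - x0), 2 * (y - y0)))
        rhs.append(d0 ** 2 - d ** 2 + x ** 2 - x0 ** 2 + y ** 2 - y0 ** 2)
    # 2x2 normal equations for the least-squares solution
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noise-free distances the linear system recovers the exact position; with Formula 1 noise it returns the least-squares estimate, which is what makes it useful as an initializer.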

5. The while loop is repeated until the maximum number of iterations MI is reached.
6. This step is carried out like Step 2.
7. In order to make the observed solutions suitable for the localization problem in WSNs, the binary-to-real mapping procedure converts the binary solutions in Xt to real-coded solutions in Et. This process performs the opposite of the actions of the real-to-binary procedure.
8. To evaluate the individuals, they must be in real-coded form; the real-coded population Et is therefore evaluated using Formula 2.
9. For each individual, the best place in the search space that it has reached during the optimization is stored in Bt. For example, if E_i^t is better than the previous personal best C_i^(t−1) in terms of the CX value, the corresponding binary-coded individual in Xt is stored in B_i^t.
10. Among all individuals in Bt, the best is selected and stored in bt.
11. The quantum individuals in Qt are updated using the rotation gate. In addition, ct, the corresponding best real-coded solution, is replaced by the best individual in Et.
12. The SW-LS is performed periodically in the proposed algorithm: if the pre-specified period (every 100 generations) is reached, the SW-LS process is initiated.
13. To perform the SW-LS on the best solution, we first convert bt to its corresponding real-coded form ct; after performing the local search, we convert the best found solution back to binary form and replace bt with it if it has reached a better CX value. The SW-LS is a stochastic hill-climber that uses an adaptive step size ρ to discover promising areas of the search space. It starts from ct and makes several randomized moves toward better nearby solutions. After a number of successful or unsuccessful moves, the algorithm adjusts its step size: if five successful steps are taken in a row, ρ is multiplied by 2; if three unsuccessful steps are taken in a row, ρ is divided by 2. Figure 4 shows how the SW-LS works. There, Success, Fail and Eval are variables counting the numbers of successes, failures and function evaluations spent during the SW-LS process, respectively; s′ and s″ are two-dimensional arrays holding the position of s after positive and negative perturbations, respectively; bias is an array holding the search history during the optimization; and maxEval is the maximum number of FEs assigned to the SW-LS in each local-search period. Note that, after performing the local search, ct is replaced with s provided it has reached a better CX value; then bt is replaced by the result of applying the real-to-binary procedure to ct.
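A compact sketch of the SW-LS following the description above; the bias-update constants follow the usual Solis-Wets scheme and are an assumption here:

```python
import random

def solis_wets(f, x, rho=0.1, max_eval=100, rng=None):
    """Stochastic hill-climber: try x + bias + noise, then the opposite
    move; 5 straight successes double rho, 3 straight failures halve it."""
    rng = rng or random.Random()
    n = len(x)
    best = f(x)
    bias = [0.0] * n
    succ = fail = evals = 0
    while evals < max_eval:
        step = [rng.gauss(0.0, rho) for _ in range(n)]
        for sgn in (1.0, -1.0):
            cand = [xi + sgn * (bi + si) for xi, bi, si in zip(x, bias, step)]
            fc = f(cand)
            evals += 1
            if fc < best:
                x, best = cand, fc
                # drift the bias toward the successful direction
                bias = [0.5 * bi + 0.4 * sgn * si
                        for bi, si in zip(bias, step)]
                succ, fail = succ + 1, 0
                break
        else:
            # both the positive and negative moves failed
            succ, fail = 0, fail + 1
            bias = [0.5 * bi for bi in bias]
        if succ >= 5:
            rho, succ = rho * 2.0, 0
        elif fail >= 3:
            rho, fail = rho / 2.0, 0
    return x, best
```

The adaptive ρ is what lets the procedure first take large exploratory steps and then shrink onto a local optimum within its fixed evaluation budget.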
14. In the proposed algorithm, migration is performed globally, which tends to accelerate the convergence rate of the population. In the migration process, the binary and real values of the best solution (bt and ct) are copied into all the personal best binary and real solutions (Bt and Ct), respectively.

Simulation Results
The aim of this section is to evaluate the performance of the proposed algorithm on the node localization problem in WSNs. In our simulations, we first construct 10 random network topologies, named TOP0-TOP9, with 4 different transmission ranges of the nodes, and then run the proposed algorithm on these networks. More specifically, we first describe the randomly generated network topologies in terms of the number of nodes versus neighbourhood cardinality and the number of anchor nodes versus the number of non-anchor nodes. Then, we investigate how applying the BL and MT procedures helps the proposed algorithm boost its performance. Finally, we compare the proposed algorithm with six existing optimization approaches on these network topologies.
In the network topologies, the noise factor NF is 0.1, the transmission range of the nodes, which controls their connectivity, is r ∈ {0.13, 0.15, 0.18, 0.22}, and the numbers of anchor and non-anchor nodes are 20 and 180, respectively. The nodes are uniformly placed in the square region [0, 1] × [0, 1] ⊂ R².
Topology setup: Figure 5 shows the mean percentage of all nodes (anchor and non-anchor) over the 10 random network topologies against the neighbourhood cardinality of the nodes, for the 4 different connectivity ranges.
As shown in Figure 5, the larger the communication range, the larger the neighbourhood cardinality; for instance, for r=0.13, about 13% of nodes have 10 adjacent nodes.

The impact of applying the local search to the performance of the algorithm
The local search procedure has a great impact on the proposed algorithm, whether it is performed as a pre-processing procedure (the MT procedure) in the location initialization process or as an interleaved procedure (the BL procedure) in the evolutionary process. In this section, we investigate its effect on the proposed algorithm. First, we look at the effect of the MT procedure on the CX values; second, we examine the effect of the BL local search on the CX values. To this end, we first compare QEA+MT with QEA on the ten randomly created network topologies described in the previous subsection, and then we compare the proposed algorithm (QEA+MT+BL) with QEA+MT on the same topologies. Figure 6 shows the CX trends of QEA+MT and QEA on the first topology (TOP0) with r=0.13.
As shown in Figure 7, for r=0.13 about 55% of non-anchor nodes have only one adjacent anchor node, while for r=0.18 and r=0.22 roughly 40% and 15% of them, respectively, have one neighbouring anchor node. Furthermore, about 27%, 5% and 1% of non-anchor nodes, respectively, have no nearby anchor nodes at all. This clearly indicates that solving the localization problem on the proposed network topologies is very demanding.
As shown in Figure 6, QEA+MT reaches better CX values much faster than QEA. As a result, we can suggest that using the MT procedure boosts the ability of the algorithm to find better results. Table 2 summarizes the results of QEA+MT and QEA on the 10 network topologies with the 4 connectivity ranges; the best results are typed in bold. As represented in Table 2, QEA+MT offers the best results in all cases.

Comparison against existing optimization techniques
In order to evaluate the performance of the proposed algorithm on randomly generated networks, we compare it with the SAL [15], GAL [43], ICA [30], PSOL [5], TSA [28] and PAES [34] on the ten randomly generated network topologies. We use the best parameter values for all the algorithms, as recommended in [5,15,28,30,34,43] and listed in Table 3.
As shown in Figure 8, the proposed algorithm (QEA+MT+BL) reaches better CX values than QEA+MT. We can also see that the CX values of the proposed algorithm drop rapidly after every 100 generations in a step-shaped fashion; the reason for this behaviour is that the BL procedure is reactivated every 100 generations. Table 4 summarizes the proposed algorithm's performance with and without the BL procedure on the ten network topologies using the four communication ranges; the best results are typed in bold. According to Table 4, for r=0.13 the proposed algorithm with the BL procedure performs best on only 3 cases, and the proposed algorithm without the BL procedure on the remaining 7; interestingly, for r=0.15, 0.18 and 0.22, the QEA+MT+BL performs best on 7 cases, and the remaining 3 are gained by the proposed algorithm without the BL. This suggests that the BL procedure has a positive effect on the performance of the algorithm.
To make a fair comparison, the termination condition for all the algorithms is set to 50000 function evaluations (FEs). That is, for the population-based algorithms (GAL, PSOL, ICA and the proposed algorithm), the population size is set to 50 and the maximum number of iterations for GAL and PSOL is set to 1000; for the single-solution algorithms (SAL, TSA and PAES), on the other hand, the maximum number of iterations is set to 50000. Note that, because of the involvement of the BL local search, the maximum number of iterations for the proposed algorithm is set to 980, so that the overall number of FEs for the proposed algorithm is 50000. Moreover, due to the new termination condition (50000 FEs), we have to ignore the other termination conditions of the algorithms; for instance, for the SAL, we ignore Tf (the final temperature of the SAL algorithm). Table 5 reports the results of all the algorithms on the ten randomly generated network topologies and the four different communication ranges; the best results are typed in bold.
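The step-shaped drops can be understood from a skeleton of the Baldwinian scheme: every 100 generations the local search refines each candidate and the improved fitness is credited back, but the genotype itself is left unchanged, so learning guides selection without Lamarckian overwriting. The sketch below is ours, with a toy sphere objective and a simple random-perturbation search standing in for Solis and Wets' method; the quantum-rotation update of the population is elided.

```python
import random

def sphere(x):
    """Toy objective standing in for the localization cost."""
    return sum(v * v for v in x)

def baldwinian_fitness(x, f, steps=20, sigma=0.1):
    """Toy stand-in for Solis and Wets' method: random perturbations,
    keeping the best point found. Baldwinian: only the improved
    fitness is returned; the genotype x is never modified."""
    best, best_f = list(x), f(x)
    for _ in range(steps):
        cand = [v + random.gauss(0, sigma) for v in best]
        cf = f(cand)
        if cf < best_f:
            best, best_f = cand, cf
    return best_f

random.seed(1)
pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(10)]
for gen in range(1, 301):
    fitness = [sphere(ind) for ind in pop]
    if gen % 100 == 0:
        # BL reactivation: fitness reflects the learned phenotype,
        # but the genotypes in `pop` are NOT replaced.
        fitness = [baldwinian_fitness(ind, sphere) for ind in pop]
    # (selection and the quantum-rotation update of `pop` would go here)
```

Because the local search can only improve a candidate's evaluation, each reactivation produces the abrupt fitness improvement visible as a step in Figure 8.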
It can be observed in Table 5 that for r=0.13 and 0.15 the proposed algorithm achieves the best results on all network topologies (except TOP4 and TOP5); after it, the PAES, TSA and SAL are the second- to fourth-best algorithms. Furthermore, for r=0.18 and 0.22 the QEA+MT+BL maintains its superiority over the other algorithms and achieves the best results on all network topologies. It can also be seen that, as the connectivity range increases, the relative performance of the PAES and TSA changes significantly: for r=0.13 the PAES is superior to the TSA on 9 cases, and for r=0.15 on 7 cases; however, for r=0.18 and 0.22, the TSA is better than the PAES on 8 and 10 cases, respectively.
Intuitively, a combination of an approximation procedure such as the multi-trilateration technique with an optimization process such as the SA or QEA induces better performance (see the results of the proposed algorithm and the TSA). It is also observed that the pure optimization algorithms could not yield good results (see the results of the GAL, ICA and PSOL). To shed more light on the LE values, Figure 9 demonstrates the estimated coordinates of the network nodes obtained by the proposed algorithm and its spin-offs, as well as the other six optimization algorithms, on TOP0 with r=0.13, where black solid stars, rectangles, multiplication signs and straight lines represent the coordinates of anchor nodes, the real positions of non-anchor nodes, the estimated positions of non-anchor nodes, and the Euclidean distance between the real and estimated positions of non-anchor nodes, respectively. As shown in Figure 9, the proposed algorithm as well as QEA+MT estimates the positions of non-anchor nodes with higher accuracy than the other algorithms; after them, the PAES and TSA offer good accuracy.
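The per-node error drawn as line segments in Figure 9 is simply the Euclidean distance between each real and estimated position; a natural aggregate (our assumption for how the LE values are computed, since the formula is not restated in this section) is the mean of these distances over all non-anchor nodes.

```python
import math

def localization_error(real, estimated):
    """Mean Euclidean distance between the real and estimated
    positions of the non-anchor nodes."""
    assert len(real) == len(estimated)
    return sum(math.dist(p, q) for p, q in zip(real, estimated)) / len(real)

real = [(0.2, 0.3), (0.7, 0.1)]
est = [(0.2, 0.3), (0.7, 0.4)]
print(localization_error(real, est))  # distances 0 and 0.3, mean ≈ 0.15
```

An algorithm with a lower aggregate LE draws shorter connecting segments in a plot like Figure 9, which is why the proposed algorithm's panels appear the least cluttered.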

Conclusion
This paper proposed a new memetic algorithm for the fine-grained localization problem in WSNs. The memetic algorithm is based on the QEA and a local search procedure in the form of a Baldwinian scheme. The QEA improves the explorative ability of the algorithm, and the local search procedure enhances its exploitation ability, finding local optima more quickly. In particular, the proposed algorithm can be summarized in the following aspects. First, to construct good initial locations for the sensor nodes, the algorithm applies the multi-trilateration procedure iteratively. Second, to enhance the exploitation ability of the algorithm, it utilizes Solis and Wets' local search in the form of a Baldwinian scheme. Third, to make the proposed algorithm suitable for the localization problem, a conversion procedure that converts the real-coded solutions to binary solutions is used. The proposed algorithm was compared with six existing optimization techniques on ten randomly created network topologies. The simulation results and comparisons demonstrate the superiority of the proposed algorithm in terms of localization error and robustness.