Cloud Computing In Aircraft Network
Department of MCA, Bharath Institute of Science and Technology, Chennai, TamilNadu, India.
Load balancing is a method of distributing workload across multiple servers in a network. Typical datacenter implementations rely on large, powerful (and expensive) computing hardware and network infrastructure, which are subject to the usual risks associated with any physical device, including hardware failure, power or network interruptions, and resource limitations in times of high demand. Load balancing in the cloud differs from classical load-balancing architecture and implementation in that it uses commodity servers to perform the load balancing. This creates new opportunities and economies of scale, while also presenting its own unique set of challenges.

Load balancing is used to ensure that none of your existing resources sit idle while others are over-utilized. To balance the load distribution, you can migrate load from source nodes (which have surplus workload) to comparatively lightly loaded destination nodes. When load balancing is applied at runtime, it is called dynamic load balancing; it can be realized in either a direct or an iterative manner, depending on how the execution node is selected. In the iterative methods, the final destination node is determined through several iteration steps; in the direct methods, it is selected in a single step. Another kind of load balancing method, Randomized Hydrodynamic Load Balancing, is a hybrid that takes advantage of both the direct and iterative approaches.
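The direct and iterative selection strategies described above can be sketched as follows. This is a minimal illustration, not a production load balancer; the node names, load values, and migration amount are invented for the example.

```python
def select_direct(loads, source):
    """Direct method: choose the final destination node in one step
    (here, simply the least-loaded node other than the source)."""
    return min((n for n in loads if n != source), key=lambda n: loads[n])

def select_iterative(loads, source, steps=3):
    """Iterative method: refine the candidate destination over several
    iteration steps, stopping when no lighter node can be found."""
    candidates = [n for n in loads if n != source]
    current = candidates[0]
    for _ in range(steps):
        better = min(candidates, key=lambda n: loads[n])
        if loads[better] >= loads[current]:
            break  # no lighter node remains; stop iterating
        current = better
    return current

def migrate(loads, source, dest, amount):
    """Move `amount` units of work from the surplus source node
    to the lightly loaded destination node."""
    loads[source] -= amount
    loads[dest] += amount

# Example: n1 carries a surplus, so work is shifted to the lightest node.
loads = {"n1": 90, "n2": 30, "n3": 55}
dest = select_direct(loads, "n1")
migrate(loads, "n1", dest, 30)
print(loads)
```

In this toy setting both strategies converge on the same destination; the difference matters in larger systems, where the iterative method trades extra selection steps for less global load information per step.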