Ng Keng Meng*
Division of Mathematical Sciences, School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore
Received Date: April 03, 2012; Accepted Date: April 06, 2012; Published Date: April 10, 2012
Citation: Meng NK (2012) Algorithmic Information Theory Using Kolmogorov Complexity. J Applied Computat Mathemat 1:e106. doi: 10.4172/2168-9679.1000e106
Copyright: © 2012 Meng NK. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
We are faced with the task of storing and communicating increasingly huge amounts of information. The development of digital storage media and communication channels has not kept up with this growing demand in an economically viable way. Data compression is fast emerging as the key technique for resolving this technological issue. Kolmogorov complexity is the central tool used in this analysis. Informally, the Kolmogorov complexity of an object is the length of the shortest string from which the original can be reconstructed losslessly by a general-purpose computer. Thus the Kolmogorov complexity of an object measures the maximum amount of compression that any lossless compression program can achieve in theory. The main issue in translating theory into application is that Kolmogorov complexity is non-computable: no computer program can compute precisely the Kolmogorov complexity of a given string. Fortunately, Kolmogorov complexity can be approximated from above by a computer, and this approximation has become the standard method of implementing compression-based classification techniques.
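To make the idea concrete, here is a minimal sketch of the standard approach: the length of a compressed representation serves as a computable upper bound on Kolmogorov complexity, and the normalized compression distance (NCD) built from it supports classification. The function names `approx_K` and `ncd` and the use of `zlib` are illustrative choices, not part of the original text.

```python
import zlib

def approx_K(data: bytes) -> int:
    """Computable upper bound on Kolmogorov complexity:
    the length of the zlib-compressed representation."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable stand-in for the
    (uncomputable) information distance used in classification."""
    cx, cy = approx_K(x), approx_K(y)
    cxy = approx_K(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Highly regular data compresses well, so its approximate complexity
# is far below its raw length of 1000 bytes.
print(approx_K(b"ab" * 500))

# Similar strings yield a small NCD; dissimilar strings a larger one.
print(ncd(b"ab" * 500, b"ab" * 499))
print(ncd(b"ab" * 500, bytes(range(256)) * 4))
```

Because `zlib` only ever over-estimates the true complexity, any such approximation inherits the limitation discussed next: we can bound the complexity from above, but we cannot certify how tight that bound is.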
Unfortunately, the main drawback of applying compression techniques in practice is that there is provably no way of computing an error bound for any approximation of Kolmogorov complexity. In fact, no single compression program can work well for every data set, because a compressor can never know whether its current compression ratio is the best possible. Most compression programs therefore fail to work at maximum efficiency on every data set.
To develop tools that overcome these problems in practice, a fundamental theoretical question to consider is how the degree of incompressibility varies with the computational power of the information being tested [1-2]. For instance, suppose we are given the outcome of a series of random coin tosses. This information will clearly pass any reasonable statistical test for randomness, but its stochastic nature prevents any useful information from being extracted. In contrast, any finite amount of useful information can be written on a sufficiently long tape; this is also incompressible information, yet highly useful. Information can therefore fail to be compressible for one of two reasons: it can have high entropy and contain true chaos, or it can encode highly complex information in an orderly fashion. This distinction has recently been identified and made precise by researchers, and many recent projects have explored this area [1,3]. The evidence from these studies suggests that high computational strength is accompanied by low randomness content.
This is a rapidly expanding area of research with many potential applications, so it is important to have an avenue for distributing new results quickly and inexpensively. The OMICS Group provides open access to scientific publications and enables new results to reach a wide audience in as short a time as possible.