ISSN: 0974-7230
Journal of Computer Science & Systems Biology


Deciphering the Enigma of Human Creativity: Can a Digital Computer Think?

Felix T Hong*

Department of Physiology, Wayne State University, Detroit, MI 48201 USA

Dedicated to the memory of the late Professor Michael E. Conrad of Wayne State University

*Corresponding Author:
Felix T Hong
Department of Physiology
Wayne State University
Detroit, MI 48201 USA
E-mail: [email protected]

Received date: July 11, 2013; Accepted date: August 21, 2013; Published date: August 30, 2013

Citation: Hong FT (2013) Deciphering the Enigma of Human Creativity: Can a Digital Computer Think? J Comput Sci Syst Biol 6:228-261. doi:10.4172/jcsb.1000120

Copyright: © 2013 Hong FT. This is an open-access article distributed under the terms of the Creative Commons Attribution License,which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

The objective of the present article is a) to explain humans' high creativity in non-mystic and unambiguous terms, b) to evaluate the performance of problem-solving computer programs and c) to make suggestions about future designs of heuristics. Unlike many attempts in the past century, we sought inspiration from two sources that experts had neglected or excluded from consideration: artificial intelligence and the introspections of a number of highly creative individuals who confessed a penchant for visual thinking. Simonton's chance-configuration model was refurbished accordingly. The refurbished model can now explain a number of outstanding puzzles that had eluded our predecessors: a) what intuition is, b) why creators had no idea about the source of their inspiration even after the fact, c) the peculiar event at the moment of discovery known as the "aha" phenomenon, and d) the type of accidental discovery known as serendipity. Moreover, the elusive concept of abduction advanced by philosopher Charles Peirce is actually visual thinking in disguise. Blessed with this new understanding, we could evaluate the performance of a number of problem-solving computer programs from a cognitive point of view. It turned out that the common thread linking human creativity and computer-based creative problem solving is heuristic searching. Recognizing that a digital computer must perform heuristic searching in a digital environment, which is not the friendliest environment for doing so, we made suggestions about how to circumvent the restrictions, without sacrificing the principles, in future designs of heuristics.

Keywords

Creativity; Computer-based creative problem solving; Intuition; Visual thinking; Abduction

Introduction

The thought process of geniuses or individuals with superior mental abilities has captured the fascination of philosophers and scientists since the inception of these professions. The mystery of human creativity was dramatically captured by a remark of mathematician Carl Friedrich Gauss. In referring to a long-standing problem which he had just solved, Gauss said, "The riddle solved itself as lightning strikes, and I myself could not tell or show the connection between what I knew before, what I last used to experiment with, and what produced the final success" (p. 383 of [1]). In other words, the creator himself did not know his own thought process that had logically led to his own discovery, even after the fact. In the past century, psychologists and, subsequently, cognitive scientists attempted to elucidate the enigma of human creativity with limited success. Nevertheless, the accumulated data and limited insights are sufficient to be assembled into a coherent and intelligible explanation of humans' high creativity. The veil of mystery can now be lifted.

Experts were divided in their views regarding the creative process. Historically, there are two schools of thought regarding geniuses' thought process among practicing psychologists: the elitists and the non-elitists. Hayes [2] and Weisberg [3], both advocating the non-elitist view, claimed that geniuses simply knew more, had better techniques, and worked harder than non-geniuses. What they implied was that logical deductions are the path to creative acts. Others made explicit claims about the role of logical deductions. Educator Lawson [4] promoted deductive reasoning to the point of totally dismissing induction as an unscientific method of reasoning. He insisted that scientific discoveries start with proposing a hypothesis, followed by deductive reasoning to generate predictions, which are then verified through scientific observations and/or experimentation. He named his unique theory the Hypothetico-Predictive Theory. Willingham [5], whose "Ask the Cognitive Scientist" column frequently appeared in American Educator (an official publication of the American Federation of Teachers), essentially advocated the same view. Lawson's theory was by no means unique. Decades ago, Medawar [6] also proposed a virtually identical theory except in name, the "hypothetico-deductive theory," without the additional claim of priority. Medawar also dismissed the existence of induction; he treated induction as merely "the inverse application of deductive reasoning". The latter dogma had long been perpetuated in lay publications. Arthur Conan Doyle essentially affirmed this dogma. Through the words of his alter ego, Sherlock Holmes, Doyle asserted that the key to solving crime cases is logical deduction [7].

Deductive reasoning can be learned by means of rigorous training, especially scientific training. Moreover, proficiency in deductive reasoning is experience-independent [8]. Thus, the ability to perform deductive reasoning alone cannot explain individual differences in creativity. This line of thinking inevitably led to the conclusion that what can possibly make a difference in individual creativity is knowledge. As a consequence, emphasis on the transfer of knowledge became a central issue in education. In the age of information explosion, this emphasis became the nightmare of students, especially students of the biomedical sciences. However, Bell [9] greeted the non-elitist view with the following scathing comment: "If drudgery alone sufficed, how is it that many gluttons for hard work who seem to know everything about some branch of science, while excellent critics and commentators, never make even a small discovery?"

In a monumental treatise with the title 'The Logic of Scientific Discovery', Popper essentially negated the message conveyed by his title by proclaiming that there is essentially no logic that leads to a scientific discovery [10]. Furthermore, several apparent discrepancies cast doubt on the crucial role of knowledge and logic in creativity. First, if science is the best training ground for logical reasoning, why did the real world find liberal arts education the best training ground for business executives [11]? Second, the importance of (domain-specific) knowledge was probably over-rated. A minority of psychologists concluded, by experimental correlation, that prior knowledge hinders creativity [12]. Taken to the extreme, this finding implies that those who knew the least seemed to be in the best position to solve a difficult problem. Third, Simonton [13] conducted an investigation of the effect of education on creativity. He found that creativity initially increases with increasing years and level of education, reaches a peak at approximately the sophomore year, and then declines steadily with additional years and increasing level of education (Figure 1A). Simonton's data show that, surprisingly, on average, doctoral degree holders appeared less creative than individuals with minimal education. In contrast, dogmatism exhibits the reverse trend, with a minimum around the sophomore year and greater severity at both extremes (Figure 1B). Notably, doctoral degree holders are significantly more dogmatic than individuals with minimal education. Thus, the dumbing-down effect of education, frequently alluded to in folklore, is probably reflected in declining creativity through the cultivation of dogmatism. Taken seriously, this correlation suggests that one should drop out of college around the sophomore year for the sake of maximum preservation of creativity, just as several successful high-tech entrepreneurs have done in recent years. Fortunately, the data are meaningful only in the statistical sense, and individual exceptions abound. Even so, without effective preventive measures, one still runs the risk of losing creativity in exchange for higher education; the gain does not make up for the loss caused by the harm of dogmatism. Einstein once said, "Imagination is more important than knowledge". If not logic, what else? If not knowledge alone, what else in addition? How can imagination be cultivated instead of dogmatism?


Figure 1: The risk of education. A. The ranked eminence of the 192 creators and 109 leaders is plotted as a function of the level of formal education. B. The degree of dogmatism of 33 American presidents is similarly plotted. (Reproduced from [13]).

For those who did not accept the non-elitist explanation of creativity, the elusive factors that contribute to scientists' discoveries were often referred to as intuition, inspiration, and insight. Yet, experts continued to be baffled by these terms; almost everyone knows how to use them, but no investigator seemed able to explain them in explicit and intelligible terms. Abstract or even mystic descriptions often appeared in articles whose authors continued to pursue the topics of intuition and insight [14-17]. The predicament was best illustrated by a humorous remark made by Sternberg and Davidson [18]: "What we need most in the study of insight are some new insights!"

Suffice it to say that creativity research over the past century has not succeeded in demystifying the process of human creativity. Curiously, it was mathematician Poincaré who made an attempt to reconcile the logic vs. intuition debate. He said, "It is by logic that we prove, but by intuition that we discover" [8,19,20]. He thought that both logic and intuition are needed, but during different phases of the creative process. He published a book, "The Foundation of Sciences", about a hundred years ago [21]. It was essentially an introspective account of his thought process in mathematical creation. However, his work was greeted with suspicion. Boden [22] treated him as a "useful witness" to the creative process, whereas others thought his opinion preposterous [9]. Two important introspective reports subsequently appeared. Mathematician Hadamard [23] compiled a collection of introspective reports by fellow mathematicians and scientists, including Albert Einstein. Far less known but of equal importance was the introspective report of Nikola Tesla, which first appeared in an obscure trade journal called the Electrical Experimenter and which was subsequently published as his autobiography Moji Pronalasci (My Inventions) [24]. Psychologist Harris [25] responded to Einstein's introspection unfavorably. In apparent contempt, he derisively characterized the term "creativity" as one of those "vague words whose frequency of use lies in inverse proportion to the carefulness of use". Ironically, obscurity let Tesla's autobiography escape the experts' ridicule.

Actually, it is not that difficult to define creativity if one has working knowledge of Western science history. Here, we shall define creativity tentatively as the ability to do original work rather than just to create something new but trivial. In this way, we shall differentiate between high creativity and a "run-of-the-mill" type of creativity based on minor improvements of others' creations (the so-called "Me-Too" creativity). It is even harder to define the term genius. In certain cultures, geniuses were defined as those who had prodigious knowledge; whether the knowledge was important or trivial mattered little. Of course, this article does not subscribe to the latter definition. For the purpose of creativity research, we shall not define geniuses on the basis of their public recognition and/or social status. Realizing that none of the so-called objective criteria are reliable prior to elucidation of the enigma of human creativity, we shall let the examples cited in this article implicitly define what a genius is. But we shall not take an implicit or mystic definition of intuition as a satisfactory resolution of the enigma of creativity.

It has long been known, in the psychology literature, that there are two modes of thinking: visual thinking and verbal thinking [8,19,26,27]. Both Einstein and Tesla indicated that they had a penchant for thinking in pictures rather than in words and symbols. Other scientists who gave similar testimonials include Richard Feynman and Stephen Hawking. The pictures invoked in visual thinking need not be concrete images that one can actually see. The pictures or images could be imagined mentally, just as one recalls the gentle face of a long-deceased relative or friend. What one imagines mentally is called mental imagery [20,28,29] (also known as the mind's eye [27]).

The split-brain research pioneered by Sperry [30,31] ushered in attempts to interpret humans' thought process in terms of the functions of the two cerebral hemispheres, in accordance with early theories of cerebral lateralization. Creative activities and intuition have been thought to be attributable to right-brain function [8]. This interpretation spurred a right-brain movement in education. However, as further progress was made in lateralization research, this line of thinking fell out of favor with experts and the right-brain movement was crushed [25], mainly because no detectable differences regarding the preferential use of the two cerebral hemispheres could be demonstrated between geniuses and ordinary but competent individuals [32,33]. However, the evidence used to condemn it was flawed (see General Discussions). Mental imagery was also dismissed by some psychologists as an epiphenomenon serving no real physiological function [34]. In hindsight, the falsification of visual thinking by means of flawed experiments was premature. Note that absence of evidence is not evidence of absence. More recently, brain scientist Taylor, who fully recovered from a rare form of stroke that had obliterated her left-brain function for an extended period, gave a vivid first-hand account of the subjective inner feeling of the separate functions of the two cerebral hemispheres [35]. The right-brain interpretation is fundamentally correct, although the details are more complicated than early theories envisioned.

With the advent of behaviorism in the mid-20th century, psychologists began to admit only behavioral evidence that could be objectively demonstrated. Tangible conclusions were reached after extensive behavioral experiments, often with the aid of statistical methodology. Geniuses' introspections were therefore deemed too subjective to be reliable. For lack of objective behavioral evidence, the intuitive feelings of practicing scientists and mathematicians have thus been banished to the back alley of "folk psychology". In hindsight, this attitude was somewhat strange, because it was tantamount to studying politics and business administration by honoring only the opinions of academic scholars while ignoring the testimonials of practicing politicians and business leaders.

The subsequent rise of cognitivism, as a counterforce to the monopoly of behaviorism, did not reverse the trend. Instead, cognitive psychologists indulged in the molecular and cellular biology of the human brain, thus missing the opportunity to decipher the enigma. The experts ignored a painful lesson learned in complexity-theory research: understanding at the atomic and molecular level does not automatically lead to understanding at the macroscopic or systems level.

The author approached the investigation of human creativity from two unlikely angles: bio-computing and education. My earlier research interest in bio-computing guided me to view the enigma of human creativity from the perspective of artificial intelligence and computer science. To the best of my knowledge, Cohen [36] was the first to report an association of the right and the left hemispheric functions with parallel and sequential processing, respectively. But the idea never gained popularity among psychologists, presumably because of premature falsification by means of the above-mentioned flawed experiments. Curiously, Mozart was probably the first to recognize the roles played by parallel processing and sequential processing in music composers' thinking. In response to the inquiry of an admirer, Baron von P., Mozart described his approach towards music composing [37,38]. He wrote, "The whole, though it be long, stands almost complete and finished in my mind, so that I can survey it, like a fine picture or a beautiful statue, at a glance. Nor do I hear in my imagination the parts successively, but I hear them, as it were, all at once (gleich alles zusammen)". Note that Mozart used a picture metaphor, although what concerned him were tonal patterns instead of visual patterns. In other words, he invoked visual thinking (or rather, auditory thinking), if we are allowed to generalize visual patterns to patterns pertaining to humans' other special senses. Mozart told us that his mind processed musical notes not by means of sequential processing - hearing the parts successively - but rather by means of parallel processing - gleich alles zusammen! This is all the more astonishing in view of the fact that Mozart had never taken a single course in computer science or artificial intelligence.

To the best of my knowledge, Herbert Simon was among the first to point out that creative problem solving involves recognition [39]. Thus, if we treat the problem as a pattern, then finding a solution is tantamount to recognizing a template that best matches the pattern. Pushing Simon's view one step farther, we realized that there appear to be two fundamentally different ways of recognizing a pattern: digital pattern recognition and analog pattern recognition. Putting this latter notion together with Mozart's identification of the two processes of handling musical notes, it became obvious that verbal thinking is a sequential process (digital pattern recognition) whereas visual thinking is a parallel process (analog pattern recognition). Mozart's opinion also corroborated the introspective reports of Einstein and of Tesla, thus making the association of visual thinking with creativity more compelling than ever. Messages such as these were almost certainly off-limits to practicing psychologists, not only because of their speculative nature but also because of their subjective nature, coming as they did from untrained non-experts, i.e., folk psychologists.

Mozart's insight suggested that there is no fundamental difference in the core processes of thinking in science as compared to those in the humanities and arts. We were thus motivated to formulate a unified interpretation of human creativity that encompasses both science and the humanities, and perhaps any of humans' other mental activities [40-42]. In this article, we shall use existing data and insights to synthesize a coherent and understandable explanation of humans' high creativity. The primary purpose is to demystify the enigma. In order to do so, we need to close a gap left open by a missing link. The identification of this missing link revealed a far more intimate relationship between creativity and education than we had been led to believe. We shall defer the detail to a subsequent section. The implications for education were discussed by Hong [43].

Our interpretation of human creativity was a decisive break from the traditional view that the thinking of scientists is rational whereas that of artists is sensible (the rationality vs. sensibility dichotomy). This point of view also ran afoul of the highly popular theory of multiple intelligences proposed by Gardner [44]. In Gardner's initial classification of human intelligences into 7 distinct types, mathematical-logical and musical intelligences were categorized as two distinct, if not mutually exclusive, human capabilities. Elsewhere we pointed out the major flaw of Gardner's theory: a lack of parsimony [42].

Concurrent with psychologists' creativity research, computer scientists and investigators in artificial intelligence succeeded in constructing computer programs that could solve problems with increasing ingenuity. Philosophers and computer scientists have since argued over whether a digital computer could be creative and actually think [45-50]. Elucidation of human creativity thus offers a renewed opportunity to re-examine the same question and to foster cross-fertilization between creativity research and computer-based problem solving research.

Exhaustive Search, Random Search, and Heuristic Search

By treating creative problem solving as a process of pattern recognition, how effectively and efficiently one searches for and finds a suitable template to match a given pattern reflects how creative one is. A useful concept that was conspicuously absent in the literature of psychology and cognitive science is that of heuristic searching [39,51]. In problem solving, the space that contains all theoretically possible solutions is commonly known as the search space. For a well-defined problem with a finite number of possible solutions, it is possible to examine each and every solution exhaustively so as to make sure that no stone remains unturned. When the search space is sufficiently large, it may not be possible to examine each and every possible solution in real time. One of the few options left is random searching, which essentially depends on luck for success. Luck was an important element in making novel discoveries. But is luck alone sufficient? Conventional wisdom cast serious doubt. It was not an uncommon observation that some creative individuals seemed to be consistently luckier than others. Conventional wisdom also emphasizes the importance of hard work, as if repetition alone could improve the odds of winning. Something just did not quite add up correctly.

In operations research and human problem solving, approaching a problem by trial and error and by examining every possible way is discouraged. This is because the number of possibilities rapidly increases beyond bound as the complexity of the problem increases - a situation commonly known as combinatorial explosion. A typical example is provided by the enormous number of possible moves, countermoves, countermoves against countermoves, etc., in a chess game when a player tries to outsmart the opponent and searches for a strategic move by planning ahead at a search depth of several levels (or, rather, plies - half moves - in chess jargon). Even IBM supercomputer Deep Blue, which narrowly lost to world chess champion Garry Kasparov in 1996 but eventually defeated him in 1997, could not afford to explore the search space exhaustively [52]. Therefore, selective searching based on explicitly prescribed rules of thumb (called heuristics) often allows a problem to be solved within a reasonable amount of time, whereas an undirected or trial-and-error search would require an enormous amount of time, and often could not be completed in a human′s lifetime.
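To make the contrast concrete, the following minimal Python sketch compares an exhaustive (breadth-first) search with a heuristic (greedy best-first) search on a toy grid; the grid, the start and goal positions, and the Manhattan-distance heuristic are illustrative assumptions, not anything prescribed by the discussion above.

import heapq
from collections import deque

GRID_W, GRID_H = 30, 30
START, GOAL = (0, 0), (29, 29)

def neighbors(pos):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield (nx, ny)

def exhaustive_search():
    """Breadth-first search: examines states indiscriminately until the goal turns up."""
    frontier, seen, expanded = deque([START]), {START}, 0
    while frontier:
        pos = frontier.popleft()
        expanded += 1
        if pos == GOAL:
            return expanded
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search():
    """Greedy best-first search: a rule of thumb (Manhattan distance to the goal)
    steers the search, so far fewer states are examined."""
    def h(pos):
        return abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])
    frontier, seen, expanded = [(h(START), START)], {START}, 0
    while frontier:
        _, pos = heapq.heappop(frontier)
        expanded += 1
        if pos == GOAL:
            return expanded
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))

if __name__ == "__main__":
    print("states expanded, exhaustive search:", exhaustive_search())
    print("states expanded, heuristic search: ", heuristic_search())

On an unobstructed grid the heuristic search examines roughly one state per step toward the goal, whereas the exhaustive search examines essentially the whole grid; the gap widens explosively as the search space grows, which is the point of the chess example above.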

For some types of novel problems, the search space may be poorly defined and/or it may be difficult to come up with explicit heuristics. In the latter case, the creators often attributed the vague “rules” to intuition or inspiration. Heuristics can be articulated whereas intuition is often vague and (almost) impossible to articulate. How intuition differs from explicit heuristics will be made clear in this article. We shall attempt to piece together insights gained over a century and reported in both the science and the humanities literature. Therefore, we shall first consider existing creativity models proposed during the past century.

Models of Creative Problem Solving

One of the earliest creativity models was Wallas′ Four Phase Model [53]: preparation, incubation, illumination and verification. The incubation phase was apparently suggested by Poincaré′s introspective account about a geological field trip [21]. An explanation of the incubation phase will be presented in a subsequent section. Table 1 shows a number of creativity models subsequently proposed during the past century. The chance-configuration model of Simonton [13] will be used as the main frame of reference in the present article; it is refurbished to the extent of being capable of explaining most, if not all, outstanding puzzles of human creativity.

Simonton′s theory claimed its parentage in Campbell′s blind variation and selective retention model [54-56]. The latter, in turn, followed the analogy of evolution and learning [57,58]. Simonton′s theory stipulates three stages of problem solving: blind variation, selection and retention. Simonton′s model of creative problem solving is analogous to evolutionary learning: the evolutionary triad of (random) mutation, natural selection, and perpetuation (reproduction).

The first phase of Simonton's model corresponds to the process of searching for potential solutions in the (subjectively) selected search space. Superficially, the notion of "blind" variation implies "random" searching. However, according to Wuketits [55], it means, instead, "not guided by anticipation," although Campbell himself had difficulty making it unambiguous. In my opinion, the notion implies that searching for a solution can be conducted neither by following a pre-determined route nor by trial and error alone, but rather by a compromise between the two extremes, i.e., heuristic searching. Simonton's original model did not specify how to conduct heuristic searches. Nor did it specify how creative people differ from others in conducting searches.

In the selection phase, the problem solver chooses, upon first screening of the search space, a short list of candidate solutions that are deemed more likely to be an appropriate solution. The ability to recognize the appropriate or probable solutions during the search process is just as important as the ability to define an appropriate search space. Failure to find a solution can be either due to exclusion of the solution from the search space or due to inability to recognize the solution when included.

In the retention phase, the solution that has been selected or recognized must be preserved and retained by some thought process. Often, more than one probable solution is found to match the problem, but not all of them are appropriate or right. Verification is thus required to complete the creative process. In the subsequent discussions, no distinction will be made between the terms “retention phase” and “verification phase”.
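The evolutionary analogy behind the three phases can be made concrete with a minimal Python sketch; the toy problem of matching a target string, the mutation rate, and the population size are illustrative assumptions rather than part of Simonton's model.

import random
import string

TARGET = "CREATIVITY"                 # an invented toy "problem"
ALPHABET = string.ascii_uppercase
random.seed(0)

def vary(candidate, rate=0.1):
    """Blind variation: characters are mutated without anticipating the outcome."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def fitness(candidate):
    """Selection criterion: how many positions already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(population_size=60, generations=500):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(population_size)]
    for generation in range(generations):
        # Selection: keep the configurations that match the target best.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 5]
        if fitness(survivors[0]) == len(TARGET):
            return generation, survivors[0]
        # Retention plus variation: survivors are carried over unchanged and
        # also serve as the seeds for the next round of blind variants.
        variants = [vary(random.choice(survivors))
                    for _ in range(population_size - len(survivors))]
        population = survivors + variants
    return generations, max(population, key=fitness)

if __name__ == "__main__":
    generation, best = evolve()
    print(f"best configuration after {generation} generations: "
          f"{best} ({fitness(best)}/{len(TARGET)} characters matched)")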

Simonton′s model can be conveniently recast in terms of pattern recognition. If a given problem is regarded as a pattern, then candidate solutions in the search space can be regarded as probable matching templates, and finding candidate solutions is tantamount to recognizing suitable templates, i.e., pattern recognition. Searching for candidate solutions begins with the search phase and ends with the match phase, thus resulting in the acquisition of a small number of solution templates that reasonably match the problem pattern. However, these candidate solutions must be subjected to further scrutiny in the verification phase, before either some of them are retained as the final solutions or none of them actually works and the search continues.
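Recast this way, the three phases can be sketched in a few lines of Python; the feature vectors, the cosine-similarity score used for the loose search-and-match screening, and the stricter tolerance check used for verification are all illustrative assumptions.

from math import sqrt

def similarity(a, b):
    """Loose, holistic match score (cosine similarity) used during search-and-match."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search_and_match(problem, templates, shortlist_size=3):
    """Search-and-match: screen the search space and keep the most promising templates."""
    ranked = sorted(templates.items(),
                    key=lambda item: similarity(problem, item[1]),
                    reverse=True)
    return ranked[:shortlist_size]

def verify(problem, candidate, tolerance=0.15):
    """Verification: a stricter, feature-by-feature check of a shortlisted candidate."""
    return all(abs(p - c) <= tolerance for p, c in zip(problem, candidate))

if __name__ == "__main__":
    problem = [0.9, 0.1, 0.8, 0.2]            # the pattern to be matched
    templates = {                              # a (tiny) search space of templates
        "template_A": [0.85, 0.15, 0.75, 0.25],
        "template_B": [0.10, 0.90, 0.20, 0.80],
        "template_C": [0.90, 0.20, 0.90, 0.60],
    }
    for name, vec in search_and_match(problem, templates):
        status = "retained" if verify(problem, vec) else "rejected on verification"
        print(f"{name}: similarity={similarity(problem, vec):.3f}, {status}")

In this toy run one template survives verification while another, superficially similar one is weeded out, which mirrors the point made below about false positives being eliminated during the verification phase.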

A Pattern Emerging from Comparison of Various Models of Creative Problem Solving

Instead of running through the list shown in Table 1, we here choose to carry out a comparative study of these models. If finding a satisfactory explanation of humans′ creative problem solving is itself a process of problem solving, then one way to do it is searching for a recognizable pattern among creativity models of the past century. This approach is tantamount to recognizing an emerging pattern from reports of the proverbial three blind men who attempted to figure out what an elephant looks like by touching different body parts of the elephant. None of them offered a recognizable pattern, but they might if their observations were pieced together.

In real-life problem solving, the first two phases of Simonton's model often take place alternately at high speed, and are difficult to separate. If the two phases are combined into a single phase of search-and-match, it becomes apparent that Simonton's model is equivalent to several other models listed in Table 1. Perceiving the equivalence of terminology may clarify our thinking regarding how we actually think. What we should not forget is: while we are doing this comparative study, we are actually performing the search-and-match step of Simonton's model. Whatever conclusions are figured out are only tentative. A second step of rigorous verification is still needed.

By inspecting Table 1, it is obvious that the search-and-match phase and the verification-retention phase correspond, respectively, to the solution-generating and solution-verifying processes stipulated by Newell et al. [59]. The two phases also correspond, respectively, to: a) Poincaré's intuitive and logical approaches [8,21], b) Kris' inspirational and elaborative phases [60,61], c) Bastick's visual-ability and verbal-ability modes [8], and d) Boden's parallel-intuitive and sequential-deliberative thinking [22].

By taking the formal correspondence seriously, we found that the above comparison implies that elaborate logical reasoning, which can be verbalized, is invoked during the solution-verifying phase. By the same token, the above models also imply that intuition and inspiration are often responsible for effective searching and keen identification of novel solutions during the solution-generating phase. In this regard, Poincaré's remark made eminent sense: "It is by logic that we prove, but by intuition that we discover". It is also readily identifiable that intuition or inspiration corresponds to Freud's primary-process thinking. Primary-process thinking was often characterized as irrational or non-rational, as opposed to secondary-process thinking, which was considered logical and rational. Bastick's reference to visual and verbal modes of thinking implies that intuition and inspiration are often associated with visual thinking, which is consistent with the afore-mentioned testimonials of several eminent creators. Boden's classification was consistent with Mozart's remark about his own thought processes in music composing. It was also consistent with Poincaré's classification. In short, what we attempted to accomplish in this word-replacing game was to recognize an emerging pattern among our predecessors' various models, each of which had captured certain features of the underlying cognitive processes. What remains to be done is to piece them together to formulate a coherent interpretation as well as to verify the interpretation in reference to past observations and experiments.

The various sets of terminology mentioned above are not completely synonymous. But the comparison and identification allows the meaning to shift from being explicitly descriptive of the phases or stages, such as the search-and-match phase and the solution-generating phase, to being descriptive of the underlying mechanisms (such as intuitive vs. logical, visual vs. verbal, or parallel vs. sequential). Freud's terminology was conspicuously nondescript. But it helps explain why primary-process thinking was often regarded as irrational and non-rational. As we shall see, there is nothing irrational about primary-process thinking; it is simply that, just like intuition, it is difficult to articulate or verbalize. But silence in offering a logical and verbal explanation cannot be construed as an admission of irrationality. On the other hand, Freud's classification was in tune with the common association of geniuses with insanity (e.g., the mad scientist designation) [62]. Actually, primary-process thinking is a characteristic thinking style of patients with manic-depressive (bipolar) disorder during the manic or hypomanic phase; their thinking, which is known as flight of ideas, is lightning fast!

In this word-replacing game, by arranging the corresponding terms in Table 1 in a judicious order, the range of diverse terms helps connect the underlying mechanisms to cognitive science (e.g., visual vs. verbal), on the one hand, and to computer science (e.g., parallel vs. sequential), on the other hand. Visual pattern recognition (and other sensory pattern recognition) involves recognition of the pattern as a whole. This was what Gestalt psychologists have always been preaching (It is regrettable that behaviorism succeeded in marginalizing Gestalt psychology, but cognitivism failed to resurrect it!). The idea sits well with the common knowledge that it often takes enormous software overhead to accomplish (analog) pattern recognition in a sequential digital computer. It also explains why intuition is difficult to verbalize, since verbalization is a sequential process and verbalization of an intuitive feeling is tantamount to a parallel-to-serial conversion.

Here, it is important to realize that it is not the parallel-to-serial conversion process itself that is time-consuming. Rather, it is finding a way of verbalization sufficient to capture the essence of a parallel process that is time-consuming and skill-dependent. Verbalization of a parallel process is tantamount to matching a pattern with an unknown nametag against another pattern with an already-known nametag. This is illustrated by a first-hand account of a survivor of the September 11, 2001, terrorist attacks on the Twin Towers of New York City's World Trade Center. This survivor described his ordeal of an hour-long journey down a staircase of one of the attacked buildings (C. Sheih, personal communication, 2001): "There was no smoke at all in the stairwell, but there was a strange peculiar smell, which I later remembered it smelling like how it does when one boards an aircraft. I later found out that this was jet fuel". Sheih's immediate awareness of the peculiar smell apparently stemmed from recognition of the smell pattern (olfactory pattern recognition). The verbal awareness of the presence of jet fuel was not immediate, since few people in their right mind would expect a bombing attack on the building by means of an airliner turned into a manned missile. The peculiar smell pattern was remembered nevertheless, despite a temporary lack of verbal meaning. Verbalization came in two stages. First, the smell became associated with a location where previous experience with the same smell pattern had taken place. Then, more specifically, the smell pattern became associated with a particular substance, once his mind was prompted - or, rather, primed, in psychology jargon - by detailed news of the terrorist attacks.

Rosen′s Generalization of a Basic Linguistic Principle

In regard to the above-described word-replacing game, an additional interpretation was suggested by Rosen′s generalization of basic linguistic principles. Rosen [63] classified natural processes into two categories. Those processes that can be described by a sequential process such as a mathematical theory are called syntactic processes, whereas those processes that cannot be so described are called semantic processes. This usage of linguistic terminology includes the linguistic process as a special case. Rosen′s syntactic process can be expressed in terms of computer algorithm, i.e., algorithmizable, whereas his semantic processes cannot be adequately expressed in terms of computer algorithm, i.e., non-algorithmizable (of course, it was possible to fake it to various extents). Rosen′s classification of sciences also corresponds roughly to the common designation of hard sciences vs. soft sciences, respectively.

In light of Rosen's generalization, the humorous remark made by Sternberg and Davidson [18] can now be explicitly analyzed. Essentially, their remark attempts to define the term "insight" implicitly: "What we need most in the study of insight are some new insights!" It was an attempt similar to that of Justice Potter Stewart, who implicitly defined obscenity: "I know it when I see it!" Linguistically, the sentence has zero syntactic content, since it merely enunciates a tautology. Yet, it is a meaningful sentence, which proclaims a conclusion that many, if not all, readers would agree to. Therefore, the entire meaning of this sentence resides in its semantic content. In other words, the notion of insight can only be articulated semantically, not syntactically. This new insight, together with Rosen's generalization, allowed us to conclude that insight is a semantic, non-algorithmizable process. In brief, insight is a parallel process, as are intuition and inspiration.

By playing the above word-replacing game, various pieces of the jigsaw puzzle known as the enigma of creativity suddenly fell into the right slots in a single snap. All the above-mentioned models, when taken together, essentially conclude that the search-and-match (solution-generating) phase invokes a non-algorithmizable process variously known as intuition, insight, inspiration, and primary-process thinking, which is essentially a Gestalt process of (visual) pattern recognition. Logical reasoning is used primarily during verification of the generated solutions.

Two Modes of Thinking: Picture-Based vs. Rule-Based Reasoning

Discussions presented in the previous section indicate that the various creativity models all point to the identification of visual thinking as a major factor in creativity. However, the identification did not completely dispel the mystery of high creativity. Does not everyone know how to perform visual thinking? If so, why are some people more creative, bestowed by the muse with the gift of intuition, while others never had that privilege? Choosing the right parents does not seem to be a practical answer. Apparently, there is a gray scale of creativity, but there may also be a qualitative difference in the thinking styles of individuals occupying the two extremes of this continuous spectrum. In other words, just because the color red makes a continuous transition to the color yellow through the intermediate color orange without any obvious discontinuity does not mean the color red is not different from the color yellow. Apparently, there was a missing link: a valid contrast model opposite to geniuses' thought process. Eventually, the clue to this missing link came from an unlikely source: the teaching classroom. My teaching experience showed that my initial assumption that everyone could perform visual thinking effectively was unfounded. But that is not to say that visual thinking is a completely innate ability. Visual thinking is by no means geniuses' monopoly! Ordinary folks can be trained to practice visual thinking proficiently and to become a better self, but a transformation into a genius is not guaranteed.

This missing link was inadvertently provided by one of my most devastating frustrations as a veteran teacher in a medical school. An apparently bright individual failed to answer a test question although he possessed all the pertinent knowledge. In other words, the student, who knew and remembered all the needed knowledge, did not realize that what he already knew was sufficient to produce a correct answer. Apparently, this student's predicament was not ignorance but rather a failure to recognize what he had already known as the right ingredients that he could have assembled into a coherent answer.

An attempt to understand the situation led to a serendipitous finding that the above-mentioned student learned the physiology course content as a set of rules to be applied at the occasion of taking examinations; much like a novice cook learns to cook by following step-by-step instructions listed in a cookbook. The student behaved like the agent in Searle′s Chinese Room argument [49,64]: the agent knows how to convert an input into the appropriate output by faithfully following the rules of conversion but has no idea about the reason why these rules work. Regrettably, this was not an isolated case. Similar cases were far more widespread than I had initially expected. I promptly coined a term dumb high-achiever to describe this paradoxical type of modern students. I also coined a term rule-based reasoning because of its similarity to computer algorithm [65]. Note that this student did not just memorize and regurgitate “canned” answers. Rule-based learning is one step better than rote memorization but it is still one step short of understanding.

Prior to this experience, I knew of only two ways of learning: either rote memorization or visualizing the process to get an intimate feeling. I used to refer to understanding by visualization as just understanding. Knowing that a third process had emerged, I promptly changed what I knew as understanding to a matching term: picture-based reasoning [65]. Of course, the terms rule-based reasoning and picture-based reasoning, which I coined at that time out of my own ignorance, correspond to verbal and visual thinking, respectively. I retained the pair of newly coined terms, and used the two pairs of terms synonymously and interchangeably, because together the two pairs bridge the gap between cognitive science and computer science, as demonstrated in the word-replacing game.

One peculiar event did not escape my attention though. Such students used verbal thinking in the solution-generating phase, rather than just in the solution-verifying phase as the models in Table 1 would suggest. In other words, they seldom or never practiced picture-based reasoning. Therefore, dumb high-achievers are practitioners of exclusively rule-based reasoning. In contrast, ordinary folks probably practiced both types of reasoning, whereas geniuses apparently invoked picture-based reasoning more frequently and with greater proficiency than ordinary folks. Using a derogatory term to designate a group of handicapped (mentally disadvantaged) students seemed cruel and inhumane, let alone politically incorrect. For the following reasons, I did not replace it with practitioners of exclusively rule-based reasoning, ostensibly a politically neutral term. First, exclusively rule-based thinking is not a permanent disability; it can be cured by deliberate practice of visual thinking, to the extent that visual thinking becomes hammered into a lasting mental habit. The above-mentioned student was subsequently trained to become a visual thinker, thus completely shedding the image of a dumb high-achiever. Second, I retained the term just to draw attention to the seriousness and absurdity of the problem in our educational practices; grade inversion is far more serious than grade inflation. Knowing that dumb high-achievers could be cured, my conscience was cleared.

Apparently, Poincaré's sweeping claim of discovering by intuition is not strictly true; it applies only to individuals with sufficient creativity. Dumb high-achievers discovered answers to examination questions by logic instead of by intuition. Ironically, Poincaré missed this important clue because he did not have the privilege of being surrounded by dumb high-achievers! Suddenly, I realized that my greatest frustration turned out to be my most important inspiration, an opportunity that I did not recognize upon first encounter: the opposite of geniuses are neither idiots nor ordinary folks but, rather, dumb high-achievers. I did not recall seeing dumb high-achievers during the 1970s or the 1980s. The first time I noticed their existence was 1998, which coincided reasonably well with a report in Newsweek magazine: the creativity of Americans showed a steady decline during the 1990s [66].

There is nothing fundamentally wrong about rule-based reasoning or algorithmic processes. Of course, an algorithmic process can be utilized to generate a solution, because it is supposed to re-generate the solution found during the solution-generating phase as a condition of verification. In addition, that is exactly what an expert system of the early AI era does. Furthermore, if a subject has been learned and memorized as a set of rules, a problem can be solved by matching certain descriptive features of the problem with the descriptive content of relevant rules. It is an acceptable practice even in mathematics. In proving a geometric theorem or proposition, one is free to invoke already-proven theorems without having to re-derive all the invoked theorems over again. What is wrong is the practice of exclusively rule-based reasoning.

The subtle difference between the two ways of generating a solution was best demonstrated by a 1954 movie called The Dam Busters, a fictionalized chronicle of the (British) Royal Air Force's raid on the Ruhr dams on the night of May 16-17, 1943 [67]. Lancaster bombers were specially modified and adapted to carry "bouncing" bombs (designed by Barnes Wallis) that would skip across water like a pebble, hit the side of a dam, and detonate after sinking. To do so, the bomb had to be dropped from an altitude of precisely 60 ft (18.3 m). However, no altimeter of that era was sufficiently precise to ascertain such an extremely low altitude. The fictionalized movie showed how the problem was solved while members of the elite 617 Squadron were relaxing at a London theatre shortly before the planned mission. While watching a scene with the dancers being spotlighted by two crossing beams, Wing Commander Guy Gibson noticed how the projecting angles of the two spotlights were adjusted so that the two beams tracked the dancers in motion. The visual clue led to a serendipitous discovery of a novel solution: equipping the Lancaster bombers with a pair of spotlights angled to meet at the water surface when the aircraft was at the correct altitude.

The recognition of the clue in the above-mentioned movie was based on analogy (analogical reasoning [68-71]) between a perceived pattern and mental imagery. The principle was based on a well-known fact of geometry: a triangle is the only polygon that is not deformable, i.e., once the three sides are fixed, the three angles are also fixed (not so for a square or a pentagon). The same principle was invoked in the design of an architectural structure called the truss. Actually, the Royal Navy invoked the same principle to track submarines by means of two properly positioned radar stations: a technique known as triangulation. Thus, in principle, the method of positioning an aircraft at an extremely low altitude could have been found by means of rule-based reasoning.
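The geometry of the spotlight trick is ordinary triangulation, and the numbers are easy to work out; in the following Python sketch only the 60 ft altitude comes from the account above, while the lamp separation and the symmetric fore-and-aft arrangement are assumptions made purely for illustration.

import math

ALTITUDE_FT = 60.0         # the altitude quoted above
LAMP_SEPARATION_FT = 20.0  # assumed fore-and-aft distance between the two lamps

def lamp_tilt(altitude, separation):
    """Tilt (from vertical) of two lamps, one pointing forward and one aft,
    so that their beams intersect directly below the midpoint at `altitude`."""
    half = separation / 2.0
    return math.degrees(math.atan2(half, altitude))

def beam_spot_gap(true_altitude, altitude_set, separation):
    """Horizontal gap between the two pools of light on the water; it vanishes
    only when the aircraft is at the altitude the lamps were set for."""
    half = separation / 2.0
    tilt = math.atan2(half, altitude_set)
    spot_offset = true_altitude * math.tan(tilt)  # each spot's travel toward the midpoint
    return 2.0 * abs(spot_offset - half)

if __name__ == "__main__":
    print(f"lamp tilt from vertical: {lamp_tilt(ALTITUDE_FT, LAMP_SEPARATION_FT):.1f} degrees each")
    for alt in (50.0, 60.0, 70.0):
        gap = beam_spot_gap(alt, ALTITUDE_FT, LAMP_SEPARATION_FT)
        print(f"true altitude {alt:>4.0f} ft -> gap between light pools {gap:5.1f} ft")

Because the gap between the two pools of light closes only at the preset altitude, the crew could hold the aircraft at 60 ft simply by keeping the spots merged, which is what made the arrangement usable as a low-altitude indicator.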

The identification of a proper rule to be used in problem solving relies heavily on the matching of keywords or key phrases, which are often nametags of rules or terse descriptions of their key features. To solve the Lancaster bomber problem by means of rule-based reasoning would require the correct keyword "triangulation" or "truss". Even in hindsight, one would be hard pressed to recall these keywords prior to solving the problem. The likelihood of coming up with wrong and misleading keywords could not be overestimated. A practitioner of exclusively rule-based reasoning thus runs the risk of becoming a "prisoner of words": one can recognize a rule only when the name or the (written or verbal) description matches the features being sought. The criterion of matching is too strict and the resulting recognition lacks fault-tolerance; some potential solutions are likely to be excluded (false negatives). As a consequence, practitioners of exclusively rule-based reasoning have access to a limited search space, which is the repertoire of learned and still-remembered knowledge, and lack the ability to recognize a disguised ("distorted") solution even when it is included within the search space. Francis Bacon obviously recognized this pitfall; he wrote, "The ill and unfit choice of words wonderfully obstructs the understanding" [72]. Thus, picture-based reasoning relieves the bondage of words and lets one search more freely for potential solutions (fault tolerance) than rule-based reasoning does. Of course, picture-based reasoning has a tendency to include false positive solutions. However, these false positives can be eliminated by careful scrutiny by means of rigorous rule-based reasoning.
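The brittleness of keyword-triggered recall, and the fault tolerance gained by matching on overall features instead, can be illustrated with a small Python sketch; the rulebook, the feature sets, and the Jaccard-overlap threshold are invented for the example and are not drawn from the article.

def exact_keyword_lookup(query, rulebook):
    """Rule-based recall: a rule is retrieved only if its nametag is recalled verbatim."""
    return [name for name in rulebook if name == query]

def feature_overlap_lookup(query_features, rulebook, threshold=0.4):
    """Fault-tolerant recall: retrieve rules whose feature sets overlap the query
    enough (Jaccard similarity), even if no nametag was recalled."""
    hits = []
    for name, features in rulebook.items():
        overlap = len(query_features & features) / len(query_features | features)
        if overlap >= threshold:
            hits.append((name, round(overlap, 2)))
    return sorted(hits, key=lambda h: h[1], reverse=True)

if __name__ == "__main__":
    rulebook = {
        "triangulation": {"two beams", "known baseline", "angles fix a point"},
        "truss": {"rigid triangle", "fixed sides", "fixed angles"},
        "parallax": {"two viewpoints", "apparent shift", "distance"},
    }
    # The problem as perceived in the theatre: no correct keyword comes to mind.
    recalled_keyword = "spotlights"
    perceived_features = {"two beams", "angles fix a point", "height above water"}

    print("exact lookup: ", exact_keyword_lookup(recalled_keyword, rulebook))
    print("fuzzy lookup: ", feature_overlap_lookup(perceived_features, rulebook))

The exact lookup returns nothing because the crucial keyword never comes to mind, whereas the overlap-based lookup still retrieves the triangulation rule from a "disguised" description; the price, as noted above, is a tendency to admit false positives that must be weeded out during verification.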

In emphasizing the difference in thinking styles between geniuses and dumb high-achievers, I inadvertently gave the impression that creative people do not invoke rule-based reasoning to find solutions. In fact, creative people use whatever legitimate approaches are available to find solutions. By invoking both picture-based and rule-based reasoning, creative people certainly enjoy a bigger search space than rule-based reasoning alone would offer. It is difficult for practitioners of exclusively rule-based reasoning to improvise because only what has been learned or known is included in the search space.

Whereas rule-based reasoning is of cardinal importance in verifying potential solutions, picture-based reasoning still plays an important role in error checking during the solution-verifying phase whenever finding a solution is not good enough: it must be the best solution. This consideration reminded me of the fictional television detective Columbo, who confessed to having a terrible habit: he liked to "tie up all the loose ends". Columbo's allusion to loose ends reflected his uneasiness about subtle incongruities detected only by means of picture-based reasoning; he felt something did not quite add up correctly. He often hung around a crime scene - beyond the call of duty - well after a case had been closed. His continuing pursuit of a closed case sometimes led to new evidence and eventually overturned the verdict.

Crime-case solving and theory proposing are two special cases of creative problem solving. These two activities differ from ordinary creative problem solving in the following sense. In solving a regular problem, the first available valid solution usually signals the end of a problem-solving session. In the court of law, a defendant is seldom convicted on the basis of a single piece of evidence. Moreover, in a civil court, it may be sufficient to decide against the defendant on the basis of a preponderance of evidence, but, in a criminal court, all other conceivable suspects must be exhaustively ruled out and all lines of evidence must converge on the chosen suspect nearly perfectly in order to prove guilt beyond reasonable doubt. This means the jury does not jump to conclusions when the first available suspect fits all descriptions specified by the available evidence. Here, picture-based reasoning comes to the rescue. By viewing the case in a holistic perspective, picture-based reasoning lets seasoned sleuths like Columbo or Sherlock Holmes detect subtle discrepancies. These discrepancies were what Columbo referred to as "loose ends".

In the case of theory proposing, the first available nearly satisfactory explanation of existing observations is often accepted for lack of alternative theories. However, the acceptance is provisional, at least in Western science. As new evidence accumulates and starts to reveal discrepancies, new theories are proposed to remedy the deficiencies. An elimination process ensues on the basis of survival of the fittest. This is, of course, the consequence of Popper's well-known falsifiability argument [10]. Again, detection of subtle discrepancies requires picture-based reasoning. Picture-based reasoning becomes important in the solution-verifying phase when elimination of alternative solutions becomes crucial.

Speaking of error checking, even a highly reliable digital computer demands some sort of safeguard. That was why old-generation serial transmission hardware demanded a parity bit in the transmitted ASCII code in order to ensure the integrity of transmitted data. Other devices, such as checksums, serve a similar purpose for a block of data. It is well known that handling digital information that does not make intuitive sense is prone to errors. That is probably the reason why there was an increase in the incidence of "friendly fire" accidents on the battlefield with the advent of high-tech weaponry: it is difficult to ascertain that one is actually aiming at foes instead of friends or oneself by just looking at the numerical coordinate information that controls the launching of a cruise missile. Besides, a cruise missile only takes orders from its controller. It harbors neither affection for friends nor hatred towards foes, and is essentially selfless. This latter factor does not help reduce errors, for a lack of human touch. In contrast, the old-fashioned way of aiming a cannon was not very accurate, but at least it was aiming in the right direction. This shortcoming is not too difficult to remedy: just convert the digital data back to analog forms so as to give the operators an intuitive feeling.
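As a concrete reminder of the two safeguards just mentioned, the short Python sketch below frames 7-bit ASCII characters with an even parity bit and computes a simple modulo-256 block checksum; the message text and the choice of even (rather than odd) parity are illustrative assumptions.

def even_parity_bit(value):
    """Return the parity bit that makes the total count of 1-bits even."""
    return bin(value).count("1") % 2

def frame_with_parity(char):
    """7 data bits of an ASCII character plus one even-parity bit."""
    code = ord(char) & 0x7F
    return (code << 1) | even_parity_bit(code)

def checksum(data):
    """Simple block checksum: sum of the bytes modulo 256."""
    return sum(data) % 256

if __name__ == "__main__":
    message = b"JET FUEL"
    framed = [frame_with_parity(chr(b)) for b in message]
    print("framed bytes (data+parity):", [f"{f:08b}" for f in framed])
    print("block checksum:", checksum(message))

    # A single corrupted bit flips the parity and is caught at the receiver.
    corrupted = framed[0] ^ 0b00000010        # flip one data bit
    data_bits, parity = corrupted >> 1, corrupted & 1
    print("parity check after corruption:",
          "error detected" if even_parity_bit(data_bits) != parity else "looks fine")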

An excessive emphasis on picture-based reasoning often conveys the opposite message that rule-based reasoning is trivial and unimportant. Quite the contrary. Einstein considered the formal logical system invented by the Greeks to be one of the two pillars of the development of Western science, the other pillar being the discovery of the possibility of finding out causal relationships by systematic experimentation. Both processes pertain to the act of verification. As opposed to intuition, the ability to perform rigorous logical reasoning is not experience-dependent or age-dependent [8]. Thus, an innocent child could challenge the adults and declare that the Emperor actually had no clothes on. Otherwise, judgment of scientific truths would be the monopoly of the elders, and we would all be at the mercy of authority.

In the present article, I wish to painstakingly indicate that rule-based reasoning is required during the solution-verifying phase. In addition, rule-based reasoning (articulated in terms of words and/or equations) is required to communicate the thought content to others. One can hardly convince others with an unspeakable hunch. To add an additional layer of precaution, I also used the term practitioners of exclusively rule-based reasoning to describe dumb high-achievers; the latter used rule-based reasoning both during the solution-generating phase and during the solution-verifying phase.

Single-Step Syllogism vs. Multiple-Step Syllogism: Deduction or Abduction?

Amidst emphasizing the merits of visual thinking, there remained a lingering doubt. If a solution can be established, and thus verified, by means of rule-based reasoning during the solution-verifying phase, it ought to have existed out there during the solution-generating phase. Why could one not generate the same solution by means of rule-based reasoning alone? Why were such novel solutions off-limits to dumb high-achievers? The urge to tie up this loose end sent me down the path to find a satisfactory explanation.

Again, I sought inspiration from Poincaré's introspection. He said, "Pure logic could never lead us to anything but tautologies; it could create nothing new; not from it alone can any science issue" [21]. If we trace back even further, Galileo also expressed the same view. In his Dialogue Concerning Two New Sciences [73], Galileo stated through the words of his surrogate, the fictitious interlocutor Sagredo, "Logic, it appears to me, teaches us how to test the conclusiveness of any argument or demonstration already discovered and completed; but I do not believe that it teaches us to discover correct arguments and demonstrations". Which view is correct? That of Lawson and Medawar or that of Poincaré and Galileo?

A second thought and a willingness to give Lawson and Medawar the benefit of the doubt led me to recall that many lemmas and corollaries in mathematics were derived from major theorems by means of a simple one-step logical deduction. Poincaré′s remark about tautologies might not be strictly true. Apparently, his remark pertained only to multiple-step deductions as often encountered in mathematical creation, if we take the context of his remark into account. That is, he was speaking about combinations and permutations of known equations to generate novel mathematical statements. Elsewhere Poincaré said, "Evidently because it is guided by the general march of the reasoning. A mathematical demonstration is not a simple juxtaposition of syllogisms, it is syllogisms placed in a certain order, and the order in which these elements are placed is much more important than the elements themselves" [21]. The key issue was the proper arrangement of syllogisms: how to find the relevant syllogisms and how to find the proper way of arranging them so as to ensure a smooth logic flow. He further pointed out that most permutations and combinations of syllogisms (or equations) were meaningless, whereas the possibilities of such operations were virtually infinite, so that even a lifetime would not be sufficient to complete the blind searches [21]. He was concerned about combinatorial explosion. Single-step logical deductions do not seem to have this problem.
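A back-of-the-envelope calculation (my own hypothetical illustration, not Poincaré′s arithmetic) shows the scale of the problem: the number of ways to order even a modest collection of syllogisms grows factorially, before one even asks which syllogisms to pick.

import math

# Number of possible orderings of n syllogisms.
for n in (5, 10, 20):
    print(n, math.factorial(n))
# 20 syllogisms already admit about 2.4 x 10^18 orderings; examining a million
# orderings per second would take roughly 77,000 years to exhaust them.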

Single-step logical deductions are commonplace, and are often referred to as common sense, a skill all rational folks are supposed to master. Multiple-step logical deductions appear to be far more demanding intellectually. We shall analyze an example in Arthur Conan Doyle′s detective novel, A Study in Scarlet [7]. In Chapter 1, Sherlock Holmes greeted his future partner Dr. Watson upon their first encounter with a now-famous remark: "You have been in Afghanistan, I perceive". Watson was astonished, and exclaimed, "How on earth did you know that?" Of course, Holmes was a fictional character, but a fictional description often reflects the author′s real-life experience. According to Abrams [74], the real-life counterparts of Watson and Holmes were Arthur Conan Doyle himself and Joseph Bell, M.D., of the Royal Infirmary of Edinburgh, respectively. In Chapter 2, titled "The Science of Deduction," Holmes explained his reasoning with four syllogisms, arranged in an easy-to-understand order:

From long habit the train of thoughts ran so swiftly through my mind that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, "Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan". The whole train of thought did not occupy a second.

Keeping in mind that this is fiction, I am not sure how seriously one wants to take Holmes′ estimate of time. However, in my opinion, his estimate was about right when one figures things out by picture-based reasoning. Both Gauss and Tesla used the image of a lightning strike to describe the swiftness of thought. Archimedes had no time to put on his clothes but just yelled "Eureka!"

Here is cognitive scientist Willingham′s interpretation [5]: “[Holmes′ insight] turns not on incredible intelligence or creativity or wild guessing, but on having relevant knowledge. Holmes is told that Watson is a doctor; everything else he deduces by drawing on his knowledge of, among other things, the military, and geography, how injuries heal, and current events”.

Apparently, Willingham′s interpretation was self-serving and self-deceiving; it supported his own theory but it did not make much sense. The knowledge used by Holmes was not of an extraordinary type, but rather the common knowledge of his contemporaries, who read the newspapers often. Holmes certainly knew more than just the portion of knowledge that he had used in the above reasoning. Holmes had to sort out the relevant knowledge from the myriad pieces of stored knowledge accumulated during his detective career. Einstein was concerned about this point. He said, "For, if a researcher were to approach things without a pre-conceived opinion, how would he be able to pick the facts from the tremendous richness of the most complicated experiences that are simple enough to reveal their connections through [natural] laws?"

Einstein′s call for subjective judgment in scientific investigations was a far cry from our conventional wisdom: objectivity is one of the main virtues of Western science. However, Einstein did have a valid point to make. Let us take a look at a contemporary publication about the investigation of intuition. Lieberman distinguished two separate systems of thinking processes: the X-system (or Reflexive System) and the C-system (or Reflective System). A quick glance at Table 1 of his review article [17] reveals that most of the key elements discussed in an earlier section, including parallel processing and sequential (serial) processing, were listed. A word-replacing game, similar to what was described in an earlier section, would have led to exactly the same conclusion as ours. In fact, it is readily identifiable that the X-system corresponds to visual thinking, whereas the C-system corresponds to verbal thinking. Surprisingly, Lieberman never drew the latter conclusion. Apparently, Lieberman failed to recognize that what he already knew contained the right ingredients, which he could have synthesized into a coherent and intelligible interpretation of intuition.

In another review article about intuition, co-authored by Dane and Pratt [16], the authors alluded to two information processing systems (rational vs. nonrational). They correctly identified intuition as a "fast" and "affectively charged" process involving "making holistic associations" and "recognizing features or patterns". All these key points are strongly suggestive of the association of intuition with visual thinking. In fact, had they practiced what they preached by making holistic associations with existing knowledge, they would have stumbled upon the answer. Instead, they got themselves entangled with ancillary (secondary) factors that facilitate but do not guarantee intuition. Why they failed to cross the last bridge is of course an intriguing question. My speculation is: cognitive scientists refused to exercise their subjective interpretation of the objectively collected data because subjectivity is a taboo of the discipline, i.e., politically incorrect. Or, could it be because the notion of visual thinking, once "condemned" as a prohibited topic, was permanently excluded from the search space? If this speculation is anywhere close to what actually transpired in their minds, then they were simply the victims of the institutionalized box (the objectivity box, or aversion-to-subjectivity box), out of which few other than outsiders could jump. In addition, they could be victims of undeclared assumptions ("taken for granted" assumptions), which are subjective opinions masquerading as objective facts. We have suggested that searching for undeclared assumptions is one of the quickest ways to jump out of the proverbial box [42].

Einstein did not indicate how he would have handled a similar situation subjectively. However, in his response to Hadamard′s questionnaire, he wrote [23]: (A) the words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be "voluntarily" reproduced and combined. There is, of course, a certain connection between those elements and relevant logical concepts. It is also clear that the desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play with the above mentioned elements. But taken from a psychological viewpoint, this combinatory play seems to be the essential feature in productive thought - before there is any connection with logical construction in words or other kinds of signs which can be communicated to others. (B) The above mentioned elements are, in my case, of visual and some of muscular type. Conventional words or other signs have to be sought for laboriously only in a secondary stage, when the mentioned associative play is sufficiently established and can be reproduced at will.

Einstein thought that he had invoked picture-based reasoning to find the solution first, and then had to labor to get the idea out in words and equations (rule-based reasoning). Recall that picture-based reasoning is subjective whereas rule-based reasoning is objective. If we then "put two and two together," what Einstein meant by pre-conceived opinions might well be something obtained by means of picture-based reasoning. This is of course no proof, but just a clue.

Here is an additional clue. Just reading Holmes′ multiple-step syllogism in silent speech would take more than one second. Had it taken that long, it would not have been called swift by my standard, let alone by Holmes′ standard. Obviously, the conclusion could not have been reached by means of mindless combinations and permutations of conceivable logical statements. Some sort of heuristic searching must have been involved. Most likely, it was done by means of picture-based reasoning: Holmes figured it out first in pictures, and then found the relevant syllogisms later at leisure or upon demand. If readers feel uncomfortable about taking fiction seriously, they are invited to look at any satirical cartoon whenever the opportunity arises, estimate the latency before bursting into laughter, then supply a logical explanation and time the latter process for the sake of comparison.

Actually, not everyone interested in mystery or detective stories agreed with the claim of Doyle or his alter ego, Holmes. In discussing the thought process of Gregory House, M.D., a contemporary television character and a Sherlock Holmes clone, Abrams [74] pointed out that all great fictional detectives mistook their methods for deduction, and most, like Holmes, simply scoffed at guesswork. Abrams thought that the thought process of Holmes and Dr. House was, instead, abduction. Abduction (abductive reasoning) was a term introduced by philosopher Charles Peirce [75]. In his original formulation, abduction means reasoning backward, or reverse deduction, so to speak. Abrams made a good point. However, abduction, in Peirce′s original formulation, is not the answer because it cannot fare any better than deduction against the wrath of combinatorial explosion.

Just like (forward) deductive reasoning, abductive reasoning is feasible only if it consists of a single step and/or finite and limited branching of reasoning. However, for complex matters, there seldom exists a single cause. In other words, causes and effects are not in one-to-one correspondence, and reasoning backwards does not always lead to the main cause, even if the main cause is within reach (in terms of the limited number of steps of syllogism). Note that the wrath of combinatorial explosion puts up roadblocks several times, at several different hierarchical levels. First, one must make multiple attempts at single-step abductive reasoning, for the reason just mentioned. That is, abductive reasoning must branch out backward. Second, multiple-step abductive reasoning must be summoned if one exhausts all conceivable single-step abductions to no avail. Third, multiple attempts at multiple-step abductions must be sought if one intends to uncover or discover all relevant factors. Unless one attempts to branch out backward several layers deep, one runs the risk of locking onto the first success, thus becoming a victim of the so-called confirmation bias [76,77]: the tendency to explain things away in terms of the first available plausible cause, thus missing other, more pertinent causes. All these formidable ramifications render the task of abduction no easier than finding a needle in a haystack with a finite number of layers. Therefore, abduction, in Peirce′s original formulation, is virtually useless for solving complex problems.

Abduction is useful, however, under certain conditions. In high school, I found a somewhat opportunistic way of establishing proof for a geometry theorem during an examination. Of course, I did not know the term "abduction". It was just a trick devised to beat the system, and I am somewhat embarrassed about it now. It went as follows. In proving a mathematical theorem or statement, deductive reasoning from the premises often leads to a small number of intermediate conclusions, whereas abductive reasoning from the final conclusion also leads to a small number of intermediate conclusions or, rather, intermediate premises. Matching up the "dangling" deduced conclusions and the "dangling" abducted premises often suggested a straightforward way of pinpointing the required intermediate steps of the logical proof, without much additional thinking (see the sketch below). Note that the method of combining abduction and deduction works because the branching processes are limited (i.e., only a small number of branches). It would still work even if deductive reasoning and abductive reasoning must run several layers deep, as long as the branching processes are limited. Abduction would probably also work for problems in reductionist sciences. In this regard, abduction is superior to deduction because the former recommends priming the mind in the right (backward) direction in search of a hypothesis. In contrast, the amorphousness and continuity (non-discreteness) of complex problems make abductive reasoning inoperative.
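The trick can be sketched as a meet-in-the-middle search over an inference graph: expand a few deductive steps forward from the premises, a few abductive steps backward from the desired conclusion, and look for statements that appear in both frontiers. The statements and one-step inferences below are invented purely for illustration; they are not taken from any actual geometry proof.

from collections import deque

def expand(start, edges, depth):
    # Breadth-first expansion of a set of statements along one-step inferences.
    seen = set(start)
    queue = deque((s, 0) for s in start)
    while queue:
        statement, d = queue.popleft()
        if d == depth:
            continue
        for nxt in edges.get(statement, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return seen

# Hypothetical one-step inferences: forward edges for deduction,
# backward edges (conclusion -> needed premise) for abduction.
deduce = {"premises": ["lemma1", "lemma2"], "lemma1": ["lemma3"]}
abduce = {"theorem": ["lemma3", "lemma4"], "lemma4": ["lemma5"]}

forward = expand({"premises"}, deduce, depth=2)   # "dangling" deduced conclusions
backward = expand({"theorem"}, abduce, depth=2)   # "dangling" abducted premises
print(forward & backward)                         # {'lemma3'}: the missing link

Because both frontiers stay small when branching is limited, the intersection is found quickly; when branching is unlimited, both frontiers blow up and the trick offers no relief, which is the point made above.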

Readers who are familiar with Peirce′s philosophy would probably object, for good reason, to our casual dismissal of abductive reasoning. The above analysis and dismissal specifically targeted Peirce′s original formulation (ca. 1866-1878), which could be characterized as an inverse of syllogism. However, there were significant modifications in later versions. Peirce was pursuing the inferential reasoning leading to the "discovery" of hypotheses. He proposed and believed that abduction is the only way to find hypotheses ("abduction is the process of forming an explanatory hypothesis"). In my opinion, the most serious difficulty of Peirce′s original formulation is its outcome: the generation of an infinite number of indifferent hypotheses. The probability of hitting the "correct" or "plausible" hypothesis is infinitesimal because of combinatorial explosion, as explained above. From Peirce′s writing, he was apparently aware of the problem, at least implicitly [78]; he could not accept a countless number of indifferent abducted hypotheses. He also had to answer skeptics′ doubts in regard to how abduction could give rise to new information (i.e., how could it not lead to tautologies?). In attempts to deal with mounting difficulties, Peirce began to shift his positions and started to add attributes to abduction. For example, he split abduction into a two-step process: generation and then selection of hypotheses [78]; he claimed that abduction is inference to the best explanation. In other words, Peirce was yearning and struggling to define some way of inference that constituted "educated guessing" or "clever guessing". He even blurred the distinction between insight and inference. He therefore began to inject subjective elements into the attributes that define abduction (see Secs. 12 and 13 of [42] regarding the role of subjectivity in arbitrating competing theories). None of this patch-up work could silence criticisms; instead, it bred new questions. In an attempt to overcome the difficulties, Hintikka interpreted abduction in terms of a guessing strategy of throwing questions at Nature [79]. This modification was tantamount to shifting subjectivity to Nature as a way of hiding subjectivity instead of eliminating it. The trouble is: Nature may subjectively select which questions to grant answers to, but humans still have to throw questions randomly or systematically at Nature to avoid the appearance of subjectivity.

Space limits do not permit additional elaborations and digressions. It suffices to say that, if we identify Peirce′s revised notion of abduction with picture-based reasoning, then most, if not all, of the above-mentioned difficulties vanish. In point of fact, Peirce′s subsequent elaborations and modifications generated sufficient constraints on what abduction ought to be, thus thrusting picture-based reasoning to the forefront as the unique candidate that could fill the shoes so specified. In other words, if we try to picture what Peirce had described about abduction, the picture so generated can be readily recognized as that of "picture-based reasoning"! It would not be an exaggeration to assume that Peirce had picture-based reasoning in mind as the only way to generate plausible and non-trivial hypotheses, without articulating it explicitly, as if he had tried to present a riddle to us (the hint to the riddle: Peirce′s close encounter with the notions of insight and heuristics). In hindsight, Peirce was the strongest and the longest-lasting voice against the popular claim of logical deduction as the venue towards discoveries; his objection should be construed as a wake-up call for those experts who continued to advocate logic and knowledge as the primary factors for creativity, thus legitimizing the educational practice of emphasizing transfer of knowledge at the expense of students′ reasoning skill. They were the primary architects of education policies that were responsible for failing public education, at least in the United States and perhaps also elsewhere.

Our "enhanced" interpretation of Peirce′s abduction also precludes the syllogistic interpretation of abductive reasoning. Syllogism and subjectivity are mutually exclusive. In philosophical jargon, there is no way to reconcile formalization with Bayes′s rule. The only way out of this dilemma is to jump out of the "syllogism box". Picture-based reasoning itself has no directionality (random access); deductive reasoning and abductive reasoning are just two of its many possible verbal renditions (parallel-to-serial conversion) that happen to have the required rigor of reasoning but proceed in opposite directions. If so, abductive reasoning, in Peirce′s original formulation, is just the syllogism - inference to the best explanation - that constitutes an after-the-fact fabrication of "the best explanation" of how one has discovered and what has led to one′s discovery. It is similar, except in direction, to Sherlock Holmes′ fabrication, upon demand, of "the best explanation" of how he had correctly figured out Dr. Watson′s identity and his previous military tour in Afghanistan. If I were allowed to put words in Holmes′ mouth, he probably would say, "My dear Watson, it is elementary; whether it is deduction or abduction, it is your pleasure and prerogative to choose!"

Picture-Based Reasoning and Heuristic Searching

The analysis presented in the previous section demonstrated that searching for a major novel solution by means of multiple-step logical deduction is not practical because of combinatorial explosion. Our subsequent discussion implied that picture-based reasoning offers the advantage of heuristic searching by default; the hazard of combinatorial explosion encountered by multiple-step rule-based reasoning makes picture-based reasoning look like a way of heuristic searching by comparison. However, that was not a foregone conclusion, because of the following consideration. A larger search space may be an advantage for picture-based reasoning, but the space may be too large to be searched in real time. Just consider the number of templates to be selected in digital and analog pattern recognition. The search space in rule-based reasoning contains discrete items, which are often finite in number (digital pattern recognition). Besides, matching is swift; a template either fits or it does not, because of the lack of fault tolerance. In contrast, the search space in picture-based reasoning contains a virtually infinite number of templates, because of the fault tolerance inherent in analog pattern recognition. The number of templates is infinite because there can be all shades of defects and distortions of all extents in a pattern (or template). Besides, recognizing suitable templates often requires some struggle; not every visual thinker can think as fast as Sherlock Holmes could. Superficially, it appears much more difficult to find a solution by means of picture-based reasoning than to find a needle in a haystack because, at least, the haystack is well defined and finite in size. In other words, practitioners of picture-based reasoning could conceivably be overwhelmed by combinatorial explosion. That heuristic searching in picture-based reasoning is possible seems counter-intuitive.

Poincaré′s introspection offers a clue. In a frequently quoted but seldom-understood episode during a sleepless night following his drinking of black coffee [21], he decided to abandon his persistent efforts, which lasted 15 days, of trying a great number of combinations [of equations] and reaching no useful conclusions. He wrote about his alternative approach, “Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making stable combination”. His description in terms of the words “collide” and “interlocked” as well as the phrase “stable combination” betrayed his thought as a game of piecing together a jigsaw puzzle. That is, he performed picture-based reasoning. He thus cleverly evaded the wrath of combinatorial explosion, since he did not have to waste his time trying those combinations that had not interlocked into a stable configuration.

Thus, picture-based reasoning does not mean examining each and every variation and distortion of a particular pattern "norm" indifferently; the intrusion of subjective judgment is apparent. It is common sense in playing a jigsaw puzzle not to perform random or systematic searches. For example, one selects only pieces with straight edges to form the edges or corners of the puzzle. Continuity or discontinuity of colors is also a useful guide. The same principle can be applied to picture-based reasoning in general. Psychologists coined a useful term, "priming of the mind," which means getting the mind ready to receive or perceive only a certain type of information. In other words, the human mind is an anticipatory system [80,81]. For example, one often has trouble understanding one′s own native language when travelling in a foreign country because the ears are tuned to the reception of spoken foreign words. This is actually a way of heuristic searching; one wastes no time in matching heard sounds with templates of one′s own native tongue but, instead, searches only the expected foreign vocabulary for a quick match of the heard words. In analog pattern recognition, one tends to ignore myriads of unrecognizable distorted patterns and to focus only on what is readily discernible as meaningful in a given context. Thus, the perceived risk of potential combinatorial explosion is somewhat alleviated and minimized by virtue of the reality that not all continuously varying templates are recognizable as meaningful in a given context. In other words, the incoming information is filtered; different filters are dynamically selected for different circumstances, e.g., travelling in a particular foreign country (see the sketch below).
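In computational terms, priming of the mind amounts to placing a cheap context filter in front of an expensive, fault-tolerant matcher. The following sketch (with an invented template store and a toy similarity score standing in for analog pattern recognition) scores only the templates that survive the filter, instead of every template in memory.

from difflib import SequenceMatcher

def similarity(pattern, template):
    # Toy stand-in for fault-tolerant (analog) matching.
    return SequenceMatcher(None, pattern, template).ratio()

def recognize(pattern, templates, context_filter, threshold=0.6):
    # Score only the templates that survive the cheap "priming" filter.
    candidates = [t for t in templates if context_filter(t)]
    best = max(candidates, key=lambda t: similarity(pattern, t), default=None)
    if best is not None and similarity(pattern, best) >= threshold:
        return best
    return None   # nothing recognized; prime the mind differently and retry

# Hypothetical template store; the filter plays the role of expecting
# foreign vocabulary while travelling abroad.
templates = ["bonjour", "merci", "hello", "thank you", "gracias"]
primed_for_french = lambda t: t in ("bonjour", "merci")
print(recognize("bonjoor", templates, primed_for_french))   # -> 'bonjour'

The fuzzy match tolerates the misheard "bonjoor", while the filter keeps the expensive comparison from being run against every stored template; a different filter would be installed for a different country, which is the dynamic selection of filters described above.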

There is an important requirement in analog pattern recognition: prior exposure. To see something once is cognition, whereas it takes seeing it at least twice to recognize (or rather, re-cognize) a pattern. In other words, a template must be previously stored in memory in order to be recognized. Analog pattern recognition and, for that matter, picture-based reasoning strongly depend on personal experience (prior exposures to a recognizable pattern or template). This is consistent with the report that intuition is experience-dependent whereas logic is not [8]. Legend has it that young children saw only dolphins in a well-known painting known as message d′amour des dauphins because young children had no prior experience of seeing a naked couple in an amorous embrace. Likewise, non-native Danish speakers had great difficulty in differentiating many shades of variation of vowel sounds [82]. Likewise, non-native Polish speakers had great difficulty pronouncing many different consonant sounds strung together in a row.

Since recognizable patterns must be accumulated through exposure (experience), the number of recognizable templates stored in the brain may still be limited over a finite lifetime - sort of the tip of an iceberg. In the 21st century, one is far more likely to be overwhelmed by knowledge than by experience. This is illustrated by the phenomenon of the 9/11 Demon Face or Satan′s Face 9/11.

The surprise attack on New York′s World Trade Center twin towers on September 11, 2001, prompted witnesses to search for devils′ faces in the smoke generated by the massive explosions (priming of the mind). The fact that there was virtually no consensus as to what a devil should look like left plenty of room for imagination to soar (enhanced or exaggerated fault tolerance). Note that fault tolerance was greatly facilitated by two separate ways of varying (tweaking) patterns and templates, respectively. In addition to the exaggerated fault tolerance just mentioned, which gave the viewers greater latitude in recognizing devils′ faces, the smoke was also deforming continuously, thus presenting a countless number of evolving patterns for the viewers to pick and choose from as a decent depiction of devils′ faces. The situation was as if both the continuously deforming smoke patterns and the ever-changing imagination of the viewers made a joint effort to accommodate each other so as to identify a transient appearance of devils′ faces! Even so, the cumulative time during which these faces were unmistakably discernible by an impartial third party was quite short compared to the entire burning period before the towers eventually collapsed.

Intuition, insight and the “aha” phenomenon

The enigma of creativity has been cloaked in mystery for so long that all serious challengers of existing creativity theories are obliged to pass a number of tests of explanatory power. Preferably, the model or theory must also demystify the process to the extent that intelligent laypersons can understand it.

We shall demonstrate the explanatory power of our present rendition of the chance-configuration model by subjecting the model to the following litmus tests: a) explaining why novel discoveries often occurred without prior warning (the "aha" phenomenon), b) explaining why some creators had no idea about the source of inspiration even after the fact, c) explaining why some creators were consistently luckier than others, and, last but not least, d) explaining why it was so difficult to explain what intuition, inspiration, insight, hunch, etc., are all about. To the best of my knowledge, none of the 20th century creativity theories satisfactorily passed the above tests. Picture-based reasoning offers a reasonable explanation that common folks (non-experts) can understand, thus demystifying the enigma.

Koestler, an accomplished writer from Budapest and a non-scientist by training, seemed to understand creativity better than most experts. He presented a unified explanation of scientific discoveries, arts and humour in his book The Act of Creation [26]. He pointed out that recognition of a joke is accompanied by a snapping action: bursting into laughter. In fact, the sample joke about Chamfort [26] demonstrated just that. The joke started with a fairly logical unfolding of the storyline, thus priming the mind of the readers in the wrong direction (locally logical storylines from the point of view of rule-based reasoning). The punch line was the ending sentence, which made the entire story look absurd (globally absurd from the point of view of picture-based reasoning). Appreciation of a subtle joke depends on picture-based reasoning for recognizing the funny aspect; it requires a holistic assessment of the entire storyline. In other words, one must see the trees as well as the forest.

Analog pattern recognition is based on an overall assessment of the entire "picture" or situation, rather than on a discrete criterion or a finite number of discrete criteria. That is, the process is a parallel process, a holistic process, as the Gestalt psychologists used to preach. Therefore, the act of recognition does not follow a pre-determined procedure, but rather goes by an erratic sequence (random access). Picture-based reasoning also affords different angles of looking at the same problem (or its representing picture). One can pay attention to different aspects of the same problem or different parts of the same picture. One can also find a drastically different way of representing the problem in pictures. In other words, there are virtually infinite ways of priming one′s mind. Furthermore, the outcome of a failed picture representation could suggest new ways of picture representation. This is probably what Campbell meant by "not guided by anticipation"; it is guided by improvisation, instead. The so-called incubation period helps to prime one′s mind in different ways, in random access. Some ways of priming yield no tangible clues, but others do. However, a fruitful way of priming one′s mind does not reveal its identity until it is recognized; suddenness almost always accompanies the moment of recognition.

As briefly mentioned in the previous section, recognizing a distorted template or pattern is often neither instantaneous nor immediate; it takes time to stretch, twist, tweak, tackle, and eventually make a template snap into the right place, just like snapping together parts with poor clearance on a car manufacturing assembly line. In brief, the delay in recognition caused by random tweaking (random access) imparts unpredictability to the exploration, thus resulting in suddenness of occurrence and surprises. The moment of recognition (in Simon′s word) or discernment (in Poincaré′s word) coincides with the exclamation "Aha!" in English and "Eureka!" in Greek. It is also the moment when one bursts into laughter in response to a subtle joke, which Koestler called a snapping action.

As to the question of why creators had no clue about the source of their discoveries even after the fact, parallel processing also offers a reasonable explanation. A perceived clue, whether it is in words or in pictures, is held in one′s working memory, which fades as quickly as it forms. If we do not catch it right away, it is gone and forgotten forever. If we do catch it, and link it to a viable solution, a picture clue is much harder to recall than a word clue, because the clue does not ring in our ears afterwards. If a particular clue comes from a portion of the picture rather than from the entire picture, it is even harder to recall. Among the hardest to recall are clues provided by the absence of an expected element (see an example below). It is easier to recall or name a clue that is not supposed to be there; its mere presence provides a direct visual cue. When something that is supposed to be there is actually missing, one has no idea what the clue is, precisely because the clue itself is not around to serve as a reminder.

Once the elusive match between the pattern and the perceived template forms precariously, the mind must be able to “lock onto” the idea - snap it into a stable configuration, as indicated in Poincaré′s black-coffee episode - and not get confused again. Einstein also alluded to this brief period of mind struggle: “... the mentioned associative play is sufficiently established and can be reproduced at will”. Matching a correct answer with a novel and subtle problem may be as difficult as landing a modern fighter jet in the middle of the night on the heaving deck of a cruising aircraft carrier in a stormy sea. Sometimes the feat cannot be accomplished with a single attempt; multiple passes are often necessary.

We shall illustrate the points made above by the mind journey of Galileo when he discovered the four larger moons (called Galilean moons) of Jupiter more than 400 years ago, as chronicled in his book Sidereus Nuncius (The Starry Messenger or The Sidereal Messenger) [83]. An appendix at the end of this article presents a brief explanation of the trajectory of an outer planet, along with the definitions of direct motion, retrograde motion and station, for the benefit of readers who are unfamiliar with the topic.

On January 7, 1610, Galileo aimed his telescope at the night sky and brought Jupiter into view. He saw three little stars near Jupiter: two to the east of Jupiter and one to the west. Unsuspecting of what was going to transpire in the next few days, he thought the three little stars were ordinary fixed stars, which were too faint to be visible to the naked eye.

On January 8, he saw a different arrangement, as shown in Figure 2. Galileo commented, "I was aroused by the question of how Jupiter could be to the east of all the said fixed stars when the day before he had been to the west of two of them. I was afraid, therefore, that perhaps, contrary to the astronomical computations, his motion was direct and …". According to Footnote 91 of van Helden′s translation of Sidereus Nuncius [83], "Jupiter had passed its station at the end of January and was slowly moving from west to east" (i.e., Jupiter underwent direct motion at the very end of January). So we can infer that Jupiter was undergoing retrograde motion in early January. Therefore, when Galileo found that Jupiter had apparently undergone direct motion from January 7 to January 8, he suspected that the astronomical predictions were in error. (The distances between those little stars were also shorter than the night before. However, we shall ignore this latter fact and just focus on the key point to be analyzed below.) So he "waited eagerly for the next night". But the sky was overcast on January 9.


Figure 2: Galileo′s observation records of Jupiter′s moons from January 7, 1610, through January 13, 1610. Initially, Galileo presumed that the three or four little stars (depicted by the symbol *) around Jupiter (depicted by the symbol O) were fixed stars. He thought he was monitoring the apparent trajectory of Jupiter relative to the background of these fixed stars. The designations of direct motion and retrograde motion were Galileo′s interpretations under this unproven assumption. See text for further detail. (Reproduced and modified from [83]).

Then on January 10, he saw that two little stars were to the east of Jupiter, which implied retrograde motion. But Galileo merely said, "When I saw this, and since I knew that such changes could in no way be assigned to Jupiter, ... now, moving from doubt to astonishment, ..." He concluded, "The observed change was not in Jupiter but in the said stars". What he meant was: it was not Jupiter that moved relative to the little [fixed] stars but, instead, the little stars that moved relative to Jupiter. Note that there was a dramatic change in Galileo′s perception. If he had continued his line of thinking as of January 9, he would have simply concluded that Jupiter was undergoing retrograde motion, as predicted by astronomical computations, and, therefore, that the predictions were not in error after all (and he could have explained away his observation of direct motion the night before by inventing an excuse, e.g., he probably had had too much to drink that night!). But Galileo seemed totally oblivious to this obvious conclusion; instead, he seemed to be attracted or, more accurately, distracted by something else. He seemed to imply that the three little stars were whirling around Jupiter, sometimes to the east of Jupiter and sometimes to the west, on a time scale of a couple of days. On January 11, he concluded, "entirely beyond doubt," that these little stars are Jupiter′s satellites.

For stargazers, it is common knowledge that two consecutive stations are usually separated by months. For example, Jupiter underwent direct motion from January 2008 through May. It was at station around May 2008, and it did not move much during the month of May. From June 2008 through September, it underwent retrograde motion and reached station around September, moving very little in early September. It turned to direct motion again from mid-September 2008 through January 2009 [84]. By leafing through some 65 hand-drawn pictures in Galileo′s book, which were similar to what is shown in Figure 2, it became immediately apparent that Jupiter underwent too many reversals of motion, from direct to retrograde and vice versa, during the two brief months from January 7 to March 2, 1610. If I based my judgment solely on the interval between two consecutive reversals of Jupiter′s relative motion (i.e., two consecutive stations), the earliest possible day to draw the satellite conclusion would be January 12 (or January 13, to be on the sure side). Yet Galileo immediately abandoned the fixed-star hypothesis on January 10, turned "from doubt to astonishment," and said something to imply that those three little stars moved around Jupiter as if it were a major center of attraction (no pun intended at this pre-Newtonian stage). By January 11, Galileo had quickly reached the satellite conclusion, beyond reasonable doubt, so to speak. What bothered me was: Galileo′s conclusion preceded mine by a two-day margin. Besides, both his astonishment and his distraction on January 10 confounded me!

However, I was not alone. Stillman Drake, one of the foremost Galileo scholars of our time, even presented evidence to show that Galileo could not have reached the conclusion until January 12 or later [85-87]. He thought reaching the conclusion on January 10 or January 11 was “unthinkable”. Nevertheless, I was willing to give Galileo the benefit of the doubt, since his astonishment and his unusual remark caught my attention; his remark exhibited the quality of intuition-generated “affectively charged judgments” alluded to by Dane and Pratt [16]. Apparently, Galileo was able to detect the anomaly after seeing only a single reversal from direct to retrograde motion! What was the basis of his jumping to conclusions so soon? What did I miss?

In order to find the subtle clue that led Galileo to his conclusion, I had to temporarily suppress my memory of Galileo′s descriptions about Jupiter after January 11, 1610, much like a jury that the judge has instructed to ignore inadmissible evidence. In particular, I had to temporarily "erase" my memory of Jupiter′s frequent reversals of motion between January 12 and March 2. Failure to do so could only aggravate my own confusion and deepen the mystery.

I re-enacted Galileo′s observations in a strictly day-by-day and frame-by-frame fashion, without peeking ahead in his records in Sidereus Nuncius. Suddenly, like a lightning strike, I sensed something missing in the picture. By deliberately focusing on the picture part of my thought, I eventually succeeded in articulating Galileo′s unspeakable "gut feeling": if it took months rather than days for Jupiter to complete two consecutive stations, why did Jupiter turn from direct to retrograde motion without a temporary standstill relative to the fixed stars (i.e., a station) lasting at least a couple of days, if not a week? The absence of Jupiter′s station was what had astonished Galileo. Although he did not say it explicitly, his astonishment betrayed his "gut feeling" or intuition. Apparently, this subtle point eluded most, if not all, Galileo scholars for the past four hundred years.

Other thoughts must also have crossed Galileo′s mind. He noticed that the little stars and Jupiter formed a straight line parallel to the ecliptic. This observation is consistent with three little stars orbiting around Jupiter in orbits that are roughly coplanar with the Earth′s orbit. He also noted the significant changes of brightness (as the little stars moved closer to or farther from the Earth). Besides, he seemed to recognize which little star was which, just like his own children, and he said that he was sure no other stars on the ecliptic were near Jupiter, thus conjuring up a specter of three little stars whirling around Jupiter. Also, he noted the changes of mutual distances, which he initially had chosen to ignore and set aside. All these lines of evidence, which were weak when taken individually, became strongly suggestive of his satellite hypothesis when taken together. The convergence of several lines of evidence suddenly crystallized into so coherent a picture that there appeared to be no other credible explanation consistent with so many lines of weak evidence. I do not believe that Galileo would have drawn his extraordinary conclusion on the basis of a single piece of evidence, especially when he was fully aware that his life was on the line or, rather, on the burning stake! Picture-based reasoning merged all these lines of evidence together in a snapping action, thus generating an "aha" moment while Drake and I were still in the dark! A detailed analysis of this event, with additional evidence for the general audience, can be found in [43].

Note that Galileo and I primed our minds differently. I was looking at the intervals between two consecutive stations (i.e., how fast Jupiter moved), whereas he was looking at how fast Jupiter′s apparent motion changed direction. Equivalently, I was looking at the first time-derivative of Jupiter′s apparent motion, whereas Galileo was looking at the corresponding second time-derivative. It was a small difference of viewing angles, which got amplified into a two-day margin. His recognition was spontaneous, but mine required struggling, in addition to the big hint from his astonishment! Albert Szent-Györgyi was right: "Discovery consists of seeing what everybody has seen and thinking what nobody has thought". The key difference is recognition, and the venue is picture-based reasoning.
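In symbols (my own notation, not Galileo′s): let λ(t) denote Jupiter′s apparent ecliptic longitude against the presumed fixed stars. A station is a zero of the first derivative,

\[ \text{station:}\quad \frac{d\lambda}{dt} = 0, \]

so my cue was the spacing of consecutive zeros of dλ/dt (months, not days), whereas Galileo′s cue was that a sign reversal of dλ/dt between one night and the next, with no intervening standstill, would require an implausibly large |d²λ/dt²|.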

The above-described thought process was a reconstruction based on data made available in Galileo′s Sidereus Nuncius. It is impossible to verify the reconstruction because of the 400-year gap. However, it gave me first-hand experience of using Galileo′s picture information, plus his emotional outburst, to get an "aha" experience. The reconstructed process had most of the important elements of intuition. I believe that the reconstructed process provides sufficient evidence to pass several of the tests listed at the beginning of this section.

Interpretation of Serendipity, Poincaré′s Incubation and Unconscious Work

The phenomenon of incubation, reported by Poincaré regarding his discovery during a geological excursion [21], was mysterious and controversial. Some mainstream psychologists, such as Hayes [2], dismissed the significance of incubation simply because there was insufficient time for incubation to take place prior to discoveries. Yet many of us who are lesser talents than Poincaré have had the experience of benefiting from incubation.

Poincaré′s incubation can be understood in terms of the modern theory of selective attention and in terms of the choice of an appropriate search space. Poincaré′s sustained concentration on his work for a period of four or five days might have led him to focus on an unfruitful part of the search space. His departure on a pre-planned excursion led to defocused attention [88-90] and allowed him to shift his attention to a previously neglected portion of the search space. This interpretation is supported by both arousal and affect research. The details will not be pursued here (see Sec. 4.8 of [41]).

In hindsight, it is difficult to justify Hayes′ selection of a minimal duration as the objective criterion of incubation periods; the criterion seems rather arbitrary. Why should there be a time limit for incubation except perhaps during a student examination? The incubation period could be as brief as a 15-minute coffee break or other brief diversions just long enough to get the mind off its previous pre-occupation but not long enough to be detected by Hayes′ behavioral observations or experiments. Of course, it could be much longer. Once it took me about two weeks to recover from despair following a failed effort, and to become able to look at the same failure from a different perspective so as to recognize that the very failure was actually a success in disguise (see p. 213 of [41]). Peirce′s incubation period for perfecting his definition, or definitions, of abduction was quite long. Presumably, he needed his critics′ perpetual counter-arguments for inspiration. It took a life-long dedication for Peirce to bring out the best of his talent.

However, defocused attention alone is probably not sufficient for incubation to work. It was probably necessary for Poincaré to maintain lingering attention to his problem on the back burner. That is, he had to keep his problem at the "edge" of his attention, so that when plausible solutions began to surface, he was ready to recognize them. This is probably one of the many crucial character traits of creative individuals, which serves to enhance their chance of serendipity.

In 1754, Horace Walpole coined a new word, "serendipity," to describe accidental discoveries [91-93]. A number of important novel discoveries appeared to be the consequence of an accident, which ordinary folks would regard as a failure but in which a creative individual saw an unexpected opportunity. What made creative individuals recognize their opportunity? Why did they have luck that eluded others? Luck was undoubtedly one of the factors, since the accident was unplanned. But it took more than just luck, since there was almost always someone else who lamented the missed opportunity for having ignored the accident and passed it off as bad luck instead of a blessing in disguise. Louis Pasteur had a punch line to characterize the person who had not missed the opportunity. He said, "In the fields of observation, chance favors only the prepared mind". But what constitutes a prepared mind has been a subject of speculation. Hayes [2] thought that what Pasteur meant by "the prepared mind" was someone who is sufficiently knowledgeable to recognize the chance of a discovery [94]. What Hayes stated was the standard non-elitist explanation. Evidence against the non-elitist view was presented earlier, and it will not be repeated here.

The non-elitist view was by no means widely accepted. Pasteur′s prepared mind is better understood in terms of the concept of priming of the mind. Root-Bernstein [95] thought that it is not sufficient simply to be in the right place at the right time: a scientist must be expecting something for serendipity to occur. But how to expect an unexpected event, as a way of priming one′s mind ahead of time, is an intriguing problem. Boden pointed out that parallel processing of the mind is a key factor for serendipity; it is not mere random chance alone but rather "chance with judgment" [22]. Boden also presented an extensive discussion about the unpredictability of serendipity. Her interpretation can be made clear if the word "judgment" is replaced by Simon′s chosen word "recognition" or Poincaré′s chosen word "discernment": serendipity is pattern recognition at an unguarded moment. At work here is the ability to make a subtle match between a pattern and templates under an unplanned, unexpected circumstance. Moreover, a prepared mind is one that stretches one′s attention to the problem beyond the formal working hours, so that when the right solution pops up at an unexpected moment and in an unexpected "form" or circumstance, the stimulus automatically elicits a process of recognition. In fact, this is not just my opinion. When Isaac Newton was questioned about how he had discovered the law of gravitation, he indicated that he had done it "by thinking about it all the time". More recently, Andrew Wiles recalled how he had been enchanted with the problem of proving Fermat′s Last Theorem [94,96]: "I was so obsessed by this problem that for eight years I was thinking about it all of the time - when I woke up in the morning to when I went to sleep at night". Here, the keyword in Wiles′ remark is "obsessed". The non-elitist school got it all wrong: it was obsession, from the practitioners′ perspective, that was disguised as hard work from the observers′ (subjective) perspective. More extensive discussions about the role of obsession in creativity can be found in [97].

As for Boden′s notion of unpredictability, it arises not just from the unpredictable encounter with Lady Luck, but also from the required step of recognition, which is by no means guaranteed upon that encounter. Root-Bernstein′s notion of expecting the unexpected also requires further elucidation. Again, analog pattern recognition provides the explanation: the step of recognition is carried out in terms of pictures rather than in words.

Let us consider the case of Thomas Edison′s invention of the phonograph, which was ostensibly his most original invention. Because of Edison′s experience as a telegrapher in his youthful years, he maintained a vivid interest in the invention or improvement of telegraphic equipment [98]. He invented an automatic telegraphic repeater, which recorded signals in Morse code and repeated them simultaneously to one or more stations. This instrument used two turntables with two punctuated discs. One day, an accidental current overload caused the machine to spin the disc at a considerably higher speed than normal. Suddenly, the repeater began to chatter in high squealing sounds, which resembled human voices. He was fascinated by it and listened to it for a moment. But then he quickly fixed the repeater and resumed his previous activity. Superficially, he seemed to have maintained only a temporary and passing curiosity about the episode. Actually, he must have instantly captured the hint. As soon as he was through with the on-going project, he returned to the idea and started to design what turned out to be the first phonograph.

The idea of recording sound for later reproduction had not been conceived until Edison encountered the episode of the malfunctioning telegraph repeater. Previously, his contemporary Joseph Faber had invented a talking machine [99]. Faber focused on the sound-generating mechanism; his invention could be construed as a primitive analog speech synthesizer: an artificial organ of speech. He imitated the human voice-generating apparatus by fabricating a vibrating ivory reed of variable pitch (an artificial vocal cord), an artificial oral cavity with variable sizes and shapes, a rubber tongue and lips, a little windmill rolling in the throat, and a tube attached to it so as to generate nasal sounds. Faber′s device was meant to produce pre-determined or preconceived voice-like sound patterns. Edison was aware of its development but his approach turned out to be quite different. Whereas Faber tried to produce different kinds of vibrations for voice imitation, Edison took the vibrations as given and ready-made, and focused on reproducing sound or voice from a pre-recorded vibration pattern. When Edison heard the voice-like sound generated by the malfunctioning telegraph repeater, he recognized a practical way of producing voice-like sound: he must have seen the mental imagery of a spinning record disc with grooves of varying patterns, previously generated by original sounds, just like his telegraph repeater′s disc. Edison must have been thinking about a voice-generating mechanism all the time and had put it on the back burner while he was preoccupied with other ongoing projects. Priming of his mind in this way allowed him to instantly recognize an excellent way of producing voice when he heard the "noise". How did he prime his mind for the unexpected event? He simply could not have formulated his anticipation of the occurrence in words, because there was no precedent of voice generation by means of a spinning disc with indentations or punctuations. Had he dreamed of an accident like that, he would have just gone ahead and done it rather than waited for the day when an accident fulfilled his dream. He could only anticipate the unexpected in terms of imagined but vague sound patterns (the auditory equivalent of mental imagery).

Compared with Faber′s talking machine, Edison′s invention of the phonograph included a two-part mechanism: one for recording and the other for faithful reproduction of pre-recorded sounds or voices. He had no intention of creating original sounds with a machine. Rather, he opted for "garbage-in-garbage-out" as well as "gem-in-gem-out", i.e., the idea of faithful recording and faithful playback by a 2-in-1 machine. While the noise from the malfunctioning telegraph repeater provided the clue for the voice-reproduction part of the invention, what inspired Edison to invent the recording part? I could not find an explicit explanation in the books that I consulted, but I can speculate here. I suspect that the inspiration came from the telegraph repeater itself, with the aid of visual thinking. Recall that the telegraph repeater consisted of two spinning punctuated discs: one for reception and the other for repeating the received messages for transmission to several other stations. The symmetry of the two-disc arrangement might have inspired Edison to invent a sound recorder and a sound re-generator all in a single package of ideas. Symmetry appeals only to practitioners of visual thinking but not to practitioners of exclusively rule-based reasoning, simply because symmetry reveals itself in pictures rather than in words.

Actually, the most frequently cited story of serendipity is Alexander Fleming′s discovery of penicillin: a culture plate of bacteria, accidentally ruined by contamination with a then-unknown agent (subsequently found to be the fungus Penicillium), inspired him to discover penicillin. What did Fleming expect all the time as a way of priming his mind? Obviously, he could not have primed his mind to see a contaminated culture plate. For if he had, he could have just gone ahead and deliberately contaminated the culture plate, as skeptics suspected. He was expecting to see a vague image of massive death of various kinds of bacteria. A clear patch on the culture plate, caused by a widespread rupture of bacterial cell membranes and cell walls, was not the only possible visual scene of massive death. Seeing total immobilization of all kinds of bacteria under a microscope would be yet another. Interested readers can readily name other possibilities. I suspect that this kind of mindset and thinking was what Pasteur referred to as a prepared mind.

Many people, experts and laypersons alike, questioned the credibility of Fleming′s story; some of them accused Fleming of deliberately contaminating the culture plate, and others stopped just short of calling Fleming a liar. In other words, some people invoked their own subjective experience or, rather, absence of experience to deny Fleming′s subjective report. No wonder my classmate Delon Wu once complained, "Your objectivity is nothing but another kind of subjectivity". The readers may find more examples in their own experience. In fact, there are many people, myself included, who have had the experience of serendipity in minor or modest discoveries. They are unlikely to all be liars. Experts′ lack of such personal experience is insufficient evidence for falsifying the claim of creative scientists regarding the existence of serendipity.

Edison′s serendipity was an impeccable story. Edison once said, "Genius is one percent inspiration, ninety-nine percent perspiration". He must have had a heartfelt disdain for others′ excessive emphasis on inspiration or intuition, or he just wanted to downplay their role in making discoveries. Therefore, pundits need not be concerned with the possibility that Edison might have concocted the fancy story just to contradict his own remark.

Simulation of Gestalt Phenomenon

The discipline of artificial intelligence is a marriage of creativity research and computer science [100]. In earlier sections, we have demonstrated how progress made in computer science and artificial intelligence research inspired creativity research. Now, we wish to examine how enhanced understanding of human creativity can also help artificial intelligence research. Simon and his co-workers designed a series of computer programs with the intent to simulate intuition and the “aha” phenomenon. The contributions to computer-based creative problem solving by Simon and co-workers as well as other pioneers of artificial intelligence were compiled in several books [101-104]. It is instructive to examine Michael Wertheimer′s critique [46] and Simon′s rebuttal [47]. Michael Wertheimer′s analysis was based on the Gestalt view of creativity outlined in Max Wertheimer′s book Productive Thinking [105]. Max Wertheimer, one of the chief proponents of Gestalt psychology, differentiated two types of thinking: (“blind” or senseless) reproductive thinking and (truly insightful) productive thinking. Reproductive thinking manipulates mental structures, but does not generate new mental structures, whereas productive thinking does both. It is readily recognized that reproductive thinking is just exclusively rule-based reasoning, whereas productive thinking includes both picture-based and rule-based reasoning.

Whereas Michael Wertheimer acknowledged the accomplishments of Simon's computer programs, such as the General Problem Solver (GPS) [59,106], he thought that such programs performed only reproductive thinking. Specifically, he thought that crucial Gestalt elements, such as understanding (grasping both what is crucial in a given problem and why it is crucial), insight, and the associated "aha" experience, were lacking in these programs. Furthermore, the construction of problem representations was done by the programmer, rather than by the computer program itself. Wertheimer dismissed the computer's learning as learning by rote ("mechanical" learning) rather than learning by understanding. Simon disagreed and claimed that all of these had been accomplished by digital computers.

Simon thought that intuition could be interpreted as essentially “recognition” [39]. He devised the following criteria for testing the presence of intuition. As an illustration, Simon cited a program named EPAM (Elementary Perceiver and Memorizer) [107,108]. When a stimulus was presented to EPAM, the program applied a sequence of tests to it, using the outcomes of the tests to sort it down along a discrimination net until it was differentiated from alternative stimuli. A threshold was set in the discrimination net for recognition. EPAM could learn by experience and improve its discrimination net. EPAM could indicate its recognition but no information was stored regarding the detail of reasoning that had led to the step of recognition. Therefore, the recognition process was not reportable.

Simon also made an attempt to explain how his computer programs, such as GPS and EPAM, could exhibit the "aha" phenomenon. EPAM faced a search space of about a trillion (10^12) possibilities on an average run, yet was able to quickly reach a step of recognition in about two-tenths of a second through the use of heuristics. The last test prior to recognition was simply the last straw that broke the proverbial camel's back, and was not the sole or main criterion that made recognition possible. Naturally, the computer program did not keep track of all the intermediate steps of testing and, therefore, EPAM could not report exactly how it had reached the conclusion. The computer simply did not remember.

Superficially, EPAM was still a rule-based program. What set it apart from expert systems was the comprehensiveness of its heuristics and the relative "freedom" granted by the programmer. On the one hand, EPAM tested only a portion of the features of a pattern, thus admitting fault tolerance. Therefore, patterns did not need to be identical in order to be recognized as the same by EPAM. EPAM could deal with similarity as well as identity of patterns. On the other hand, by increasing the number of criteria for matching in pattern recognition and by allowing similarity instead of just identity, EPAM introduced a gray scale of recognition, thus converting digital pattern recognition into quasi-analog pattern recognition.
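To make the idea concrete, here is a minimal Python sketch of an EPAM-like discrimination net, assuming invented feature names and a made-up similarity threshold; it is an illustration of the mechanism described above, not the actual EPAM code.

```python
# Minimal, illustrative sketch of an EPAM-like discrimination net (not the
# original EPAM code). Each internal node tests one feature of the stimulus;
# leaves hold stored patterns. Recognition tolerates partial mismatch by
# scoring similarity against a threshold instead of demanding identity.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    feature: Optional[str] = None               # feature tested at this node
    children: Dict[str, "Node"] = field(default_factory=dict)
    stored: Optional[dict] = None               # pattern image kept at a leaf

def sort_down(net: Node, stimulus: dict) -> Node:
    """Apply a sequence of feature tests to sort the stimulus down the net."""
    node = net
    while node.feature is not None:
        branch = stimulus.get(node.feature)
        if branch not in node.children:         # unfamiliar branch: stop here
            break
        node = node.children[branch]
    return node

def similarity(stored: dict, stimulus: dict) -> float:
    """Fraction of stored features matched: a grayscale of recognition."""
    shared = [f for f in stored if f in stimulus]
    if not shared:
        return 0.0
    return sum(stored[f] == stimulus[f] for f in shared) / len(shared)

def recognize(net: Node, stimulus: dict, threshold: float = 0.75) -> bool:
    leaf = sort_down(net, stimulus)
    return leaf.stored is not None and similarity(leaf.stored, stimulus) >= threshold

# Tiny net: test 'shape' first, then 'size'; the leaf stores a pattern image.
net = Node(feature="shape", children={
    "angular": Node(feature="size", children={
        "small": Node(stored={"shape": "angular", "size": "small",
                              "bar": "yes", "serif": "no"})
    })
})
# A stimulus differing in one minor feature is still recognized (fault tolerance).
print(recognize(net, {"shape": "angular", "size": "small",
                      "bar": "yes", "serif": "yes"}))   # True: 3 of 4 features match
```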

Strictly speaking, there was mixing of both analog and digital processes. The argument of unreportability is not convincing: EPAM deliberately tried not to remember the intermediate steps of reasoning. With increasing capacities and decreasing prices of mass storage devices, EPAM could have chosen to leave a “paper trail” of how it went through the discrimination net, thus making the detail reportable. As illustrated in the case of Galileo′s discovery of Jupiter′s moons, intuition is inherently difficult to articulate not because the detailed steps have been forgotten but rather because the detailed logic has never crossed the mind during the solution-generating phase. Besides, Simon′s demonstration of the “aha” phenomenon lacked the snapping action alluded to by Koestler. The simulation of the Gestalt phenomena was close but not quite as close as Simon had claimed.

However, it is obvious that what Simon's programs did was not just "mechanical" learning. These programs were not strictly rule-based systems like expert systems. They began to deal with the gray-scale nature of pattern recognition, much like picture-based reasoning at a primitive level, in spite of the technical challenges of performing analog pattern recognition in a digital environment. The sequence of tests applied to the stimulus to sort it down along a discrimination net was still sequential in nature rather than true parallel processing, and the searches were systematic rather than heuristic. I would prefer to call it quasi-analog pattern recognition, not because anyone else could do better but because the inherent restriction imposed by a digital environment prevented the machine from completely duplicating and recreating true intuition. It is implicitly understood that simulation is never meant to be exact duplication; simulation only approaches duplication asymptotically.

The Simon-Wertheimer debate also raised another question: Can a digital computer understand? Simon followed a test suggested by Michael Wertheimer: "one test of whether learning [with understanding] has really happened is to check whether what has been learned will generalize to a related task - if all that has transpired is sheer memorizing or mechanical associating, the learner will be unable to recognize the similarity between a task that has already been mastered and a new one which, while it may be superficially quite different, requires the same insight to solve it that also worked in the earlier task". The transfer of learning is a central issue for the "Gestalt theorist" [46]. What Michael Wertheimer alluded to is analogical thinking by means of picture-based reasoning. Simon pointed out that there was no great difficulty in constructing computer programs that could do just that. In fact, some programs could even learn to solve problems by examining worked-out examples and could construct a set of new instructions (rules) adequate for solving a wide range of algebra equations.

It is sometimes said that a problem is understood when it can be formulated or represented appropriately. The program UNDERSTAND [109] accepted simple problems stated in plain English and constructed representations of the problems that were suitable as inputs to a general problem-solving program like GPS. Several computer programs existed that had simple capabilities to use analogies to formulate new representations.

Can a digital computer make scientific discoveries? Newell et al. [59] constructed several such programs, including one named the Logic Theorist, which managed to discover a shorter and more elegant proof of a theorem in Chapter 2 of Whitehead and Russell's Principia Mathematica than the original version. Thus, with proper coaching by the programmers, who provided or suggested how to devise heuristics, there was no question that a digital computer could make certain types of scientific discoveries, and, in all fairness, the performance must be considered impressive.

From the very outset, the debate between Wertheimer and Simon was destined to be inconclusive. The biggest hurdle was that no one then really knew what intuition is. It could only be indirectly specified on the basis of purely subjective feeling, on which there was hardly any consensus. Instead of evaluating the performance of these problem-solving programs, the debating duo went for "the jugular" directly, and started to ask questions pertaining to "thinking" and "understanding." Both terms were heavily tainted with subjective connotations. It is highly challenging to evaluate "thinking" and "understanding" objectively. Now, we can at least claim that there are two levels of thinking: at the rule-based level and at the picture-based level. Understanding at the picture-based level appears to be more profound than understanding at the rule-based level. Knowledge learned by means of picture-based reasoning is more likely to be transferred to superficially unrelated tasks that involve the same principle. However, within these two levels, there are many sublevels of understanding; understanding has many shades of grayness. Let us consider the case of the physics of electricity and magnetism.

When the basic laws governing electricity and magnetism, such as Coulomb's law, Ampère's law, Lenz's law, etc., became known, humans had some understanding of electricity and magnetism. Physicist Ernest Rutherford once said, "Qualitative is nothing but poor quantitative." Having these quantitative laws, no one could deny that humans had a respectable level of understanding. When Maxwell united the theories of electricity and magnetism and developed the well-known Maxwell's equations, humans attained a higher level of understanding than ever before. One of these days, when the Grand Unification Theory becomes well established, it will then be possible, at least for some professionals, to attain an even higher and unprecedented level of understanding. There are so many layers of understanding that peeling off one layer exposes the next layer, which waits to be understood. Even when all natural laws become condensed into a single one, one is still entitled to ask the question: why does this unique and ultimate natural law exist?

Obviously, both Wertheimer and Simon wanted to win the debate. What Wertheimer could do was set a bar, or bars, for Simon's programs to jump over. But setting the bar in terms of understanding, insight and the "aha" phenomenon was asking for disagreement because of the vagueness of the definitions, which allowed the debating duo to stretch the latitude of their interpretations in opposite directions. An impartial third party could be equally baffled by the vagueness of these criteria. Wertheimer did set some concrete criteria in terms of demonstrable performance, such as transfer of learning. However, these performance standards capture only part of the many attributes of the Gestalt phenomena. In other words, Wertheimer's limited number of criteria set only partial constraints for high creativity. It was tantamount to sorting out fish according to size: only computer programs with sufficient creativity could get through the mesh and avoid getting caught. Admirably, Simon's programs achieved these performance standards one after another; the net of "constraints" left holes big enough for Simon's programs to slip through successfully. I believe that Wertheimer was unrelenting and refused to be convinced, perhaps for unspoken reasons: emotionally he could not accept the notion of a "thinking and understanding" machine (see later). What Wertheimer could have done would have been to set smaller and smaller mesh sizes until no fish slipped through. That is, Wertheimer could have raised the bar each time the computer passed a test.

In regard to the Wertheimer-Simon debate, there was an interesting parallel. Patterson trained a gorilla named Koko to communicate with humans by means of American Sign Language (ASL). It appeared that Koko had mastered the language to the extent of being able to crack self-deprecating jokes. However, Koko's linguistic capability was often questioned. Some linguistics experts did not think Koko had mastered a natural language; for example, her grammar left a lot to be desired. Patterson and Linden complained that detractors' objections were often based on what the apes had not yet done [110]. Every time Koko accomplished a new feat, detractors raised the bar of qualifications.

However, this spirit of playing a zero-sum game would generate unnecessary animosity between the two debating camps instead of building consensus. Needless to say, the two camps of followers could not come to an agreement. Each side declared victory and stopped interacting. John Searle, who proposed the Chinese Room argument, invited a degree of hostility that is rarely seen in scholarly discourse.

An even-handed assessment of the performance of Simon's programs must take into account the constraints of the task in light of our present understanding of humans' high creativity. Simon's programs started in the right direction: using heuristics to coerce the computer into selecting a fruitful search space, introducing grayscale into pattern recognition, encouraging the computer to learn from its own experience to build new heuristics, etc. Strictly speaking, these tasks require parallel processing capability to implement effectively. However, in a digital environment, one can only simulate parallel processing by means of what I referred to as pseudo-parallel processing [41], just like the commonly known techniques of multi-tasking (in software), multiplexing (in hardware) and raster scanning in the display of a cathode ray tube (CRT). Basically, the process was still sequential. However, rapid deployment of sequential processing often gives the illusion of parallel processing. Having to fake parallel processing with pseudo-parallel processing prevented the digital computer from fully unleashing the power of intuition. In humans, heuristic searching is accomplished by subjectively selecting the proper search space and conducting parallel processing for recognition during the search-and-match phase. In a digital environment, heuristics restricted the search space to the programmer's specification. Within the specified search space, the recognition process was essentially systematic and exhaustive (although clever algorithms sometimes cut the number of searches in half or less), as if specifying the search space alone made it sufficiently heuristic. Of course, the digital computer could afford to perform exhaustive searching within the confines of the heuristically defined search space because of its sheer speed. Simon's team recognized the limitation and managed to stay on the right track while they developed whatever contingent strategies were necessary for circumventing the restrictions. So they added improvements, such as the ability to infer new heuristics by recombining old ones and to learn and make inferences from examples or experience. Furthermore, instead of rigid and scanty discrete criteria, Simon used a large number of discrete criteria (approaching a virtual continuum), of which none was mandatory but each "inclines without necessitating," as Boden aptly put it [22]. The achievements were remarkable.
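The notion of pseudo-parallel processing can be illustrated with a short sketch. The following Python fragment, with invented task names and data, advances several candidate-matching tasks one small step at a time in a round-robin loop, so that a single sequential processor gives the appearance of searching them in parallel; it is only a toy analogy of multi-tasking, not a reconstruction of any of Simon's programs.

```python
# Illustrative sketch of "pseudo-parallel" processing: many candidate-matching
# tasks are advanced one small step at a time in a round-robin loop, so a single
# sequential processor gives the appearance of searching them in parallel
# (multi-tasking / multiplexing). Names and data are invented for illustration.

def match_task(candidate: str, target: str):
    """Generator that compares one candidate to the target one character per step."""
    score = 0
    for i, ch in enumerate(candidate):
        yield None                        # give up the processor after each step
        if i < len(target) and ch == target[i]:
            score += 1
    yield score / max(len(target), 1)     # final yield: the match score

def pseudo_parallel_search(candidates, target):
    tasks = {c: match_task(c, target) for c in candidates}
    results = {}
    while tasks:                          # round-robin time slicing
        for cand in list(tasks):
            value = next(tasks[cand])
            if value is not None:         # task finished: record its score
                results[cand] = value
                del tasks[cand]
    return max(results, key=results.get)

print(pseudo_parallel_search(["kepler", "keeper", "kipper"], "kepler"))  # 'kepler'
```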

Although Simon was perhaps the first to recognize that problem solving is an act of recognizing the solution, he made no distinction between rule-based and picture-based reasoning. Simon certainly appreciated the difference between sequential processing and parallel processing. But he insisted that a parallel process could be implemented by means of a sequential process, and he deliberately blurred the distinction between the two processes. He thus missed the opportunity to link intuition to parallel processing.

Rather than claiming a "home run," he would have been better off taking partial credit: acknowledging the distinction and conceding that pseudo-parallel processing meets only part of the demand of true parallel processing. His simulation programs did not quite exhibit intuition but still did exceptionally well in solving novel problems. We thus have to agree with Michael Wertheimer that Simon's interpretation of intuition was misleading and constituted a distortion of the Gestaltist notion of insight. But this misstep did not detract from Simon's groundbreaking contributions to computer-based creative problem solving. Michael Wertheimer was wrong on one count: Simon's programs did better than what Michael Wertheimer had dismissed as "mechanical" learning; EPAM could learn from experience and improve its own discrimination net. Still, Simon should have just forgotten about simulating the Gestalt phenomena and focused only on the performance of his programs. Simon and Michael Wertheimer each scored some points in their debate. One could either praise or criticize Simon's programs depending on whether one considered the bottle half-full or half-empty.

In 1986, Quinlan [111] introduced ID3 (Iterative Dichotomiser 3). It is a type of algorithm for building a decision tree from a database by means of what is tantamount to inductive reasoning. It examined examples in the database by considering various attributes of the samples. The set of samples was split by the selected attribute to produce subsets of data, and the procedure was then repeated on each subset. The selection criterion was a parameter called information gain, based on the computation of entropy as defined by Shannon's information theory. The decision tree branched out discretely, and the attributes were arranged hierarchically in accordance with their ranking by information gain. Each attribute, as well as each sub-attribute, sub-sub-attribute, etc., assumed a binary value of yes or no. The analog nature was reflected in the collection of a variety of attributes. One important virtue is the recursive nature of the algorithm, which underlies its inductive capability. There is no question that it is an improved way of computer-based thinking that certainly eluded our dumb high-achievers. Furthermore, in selected situations, it outperformed some experts.
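A bare-bones sketch of the ID3 idea just described may help: compute Shannon entropy, pick the attribute with the highest information gain, split the samples and recurse. The toy data and attribute names below are invented; this is an illustration, not Quinlan's code.

```python
# Bare-bones sketch of the ID3 idea (not Quinlan's original code): compute
# Shannon entropy, pick the attribute with the highest information gain,
# split the samples on it, and recurse on each subset.

from math import log2
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    base = entropy(labels)
    remainder = 0.0
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:                        # pure subset: leaf node
        return labels[0]
    if not attrs:                                    # no attribute left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: information_gain(rows, labels, a))
    tree = {best: {}}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        tree[best][value] = id3([rows[i] for i in idx],
                                [labels[i] for i in idx],
                                [a for a in attrs if a != best])
    return tree

# Toy data: decide whether to play, given two discrete attributes.
rows = [{"outlook": "sunny", "wind": "weak"},
        {"outlook": "sunny", "wind": "strong"},
        {"outlook": "rain",  "wind": "weak"},
        {"outlook": "rain",  "wind": "strong"}]
labels = ["yes", "no", "yes", "no"]
print(id3(rows, labels, ["outlook", "wind"]))        # splits on 'wind' here
```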

I have noted that a significant number of publications in the creativity literature began with definitions of important concepts and terms prior to any preliminary elucidation of the concepts and related topics. Of course, tentative definitions were needed to ensure meaningful discussions and debates. It was also fashionable to divide a gray scale into neat (provisional) pigeonholes. However, there is always the danger of "hardening of the categories," as someone has aptly said. These definitions or categorizations were supposed to be tentative and subject to future refinement. However, their tentative nature had often been forgotten, and such definitions were uncritically taken to be final and beyond further modification.

For example, Dane and Pratt [16] thought that treating intuition synonymously with insight had caused past confusion. Therefore, they claimed that "in [insight] one consciously becomes aware of the logical connections supporting a particular answer or solution, whereas in [intuition] one is unable to consciously account for the rationale underlying the judgment that has arisen". However, this particular classification was counterproductive. Just consider what transpired on the night of January 10, 1610. Galileo jumped to conclusions without articulating the logic of his reasoning. By Dane and Pratt's definition, it was intuition. My reconstruction of Galileo's thinking process revealed the clue leading to Galileo's astonishment and conclusions. The same process must now be classified as insight. Basically, both acts were done by means of picture-based reasoning. Thus, intuition and insight are actually two sides of the same coin. Dane and Pratt subjectively subdivided a single concept into two. Inadvertently, Dane and Pratt's mistake caused "hardening of the categories": once intuition and insight became separate concepts, search paths that might have led to identification of both concepts with visual thinking became prematurely foreclosed. This pitfall, together with premature rejection of Einstein's visual thinking as a possible avenue of creativity, forever confined investigators to the wrong search space.

In contrast, Myers did just the opposite. In the sample problems cited in his book (Chapter 6 of [94]), Myers inadvertently lumped together errors owing to intuition, errors owing to abuses or misuses of rules in rule-based reasoning, and errors owing to sheer guessing. Myers' practice might lead to sample heterogeneity in experimentation: treating a mixed sample of diversely different constituents as a pure sample. Sample heterogeneity invalidates the resulting statistical analysis without the conscious awareness of the offender or of unsuspecting readers [41,42].

ID3 programs avoided making this kind of mistake because of the inherently recursive process of continuing to improve information gain by iterating in loops until an optimal classification had been attained. Of course, errors are still possible if entrapment at a local optimum fools the program. Still, cognitive scientists should have learned from ID3 programs. Boden once warned, "It's a mistake to think that sequential computer programs cannot possibly teach us anything about psychology [of creativity]" [22].

Evaluating and Enhancing Heuristics for Computer-Based Problem Solving

We are ready now to transform the Wertheimer-Simon-like interactions into fruitful cross-fertilization. Since heuristics are probably the most crucial element in problem-solving computer programs, a better understanding of human creativity can potentially enhance future designs of heuristics. An unequivocal demonstration of the “aha” phenomenon in computers may be of great academic interest, but it is less relevant for the performance of problem-solving computer programs from the utilitarian point of view.

In the following discussion, it is necessary for me to raise the bar of performance beyond what had already been done by Michael Wertheimer [46]. However, instead of playing the role of a detractor, I wish to make constructive suggestions about possibilities of further enhancing future heuristics designs.

Let us begin by analyzing an early program named BACON that could examine actual data and re-discover several known physical and chemical laws [101,103,104,112,113]. A sample of heuristics used in earlier versions of BACON gives us a glimpse into the design of heuristics [22]:

• if the values of a term are constant, then infer the term always has that value.

• if the values of two numerical terms give a straight line when plotted on a graph, then infer that they are always related in a linear way (with the same slope and intercept as on the graph).

• if the values of two numerical terms increase together, then consider their ratio.

• if the values of one term increase as those of another decrease, then consider their product.

If the ratio or product of two variables, x and y, is not constant, additional heuristics instruct the computer to compute more complex ratios or products, such as x^m/y^n or x^m·y^n (where m and n are integers), and check whether any of them is constant. Furthermore, BACON does not have to try every pair of integers. Rather, it considers whether a ratio, if not constant, increases or decreases monotonically, thus cutting the number of pairs of variables to be tested in half (heuristic searching). In this way, BACON could discover several numerical models [112].
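The flavor of these heuristics can be conveyed with a small sketch. The following Python fragment is an illustration rather than the actual BACON code: it tries ratios, products and simple power combinations x^m/y^n of two positive, noise-free variables until one turns out to be constant, and it omits BACON's monotonicity shortcut and many other refinements.

```python
# Sketch of BACON-style heuristics (an illustration, not the actual BACON code):
# test whether some combination of two measured variables is constant, trying
# ratios, products and simple power combinations x**m / y**n in turn.
# Assumes positive, noise-free values.

def is_constant(values, tol=1e-6):
    return max(values) - min(values) <= tol * max(abs(v) for v in values)

def bacon_search(xs, ys, max_power=3):
    """Return a description of a constant combination of x and y, if one is found."""
    for m in range(1, max_power + 1):
        for n in range(1, max_power + 1):
            ratio = [x**m / y**n for x, y in zip(xs, ys)]
            if is_constant(ratio):
                return f"x^{m} / y^{n} is constant (~ {ratio[0]:.4g})"
            product = [x**m * y**n for x, y in zip(xs, ys)]
            if is_constant(product):
                return f"x^{m} * y^{n} is constant (~ {product[0]:.4g})"
    return "no simple law found"

# Kepler's third law with made-up units: period T and orbital radius R satisfy
# T^2 / R^3 = constant.
R = [1.0, 2.0, 4.0, 9.0]
T = [r**1.5 for r in R]
print(bacon_search(T, R))   # reports that T^2 / R^3 is constant
```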

Just like GPS and EPAM, the major strength of BACON stemmed from its heuristics, which granted BACON freedom to explore but kept the program focused on the most fruitful part of the search space without micromanaging BACON's step-by-step chores of problem solving. Several later versions of BACON existed, in which improvements were made to allow it to use existing heuristics to act upon each other. Thus, a heuristic for creating discrimination rules might act upon a generalization heuristic to create a more powerful domain-specific generalization heuristic. Essentially, the program could learn to learn and learn from its own experience. By adding a radically different strategy to the repertoire of heuristics, BACON's power of problem solving could be vastly enhanced. For example, by adding a symmetry heuristic, BACON re-discovered Snell's law of refraction. However, not all laws are quantitative. Programs such as GLAUBER, STAHL and DALTON could discover qualitative laws (Part III of [104]).

The thinking process of BACON is not fundamentally different from what a student does when he or she learns a neat trick that was invented and once utilized by a past master. Once the student becomes familiar with the trick, he or she can then apply it to similar situations without having to re-learn the same trick (in disguise) all over again. This ability to transfer a learned trick is what Michael Wertheimer expected an "understanding" computer to master (transfer of learning). Here, I must point out that some modern biomedical students might not meet Michael Wertheimer's expectation. For example, students were instructed to plot experimentally collected biochemical kinetic data on semi-logarithmic paper, so that if the plotted data points exhibited a straight line, they could conclude that the decay process followed first-order exponential decay kinetics. More than once, I witnessed students who had been considered good students, according to their grade performance, inadvertently reverse the roles of the two axes, thus performing at a level worse than BACON. It is safe to say that these students did not understand the prescribed procedure. In any case, to say that BACON could not think but our dumb high-achievers could think would appear to be hypocritical and would reflect our anthropocentric bias. Therefore, my honest answer would be: yes, BACON could "think," sometimes, if not always, better than our dumb high-achievers did. The quotation marks reflect my reluctance to use the word; I would prefer to use the word "perform". So, I would say that BACON could perform intellectually at a level better than rule-based but not quite at the full-fledged picture-based level.

So far, the examples demonstrated that computer programs could re-discover what had already been discovered in science. This kind of creativity is what Boden [22] referred to as P-creative, or psychologically creative. In programming BACON, investigators used insights gained into past discoveries of known physical laws to construct the basic heuristics, thus inadvertently tipping off the computer regarding the secret. A program that could discover something that had never been discovered by any human being, living or dead, is said to be H-creative, or historically creative. In the latter case, no hindsight of the law-to-be-discovered could possibly be incorporated into the heuristics. Note that the bar had been raised. But sure enough, such programs indeed existed. Boden cited an ID3 program that had discovered a chess-playing strategy for winning an endgame that was not known to any human experts [22].

Essentially, the programmer designed the heuristics for BACON on the basis of existing knowledge about how the problems had been solved historically. Boyle was historically creative in discovering the law that bears his name. Nowadays, every competent scientist knows how to examine the relationship between two experimental variables by first checking whether they bear any relation of direct or inverse proportionality, as well as any logarithmic or exponential relation. In fact, all well-trained scientists learn these neat "tricks" devised by past masters. Mastering these techniques does not make them creative. That does not mean one cannot invoke time-honored heuristics to discover something new. Certainly, there must be some unexplored areas of science in which invoking these known heuristics may be sufficient to make a new discovery. However, the discoverer cannot expect to be honored like a modern-day Boyle, because the discovery will then be classified as "me-too" creativity. Essentially, the bar has been raised since Boyle made his discovery. High creativity nowadays must also include finding a new, breakthrough type of strategy or approach in addition to making a novel discovery. We shall make reference to this raised bar in evaluating computer-based creative problem solving.

Now let us apply the new standard to evaluate Simon's programs. Simon must be considered remarkably creative to have conceived the idea of converting past masters' winning insights into heuristics. But Simon's programs are to be rated as very good copycats rather than historically creative. It is not surprising that these computers might outperform human beings because of their speed, memory capacity, stamina and patience and, last but not least, because they had uniformly mastered the pooled insights of many past masters. It was remarkable but not too surprising that the Logic Theorist could find a new proof that is more elegant than Whitehead and Russell's original. In solving problems, humans tend to settle for the first satisfactory proof instead of finding the best or the most elegant proof, since humans invoke heuristic searching instead of exhaustive searching. We must not forget that the Logic Theorist still performed systematic searching within the chosen search space; it could afford to find not only an acceptable proof but also the most elegant version.

In spite of the luxury of speed and memory capacity, a digital computer did not always resort to exhaustive searching within the search space specified by heuristics. There existed situations in which even systematic searching within a heuristically chosen search space was impractical. In the historic 1996 match between IBM's Deep Blue and Garry Kasparov [52], Deep Blue deployed 192 processors in parallel, along with clever heuristics designed by a group of experts specializing both in chess and in computer-based problem solving. Even so, Deep Blue could not afford to resort to exhaustive searches except perhaps near the endgame phase. In general, Deep Blue searched to a depth of 13 plies; within the specified depth the searches were systematic, but overall the searches were nowhere near exhaustive. Keep in mind that chess is a purely rule-based game. However, the player still needs to invoke visual thinking in order to keep track of the entire game and to perform heuristic searches. Likewise, it was certainly quite impressive, but not absolutely surprising, for ID3 to come up with an endgame strategy unknown to any human experts. Endgames are where exhaustive searching would still work, but perhaps not midgames. Devising novel strategies heretofore unknown even to the programmer and discovering a heretofore-unknown natural law would be an entirely different story, because discovering novel natural laws is not a rule-based game; exhaustive searching would certainly lose its edge. Again, the bar has been raised, but I never dare to trivialize the accomplishments of the pioneering work done in computer-based creative problem solving.

Regardless of their impressive performance, Simon's programs were not as creative as the scientists from whom the computer (or rather, the programmer) had derived their inspiration, if we place a greater emphasis on finding the winning strategy than on the actual work performed. Even so, the computer programs could be construed as more capable than some, if not all, past creators individually, simply because of the effect of pooling all past insights. Nevertheless, the computer programs were still copycats, albeit very clever copycats, rather than geniuses, by definition. In brief, these problem-solving programs could perform creative acts without being creative. The creativity must be attributed to the AI pioneers who designed them. Ironically, these pioneers would prefer to bestow the title of genius on their brainchildren instead.

It must be pointed out that all the heuristics implemented in BACON and similar programs are not a priori programmable but rather a posteriori programmable: programmable only with the aid of hindsight. That is, someone (either past creative scientists or the creative programmer) must have discovered the heuristics ahead of time. Of course, the computer can discover new heuristics by means of recombination of old ones. However, creative human beings can devise radically new heuristics that cannot be derived from old ones by recombination (what Kuhn called paradigm shifts [114]). Since the programmer has no way of knowing what new and radically "revolutionary" heuristics are to appear in the future, these heuristics are not a priori programmable. Of course, we should not overlook the possibility that the programmer himself or herself could be a genius, and thus capable of designing heuristics of a paradigm-shift nature. The inherent unpredictability of which heuristics will turn out to be appropriate will be made clear by a few examples.

The first example is the discovery of recursive rules underlying an infinite sequence of alphabetic symbols, cited in Simon's 1973 article [115]. Simon demonstrated that the recursive pattern could be discovered efficiently (heuristically) by programming the computer to examine the relations of "same" and "next" between symbols that are not too far apart [115]. Simon's point is well taken. Still, he designed the heuristics by taking advantage of his prior knowledge of how this type of problem could be solved. He could hardly have programmed for future, unknown types of similar problems. To make my point clear, let us consider a few more examples whose rules of construction are easy to conceive but considerably harder to discover. The selection rules are the presence or absence of a certain feature of the symbols. For notational convenience, the sequences are made finite by first listing those with the designated feature and then listing those without the feature. They can certainly be presented as infinite sequences, by repeating each finite sequence infinitely many times, without compromising the arguments to follow:

• (A, E, F, H, I, K, L, M, N, T, V, W, X, Y, Z) vs. (B, C, D, G, J, O, P, Q, R, S, U)

• (A, B, D, O, P, Q, R) vs. (C, E, F, G, H, I, J, K, L, M, N, S, T, U, V, W, X, Y, Z)

• (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, R, S, T, U, W, Y, Z) vs. (Q, V, X)

The selection rule in the first sequence is whether the letters are constructed exclusively with straight-line segments or with both line segments and curves. In the second sequence, the letters are grouped according to whether or not they contain one or more (topologically simply connected) enclosed areas. In the third sequence, the grouping is based on a single criterion: presence or absence in the Polish alphabet (the Polish alphabet does not include "Q", "V" and "X"). Undoubtedly, stranger and more obscure selection rules can easily be conceived to construct additional examples with increasing degrees of difficulty in decoding. The above three rules cannot readily be discovered by examining the relations of "same" and "next" between symbols alone, and certainly not by considering their relative positions in the alphabet. This type of problem seems suitable for attack by building a decision tree by means of an ID3-like strategy. The ease or difficulty of solving this type of brainteaser depends largely on how the problem solvers prime their minds, because there are too many diverse attributes to consider (that is why such examples are easy to construct but hard to solve). Unless the programmer primes the computer with the appropriate heuristics (a posteriori programmable), the computer is likely to remain clueless in solving this type of problem. For human problem solvers, mathematicians are more likely to solve the first two sample problems than non-mathematicians, because of the former's familiarity with features of curves, lines and other geometric objects, and with the topological concepts of simple and multiple connectedness. The third sample problem has little to do with creative thought but rather with peculiar knowledge of the Polish alphabet. However obscure a selection rule may be, the rule, once known, can be included in the repertoire of heuristics, thus expanding the searching capability. But then again, it is a posteriori programmable.
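The point about a posteriori programmability can be made concrete with a sketch. In the Python fragment below, the split of the first sequence is recovered only because the relevant attribute (drawn with straight segments only) has been hand-coded by the programmer in advance; the attribute table is invented and deliberately minimal.

```python
# Illustration of a-posteriori programmability: the split in the first letter
# sequence above is recoverable only if the programmer has already supplied the
# relevant attribute. The attribute table below is hand-coded (and partial);
# without it, the machine has nothing to search over.

STRAIGHT_ONLY = set("AEFHIKLMNTVWXYZ")   # letters drawn with straight segments only

def candidate_attributes():
    # The repertoire of attributes is fixed in advance by the programmer.
    return {
        "straight_segments_only": lambda ch: ch in STRAIGHT_ONLY,
        "comes_before_M": lambda ch: ch < "M",          # an irrelevant attribute
    }

def explains_split(group_a, group_b, attribute):
    """True if the attribute is uniformly true on one group and false on the other."""
    return all(attribute(c) for c in group_a) and not any(attribute(c) for c in group_b)

group_a = list("AEFHIKLMNTVWXYZ")
group_b = list("BCDGJOPQRSU")
for name, attr in candidate_attributes().items():
    if explains_split(group_a, group_b, attr):
        print("selection rule found:", name)   # found only because it was pre-coded
```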

There is another point to be made with regard to a posteriori programmability of heuristics. The success of BACON, which rediscovered Boyle′s law and Kepler′s third law, depends on the fact that these natural laws describe a simple mathematical relation between two variables: direct or inverse proportionality of the variables or powers of the variables. These mathematical laws are sometimes referred to as numerical laws. Oreskes and co-workers [116] discovered that numerical laws are not unique. Actually, the discovery of Oreskes and co-workers was a direct consequence of Popper′s more fundamental discovery in 1934 [10], since the conclusion of Oreskes and co-workers could be derived, in a single-step logical deduction, as a special case of Popper′s more general formulation of the falsifiability argument.

From an alternative perspective, the approach of fitting data with simple mathematical functions is also known as curve fitting. In the heyday of biophysics, prior to the boom of molecular and cellular biology, some otherwise competent scientists believed that computer-aided curve fitting was the third branch of science, in addition to theoretical and experimental investigations. Actually, curve-fitting activities do not constitute a third branch of science; they are rather applications of a very useful mathematical technique. The success of mindless (or brainless) curve fitting is guaranteed by a well-known mathematical theorem. Weierstrass's Approximation Theorem [117] proclaims that any continuous curve can be approximated by a polynomial function of a sufficiently high degree to any desired degree of accuracy. It is thus possible to run a polynomial curve through any set of data points with any desired degree of accuracy. This is a common practice in multiple regression analysis. The time course of a signal can be represented by an infinite power series:

f(t) = a0 + a1t + a2t^2 + a3t^3 + ...,

where a0, a1, a2, a3, … are constants determined by the curve-fitting algorithm. However, this expansion of f(t) is by no means unique. For example, f(t) can be expanded as an infinite series consisting of orthogonal polynomials, such as the Legendre polynomials and many others. Under a fairly general formulation, a continuous function f(t) can be treated as a vector in an infinite-dimensional space, in which the power functions, or the orthogonal polynomials, form a set of basis vectors for the coordinate system. Just as the Cartesian system is not the only coordinate system for ordinary vectors, the power series expansion is not the only way to express an arbitrary continuous function. Whereas the possibility of fitting the time course of a signal to a power series of the form shown above is always guaranteed, the physical meaning of the associated parameters a0, a1, a2, a3, etc. is not guaranteed. In fact, the non-uniqueness of numerical models, discovered by Oreskes and co-workers, is also an expected consequence of the non-uniqueness of curve fitting, especially if the data contain noise that allows for greater fault tolerance. The non-uniqueness of curve fitting, as well as the non-uniqueness of numerical models, creates a problem for computer programs similar to BACON: how to evaluate the merits of different numerical models?
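The non-uniqueness argument is easy to demonstrate numerically. The sketch below, using synthetic noisy data with an invented decay constant and noise level, fits the same time course with an ordinary power-series basis and with a Legendre-polynomial basis; both representations fit about equally well, so the fitted coefficients carry no unique physical meaning.

```python
# Sketch of the non-uniqueness argument: the same noisy time course can be fitted
# about equally well with an ordinary power series or with a Legendre-polynomial
# series, so the fitted coefficients a0, a1, ... carry no unique physical meaning.
# Synthetic data; decay constant and noise level are invented.

import numpy as np
from numpy.polynomial import Polynomial, Legendre

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
signal = np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)   # noisy decay

power_fit = Polynomial.fit(t, signal, deg=5)      # power-series basis
legendre_fit = Legendre.fit(t, signal, deg=5)     # Legendre basis

for name, fit in [("power series", power_fit), ("Legendre series", legendre_fit)]:
    rms = np.sqrt(np.mean((fit(t) - signal) ** 2))
    print(f"{name:>15s}: rms residual = {rms:.4f}")
# Both residuals are essentially the same; only the representation differs.
```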

Gauch [118] pointed out that the accuracy of predictions made by a numerical model initially increases with increasing complexity of the model, then levels off and reaches an optimum, which he aptly called Ockham's Hill, after which it decreases with further increases in complexity. This is because complex models tend to overfit the data and begin to capture the features of the noise instead. The heuristics of BACON (e.g., testing fractional powers) were apparently set up with Boyle's law and Kepler's third law in mind; the latter contains a fractional power dependence. As emphasized earlier, it was an after-the-fact strategy inspired by previous creators' discoveries (a posteriori programmable). In principle, it is impossible to include all known functions and their recombinations in the repertoire or database, because there are infinitely many of them.
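Ockham's Hill can likewise be illustrated with synthetic data: as the polynomial degree (standing in for model complexity) increases, the error on held-out data first falls and then rises again as the model begins to fit the noise. The data, noise level and degrees below are arbitrary choices made purely for illustration.

```python
# Illustration of "Ockham's Hill": predictive accuracy on held-out data improves
# with model complexity up to an optimum and then degrades as the model starts
# fitting the noise. Synthetic data; polynomial degree stands in for complexity.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 40)
true_y = 1.0 - 2.0 * x + 0.5 * x**3
y = true_y + 0.05 * rng.standard_normal(x.size)

train, test = slice(0, None, 2), slice(1, None, 2)   # interleaved train/test split
for degree in (1, 3, 7, 12):
    coeffs = np.polyfit(x[train], y[train], degree)
    test_err = np.sqrt(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    print(f"degree {degree:2d}: held-out rms error = {test_err:.4f}")
# The held-out error is typically smallest at a moderate degree and grows again
# for needlessly complex models (exact numbers depend on the noise realization).
```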

In addition to complexity, there are other factors that affect the quality of a mathematical model. Gauch further pointed out that predictions in terms of extrapolation make a model more convincing than predictions in terms of interpolation. In a geographic metaphor, Gauch meant that experience in navigating a portion of the Mediterranean coast is fairly reliable for predicting the navigating conditions elsewhere along the Mediterranean coast, but it is less reliable for predicting the navigating conditions along the African coast. Gauch also presented examples showing that models making sharp predictions are of better quality than models making broad predictions. Likewise, a far-fetched prediction, such as the bending of starlight by the gravitational pull of the Sun, was a stricter test of the Theory of General Relativity than run-of-the-mill predictions. A supporter (an engineer by training) of the theory of intelligent design claimed that the theory had predictive power: unknown functions of genes, presently considered to have no function, would be revealed in the future. Surely, I do not even need a theory to make the same broad prediction!

Note that humans' subjective judgment began to intrude into the evaluation of the goodness of mathematical models. Objective judgment in terms of root-mean-square deviations and the like does not seem to be sufficient. Subjective judgment also surfaces when one evaluates the simplicity or complexity of a mathematical model. Two additional examples suffice to illustrate the subtlety of the issue.

Chalazonitis and co-workers [119,120] discovered that, under a homogeneous magnetic field of about 10 kG, isolated frog rod photoreceptors in an aqueous suspension rotated and lined up in the direction of the applied field. Shown in Figure 3A are the measured time courses of the rotation of three such rod outer segments. A superficial inspection of Figure 3A tends to suggest fitting the data with a portion of a sine function, which is also the simplest one to select. The word "rotation" also suggests trigonometric functions as appropriate fitting functions. However, other relevant considerations must also enter. In the end, the data were fit with an obscure function [121,122], perhaps too obscure to warrant inclusion in BACON's database (Figure 3B):

(The equation is not reproduced here; as explained below, it predicts that ln tan Θ is a linear function of time t. See [121,122] for the explicit form.)

Figure 3: Rotation of three isolated frog rod outer segments (visual photoreceptors) of comparable sizes, suspended in water (Ringer's solution), under the influence of a homogeneous magnetic field of 10,000 Gauss (the Earth's magnetic field is about 0.5 Gauss). A. Original data, displayed as orientation angle θ vs. time in seconds. B. The same data, displayed as ln tan θ vs. time. (A. Reproduced from [120]; B. Reproduced from [121]).

In that equation, t is time; Θ0 and Θ are the angular orientations of the photoreceptor rod at time 0 and t, respectively; H is the applied (constant and homogeneous) magnetic field; and the remaining symbols are physical constants that do not directly concern us here (the summation Σ is performed over the index i). It suffices to say that, according to that equation, plotting the (natural) logarithm of the tangent of the angular orientation Θ as a function of time t yields a straight line. The differentiation between the two mathematical models (in terms of either ln tan Θ or sin Θ) was not based on goodness of curve fitting. Instead, consideration was focused primarily on the viability of a physical model consistent with established knowledge of electromagnetic theory. The search for the correct model was performed at the physical level rather than at the numerical level.

Based on the description of Chalazonitis and co-workers' experimental observations, several conceivable physical mechanisms could quickly be ruled out, much as a physician uses differential diagnosis to rule out unlikely causes of an illness [123,124]. Once the most likely physical mechanism was identified, the above equation, which was needed for verifying the hypothesized physical mechanism, could then be deduced in a straightforward manner by means of purely rule-based reasoning; all the rules for converting the physical model into the corresponding mathematical model could be found in classical electromagnetic theory. We did not bother to find a physical model that describes a time course following a sine function. The choice between the two ways of curve fitting was based on the consistency of the model with a larger body of established knowledge rather than on objective evaluation of the respective goodness of fit. In other words, even though a numerical model built on sine functions appears simpler than the obscure function that was selected, the latter is connected to a conceptually simple physical model. In this way, complexity of the formula was traded off for a more robust and simpler physical model. This example suggests that future problem-solving computer programs should be able to "think" at different hierarchical levels and to make subjective esthetic judgments, which are often euphemized in the physical sciences as "elegance".
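The model-discrimination step described above can be mimicked in a few lines. Assuming, purely for illustration, that the orientation data follow the stated prediction that ln tan Θ is linear in time (with an invented rate constant and sign), the sketch below shows that the ln tan Θ representation linearizes the data whereas a sine representation does not; it is not the authors' actual analysis code.

```python
# Sketch of the model-discrimination step: the chosen physical model predicts
# that ln(tan(theta)) falls on a straight line in time, whereas a naive sine-like
# description does not linearize the data. Synthetic data standing in for the
# rod-orientation measurements; the rate constant k and its sign are invented.

import numpy as np

k = 1.2                                    # illustrative rate constant (arbitrary units)
t = np.linspace(0.0, 3.0, 30)
theta0 = np.deg2rad(80.0)
theta = np.arctan(np.tan(theta0) * np.exp(-k * t))   # orientation consistent with the model

def straight_line_residual(y, t):
    """Residual of a linear fit: small means the representation is linear in t."""
    coeffs = np.polyfit(t, y, 1)
    return np.sqrt(np.mean((np.polyval(coeffs, t) - y) ** 2))

print("ln tan theta :", straight_line_residual(np.log(np.tan(theta)), t))  # essentially zero
print("sin theta    :", straight_line_residual(np.sin(theta), t))          # visibly larger
```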

The second example is the epic rivalry and confrontation of Ptolemy′s geocentric model and Copernicus′ heliocentric model of the planetary system. More appropriately, the two mathematical models should be called geometric models instead, though both models gave rise to numerical predictions. Obviously, the geometry of the respective models was suggestive of two contesting versions of physical reality: geocentric or heliocentric. That was why the church-establishment took a keen and life-threatening interest in the topic! The main data, which were invoked to evaluate both models, were observations of planets′ apparent trajectory on the celestial hemisphere, in particular, the looping motion of outer planets. Incidentally, both models adopted circular orbits for planets, as once dictated by Plato.

Modern students, who have learned about both models, tend to trivialize Ptolemy's model as a poor creative act. That is because we have been brainwashed by schooling, and we inadvertently render an a posteriori judgment. A proper and historical way to evaluate the two models would be to take an a priori view: we must try to forget the presently accepted answer entirely and imagine ourselves being confronted with the puzzling looping motion of planets as one of the few available clues. We must not forget that Ptolemy's model reigned supreme for a whopping 1,500 years! Ostensibly, Copernicus offered a better and more rational explanation than Ptolemy's theory did. Still, it took several centuries to settle the dispute. Superficially, we tend to think it was because the church establishment was in the way, but actually it was more complicated than that. As far as predictions were concerned, one model was as good as the other in explaining the old and less precise astronomical data. Even so, both models relied on corrections to force a good fit; according to Kuhn's account [125], both Ptolemy's and Copernicus' models deployed up to 30 minor epicycles, eccentrics and equants (Kuhn called them ad hoc devices) in order to attain an acceptable level of performance. Taken together in a holistic view, neither model was better than the other. Johannes Kepler was fortunate to inherit Tycho Brahe's precision observational data on planetary motions. On top of that, Kepler had to give up the patch-up work of adding more epicycles and other ad hoc devices. Instead, he opted to jump out of Plato's box and replaced circular orbits with elliptical orbits for the planets. Just like piecing together a jigsaw puzzle with a snapping action, all the pieces thus fell into their proper places. In hindsight, Copernicus' model is more robust than Ptolemy's model because the former eventually inspired Newton to propose his universal law of gravitation. Again, the robustness argument is an a posteriori judgment. If a digital computer program were to decide between the two models on purely geometric or numerical grounds, it would be hard to predict which one would win.

Kuhn [125] thought that subjective judgment based on naturalness, neatness and coherence, or even harmony, had entered into consideration, and he lamented that these factors were not debatable. As a veteran curve-fitter, so to speak, I introduced, in another battleground, an additional point to be considered in such debates [126]: fudge factors, or what Kuhn called ad hoc devices. The primary epicycles must be regarded as legitimate constructs because they are the key element of Ptolemy's model. In contrast, the minor epicycles, eccentrics and equants were deployed in both Ptolemy's model and Copernicus' model to force an acceptable fit with observational data, thus constituting fudge factors with no physical meaning. In this way, the judgment would not be purely subjective [42].

From the above discussion, we can tentatively conclude the following. By pooling together the insights of past creators, it is possible to expand the repertoire of heuristics. However, because of the inherent ambiguity in judging the goodness of fit of various proposed mathematical models, heuristics must be expanded to include insights at the physical level. With the exception of clear-cut cases, differentiation sometimes requires pitting explanatory power against predictive power. Some degree of subjectivity may have to be included in the overall holistic judgment. Central to the issue of explanation is the power of demystification. Demystification is not just for humans' emotional satisfaction; it also points to the direction of finding a more robust model. Fudge factors may enhance a model's predictive power, but that predictive power is achieved at the expense of demystification.

For future development, it is desirable to design machines that can convert newly acquired knowledge (or information) into new heuristics (machine insight). The ID3 programs and their successors, C4.5 and C5.0 [127-129], opened a new avenue: the ability to extract new insights from the examples used for machine training. It was a step in the right direction. Earlier, we pointed out that a major distinction between rule-based reasoning and picture-based reasoning is the latter's ability to deal with grayscale in making judgments. The development of machine intelligence did follow this desirable direction. In view of the mandatory digital environment, it would be difficult, if not impossible, to completely shake off the binary nature of decision making in the digital world. ID3 programs started with attributes of a discrete nature and admitted only yes or no in building a decision tree. The inclusion of several different attributes was an attempt to add a grayscale to decision making: a decision was no longer based on a single type of attribute. The more advanced C4.5 programs began to handle data with continuous attributes but still used thresholds to dichotomize the attributes. There seems to be an enormous price to be paid for staying in a digital environment.
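The thresholding idea attributed above to C4.5-style programs can be sketched as follows: a continuous attribute is dichotomized at the cut point that maximizes information gain, with candidate cuts taken midway between adjacent sorted values. The toy temperature data are invented; this is an illustration of the idea, not Quinlan's code.

```python
# Sketch of threshold-based dichotomization of a continuous attribute: try
# midpoints between adjacent sorted values and keep the cut with the largest
# information gain. Toy data; an illustration, not Quinlan's C4.5 code.

from math import log2
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return the cut point on a continuous attribute with the largest gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_cut, best_gain = None, -1.0
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [lab for v, lab in pairs if v <= cut]
        right = [lab for v, lab in pairs if v > cut]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_cut, best_gain = cut, gain
    return best_cut, best_gain

temperatures = [64, 65, 68, 69, 70, 71, 72, 75, 80, 81, 83, 85]
play         = ["yes", "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "no", "no", "no"]
print(best_threshold(temperatures, play))   # e.g. a cut in the high 70s
```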

On the other hand, we are presently limited by contemporary mathematics, which can only deal with sequential processes. Many biological phenomena, including high creativity, involve highly integrated and massively parallel processes. A group of investigators began to explore future possibilities of developing mathematics better suited for biological processes, a type of biologically inspired new mathematics, so to speak [130,131]. In particular, creative processes demand a kind of new mathematics that can effectively handle highly integrated and massively parallel processes. It is our hope that this endeavor will eventually yield mathematics that can handle what are presently considered to be non-algorithmic processes. The application of this future mathematics to problem-solving computer programming is obvious. There must be room for major improvements in the hopefully not-too-distant future.

General Discussion

In spite of major advances made in neuroscience in the past century, the enigma of human creativity remained enigmatic at the end of the past century. Creativity research differs from other science disciplines in a unique aspect. Although it is not exactly a classical chicken-or-egg problem, there is a superficial resemblance. In principle, understanding human creativity could offer help in solving the enigma itself. However, one needs to open the box to retrieve the key that will open this very box. The only way out of this dilemma is to crack the box without the key initially. Thus, one needs a little luck to stumble upon a clue to get a head start. After that, one could do something like bootstrapping in starting up a digital computer: taking advantage of the initial insight unleashed by the initial luck to get additional clues, and then starting a recursive process. However, the initial clue was hidden in an unlikely place. The obvious places to look for it were: 1) secrets revealed by creative individuals and 2) understanding of the pertinent brain mechanisms of thinking processes. The first source did not work out too well because creative individuals knew how to create but did not necessarily know how to be creative, as indicated by the famous quotation of Gauss. Why the second source was not helpful deserves a serious comment.

Conrad was among the first to point out that there are several hierarchical levels of biological information processing [132]: from the molecular level, through the intercellular level, to the systems level [133]. In view of the complexity of the human brain, there are additional sublevels of information processing in the neural networks of the brain, just like the government bureaucracy. Regarding the problem of cognition, questions can be posited at these different levels. A comprehensive understanding of human cognition requires understanding at all these levels. Metaphorically, the enigma of cognition consists of secrets contained in a series of boxes arranged hierarchically: a box inside another box inside yet another box, etc., just like a Russian doll. However, this box-in-another-box metaphor is an imperfect one, since it is possible to open the inner boxes without first opening the outer boxes, but the key for an inner box may or may not help open outer boxes.

In the era of molecular and cellular biology, some cognitive scientists in pursuit of creativity research focused on seeking findings at the molecular and cellular levels. For example, brain imaging, based initially on positron emission tomography (PET) and subsequently on functional magnetic resonance imaging (fMRI), was a popular approach to studying cerebral lateralization and other brain functions. I do not doubt the value of such approaches for elucidating fundamental brain mechanisms, but they left a lot to be desired if one was aiming at elucidating the enigma of human creativity, mainly because creativity is a holistic process that appears to be beyond the reach of a reductionist approach. Let us examine these approaches from the vantage point of hindsight.

Let us begin a thinking process by reading the description of a novel problem. One must first invoke analog pattern recognition (parallel processing) to recognize the alphabetic letters and/or Arabic numerals of the written statement of the problem (see Sec. 16 of [42] on the controversial topic of whether recognizing alphabetic words, as opposed to hieroglyphic words, involves a sequential process or a parallel process). One then proceeds to comprehend the meaning of words and sentences by means of sequential and parallel processes for the syntax and the semantics, respectively. Once the meaning of the written description of the problem is understood, the third step is to begin to reason so as to find a hypothesis or to select a few potentially workable candidate solutions (the solution-generating phase). At this step, one has two options of reasoning: either picture-based reasoning (parallel processing) or rule-based reasoning (sequential processing). Note that events at three hierarchically distinct levels are involved: recognizing words, comprehending the problem, and reasoning in search of potential solutions. All these processes require parallel and/or sequential processing. The mixing of these three levels of events results in rapid switching between the two sides of the brain at a frequency that is certainly beyond the time resolution of present state-of-the-art brain imaging techniques. Besides, without detailed knowledge of the thought content, it would be impossible to differentiate between events originating from the three hierarchically distinct levels. The insights gained by means of brain imaging into the creative process are therefore rather limited. It is simply the wrong kind of data to shed light on the secret of geniuses' thought processes, much less to demystify the enigma. It is like judging a beauty by analyzing anatomical data alone without bothering to take a look at the beauty as a whole.

Here, I wish to make it clear that I do not imply that investigating molecular and cellular mechanisms is unimportant for understanding humans' cognitive processes. Quite the contrary. For example, how do the two anatomically similar cerebral hemispheres perform drastically different information processing (lateralization)? What other brain regions are involved, and in what way? How the two hemispheres extract different meanings from identical information is an enigmatic question no less intriguing than geniuses' secrets. These questions can only be answered by investigations at the molecular and cellular levels, and there is still a long way to go. On the other hand, understanding human creativity at the systems level allows investigators to ask the right questions at the molecular and cellular levels. An even-handed way of assessing the two approaches is the analogy of digging two separate tunnels from opposite ends of a mountain. We hope that the two tunnels will meet in the middle, forming a single "coherent" tunnel, instead of two different tunnels that miss each other. The complementary nature of the two approaches will become more and more apparent as the two tunnels approach each other from opposite directions.

Presently, beyond what we already knew about cerebral lateralization, additional detailed cellular and molecular mechanisms offered no further clue for finding the keys to the outer boxes, since the keys turned out to be hidden in unlikely places: artificial intelligence and education. The journey of pursuing these clues was a fascinating demonstration of the importance of defining the search space and of heuristic searching.

The most important clue is pattern recognition, which was first suggested by Herbert Simon in his 1988 Peano Lecture [39]. Once this clue is recognized, the ensuing bootstrapping process leads to the concepts of parallel and sequential processing, and then to heuristic searching. Interestingly, these are key concepts shared by two problems: demystifying the enigma and designing better problem-solving computers. Curiously, the concept of heuristic searching was conspicuously absent in the literature of psychology and cognitive science. Thus, seeking inspiration from artificial intelligence was tantamount to seeking clues outside the box. Along the journey, there were plenty of search space traps to lead us astray. The correct search space seemed a bit counterintuitive initially.

The designation of "enigma" primed our minds to search for mysterious explanations. But explaining a mysterious phenomenon in terms of other mystic terms, which were, in turn, explained by additional mystic terms, did little to demystify the enigma. In the end, the explanations were not mysterious at all. This means that the perennial attempts to search for mysterious explanations were tantamount to searching in the wrong search space.

Artificial intelligence offered useful clues, but it had its own traps waiting. Just as the machine metaphor helped the development of classical physiology, the brain-machine analogy naturally suggested a search path towards a creativity algorithm. Again, two unlikely clues suggested a way out of this search space trap. Rosen's classification of natural processes suggested that not all natural processes are algorithmic. In a way, Rosen also restored some respectability to soft (non-mathematical) sciences. This clue freed us from the inhibition against seeking a qualitative model of creativity (the inhibition stemmed from Rutherford's remark: "Qualitative is nothing but poor quantitative").

Before reaching the second clue, another box stood in the way: I shall call it the "research vs. teaching dichotomy box". In the modern university culture, the practice of separating research from teaching was an unintended consequence of the government's research funding policies. Besides, creativity and education do not mix, superficially speaking: education is for the mass public, whereas creativity seems to be reserved for a minority of elites. The teaching classroom was perhaps the least suspected place where the missing link (i.e., the dumb high-achiever) was to be found.

The search for this missing link also led to a surprising finding: instructing students to emulate geniuses' thinking process significantly helped their learning. After all, creativity and education do mix! Furthermore, in less than two minutes, the same instruction converted a student who had no clue about a brainteaser into a new person who solved the problem without any sign of struggle. The observation had a subversive effect: it undermined the widespread belief that the IQ score of a given individual is a life-long constant. It was almost too good to be true, but it was true. This anecdotal evidence is of course no proof of the claim. Nevertheless, the claim is a falsifiable one. Readers are invited to test the anecdotal proposition, especially in a classroom setting, since young students are less resistant to an unproven method. Just ask the students to convert what they already know into mental pictures (mental imagery) and to look for clues. It is as simple as that.

Speaking of proof, the mere mention of it conjures up the specter of medieval nightmares as well as the specter of a burning stake, which threatened the life of whoever dared to assert a scientific belief against the then-establishment's doctrines. In the end, the Renaissance brought about the public consensus that only objective proof counts, regardless of the sex, race, creed and nationality of the proponent.

However, objectivity is both a blessing and a curse. It worked just fine to get us out of the dark age, since truths were no longer decided by the establishment alone without at least an effort to convince the rest of us in rational terms. Logic and experimentation could override the opinions of authority figures. Most novice investigators were told that the scientific method starts with the proposal of a hypothesis, which can be invoked to make predictions about natural phenomena [4]. In reductionist sciences, the prediction is usually about the effect of a single factor or a small number of independent factors. One then designs experiments to test the validity of the predictions. The experimental samples are divided into two groups, with the presence or the absence of the factor being investigated, respectively. Because of measurement noise, the measured data must be certified by standard statistical methodology. In order to uphold the hypothesis, a statistically significant difference must be demonstrated between the experimental group (with the factor) and the control group (without the factor). Additional care is taken to eliminate human biases, such as double-blind tests with placebo assignments unknown both to the experimenters and to the test subjects, thus eliminating subjective biases that can lead to a false-positive or a false-negative effect. These precautions are particularly relevant for the evaluation of drug effects. This practice of the scientific method worked well in general for reductionist sciences. However, this so-called scientific method contains a number of pitfalls, to which complex systems are particularly vulnerable. In particular, creativity research seems to be plagued with more fallacies than other scientific disciplines, for good reason.
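
To make the procedure concrete, here is a minimal, hypothetical sketch in Python (the data are synthetic and all numbers are invented purely for illustration): a two-group comparison certified by a standard significance test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical (synthetic) measurements: an experimental group exposed to
# the factor under study and a control group without it.
experimental = rng.normal(loc=5.4, scale=1.0, size=30)
control = rng.normal(loc=5.0, scale=1.0, size=30)

# Standard two-sample t-test; the difference is "certified" only if the
# p-value falls below the chosen significance level (commonly 0.05).
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("statistically significant" if p_value < 0.05 else "not significant")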

In complex systems such as human behavior, one seldom encounters only a single factor. Worse yet, when multiple factors are involved, they are usually not independent but mutually interacting. It is well known that experimental results in behavioral science are notoriously theory- or model-dependent [134]. Simply put, it is possible for a misguided hypothesis to suggest experiments that happen to collect flawed data which nevertheless appear to prove the hypothesis, much like a self-fulfilling prophecy.

Popper's falsifiability argument has a far-reaching consequence of which a significant number of reductionist scientists were unaware: there is no such thing as absolute scientific proof. In arbitrating among several rival scientific theories, one merely eliminates the weaker alternatives. In simple systems, this did not generate too much trouble, presumably because any flaws so derived were easily spotted. It was an entirely different story in investigations of complex human behaviors, such as creativity and learning. Unlike in simple systems, parametrization (i.e., selecting relevant measurable variables) is by no means straightforward. A lack of awareness of Popper's argument was potentially asking for trouble. In the previous section, we pointed out that improper definitions could lead to hardening of the categories as well as sample heterogeneity. Here, we shall consider another pitfall, because of its high frequency of occurrence in creativity research and because of investigators' addiction to statistical methodology.

Strictly speaking, the common practice of statistical certification of experimental data is merely a consistency check; it is neither absolute proof nor a confirmation of a cause-effect relationship. To be blunt, the success of a consistency check is merely a failure to demonstrate any discrepancies. If we take Popper's falsifiability argument seriously, it means only that the given hypothesis survives a test; it does not guarantee survival of future challenges. It is common knowledge, even outside the science community, that correlation is not causation. Sometimes statistical correlations let one discover a superficial but less relevant, or even irrelevant, factor that happens to be coupled, via an indirect route, to a hidden but more relevant factor. For example, the finding that knowing less is correlated with a better chance of solving a difficult problem merely means that knowing less is a side effect of being an outsider, who is also immune to dogmas. Immunity to dogma is the relevant but hidden cause, whereas knowing less is a coupled superficial factor. Those who wish to get a humorous appreciation of this lesson may search Google for the following keywords for a 1-minute YouTube video clip: Italian time, Italiantime, or Le palle del ciuchino. This humorous cautionary tale ought to be a wake-up call for addicts of statistical methodology: the peril of committing a sin of subjectivity in the name of objectivity should not be overlooked. Let us take the popular technique of brainstorming as an example.

Brainstorming is a technique proposed by Osborn in 1958 [135] to enhance group creativity. Brainstorming differs from other types of group activities in that certain rules must be followed. The cardinal rule is the "absence of criticism and negative feedback". The corporate world embraced the approach. Similar lines of thinking led to a number of learning strategies in educational practice, such as small-group teaching and cooperative learning. Subsequently, a number of studies about brainstorming showed that it did not really work; sometimes brainstorming led to opposite effects, such as idea fixation [136]. However, Seelig, in her popular book inGenius: A Crash Course on Creativity [137], insisted that brainstorming is effective, and she blamed the detractors for not "understanding how different brainstorming is from normal conversation".

Regarding these disparate claims, I wish to offer my subjective opinion. Neither of the opposite claims is completely true. Brainstorming works sometimes but not all the time; it depends on whether the discussion leader has mastered the art of harnessing the unknown underlying causes. One should never overlook the possibility that some superficial factors, such as the discussion leader's enthusiasm and certain undocumented maneuvers, might have a contagious and/or insidious effect, which would be unlikely to be duplicated by detractors or by someone with lukewarm enthusiasm. In my opinion, the special rules for conducting brainstorming sessions seem to be conducive to picture-based reasoning. In Boden's words, brainstorming "inclines without necessitating" picture-based reasoning. Thus, these conditions are superficial and, at best, secondary factors. The primary factor is hidden because an experienced expert does not suspect it but unconsciously knows how to implement it. In other words, experienced experts internalize the entire process without being aware of the detailed steps, just like a dancer who unconsciously knows how to execute the required sequence of muscle contractions by means of a holistic process called muscle memory in dance jargon. The speculated hidden factor is picture-based reasoning. The secondary or superficial factors are coupled to the primary hidden factor, but the coupling is not permanently inseparable. When the coupling is present, statistical evaluation tends to affirm the effectiveness of brainstorming. When the hidden factor is decoupled from the conditions implemented by brainstorming, the statistical evidence evaporates. On the other hand, brainstorming also has some unwanted side effects, such as encouraging conformity. It is well known that conformity is anathema to creativity [138,139]. When the main benefit of brainstorming vanishes, these side effects become the prominent feature. In any case, my speculation is readily falsifiable: just compare the effect of brainstorming with and without urging the trainees to deploy picture-based reasoning, in the test group and in the control group, respectively.

Visual thinking is not an unfamiliar term, since the lay literature is full of books directly or indirectly related to it. Buzan's mind map is apparently a practical but partial implementation of visual thinking [140]. Besides, visual thinking is an idea deeply ingrained in the English language. For example, we must use our imagination to figure out how geniuses made novel discoveries, we visualize their vague descriptions so as to come up with a better understanding of the underlying process, we piece together insights revealed by various creativity models to formulate a coherent explanation, and finally we have no intention of using the prevalence of dumb high-achievers to paint a gloomy picture of future education, since students can be trained to practice visual thinking. In the German language, it is even more direct and explicit, since intuition (Anschauung in German) also means visualization [20]. Why did experts fail to connect brainstorming with visual thinking? Why did neither Lieberman [17] nor Dane and Pratt [16] link the concept of intuition to visual thinking? Both groups came so close to, and yet remained so far from, the sensible interpretation. Why?

The culprit was the steadfast insistence upon objectivity to the point of excluding common sense. Common sense tells us that when we get stuck in a blind alley we must make a detour. Common sense also tells us that, if we could not find clues on the first attempt, we should pay attention to what we might have overlooked during the second attempt. In the present case, a possible way of getting out of the blind alley is to visit or revisit a previously abandoned portion of the search space. Hawkins could not overcome a problem of data input design in his PalmPilot project even after considerable effort; he decided to try a diametrically opposite idea and succeeded [81]. Similarly, ID3 programs invoke a recursive procedure to avoid getting trapped in a blind alley. Experts resolutely excluded Einstein's introspection from their chosen search space and sent it into permanent exile. After half a century's search, the solution was still not within reach. It would be sensible to assume that the verdict rendered on Einstein's introspection might have been premature.

It is true that the introspective reports of Einstein, Tesla, Mozart, etc., were all subjective in nature, because it was impossible to obtain their reports objectively. But must we study subjective experiences, such as creativity and learning, only by objective means? It is totally unnecessary to be so uptight because, as indicated by most 20th-century models of creativity (Table 1), a subjectively derived hypothesis still needs to be scrutinized and verified by means of rigorous logical reasoning and/or experimental observations. Besides, I have a sneaking suspicion that those experts who permanently excluded Einstein's opinion from their search space never had first-hand experience of visual thinking (remember that visual thinking is not a monopoly of geniuses). In a way, experts invoked their own subjective experience, or the lack of it, to overrule Einstein's subjective opinion. The rest of us are left with the choice of siding with the subjective opinions of either experts in creativity research or experts who practiced creativity. Needless to say, I chose to side with the latter and to worry about objectivity later.

[Table 1]

Thus, with rigorous verification as a safety net for objectivity, it is harmless even if a hypothesis is derived subjectively or even non-rationally. Considering the present case of deriving our hypothesis by means of a comparative study of earlier models of creativity, the practice by itself does not appear to be scientific, at least not in the conventional sense stipulated by Lawson [4] and Medawar [6]. But it is one way of deriving a hypothesis. Of course, it would not be difficult to derive the hypothesis in a didactic manner from the concept of pattern recognition, as was done elsewhere [42]. Most scientific articles are not expected to reveal how the hypothesis was derived unless fraud is suspected. Therefore, few would remember to question how those hypotheses had been derived. Some, like Gauss, were aware of their own ignorance of the clues. Others, such as Einstein, were aware of the role of visual thinking. The overwhelming majority of investigators thought that the hypothesis had been derived by logical deduction. The present article presents logical arguments demonstrating that it is almost impossible to derive significant hypotheses by means of logical deduction; those significant hypotheses that could be deduced by means of one-step logical deductions must have been exhausted a long time ago. Peirce was apparently aware of this difficulty in theories based on the hypothetico-deductive scheme, such as that of Medawar and others (including Lawson as a latecomer). His elusive idea of abduction could never shake off the image of being tainted with subjectivity. We thus came full circle and reached where Peirce was; Peirce's revised notion of abduction is actually picture-based reasoning in disguise. Whereas the method of deriving a hypothesis may not be logical (syllogistic), rational or scientific, it is the ultimate step of rigorous syllogistic verification that makes the entire process scientific. Statistical methodology cannot make a flawed inference scientific.

In the above critique of statistical methodology, I do not wish to give the impression that all correlation-derived hypotheses are flawed. Quite the contrary: some hypotheses so derived may turn out to be valid. In fact, statistical correlation can be construed as a way of heuristically searching for possible cause-effect relationships. For example, Guilford [141,142] identified divergent thinking as a character trait of creative individuals. This makes sense because the rigor and strictness of rule-based reasoning leave no room for divergent thinking, due to a severe lack of fault tolerance. Only picture-based reasoning allows for divergent thinking.

However, in applying statistical methodology for the purpose of heuristic searching, the standard approach familiar to biologists is not the only way. The standard methodology calls for the collection of a sufficiently large pool of samples prior to analysis. The Bayesian school approaches statistical problems from a different angle. Let us consider the fundamentals of statistics by counting head-tail distributions when one flips coins. The standard statistical approach is to flip many nearly identical coins at about the same time and then count the head-to-tail ratio; the resulting ratio is called the ensemble average. The Bayesian way is to flip the same coin over and over again and, at the end, to determine the ratio, called the frequency average. If the coin is honest (i.e., not illegally altered), both averaging processes reach the same conclusion of a 50-50 distribution. However, in formulating a hypothesis, the frequency-average approach is more effective and more efficient than the ensemble-average approach.
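
The contrast between the two schemes can be illustrated with a minimal simulation (a hypothetical sketch; the counts are arbitrary): flipping many coins once each approximates the ensemble average, while flipping one coin many times yields the frequency average, and for an honest coin both converge to 0.5.

import random

random.seed(1)

def flip():
    # One flip of an honest coin: 1 for heads, 0 for tails.
    return random.randint(0, 1)

# Ensemble average: many nearly identical coins, each flipped once.
n_coins = 10000
ensemble_average = sum(flip() for _ in range(n_coins)) / n_coins

# Frequency average: the same single coin flipped over and over again.
n_flips = 10000
frequency_average = sum(flip() for _ in range(n_flips)) / n_flips

print(f"ensemble average of heads : {ensemble_average:.3f}")
print(f"frequency average of heads: {frequency_average:.3f}")
# For an honest coin, both averages approach 0.5 as the counts grow.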

If rolling a die reveals the same point twice in a row, it may be a coincidence. If the same point comes up three times in a row, the probability that it is due to coincidence shrinks to (1/6)^3, i.e., 1/216. The likelihood that the outcomes are not a coincidence increases with the number of repetitions. The same approach can thus be used in the laboratory to detect possible unknown cause-effect relationships. After a few recurrences of the same correlation, the investigator can subjectively begin to formulate a hypothesis and make predictions, i.e., design a small test with the intent to falsify the preliminary hypothesis. In this way, one can revise the hypothesis concurrently, rather than afterwards, while the data are being collected. One can also easily uncover possible data heterogeneity by scrutinizing the incoming data as they are being collected. In contrast, it is much harder to detect sample heterogeneity in the ensemble-average approach when a huge pile of data has already accumulated. If there is any lingering doubt about the legitimacy of the Bayesian approach to formulating a hypothesis, Galileo's practice ought to put the doubt to rest.
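
A small arithmetic sketch (the function name is invented for illustration) shows how quickly the coincidence probability shrinks with each recurrence, which is what licenses the investigator to start hypothesizing after only a few repetitions:

from fractions import Fraction

def coincidence_probability(repetitions, faces=6):
    # Probability that a pre-specified face of a fair die comes up
    # 'repetitions' times in a row purely by chance.
    return Fraction(1, faces) ** repetitions

for k in range(1, 6):
    p = coincidence_probability(k)
    print(f"{k} identical rolls in a row: p = {p} (about {float(p):.5f})")
# Three rolls of a pre-specified face: (1/6)**3 = 1/216, as in the text.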

Galileo noted that the three little stars (the Galilean moons) formed a straight line parallel to the ecliptic on the first day of observation (January 7, 1610). He mentioned the observation a total of 19 times during a period slightly shy of two months (twice on January 22). When he mentioned it for the 9th time, on January 23, he treated it as a foregone conclusion by adding the clause "as they have always been". In hindsight, it was a consequence of the little stars orbiting Jupiter in an orbit that is nearly coplanar with the Earth's orbit. There are a total of 65 records in Sidereus Nuncius, which hardly constitute a sufficiently large sample to warrant a conventional statistical analysis. However, the repetition of the same relationship night after night within two months, corroborated by several additional lines of evidence, rendered it a foregone conclusion beyond reasonable doubt.

Of course, the Bayesian approach is not without shortcomings. It is apparently easier to cheat with the Bayesian approach: the cheater needs to alter only a single die or make a single coin dishonest, whereas it would take more time and effort to cheat with conventional statistics in mind. This may be why drug tests must be performed with the standard approach of double-blind tests: to prevent the investigator's intentional or unintentional selection of data, in view of the infamous practice of throwing out an "outlier" data point just to make the analysis look good. Nowadays, investigators must also disclose financial support from all commercial donors, just to make sure no hole remains unplugged. No wonder the Bayesian school carries the stigma of being subjective. For a scientist in pursuit of truth, cheating is less of a problem, since one must fool oneself before one can fool others. Therefore, the remaining problem is for the detractors to fish out any otherwise honest logical flaws, as I did in this article. This is why science must be practiced in an adversarial way, so as to maximize the chance of exposing blind spots, to which few can claim to be immune. In attempting to expose others' blind spots, one may inadvertently facilitate the exposure of one's own. On the other hand, in modern scientific practice, a hypothesis derived by means of the afore-mentioned Bayesian way often invites the criticism of being speculative. What the detractors fail to realize is that a hypothesis, by definition, remains speculative until it is verified.

The same kind of popular bias in scientific practice tends to treat "anecdotes" like illegitimate children. For example, Fleming's subjective report about his experience of serendipity was often dismissed as an anecdote, if not an outright lie. On the other hand, sporadic reports in support of brainstorming were not called anecdotes but instead called evidence-based observations. In the end, as it transpired, the evidence in support of serendipity was far more reliable than the evidence in support of brainstorming. I suspect that the real difference is this: anecdotes were derived from "folk psychology," whereas evidence-based observations had the blessing of experts. In other words, the real difference is between common folks and science aristocrats.

Ultimately, the merits of a theory must be evaluated in terms of explanatory and predictive power, as well as parsimony. Parsimony of the refurbished Simonton model is attained by a unified treatment of creativity in science, technology, the arts (including music) and the humanities (see Sec. 4.20 of [41] for a discussion of creativity in music, art and literary works). As for its explanatory power, we offered, in earlier sections, explanations of a number of long-standing puzzles. In addition, we have in hand a century's worth of observations and experiments to explain. So far we have not encountered a puzzle that we could not explain; whether the explanations are satisfactory is for the readers to decide. As for verification, the present rendition of the refurbished chance-configuration model certainly cannot make quantitative predictions, because it is a qualitative model. Predictions must be made in terms of whether adopting picture-based reasoning makes any difference in creativity [42]. Our anecdotal evidence indicated that it worked surprisingly well. Concerned readers will undoubtedly rush to try it out. In this way, a sufficient number of anecdotes may accumulate, thus rendering the observations evidence-based.

Summarizing Conclusions

Creativity research is a fascinating and important topic for several reasons. It is a topic about our inner self. However, unlike other topics about our body in biological research, it is a problem about how we solve problems; the self-referential nature is apparent. This might be why it remained an enigma for so long, in spite of relentless attacks by means of the modern arsenal of research techniques and instrumentation. It is hard to deny that it was creativity that brought about civilization. From the practical point of view, creativity research is especially important in the 21st century. It is almost a consensus that scientific and technological innovations are the primary factor driving the economy. Creativity is equally important in the humanities and in just about any conceivable human endeavor. Although it has never been explicitly stated, enhancing creativity has been one of the goals of education. The irony is that, although science and technology have been advancing at an increasing speed, education (at least in the United States) seems to be in deep trouble, despite repeated reforms.

The invention of digital computers must be considered one of the most spectacular achievements of human creativity. However, the success brought about an unintended consequence: information overload due to the information explosion. Indirectly, the resulting bottleneck adversely affects the so-called knowledge-based economy of the 21st century. Needless to say, human prosperity relies on computers to pre-process an ever-increasing amount of information (data mining) with increasing efficiency and effectiveness. We need machines that can detect meaning in raw data rather than just manage the chores of processing information at humans' direct command. Engineers are dreaming of a new kind of computing and analyzing facility, like the "world brain" imagined by H. G. Wells a century ago, that harnesses the wisdom of the crowd [143]. Creative problem-solving computers that manage to pool the collective creativity of past creators would do just that.

I once heard a luminary claim that scientific activities are similar to sports. Although not everyone agrees with this claim, our educational system did train students in a way similar to training athletes. The same can be said about training programs designed to enhance trainees' creativity. Regardless of innovations in methodology, creativity appeared to defy training. All failed methods share the same characteristic: pushing the process to the limit, just like training athletes. A notable exception to this rule is art. Art must be just right; any superfluous attempt to add or subtract destroys it. Unbeknownst to many experts, creativity itself is like art. Propp [144] once stated, "The ability to come up with creative approaches to problems can be cultivated, but it cannot be taught; it is more of an art than a craft". There is some truth in this statement, because it turned out that creativity cannot be achieved by pushing any step to the limit. However, our attempt to demystify human creativity has brought creativity itself from the status of art down to the status of craft. As a consequence, one can learn to be creative through training and practice.

Contrary to conventional wisdom, creativity relies on a well-tempered balance of various factors. Although we have not devoted space to discussing all of the important factors, we summarize them below. In order to maximize the fulfillment of creative potential, several conflicting requirements must be met [40]:

• to use picture-based reasoning so as to maximize the probability of finding novel solutions, but to use rule-based reasoning so as to increase the speed of thinking,

• to focus on problem solving so as to enhance the retrievability of key techniques and knowledge, but to defocus on problem solving so as to avoid getting trapped in an unfruitful search path,

• to subdue one's attention during a problem-solving session so as to optimize one's affect, but to extend one's attention beyond a formal problem-solving session so as to reap the benefit of serendipity,

• to perform heuristic searching so as to select a manageable search space, but to avoid premature shrinking of the search space,

• to “zoom in” and pay attention to details, but to “zoom out” and pay attention to big pictures,

• to be highly explorative so as to expand the search space, but to have sufficient task involvement so as not to spread oneself too thin,

• to be sufficiently motivated to overcome obstacles that stand in the way of creative acts, but not to be so excessively motivated by extrinsic rewards as to become risk-averse,

• to be sufficiently confident to assert an unpopular view, but not to be so excessively confident as to overlook clues provided by critics or opponents,

• to be sufficiently disciplined to perform rigorous logical deductions and to play by the rules forged by social consensus, but to be sufficiently undisciplined to defy authority whenever necessary – a "mood swing" between the traditionalist and iconoclast dispositions, in the words of Simonton (see below), and

• last but not least, to be able to flexibly and dynamically switch between two opposing modes of action, as listed above.

The conflicting requirements baffled some investigators. Getzels spoke of the "paradox in creative thought": creative thinking entails child-like playfulness, fantasy and the non-rationality of primary-process thinking, as well as conscious effort, rationality, reality orientation and logic [145]. Csikszentmihalyi mentioned the conflicting requirements of openness and critical judgment [146]. Simonton emphasized the well-adjusted trade-off between the traditionalist and iconoclast dispositions [13]. However, there is no compelling reason that the conflicting requirements must be fulfilled simultaneously. A creative mind is not static and fully "hard-wired"; rather, it is dynamic, flexible and versatile. Therefore, there is no real paradox, only our temporary confusion; nor is there any genuine trade-off or compromise, only a judiciously timed mood swing between extreme randomness (lack of predictability) and extreme determinacy (well-behaved discipline) in the thought process. In other words, one can be extremely speculative and subjective at the stage of formulating a hypothesis, but become extremely logical, methodical and objective at the stage of verification. In contrast, individuals with low creativity often fall short of both extremes and instead practice a static compromise between the two. Such dynamic flexibility, or the lack of it, reflects an individual's habit of mind, also known as personality. The present article focuses on the intellectual aspect of the factors affecting creativity. For a detailed consideration of the emotional aspect of human creativity, see Sec. 4.21 of [41]. The Yerkes-Dodson law [147] and the research on rewards and motivation by Deci and co-workers [148,149] are particularly relevant for applications in education. We omit them here since these factors are not directly related to computer-based creative problem solving.

Geniuses and dumb high-achievers occupy the two extreme ends of a continuous spectrum of human creativity. It is in reference to this spectrum that we evaluate the performance of computer-based creative problem solving. Artificial intelligence investigators have accomplished a feat that was extremely difficult to achieve, mainly because geniuses used the non-algorithmic process of picture-based reasoning as an important part of their creative acts. A digital environment was not the most natural environment in which to implement creative processes. Nevertheless, clever designs of heuristics and clever ways of making the computer program generate and improve heuristics have yielded impressive results.

The elucidation of the enigma of human creativity is expected to provide a better roadmap for future designs of heuristics. Genuine parallel processing may be theoretically unattainable in a digital environment, and a digital computer may never exhibit genuine intuition and understanding, as judged by geniuses' standards. Practical applications are an entirely different matter: close-enough approximations may just be good enough for the time being. The more closely intuition is simulated, the better a digital computer can perform creative acts. Instead of pursuing genuine parallel processing, one can focus on circumventing the restrictions of a digital environment so as to better approximate it.

The need to circumvent practical restrictions actually brings out the importance of understanding the underlying principles of creativity. This point is demonstrated by how past inventors of aircraft circumvented the impracticality of constructing a pair of flapping wings, once attempted by Leonardo da Vinci. In the end, the combined functions of propulsion and flotation performed by a bird's pair of wings were implemented separately: the wings provide flotation (lift) in accordance with Bernoulli's principle, and the propellers (or jet engines) provide propulsion in accordance with Newton's law of action and reaction. Circumventing the restrictions without sacrificing the principle is the key to engineering design when humans seek inspiration from Nature. In this sense, understanding the theoretical principle is by itself a heuristic search for designs that circumvent the practical restrictions. This also means that flawed theories only mislead engineers into getting lost in the jungle of all possibilities.

So far, heuristic searching has been implemented by a two-step design: a programmer-chosen or programmer-suggested search space, and pseudo-parallel processing for high-speed systematic searching within the designated search space. Significant progress has been made in improving the design of heuristics, or of instructions that grant the computer the freedom to find better heuristics. Additional improvement can be made in the direction of replacing pseudo-parallel processing with something closer to genuine parallel processing. The advantage of genuine parallel processing is random access, which, together with improvisation (the freedom to select), lets geniuses perform heuristic searching within the chosen search space. An idea that easily comes to mind is to enlist the help of parallel computers. However, in order to enable effective heuristic searching, the separate parallel computers must each be allowed to access the others' processed data, so as to improvise asynchronously rather than only at scheduled times. It is a good thing that computers have no emotions, so the programmer does not have to be concerned with a computer's refusal to share data, as often happens in real life due to the so-called interdepartmental rivalry in government bureaucracies. Genuine parallel processing with genuine random access may be impossible in a digital environment, but, for practical applications, close enough is often good enough. That is why I advise against simulating the Gestalt phenomena and in favor of focusing on performance alone. How to give the computer a mind so that it can improvise effectively is of course quite challenging. In addition to clever heuristics, clever mathematics that can handle parallel or near-parallel processing is highly desirable [42].
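
To make the two-step design concrete, the following is a minimal, generic sketch (all names are invented for illustration and are not taken from any program discussed above): the programmer designates the search space through a neighbor-generating function, while a programmer-supplied heuristic decides which candidate the machine examines next. In the toy usage at the end, the heuristic "distance to the target" plays the role of the programmer's hint, steering the systematic search without dictating the exact path.

import heapq
from itertools import count

def heuristic_search(start, is_goal, neighbors, heuristic):
    # Greedy best-first search: the programmer-supplied heuristic decides
    # which candidate in the designated search space is expanded next.
    tie = count()  # tie-breaker so the heap never has to compare states directly
    frontier = [(heuristic(start), next(tie), start, [start])]
    visited = {start}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in neighbors(state):  # the search space designated by the programmer
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), next(tie), nxt, [*path, nxt]))
    return None  # the designated search space held no solution

# Toy usage: reach 37 from 1 using the moves +1 and *2, guided by the
# programmer's hint "distance to the target".
TARGET = 37
path = heuristic_search(
    start=1,
    is_goal=lambda n: n == TARGET,
    neighbors=lambda n: (n + 1, n * 2),
    heuristic=lambda n: abs(TARGET - n),
)
print(path)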

In regard to the question posed in the title, "Can a digital computer think?", I have evaded the question and declined to give a straight answer. It was not exactly a cop-out: I simply could not come up with an unequivocal definition of "thinking" and "understanding" owing to their gray-scale nature. Depending on how we "bend" the meaning, the answer could be yes, no, or yes but not exactly. Instead, I replaced the word "thinking" with the phrase "performing intellectually," and said nothing about understanding. In this way, I could give straight answers, which I shall repeat here as a sad commentary from an educator's perspective. The problem-solving programs of Simon and other AI pioneers could perform at an intellectual level significantly higher than our dumb high-achievers. Furthermore, they performed in such a way as if they really had a better understanding than dumb high-achievers. In all good conscience, I could not say that the digital computer could not think. Yet there is still something missing that prevented me from saying that it could think and understand in the same sense as we do when judging real humans like ourselves.

In the present article, two different levels of reasoning were used to evaluate intellectual performance, in lieu of a continuous gray-scale spectrum. The old-fashioned expert systems performed exactly at the level of rule-based reasoning, just like dumb high-achievers. Through clever design of heuristics and the implementation of quasi-analog pattern recognition, the computers of subsequent generations performed at a level significantly better than rule-based reasoning. However, their performance was nowhere near that of human geniuses or highly creative individuals, because the heuristics were fashioned after the insights of past creators. Therefore, the computer cannot be regarded as creative. In other words, a digital computer may perform creative acts without being creative. This statement sounds paradoxical, but a metaphor suffices to make the point clear.

Svengali could make a mediocre singer sing at a professional level, but we all know that it was Svengali who pulled the strings behind the scenes, even though Svengali himself could not perform. Even so, by devising better and better heuristics and by pooling together the experience and insights of past creators, a digital computer may possibly outperform a human genius, if not all geniuses. However, there is no compelling reason to believe that geniuses will not continue to gain new insights in the future, unless the Earth is destroyed prematurely by humans' reckless assaults on the environment. The heuristics devised on the basis of future new insights could only be programmable a posteriori, since no one knows what new insights and new creative acts are forthcoming. In this way, human geniuses may collectively keep one step ahead of the most creative problem-solving computers.

Our ambiguous position is debatable because of past cautionary tales. René Descartes used to deny the existence of consciousness in lower animals, mainly because the animals could not proclaim their consciousness by declaring, "Cogito ergo sum" ("I think, therefore I am"). The outlook has changed considerably since Koko, the gorilla, managed to learn American Sign Language, and a chimpanzee named Nim Chimpsky managed to do something similar [150,151]. Although linguists were reluctant to grant them the ability to master natural languages, few people now dare insist that they have no consciousness and that they cannot think. An average person would probably judge that the computer cannot think like humans because it has no consciousness. This argument is much harder to dispute. The trouble is, we do not even know what consciousness exactly is.

Thinking and understanding, just like consciousness, have many shades of attributes. For example, it is difficult, if not impossible, to set a finite number of objective criteria to differentiate between living creatures with and without consciousness (Sec. 5 of [41]). It is quite conceivable that computer simulation of consciousness can progress to the point that it becomes impossible to tell it apart from the real thing. Yet, deep down in our own consciousness, we would not accept the simulation as real, because something difficult to articulate is still missing. The same can be said about computer simulation of thinking and understanding; at a certain point in time, simulation and reality may become almost impossible to differentiate unequivocally and objectively, but we still cannot accept the notion of a "thinking and understanding" machine because it is just a man-made machine, thus betraying our anthropocentric bias (cf. [152]).

AI professionals used to invoke the celebrated Turing test, but the problem-solving programs of Simon and his contemporaries had already passed it and, I suspect, our dumb high-achievers would have failed it. At this moment, perhaps I can say that the digital computer does not think like humans and understand like humans because it has no free will. I believe in free will, but I declare that I can neither prove nor disprove its existence, and, furthermore, no one else can, either. I refer readers to a recent article of mine [153] for explanations, so that I need not change the subject at this juncture. All I need to add is that the most difficult aspect of free will is the origination issue (Sec. 5.15 of [41]). In my boldest opinion, the origination problem has not been settled yet. Therefore, the related questions concerning thinking, understanding and consciousness still hang in limbo.

Let me just continue by pointing out that the digital computer did not actually find those clever heuristics by itself, because the programmer made it happen, not by outright tipping off the computer, but by guiding the computer program, without micromanaging it, in such a way that the computer was almost guaranteed to discover those heuristics. I did not concoct this argument out of thin air; the original thought is attributable to the composer Richard Wagner, who conveyed some of his profound philosophical thoughts via the vehicle of his monumental opera cycle, Der Ring des Nibelungen.

Wotan, the chief god of Valhalla, ruled the world by signing numerous contracts with other gods. He sought control of the magic ring forged by Alberich, the chief of the underground kingdom Nibelheim. However, he could not simply seize the ring without abrogating a certain treaty. He felt that he was the least free of all, but he cleverly dreamed up a plan. First, he sired a son, Siegmund, and a daughter, Sieglinde. Through a dubious act of incest, the latter two bred a son, Siegfried. Siegfried was the hero of heroes, an Übermensch, so to speak. Wotan avoided direct contact with Siegfried in such a painstaking way that he could claim Siegfried had complete free will to carry out a self-determined act. His secret plan was to create the circumstances under which Siegfried could eventually get the ring for him without the appearance of his involvement in the grand conspiracy. The trouble was that his elaborate scheme could not escape the detection of his wife, Fricka. Fricka saw through the trick and confronted Wotan with the accusation that he had planted a sword in the trunk of an ash tree, in anticipation that Siegmund would find it at the dire moment of greatest need, so as to ensure that events would transpire exactly as Wotan had expected. In other words, Fricka knew that Wotan was pulling the strings behind the scenes all along, exerting an influence that, as Boden once aptly put it, "inclined without necessitating" the desired outcome, but it happened anyway. That is exactly what the programmer did by planting the heuristics and other enabling conditions, just like the sword waiting in the ash trunk, so that the computer could pick them up in the right place at the right time, thus fulfilling the programmer's wish and duping the rest of us into calling the feat a genuine creative act. Amen!

That is as close as I wish to come to answering the question for the time being. As for the future, I have something more serious to worry about. It is a sad commentary for me to confess that, during my long teaching career, I have had both the privilege and the misfortune of witnessing two opposing trends: the problem-solving computer programs performed more and more like humans, whereas some of our students performed more and more like robots. I now wish to devote more time to teaching students to think creatively, in a desperate effort to prevent, or at least slow down, the ultimate crossing of the two curves representing these opposite trends. As for future computers, can they possibly perform in a way that becomes virtually indistinguishable from a human genius? Science and technology have taught us a lesson: never say never, except perhaps just this once.

Appendix: The Trajectory of an Outer Planet

Because of the Earth's rotation from west to east, all celestial bodies, including the Sun and the Moon, rise in the east and set in the west in their apparent (relative) motion, with a period of almost exactly 24 hours (diurnal apparent motion). Because of the orbiting motion of the Earth around the Sun from west to east (the same direction as its rotation), the Sun also moves relative to the background constellations, slowly, day after day, in its apparent motion from west to east, with a period of about 365 days. The Sun's trajectory relative to the fixed-star background is called the ecliptic. The Sun's glare is too bright to allow our eyes to see the background constellations around it, except during a total eclipse. But these constellations and the ecliptic can be inferred. For example, the constellation at the zenith at midnight is directly opposite the Sun's position on the ecliptic.

If one extends the plane of the Earth's orbit so that it intersects the celestial sphere, the circular line of intersection is exactly the ecliptic. Likewise, if one extends the plane formed by the Earth's equator to intersect the celestial sphere, the circular line of intersection is known as the celestial equator. The ecliptic intersects the celestial equator at an angle of about 23.5°, i.e., the inclination angle of the Earth's axis of rotation. The extension of the Earth's axis of rotation meets the celestial sphere at the celestial North Pole and the celestial South Pole. The celestial North Pole is about where Polaris (of the constellation Ursa Minor) is.

The Moon also orbits the Earth from west to east, with a period of about 27 days. Therefore, the Moon rises later and later day after day (or night after night). That is, the Moon moves slowly from west to east relative to the constellation background, with a period of about 27 days, as it waxes and wanes. Its apparent trajectory on the celestial sphere nearly coincides with the ecliptic, because the Moon's orbit is nearly coplanar with the Earth's orbit. So are the planetary orbits. That is why the planets, as well as the Moon, are always seen near the ecliptic. However, these orbits are not strictly coplanar with the Earth's orbit, so the trajectories of the planets, as well as that of the Moon, are not exactly the same as the ecliptic. Had the Moon's trajectory strictly coincided with the ecliptic, we would witness a solar eclipse and a lunar eclipse, alternately, approximately every month (27 days)!

The apparent motion of the planets relative to the background constellations is somewhat complicated compared to that of the Moon. Unlike the Moon, the planets do not orbit the Earth but, instead, the Sun. Let us consider only the outer planets (those farther from the Sun than the Earth, i.e., Mars, Jupiter, Saturn, Uranus and Neptune).

The outer planets would always move eastward relative to the background constellations (just like the Sun) if the Earth did not orbit the Sun but only rotated about its axis (remaining stationary on its orbit). But the Earth also orbits the Sun in the same direction. As a consequence, an outer planet does not always move eastward relative to the background constellations. When it does move eastward, just like the Moon and the Sun, the motion is called direct motion. The direct motion slows down and eventually the planet becomes stationary relative to the background constellations (it is said to be at station). A station is actually a turning point, because after station the planet moves westward (retrograde motion). The retrograde motion accelerates first and then slows down again. Eventually, the planet reaches another turning point (station) and reverses direction relative to the background constellations, resuming direct motion.

It is like two racers running along two neighboring tracks of unequal diameters. Imagine that the Earth is running on the inner track and is passing Jupiter from behind. Projecting the image of Jupiter onto the background of the spectator stands, Jupiter is seen from the Earth as first moving forward (eastward) relative to the spectator background. When the two runners are momentarily shoulder to shoulder, Jupiter appears stationary. Eventually the Earth overtakes Jupiter, and Jupiter, now lagging behind, is seen as moving backward (westward) relative to the stationary spectator background.

Since the orbital period of the Earth is shorter than that of Jupiter, the Earth will eventually come up from behind Jupiter again, after its orbital motion exceeds Jupiter's by one lap, returning to a relative position from which direct motion of Jupiter is witnessed again. Before reaching that point, there is a moment when Jupiter becomes stationary once more, as it switches from retrograde motion back to direct motion. Thus, a kinked loop (or Z-shaped trajectory) appears in the otherwise smooth trajectory of Jupiter.
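
The loop can be reproduced with a toy model (a hypothetical sketch assuming circular, coplanar orbits and rounded orbital radii and periods, so the numbers are illustrative only): project Jupiter's position, as seen from the moving Earth, onto the background sky and watch the geocentric longitude reverse direction around opposition.

import math

# Toy heliocentric model: circular, coplanar orbits with rounded values.
EARTH_RADIUS, EARTH_PERIOD = 1.0, 365.25        # AU, days
JUPITER_RADIUS, JUPITER_PERIOD = 5.2, 4332.6    # AU, days

def position(radius, period, t):
    # Heliocentric position of a planet on a circular orbit at time t (days).
    angle = 2.0 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def geocentric_longitude(t):
    # Apparent ecliptic longitude of Jupiter as seen from the moving Earth.
    ex, ey = position(EARTH_RADIUS, EARTH_PERIOD, t)
    jx, jy = position(JUPITER_RADIUS, JUPITER_PERIOD, t)
    return math.degrees(math.atan2(jy - ey, jx - ex)) % 360.0

# Sample the apparent longitude every 5 days and flag retrograde stretches,
# i.e., intervals in which the longitude decreases (westward motion).
previous = geocentric_longitude(0)
for day in range(5, 800, 5):
    current = geocentric_longitude(day)
    delta = (current - previous + 180.0) % 360.0 - 180.0   # signed, wrap-safe change
    print(f"day {day:3d}: longitude {current:6.1f} deg  "
          f"({'retrograde' if delta < 0 else 'direct'})")
    previous = current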

The above-described apparent motion stems from the fact that the outer planets orbit the Sun rather than the Earth. It is not too difficult to comprehend their apparent motion in the framework of Copernicus' heliocentric system. Imagine how difficult it would be to comprehend for believers in the geocentric view: it took a bizarre model, known as Ptolemy's epicycle theory, to explain the apparent motion of the outer planets. In this context, Ptolemy's theory was extremely, even insanely, imaginative!

Acknowledgements

The author is deeply indebted to his friend and colleague, the late Professor Michael E. Conrad of Wayne State University, for his indelible influence. The author is also indebted to his mentor, Professor David C. Mauzerall of The Rockefeller University, who taught creativity by setting personal examples. The author wishes to express his gratitude to Gerard Jagers op Akkerhuis, Arran Gare and Leslie Smith for reading several versions of the manuscript and for valuable criticisms and suggestions, which helped improve the manuscript significantly.

Notes added in the proof

Recently, a team of scientists at the Department of Psychological and Brain Sciences of Dartmouth College reported finding the locus of mental imagery. They performed multivariate pattern analysis of functional MRI data and discovered a widespread neural network that performs specific mental manipulations on the contents of visual imagery. The report, "Network structure and dynamics of the mental workspace," co-authored by A. Schlegel, P. J. Kohler, S. V. Fogelson, P. Alexander, D. Konuthula and P. U. Tse, appeared in the Proceedings of the National Academy of Sciences of the United States of America (September 16, 2013). (http://www.pnas.org/content/early/2013/09/13/1311149110).

References
