25th Anniversary of the First Control of a Robot Using EEG Signals

Introduction
Twenty-five years ago, in 1988, electromagnetic energy emanating from the human brain was employed for the first time in history to control a physical object, a robot [1,2]. It was a milestone for Computer Science (pattern recognition, control), Systems Biology (conative biology, brain emanation patterns), and Robotics. Another way of looking at it is that it was an engineering approach to psychokinesis. The approach was based on two interfaces: one toward a biological brain, for EEG signal processing, and the other toward a physical object, in this case a robot; with today's technology either interface can be wireless. This editorial marks the event and sheds some light on it from today's perspective.

Brain-Robot Interface
The setup of the system for EEG-driven robot control [1,2] consists of: 1) a human subject intentionally emanating various patterns in his/her EEG signals; 2) an interface toward a computer, i.e., a device which captures those signals; 3) computer-based processing of the EEG signals, including learning and pattern-recognition algorithms; 4) an interface toward the physical object, a robot; and 5) a feedback channel (visual, audio, etc.) ensuring that the subject controlling the physical object observes the results of the intentions emanated as EEG patterns. A minimal sketch of this loop is given below.
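As an illustration only, the five components above can be arranged as the following control loop. This is a hedged sketch, not the original implementation; the objects eeg_device, classifier, and robot and their methods are hypothetical placeholders for components 2)-4).

def control_loop(eeg_device, classifier, robot):
    """One pass of the brain-robot interface loop (hypothetical API)."""
    sample = eeg_device.read_sample()    # 2) interface toward the computer
    state = classifier.classify(sample)  # 3) learning / pattern recognition
    if state == "relaxed":               # 1) the subject's intentional EEG pattern
        robot.start()                    # 4) interface toward the robot
    elif state == "alert":
        robot.stop()
    # 5) feedback: the subject watches whether the robot moves or stops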

Brain Signals
Various types of EEG signals may be used for object control. A taxonomy of EEG signals was introduced in [3] which divided EEG signals into spontaneous and event-related; event-related signals were further divided into evoked and anticipatory, and anticipatory signals into preparatory (e.g., the readiness potential) and expectatory (e.g., the contingent negative variation, or CNV). If spontaneous EEG is used, usually a frequency band is observed. Often the alpha band (8-13 Hz) is used, and its intentional change can be named Contingent Alpha Variation (CαV), a name motivated by that of the CNV potential given in [4].
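The quantity behind CαV is the activity in the alpha band. As a hedged illustration, alpha-band power can be estimated as below; the filter order, the one-second window, and the use of scipy are my assumptions, and the 100 Hz sampling rate matches the 1988 setup described in the next section.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100  # Hz, assumed sampling rate

def alpha_power(eeg_window):
    """Mean power of the 8-13 Hz alpha band in a 1-D EEG window (~1 s long)."""
    b, a = butter(4, [8 / (FS / 2), 13 / (FS / 2)], btype="bandpass")
    alpha = filtfilt(b, a, eeg_window)
    return np.mean(alpha ** 2)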

The 1988 Experiment
The scenario of the experimental research consisted of a human subject controlling a mobile robot moving along a closed black line, with predefined points at which the robot should be started, stopped, and then restarted. The subject generates increased CαV by entering a relaxation state (utilising eye closure), which (re)starts the movement of the robot. Decreased CαV is achieved by entering an alert state (opening the eyes), which stops the robot while the subject observes where it stopped.
The robot used was the Elehoby Line Tracer, obtained at the Akihabara market in Tokyo, Japan, in 1984. It was a state-of-the-art toy mobile robot, able to follow a black line on the floor using its own on-board intelligence.
The signal processing started with filtering of the EEG alpha band (8-13 Hz), which was done in hardware by the EEG acquisition device. The signal was sampled at a rate of 100 Hz and processed on a 1988 PC/XT computer. The signal processing for robot control had a learning (calibration) phase and a pattern-recognition (examination of the learned) phase. The learning phase established the probability density curves for both time- and amplitude-differences computed between two adjacent EEG samples, for both the eyes-closed and eyes-open cases. The probability density distributions were computed from 10 s sessions of EEG acquisition during the learning phase. In the pattern-recognition phase, the learned probability density curves were used as templates against which the EEG samples were classified into eyes closed (relaxation state) and eyes open (alert state), which controlled the robot's start/stop behavior. For each sample, the recognition was done in hard real time, within the 10 ms before the next sample appeared. A TV station featured the lab, the equipment, the students involved, and the experiment.
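A minimal sketch of the amplitude-difference part of this scheme follows: histograms of differences between adjacent samples act as learned probability density templates for the two brain states, and a window is classified against them. The bin edges and the log-likelihood decision rule are my assumptions, not details given in the original reports [1,2].

import numpy as np

BINS = np.linspace(-50.0, 50.0, 41)  # assumed amplitude-difference bins (uV)

def learn_template(eeg_session):
    """Learning phase: density of adjacent-sample differences (10 s session)."""
    diffs = np.diff(eeg_session)
    hist, _ = np.histogram(diffs, bins=BINS, density=True)
    return hist + 1e-9  # avoid zero probabilities in the template

def classify(eeg_window, closed_template, open_template):
    """Recognition phase: match a window against the two learned templates."""
    diffs = np.diff(eeg_window)
    idx = np.clip(np.digitize(diffs, BINS) - 1, 0, len(BINS) - 2)
    ll_closed = np.sum(np.log(closed_template[idx]))
    ll_open = np.sum(np.log(open_template[idx]))
    return "eyes_closed" if ll_closed > ll_open else "eyes_open"

In use, learn_template would be run once per brain state during calibration, after which classify is cheap enough to fit within a 10 ms per-sample budget even on modest hardware.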

Discussion
Interestingly, it took 11 years for the second result on controlling a physical device with brain signals to appear [5]. The 1999 research used signals recorded with electrodes inside the brain of a monkey. That kind of recording is denoted invasive, while that of 1988 is denoted non-invasive.
The 1988 and the 1999 results were the only ones in brain-signal-driven control of physical object movement achieved in the 20th century. In the 21st century, starting in 2000, many more results have been reported, including ones using more advanced robots. Interested readers can find more detail in recent reports such as [6].
It should be noted that, besides the challenge of controlling physical objects, there was the challenge of controlling virtual objects on a computer screen [7]. The first such results were also achieved in the 20th century. Further detail about the first 10 reports on EEG-controlled objects, both physical and virtual, is given in [8].

Conclusion
In 1988 a new direction in science was opened: the control of physical objects using brain signals. Today many researchers follow this direction under various terms, such as brain-robot interface (BRI) and the more general brain-computer interface (BCI). With today's technology the computer is small enough to be a physical part of either the EEG interface or the physical-object interface, rendering the term brain-computer interface potentially obsolete for EEG control.