The basic idea of a Brain-Computer Interface (BCI) is to use brain activity as an input for a computer system. There are different ways to use this input in a BCI: it can serve as a command to control a system, or the system can react to the brain activity of a user. A more detailed view of the application areas of BCIs can be found below in the section „Brain-Computer Interfaces & Electroencephalography".
According to Jackson and Mappus ([JM10]) the two main ways to access brain activity are Electroencephalography (EEG) - on which this wiki article will focus - and Functional Near Infrared Spectroscopy (fNIRS). The latter is a technique in which light is projected onto the brain and the wavelengths of the reflections produced by the brain are analyzed. For a detailed introduction it is recommended to take a look at „Brain-Computer Interfaces" by Jackson and Mappus ([JM10]).
Electroencephalography (EEG) is the recording of electrical activity produced by the brain. When neurons are activated, electrical current flows. This activity can be recorded by placing electrodes on the scalp (electroencephalogram) or directly on the brain (electrocorticogram). The latter, however, is very intrusive and in general not suitable for BCIs. EEG is a fast method with good temporal resolution and hence applicable for real-time processing. In contrast to EEG, there are methods with a much better spatial resolution, such as computed tomography (CT) or functional magnetic resonance imaging (fMRI).
The first to discover this electrical activity was Richard Caton, who in 1875 examined the exposed brains of monkeys and rabbits and observed their electrical activity. Almost 50 years later, in 1924, Hans Berger became the best-known person in the history of EEG. He used his radio equipment to amplify the signals produced by human brains and was the first to record the electrical activity on paper. Additionally, he found that the activity he recorded changed depending on the state of the brain. The name „electroencephalogram" was introduced by him, and after his first discoveries he published several papers with the title „Über das Elektrenkephalogramm des Menschen".
The last two persons we want to mention are Adrian and Matthews, who introduced brain waves (see section „Brain Waves") and became famous for the so-called „alpha rhythm": signals with frequencies between 10 and 12 Hz.
When measuring electrical activity it is possible to distinguish between three types:
Brain waves describe the signals produced by spontaneous activity. Each signal is characterized by a frequency (Hz) and an amplitude (μV). The frequencies range from 0.5 to 50 Hz, while the amplitudes range between 0 and 100 μV when activity is recorded from the scalp. The amplitudes are much bigger when EEG is measured directly on the brain - between 1000 μV and 2000 μV.
Brain waves describe different frequency bands:
If spontaneous activity is measured, alpha, beta and gamma waves can be recorded in awake adults, while theta and delta waves only occur during sleep. However, if the signals are recorded and processed appropriately, theta and delta waves can also be measured, yet not in real time.
Beta activity is best seen when recorded from the frontal or parietal lobe (see the section about the human brain), while for alpha waves the occipital lobe is suitable.
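As a small illustration of these frequency bands, the following C# sketch maps a dominant frequency to a band name. The exact boundary values vary slightly between sources (this article itself quotes alpha as 8-12 Hz and beta as 12-28 Hz later on), so the cut-offs used here are only one common convention, not a definitive definition.

using System;

class BrainWaveBands
{
    // Maps a dominant frequency (Hz) to a band name.
    // Boundaries are one common convention; sources differ slightly.
    static string BandOf(double frequencyHz)
    {
        if (frequencyHz < 4.0)  return "delta";
        if (frequencyHz < 8.0)  return "theta";
        if (frequencyHz < 13.0) return "alpha";
        if (frequencyHz < 30.0) return "beta";
        return "gamma";
    }

    static void Main()
    {
        Console.WriteLine(BandOf(10.0)); // alpha - the classic relaxed, eyes-closed rhythm
        Console.WriteLine(BandOf(2.0));  // delta - deep sleep
    }
}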
To measure brain activity, electrodes are (usually) placed on the scalp and the potential difference between those electrodes is measured. The electrodes can be put directly on the head or, e.g., an electrode cap can be used. There are two basic ways to measure the potential differences:
A usual setup includes an amplifier to which the electrodes are connected. The brain signals are rather weak, hence the amplification is necessary. The electrical recording also needs to be transformed into a digital signal - an analog-to-digital converter is used for that. Finally, the signal is recorded by, e.g., a computer.
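To make the amplify-then-digitize step concrete, here is a minimal sketch with illustrative numbers; the gain, reference voltage and bit depth are assumptions for the example, not values from any specific device.

using System;

class EegDigitization
{
    static void Main()
    {
        double signalMicroVolts = 50.0;  // typical scalp EEG amplitude (0-100 uV range)
        double amplifierGain = 10000.0;  // illustrative amplifier gain
        double adcFullScaleVolts = 5.0;  // illustrative ADC full-scale voltage
        int adcBits = 16;                // illustrative ADC resolution

        // Amplify: 50 uV * 10000 = 0.5 V
        double amplifiedVolts = signalMicroVolts * 1e-6 * amplifierGain;

        // Quantize to a digital code (unipolar full scale, for simplicity).
        int maxCode = (1 << adcBits) - 1;
        int digitalCode = (int)Math.Round(amplifiedVolts / adcFullScaleVolts * maxCode);

        Console.WriteLine("Amplified: {0} V -> ADC code: {1}", amplifiedVolts, digitalCode);
    }
}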
A general question when measuring EEG is where to place the electrodes. To obtain comparable results the 10-20 system was introduced, so that scientists use the same positions from which brain activity is recorded.
The 10-20 system is a guideline for electrode positions, the positions are named according to their placement in relation to the lobes of the brain. Further, positions are also tagged with numbers - odd numbers refer to the left side (hemisphere) of the brain while even numbers refer to the right. For the margin between the two hemispheres the letter Z is used, indicating a zero.
The letters used for the positions are:
For the transitions between two lobes, combinations of the letters are often used (e.g. FT → between the frontal and temporal lobe).
The numbers 10 and 20 refer to percentages of two lengths: the first length is measured between the nasion (right above the nose) and the inion (at the back of the head where there is a small dent), the second is measured from one ear (preauricular point, a small dent slightly above the ear) to the other. The points can be seen in the pictures on the right.
The default procedure would be to measure those two lengths, then take 50% of the front-to-back length starting from the nasion and mark Cz in the middle of the head. A guide to marking the positions can be found here.
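A small sketch of this measuring procedure for the midline positions is given below. It assumes the usual midline percentages of the nasion-inion distance (Fpz at 10%, Fz at 30%, Cz at 50%, Pz at 70%, Oz at 90%); the distance itself is an example value and of course has to be measured on the actual head.

using System;

class TenTwentyMidline
{
    static void Main()
    {
        // Example nasion-to-inion distance in cm (measure this on the subject first).
        double nasionToInion = 36.0;

        // Midline positions as fractions of the nasion-inion length (assumed standard values).
        string[] names = { "Fpz", "Fz", "Cz", "Pz", "Oz" };
        double[] fractions = { 0.10, 0.30, 0.50, 0.70, 0.90 };

        for (int i = 0; i < names.Length; i++)
        {
            Console.WriteLine("{0}: {1:F1} cm from the nasion", names[i], nasionToInion * fractions[i]);
        }
    }
}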
Below all possible positions that could be used in the 10-20 system can be seen.
When measuring EEG, different aspects can influence the recorded signal depending on your setup. The two common sources for artifacts in the signal are:
As we measure brain activity with EEG, it is also interesting to look at the basic structure of the human brain and its functionality. In general, the brain and the spinal cord form the main part of the central nervous system (CNS). In the following we briefly look at the anatomy of the brain and the functionalities related to its different areas.
The brain consists of three main parts: the cerebrum (or forebrain), the cerebellum (or hindbrain) and the brain stem. The cerebrum has a surface layer called the cerebral cortex and is divided into four lobes (frontal, parietal, temporal, occipital). Further, it is divided into the left and right hemisphere, which are basically the left and the right part of the cerebrum. At the border between the frontal and parietal lobe lie the motor cortex and the somatic sensory cortex. The hippocampus partly belongs to the temporal lobe, as do the amygdaloid nuclei.
As the cerebrum is the part of the brain most relevant for EEG, its functionalities are described in most detail below. The cerebellum is mostly related to the coordination of voluntary muscle movement as well as maintaining balance, while the brain stem is related to unconscious processes such as the control of respiration, the cardiovascular system (heart) and hormone secretion.
The functionality of the cerebrum can be described as follows:
Many of the applications for a BCI come from the clinical application of EEG (e.g. coma monitoring). In the following, the different areas according to Jackson and Mappus ([JM10]) are briefly presented.
Jackson and Mappus distinguish between three general areas: assistive technology, cognitive diagnostics/augmented cognition, and recreation.
Assistive Technology: As the name indicates these are applications that assist humans in different ways. It is often related to people with disabilities of any form (e.g. paralysis). The main subcategories are:
Cognitive Diagnostics & Augmented Cognition: These are applications that are able to analyze processes of the brain, which can then be used in different scenarios as seen below:
Recreation: BCIs can also be used for entertainment and relaxation.
Neurofeedback (Biofeedback): A person can change their mental state by training their emotions and getting direct feedback on those changes, for example through the flow of a game, where obstacles get in the way if the user does not reach the mental state required to advance, or through an armband that changes color depending on the user's emotions.
Musical neurofeedback for treating depression in elderly people ([RR15]): by encouraging participants to increase the loudness and tempo of musical pieces, they were encouraged to increase their arousal and valence, and thus to direct their emotional state toward positive feelings and alleviate depression. The arousal (wakefulness) level was determined by computing the ratio of the beta (12-28 Hz) and alpha (8-12 Hz) brainwaves; a minimal sketch of this ratio computation is shown after this list. The recordings were fed to an expressive music performance system which calculates appropriate expressive transformations of timing, loudness and articulation. The final results of the research showed a 17.2% average improvement in BDI scores and decreased relative alpha activity in the left frontal lobe.
Emotion Recognition: EEG can help with emotion recognition, which is useful not only for evaluating a user's emotions during the interaction with a system or place, but also for adapting the interaction of a user with a system, for example when learning a game.
Driving a car:
Game interaction:
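The arousal measure mentioned in the musical neurofeedback example above is a ratio of band powers. Here is a minimal sketch of how such a ratio could be computed; the band-power inputs are assumed to come from a prior spectral analysis step that is not shown, and the numbers are purely illustrative.

using System;

class ArousalEstimate
{
    // Arousal as the ratio of beta band power to alpha band power,
    // following the description of the musical neurofeedback study above.
    static double Arousal(double betaPower, double alphaPower)
    {
        return betaPower / alphaPower;
    }

    static void Main()
    {
        // Illustrative band-power values (arbitrary units) from a spectral analysis step.
        double alphaPower = 4.0; // 8-12 Hz
        double betaPower = 6.0;  // 12-28 Hz

        Console.WriteLine("Arousal index: {0:F2}", Arousal(betaPower, alphaPower));
    }
}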
The headset uses 16 electrodes divided equally over the two hemispheres. They are placed over the main parts of the brain that are important for recording certain emotions and mental states. One of its downsides is that it does not measure any data from the midline, but it does record a lot of interesting and important signals at some of the key locations it covers. It measures data in real time, with perhaps a slight delay at times depending on the signal, over a wireless 2.4 GHz connection. It has a sampling rate of 2048 Hz and a bandwidth of 0.2-45 Hz, and it records certain waves in the brain activity.
The headset can not only control your mouse movement, but is also able to read behavioral commands through its cognitive (13), expressive (13) and affective (3) functions.
The cognitive features concern the thought processes of the user. The headset can interpret conscious thoughts and also intent, such as rotating or moving an object.
The headset can recognize facial expressions like blinking or smiling: as the user moves his facial muscles, the arms of the headset that are near the face read the data.
It can also read more complex states such as emotions, for example excitement or engagement, which are interpreted by reading the wave activity in certain parts of the brain.
The headset can be used for different purposes, but it is also limited in the number of actions it can perform. The actions also have to be trained to the user's mind and chain of thoughts to get good results. Since the user can select the order of the commands and a command can be mapped to another actual action in a system, we can work around these limitations and extend the use of the headset to any function we want or need.
So the headset provides a lot of functionality; however, not all of it is actually a result of brain activity. The facial expressions along with the eye movements are usually artifacts, hence the only thing the Emotiv does is filter them out and provide them as functionality. The affective states are detected from brain signals (as far as we know, since we do not know how it is actually implemented), and the cognitive commands also rely on your brain activity. Yet the latter can be influenced by your muscle activity: e.g. you could train a command while performing a hand movement that relates to it, or, as in the demonstration video, moving forward („push") worked well in combination with leaning forward and the other way round for moving back („pull").
Minimum Hardware and Software requirements:
There are different SDKs for the Emotiv with slightly different possibilities. On the one hand, there is the free SDK Lite available for everyone, with which you can simply use all the commands provided with the device (e.g. you can train commands, get the affective states and so on). However, with the SDK Lite it is not possible to access the raw EEG data; for that the Research SDK is necessary, which can be purchased with the headset. There also exists a community SDK which tries to make the access to raw EEG data possible (https://github.com/Emotiv/community-sdk).
You can basically use any language you want as long as you can integrate the .dll (C++) files of the SDK with it. Wrappers are provided for C#, Java, Python and Matlab.
With the Control Panel you can test out all the features of the Emotiv headset. You can create a profile for yourself, train some commands and, e.g., use them in combination with EmoKey (see below) as an alternative input. You can also monitor the affective states and see how the facial expressions are detected; additionally, you can enable mouse control via the gyroscope.
The composer can be used to simulate all functionality of the headset, so it is not necessary to actually wear the headset to test e.g. an application.
EmoKey links the Emotiv technology to your applications by easily converting detected events into any combination of keystrokes. EmoKey is a nonintrusive, lightweight, background process that runs behind your existing games or applications. EmoKey lets you create mappings that define how detections are converted to keystroke combinations. Your mappings can then be saved and shared.
Below you can get an introduction to the different tools of the Research SDK; the SDK Lite versions are slightly different in terms of their design and some functionality. However, it is recommended to just test the tools yourself.
(Information and pictures taken from the SDK User Manual)
The Emotiv API is exposed as an ANSI C interface that is declared in 3 header files (edk.h, EmoStateDLL.h, edkErrorCode.h) and implemented in 2 Windows DLLs (edk.dll and edk_utils.dll). C or C++ applications that use the Emotiv API simply include edk.h and link with edk.dll.
The Emotiv EmoEngine refers to the logical abstraction of the functionality that Emotiv provides in edk.dll. The EmoEngine communicates with the Emotiv headset, receives preprocessed EEG and gyroscope data, manages user-specific or application-specific settings, performs post-processing, and translates the Emotiv detection results into an easy-to-use structure called an EmoState. Emotiv API functions that modify or retrieve EmoEngine settings are prefixed with “EE_.”
Example of integrating the EmoEngine and the Emotiv EPOC with a videogame:
An EmoState is an opaque data structure that contains the current state of the Emotiv detections, which, in turn, reflect the user’s facial, emotional and cognitive state. EmoState data is retrieved by Emotiv API functions that are prefixed with “ES_.” EmoStates and other Emotiv API data structures are typically referenced through opaque handles (e.g. EmoStateHandle and EmoEngineEventHandle). These data structures and their handles are allocated and freed using the appropriate Emotiv API functions (e.g. EE_EmoEngineEventCreate and EE_EmoEngineEventFree).
The picture to the right shows a high-level flow chart for applications that incorporate the EmoEngine. During initialization, and prior to calling Emotiv API functions, your application must establish a connection to the EmoEngine by calling EE_EngineConnect or EE_EngineRemoteConnect. Use EE_EngineConnect when you wish to communicate directly with an Emotiv headset. Use EE_EngineRemoteConnect if you are using SDKLite and/or wish to connect your application to XavierComposer or Emotiv Control Panel.
The EmoEngine communicates with your application by publishing events that can be retrieved by calling EE_EngineGetNextEvent(). For near real-time responsiveness, most applications should poll for new EmoStates at least 10-15 times per second. This is typically done in an application’s main event loop or, in the case of most videogames, when other input devices are periodically queried. Before your application terminates, the connection to EmoEngine should be explicitly closed by calling EE_EngineDisconnect().
There are three main categories of EmoEngine events that your application should handle:
Further information about programming with the SDK is provided by the User Manual.
Here is a short code example from the documentation of the SDK Lite; however, it is slightly adjusted as it connects to the EmoComposer with RemoteConnect() instead of connecting to the EmoEngine.
The example basically shows how to connect to the Composer or Engine and how to add your own handler for EmoEvents.
using System;
using Emotiv; // namespace of the Emotiv C# wrapper (assumed from the SDK examples)

class Program
{
    EmoEngine engine;
    static ushort composerPort = 1726;

    static void Main(string[] args)
    {
        Program program = new Program();
        Console.WriteLine("hello remote engine connected");
        program.mainLoop();
    }

    void mainLoop()
    {
        // Get the EmoEngine singleton and register a handler for EmoState updates.
        engine = EmoEngine.Instance;
        engine.EmoStateUpdated += new EmoEngine.EmoStateUpdatedEventHandler(engine_EmoStateUpdated);

        // Connect to the locally running EmoComposer instead of a physical headset.
        engine.RemoteConnect("127.0.0.1", composerPort);

        while (true)
        {
            // Poll for new events; registered handlers are invoked from here.
            engine.ProcessEvents(1000);
        }
    }

    void engine_EmoStateUpdated(object sender, EmoStateUpdatedEventArgs e)
    {
        if (e.userId == 0)
        {
            EmoState es = e.emoState;
            Console.WriteLine("{0} ; excitement: {1} ", e.userId, es.AffectivGetEngagementBoredomScore());
        }
        else if (e.userId == 1)
        {
            EmoState es = e.emoState;
            Console.WriteLine("{0} ; excitement: {1} ", e.userId, es.AffectivGetEngagementBoredomScore());
        }
    }
}
Below you can see the basic code necessary to access the raw EEG data with the Research SDK, taken from the code examples provided by the SDK documentation.
// Enable data acquisition for this user.
engine.DataAcquisitionEnable((uint)userID, true);

// Ask for up to 1 second of buffered data.
engine.EE_DataSetBufferSizeInSec(1);

// Get the current EEG data.
Dictionary<EdkDll.EE_DataChannel_t, double[]> data = engine.GetData((uint)userID);
The two examples show two ways the Emotiv can be used: either you use the built-in functionality (e.g. the affective states), or you take the raw data and do whatever you want with it (e.g. write your own emotion recognition algorithm).
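To illustrate the second way, here is a minimal sketch (not from the SDK documentation) that continues the raw-data snippet above and simply computes the mean of the buffered samples per channel; the averaging is only a placeholder for whatever processing you would actually implement yourself.

// Continues the Research SDK snippet above: "data" is the dictionary returned by engine.GetData().
foreach (var channel in data)
{
    double[] samples = channel.Value;
    double sum = 0.0;
    foreach (double s in samples)
    {
        sum += s;
    }
    double mean = samples.Length > 0 ? sum / samples.Length : 0.0;

    // Replace this with your own analysis, e.g. band powers or an emotion classifier.
    Console.WriteLine("{0}: {1} samples, mean value {2:F2}", channel.Key, samples.Length, mean);
}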
Many more examples are available, also for languages other than C# (C++, Python, Java, Matlab).
A paper ([HD14]) examined the quality of the Emotiv device along with three other commercially available devices and compared them.
Features:
They all offer methods for logging the timing of events and coding, but the EPOC's method of recording event triggers over the serial port is highly susceptible to jitter and delay, because it relies on the recording PC's operating system to coordinate with the serial port for integration with the EEG.
[TM02] Teplan, Michal. "Fundamentals of EEG measurement." Measurement Science Review 2.2 (2002): 1-11.
[MP95] Malmivuo, Jaakko; Plonsey, Robert. Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. Chapter 13, "Electroencephalography". Oxford University Press, 1995. (Online version available at http://www.bem.fi/book/)
[KA13] Kandel, Eric R., et al. Principles of Neural Science. 5th ed. Chapter 1, "The Brain and Behavior". New York; London: McGraw-Hill Medical, 2013.
[JD09] DeSesso, John M. "Functional Anatomy of the Brain." In: Metabolic Encephalopathy. Springer New York, 2009. pp. 1-14.
[FMD13] Fraga, Tania; Pichiliani, Mauro; Louro, Donizetti. "Experimental art with brain controlled interface." In: Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. Springer Berlin Heidelberg, 2013. pp. 642-651.
[LW14] Li, Gang; Chung, Wan-Young. "Estimation of Eye Closure Degree Using EEG Sensors and Its Application in Driver Drowsiness Detection." Sensors 14.9 (2014): 17491-17515.
[DA13] Dutta, Arindam, et al. "A low-cost point-of-care testing system for psychomotor symptoms of depression affecting standing balance: a preliminary study in India." Depression Research and Treatment 2013 (2013).
[HD14] Hairston, W. David, et al. "Usability of four commercially-oriented EEG systems." Journal of Neural Engineering 11.4 (2014): 046018.
[HY14] Hao, Yu, et al. "A visual feedback design based on a brain-computer interface to assist users regulate their emotional state." In: CHI '14 Extended Abstracts on Human Factors in Computing Systems. ACM, 2014. pp. 2491-2496.
[JM10] Jackson, Melody Moore; Mappus, Rudolph. "Applications for brain-computer interfaces." In: Brain-Computer Interfaces. Springer London, 2010. pp. 89-103.
[HY13] Yiyuan, Huang. "Hybridization between brain waves and painting." In: Proceedings of the Virtual Reality International Conference: Laval Virtual. ACM, 2013. p. 20.
[NL09] Nacke, Lennart. Affective Ludology: Scientific Measurement of User Experience in Interactive Entertainment. 2009. pp. 86-89 and Chapter 5, "Boredom, Immersion, Flow: Psychophysiological assessment of affective level designs in a First-Person Shooter game".
[LSN10] Liu, Yisi; Sourina, Olga; Nguyen, Minh Khoa. "Real-time EEG-based human emotion recognition and visualization." In: Cyberworlds (CW), 2010 International Conference on. IEEE, 2010. pp. 262-269.
[AP13] Aspinall, Peter, et al. "The urban brain: analysing outdoor physical activity with mobile EEG." British Journal of Sports Medicine, 2013.
[RR15] Ramirez, Rafael, et al. "Musical neurofeedback for treating depression in elderly people." Frontiers in Neuroscience 9 (2015).
[MR13] Mandryk, Regan L., et al. "Games as neurofeedback training for children with FASD." In: Proceedings of the 12th International Conference on Interaction Design and Children. ACM, 2013. pp. 165-172.
[WMG] http://www.medicine.mcgill.ca/physio/vlab/biomed_signals/EEG_n.htm (last visited 22.11.2015)
[WBC] http://www.bci2000.org/wiki/index.php/User_Tutorial:EEG_Measurement_Setup (last visited 22.11.2015)
[WEHD] https://emotiv.zendesk.com/hc/en-us/articles/200782309-Does-EMOTIV-really-measure-signals-from-my-brain- (last visited 23.11.15)