
Brain-Computer Interfaces (BCI)

Illustration of a BCI


What is a Brain-Computer Interface (BCI)

The basic idea of a Brain-Computer Interface (BCI) is to use brain activity as an input for a computer system. This input can be used in different ways: it can serve as a command to control a system, or the system can react to the brain activity of a user. A more detailed view of the application areas of BCIs can be found below in the section "Brain-Computer Interfaces & Electroencephalography".

According to Jackson and Mappus ([JM10]), the two main ways to access brain activity are Electroencephalography (EEG) - on which this wiki article will focus - and Functional Near-Infrared Spectroscopy (fNIRS). The latter is a technique where near-infrared light is projected onto the brain and the wavelengths of the reflections produced by the brain are analyzed. For a detailed introduction it is recommended to take a look at "Brain-Computer Interfaces" by Jackson and Mappus ([JM10]).


Electroencephalography (EEG)

EEG

What is Electroencephalography (EEG)

Electroencephalography (EEG) is the recording of electrical activity produced by the brain. When neurons are activated, electrical currents flow. This activity can be recorded by placing electrodes on the scalp (electroencephalogram) or directly on the brain (electrocorticogram). The latter, however, is very invasive and generally not suitable for BCIs. EEG is a fast method with good temporal resolution and hence applicable for real-time processing. In contrast to EEG, there are methods with a much better spatial resolution, such as computed tomography (CT) or functional magnetic resonance imaging (fMRI).

The first to discover electrical activity in the brain was Richard Caton in 1875; he examined the exposed brains of monkeys and rabbits. Almost 50 years later, in 1924, Hans Berger came along and became the best-known person in the history of EEG. He used his radio equipment to amplify the signals produced by human brains and was the first to record the electrical activity on paper.
Berger's EEG Recordings [TM02]
Additionally, he found that the activity he recorded changed depending on the state of the brain. The name "electroencephalogram" was introduced by him, and after his first discoveries he published several papers with the title "Über das Elektrenkephalogramm des Menschen". The last two persons we want to mention are Adrian and Matthews, who introduced brain waves (see section "Brain Waves") and became famous for the so-called "alpha rhythm": signals with frequencies between 10 and 12 Hz.

Measuring Electroencephalography

Electrical Activities

When measuring electrical activity it is possible to distinguish between three types:

  • Spontaneous Activity: describes the activity that is continuously produced by the brain. The signals are classified by their frequency into so-called brain waves (see next section).
  • Event-Related Potentials (ERP): ERPs cannot be recorded directly from the brain; they occur after certain stimuli (like auditory or visual stimuli). As the signal is rather weak and noisy, the stimulus must be presented several times and the recordings averaged appropriately to improve the signal-to-noise ratio (see the sketch after this list).
  • Activity produced by single neurons: It is also possible to record the activity of single neurons; usually microelectrodes are implanted in the head. For clinical applications or neuroscience research this might be useful; for a BCI, however, it is too invasive.
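
To make the averaging step for ERPs concrete, here is a minimal sketch (C#, not from the original article): several recorded trials of equal length, aligned to the stimulus onset, are averaged sample by sample, so the stimulus-locked potential remains while uncorrelated noise shrinks.

    // Minimal sketch: average several ERP trials sample by sample.
    // "trials" holds one recorded epoch per stimulus presentation,
    // all of equal length and aligned to the stimulus onset.
    static double[] AverageTrials(double[][] trials)
    {
        int length = trials[0].Length;
        double[] average = new double[length];
        foreach (double[] trial in trials)
            for (int i = 0; i < length; i++)
                average[i] += trial[i];
        for (int i = 0; i < length; i++)
            average[i] /= trials.Length; // uncorrelated noise shrinks roughly with 1/sqrt(N)
        return average;
    }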

Brain Waves

Brain waves describe the signals produced by spontaneous activity. The signals are characterized by their frequency (Hz) and their amplitude (μV). The frequencies range from 0.5 to 50 Hz, while the amplitudes range between 0 and 100 μV when activity is recorded from the scalp. The amplitudes are much bigger when EEG is measured directly from the brain - between 1000 μV and 2000 μV.
Brain Waves [MP95]
Brain waves are grouped into the following frequency bands (a small classification sketch follows the list):

  • delta (0.5 - 4 Hz): mostly related to deep sleep and unconscious processes
  • theta (4 - 8 Hz): mostly related to daydreaming, creativity, intuition, memory recall, emotions and sensations
  • alpha (8 - 13 Hz): mostly related to cortical inactivity, mental idleness and attentional demand
  • beta (13 - 30 Hz): mostly related to cognitive processes, decision making, problem solving and information processing
  • gamma (30 - 50 Hz): mostly related to very high concentration
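
The following C# sketch simply encodes the list above. Note that the exact band edges vary slightly between sources, and boundary values are assigned to the lower band here.

    // Sketch: map a frequency (Hz) to the band names listed above.
    static string FrequencyToBand(double hz)
    {
        if (hz < 0.5) return "below delta";
        if (hz <= 4)  return "delta";
        if (hz <= 8)  return "theta";
        if (hz <= 13) return "alpha";
        if (hz <= 30) return "beta";
        if (hz <= 50) return "gamma";
        return "above gamma";
    }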

When spontaneous activity is measured in awake adults, alpha, beta and gamma waves can be observed directly, while theta and delta waves mainly occur during sleep. However, if the signals are recorded and processed appropriately, theta and delta waves can also be measured, though not in real time.

Beta activity is best seen when recorded from the frontal or parietal lobe (see the section about the human brain); for alpha waves the occipital lobe is most suitable.

Trying to produce some alpha waves

How is it measured

To measure brain activity, electrodes are (usually) placed on the scalp and the potential differences between those electrodes are measured. The electrodes can be put directly on the head, or e.g. an electrode cap can be used. There are two basic ways to measure the potential differences (a small sketch follows below):

  • Bipolar: two electrodes are chosen and the difference between those two is computed
  • Unipolar: all electrodes are compared to one reference electrode, or the potential of all electrodes is averaged

A usual setup includes an amplifier to which the electrodes are connected. The brain signals are rather weak; hence, the amplification is necessary. The electrical recording also needs to be transformed into a digital signal - an analog-to-digital converter (ADC) is used for that. Finally, the signal is recorded by e.g. a computer.
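
The two referencing schemes described above can be illustrated with a small sketch (C#, illustrative only; the values represent one time sample per electrode, in microvolts).

    // Bipolar: difference between two chosen electrodes.
    static double Bipolar(double electrodeA, double electrodeB)
    {
        return electrodeA - electrodeB;
    }

    // Unipolar with an averaged reference: every electrode is
    // compared to the mean potential of all electrodes.
    static double[] CommonAverageReference(double[] electrodes)
    {
        double mean = 0;
        foreach (double v in electrodes)
            mean += v;
        mean /= electrodes.Length;

        double[] referenced = new double[electrodes.Length];
        for (int i = 0; i < electrodes.Length; i++)
            referenced[i] = electrodes[i] - mean;
        return referenced;
    }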

Electrode Placement

A general question when measuring EEG is where to place the electrodes. To obtain comparable results the 10-20 system was introduced, so that scientists use the same positions from which brain activity is recorded.
10-20 System [MP95]
The 10-20 system is a guideline for electrode positions; the positions are named according to their placement in relation to the lobes of the brain. Further, positions are tagged with numbers - odd numbers refer to the left side (hemisphere) of the brain while even numbers refer to the right. For the midline between the two hemispheres the letter Z is used, indicating zero.

The letters used for the positions are:

  • F for the frontal lobe
  • P for the parietal lobe
  • O for the occipital lobe
  • T for the temporal lobe
  • C is used additionally and is an abbreviation for central

For the transitions between two lobes, combinations of the letters are often used (e.g. FT → between the frontal and temporal lobe).

The numbers 10 and 20 refer to percentages of two lengths: the first length is measured between the nasion (right above the nose) and the inion (the small dent at the back of the head), the second from one ear (the preauricular point, a small dent slightly above the ear) to the other. The points can be seen in the pictures on the right. The default procedure is to measure those two lengths, then take 50% of the front-to-back length starting from the nasion and mark Cz in the middle of the head. A guide to mark positions can be found here. Below, all positions that can be used in the 10-20 system are shown.
Full 10-20 System [MP95]
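
The percentage logic behind the name can be made concrete with a small sketch (C#; the labels and percentages follow the standard 10-20 midline layout, the helper itself is purely illustrative).

    // Sketch: midline electrode positions in the 10-20 system, as
    // distances from the nasion along the nasion-inion line.
    // The steps are 10%, 20%, 20%, 20%, 20%, 10% - hence the name.
    static void PrintMidlinePositions(double nasionToInionCm)
    {
        string[] labels  = { "Fpz", "Fz", "Cz", "Pz", "Oz" };
        double[] percent = { 0.10, 0.30, 0.50, 0.70, 0.90 };
        for (int i = 0; i < labels.Length; i++)
            Console.WriteLine("{0}: {1:F1} cm from the nasion",
                              labels[i], nasionToInionCm * percent[i]);
    }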

Challenges

When measuring EEG, different aspects can influence the recorded signal, depending on your setup. Common sources for artifacts in the signal are:

  • Eye movement, meaning when one is looking to the left or right, up or down, and blinking. This is recorded as Electrooculography (EOG). If electrodes are placed over the frontal lobe area, the EOG activity is usually visible in the EEG recordings. Hence, those artifacts need to be filtered out, or at least taken into consideration when interpreting the signal (see the sketch below).
  • Muscle activity (Electromyography (EMG)), especially facial muscle activity, also produces artifacts. Common sources are, for instance, the muscles controlling the eyebrows (e.g. winks) or the muscle responsible for closing the jaw. For the latter it is suggested to leave the mouth slightly open during recording.
  • Other artifacts can be produced by the skin. If electrodes are placed on the scalp and the skin is not cleaned with e.g. alcohol, the signal can be influenced, likewise if sweat forms beneath the electrodes. Additionally, incorrect attachment of the electrodes, or moving/touching them during the recording session, can cause problems.
Producing some artifacts
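
As a simple illustration of dealing with EOG artifacts, the following C# sketch marks samples of a frontal channel whose amplitude exceeds a threshold. The ±75 μV threshold is an illustrative assumption; real artifact rejection is usually more sophisticated (e.g. regression-based correction).

    // Naive blink/EOG artifact marker for one frontal channel.
    static bool[] MarkBlinkSamples(double[] frontalMicrovolts, double thresholdMicrovolts)
    {
        bool[] artifact = new bool[frontalMicrovolts.Length];
        for (int i = 0; i < frontalMicrovolts.Length; i++)
            artifact[i] = Math.Abs(frontalMicrovolts[i]) > thresholdMicrovolts;
        return artifact; // marked samples can be rejected or corrected
    }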


The Human Brain

As we measure brain activity with EEG, it is also interesting to look at the basic structure of the human brain and its functionality. The brain and the spinal cord form the main part of the central nervous system (CNS). In the following we shortly look at the anatomy of the brain and the functionalities related to its different areas.

Brain structure [KA13]

Anatomy

The brain consists of three main parts: the cerebrum (or forebrain), the cerebellum (or hindbrain) and the brain stem. The cerebrum has a surface layer called the cerebral cortex and is divided into four lobes (frontal, parietal, temporal, occipital). Further, it is divided into the left and right hemisphere - basically the left and the right part of the cerebrum. At the border between the frontal and parietal lobe lie the motor cortex and the somatic sensory cortex. The hippocampus partly belongs to the temporal lobe, as do the amygdaloid nuclei.

Left and right hemisphere

Functionality

As the cerebrum is the part of the brain most relevant for EEG, its functionalities are described in more detail below. The cerebellum is mostly related to the coordination of voluntary muscle movement and to maintaining balance, while the brain stem is related to unconscious processes like the control of respiration, the cardiovascular system (heart) and hormone secretion.
The four lobes [KA13]
The functionality of the cerebrum can be described as follows:

  • Frontal Lobe: related to motor functions (movement control), motivation, short term memory and planning of future actions.
  • Parietal Lobe: related to the interpretation of sensory information (somatic sensations). For instance, if a human touches something, the somatic sensory cortex interprets what the person feels; hence, the brain is able to recognize the shape of an object without seeing it. Temperature is another example.
  • Occipital Lobe: related to visual input decoding. For instance, when EEG is measured from the two electrode positions O1 and O2 according to the 10-20 system, it can easily be seen in the alpha frequency band whether a person closes his or her eyes.
  • Temporal Lobe: related to hearing, declarative memory and emotion. For the latter the hippo campus is also important.

Brain-Computer Interfaces & Electroencephalography

Many of the applications for a BCI come from the clinical application of EEG (e.g. coma monitoring). In the following the different areas according to Jackson and Mappus ([JM10]) are shortly presented.

General Application Areas

Jackson and Mappus distinguish between three general areas: assistive technology, cognitive diagnostics/augmented cognition, and recreation.

Assistive Technology: As the name indicates, these are applications that assist humans in different ways. They are often aimed at people with disabilities of any form (e.g. paralysis). The main subcategories are:

  • Communication: Use the brain activity as a new communication channel between a system and a user (e.g. for mute people)
  • Environmental Control: Use the brain activity to directly control things around a person - his or her environment. Imagine someone sitting in a wheelchair, not able to move anything except the head. A BCI could offer such a person the possibility to do things that are quite normal for a healthy person (e.g. turn a device on/off).
  • Mobility: Use the brain activity to control the movement of other devices to increase the person's mobility. This could mean controlling something the person is normally not able to control (e.g. robots for precise tasks, a wheelchair for paralyzed people).

Cognitive Diagnostics & Augmented Cognition: These are applications that analyze processes of the brain, which can then be used in different scenarios as seen below:

  • Coma Detection: If a person is in a coma, his or her brain activity can be monitored to find indications of regaining consciousness. Monitoring can also be used to detect e.g. brain death.
  • Meditation Training: If a person wants to train control of his or her brain, EEG can be used to give feedback on the current state during training. Neurofeedback (or, more generally, biofeedback) is a technique often used in psychology, where conditions like ADHD or depression can be addressed with feedback.
  • User Experience: Analyzing the mental state of a user provides a new technique for measuring user experience (e.g. assessing mental workload, or emotions like frustration during the interaction with a system).
  • Attention Monitoring: A BCI could monitor a person's attentional state and give a warning if the state is not suitable for the task currently performed (e.g. detect drowsiness when driving a car).

Recreation: BCIs can also be used for entertainment and relaxation.

  • Gaming: A video game could use the brain activity as an alternative input device; this way, paralyzed people could also gain access to games. The other way would be to assess the player experience and let the game adapt to it.
  • Virtual Reality: While this might not always be distinguishable from games, with a BCI one could explore virtual worlds (e.g. navigate through a museum).
  • Creative Expression: Another way to use the brain activity is to create art with it (e.g. control musical output or brain painting).

Examples

Neurofeedback (Biofeedback): A person can change his or her mental state by training emotions and getting direct feedback on those changes - for example through the flow of a game, where obstacles appear if the user does not have the mental state required to advance, or through an armband which changes color depending on the user's emotions.

Musical neurofeedback for treating depression in elderly people ([RR15]): by encouraging participants to increase the loudness and tempo of musical pieces, they were encouraged to increase their arousal and valence, and thus direct their emotional state towards positive feelings and alleviate depression. The arousal (wakefulness) level was determined by computing the ratio of the beta (12-28 Hz) and alpha (8-12 Hz) brainwaves. The recordings were fed to an expressive music performance system which calculates appropriate expressive transformations of timing, loudness and articulation. The final results of the research showed a 17.2% average improvement in BDI scores and decreased relative alpha activity in the left frontal lobe.
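
The arousal measure used in the study can be sketched as follows (C#; illustrative, assuming a power spectrum of one EEG channel is already available, e.g. from an FFT, with a known frequency resolution per bin).

    // Band power: sum the spectrum bins that fall into [lowHz, highHz).
    static double BandPower(double[] spectrum, double hzPerBin, double lowHz, double highHz)
    {
        double power = 0;
        for (int bin = 0; bin < spectrum.Length; bin++)
        {
            double hz = bin * hzPerBin;
            if (hz >= lowHz && hz < highHz)
                power += spectrum[bin];
        }
        return power;
    }

    // Arousal as the beta/alpha ratio described above.
    static double ArousalRatio(double[] spectrum, double hzPerBin)
    {
        double alpha = BandPower(spectrum, hzPerBin, 8, 12);
        double beta  = BandPower(spectrum, hzPerBin, 12, 28);
        return beta / alpha; // higher ratio = higher arousal
    }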

Emotion Recognition: EEG can help with emotion recognition, which is useful not only for evaluating emotions during the interaction with a system or place, but also for adapting the interaction itself, for example when learning a game.

Driving a car:

Game interaction:


Emotiv Epoc+

Specification

The headset uses 16 electrodes, divided equally over the two hemispheres. They are placed over parts of the brain that are important for recording certain emotions and mental states. One of its downsides is that it does not measure any data from the midline, but it does record interesting and important signals at some of the key locations it covers. It transmits data in real time - with a slight delay at times, depending on the signal - over a wireless 2.4 GHz connection. It has a sampling rate of 2048 Hz and a bandwidth of 0.2 - 45 Hz.

Functionality

The headset can not only control your mouse movement, it is also able to read commands through its cognitive (13), expressive (13) and affective (3) detections. The cognitive features concern the thought process of the user: the headset can interpret conscious thoughts and intents such as rotating or moving an object. It can recognize facial expressions like blinking or smiling, as the user moves his facial muscles and the arms of the headset near the face pick up the data. It can also detect affective states such as excitement or engagement, which are interpreted from the wave activity in certain parts of the brain.

The headset can be used for different purposes, but it is limited in the number of actions it can perform. The actions also have to be trained to the user's mind and chain of thoughts to get good results. Since the user can select the order of the commands and a command can be mapped to an actual action in a system, these limitations can be worked around and the use of the headset extended to almost any function we want or need.

Emotiv & EEG

So the headset provides a lot of functionality; however, not all of it is actually a result of brain activity. The facial expressions along with the eye movements are usually artifacts, hence the only thing the Emotiv does is filter them out and provide them as functionality. The affective states are detected from brain signals (as far as we know, since the actual implementation is not public), and the cognitive commands also rely on brain activity. Yet the latter can be influenced by muscle activity (e.g. you could train a command together with a related hand movement; in the demonstration video, moving forward ("push") worked well in combination with leaning forward, and the same held for moving back ("pull")).

SDKs

Minimum Hardware and Software requirements:

  • 2.4 GHz Intel Pentium 4 processor (or equivalent)
  • Microsoft Windows XP with Service Pack 2 or Microsoft Windows Vista (also running on Mac OS, and the Community SDK on Ubuntu)
  • 1GB RAM
  • 50 MB available disk space
  • one or two unused USB 2.0 ports

There are different SDKs for the Emotiv with slightly different possibilities. On the one hand, there is the free SDK Lite, available for everyone, with which you can use all the commands provided with the device (e.g. you can train commands, read the affective states and so on). However, with SDK Lite it is not possible to access the raw EEG data; for that, the Research SDK is necessary, which can be purchased together with the headset. There also exists a community SDK which tries to make access to the raw EEG data possible (https://github.com/Emotiv/community-sdk).

You can basically use any language you want, as long as you can integrate the .dll (C++) files of the SDK. Wrappers are provided for C#, Java, Python and Matlab.

SDK Tools

Emotiv Control Panel

With the Control Panel you can test all the features of the Emotiv headset. You can create a profile for yourself, train some commands and e.g. use them in combination with EmoKey (see below) as an alternative input. You can also monitor the affective states and see how facial expressions are detected; additionally, you can enable mouse control with the gyroscope.

EmoComposer

The EmoComposer can be used to simulate all functionality of the headset, so it is not necessary to actually wear the headset to test e.g. an application.

EmoKey

EmoKey links the Emotiv technology to your applications by easily converting detected events into any combination of keystrokes. EmoKey is a nonintrusive, lightweight, background process that runs behind your existing games or applications. EmoKey lets you create mappings that define how detections are converted to keystroke combinations. Your mappings can then be saved and shared.

TestBench (included in the Research SDK only)
  • Real-time display of the Emotiv headset data stream, including EEG, contact quality, FFT, gyro (if fitted – custom option), wireless packet acquisition/loss display, marker events, headset battery level. Record and replay files in binary EEGLAB format.
  • Command line file converter included to produce .csv format.
  • Define and insert timed markers into the data stream, including on-screen buttons and defined serial port events. Markers are stored in EEG data files. Marker definitions can be saved and reloaded. Markers are displayed in real time and playback modes.
  • Export screenshot for documentation
Introduction to the different Tools

Below you get an introduction to the different tools of the Research SDK; the SDK Lite versions are slightly different in terms of their design and some functionality. However, it is recommended to just test the tools yourself.

SDK Basics to build Software (Emotiv API and EmoEngine)

(Information and pictures taken from the SDK User Manual)

The Emotiv API is exposed as an ANSI C interface that is declared in 3 header files (edk.h, EmoStateDLL.h, edkErrorCode.h) and implemented in 2 Windows DLLs (edk.dll and edk_utils.dll). C or C++ applications that use the Emotiv API simply include edk.h and link with edk.dll.

The Emotiv EmoEngine refers to the logical abstraction of the functionality that Emotiv provides in edk.dll. The EmoEngine communicates with the Emotiv headset, receives preprocessed EEG and gyroscope data, manages user-specific or application-specific settings, performs post-processing, and translates the Emotiv detection results into an easy-to-use structure called an EmoState. Emotiv API functions that modify or retrieve EmoEngine settings are prefixed with “EE_.”

Example of integrating the EmoEngine and the Emotiv EPOC with a videogame:

An EmoState is an opaque data structure that contains the current state of the Emotiv detections, which, in turn, reflect the user’s facial, emotional and cognitive state. EmoState data is retrieved by Emotiv API functions that are prefixed with “ES_.” EmoStates and other Emotiv API data structures are typically referenced through opaque handles (e.g. EmoStateHandle and EmoEngineEventHandle). These data structures and their handles are allocated and freed using the appropriate Emotiv API functions (e.g. EE_EmoEngineEventCreate and EE_EmoEngineEventFree).

The picture to the right shows a high-level flow chart for applications that incorporate the EmoEngine. During initialization, and prior to calling Emotiv API functions, your application must establish a connection to the EmoEngine by calling EE_EngineConnect or EE_EngineRemoteConnect. Use EE_EngineConnect when you wish to communicate directly with an Emotiv headset. Use EE_EngineRemoteConnect if you are using SDKLite and/or wish to connect your application to XavierComposer or Emotiv Control Panel.

The EmoEngine communicates with your application by publishing events that can be retrieved by calling EE_EngineGetNextEvent(). For near real-time responsiveness, most applications should poll for new EmoStates at least 10-15 times per second. This is typically done in an application’s main event loop or, in the case of most videogames, when other input devices are periodically queried. Before your application terminates, the connection to EmoEngine should be explicitly closed by calling EE_EngineDisconnect().

There are three main categories of EmoEngine events that your application should handle (a small sketch follows the list):

  • Hardware-related events: Events that communicate when users connect or disconnect Emotiv input devices to the computer (e.g. EE_UserAdded).
  • New EmoState events: Events that communicate changes in the user’s facial, cognitive and emotional state. You can retrieve the updated EmoState by calling EE_EmoEngineEventGetEmoState(). (e.g. EE_EmoStateUpdated).
  • Suite-specific events: Events related to training and configuring the Cognitiv and Expressiv detection suites (e.g. EE_CognitivEvent).
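
In the C# wrapper these categories map to events on the EmoEngine singleton. The following sketch subscribes to one event per category; the exact event and argument names here are assumed from the wrapper as we know it, so check the User Manual before relying on them.

    // Sketch: one handler per EmoEngine event category (C# wrapper).
    EmoEngine engine = EmoEngine.Instance;

    // Hardware-related: a headset/user was connected.
    engine.UserAdded += (sender, e) =>
        Console.WriteLine("user added: " + e.userId);

    // New EmoState: facial, cognitive or emotional state changed.
    engine.EmoStateUpdated += (sender, e) =>
        Console.WriteLine("new EmoState for user " + e.userId);

    // Suite-specific: Cognitiv training finished successfully.
    engine.CognitivTrainingSucceeded += (sender, e) =>
        Console.WriteLine("Cognitiv training succeeded");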

Further information about programming with the SDK is provided by the User Manual.

Code Examples

Here is a short code example from the documentation of the SDK Lite; however, it is slightly adjusted so that it connects to the EmoComposer with RemoteConnect() instead of connecting to the EmoEngine directly.

The example basically shows how to connect to the Composer or Engine and how to add your own handler for EmoEvents.

    using System;
    using Emotiv;   // C# wrapper for the Emotiv SDK (edk.dll)

    class Program
    {
        EmoEngine engine;
        // default port of the EmoComposer
        static ushort composerPort = 1726;

        static void Main(string[] args)
        {
            Program program = new Program();
            Console.WriteLine("hello remote engine connected");
            program.mainLoop();
        }

        void mainLoop()
        {
            engine = EmoEngine.Instance;
            // register our own handler for EmoState updates
            engine.EmoStateUpdated += new EmoEngine.EmoStateUpdatedEventHandler(engine_EmoStateUpdated);
            // connect to the EmoComposer running on this machine
            engine.RemoteConnect("127.0.0.1", composerPort);

            while (true)
            {
                // process buffered events for up to 1000 ms per iteration
                engine.ProcessEvents(1000);
            }
        }

        void engine_EmoStateUpdated(object sender, EmoStateUpdatedEventArgs e)
        {
            // handle the first two users the same way
            if (e.userId == 0 || e.userId == 1)
            {
                EmoState es = e.emoState;
                Console.WriteLine("{0} ; engagement: {1}", e.userId, es.AffectivGetEngagementBoredomScore());
            }
        }
    }

Below you can see the basic code necessary to access the raw EEG data with the Research SDK, taken from the code examples provided with the SDK.

            // enable data acquisition for this user
            engine.DataAcquisitionEnable((uint)userID, true);

            // ask for up to 1 second of buffered data
            engine.EE_DataSetBufferSizeInSec(1);

            // get the current EEG data: one sample array per channel
            Dictionary<EdkDll.EE_DataChannel_t, double[]> data = engine.GetData((uint)userID);
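
Once the dictionary is retrieved, each entry maps a channel (electrode positions, gyro, counter) to its buffered samples. A small, purely illustrative sketch of iterating over it:

            // Sketch: iterate over the buffered raw data per channel.
            if (data != null)
            {
                foreach (KeyValuePair<EdkDll.EE_DataChannel_t, double[]> channel in data)
                {
                    double[] samples = channel.Value;
                    Console.WriteLine("{0}: {1} samples", channel.Key, samples.Length);
                }
            }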
 

The two examples above show two ways the Emotiv can be used: either you use the built-in functionality (e.g. the affective states), or you take the raw data and do whatever you want with it (e.g. write your own emotion recognition algorithm).

Many more examples are available, also for languages other than C# (C++, Python, Java, (Matlab)).

How to set it up

Alternative Devices

A paper ([HD14]) examined the quality of the Emotiv device along with three other commercially available devices and compared them.

  • ActiveTwo: records data without a reference, which is later manually subtracted in the preprocessing stage. It uses a cap with 64 electrodes placed at the 10-20 positions. The sampling rate is 2048 Hz or greater and the bandwidth is user-defined.
  • B-Alert X10: consists of a flexible plastic strip connecting nine electrode sites plus two electrically-linked electrodes which connect to the head unit via wires to provide the signal reference. It offers a TCP/IP connection, code triggering via serial or parallel port directly at the wireless receiver, as well as a C++ SDK and MATLAB routines compiled in C for direct communication with the device or for software development. The sampling rate is 256 Hz and the bandwidth is between 0.1 Hz and 100 Hz.
  • HMS: uses 9 electrodes in direct contact with the scalp, without further supplies. It has a sampling rate of 240 Hz and a bandwidth between 0.02 Hz and 120 Hz.

Features:

  • Fit: The ActiveTwo fits the general population when it comes to head size; the B-Alert X10 comes in different sizes but offers no adjustments; the HMS has a complex system of spring-loaded joints that works well only with adult heads; the EPOC has arms that flex outward for large heads but do not fit small heads well;
  • Weight: The B-Alert X10 and EPOC are light, the ActiveTwo close behind and the HMS considerably heavier. The EPOC, though, has problems with uneven weight distribution on the scalp;
  • Comfort: The EPOC is uncomfortable due to pressure on the scalp, unlike the B-Alert X10;
  • Setup: The HMS setup time is short, but it takes more effort to establish the signal. The EPOC and B-Alert X10 take a relatively small amount of time for manual manipulation of the electrodes. The ActiveTwo requires the most time due to the larger number of channels to set up;
  • Signal: The ActiveTwo is stable and has a reliable signal, whereas the other three can move around when the subject moves enough;
  • Electrode distribution: The ActiveTwo distributes electrodes evenly. For the EPOC and HMS the inter-electrode distance is fixed, limiting the possible compensation for inter-electrode distance;
  • Data recording: The EPOC lacks recording locations around the midline (area covered). The HMS is the most accurate due to the large diameter of its sensors;
  • Data reports: The HMS and EPOC report in real time, whereas the B-Alert X10 requires a manual button click to begin calculations;
  • User preference: In a study where participants tested all four devices, the B-Alert X10 was preferred most, followed by the ActiveTwo.

They all offer methods for logging event timing and coding, but the EPOC's method of recording event triggers over the serial port is highly susceptible to jitter and delay, because it relies on the PC's operating system to coordinate the serial port with the EEG recording.

Sources & Further Reading

Papers and Books

[TM02] Teplan, Michal. "Fundamentals of EEG measurement." Measurement Science Review 2.2 (2002): 1-11.

[MP95] Malmivuo, Jaakko; Plonsey, Robert. Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. Chapter 13 - "Electroencephalography". Oxford University Press, 1995. (Online version available at http://www.bem.fi/book/)

[KA13] Kandel, Eric R., et al. Principles of Neural Science. Chapter 1 - "The Brain and Behavior". 5th ed. New York; London: McGraw-Hill Medical, 2013.

[JD09] DeSesso, John M. "Functional Anatomy of the Brain." In: Metabolic Encephalopathy. Springer New York, 2009. 1-14.

[FMD13] Fraga, Tania; Pichiliani, Mauro; Louro, Donizetti. "Experimental art with brain controlled interface." In: Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. Springer Berlin Heidelberg, 2013. 642-651.

[LW14] Li, Gang; Chung, Wan-Young. "Estimation of Eye Closure Degree Using EEG Sensors and Its Application in Driver Drowsiness Detection." Sensors 14.9 (2014): 17491-17515.

[DA13] Dutta, Arindam, et al. "A low-cost point-of-care testing system for psychomotor symptoms of depression affecting standing balance: a preliminary study in India." Depression Research and Treatment 2013 (2013).

[HD14] Hairston, W. David, et al. "Usability of four commercially-oriented EEG systems." Journal of Neural Engineering 11.4 (2014): 046018.

[HY14] Hao, Yu, et al. "A visual feedback design based on a brain-computer interface to assist users regulate their emotional state." In: CHI '14 Extended Abstracts on Human Factors in Computing Systems. ACM, 2014. 2491-2496.

[JM10] Jackson, Melody Moore; Mappus, Rudolph. "Applications for brain-computer interfaces." In: Brain-Computer Interfaces. Springer London, 2010. 89-103.

[HY13] Huang, Yiyuan. "Hybridization between brain waves and painting." In: Proceedings of the Virtual Reality International Conference: Laval Virtual. ACM, 2013. 20.

[NL09] Nacke, Lennart. Affective Ludology: Scientific Measurement of User Experience in Interactive Entertainment. 2009. 86-89 & Chapter 5 - "Boredom, Immersion, Flow: Psychophysiological assessment of affective level designs in a First-Person Shooter game".

[LSN10] Liu, Yisi; Sourina, Olga; Nguyen, Minh Khoa. "Real-time EEG-based human emotion recognition and visualization." In: Cyberworlds (CW), 2010 International Conference on. IEEE, 2010. 262-269.

[AP13] Aspinall, Peter, et al. "The urban brain: analysing outdoor physical activity with mobile EEG." British Journal of Sports Medicine, 2013, bjsports-2012-091877.

[RR15] Ramirez, Rafael, et al. "Musical neurofeedback for treating depression in elderly people." Frontiers in Neuroscience 9 (2015).

[MR13] Mandryk, Regan L., et al. "Games as neurofeedback training for children with FASD." In: Proceedings of the 12th International Conference on Interaction Design and Children. ACM, 2013. 165-172.

Websites
