Improving Brain Computer Interface Prediction by Measuring Eye Position and Focus

Abstract

Including information about a person's physiological state has been shown to affect brain computer interface (BCI) performance. Measurement of eye position can provide information about the general location of an intended movement.

Brain Computer Interfaces (BCIs) measure brain activity and allow subjects to control a computer or robotic device purely through motor imagery. The accuracy of BCI devices varies greatly between subjects, and for many users performance does not reach a functional level [2].

Measurement noise and the limits of sensing technology are among the factors that make BCI decoding challenging. Including more information about the person's physiological state has been shown to affect BCI performance [2]. In general, when one reaches for or manipulates an object, the eyes are fixed upon it. Measurement of eye position can therefore provide information about the general location of an intended movement, and measurement of eye focus may provide information about its depth [8]. Combining BCI direction decoding with filtering informed by eye position and pupil dilation may increase overall BCI performance.

Introduction

Measuring activity in the sensorimotor cortex (SMC) is the basis of BCI function: brain activity is translated into specific commands for computers or actuators. Recording sensorimotor rhythms (SMRs) allows the BCI to extract information about the subject's intended movement and, in turn, to target that location with a cursor or robotic device. Although people can in general voluntarily modulate their SMRs [1], decoding accuracy is highly subject-dependent and therefore requires many calibration trials [2].

Through muscle stimulation and robotics, BCIs have achieved complex movement capabilities relying purely on visual feedback [12]. When planning a movement, the brain uses visual information about the target to construct a desired trajectory, which is then used to execute the movement [11]. Given this strong dependence on visual feedback, physiological information from the eye can provide clues about the subject's intent [2].

Eye tracking devices allow patients with locked-in syndrome to communicate by following eye movement with infrared sensing. Video recordings are processed by the software as well, providing a canvas that the patient manipulates exclusively through eye movement. Similar to virtual control in BCIs, eye tracking devices effectively turn the eye into a cursor on a computer screen.

These devices do not measure eye focus directly, but pupil size is correlated with focus distance [10], and relative changes in pupil size can be recorded using the video hardware already present on eye trackers. With this added measure of depth, a preliminary target in space can be established and used by the filtering algorithms during decoding.

The Motivation

As stated in [2], many papers only include results for participants who successfully achieved BCI control. Many challenges remain, such as BCI illiteracy, measurement noise, variability in performance, and long training periods [1, 2, 3, 4, 6, 11]. In fact, research has shown that 15-30% of participants experience BCI illiteracy, in which no control is achieved [2]. This phenomenon leads to long, frustrating training sessions that ultimately do not result in BCI control. Those who do achieve control face training periods ranging from 30 minutes to 2 weeks, and the resulting movements are generally slow and unrefined [11].

Activity in the SMC has been observed to encode general commands for a motion, which are carried out more precisely by the upper spinal cord [13]. Exclusive measurement of SMC activity may therefore miss valuable information, contributing to poor BCI performance or illiteracy [13].

Studies of BCI performance that monitored neurophysiological signals have shown both positive and negative effects depending on mood and motivation [2, 3, 6]. Variability between subjects remained significant even when neurophysiological predictors were included [2, 3]. Noise generated by occipital neural activity was detected during BCI use in [2], and further experimentation revealed that visual idling led to increased distortion of SMR measurements [2]. Interestingly, engaging the visual senses in an activity reduced the noise generated by occipital idling rhythms [2].

Motivational factors also have a significant impact on BCI performance [3]. Subjects tasked with uninteresting or unchallenging goals showed low BCI performance in [3]; when the same subjects were given a simple simulated environment, performance was significantly better. Although the mechanism is still unknown, factors such as novelty, challenge, and the fun or interest of a task have been associated with these results [3]. The neurophysiological responses observed in [2, 3] are valuable inputs to the training paradigm and filtering strategy used in BCI design.

The Hypothesis

This paper focuses on physiological clues to a subject's intended movement obtained by measuring pupil position and diameter. There is a strong dependence on visual feedback during the planning and execution of movements in both able-bodied and BCI-dependent subjects [2, 11]. Including physiological information from the eye may therefore aid in the design of BCI filtering strategies.

Highly accurate and robust eye tracking devices have been developed that are low cost and noninvasive. These devices do not measure eye focus directly, but pupil size can be determined easily from live imaging of the eye, and the correlation between relative changes in pupil size and focus provides a relative measure of target depth [8, 9, 10]. Tracking pupil position and focus would narrow the range of the intended motion while also providing a general target in the subject's 3D space. Combining current BCI decoding strategies with information from eye position and depth perception may therefore improve performance: with this information, decoding algorithms can filter SMR activity more selectively and define a threshold around a preliminary target prediction.

The Technology

Planar Eye Tracking Devices

Projects such as the EyeWriter and the GazeTracker provide open-source, publicly licensed software for eye tracking. The complete setup uses infrared LEDs and video recording to sense pupil position, and the software filters and refines the data in order to track eye position and velocity [14, 16]. In addition to infrared sensing, video is processed using common webcam hardware [15, 16].
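
To make this processing step concrete, the sketch below finds the pupil in a single eye-camera frame as the darkest roughly circular blob and returns its center and diameter in pixels. It is a minimal illustration using the OpenCV library with a fixed, illustrative threshold; the actual EyeWriter and GazeTracker pipelines are more sophisticated.

    import cv2

    def detect_pupil(frame):
        # Under IR illumination the pupil shows up as the darkest,
        # roughly circular region of the eye image.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)    # suppress sensor noise
        # Fixed threshold is illustrative; real trackers adapt it per subject.
        _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                             # pupil not found (e.g., blink)
        pupil = max(contours, key=cv2.contourArea)  # largest dark blob
        (x, y), radius = cv2.minEnclosingCircle(pupil)
        return (x, y), 2.0 * radius                 # center (px), diameter (px)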

Both devices are noninvasive; the GazeTracker additionally centralizes its electronics and compensates for head movement, and EMG-based clicking has been demonstrated with its software [15]. Both devices have successfully enabled patients with ALS to communicate, and even to produce artwork, purely through eye movement [14, 15, 16].

Both devices use a computer interface that allows subjects to select letters and words in a text processor. The EyeWriter also provides a drawing canvas, and its refined, smooth tracking allows subjects to produce fine, detailed drawings.

Depth Measurement

Many environmental factors influence the physiological responses of the eye; the discussion here is limited to effects on pupil diameter [7, 8, 9]. Previous work has studied the effects of illumination, focus distance, and mental alertness on pupil accommodation [9]. The objective here is to assess the possibility of determining object distance from pupil diameter alone.

The experiment in [9] directly measured pupil diameter at systematic depth-fixation intervals. Figure 1 shows a monotonic increase in pupil size over the range of focus distances, and the data suggest a linear relationship over the region from $10\,cm$ to $50\,cm$ from the body [9]. These results provide a straightforward method for calibrating and predicting depth fixation in patients using existing eye tracking devices.

Figure 1: Experimental data relating pupil size to depth fixation distance, image from [9].

Obtaining the data in [9] required developing software methods to measure pupil size from recorded images. These methods, combined with the linear model defined in [9], can be ported directly to the eye trackers discussed above.
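
As a minimal sketch of that porting step, the code below fits a linear model to calibration pairs of pupil diameter and known fixation distance, then predicts depth from new diameter measurements. All numbers are illustrative placeholders, not data from [9].

    import numpy as np

    # Calibration pairs: pupil diameter (px) measured while the subject
    # fixates targets at known distances (cm). Values are illustrative.
    diam_px = np.array([38.0, 42.5, 47.0, 51.5, 55.0])
    dist_cm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

    # Least-squares line, mirroring the linear model reported in [9]
    # over the 10-50 cm range.
    slope, intercept = np.polyfit(diam_px, dist_cm, deg=1)

    def depth_from_pupil(diameter_px):
        # Predict fixation distance (cm), valid only inside the
        # calibrated 10-50 cm range.
        return float(np.clip(slope * diameter_px + intercept, 10.0, 50.0))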

Integration

Current technology can track the eyes and project their motion onto a 2D plane, typically a computer screen or wall projection. Pupil size can be determined from the existing video processing, and changes in pupil size can be recorded over time. In theory, since the model is linear, depth calibration could be carried out with only two test points at different distances. With planar tracking and depth-fixation distance combined, a general target in 3D space can be determined (see the sketch below).
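
The sketch below illustrates the two-point calibration and the assembly of a 3D target from planar gaze and inferred depth. It assumes, purely for illustration, that the tracker's planar estimate has already been mapped into the same physical units (cm) as the depth.

    import numpy as np

    def calibrate_two_point(d1_px, z1_cm, d2_px, z2_cm):
        # Fit z = slope * d + intercept through two (diameter, distance)
        # calibration measurements; sufficient because the model is linear.
        slope = (z2_cm - z1_cm) / (d2_px - d1_px)
        return slope, z1_cm - slope * d1_px

    def gaze_target_3d(gaze_xy, diameter_px, slope, intercept):
        # Combine the planar gaze estimate (x, y) with depth z inferred
        # from pupil diameter into a preliminary 3D target point.
        z_cm = slope * diameter_px + intercept
        return np.array([gaze_xy[0], gaze_xy[1], z_cm])

    # Example: calibrate with fixations at 10 cm and 50 cm (values illustrative).
    slope, intercept = calibrate_two_point(38.0, 10.0, 55.0, 50.0)
    target = gaze_target_3d((4.2, -1.5), 47.0, slope, intercept)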

Because measurements of brain activity are so noisy, BCIs typically include some form of band-pass filtering, Laplace-filtered channels, or subject-optimized spatial filters [2]. Figure 2 depicts the population vector algorithm used in BCI decoding, which combines the recorded activity of many neurons into a prediction of the desired movement direction. Noise in the neuron pools arises from a variety of uncontrollable sources, making decoding very challenging [5]; as a result, the algorithm may converge on different targets for the same intended movement [5].

Figure 2: Population vector algorithm used in BCI decoding, image from [5].
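
In its standard form, each recorded neuron casts a "vote" along its preferred movement direction, weighted by how far its firing rate deviates from baseline; the votes are summed and normalized into a direction estimate. The sketch below shows this computation; the array names are assumptions for illustration, not notation from [5].

    import numpy as np

    def population_vector(rates, baselines, preferred_dirs):
        # rates, baselines: shape (N,) firing rates per neuron (Hz).
        # preferred_dirs: shape (N, 3) unit preferred-direction vectors.
        weights = rates - baselines           # per-neuron modulation
        pv = weights @ preferred_dirs         # weighted sum of direction votes
        norm = np.linalg.norm(pv)
        return pv / norm if norm > 0 else pv  # unit movement-direction estimate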

The approach proposed here is to use eye position and focus to define a general location in 3D space in front of the subject. Updating this prediction as the subject's visual attention shifts would allow neuron activity outside a threshold around the target to be filtered out. Eliminating such extremes from the data pool may increase the consistency of convergence. In addition, adapting the population vector algorithm to weight contributions by their agreement with the visual estimate may allow the data to be prefiltered upfront.
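
One possible realization of this prefiltering is sketched below: the eye-derived target defines a direction, neurons whose preferred directions fall outside a cone around it are dropped, and the remainder are softly weighted by their alignment with the gaze direction. The cone half-angle and the cosine weighting are illustrative design choices, not prescribed by the cited work.

    import numpy as np

    def eye_weighted_population_vector(rates, baselines, preferred_dirs,
                                       eye_dir, cone_half_angle_deg=45.0):
        # eye_dir: unit vector pointing from the effector toward the
        # eye-derived 3D target; the cone half-angle is illustrative.
        weights = rates - baselines
        cosines = preferred_dirs @ eye_dir    # alignment with gaze direction
        keep = cosines >= np.cos(np.deg2rad(cone_half_angle_deg))
        # Exclude neurons voting outside the cone and softly weight the
        # rest by how well they agree with the eye-derived direction.
        weights = weights * keep * np.clip(cosines, 0.0, 1.0)
        pv = weights @ preferred_dirs
        norm = np.linalg.norm(pv)
        return pv / norm if norm > 0 else pv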

With current eye tracking technology and the added dimension from depth measurement, a preliminary target prediction can be made before any BCI decoding takes place. Combining this with existing filtering techniques may decrease BCI illiteracy and increase overall performance for existing users.

Conclusion

For years, BCIs have achieved success in performing complex movements such as gripping and reaching [12]. This technology allows subjects to control a computer or robotic device by imagining a movement [2]. After 30 years of research, however, BCI performance is still highly subject-dependent, and for many users it does not meet functional criteria [2]. Including more information about the person's physiological state has been shown to increase BCI performance [1, 3].

Using eye tracking devices to measure planar motion and depth fixation would allow a preliminary target to be defined in space. Establishing a threshold and weighting scheme around this target may filter out significant noise and increase the accuracy and precision of BCI devices.

References

  1. Fetz E. E., ``Volitional control of neural activity: implications for brain computer interfaces," Journal of Physiology, Vol. 579, 571–579, 2007.
  2. Blankertz B. et al., ``Neurophysiological predictor of SMR-based BCI performance," NeuroImage, Vol. 51, 1303–1309, 2010.
  3. Nijboer F., Birbaumer N. and Kubler A., ``The influence of psychological state and motivation on brain–computer interface performance in patients with amyotrophic lateral sclerosis – a longitudinal study," Frontiers in Neuroscience, Vol. 4, Article 55, July 2010.
  4. Schwartz A. B. et al., ``Brain-controlled interfaces: movement restoration with neural prosthetics," Neuron, Vol. 52, 205–220, October 2006.
  5. Georgopoulos A. P., ``Translation of directional motor cortical commands to activation of muscles via spinal interneuronal systems," Cognitive Brain Research, 151–155, 1996.
  6. Kubler A. et al., ``Patients with ALS can use sensorimotor rhythms to operate a brain computer interface," Neurology, Vol. 64, 1775–1777, 2005.
  7. Marzocchi N., Breveglieri R., Galletti C. and Fattori P., ``Reaching activity in parietal area V6A of macaque: eye influence on arm activity or retinocentric coding of reaching movements," European Journal of Neuroscience, Vol. 27, 775–789, 2008.
  8. Marcos S., Moreno E. and Navarro R., ``The depth-of-field of the human eye from objective and subjective measurements," Vision Research, Vol. 39, 2039–2049, 1999.
  9. Lee E. C., Lee J. W. and Park K. R., ``Experimental Investigations of Pupil Accommodation Factors," IOVS Papers in Press, Manuscript iovs.10-6423, February 2011.
  10. Ripps H., Chin N. B., Siegel I. M. and Breinin G. M., ``The effect of pupil size on accommodation, convergence, and the AC/A ratio," Investigative Ophthalmology, Vol. 1, No. 1, February 1962.
  11. Green A. M. and Kalaska J. F., ``Learning to move machines with the mind," Trends in Neurosciences, Vol. 34, No. 2, February 2011.
  12. O’Doherty J. E. et al., ``Active tactile exploration using a brain–machine–brain interface," Nature, Vol. 479, 228–231, November 2011.
  13. Dum R. P. and Strick P. L., ``The corticospinal system, a structural framework for the central control of movement," American Physiological Society, New York, 1996.
  14. San Agustin J. et al., ``Evaluation of a low-cost open-source gaze tracker," ACM, New York, NY, USA, 77–80, 2010. http://doi.acm.org/10.1145/1743666.1743685
  15. San Agustin J., Hansen J. P., Hansen D. W. and Skovsgaard H., ``Low-cost gaze pointing and EMG clicking," ACM, New York, NY, USA, 3247–3252, 2009. http://doi.acm.org/10.1145/1520340.1520466
  16. Gaskins N. R., ``Cognitive-Computer Artifacts for Urban Art and Communication," ACM, Austin, TX, USA, 2012.