Event Abstract

An Open-Ended Approach to BCI: Embracing Individual Differences by Allowing for User-Defined Mental Commands

  • 1 McMaster University, School of Computational Science and Engineering, Canada
  • 2 McMaster University, Psychology, Neuroscience, and Behaviour, Canada

Introduction

A brain-computer interface (BCI) provides a direct communication channel between a brain and a computer by detecting specific user intentions from brain activity (Wolpaw et al., 2002). The reliability of these systems depends on successful coadaptation between the user, who must learn to produce consistent mental commands, and the machine, which typically uses learning algorithms to tune the interface for each user. Currently, reliability is still low for many BCIs, and in many cases a high proportion of users cannot control the system at all (Neuper & Pfurtscheller, 2010; Allison & Neuper, 2010). A great deal of attention has been given to improving machine learning methods for better reliability, but some researchers have recently argued that not enough has been done to improve user training (Neuper & Pfurtscheller, 2010; Lotte et al., 2013). BCIs are usually designed around specific brain signals, which users must then learn to generate effectively; improving the ability of the user to adapt to the system should therefore improve overall reliability. However, individuals naturally differ in their ability to produce different kinds of brain signals, which in turn affects BCI performance (Hammer et al., 2012; Randolph, 2011; Scherer et al., 2015). It follows that different individuals may be better able to control a BCI using different brain signals (Allison & Neuper, 2010). Although designing around specific brain signals simplifies the problem of translating brain activity into actions, restricting users in this way may contribute to suboptimal user learning and performance. In the long term, this approach could limit the usability of BCIs because of the significant variability in performance across users. Because predicting the optimal mental commands for each user may be an intractable problem, we propose that individual variability in mental command production can be addressed by an open-ended BCI design in which users define their own mental commands.

Methods

Six undergraduate participants used a commercial EEG headset and trained to control three binary BCIs with user-defined mental imagery in three sensory modalities: auditory imagery was used to control the pitch of a tone, visual imagery was used to control the size of an object, and motor imagery was used to control the position of an object. Each paradigm was practiced over three 25-minute trial-based sessions (200 trials per session) taking place within one week, for a total of nine sessions over three weeks per participant. The same processing scheme was used for all sessions, and all models were initialized at the start of each session so that no information from previous sessions was used. Models were updated after every block of 20 trials by extracting candidate features with Common Spatial Patterns (CSP) (Ramoser et al., 2000) and power spectral density estimation (in offline analysis, Filter-Bank CSP (Ang et al., 2008) was used instead), selecting relevant features with max-relevance min-redundancy (mRMR) (Peng et al., 2005), and classifying trials with a linear Support Vector Machine. Artifact rejection based on Z-score thresholds was applied in offline analysis. Feedback was based on classifier confidence, i.e., the probability of the trial belonging to the decided class, estimated from the relative distance between the trial and the classification boundary in feature space (Chang & Lin, 2001); for example, the distance the object moved in the classified direction was proportional to classification confidence. Hence, users were trained to increase the distance between their mental commands and the classification boundary.
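
The following is a minimal sketch of one possible implementation of the per-block update loop described above; it is not the authors' code. It assumes epoched EEG stored as a NumPy array of shape (trials, channels, samples), and uses MNE's CSP, Welch power spectral density features, a simplified greedy mRMR selector (mutual information for relevance, absolute correlation for redundancy), and scikit-learn's linear SVM with Platt-scaled probability estimates in place of LIBSVM. The sampling rate, frequency band, number of selected features, and the choice to refit on all trials accumulated within the session are illustrative assumptions.

    # Sketch of a block-wise CSP/PSD + mRMR + linear SVM update (assumed details).
    import numpy as np
    from scipy.signal import welch
    from mne.decoding import CSP
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.svm import SVC

    def extract_features(X, y, sfreq=128, n_csp=4):
        """CSP log-variance features plus log band power from Welch PSD estimates."""
        csp = CSP(n_components=n_csp, log=True)
        csp_feats = csp.fit_transform(X, y)                    # (trials, n_csp)
        freqs, psd = welch(X, fs=sfreq, nperseg=sfreq, axis=-1)
        band = (freqs >= 4) & (freqs <= 30)                    # assumed band of interest
        psd_feats = np.log(psd[:, :, band]).reshape(len(X), -1)
        return np.hstack([csp_feats, psd_feats]), csp

    def mrmr_select(F, y, k=10):
        """Greedy max-relevance min-redundancy selection (simplified variant)."""
        k = min(k, F.shape[1])
        relevance = mutual_info_classif(F, y)
        redundancy = np.abs(np.corrcoef(F, rowvar=False))
        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            remaining = [j for j in range(F.shape[1]) if j not in selected]
            scores = [relevance[j] - redundancy[j, selected].mean() for j in remaining]
            selected.append(remaining[int(np.argmax(scores))])
        return np.array(selected)

    def update_model(X, y):
        """Refit the whole pipeline after each block of trials."""
        F, csp = extract_features(X, y)
        idx = mrmr_select(F, y)
        clf = SVC(kernel="linear", probability=True).fit(F[:, idx], y)
        return csp, idx, clf

    def feedback_magnitude(clf, trial_features):
        """Feedback scales with the probability of the decided class."""
        proba = clf.predict_proba(trial_features[None, :])[0]
        return proba.max()   # e.g., proportional to how far the object moves

In this sketch the feedback magnitude is the Platt-scaled probability of the decided class, which increases monotonically with the trial's distance from the decision boundary, mirroring the proportional feedback described above.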

Results

Online: Three participants achieved more than 70% classification accuracy (based on the average of the last five blocks in a session) in at least one form of mental imagery. The block-by-block accuracy and classifier confidence per session are shown for each of these participants' best paradigms in Figure 1. Participants clearly performed very differently with different forms of mental imagery. Performance was not correlated with condition preference or interest, but did correspond to each participant's domain expertise (e.g., the participant with the most musical training also performed best with auditory imagery). While some of this variance may be explained by some choices of mental commands being more suitable for detection with EEG, large differences in performance were also seen with highly similar mental commands, suggesting that an individual's ability to produce those mental commands may have played a significant role in performance. Participants who obtained above-chance sessions in a particular sensory modality also showed a significant increase in learning rate across sessions, measured by the slope of the classification accuracy across all blocks within a session (whole-session learning slopes: F(2,18) = 4.56, p = .0249; last-five-blocks learning slopes: F(2,21) = 5.04, p = .016; variance explained: ω² = 0.252). Since all models were initialized per session, the increase in learning slope can be taken as a measure of human learning across sessions (a sketch of these computations is given after the Conclusions).

Offline: Figure 2 shows the offline classification accuracy per session. Offline analysis used cross-validation with 30 randomized 70-30 training-test partitions. On average, 75.9 (39.6) trials were rejected. Offline classification accuracy was much higher than online, with at least one participant achieving high classification accuracy for each paradigm.

Conclusions

BCI performance in this study is consistent with other studies exploring non-motor imagery (e.g., Bobrov et al., 2011; Cabrera & Dremstrup, 2008), despite the added difficulty of classifying user-defined rather than pre-specified mental commands. This study demonstrates the feasibility of an open-ended BCI while providing additional evidence that individuals can differ significantly in BCI performance with different kinds of mental imagery. Furthermore, we provide a measure indicating a significant contribution of human learning to achieving control of a BCI. Together, these results suggest that an open-ended BCI which grants the user freedom to explore and choose their own mental commands may contribute to more consistent performance across users, provided the added technical challenges can be met. A follow-up study is in preparation that will use research-grade EEG hardware and a novel neurofeedback approach designed specifically for the open-ended paradigm, in order to improve human-machine coadaptation and to better compare the open-ended BCI design to current standards.
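
As an illustration of the analysis quantities referenced in the Results, the sketch below shows one way to compute the within-session learning slope (a least-squares line fit to block-by-block accuracy) and the offline accuracy over 30 randomized 70-30 training-test partitions. This is not the authors' code; the input arrays, classifier settings, and fixed random seed are assumptions.

    # Within-session learning slope and randomized 70-30 cross-validation (assumed details).
    import numpy as np
    from sklearn.model_selection import ShuffleSplit, cross_val_score
    from sklearn.svm import SVC

    def session_learning_slope(block_accuracies):
        """Slope of a least-squares line fit to block-by-block accuracy within a session."""
        blocks = np.arange(len(block_accuracies))
        slope, _ = np.polyfit(blocks, block_accuracies, 1)
        return slope

    def offline_accuracy(features, labels, n_splits=30, test_size=0.3):
        """Mean accuracy over randomized 70-30 training-test partitions."""
        cv = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=0)
        scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=cv)
        return scores.mean()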

Figure 1. Block-by-block classification accuracy and classifier confidence per session, shown for the best paradigm of each of the three participants who exceeded 70% accuracy.
Figure 2. Offline classification accuracy per session.

Acknowledgements

This research was funded by a Discovery grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to SB and an NSERC PGS scholarship to KD.

References

Allison, B. Z., & Neuper, C. (2010). Could anyone use a BCI? In B. Graimann, G. Pfurtscheller, & B. Allison (Eds.), Brain-Computer Interfaces (pp. 35–54). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-02091-9

Ang, K. K., Chin, Z. Y., Zhang, H., & Guan, C. (2008). Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2391–2398. doi:10.1109/IJCNN.2008.4634130

Bobrov, P., Frolov, A., Cantor, C., Fedulova, I., Bakhnyan, M., & Zhavoronkov, A. (2011). Brain-computer interface based on generation of visual images. PloS One, 6(6), e20674. doi:10.1371/journal.pone.0020674

Cabrera, A. F., & Dremstrup, K. (2008). Auditory and spatial navigation imagery in Brain-Computer Interface using optimized wavelets. Journal of Neuroscience Methods, 174(1), 135–146. doi:10.1016/j.jneumeth.2008.06.026

Chang, C. C., & Lin, C. J. (2001). LIBSVM: A library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

Hammer, E. M., Halder, S., Blankertz, B., Sannelli, C., Dickhaus, T., Kleih, S., & Kübler, A. (2012). Psychological predictors of SMR-BCI performance. Biological Psychology, 89(1), 80–86. doi:10.1016/j.biopsycho.2011.09.006

Lotte, F., Larrue, F., & Mühl, C. (2013). Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: Lessons learned from instructional design. Frontiers in Human Neuroscience, 7, 568. doi:10.3389/fnhum.2013.00568

Neuper, C., & Pfurtscheller, G. (2010). Neurofeedback Training for BCI Control. In B. Graimann, G. Pfurtscheller, & B. Allison (Eds.), Brain-Computer Interfaces (pp. 65–78). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-02091-9

Peng, H., Long, F., & Ding, C. (2005). Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8), 1226–1238.

Ramoser, H., Müller-Gerking, J., & Pfurtscheller, G. (2000). Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering, 8(4), 441–446.

Randolph, A. B. (2011). Not all created equal: Individual-technology fit of brain-computer interfaces. Proceedings of the Annual Hawaii International Conference on System Sciences, 572–578. doi:10.1109/HICSS.2012.451

Scherer, R., Faller, J., Friedrich, E. V. C., Opisso, E., Costa, U., Kübler, A., & Müller-Putz, G. R. (2015). Individually adapted imagery improves brain-computer interface performance in end-users with disability. PloS One, 10(5), e0123727. doi:10.1371/journal.pone.0123727

Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6), 767–791.

Keywords: brain-computer interface (BCI), mental imagery, visual imagery, auditory imagery, motor imagery, human learning, user-centered design

Conference: German-Japanese Adaptive BCI Workshop, Kyoto, Japan, 28 Oct - 29 Oct, 2015.

Presentation Type: Poster presentation

Topic: Adaptive BCI

Citation: Dhindsa K, Carcone D and Becker S (2015). An Open-Ended Approach to BCI: Embracing Individual Differences by Allowing for User-Defined Mental Commands. Front. Comput. Neurosci. Conference Abstract: German-Japanese Adaptive BCI Workshop. doi: 10.3389/conf.fncom.2015.56.00015

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 08 Oct 2015; Published Online: 04 Nov 2015.

* Correspondence: Prof. Sue Becker, McMaster University, Psychology, Neuroscience, and Behaviour, Hamilton, Ontario, L8S4L8, Canada, becker@psychology.mcmaster.ca