Assessing EEG neuroimaging with machine learning
Abstract
Neuroimaging techniques can give novel insights into the nature of human cognition.
We wish not only to label patterns of activity as potentially associated with a
cognitive process, but also to probe this in detail, so as to better examine how it may
inform mechanistic theories of cognition. A possible approach towards this goal is to
extend EEG 'brain-computer interface' (BCI) tools - in which motor movement intent is
classified from brain activity - to investigate visual cognition experiments as well.
We hypothesised that, building on BCI techniques, information from visual object
tasks could be classified from EEG data. This could allow novel experimental designs
to probe visual information processing in the brain. This can be tested and falsified by
applying machine learning algorithms to EEG data from a visual experiment, and
quantified by scoring the accuracy with which trials can be correctly classified.
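As a minimal sketch of this test, assuming scikit-learn as the toolchain (the abstract does not specify one, and all data here is synthetic and illustrative), one could epoch a single EEG channel by trial, train a classifier, and score held-out trials by accuracy and AUC:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical epoched data: one EEG channel, 200 trials x 128 time samples,
# labelled by whether a visual stimulus was present (1) or absent (0).
n_trials, n_samples = 200, 128
y = rng.integers(0, 2, n_trials)
X = rng.standard_normal((n_trials, n_samples))
X[y == 1, 60:80] -= 1.0  # toy evoked deflection in stimulus-present trials

# Cross-validated predictions, scored with the two metrics used in the text.
clf = LogisticRegression(max_iter=1000)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"accuracy: {accuracy_score(y, proba > 0.5):.2f}")
print(f"AUC: {roc_auc_score(y, proba):.2f}")
```

Because the trial labels are never seen by the classifier at prediction time, above-chance scores here would falsify the null hypothesis that the epochs carry no stimulus information.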
Further, we hypothesised that independent component analysis (ICA) can be used for
source separation of EEG data to produce putative activity patterns associated with
visual processing mechanisms. Detailed profiling of these ICA sources could be
informative about the nature of visual cognition in a way that is not accessible through
other means. While ICA has previously been used to remove 'noise' from EEG data,
profiling the relation of common ICA sources to cognitive processing appears less well
explored. This can be tested and falsified by using ICA sources as training data for the
machine learning, and quantified by scoring the accuracy with which trials can be
correctly classified using this data, comparing this against the equivalent raw EEG data.
We find that machine learning techniques can classify the presence or absence of
visual stimuli at 85% accuracy (0.65 AUC) using a single optimised channel of EEG
data, and this improves to 87% (0.7 AUC) using data from an equivalent single ICA
source. We identify data from this ICA source in the time period around 75-125 ms
post-stimulus presentation as substantially more informative in decoding the trial label.
The most informative ICA source is located in the central occipital region and typically
has prominent 10-12 Hz synchrony and a -5 μV ERP dip at around 100 ms. This appears to
be the best predictor of trial identity in our experiment.
With these findings, we then explore further experimental designs to investigate
ongoing visual attention and perception, attempting online classification of vision using
these techniques and ICA sources. We discuss how these relate to standard EEG
landmarks such as the N170 and P300, and compare their use. With this thesis, we
explore this methodology of quantifying EEG neuroimaging data with machine learning
separation and classification and discuss how this can be used to investigate visual
cognition. We hope the greater information from EEG analyses with predictive power
of each ICA source quantified by machine learning might give insight and constraints
for macro level models of visual cognition.