Sharing scientific thinking with a lay audience can have interesting consequences, intended and otherwise. Back in 2016, I attended one such lecture, on the topic of "attention". It is fair to say that neither I nor the speaker could have foreseen this particular consequence: a machine-learning-based classification model of viewing behaviour.

Origin story

Having just read Stefan van der Stigchel's book on attention, I was excited to hear his public lecture on the topic. Stefan and I had been affiliated with the same Bachelor's program on Artificial Intelligence back when I was a PhD candidate. Hence, after the lecture was over, I walked over to have a chat.

Stefan spoke at length about eye tracking: an extremely fruitful paradigm for investigating attention. This piqued my interest, as I had recently read about continuous authentication through biometrics. More specifically, I had heard about machine learning models for behavioural classification of mouse dynamics, and I wondered whether such an approach might also work for eye movements. Both excited by the possibility, we agreed to follow up.

The journey was long but extremely satisfying. The result is a paper by a team of six authors with complementary expertise.

Abstract

Since the seminal work of Yarbus (1967), multiple studies have demonstrated the influence of task set and current cognitive state on oculomotor behavior. In more recent years, this field of research has expanded to evaluating the costs of abruptly switching between such tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven decoding methods.

For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. By extracting a wide range of oculomotor features, we were able to build robust classifier models for both decoding problems. Our decoding performance highlights the feasibility of this approach, even invariant to image statistics. Additionally, we present a feature ranking for both models, indicating the relative importance of the different oculomotor features for each classifier. These rankings indicate that a distinct set of predictors is important for each decoding task. Finally, we discuss the implications of the current approach for interpreting the decoding results.
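
To make the general approach concrete, here is a minimal sketch of what a gaze-based decoding pipeline of this kind could look like. Everything in it is my own illustrative assumption rather than the paper's actual code: the feature names, the random-forest classifier, and the synthetic data are placeholders standing in for the real oculomotor features, models, and recordings.

```python
# Illustrative sketch of a gaze-based decoding pipeline (not the paper's code).
# Feature names, model choice, and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)

# Hypothetical oculomotor features, one row per trial.
feature_names = [
    "mean_fixation_duration",
    "fixation_count",
    "mean_saccade_amplitude",
    "saccade_peak_velocity",
    "scanpath_length",
    "mean_pupil_size",
]
n_trials = 500
X = rng.normal(size=(n_trials, len(feature_names)))  # stand-in for real recordings
y = rng.integers(0, 2, size=n_trials)                # cognitive state label per trial

# Decode cognitive state from the features; cross-validation guards against overfitting.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Feature ranking: which oculomotor features drive the classifier?
clf.fit(X, y)
for name, importance in sorted(
    zip(feature_names, clf.feature_importances_), key=lambda pair: -pair[1]
):
    print(f"{name:28s} {importance:.3f}")
```

On this random data the accuracy hovers around chance, which is precisely the baseline a real decoding study has to beat; the feature-importance loop at the end mirrors, in spirit, the kind of feature ranking reported in the paper.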