Published on 07 Oct 2025

Dr Yuvaraj publishes three new journal articles

Dr Yuvaraj Rajamanickam, Education Research Scientist at the Science of Learning in Education Centre (SoLEC), recently published three journal articles.

1. Classroom activity recognition using hybrid 3D-CNNs and visualization of action features with Grad-CAM

This study presents an automated framework for recognizing classroom activities using a 3D-convolutional neural network (3D-CNN) combined with an extreme learning machine (ELM) classifier. The system detects teacher and student behaviors from classroom videos by extracting spatiotemporal features and classifying them into activity categories. Tested on the EduNet dataset, the 3D-CNN+ELM model achieved an average accuracy of 88.17%, outperforming the baseline I3D-ResNet-50 by 5.87%, and showed 80% accuracy on independent online videos, demonstrating good generalizability. Grad-CAM visualizations confirmed that the model identified meaningful visual cues. The proposed framework shows strong potential for automated analysis of teaching and learning activities in educational settings.
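To illustrate the classification stage, an extreme learning machine like the one paired with the 3D-CNN can be sketched in a few lines: a fixed random hidden layer followed by output weights solved in closed form. This is a minimal NumPy version on toy data; the layer size, activation, and feature dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELMClassifier:
    """Minimal extreme learning machine: a random (untrained) hidden
    layer, with output weights fitted by least squares."""

    def __init__(self, n_hidden=64):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Input weights and biases are drawn once and never trained.
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot targets
        # Output weights via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Toy demo: two synthetic clusters standing in for 3D-CNN features.
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(4, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50)
clf = ELMClassifier(n_hidden=64).fit(X, y)
acc = (clf.predict(X) == y).mean()
print("training accuracy:", acc)
```

Because only the output weights are fitted, and in closed form, training is very fast compared with backpropagation, which is a common reason ELMs are used as lightweight classifier heads on top of deep feature extractors.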

2. EEG-based functional connectivity patterns during boredom in an educational context

This study examined the neural basis of boredom in an educational context using EEG. Eighty-four adults watched educational videos designed to induce boredom or a neutral state while their brain activity was recorded. Analysis of functional connectivity across EEG frequency bands revealed that boredom was associated with higher global efficiency and lower characteristic path length in the alpha, beta, and gamma bands, as well as higher clustering and local efficiency in the gamma band. These findings indicate that boredom is linked to distinct patterns of brain connectivity, suggesting increased internal processing. The results provide new insights into how boredom influences brain network dynamics in learning contexts.
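The graph measures reported here (global and local efficiency, characteristic path length, clustering) are standard network metrics computed on a thresholded connectivity matrix. A minimal sketch with NetworkX follows; the random matrix, channel count, and 0.6 threshold are placeholder assumptions, not values from the study.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Hypothetical stand-in for a band-specific EEG connectivity matrix
# (e.g. coherence between channel pairs); values in [0, 1].
n_channels = 16
C = rng.uniform(size=(n_channels, n_channels))
C = (C + C.T) / 2          # connectivity is symmetric
np.fill_diagonal(C, 0)     # no self-connections

# Keep only the strongest connections, then build a graph.
A = (C > 0.6).astype(int)  # illustrative threshold
G = nx.from_numpy_array(A)

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
print("mean clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:",
          nx.average_shortest_path_length(G))
```

Higher global efficiency together with a shorter characteristic path length, as the study found for boredom, indicates that information can travel between brain regions in fewer steps on average.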

3. Automated Boredom Recognition Using Multimodal Physiological Signals

This study explored automatic boredom recognition during a video lecture using a multimodal system based on physiological signals. Electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and eye-gaze data were collected from 84 adults while they watched boring and non-boring educational videos. Features extracted from these signals were analyzed and classified using machine-learning models (XGBoost, Random Forest, and Gradient Boosting) with leave-one-out cross-validation. Results showed that multimodal approaches outperformed single-signal models, achieving the highest boredom recognition accuracy of 88.56% ± 0.82% using EEG and eye-gaze fusion with Random Forest. The findings demonstrate the potential of multimodal physiological systems for reliable boredom detection in learning contexts.
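The evaluation protocol described above, feature-level fusion of two modalities followed by leave-one-out cross-validation with Random Forest, can be sketched as follows. The synthetic features, their dimensions, and the injected label rule are placeholder assumptions for illustration only; they do not reflect the study's real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic per-participant feature vectors standing in for the
# EEG and eye-gaze features used in the study (sizes are made up).
n_participants = 30
eeg = rng.normal(size=(n_participants, 8))
gaze = rng.normal(size=(n_participants, 4))
# Illustrative labels (bored = 1) driven by two of the features.
y = (eeg[:, 0] + gaze[:, 0] > 0).astype(int)

# Feature-level fusion: concatenate the two modalities.
X = np.hstack([eeg, gaze])

# Leave-one-out cross-validation: each participant is held out
# once while the model trains on everyone else.
correct = 0
for i in range(n_participants):
    train = np.delete(np.arange(n_participants), i)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train], y[train])
    correct += int(clf.predict(X[i : i + 1])[0] == y[i])

acc = correct / n_participants
print("leave-one-out accuracy:", acc)
```

Leave-one-out evaluation is a natural choice with small participant counts, since it uses nearly all the data for training while still testing on unseen individuals.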