Abstract: EEG-based object recognition is gaining attention because brain signals exhibit distinctive neural patterns when visual stimuli are perceived. This research proposes an automated classification pipeline that learns EEG temporal dependencies using a 1D Convolutional Neural Network (1D-CNN). EEG signal segments collected from five electrode positions (AF3, AF4, T7, T8, and PZ) are integrated to construct a spatial feature matrix containing diverse signal responses. Feature normalization is applied using standard statistical scaling to maintain a consistent input distribution, and object categories are converted into numeric class indices to support multi-class learning. The model contains sequential 1D convolutional layers that capture short-range temporal interactions, followed by max-pooling to reduce noise sensitivity and support stable feature extraction. Dense layers then learn high-level signal abstractions, and a softmax output layer converts raw scores into a normalized probability distribution for classification. Training is performed on an 80:20 data split with batch-driven learning to stabilize gradient updates. For end-user inference, the trained model is deployed on the Hugging Face cloud with a Gradio interface that supports real-time prediction and confidence visualization through a dynamic gauge chart.
Keywords: Brain-Computer Interface, EEG Signal Classification, 1D Convolutional Neural Network, Visual Stimuli Recognition
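As an illustration of the pipeline summarized in the abstract, the following is a minimal sketch assuming a TensorFlow/Keras implementation. The segment length, filter counts, layer sizes, epoch count, and placeholder data are assumptions made for demonstration, not the authors' exact configuration.

```python
# Minimal sketch of the described pipeline; shapes and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from tensorflow.keras import layers, models

# Assumed data: EEG segments of 256 samples from the five electrodes
# (AF3, AF4, T7, T8, PZ), each with an object-category label.
n_segments, seg_len, n_channels = 1000, 256, 5
X = np.random.randn(n_segments, seg_len, n_channels).astype("float32")  # placeholder signals
y = np.random.choice(["pen", "cup", "phone"], size=n_segments)           # placeholder labels

# Standard statistical scaling, then object categories -> numeric class indices.
X = StandardScaler().fit_transform(X.reshape(n_segments, -1)).reshape(X.shape)
encoder = LabelEncoder().fit(y)
y_idx = encoder.transform(y)

# 80:20 train/test split, as stated in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y_idx, test_size=0.2, random_state=42, stratify=y_idx
)

# Sequential 1D convolutions with max-pooling, dense layers, and a softmax output.
model = models.Sequential([
    layers.Input(shape=(seg_len, n_channels)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(encoder.classes_), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=32,
          validation_data=(X_test, y_test))
```

The inference front end described in the abstract could be wired up roughly as below. This uses standard Gradio and Plotly components (gr.Dataframe, gr.Plot, a gauge Indicator); the exact interface the authors deploy on Hugging Face is not specified here, so this is only a hypothetical sketch that reuses the model and encoder defined above.

```python
# Hypothetical Gradio front end for real-time prediction with a confidence gauge.
import gradio as gr
import plotly.graph_objects as go

def predict(segment):
    """segment: one (256, 5) EEG window; returns a gauge of the top-class confidence."""
    # In practice the same scaling used during training should be applied here.
    x = np.asarray(segment, dtype="float32")[np.newaxis, ...]  # add batch dimension
    probs = model.predict(x)[0]
    top = int(np.argmax(probs))
    fig = go.Figure(go.Indicator(
        mode="gauge+number",
        value=float(probs[top]) * 100.0,
        title={"text": f"Predicted: {encoder.classes_[top]}"},
        gauge={"axis": {"range": [0, 100]}},
    ))
    return fig

demo = gr.Interface(fn=predict,
                    inputs=gr.Dataframe(type="numpy", label="EEG segment (256 x 5)"),
                    outputs=gr.Plot(label="Confidence gauge"))
demo.launch()
```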
DOI: 10.17148/IJARCCE.2025.141299
[1] Divya R, Manasa N S, Harshitha R, Nisha Shaimine, "Neuro Vision: Deep Learning and BCI for AI Enabled Assistive Devices," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2025.141299