Abstract: In today’s technologically advancing academic environment, the timely and accurate evaluation of subjective answers plays a crucial role in educational assessment. While objective-type answers can be evaluated easily with automated systems, assessing subjective responses demands more sophisticated techniques that consider context, content relevance, structure, and grammatical accuracy. This research paper presents an AI-based assessment framework capable of evaluating both handwritten and typed subjective answers using Machine Learning (ML) and Natural Language Processing (NLP) methods. Handwritten responses are first digitized using Optical Character Recognition (OCR), which converts the input into textual data. The evaluation process then applies semantic similarity measures, keyword extraction, and grammatical analysis. The framework integrates pretrained language models with custom-trained classifiers to compare student responses against reference solutions, enabling the assessment of contextual accuracy and logical coherence. The proposed method reduces human bias, enhances consistency through algorithmic scoring, and significantly reduces manual grading effort. The results demonstrate that the system achieves high accuracy (99.30%) and outperforms traditional evaluation techniques in speed and reliability. This framework offers a scalable and intelligent approach to subjective answer assessment, contributing meaningfully to the integration of AI in modern education.
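
For illustration, a minimal sketch of the OCR-then-similarity pipeline the abstract describes. The paper does not name its libraries; pytesseract (OCR) and a sentence-transformers encoder (semantic similarity) are illustrative assumptions, not the authors' actual stack, and the keyword-extraction and grammar-analysis stages are omitted here.

    # Sketch only: assumed libraries, not the paper's implementation.
    from PIL import Image
    import pytesseract
    from sentence_transformers import SentenceTransformer, util

    # Assumed pretrained encoder; the paper only says "pretrained language models".
    model = SentenceTransformer("all-MiniLM-L6-v2")

    def score_answer(image_path: str, reference_answer: str) -> float:
        """OCR a handwritten answer, then score it against the reference
        answer via cosine similarity of sentence embeddings."""
        student_text = pytesseract.image_to_string(Image.open(image_path))
        emb = model.encode([student_text, reference_answer])
        return float(util.cos_sim(emb[0], emb[1]))

A full system of the kind described would combine this similarity score with keyword coverage and grammatical-accuracy signals before producing a final grade.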

Keywords: Optical Character Recognition (OCR), Convolutional Neural Networks (CNN), Machine Learning (ML), Natural Language Processing (NLP), Large Language Models (LLM), Subjective Answer Assessment.


DOI: 10.17148/IJARCCE.2025.14652
