Abstract: In an era of rapidly expanding student populations and increasing academic workloads, traditional methods of evaluating handwritten answer sheets have become inefficient, inconsistent, and resource-intensive. AutoGrad addresses these challenges by leveraging generative AI to automate the assessment process with high accuracy and scalability. The system integrates the Gemini model for Optical Character Recognition (OCR), effectively digitizing diverse handwriting styles, and the LLaMA-7B language model for semantic answer evaluation. AutoGrad introduces a Mixture-of-Experts (MoE) architecture to reduce character recognition errors and uses adaptive thresholding to fine-tune evaluation rigor based on question type. The solution further integrates a hybrid rule-based and AI-driven evaluation engine, ensuring both factual correctness and semantic coherence in student answers. With a Flask-based user interface, vector similarity matching, and real-time feedback generation, AutoGrad offers an end-to-end, scalable solution for academic institutions. Empirical results from real-world deployments show a reduction in grading time and a strong correlation with faculty evaluations. AutoGrad not only automates evaluation but enhances it, providing detailed feedback, promoting personalized learning, and supporting academic integrity at scale.
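The combination of vector similarity matching and adaptive thresholding described above can be illustrated with a minimal sketch. The thresholds, function names, and scoring rule below are illustrative assumptions, not the paper's actual implementation; in the real system the embedding vectors would come from a language model rather than being supplied directly.

```python
import math

# Hypothetical per-question-type thresholds: stricter for factual questions,
# more lenient for descriptive ones. The paper's actual values are not given
# in the abstract; these numbers are assumptions for illustration.
THRESHOLDS = {"factual": 0.85, "descriptive": 0.65}
DEFAULT_THRESHOLD = 0.75

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def grade(student_vec, reference_vec, question_type, max_marks):
    """Award marks proportional to semantic similarity, gated by a
    question-type-specific threshold (adaptive thresholding)."""
    sim = cosine_similarity(student_vec, reference_vec)
    threshold = THRESHOLDS.get(question_type, DEFAULT_THRESHOLD)
    if sim < threshold:
        return 0.0
    # Rescale the region above the threshold onto the full mark range,
    # so a perfect match (sim == 1.0) earns full marks.
    return round(max_marks * (sim - threshold) / (1.0 - threshold), 2)
```

Under this sketch, an identical answer vector earns full marks, while one below the threshold for its question type earns none; the threshold lookup is what lets evaluation rigor vary by question type.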

Keywords: Automated Grading, Generative AI (LLM), Optical Character Recognition (OCR), Semantic Evaluation, Scalable Assessment, Mixture-of-Experts (MoE), Distributed Processing


DOI: 10.17148/IJARCCE.2025.141170

How to Cite:

[1] Miss. Raheen Rafique Bagwan, Miss. Akansha Anil Sasane, Miss. Riya Chandrakant Chawate, Miss. Rutuja Atul Kavitake, "AutoGrad," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2025.141170
