Abstract: Autonomous vehicles (AVs) are being developed at a rapid pace using sensors and deep learning, yet public confidence remains limited because comparative evidence against human drivers is scarce. This paper develops a hybrid deep learning model that couples traffic-sign and object perception using convolutional neural networks (CNNs) with temporal signal perception using recurrent (RNN/LSTM) networks. Conditional imitation learning enables contextual decision-making under changing road, traffic, and weather conditions. Training draws on large datasets such as GTSRB, Comma.ai, and BDD100K, pre-processed with augmentation and fusion of camera, LiDAR, and radar signals. The model attains 95% validation accuracy and near-perfect (99%) traffic-sign compliance, surpassing human drivers (91%). Comparative analysis shows an average reaction time of 0.32 s versus 1.25 s, an average lane deviation of 5 cm versus 12 cm, and substantially fewer abrupt braking events (3 per 100 km versus 11). The findings demonstrate the model's faster reactions, higher accuracy, and more cautious driving. For transparency, explainable AI techniques (attention maps, SHAP values) are incorporated, improving interpretability and trust. The study provides empirical evidence that AVs can reliably surpass human-driven vehicles on key safety measures, supporting their eventual deployment in widespread real-world transportation.

Keywords: Autonomous Vehicles, Manual Driving, Sensor Data, Road Safety, Deep Learning, Traffic Sign Recognition, Human-Computer Comparison, Driving Behaviour, CNN-LSTM
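The CNN-to-LSTM pipeline the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: it uses toy dimensions, a naive single-channel convolution, randomly initialized weights in place of trained ones, and a hypothetical scalar steering head, purely to show how per-frame CNN features feed a recurrent state over time.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation over a single channel."""
    h, w = kernel.shape
    H, W = img.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [i, f, o, g]."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # emit new hidden state
    return h, c

# Toy setup: 5 frames of 8x8 "camera" input, hidden size 4 (all assumed).
T, d_img, n = 5, 8, 4
frames = rng.normal(size=(T, d_img, d_img))
kernel = rng.normal(size=(3, 3))
feat_dim = (d_img - 2) ** 2                     # flattened conv output
W = rng.normal(size=(4 * n, feat_dim)) * 0.1
U = rng.normal(size=(4 * n, n)) * 0.1
b = np.zeros(4 * n)

h, c = np.zeros(n), np.zeros(n)
for frame in frames:
    x = conv2d_valid(frame, kernel).ravel()     # CNN features per frame
    h, c = lstm_step(x, h, c, W, U, b)          # temporal aggregation

steering = np.tanh(h @ rng.normal(size=n))      # hypothetical control head
```

In a practical system each stage would be a learned multi-layer network over fused camera/LiDAR/radar inputs; here the structure of the data flow is the only point being made.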


DOI: 10.17148/IJARCCE.2025.14912

How to Cite:

[1] Balaji K, Krupashree LK, Hemanth Kumar, "Human vs Machine: A Deep Learning Based Comparative Study of Autonomous and Manual Driving," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2025.14912
