Abstract: Training deep neural networks often relies on fixed learning rates and static hyperparameters, which can lead to inefficiencies and suboptimal results [1, 2]. This paper introduces Adaptive Learning via Dynamic Variable Integration (ALDVI), a novel method that dynamically adjusts learning parameters during training. By incorporating auxiliary variables that adapt based on loss and accuracy trends from prior iterations, ALDVI enhances the optimization process and reduces dependence on manually tuned hyperparameters [3]. This adaptive mechanism refines convergence behavior and improves generalization, addressing challenges in training efficiency and robustness [4]. Experimental evaluations on widely used benchmark datasets demonstrate substantial improvements in convergence speed and accuracy, along with reduced sensitivity to hyperparameter choices [5, 6]. These findings highlight ALDVI’s potential as a valuable augmentation to conventional training strategies for deep neural networks.
Keywords: Adaptive Learning, Dynamic Variable Integration, Neural Network Optimization, Hyperparameter Tuning, Convergence Efficiency, Generalization Performance, Deep Neural Networks, Loss and Accuracy Trends, Benchmark Datasets, Robust Training Strategies, Parameter Adjustment, Model Convergence, Training Efficiency, Hyperparameter Sensitivity, Optimization Process
DOI: 10.17148/IJARCCE.2025.14205
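
The abstract does not spell out ALDVI's update rule, so the following Python sketch is only a hypothetical illustration of the general mechanism it describes: auxiliary state that tracks loss trends from prior iterations and adjusts the learning rate accordingly. The class name TrendAdaptiveLR, the window size, and the scaling factors are all illustrative assumptions, not the paper's method.

    from collections import deque

    class TrendAdaptiveLR:
        """Hypothetical loss-trend-driven learning-rate adjuster.

        Illustrative only; the actual ALDVI update rule is not
        given in the abstract.
        """

        def __init__(self, base_lr=0.01, window=10, up=1.05, down=0.7):
            self.lr = base_lr
            self.history = deque(maxlen=window)  # recent loss values
            self.up = up      # gentle increase while loss keeps falling
            self.down = down  # stronger cut when loss starts rising

        def step(self, loss):
            """Record the latest loss and return the adjusted learning rate."""
            self.history.append(loss)
            if len(self.history) == self.history.maxlen:
                half = self.history.maxlen // 2
                vals = list(self.history)
                older = sum(vals[:half]) / half
                recent = sum(vals[half:]) / (len(vals) - half)
                # Falling trend -> cautiously raise lr; rising trend -> cut it.
                self.lr *= self.up if recent < older else self.down
            return self.lr

    if __name__ == "__main__":
        # Synthetic losses: improvement followed by a slight rebound.
        sched = TrendAdaptiveLR(base_lr=0.01, window=6)
        for loss in [1.0, 0.9, 0.8, 0.7, 0.65, 0.6, 0.58, 0.62, 0.7]:
            print(f"loss={loss:.2f} -> lr={sched.step(loss):.4f}")

In this sketch the asymmetry between the increase and decrease factors (1.05 vs. 0.7) reflects a common scheduling heuristic, raise the learning rate slowly but back off quickly, and stands in for whatever adaptation policy ALDVI actually employs.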