Abstract: Nowadays, deep learning plays a vital role in the field of machine learning. Deep learning is derived from the concept of artificial intelligence. Its major advantages are effectiveness, support for supervised learning, reduced cost, and time efficiency. Deep learning allows various procedures and features of the input data to be fed into an algorithm for processing to produce the required output. Deep learning is used in various applications, notably for security purposes. Domains that employ deep learning include education, science and technology, medical science (e.g., cancer detection), stock market analysis, natural language processing, face recognition, and many more. The most complicated process in deep learning is training deep neural networks, because each input layer has to be trained with its own parameters to produce the required output. This can slow the entire process through the choice of learning rates and parameter initialization. Appropriate algorithms address this, including the multilayer perceptron neural network (MLPNN), a feed-forward artificial neural network (ANN) trained with backpropagation; the convolutional neural network (CNN), used for analyzing visual images based on a shared-weights architecture and translation-invariance characteristics; the recurrent neural network (RNN), which forms connections between nodes in a directed graph; the generative adversarial network (GAN), which pits two neural networks against each other in a game-theoretic process; the deep belief network (DBN), which builds a graphical model of the input data across several layers of perceptrons; and many more. The deep learning mechanism allows the user to pass an input through many layers of perceptrons until the required output is obtained. This paper develops the core concept of normalization in deep learning architectures for training input data sets.
Normalization is a technique for training data sets and also for activating hidden layers in a neural network. It applies an unbiased treatment of gradient variations across the input data sets; gradient variation helps to develop the model over multiple layers of perceptrons. Normalization can also be applied to statistical data analysis, and it helps to identify the minimum and maximum values among the data sets. When a huge number of data sets must be processed, normalization allows batch processing to train them.
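The scaling step described above can be sketched in a few lines. This is a minimal illustration, not the paper's own implementation: it assumes the common min-max formulation, where each value is rescaled into [0, 1] using the minimum and maximum found among the data set.

```python
# Minimal sketch of min-max normalization (an assumption of the common
# formulation, not code from the paper): rescale every value of a data
# set into [0, 1] using the data set's minimum and maximum.
def min_max_normalize(values):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # all values identical: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

data = [10.0, 20.0, 15.0, 30.0]
print(min_max_normalize(data))  # [0.0, 0.5, 0.25, 1.0]
```

In batch training, the same rescaling is typically applied per batch (as in batch normalization), so each mini-batch of inputs is brought onto a comparable scale before being passed through the hidden layers.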
Keywords: Artificial Intelligence, Machine Learning, Neural Networks, Generative Adversarial Network, Normalization, Gradient Variations.
DOI: 10.17148/IJARCCE.2022.11147