Abstract: This study uses the IMDb movie review dataset to compare how well different deep learning architectures perform sentiment classification. The experiments evaluate three sequential models, each enhanced with a distinct configuration of activation functions and layer compositions. The first model uses ReLU and ELU activations within a bidirectional LSTM architecture, the second incorporates Tanh and SELU functions, and the third adopts a combination of Leaky ReLU and Tanh within a similar structural framework. To determine the effect of network architecture and activation function selection on classification effectiveness, each model is evaluated using accuracy, precision, recall, F1 score, and loss. The results indicate that incorporating advanced activations such as SELU and Leaky ReLU can yield gains on certain metrics, with the third model demonstrating better generalization and lower loss than its predecessors. These findings underscore the importance of activation function choice in improving deep learning models for natural language processing tasks.
Keywords: Sentiment Analysis, IMDb Dataset, Deep Learning, LSTM, ReLU, ELU, Tanh, SELU, Leaky ReLU, Natural Language Processing.
DOI: 10.17148/IJARCCE.2025.14421
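As a concrete illustration of the kind of configuration the abstract describes, the sketch below builds a bidirectional LSTM sentiment classifier in Keras combining Leaky ReLU and Tanh activations (the third model's pairing). The vocabulary size, sequence handling, embedding dimension, layer widths, and dropout rate are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a bidirectional LSTM for IMDb sentiment classification
# using Leaky ReLU and Tanh activations. All hyperparameters below are
# assumed for illustration; they are not the authors' reported settings.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000  # assumed vocabulary cut-off for the IMDb reviews
EMBED_DIM = 128      # assumed embedding dimension

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Bidirectional LSTM encoder; LSTM cells use Tanh internally by default.
    layers.Bidirectional(layers.LSTM(64)),
    # Dense block with Leaky ReLU, one of the two activations named for the third model.
    layers.Dense(64),
    layers.LeakyReLU(),
    # Dense block with Tanh, the other activation named for the third model.
    layers.Dense(32, activation="tanh"),
    layers.Dropout(0.5),
    # Single sigmoid output for binary (positive/negative) sentiment.
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    # Metrics mirror those discussed in the study: accuracy, precision, recall
    # (F1 can be derived from precision and recall after evaluation).
    metrics=["accuracy",
             tf.keras.metrics.Precision(),
             tf.keras.metrics.Recall()],
)
model.summary()
```

The other two configurations mentioned in the abstract (ReLU with ELU, and Tanh with SELU) would follow the same skeleton with the activation layers swapped accordingly.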