Abstract: As machine learning and deep learning applications continue to proliferate, energy-efficient AI clusters are valuable because they substantially reduce carbon footprints. AI clusters have become indispensable for large models that require long training times and vast amounts of data. Operationally, an AI cluster can be divided into two major parts: the first trains the deep learning model, and the second stores the massive volumes of high-dimensional input data the model consumes. When AI clusters are deployed on cloud infrastructure, high-speed storage solutions are integrated alongside the compute resources.
Given the large computational cost of large-scale AI jobs, it is natural to optimize the energy consumption of each aspect individually; little research, however, has examined the relationship between storage and CPU energy optimization. Modern state-of-the-art AI systems are composed of CPU-bound, disk-bound, and GPU-bound components connected over network links. Although the utilization of any one of these elements can be reduced, doing so typically stalls the entire device chain, rapidly degrading the performance of the overall AI application. Energy-efficient and environmentally friendly implementations are therefore highly significant. Artificial intelligence is evolving rapidly, offering strong solutions to new challenges and improving existing ones, yet one of its main challenges remains the enormous energy consumed during training.
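To make the trade-off concrete, the following minimal sketch (not taken from the paper; all device names, power figures, durations, and the throttling model are hypothetical assumptions) models one serialized training step as disk-, CPU-, network-, and GPU-bound phases and shows how throttling a single device lengthens the step and forces the other devices to idle, so total energy can rise rather than fall.

```python
# Illustrative sketch only (hypothetical numbers): a toy energy model for one
# serialized training step made of disk-, CPU-, network-, and GPU-bound phases.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    active_w: float   # power while its own phase runs
    idle_w: float     # power while it waits on another device

def step_energy(devices: list[Device], durations: dict[str, float]) -> float:
    """Energy of one step: each device is active for its own phase and
    idles during every other phase, so E = sum(P_active*t_own + P_idle*t_rest)."""
    total_t = sum(durations.values())
    return sum(
        d.active_w * durations[d.name] + d.idle_w * (total_t - durations[d.name])
        for d in devices
    )

def throttle(devices, durations, name, slowdown):
    """Run one device slower: its phase stretches by `slowdown` and its active
    power drops roughly in proportion (a crude, assumed model)."""
    new_devices = [
        Device(d.name, d.active_w / slowdown if d.name == name else d.active_w, d.idle_w)
        for d in devices
    ]
    new_durations = dict(durations, **{name: durations[name] * slowdown})
    return new_devices, new_durations

if __name__ == "__main__":
    devices = [
        Device("disk", active_w=10, idle_w=4),
        Device("cpu",  active_w=65, idle_w=20),
        Device("net",  active_w=15, idle_w=2),
        Device("gpu",  active_w=300, idle_w=50),
    ]
    durations = {"disk": 2.0, "cpu": 3.0, "net": 1.0, "gpu": 5.0}
    print("baseline step energy (J):", step_energy(devices, durations))

    # Halving disk speed saves power on the disk itself, but the CPU and GPU
    # idle longer waiting on it, so energy per step increases overall.
    slow_devices, slow_durations = throttle(devices, durations, "disk", 2.0)
    print("disk-throttled step energy (J):", step_energy(slow_devices, slow_durations))
```

With these example numbers, the throttled step draws more total energy than the baseline, illustrating the abstract's point that reducing one component's usage in isolation degrades the whole pipeline unless compute and storage are optimized jointly.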

Keywords: Energy-efficient AI Clusters, Carbon Footprint Reduction, Deep Learning Models, Large-scale Training, High-dimensional Data, Cloud Infrastructure Deployment, High-speed Storage, Computational Costs, Energy Optimization, CPU-bound Tasks, Disk-bound Tasks, GPU-bound Tasks, Network Links, Performance Degradation, Environmentally Friendly AI, AI Energy Consumption, AI Training Optimization, Sustainable AI, AI Deployment Challenges, Energy-efficient Storage.


DOI: 10.17148/IJARCCE.2023.121223
