Deploying large language models on diverse computing architectures: A performance evaluation framework

Ayodele Emmanuel Sonuga 1, *, Kingsley David Onyewuchi Ofoegbu 2, Chidiebere Somadina Ike 3 and Samuel Olaoluwa Folorunsho 4

1 Intel Corporation, Hillsboro, Oregon, USA.
2 MegaCode, USA.
3 Atlantic Technological University, Letterkenny, Ireland.
4 Independent Researcher, London, United Kingdom.
 
Review Article
Global Journal of Research in Engineering and Technology, 2024, 02(01), 018–036.
Article DOI: 10.58175/gjret.2024.2.1.0026
Publication history: 
Received on 30 July 2024; revised on 11 September 2024; accepted on 13 September 2024
 
Abstract: 
Deploying large language models (LLMs) across diverse computing architectures is a critical challenge in artificial intelligence, particularly as these models grow in complexity and resource requirements. This review presents a performance evaluation framework designed to systematically assess the deployment of LLMs on various computing architectures, including CPUs, GPUs, TPUs, and specialized accelerators. The framework is structured around key performance metrics such as computational efficiency, latency, throughput, energy consumption, and scalability, and it accounts for the trade-offs associated with different hardware configurations so that deployments can be optimized to meet specific application requirements. The evaluation framework employs a multi-faceted approach, integrating theoretical and empirical analyses to offer comprehensive insight into the performance dynamics of LLMs. This includes benchmarking LLMs under varying workloads, data batch sizes, and precision levels, enabling a nuanced understanding of how these factors influence model performance across different hardware environments. The framework also emphasizes model parallelism and distribution strategies, which are critical for efficiently scaling LLMs on high-performance computing clusters. A significant contribution of the framework is its ability to guide practitioners in selecting the optimal computing architecture for LLM deployment based on application-specific needs, such as low-latency inference for real-time applications or energy-efficient processing for large-scale deployments. It further provides insight into cost-performance trade-offs, offering guidance for balancing the financial implications of different deployment strategies against their performance benefits. Overall, this performance evaluation framework is a valuable tool for researchers and engineers, facilitating the efficient deployment of LLMs on diverse computing architectures; by offering a systematic approach to evaluating and optimizing LLM performance, it supports the ongoing development and application of these models across various domains. The paper evaluates the deployment of LLMs on diverse computing architectures, including x86, ARM, and RISC-V platforms, and discusses strategies for optimizing LLM performance, such as dynamic frequency scaling, core scaling, and memory optimization. The research contributes to an understanding of best practices for deploying AI applications on different architectures, supporting technological innovation and competitiveness.
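
As a purely illustrative companion to the benchmarking methodology summarized above, the following sketch measures mean latency and throughput for a stand-in Transformer workload across batch sizes and precision levels. It assumes PyTorch is available; the proxy model, function names, and parameters (benchmark, make_proxy_model, D_MODEL) are hypothetical placeholders, not artifacts of the paper's framework.

# Minimal benchmarking sketch: latency and throughput per (batch size, precision).
# Assumption: PyTorch is installed; a small Transformer encoder stands in for an LLM.
import time
import torch
import torch.nn as nn

D_MODEL, SEQ_LEN, STEPS = 512, 128, 10
device = "cuda" if torch.cuda.is_available() else "cpu"

def make_proxy_model():
    # Stand-in workload; a real evaluation would load the target LLM instead.
    layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=2)

@torch.no_grad()
def benchmark(batch_sizes, dtypes):
    """Report mean latency (s/batch) and throughput (samples/s) per configuration."""
    for dtype in dtypes:
        model = make_proxy_model().to(device=device, dtype=dtype).eval()
        for bs in batch_sizes:
            x = torch.randn(bs, SEQ_LEN, D_MODEL, device=device, dtype=dtype)
            model(x)  # warm-up pass to exclude one-time allocation costs
            if device == "cuda":
                torch.cuda.synchronize()
            start = time.perf_counter()
            for _ in range(STEPS):
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()
            latency = (time.perf_counter() - start) / STEPS
            print(f"dtype={dtype}, batch={bs}: "
                  f"{latency:.4f} s/batch, {bs / latency:.1f} samples/s")

if __name__ == "__main__":
    # Half precision is only meaningful on accelerators; on CPU, float32 alone suffices.
    dtypes = [torch.float32] + ([torch.float16] if device == "cuda" else [])
    benchmark(batch_sizes=[1, 8, 32], dtypes=dtypes)

Sweeping batch size and precision in this way exposes the latency/throughput trade-off that the framework uses to compare hardware configurations: small batches minimize per-request latency, while larger batches and reduced precision typically raise throughput and energy efficiency on accelerators.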
 
Keywords: 
Large Language Models; Computing Architectures; Performance Evaluation; CPUs; GPUs; TPUs; Accelerators; Scalability; Model Parallelism; Deployment Optimization; Energy Efficiency.
 