Comparison of Benchmarks for Machine Learning Cloud Infrastructures

  • Training neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, Siemens, etc., offer such infrastructure. However, should one choose a cloud infrastructure or an on-premises system for a neural network application, and how can these systems be compared with one another? The recent popularity and widespread use of Deep Learning in various applications have created a need for benchmarking in this field. This paper investigates seven prominent Machine Learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. It shows that their application domains need slightly different resources and argues that no standard benchmark suite is available that addresses these differing needs. We compare these benchmarks and summarize benchmark-related datasets, domains, and metrics. Finally, a concept of an ideal benchmark is sketched.

Metadata
Author: Manav Madan, Christoph Reich
URL: https://www.thinkmind.org/index.php?view=article&articleid=cloud_computing_2021_3_10_20011
ISBN: 978-1-61208-845-7
Parent Title (English): CLOUD COMPUTING 2021 : The Twelfth International Conference on Cloud Computing, GRIDs, and Virtualization, April 18 - 22, 2021, Porto, Portugal
Document Type: Conference Proceeding
Language: English
Year of Completion: 2021
Release Date: 2021/11/19
First Page: 41
Last Page: 47
Open-Access-Status: Open Access
Licence (German): Urheberrechtlich geschützt (copyright protected)