How to put a supercomputer in the hands of all scientists
Understand how fast networks accelerate innovation, and learn how to configure a High Performance Computing (HPC) cluster in a flexible way.
Written by Paulo Aragão, Senior Solutions Architect, Amazon Web Services (AWS) | Nina Vogl, Senior Specialist HPC Solutions Architect, Global Public Sector AWS | José Cuéncar, Senior Solutions Architect, Global Public Sector AWS
Amazon Web Services (AWS) provides IT resources on demand over the Internet and customers pay only for what they use. Rather than purchasing, managing, and maintaining their own data centers and servers, organizations can purchase technology such as computing power, storage, databases, and other services as needed.
AWS has attracted particular interest for High Performance Computing (HPC), because the rapid advance of technology quickly renders investments in proprietary datacenter solutions obsolete. Researchers, engineers, geophysicists, data scientists, and other HPC users know the difficulties well: long queues to access HPC environments; delays in execution and in accessing results, which slow product development and erode competitiveness in the market; the complexity and cost of maintaining the operational environment; and difficulty achieving the best performance when the hardware is not specialized for the problem at hand, among others. In addition, smaller institutions and companies often struggle to secure the budget needed to build a first HPC environment and to hire the right people to manage it.
In this article, we’ll explain how to use AWS ParallelCluster to create and configure an HPC cluster in a flexible, elastic, and repeatable way, and how to manage your HPC software in the same way. AWS ParallelCluster lets HPC users create ephemeral or persistent clusters, adapt them to their budgets, and reduce operational complexity by automating and abstracting much of the infrastructure deployment. Finally, the cloud offers agile access to the newest technologies, making it easy to experiment with different types of hardware, accelerating access to large computing resources, and helping to reduce time-to-market, all at a lower cost than running your own datacenter.
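As a concrete sketch of that lifecycle, the AWS ParallelCluster v3 command line covers creation, access, and teardown. The cluster name, key pair path, and configuration file name below are illustrative placeholders, and the commands assume Python 3 and valid AWS credentials are already configured:

```shell
# Install the ParallelCluster v3 command-line interface.
pip3 install "aws-parallelcluster"

# Interactively generate a cluster configuration file
# (region, scheduler, instance types, SSH key pair).
pcluster configure --config cluster-config.yaml

# Create the cluster; "demo-hpc" is an illustrative name.
pcluster create-cluster \
    --cluster-name demo-hpc \
    --cluster-configuration cluster-config.yaml

# Log in to the head node once the cluster is up.
pcluster ssh --cluster-name demo-hpc -i ~/.ssh/my-key.pem

# Tear the cluster down when the work is done, so you stop paying for it.
pcluster delete-cluster --cluster-name demo-hpc
```

Because the configuration file captures the whole cluster definition, the same file can be reused to recreate an identical cluster later, which is what makes ephemeral, pay-per-use clusters practical.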
The AWS Cloud provides access to virtually unlimited infrastructure and offers many options suitable for HPC workloads, such as different microprocessor architectures (Intel, AMD, ARM64), GPGPU acceleration (NVIDIA and AMD), FPGAs, machine-learning inference accelerators, and more. The tools we introduce here let you easily configure HPC clusters customized to your needs, exactly when you need them. We’ll also show how to make your favorite HPC application available in these environments, allowing you to run workloads such as computational chemistry, genome processing, computational fluid dynamics, transcoding, encoding, modeling, and timing simulation, to name just a few.
We’ll use an open source application, Palabos, as an example to show how easy it is to install HPC applications and then reuse them in new clusters. Palabos is a computational fluid dynamics (CFD) application based on the Lattice Boltzmann (LB) method. We will walk through downloading it, installing the software on a separate disk, and then reusing that installation on a new cluster. Palabos is just one example; you can replace it with any of your favorite HPC applications.
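The install-on-a-separate-disk idea can be sketched as follows. This is a hedged example, not the article's exact procedure: `/shared` is assumed to be a shared volume mounted on the cluster (for instance a volume managed by AWS ParallelCluster), and the example directory is one of the show cases bundled with the public Palabos repository:

```shell
# Clone Palabos onto the shared volume so the installation outlives
# any single compute node (assumes git and network access on the head node).
cd /shared
git clone https://gitlab.com/unigespc/palabos.git

# Build one of the bundled show cases with CMake
# (assumes cmake, make, and an MPI-enabled C++ compiler are installed).
cd palabos/examples/showCases/cylinder2d
mkdir -p build && cd build
cmake .. && make -j "$(nproc)"
```

Because the binaries live on `/shared` rather than on a node's local disk, a snapshot of that volume can be attached to a future cluster, so the software is ready to run without reinstalling anything.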
Interested? Then check out the full article here.
Also check: It’s always time to become a Cloud Mage
This is a production of Amazon Web Services (AWS), a partner of 100 Open Startups, and will appear monthly here on the blog with articles aimed at startups! AWS will also be present throughout the year with special content for startups in all editions of Oiweek and exclusive opportunities on the 100 Open Startups platform!