ARM Servers on AWS: How to Save up to 30%

OpsWorks Co.
6 min read · Nov 18, 2021


Amazon’s early attempts to bring alternative processors into cloud computing were not particularly impressive: the first AWS Arm instances did not stand out in terms of performance. Now, however, the company is set to change the industry with its new-generation Graviton2 processors built on the ARM Neoverse N1 architecture. Let’s look at how they can cut your maintenance costs.

What’s AWS Graviton

Arm’s push into today’s high-density data centers has been very successful, especially since AWS started building its own Arm processors. Arm-based instances are already available across several AWS services, including Amazon Elastic Compute Cloud (EC2), and AWS Compute Optimizer provides machine-learning-powered instance recommendations for specific workloads. Arm itself uses Databricks on AWS to develop and run its machine learning tools.
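As an illustration of the Compute Optimizer angle, here is a minimal boto3 sketch that lists its EC2 rightsizing recommendations. It assumes the account has already opted in to Compute Optimizer and that credentials and a region are configured; the printed fields are kept deliberately simple and this is illustrative rather than production code.

```python
import boto3

# Minimal sketch: list Compute Optimizer EC2 recommendations.
# Assumes the account has opted in to Compute Optimizer.
optimizer = boto3.client("compute-optimizer", region_name="us-east-1")

response = optimizer.get_ec2_instance_recommendations(maxResults=20)

for rec in response.get("instanceRecommendations", []):
    current = rec.get("currentInstanceType")
    finding = rec.get("finding")  # e.g. OVER_PROVISIONED, UNDER_PROVISIONED, OPTIMIZED
    options = rec.get("recommendationOptions", [])
    suggested = options[0]["instanceType"] if options else "n/a"
    print(f"{rec.get('instanceArn')}: {current} ({finding}) -> suggested: {suggested}")
```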

The new Graviton2 SoCs are designed to replace the previous generation of Graviton processors, single-chip designs with up to 16 cores. Like the original Graviton, the Graviton2 processors are designed by Amazon’s own engineers, this time around Arm’s Neoverse N1 cores. The collaboration with AWS is mutually beneficial: Arm’s Neoverse V1 and N2 cores were announced only a few months ago and promise 40–50% gains over Neoverse N1, and the V1 cores add Scalable Vector Extension (SVE) support for high-performance computing.

The new Graviton2 processors deliver up to a 7x performance improvement over the older A1 instances built on the first Graviton processor. Performance per core has doubled, and memory access is five times faster. AWS is promoting Graviton2-based ElastiCache as an upgrade on the grounds that it is faster and offers more bandwidth than comparable x86-based instances. The new instances run the latest Amazon Linux 2 by default, and the latest Redis and Memcached versions are supported, with seamless upgrades from previous-generation instances to ease migration.

ElastiCache is a good example of AWS behaving more like a SaaS provider than a pure cloud provider: it lets users run managed Redis or Memcached stores in the Amazon cloud. Back in December 2019, AWS tested Memcached on Arm and found throughput 43% higher and latency significantly lower, so it was only a matter of time before ElastiCache ran on AWS’s own Graviton2 processors. It took AWS a little less than a year to get there, and Graviton2-based M6g and R6g instances now support ElastiCache, giving it a performance advantage of up to 45% over previous-generation instances.
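For teams already on ElastiCache, the migration itself is mostly a node-type change. The boto3 sketch below shows the general idea; the replication group ID is a placeholder, and you should confirm that your Redis engine version supports the M6g/R6g node types before applying it.

```python
import boto3

# Hedged sketch: move an existing Redis replication group to a
# Graviton2-based node type. "my-redis-group" is a placeholder ID.
elasticache = boto3.client("elasticache", region_name="us-east-1")

elasticache.modify_replication_group(
    ReplicationGroupId="my-redis-group",   # hypothetical replication group
    CacheNodeType="cache.m6g.large",       # Graviton2-based node type
    ApplyImmediately=True,                 # or defer to the maintenance window
)
```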

The Features of Graviton2

Graviton2 is manufactured on a 7 nm process and contains 30 billion transistors; a die with that much logic can reach roughly 350 mm². Each core has 1 MB of L2 cache, and the SoC carries 32 MB of shared L3 cache. A mesh interconnect links the cores with a total bandwidth of 2 TB/s.

The Graviton2 memory subsystem consists of eight channels supporting DDR4-3200 modules, with memory traffic encrypted in hardware using AES-256. The maximum memory capacity of Graviton2 instances reaches 512 GB, and the peripheral interface provides 64 PCIe 4.0 lanes.

There are three variants of Graviton2-optimized instances: general-purpose M6g, memory-optimized R6g, and compute-optimized C6g. Instances are networked at up to 25 Gbps, with up to 18 Gbps of EBS (Elastic Block Store) bandwidth. The general-purpose M6g family is available in sizes from 1 to 64 vCPUs. Unlike x86-based instances, which top out at 96 vCPUs delivered as SMT threads, the Graviton2 platform uses a single socket with one 64-core processor and no SMT, so the user gets 64 physical cores rather than 32 cores with two threads each.
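To try the M6g family yourself, the main thing to remember is to pair it with an arm64 AMI. A minimal boto3 sketch, assuming us-east-1, default networking, and Amazon Linux 2, could look like this:

```python
import boto3

# Minimal sketch: resolve the latest arm64 Amazon Linux 2 AMI from the
# public SSM parameter and launch a single m6g.large instance.
ssm = boto3.client("ssm", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2"
)["Parameter"]["Value"]

ec2.run_instances(
    ImageId=ami_id,
    InstanceType="m6g.large",   # Graviton2-based general-purpose instance
    MinCount=1,
    MaxCount=1,
    # KeyName, security groups, and subnet omitted for brevity
)
```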

How AWS Graviton Helps Save Money

AWS Graviton processors are designed in-house by Amazon Web Services to deliver the best cost-efficiency for EC2 cloud computing. Amazon EC2 offers the broadest selection of compute instances, many of which use the latest Intel and AMD processors.

AWS Graviton2 processors are considerably faster and more capable than their first-generation predecessors and deliver up to 40% better price-performance than comparable x86-based instances. Compared with the first Graviton, they offer up to 7x the performance, 4x the compute cores, 2x the cache, and 5x faster memory.
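The cost half of that claim is easy to check against the public Price List API. The rough boto3 sketch below compares on-demand Linux prices for a Graviton2 instance and its x86 counterpart; the instance types and region are only examples, and the JSON traversal simply picks the first on-demand price dimension returned.

```python
import json
import boto3

# Rough sketch: compare on-demand Linux prices via the AWS Price List API.
# The Pricing endpoint lives in us-east-1 regardless of the target region.
pricing = boto3.client("pricing", region_name="us-east-1")

def on_demand_price(instance_type: str) -> float:
    """Return the hourly on-demand USD price for a shared-tenancy Linux instance."""
    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    product = json.loads(resp["PriceList"][0])
    on_demand = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(on_demand["priceDimensions"].values()))
    return float(dimension["pricePerUnit"]["USD"])

for itype in ("m6g.large", "m5.large"):
    print(itype, on_demand_price(itype), "USD/hour")
```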

In addition to cutting costs, Arm shortens development and time-to-market cycles, which should have a positive impact on the bottom line.

Let’s compare Graviton2 benchmarks with its two main competitors.

(Benchmark comparison chart of Graviton2 against competing x86 server processors; source: AnandTech.)

AWS Graviton processors broaden this choice further and let customers save money while managing their workloads. The first generation powers Amazon EC2 A1 instances, the world’s first Arm-based server instances, which can significantly cut the cost of general-purpose, scale-out applications: web servers, containerized microservices, log and data processing, and other workloads that run well on smaller cores with modest memory.

AWS Graviton Benefits

Because Arm originated as a mobile architecture, users tend to see Arm cloud servers primarily as a way to cut power consumption. AWS, however, is committed to combining Arm’s advantages, such as high core counts and low power draw, into a genuinely resource-efficient platform.

From the standpoint of economic efficiency, the Graviton2 platform is unmatched. Depending on the task, a Graviton2 SoC can be 50% or more faster than x86-compatible Intel processors, and in terms of performance per dollar its advantage over Intel-based solutions reaches roughly 40%.

The top benefits of integrating AWS Graviton in your company or organization are decreased costs, lower latency, greater scalability, better availability, and enhanced security:

  1. Cost-efficiency. The AWS Arm processor architecture reduces power consumption costs while keeping performance at a satisfactory level.
  2. Multisystem compatibility. The processors are built on the 64-bit Arm core architecture, and multiple Linux distributions run on that configuration, giving clients a wider range of choices (see the sketch after this list).
  3. General-purpose design. The AWS Graviton core is designed to boost server performance and improve microservices and cluster computing.
  4. Efficient CPU capacity. The Graviton processor offers nearly 3.45% greater operating efficiency than classical architectures, as well as simpler deployment compared to x86 processors.
  5. Compute-intensive design. AWS Graviton handles compute-heavy workloads such as HD video encoding, gaming, and CPU-based machine learning.
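As a small companion to the compatibility point above, a deployment script can verify at runtime that it really is on a 64-bit Arm host before enabling any aarch64-specific code paths; this is only a trivial sketch.

```python
import platform

# Trivial sketch: confirm the process is running on 64-bit Arm
# (e.g. a Graviton2 instance) before enabling aarch64-specific paths.
machine = platform.machine()

if machine == "aarch64":
    print("Running on 64-bit Arm; arm64 builds and optimizations apply.")
else:
    print(f"Running on {machine}; falling back to x86_64 artifacts.")
```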

Graviton2 vCPU Core

Apart from the instance type, the most important metric defining an instance’s strength is its vCPU count. Graviton2-based instances range from 1 to 64 vCPUs, while prominent x86-based instances go up to 96. The difference is that Graviton2 uses a single 64-core socket without SMT, so each of those 64 vCPUs is a full physical core rather than a hardware thread.
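You can see this vCPU-versus-core difference directly in the EC2 API. The short boto3 sketch below queries instance-type metadata for one Graviton2 size and one comparable x86 size (both chosen purely as examples).

```python
import boto3

# Sketch: show vCPUs vs. physical cores. On Graviton2 (m6g) every vCPU is a
# physical core; on x86-based m5, a vCPU is an SMT thread (two per core).
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(InstanceTypes=["m6g.16xlarge", "m5.24xlarge"])

for itype in resp["InstanceTypes"]:
    vcpu = itype["VCpuInfo"]
    print(
        itype["InstanceType"],
        f"vCPUs={vcpu['DefaultVCpus']}",
        f"cores={vcpu['DefaultCores']}",
        f"threads/core={vcpu['DefaultThreadsPerCore']}",
    )
```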

In typical configurations, Graviton2 performs about 20% better, and the power consumption of the Arm cores is roughly half that of comparable cores. Since prices are also about 20% lower, the overall price-performance improvement reaches around 40%.

Honeycomb, a SaaS company, switched to Graviton2 to reduce operating expenses. Its engineers can now run the same ingest workload on 40 Graviton2 instances instead of the 70 that were needed on x86-64, with no loss of performance. The company also raised CPU utilization slightly, to 50–60% instead of 45%, without risking overload when usage spikes.

Arm Beyond AWS

AWS is the largest provider of Arm cloud servers so far, but the space will not be limited to AWS alone. Oracle offers both fully virtualized and bare-metal Arm systems for infrastructure scenarios such as transcoding or running Kubernetes clusters.

Microsoft has been building Windows Server systems on Arm processors for Azure since 2017, which means Azure services can now reach a broader audience of cloud customers. The Project Olympus OCP chassis used in Microsoft Azure can accommodate either Intel or Arm motherboards.

For developers, having both Windows and macOS on Arm silicon could be a watershed moment for wider use of Arm-based servers. Developers will be able to write and test code on their local Arm devices before handing it to CI/CD platforms that ultimately push it to production.

Final Word

AWS is now telling its customers that it can deliver the highest quality of service on its own processors, which makes it likely that AWS will extend this practice to other workloads. Its head start will only matter more as other major cloud operators begin to adopt Arm-based solutions of their own.

As an AWS partner, OpsWorks Co. helps companies decrease their AWS cloud costs by up to 30%.
