Arm Shakes Up the Future of Computational Storage With New 64-Bit Processor
The new Arm Cortex-R82 looks to address the imminent challenges of mass data creation and processing needs.
Data creation is at an all-time high, and emerging technologies like IoT are expected only to accelerate this trend. In fact, Forbes analyst
Tom Coughlin predicts that by 2025, there will be over 79 zettabytes of data generated by IoT devices alone.
Annual data predictions. Image used courtesy of Forbes
One major challenge with this massive amount of data is the memory limitations in most computer architectures. Data has to move in
and out of memory and to the processing units, causing a bottleneck in system performance.
Recently, Arm announced the release of its newest processor: the Arm Cortex-R82. This processor is the company's first 64-bit, Linux-capable
Cortex-R processor and is designed to accelerate the development and deployment of next-generation enterprise and computational storage solutions.
What problems will this new processor address?
The Problem With Computational Storage
Von Neumann architectures dictate that data always has to move between the CPU and storage (memory), causing a delay in response time for
input queries. With massive amounts of data being produced and accessed, along with the real-time compute demands of AI/ML, this bottleneck
can pose a severe challenge.
Computational storage vs. traditional architecture. Image used courtesy of Tech Target
One solution that engineers are working toward is computational storage. Computational storage moves processing closer to, or even inside,
storage devices. This technique addresses the real-time processing requirements of AI/ML applications by reducing resource consumption and
costs while achieving higher throughput for latency-sensitive workloads.
Additionally, computational storage minimizes data movement energy, which normally consumes vast amounts of power in data centers.
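As a rough illustration, the difference between the two approaches can be sketched in a few lines of Python. The names and data here are entirely hypothetical (this is not a real computational-storage API); the point is simply that in-storage processing moves far fewer records across the bus:

```python
# Illustrative sketch: host-side filtering moves every record to the host,
# while in-storage filtering runs the query next to the data and returns
# only the matches. All names and data are hypothetical.

RECORDS = [{"id": i, "temp": 20 + (i % 15)} for i in range(10_000)]

def host_side_filter(storage, threshold):
    """Traditional path: transfer every record to the host, then filter."""
    transferred = list(storage)          # all 10,000 records cross the bus
    matches = [r for r in transferred if r["temp"] > threshold]
    return matches, len(transferred)

def in_storage_filter(storage, threshold):
    """Computational storage path: filter runs inside the storage device."""
    matches = [r for r in storage if r["temp"] > threshold]
    return matches, len(matches)         # only the matches cross the bus

host_matches, host_moved = host_side_filter(RECORDS, 30)
dev_matches, dev_moved = in_storage_filter(RECORDS, 30)

assert host_matches == dev_matches       # same answer either way
print(f"host path moved {host_moved} records; in-storage moved {dev_moved}")
```

The results are identical, but the in-storage path ships only the matching records to the host, which is where the energy and latency savings come from.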
New Arm Processor For Computational Storage
Among the many features of Arm's new processor is the ability for the CPU's cores to be individually and dynamically assigned to either a
memory protection unit (MPU) or a memory management unit (MMU).
This theoretically affords storage controllers the ability to operate with different profiles during peak and off-peak hours, reassigning cores
from real-time traditional memory tasks to computational storage tasks as needed.
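A storage controller's peak/off-peak reconfiguration might be modeled loosely like this. The core count, the policy, and every name below are invented for illustration; this is a toy scheduler, not Arm firmware:

```python
# Toy model of dynamic core assignment on a hypothetical 8-core controller.
# "mpu_realtime" cores handle real-time storage traffic; "mmu_linux" cores
# run Linux workloads such as in-storage ML. Policy is illustrative only.

NUM_CORES = 8

def assign_cores(io_load):
    """Give more cores to real-time I/O when traffic is heavy (peak hours),
    and free cores for Linux/computational tasks when traffic is light."""
    realtime = max(1, round(io_load * NUM_CORES))   # keep at least one RT core
    return {"mpu_realtime": realtime, "mmu_linux": NUM_CORES - realtime}

print(assign_cores(0.9))  # peak hours: most cores on real-time storage work
print(assign_cores(0.2))  # off-peak: most cores freed for Linux workloads
```

Under a heavy 90% I/O load the toy policy keeps seven cores on real-time work; at 20% load it frees six cores for computational tasks, mirroring the day/night profile switch described above.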
The Arm Cortex-R82 has the ability to adjust the type of workload running on the storage controller based on external demands. Image used courtesy of Arm
Arm says the new Cortex-R82 also provides up to a twofold performance improvement over previous Cortex-R generations, depending on the
workload. This allows storage applications to run workloads (like ML/AI) at lower latency. Arm is also extending its optional
Neon technology to provide additional acceleration.
Notably, the Cortex-R82 is 64-bit, providing access to up to 1TB of DRAM for advanced data processing in storage applications.
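The 1TB figure follows from simple address-space arithmetic. Assuming a 40-bit physical address space (my reading of the 1TB claim, not a figure stated in this article), the capacity works out as:

```python
# Address-space arithmetic behind the 1TB DRAM figure: a 40-bit physical
# address reaches 2**40 bytes (1 TiB), versus 2**32 bytes (4 GiB) for a
# plain 32-bit address space. The 40-bit width is an assumption here.
addr_bits_64 = 40          # assumed physical address width behind "1TB"
addr_bits_32 = 32          # classic 32-bit limit

tib = 2 ** addr_bits_64    # 1,099,511,627,776 bytes
gib4 = 2 ** addr_bits_32   # 4,294,967,296 bytes

print(f"{tib // 1024**4} TiB vs {gib4 // 1024**3} GiB")
```

That 256x jump in addressable memory is what lets a single storage controller hold large working sets, such as ML models and their data, entirely in DRAM.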
The Future of AI/ML Workloads
Arm is working in the right direction with the release of the Cortex-R82 processor.
ML/AI and IoT applications are going to be severely limited by the memory bottleneck present in von Neumann architectures.
An immediate and obvious solution is to remove such limitations and design devices that can perform computations in memory. This processor
may allow designers to do just that.
Neil Werdmuller, director of storage solutions at Arm, explains the company's rationale in building the Cortex-R82: “In a world of billions of
connected devices, data processing can no longer only happen in the cloud. Cortex-R82 will help to ensure companies can generate insights and
extract the most value out of their future IoT deployments more efficiently and securely.”
Featured image (modified) used courtesy of Arm