
EQS-News: WEKA / Key word(s): Product Launch
WEKA Breaks The AI Memory Barrier With Augmented Memory Grid on NeuralMesh

18.11.2025 / 16:10 CET/CEST
The issuer is solely responsible for the content of this announcement.


Breakthrough Memory Extension Technology, Validated on Oracle Cloud Infrastructure, Democratizes Inference, Delivering 1000x More Memory and 20x Faster Time to First Token for NeuralMesh Customers

ST. LOUIS and CAMPBELL, Calif., Nov. 18, 2025 /PRNewswire/ -- From SC25: WEKA, the AI storage company, today announced the commercial availability of Augmented Memory Grid™ on NeuralMesh™, a revolutionary memory extension technology that solves the fundamental bottleneck throttling AI innovation: GPU memory. Validated on Oracle Cloud Infrastructure (OCI) and other leading AI cloud platforms, Augmented Memory Grid extends GPU memory capacity by 1000x, from gigabytes to petabytes, while reducing time-to-first-token by up to 20x. This breakthrough enables AI builders to streamline long-context reasoning and agentic AI workflows, dramatically improving the efficiency of inference workloads that have previously been challenging to scale.

WEKA's breakthrough Augmented Memory Grid is now available on NeuralMesh.

From Innovation to Production: Solving the AI Memory Wall
Since its introduction at NVIDIA GTC 2025, Augmented Memory Grid has been hardened, tested, and validated in leading production AI cloud environments, starting with OCI. The results have confirmed what early testing indicated: as AI systems evolve toward longer, more complex interactions—from coding copilots to research assistants and reasoning agents—memory has become the critical bottleneck limiting inference performance and economics.

"We're bringing to market a proven solution validated with Oracle Cloud Infrastructure and other leading AI infrastructure platforms," said Liran Zvibel, co-founder and CEO at WEKA. "Scaling agentic AI isn't just about raw compute—it's about solving the memory wall with intelligent data pathways. Augmented Memory Grid enables customers to run more tokens per GPU, support more concurrent users, and unlock entirely new service models for long-context workloads. OCI's bare metal infrastructure with high-performance RDMA networking and GPUDirect Storage capabilities makes it a unique platform for accelerating inference at scale."

Today's inference systems face a fundamental constraint: GPU high-bandwidth memory (HBM) is extraordinarily fast but limited in capacity, while system DRAM offers more space but far less bandwidth. Once both tiers fill, key-value cache (KV cache) entries are evicted and GPUs are forced to recompute tokens they've already processed—wasting cycles, power, and time.
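
To make the capacity gap concrete, here is a back-of-envelope sizing of the KV cache for a long-context session; the model shape and precision below are illustrative assumptions, not figures from WEKA:

```python
# Back-of-envelope KV cache sizing. The model shape is an assumed
# 70B-class transformer with grouped-query attention, not a WEKA figure.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_value = 2                      # fp16/bf16

# K and V, per layer, per token:
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
context_tokens = 128_000
session_gb = kv_bytes_per_token * context_tokens / 1e9

print(f"{kv_bytes_per_token / 1e6:.2f} MB of KV cache per token")
print(f"{session_gb:.1f} GB of KV cache for one 128k-token session")
# Next to tens of GB of model weights, an 80 GB HBM GPU fits only one or
# two such sessions, so multi-tenant serving evicts cache entries quickly.
```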

WEKA's Augmented Memory Grid breaks through the GPU memory wall by creating a high-speed bridge between GPU memory (typically HBM) and flash-based storage. It continuously streams key-value cache data between GPU memory and WEKA's token warehouse, using RDMA and NVIDIA Magnum IO GPUDirect Storage to achieve near-memory speeds. This allows large language models and agentic AI systems to access far more context without recomputing KV cache for previously processed tokens, dramatically improving efficiency and scalability.
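
The mechanism described above amounts to checking a flash-backed tier for an already-computed prefix before falling back to recomputation. The sketch below illustrates that decision path; all class and function names are hypothetical stand-ins invented for this sketch, not WEKA's actual API:

```python
# Illustrative sketch of the fetch-instead-of-recompute request path.
from typing import Optional

class TokenWarehouse:
    """Stand-in for the flash-backed tier reached via RDMA/GPUDirect."""
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def put(self, prefix_hash: str, kv_blocks: bytes) -> None:
        # In a real system this would be an asynchronous DMA write to flash.
        self._store[prefix_hash] = kv_blocks

    def get(self, prefix_hash: str) -> Optional[bytes]:
        return self._store.get(prefix_hash)

def prefill(engine, warehouse: TokenWarehouse, prefix_hash: str, tokens):
    """Reuse cached KV for a known prefix; recompute only on a miss."""
    cached = warehouse.get(prefix_hash)
    if cached is not None:
        return engine.load_kv(cached)        # stream KV back into HBM
    kv = engine.compute_prefill(tokens)      # the expensive path being avoided
    warehouse.put(prefix_hash, engine.dump_kv(kv))
    return kv
```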

OCI-Tested Performance and Ecosystem Integration
Independent testing, including validation on OCI, has confirmed:

  • 1000x more KV cache capacity while maintaining near-memory performance.
  • 20x faster time to first token when processing 128,000 tokens compared to recomputing the prefill phase (a back-of-envelope sketch follows this list).
  • 7.5M read IOPS and 1.0M write IOPS in an eight-node cluster.
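
The time-to-first-token figure can be sanity-checked with rough arithmetic: recomputing a 128k-token prefill is compute-bound, while restoring cached KV is bandwidth-bound. Every number in this sketch is an assumption chosen for illustration, not a measured WEKA or OCI result:

```python
# Illustrative arithmetic behind a ~20x time-to-first-token gain on a
# 128k-token prompt. All inputs are assumptions for the sketch.
params = 70e9                         # assumed model size
tokens = 128_000
flops = 2 * params * tokens           # ~2 FLOPs per parameter per token (forward)
cluster_flops = 8 * 0.4 * 989e12      # 8 GPUs at ~40% utilization of an
                                      # assumed ~989 TFLOPS (H100-class BF16)
t_recompute = flops / cluster_flops   # seconds to redo the prefill

kv_gb = 42                            # KV size from the sizing sketch above
fetch_gbps = 150                      # assumed aggregate read bandwidth, GB/s
t_fetch = kv_gb / fetch_gbps          # seconds to stream cached KV back in

print(f"recompute: {t_recompute:.1f} s, fetch: {t_fetch:.2f} s, "
      f"speedup: ~{t_recompute / t_fetch:.0f}x")
```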

For AI cloud providers, model providers, and enterprise AI builders, these performance gains fundamentally change inference economics. By eliminating redundant prefill operations and sustaining high cache hit rates, organizations can maximize tenant density, reduce idle GPU cycles, and dramatically improve ROI per kilowatt-hour. Model providers can now profitably serve long-context models, slashing input token costs and enabling entirely new business models around persistent, stateful AI sessions.
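
A similar back-of-envelope view shows how sustained cache hit rates translate into freed GPU time and lower per-request prefill cost; all inputs below are assumed for illustration only:

```python
# Sketch of the serving-economics effect of high KV cache hit rates.
gpu_node_usd_per_hour = 8 * 3.0       # 8 GPUs at an assumed $3/GPU-hour
t_recompute, t_fetch = 5.7, 0.28      # seconds, from the TTFT sketch above
hit_rate = 0.8                        # assumed share of requests reusing a prefix

avg_prefill = hit_rate * t_fetch + (1 - hit_rate) * t_recompute
freed = t_recompute - avg_prefill     # GPU-node-seconds freed per request
prefill_cost = gpu_node_usd_per_hour / 3600 * avg_prefill

print(f"avg prefill: {avg_prefill:.2f} s vs {t_recompute:.1f} s cold")
print(f"~{freed:.1f} GPU-node-seconds freed per request, "
      f"prefill cost ${prefill_cost:.4f}/request")
```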

The move to commercial availability reflects deep collaboration with leading AI infrastructure partners, including NVIDIA and Oracle. The solution integrates tightly with NVIDIA GPUDirect Storage, NVIDIA Dynamo, and NVIDIA NIXL, and WEKA has open-sourced a dedicated plugin for the NVIDIA Inference Transfer Library (NIXL). OCI's bare-metal GPU compute with RDMA networking and NVIDIA GPUDirect Storage capabilities provides the high-performance foundation WEKA needs to deliver Augmented Memory Grid without performance compromises in cloud-based AI deployments.
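
WEKA's actual NIXL plugin is open source and should be consulted directly; purely to illustrate the idea of a pluggable transfer backend, here is a hypothetical interface sketch whose names are invented for this example and do not reflect the real NIXL API:

```python
# Hypothetical shape of a transfer-backend plugin, showing how an inference
# transfer library can route KV blocks to a flash tier. NOT the real NIXL
# interface; every name below is invented for the sketch.
from abc import ABC, abstractmethod

class TransferBackend(ABC):
    @abstractmethod
    def register(self, buf: bytearray) -> int:
        """Register a buffer for zero-copy-style transfers; return a handle."""

    @abstractmethod
    def write(self, key: str, handle: int) -> None:
        """Push a registered buffer's contents out to the backing tier."""

    @abstractmethod
    def read(self, key: str, handle: int) -> None:
        """Pull a block from the backing tier into a registered buffer."""

class ToyFlashBackend(TransferBackend):
    """In-process stand-in for a flash-backed token warehouse."""
    def __init__(self) -> None:
        self._buffers: dict[int, bytearray] = {}
        self._tier: dict[str, bytes] = {}

    def register(self, buf: bytearray) -> int:
        handle = id(buf)
        self._buffers[handle] = buf       # keep a reference, no copy
        return handle

    def write(self, key: str, handle: int) -> None:
        self._tier[key] = bytes(self._buffers[handle])

    def read(self, key: str, handle: int) -> None:
        self._buffers[handle][:] = self._tier[key]
```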

"The economics of large-scale inference are a major consideration for enterprises," said Nathan Thomas, vice president, multicloud, Oracle Cloud Infrastructure. "WEKA's Augmented Memory Grid directly confronts this challenge. "The 20x improvement in time-to-first-token we observed in joint testing on OCI isn't just a performance metric; it fundamentally reshapes the cost structure of running AI workloads. For our customers, this makes deploying the next generation of AI easier and cheaper."

Commercial Availability
Augmented Memory Grid is now included as a feature of NeuralMesh deployments and is available on Oracle Cloud Marketplace, with support for additional cloud platforms coming soon.

Organizations interested in deploying Augmented Memory Grid should visit WEKA's Augmented Memory Grid page to learn more about the solution and the qualification criteria.

About WEKA
WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh™, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, dynamically adapting to AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize GPUs, scale AI faster, and reduce innovation costs. Learn more at www.weka.io or connect with us on LinkedIn and X.

WEKA and the W logo are registered trademarks of WekaIO, Inc. Other trade names herein may be trademarks of their respective owners.

WEKA: The Foundation for Enterprise AI

Photo - https://mma.prnewswire.com/media/2825138/PR_WEKA_Augmented_Memory_Grid.jpg
Logo - https://mma.prnewswire.com/media/1796062/WEKA_v1_Logo_new.jpg

View original content: https://www.prnewswire.co.uk/news-releases/weka-breaks-the-ai-memory-barrier-with-augmented-memory-grid-on-neuralmesh-302618216.html



18.11.2025 CET/CEST Dissemination of a Corporate News, transmitted by EQS News - a service of EQS Group.