At CES 2026, Jensen Huang made many announcements tied to the introduction of Nvidia’s Vera Rubin AI architecture and the open-source AI foundation models the company has created for various applications, including Alpamayo for automotive. One interesting announcement from Nvidia was an Inference Context Memory Storage Platform, or ICMSP, built around the Nvidia BlueField-4 intelligent network interface card, or NIC. This is described as a new kind of AI-native storage infrastructure designed for gigascale inference, intended to accelerate and scale agentic AI. An image of the BlueField-4 NIC is shown below.
So, what is context memory? During an interaction with an AI system, data is generated about that interaction; this is the context. If that context is saved, it can make future interactions with the AI more consistent, coherent and personalized. It allows the AI to remember details across conversations, learn a user’s unique patterns, and understand complex, multi-turn interactions by storing relevant data beyond the immediate prompt. By placing this context information in long-term storage, it can be retained for future interactions.
Beyond its usefulness for continuity, context memory storage can also reduce the computation an AI system must perform for individual queries, since data is retrieved from storage rather than regenerated or kept in expensive and limited HBM. This saves energy and allows more efficient use of GPU processing and memory by reusing the context from prior interactions. This data takes the form of a key-value, or KV, cache.
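As a rough illustration of the idea, the sketch below (with hypothetical names, not Nvidia’s actual ICMSP interface) shows how an inference server might check a context store for a previously computed KV cache keyed by a conversation’s token prefix, and only pay the expensive prefill cost when nothing is cached.

```python
# Conceptual sketch of KV-cache reuse; names and structure are illustrative,
# not Nvidia's actual ICMSP interface.
import hashlib
from typing import Dict, List, Tuple

# A "KV cache" here is just a list of (key, value) pairs per token;
# plain floats stand in for real GPU tensors.
KVCache = List[Tuple[float, float]]

class ContextMemoryStore:
    """Stand-in for a storage tier holding KV caches keyed by prompt prefix."""

    def __init__(self) -> None:
        self._store: Dict[str, KVCache] = {}

    @staticmethod
    def _key(prefix_tokens: List[int]) -> str:
        # Key the cache by a hash of the conversation's token prefix.
        return hashlib.sha256(str(prefix_tokens).encode("utf-8")).hexdigest()

    def load(self, prefix_tokens: List[int]):
        return self._store.get(self._key(prefix_tokens))

    def save(self, prefix_tokens: List[int], kv: KVCache) -> None:
        self._store[self._key(prefix_tokens)] = kv

def compute_kv(tokens: List[int]) -> KVCache:
    """Placeholder for the expensive prefill step that builds the KV cache."""
    return [(float(t), float(t) * 0.5) for t in tokens]

def prefill_with_reuse(store: ContextMemoryStore, tokens: List[int]) -> KVCache:
    cached = store.load(tokens)
    if cached is not None:
        return cached              # reuse: no recomputation, no extra HBM pressure
    kv = compute_kv(tokens)        # otherwise pay the full prefill cost once
    store.save(tokens, kv)         # ...and persist it for future turns
    return kv

if __name__ == "__main__":
    store = ContextMemoryStore()
    turn_1 = [101, 42, 7, 9]
    prefill_with_reuse(store, turn_1)          # computed and stored
    kv = prefill_with_reuse(store, turn_1)     # later turn: retrieved, not recomputed
    print(f"reused KV cache for {len(kv)} tokens")
```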
I spoke with Kevin Deierling, an executive from Nvidia, about the BlueField smart Ethernet NIC, or data processing unit (DPU). He told me that the ICMSP is a network storage system that can consist of SSDs and/or HDDs for storing and accessing the context memory data. It thus forms a new storage tier between traditional enterprise storage and the HBM DRAM that holds the data being processed by the GPUs.
The Nvidia ICMSP will provide 16TB of storage per GPU, which can enable petabytes of shared context across a GPU cluster to support very large workloads. Throughput is targeted at 800Gb/s through the BlueField-4 board. The ICMSP retains an interesting form of data in that, if needed, it can be regenerated, unlike the data typically stored in enterprise storage systems. This means that traditional data retention requirements, such as redundancy, can be relaxed and still meet the needs of context memory storage. Thus four nines of reliability might be acceptable, versus the nine nines required in conventional enterprise storage.
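To put the per-GPU figure in perspective, a back-of-the-envelope calculation (my own arithmetic; the cluster sizes are illustrative assumptions, not Nvidia figures) shows how quickly 16TB per GPU adds up to petabytes of context memory:

```python
# Back-of-the-envelope scale check; the GPU counts are illustrative assumptions.
TB_PER_GPU = 16

for gpus in (72, 1_000, 100_000):   # roughly one rack-scale system up to a large cluster
    total_tb = gpus * TB_PER_GPU
    print(f"{gpus:>7} GPUs -> {total_tb / 1_000:,.1f} PB of context memory")

# Output:
#      72 GPUs -> 1.2 PB of context memory
#    1000 GPUs -> 16.0 PB of context memory
#  100000 GPUs -> 1,600.0 PB of context memory
```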
Nvidia said that ICMSP products from AIC, Cloudian, DDN, Dell Technologies, HPE, Hitachi Vantara, IBM, Nutanix, Pure Storage, Supermicro, VAST Data and WEKA will be available in the second half of 2026.
In addition to Kevin, I also spoke with Phil Manez, VP of GTM Execution at VAST, and Jeremy Werner, SVP & GM of the Core Data Center Business Unit at Micron, about their plans and observations on memory, storage and the ICMSP. Phil spoke about the shortages in all types of memory and storage this year.
He also pointed out that 16TB per GPU in an ICMSP could easily result in an additional 100 exabytes of context memory data being stored, placing additional demands on storage, particularly NAND flash solid-state storage. He said that the ICMSP could provide a premium inference experience for customers. The image below shows a conceptual drawing of VAST’s implementation of an ICMSP.
VAST allows policies to be added in its system, such as providing premium user experiences. Phil said that VAST has an advantage for this type of storage through its use of erasure coding, with an overhead of only about 3% using n+4 redundancy. VAST also has an extensive data reduction capability, where only the differences between otherwise very similar files are stored.
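As a rough sanity check on that overhead figure (the stripe widths below are my own illustrative assumptions, not a confirmed VAST configuration), n+4 erasure coding only reaches roughly 3% overhead when the data stripe is very wide:

```python
# Rough check of erasure-coding overhead; stripe widths are illustrative assumptions.
parity = 4

for data_strips in (10, 50, 146):    # 146+4 is just an example of a very wide stripe
    overhead = parity / data_strips  # extra capacity spent on parity vs. data
    print(f"{data_strips}+{parity}: {overhead:.1%} overhead")

# Output:
# 10+4: 40.0% overhead
# 50+4: 8.0% overhead
# 146+4: 2.7% overhead
```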
In addition to its ICMSP, VAST has what it calls a flash reclamation program, which can reuse a company’s existing SSD storage under VAST’s software; this offering is currently in a soft launch.
Jeremy Werner from Micron said that this is a very good time to be in the memory and storage business and spoke about the company’s investments in additional production capacity in Boise and New York state, which should result in 3.6M square feet of DRAM fabrication space in the US. He said that the storage and memory hierarchy is gaining more specialized layers, and he also spoke about trends toward memory disaggregation.
He said that the company’s Gen 6 NVMe SSD is in qualification, that Micron demonstrated 230M IOPS in a single storage server at the 2025 Supercomputing Conference, and that the company is working on additional innovations such as Storage Next.
Micron is also looking at the large amount of storage required for context KV cache data. He mentioned the company’s 245TB E3.L form factor SSDs, which it is introducing this year. He foresaw overall DRAM supply growth in the high teens to 20% this year, but expects demand to be much higher in 2026, leading to higher prices and some constraints on AI data center buildouts.
Nvidia’s inference context memory storage initiative, based upon the BlueField-4 DPU, will drive even greater demand for storage to support higher-quality and more efficient AI inference experiences.


