Kioxia has begun shipping evaluation samples of its next-generation UFS 5.0 flash memory, with peak throughput reaching up to 10.8 GB/s. The samples comply with the upcoming JEDEC UFS 5.0 specification and move the standard from roadmap planning into early-stage hardware validation for smartphone and embedded device platforms.
The February 24 announcement states that the embedded flash modules follow the forthcoming JEDEC Universal Flash Storage (UFS) 5.0 specification, which is still under finalization. The evaluation samples are intended for smartphone manufacturers, SoC vendors, and controller developers preparing UFS 5.0-compatible platforms. Final specifications could change before JEDEC ratifies the standard.
UFS 5.0 is built on MIPI M-PHY version 6.0 for the physical layer and UniPro version 3.0 for the protocol stack. It introduces HS-GEAR6 signaling, allowing theoretical speeds of up to 46.6 Gbps per lane. In a dual-lane setup, that translates to roughly 10.8 GB/s of effective bandwidth.
For comparison, UFS 4.0 delivered peak speeds of about 4,200 MB/s. That means UFS 5.0 more than doubles the previous generation’s throughput, making it the fastest mobile-class embedded storage interface announced so far.
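The figures above can be sanity-checked with some quick arithmetic. Note this is a rough sketch: the raw dual-lane line rate comes out slightly above the quoted ~10.8 GB/s, since Kioxia's peak figure presumably already accounts for link encoding and protocol overhead.

```python
# Rough bandwidth arithmetic for the quoted UFS 5.0 figures.
GBPS_PER_LANE = 46.6   # HS-GEAR6 line rate per lane (Gbps), per the article
LANES = 2              # typical dual-lane UFS configuration

raw_gbps = GBPS_PER_LANE * LANES   # 93.2 Gbps aggregate line rate
raw_gb_per_s = raw_gbps / 8        # 11.65 GB/s before encoding/protocol overhead

effective_gb_per_s = 10.8          # peak figure quoted by Kioxia
ufs4_gb_per_s = 4.2                # UFS 4.0 peak (~4,200 MB/s)

print(f"raw dual-lane: {raw_gb_per_s:.2f} GB/s")
print(f"speedup over UFS 4.0: {effective_gb_per_s / ufs4_gb_per_s:.2f}x")
```

The ratio works out to roughly 2.6x, which is why "more than doubles" is the accurate characterization rather than a flat doubling.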
At its highest speeds, UFS 5.0 begins approaching entry-level PCIe Gen5 NVMe SSD performance, a clear sign of how quickly smartphone storage is catching up with PC-class bandwidth.
The jump in performance has real implications for on-device AI.
Today’s flagship smartphones increasingly run large local AI models for computational photography, generative image processing, voice assistants, live translation, and multimodal inference. Faster storage reduces the time needed to load AI model weights into DRAM, speeds up multi-frame image processing, and supports demanding 8K video capture workflows.
As AI models grow larger and more complex, storage throughput becomes a key factor in overall responsiveness. Higher speeds also help enable “race to idle” behavior, where components complete tasks quickly and return to low-power states sooner, improving battery efficiency during heavy AI workloads.
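A back-of-the-envelope comparison makes the load-time benefit concrete. The model size below is a hypothetical example chosen for illustration, not a figure from the announcement, and real-world transfers rarely sustain peak sequential throughput:

```python
# Hypothetical model-weight load times at UFS 4.0 vs. UFS 5.0 peak throughput.
MODEL_GB = 7.0   # example size of a large on-device model's weights (assumed)
UFS4_GBPS = 4.2  # UFS 4.0 peak sequential read, GB/s
UFS5_GBPS = 10.8 # UFS 5.0 peak sequential read, GB/s

t_ufs4 = MODEL_GB / UFS4_GBPS  # ~1.67 s to stream the weights into DRAM
t_ufs5 = MODEL_GB / UFS5_GBPS  # ~0.65 s at the higher peak rate

print(f"UFS 4.0: {t_ufs4:.2f} s, UFS 5.0: {t_ufs5:.2f} s, "
      f"saved: {t_ufs4 - t_ufs5:.2f} s")
```

Roughly a second saved per cold model load may sound modest, but it compounds across repeated loads and is exactly the kind of shortened active window that "race to idle" power management relies on.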
Beyond raw speed, UFS 5.0 adds several architectural improvements aimed at reliability and security. These include integrated link equalization for better signal integrity at high transfer rates, separate power supply rails between PHY and memory subsystems to reduce interference, and inline hashing support for stronger data protection and integrity checks. These features are becoming increasingly important as more AI processing happens locally on devices rather than in the cloud.
Kioxia’s evaluation samples use a newly developed in-house UFS 5.0 controller paired with its 8th-generation BiCS FLASH 3D NAND. The modules are available in 512 GB and 1 TB capacities and come in a compact 7.5 × 13 mm package. That small footprint matters for foldable phones, ultra-thin flagship devices, AR and VR headsets, automotive compute systems, and compact edge AI hardware, where space is limited.
Shipments of the 512 GB samples began February 24, with 1 TB versions expected in March. These units are strictly for evaluation purposes, and pricing has not been announced.
How soon UFS 5.0 reaches mass-market devices will depend on host controller support from leading mobile SoC vendors. Major storage transitions typically align with new flagship chip launches. Once JEDEC finalizes the specification and chipmakers integrate compatible controllers, premium smartphones and AI-focused devices could adopt the interface in late 2026 or early 2027.
The added bandwidth also benefits AI PCs, automotive compute platforms, and AR/VR systems, where large data streams must move quickly between storage, memory, and dedicated processing units. In each case, faster storage shortens load times for large AI models and cuts latency in high-resolution capture workflows.
With evaluation sampling in progress, UFS 5.0 moves from specification planning into hardware testing across the mobile ecosystem. Broad adoption will depend on final JEDEC ratification and host controller support from SoC vendors. At up to 10.8 GB/s, the new interface increases available mobile storage bandwidth and reduces data transfer time for large AI models, high-resolution video capture, and other data-intensive workloads. Storage throughput directly affects how quickly data can be loaded into memory, influencing overall responsiveness in next-generation devices.
Sources: CNX Software, Kioxia