NeuReality Builds DPU for AI

Author: Anand Joshi

Data processing units, or DPUs, offload network functionality to a separate chip, freeing the CPU for other tasks. Israeli startup NeuReality is taking this a step further by building an offload engine for AI-application pre- and postprocessing. Its NR1 chip removes the need for a CPU in an inference server and works directly with the AI accelerator; in doing so, the company says, it increases performance and reduces power and cost.
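To make the data-path difference concrete, here is a minimal Python sketch. All function names are hypothetical illustrations, not NeuReality's API; it simply contrasts a conventional CPU-mediated inference path with the CPU-less, network-to-accelerator path the company describes.

```python
# Hypothetical sketch -- all names are illustrative, not NeuReality's API.
# In the conventional path every request is staged through the host CPU;
# in the NAPU path the network-attached chip feeds the accelerator directly.

def accelerator_infer(tensor: list[float]) -> list[float]:
    """Stand-in for the AI accelerator's inference kernel."""
    return [x * 2.0 for x in tensor]

def conventional_path(request: bytes) -> list[float]:
    # NIC -> host CPU -> accelerator: the CPU terminates the connection,
    # preprocesses the payload, and stages the transfer over PCIe.
    host_buffer = bytes(request)               # copy into host memory
    tensor = [b / 255.0 for b in host_buffer]  # preprocessing on the CPU
    return accelerator_infer(tensor)           # transfer initiated by the CPU

def napu_path(request: bytes) -> list[float]:
    # Ethernet -> NAPU -> accelerator: preprocessing runs on the offload
    # engines, and the result feeds the accelerator with no host CPU involved.
    tensor = [b / 255.0 for b in request]      # preprocessing on the NAPU
    return accelerator_infer(tensor)

print(conventional_path(b"\x10\x20"))  # same values either way...
print(napu_path(b"\x10\x20"))          # ...the difference is the data path
```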

NeuReality, headquartered in Tel Aviv, is a 45-person company founded in 2019. It raised $35 million in Series A funding in 2021, led by Samsung Ventures with participation from SK hynix. The company is currently sampling the NR1 and expects to reach production in 2024.

The company’s network addressable processing unit (NAPU) is a hybrid of several processor types that handles the data flow from Ethernet to accelerator. It performs functions such as video decoding, image signal processing, job scheduling, and accelerator queue management. Historically, the host CPU has handled all of these tasks, which can overload it and hurt overall application performance.
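As a rough illustration of those stages, the following Python sketch chains video decode, image signal processing, job scheduling, and accelerator queue management. All names are hypothetical; the real NAPU implements each stage in hardware.

```python
# Rough sketch of the NAPU's pipeline stages -- hypothetical Python standing
# in for functions the chip performs in dedicated hardware blocks.

from queue import Queue

def video_decode(packet: bytes) -> bytes:
    """Stand-in for decoding a compressed video packet into a frame."""
    return packet

def image_signal_process(frame: bytes) -> list[float]:
    """Stand-in for ISP work: scaling, color conversion, normalization."""
    return [b / 255.0 for b in frame]

# Accelerator queue management: a bounded queue provides backpressure so
# the accelerator is neither starved nor overrun.
accel_queue: Queue = Queue(maxsize=64)

def schedule(packets: list[bytes]) -> None:
    # Job scheduling: run each packet through the preprocessing stages
    # and place the resulting tensor on the accelerator's work queue.
    for packet in packets:
        frame = video_decode(packet)
        tensor = image_signal_process(frame)
        accel_queue.put(tensor)

schedule([b"\x01\x02", b"\x03\x04"])
while not accel_queue.empty():
    print(accel_queue.get())  # in hardware, drained by the accelerator
```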

NeuReality has built a PCIe card, the NR1M module, and plans to sell a network-attached inference server, the NR1S, that combines several NR1Ms. The company lists IBM, Lenovo, and AMD among its partners and has published benchmarks with AI-accelerator startup Untether AI.
