Synaptics’ Astra Pushes More AI to the Edge

Author: Dylan McGrath


Artificial intelligence (AI) continues to move rapidly to the edge, spurred by the push for faster response times, lower costs, greater privacy, and the drive to operate independently of the cloud. Synaptics’ new Astra platform aims to ease the integration of more AI compute capability into IoT edge devices with Arm-based SoCs that integrate AI acceleration engines, backed by an ecosystem of supporting software and development tools.

The Synaptics SL-Series, the first devices fielded under the Astra platform, comprises 64-bit processors that pair quad-core Arm Cortex-A73 or Cortex-M55 CPUs with a dedicated neural processing unit (NPU) to offer more than 240% of the AI capability of similar embedded SoCs. The high-end SL-Series models also feature other compute engines, including a GPU, along with multimedia features for image signal processing, 4K video encode/decode, and audio.

There is a pronounced gap in the neural processing capabilities of the IoT embedded processors currently on the market. On one side are what are fundamentally beefed-up MCUs capable of 2 TOPS or less; on the other are essentially scaled-down versions of smartphone SoCs offering 16 TOPS or more. With the SL-Series, Synaptics is attempting to fill this gap with AI capability that falls between these extremes, targeting IoT workloads suited to edge inferencing and multistream video processing. Two of the initial three SL-Series processors, the SL1680 and SL1640, integrate NPUs offering 7.9 and 1.6 TOPS, respectively.
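To see where a given workload falls within that 2-to-16-TOPS gap, a back-of-the-envelope sizing calculation is often enough. The sketch below is purely illustrative: the helper function and the model/stream numbers are hypothetical, not figures published by Synaptics.

```python
def required_tops(gops_per_inference: float, fps: float, streams: int = 1) -> float:
    """Estimate the sustained NPU throughput (in TOPS) needed to run a
    vision model at a given frame rate across several camera streams.

    gops_per_inference: giga-operations per frame (hypothetical model size)
    fps: inference rate per stream
    streams: number of concurrent video streams
    """
    # 1 TOPS = 1000 GOPS, so divide total GOPS/s by 1000.
    return gops_per_inference * fps * streams / 1000.0

# Hypothetical example: a 10-GOP detection model on four 30 fps streams.
demand = required_tops(10, 30, 4)
print(f"{demand} TOPS")  # 1.2 TOPS
```

By this rough measure, a multistream workload like the one above overwhelms a sub-2-TOPS MCU-class part once streams or model size grow, while leaving much of a 16-TOPS smartphone-class SoC idle, which is the middle ground the SL-Series targets.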

Three models of SL-Series devices, as well as an Astra development kit, are available now. Synaptics did not disclose pricing.
