Although data centers and networks can scale, embedding AI/ML processing in edge devices yields a solution that scales automatically with load: every device added to the edge brings its own AI/ML processing capability along with its added demand.
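The scaling argument above can be sketched numerically. This is an illustrative model, not BrainChip code, and the per-device throughput figure is a hypothetical placeholder:

```python
# Illustrative sketch: edge AI capacity grows in step with demand
# because each added device brings its own on-board NPU.
# "gops_per_device" is a hypothetical per-device figure, not a spec.

def edge_capacity_gops(num_devices: int, gops_per_device: float = 200.0) -> float:
    """Total on-edge inference capacity scales linearly with device count."""
    return num_devices * gops_per_device

# Per-device capacity stays constant as the fleet grows:
for n in (10, 100, 1000):
    total = edge_capacity_gops(n)
    print(n, total, total / n)  # capacity per device is flat at 200.0
```

The point is that, unlike a centralized data center, there is no shared resource to exhaust: the ratio of processing capability to devices is constant by construction.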
The Akida architecture bundles four NPUs into a node and connects multiple nodes with a mesh network-on-chip (NoC). This architecture can implement standard CNNs, DNNs, RNNs, sequence-prediction networks, vision transformers, and other types of neural networks in addition to the Akida NPU's native networks, SNNs. The second-generation Akida platform architecture adds optimized hardware called Vision Transformer nodes, which work with the existing event-based neuromorphic components to implement vision transformers.
There are three distinct licensable IP products based on BrainChip's Akida platform architecture, as shown in the figure below. The Max Efficiency variant provides up to four nodes, runs at up to 200 MHz, delivers the equivalent of 200 GOPS of performance, and needs only milliwatts of power to operate.
Source: DiscoverMag