Artificial intelligence (AI) requires extreme computational horsepower, but Maxim is cutting the power cord from AI insights. The MAX78000 is a new breed of AI microcontroller built to enable neural networks to execute at ultra-low power and live at the edge of the IoT. This product combines the most energy-efficient AI processing with Maxim's proven ultra-low power microcontrollers. Our hardware-based convolutional neural network (CNN) accelerator enables battery-powered applications to execute AI inferences while spending only microjoules of energy.
The MAX78000 is an advanced system-on-chip featuring an Arm® Cortex®-M4 processor with FPU for efficient system control, paired with an ultra-low-power deep neural network accelerator. The CNN engine has 442KB of weight storage memory and supports 1-, 2-, 4-, and 8-bit weights, accommodating networks of up to 3.5 million weights. The CNN weight memory is SRAM-based, so AI network updates can be made on the fly. The CNN engine also has 512KB of data memory. The CNN architecture is highly flexible, allowing networks to be trained in conventional toolsets such as PyTorch® and TensorFlow® and then converted for execution on the MAX78000 using tools provided by Maxim.
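As a rough illustration of the training half of that flow, the sketch below defines a small CNN in plain PyTorch; Maxim's own training and conversion tools are not shown, and the model name, layer sizes, and 32x32 RGB input are illustrative assumptions rather than a reference design. With 8-bit weights, the 442KB weight memory holds roughly 442,000 parameters; this example uses only about 24,000, leaving ample headroom before conversion.

# Minimal sketch, assuming plain PyTorch rather than Maxim's training wrappers:
# a small CNN whose 8-bit-quantized weights fit comfortably in the MAX78000's
# 442KB weight memory.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Small image classifier, roughly 24k parameters, well under the
    ~442k 8-bit weights (442KB) the CNN engine can store."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3*16*3*3  = 432 weights
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16*32*3*3 = 4,608 weights
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 32*64*3*3 = 18,432 weights
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                       # global average pool to 1x1
        )
        self.classifier = nn.Linear(64, num_classes)       # 64*10 = 640 weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = TinyCNN()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")          # ~24k, far below the 442KB weight budget
    out = model(torch.randn(1, 3, 32, 32))    # hypothetical 32x32 RGB input
    print(out.shape)                          # torch.Size([1, 10])

In practice, a network trained this way would then be quantized and converted with the tools Maxim provides before its weights are loaded into the SRAM-based CNN weight memory.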
In addition to the memory in the CNN engine, the MAX78000 has large on-chip system memory for the microcontroller core, with 512KB flash and up to 128KB SRAM. Multiple high-speed and low-power communications interfaces are supported, including I2S and a parallel camera interface (PCIF).
The device is available in an 81-pin CTBGA package (8mm x 8mm, 0.8mm pitch).