d-Matrix Targets Fast LLM Inference for ‘Real-World Scenarios’

January 13th, 2025

Startup d-Matrix has built a chiplet-based data center AI accelerator optimized for fast, small-batch LLM inference in the enterprise, in what the company calls “real-world scenarios.” Its novel architecture is based on modified SRAM cells in an all-digital compute-in-memory scheme that the company says is both fast and power efficient.

Read the full article on EE Times