d-Matrix is a semiconductor manufacturing company building new chips for data center artificial intelligence (AI) inference using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. d-Matrix has developed a purpose-built AI hardware and software platform. This includes Corsair, the company's chip based on a digital in-memory compute (DIMC) architecture, and Aviator, the company's AI software stack built on broadly adopted open-source software. While early customers have purchased chips for evaluation, Corsair is expected to launch in 2024. d-Matrix differentiates itself from other AI chip manufacturers by focusing on inference rather than the training of large models.
Headquartered in Santa Clara, California, d-Matrix was founded in 2019 by Sid Sheth (CEO) and Sudeep Bhoja (CTO). Sheth and Bhoja previously worked together at Inphi (Marvell) and Broadcom. The pair started d-Matrix after seeing the rise of transformer-based AI models and the need for new hardware to run large language models (LLMs). On April 20, 2022, d-Matrix announced $44 million in series A funding led by Playground Global, M12, and SK Hynix, with participation from Nautilus Venture Partners, Marvell Technology, and Entrada Ventures. On September 6, 2023, d-Matrix announced a $110 million series B round led by Singapore-based firm Temasek. Additional investors included Microsoft and previous investor Playground Global. Sheth stated that Microsoft plans to evaluate the chip for its own workloads upon launch in 2024. Alongside the series B announcement, d-Matrix also released revenue projections of under $10 million for 2023 and $70 million to $75 million within two years, when the company expects to break even. 2023 revenue comes largely from customers purchasing chips for evaluation.
d-Matrix claims Corsair, which contains 130 billion transistors, offers up to 20x higher throughput for generative inference on LLMs, up to 20x lower inference latency for generative LLMs, and up to 30x lower total cost of ownership (TCO). Corsair is designed to fit the user's entire model in memory and comes with a software development kit (SDK) for easier deployment.