Microsoft Maia 200 Beats Blackwell’s Efficiency

Microsoft is deploying its second-generation AI accelerator. The new Maia 200 NPU targets inference, delivering raw performance similar to Nvidia's Blackwell while requiring much less power. Maia relies solely on Ethernet, whereas other accelerators use a different technology for the local (scale-up) interconnect. Microsoft has withheld details; its statements indicate that Maia […]

Other content

to boldly go where no man has gone before, to seek out new fabs

Amazon Puts a Dollar Figure to Its AI Chip Business

Buttering No Parsnips: Google Says Nice Things About Intel's Chips

Repost: How is the Memory Crisis Reshaping the AI and Server Worlds? 🧠💻

Google TPU Could Sell as Well as Nvidia Rubin

Apple Sources Leading-Edge Glass Substrates for Big Data-Center Chips

Not Any Dumber Than Acid Washed: Intel and Tesla/SpaceX Team Up to Fab Semis

Intel to invest in SambaNova. This is not a repost

Caltech Researchers Take Another Stab at One-Bit AI Models

Nvidia puts $2B into Marvell