Fractile Computes in Memory for AI Inference

(Image: Sierpinski carpet)

Startup Fractile claims its data-center NPU will be 100× faster than an Nvidia H100 at 1/10th the cost. The secretive company says it is using in-memory computation to improve MAC-unit utilization and power efficiency.
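Fractile has published no architectural details, but the bandwidth argument behind in-memory compute is easy to sketch. The toy roofline model below is purely illustrative: the `utilization` helper is invented for this sketch, and the numbers are hypothetical (loosely H100-class: ~1e15 MACs/s peak INT8, ~3.35 TB/s HBM), not anything Fractile or Nvidia has stated about this comparison. It shows why a weight-streaming accelerator leaves most MAC units idle during batch-1 LLM decode, while keeping weights resident in the compute array removes that ceiling.

```python
def utilization(peak_macs_per_s, fetch_bytes_per_mac, mem_bw_bytes_per_s):
    """Fraction of peak MAC throughput the memory system can feed.

    Assumes a bandwidth-bound workload where every MAC needs
    fetch_bytes_per_mac from external memory (e.g., batch-1 decode
    re-reading every weight once per generated token).
    """
    feedable_macs_per_s = mem_bw_bytes_per_s / fetch_bytes_per_mac
    return min(1.0, feedable_macs_per_s / peak_macs_per_s)

# Weight-streaming: 1 byte (INT8 weight) fetched per MAC.
print(f"{utilization(1e15, 1.0, 3.35e12):.1%}")    # ~0.3% of peak

# In-memory compute: weights stay inside the MAC array, so the
# external fetch per MAC is amortized toward zero (0.001 B here).
print(f"{utilization(1e15, 0.001, 3.35e12):.1%}")  # 100.0% of peak
```

Under these assumptions the weight-streaming design is bandwidth-bound at well under 1% MAC utilization, which is roughly the headroom an in-memory architecture would need to exploit for a claim like 100× at 1/10th the cost to be plausible.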

Other Contents

Esperanto Technologies Exits AI Chips

Indian startup InCore has proven its SoC Generator in a test chip

Intel Auto Business Runs Out of Gas

Microsoft’s Maia NPU Slips

Google TPU to Host OpenAI Models

The MCUs Aren’t All Right

Intel axes auto business. In other news, Intel had an automotive business.

Huawei and SiliconFlow describe speedy LLM for CloudMatrix 384

Slash Server Costs Without Hardware or Software Changes

AMD Bares Instinct MI350 GPU, Teases MI400 and MI500