Revolutionary Breakthrough in AI Hardware
American startup Normal Computing has unveiled the world's first thermodynamic computing chip, the CN101, which it says can perform vector and matrix operations up to 1,000 times more energy-efficiently than conventional processors. Rather than stepping through instructions, the chip exploits thermodynamic principles: it is allowed to settle naturally into an equilibrium state that encodes the output, sharply reducing the energy consumed per computation.
Born from Quantum Computing Expertise
Founded by former Google engineers with backgrounds in quantum computing and artificial intelligence, Normal Computing treats randomness as a core ingredient of AI. The company has carried that idea into hardware, harnessing naturally occurring stochastic phenomena such as dissipation and thermal fluctuations to accelerate calculations.
How the CN101 Works
The CN101 is built from a large array of identical oscillator circuits with capacitors, where the weight coefficients of a computation are encoded in the stored charge. Remarkably, the computation itself is a relaxation process: the chip is left to "cool down" toward thermal equilibrium, much as a hot object does when immersed in water. Once equilibrium is reached, the capacitor charges are read out, yielding matrix operations and other linear-algebra results without spending energy on the arithmetic itself.
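To make the relax-then-read-out idea concrete, here is a minimal software sketch of thermodynamic linear algebra. This is our illustration under stated assumptions, not Normal Computing's design: it models the relaxing circuit as overdamped Langevin dynamics with potential U(x) = ½xᵀAx − bᵀx, whose equilibrium average of the node "charges" equals A⁻¹b. The function name thermodynamic_solve and all parameter values are hypothetical.

```python
# Minimal sketch of the thermodynamic linear-algebra idea described above.
# Assumption (not from the article): the circuit is modeled as overdamped
# Langevin dynamics with potential U(x) = 0.5*x^T A x - b^T x. At thermal
# equilibrium the state is Gaussian with mean A^{-1} b, so time-averaging the
# "capacitor charges" after relaxation reads out the solution of A x = b.
import numpy as np

rng = np.random.default_rng(0)

def thermodynamic_solve(A, b, temperature=0.05, dt=1e-3, burn_in=20_000, samples=80_000):
    """Estimate A^{-1} b by simulating relaxation to thermal equilibrium."""
    n = len(b)
    x = np.zeros(n)                        # initial node "charges"
    noise_scale = np.sqrt(2.0 * temperature * dt)
    acc = np.zeros(n)
    for step in range(burn_in + samples):
        drift = -(A @ x - b)               # force pulling the system toward equilibrium
        x = x + drift * dt + noise_scale * rng.standard_normal(n)
        if step >= burn_in:
            acc += x                       # time-average the equilibrium fluctuations
    return acc / samples

# Small symmetric positive-definite test system (illustrative values only).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print("thermodynamic estimate:", thermodynamic_solve(A, b))
print("direct solve:          ", np.linalg.solve(A, b))
```

In this toy model the answer is never computed step by step; it simply emerges as the mean of the equilibrium fluctuations, which is the property the article attributes to the physical chip.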
Specialized for Stochastic Sampling and AI
The CN101 is optimized for stochastic sampling based on lattice random walks (LRW), a technique central to probabilistic computation in scientific modeling and Bayesian inference; a software sketch of such a sampler follows below. This specialization is aimed at delivering superior AI performance per watt, per rack, and per dollar, a critical factor for scaling AI without running into today's energy and cost limits.
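For readers unfamiliar with the technique, the sketch below shows what lattice-random-walk sampling looks like in software. It is a generic Metropolis random walk on an integer lattice, written purely for illustration: the log_target "posterior", the lattice_random_walk function, and all parameters are hypothetical and do not describe the CN101's internal algorithm.

```python
# Illustrative lattice random walk: each step proposes a move to a neighboring
# lattice site and accepts it with the Metropolis rule, so the walk samples a
# discretized target distribution (e.g. a Bayesian posterior on a grid).
import math
import random

random.seed(1)

def log_target(i, j, scale=0.25):
    """Hypothetical discretized 2-D Gaussian 'posterior' on the lattice."""
    x, y = i * scale, j * scale
    return -0.5 * (x * x + y * y)

def lattice_random_walk(steps=100_000):
    i, j = 0, 0
    samples = []
    for _ in range(steps):
        di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])   # neighbor proposal
        log_alpha = log_target(i + di, j + dj) - log_target(i, j)
        if random.random() < math.exp(min(0.0, log_alpha)):          # Metropolis accept
            i, j = i + di, j + dj
        samples.append((i, j))
    return samples

samples = lattice_random_walk()
mean_i = sum(s[0] for s in samples) / len(samples)
mean_j = sum(s[1] for s in samples) / len(samples)
print(f"empirical lattice mean: ({mean_i:.2f}, {mean_j:.2f})  (target mean is (0, 0))")
```

On a CPU each step is sequential and cheap but numerous; the article's claim is that hardware which produces such random moves as a natural physical process can run this class of workload at a fraction of the energy cost.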
Future Roadmap: CN201 and CN301
Normal Computing aims to commercialize large-scale thermodynamic computing. The CN201, expected in 2026, will target high-resolution diffusion models and advanced AI capabilities. By late 2027 or early 2028, the CN301 will debut, optimized for next-gen video diffusion models. According to CEO Faris Sbahi, this breakthrough could redefine AI scaling laws for decades by implementing AI algorithms directly in physical systems.
A Paradigm Shift for AI Efficiency
Chief Scientist Patrick Coles has outlined the company's multi-year plan: demonstrate the CN101's core applications in 2024, achieve record-setting performance on medium-scale generative-AI workloads in 2025 with the CN201, and deliver severalfold gains on large-scale AI tasks by 2027 with the CN301. The vision builds on the company's 2024 thermodynamic computer, which demonstrated that simple circuits relaxing to thermal equilibrium can carry out nontrivial linear-algebra computations.
Conclusion
With the launch of the CN101, Normal Computing has taken a historic step toward energy-efficient AI hardware. By grounding computation in thermodynamic principles, the technology promises substantial performance gains at drastically lower power demands, and it could reshape the AI hardware landscape for years to come.





