DeepSeek's latest AI model, R1, is making waves, not just for its performance but for the existential questions it raises about AI chip demand.
The company claims its training costs were a mere $5.6 million, a fraction of what training a frontier foundation model typically requires.
Naturally, investors are wondering: if AI can be trained this efficiently, does that mean the industry's chip-buying frenzy is about to cool off?
Jevons Paradox Strikes Again
JPMorgan's Harlan Sur isn't hitting the panic button. Instead, he points to history, where efficiency gains in computing have paradoxically driven more demand, not less.
From x86 virtualization in the 2000s to ARM Holdings PLC's (NASDAQ: ARM) dominance in mobile, every major efficiency gain in computing has led to a proliferation of use cases that need more chips, not fewer. The same could happen here: DeepSeek's efficiency might not curb AI chip demand but rather accelerate the adoption of AI, pulling forward the need for high-performance semiconductors.
Custom Silicon Could Be the Winner
DeepSeek's low-cost efficiency doesn't just raise questions; it also opens opportunities. Sur believes that hyperscalers and cloud providers will keep pushing for greater AI capabilities, but they won't rely solely on off-the-shelf GPUs. Custom-built ASICs, where companies like Broadcom Inc (NASDAQ: AVGO) and Marvell Technology Inc (NASDAQ: MRVL) thrive, could see an uptick in demand as cost and power efficiency become critical differentiators.
Despite lingering uncertainties around DeepSeek's exact cost structure and reliance on open-source models, one thing remains clear: AI innovation never slows down; it only fuels further breakthroughs.
Sur reiterates his bullish stance on Broadcom, Marvell, and Nvidia Corp (NASDAQ: NVDA), emphasizing that the race for AI dominance is far from over. If history is any guide, this is just the beginning of the next wave of semiconductor demand.