DeepSeek's latest AI model, R1, is making waves, not just for its performance but for the existential questions it raises about AI chip demand.
The company claims its training costs were a mere $5.6 million, a fraction of what frontier foundation models demand.
Naturally, investors are wondering: if AI can be trained this efficiently, does that mean the industry's chip-buying frenzy is about to cool off?
Jevons Paradox Strikes Again
JPMorgan's Harlan Sur isn't hitting the panic button. Instead, he points to history, where efficiency gains in computing have paradoxically driven more demand, not less. From x86 virtualization in the 2000s to ARM Holdings PLC's power-efficient mobile processors, cheaper and more efficient compute has repeatedly expanded the market rather than shrinking it.
Custom Silicon Could Be the Winner
DeepSeek's low-cost efficiency doesn't just raise questions; it also opens opportunities. Sur believes that hyperscalers and cloud providers will keep pushing for greater AI capabilities, but they won't rely solely on off-the-shelf GPUs. Custom-built ASICs, a segment where companies like Broadcom Inc. are well positioned, stand to benefit from that push.
Despite lingering uncertainties around DeepSeek's exact cost structure and its reliance on open-source models, one thing remains clear: AI innovation never slows down; it only fuels further breakthroughs.
Sur reiterates his bullish stance on Broadcom, Marvell, and Nvidia Corp.