It’s hard to imagine the chip and technology sector repeating the stunning year it had in 2023, but financial and technology analysts are nearly unanimous that 2024 will be just that.
Fueled by the need for more computing horsepower than ever before, including the rise of AI, chip companies are likely to see another strong year in 2024, though with varying degrees of success.
Let’s start with a look at some of the critical players in the data center and IT infrastructure markets. This segment fueled the AI appetite in 2023, driving valuations higher and consuming essentially every chip these players could produce. I see no reason the silicon and infrastructure opportunity can’t grow another 20%-50%, with nearly all players benefiting to some degree.
Nvidia (NVDA) is obviously the leader in the clubhouse, benefiting from a more than 200% increase in stock price over the past year thanks to the dominance of its GPUs in the data center, where they power the most intense and complicated workloads to train the AI models that are revolutionizing computing. Though other companies were playing catch-up in 2023 and will continue to do so in 2024, I expect the rise of competing chip options from Advanced Micro Devices (AMD) and in-house silicon from the likes of Microsoft (MSFT), Alphabet (GOOGL) and Amazon.com (AMZN) to start making some inroads.
Nvidia’s data center segment revenue will grow along with the larger market expansion, but it seems inevitable that its market share will trickle backwards a bit as competition heats up. The company is hoping to offset that with new product and service offerings; if Nvidia can truly be more than just a chip supplier to the market, and instead offer “AI factories” as CEO Jensen Huang has dubbed them, or other AI and compute microservices, it could see the same or better revenue growth in 2024.
The primary competitor for Nvidia is going to be AMD and its MI300 family of GPUs, announced late in 2023. CEO Lisa Su has been brimming with confidence about this new chip and making some bold statements on expected revenue, with customers like Microsoft buying in early. As a true second supplier of high-performance, high-memory-bandwidth chips capable of handling the most demanding AI workloads, AMD should see significant upside on that alone. But AMD also will continue to gain market share in the data center CPU space, with its EPYC line of processors chipping away at the dominant Intel Xeon family.
Intel (INTC) has the cloudiest but potentially most interesting 2024 ahead of it in the data center segment. Last December the company launched its next-generation Xeon processor, codenamed Emerald Rapids, and teased Gaudi 3, a dedicated AI processor that uses a unique architecture compared with the GPUs from Nvidia and AMD. Gaudi 2 is selling now, though there is little visibility into how successful it was in 2023 or what the pipeline of customers for 2024 looks like. This custom AI architecture offers unique power and performance benefits for some specific workloads, based on testing I have seen, but Intel needs to convince large-scale AI customers that it is a long-term solution. Intel will see some gains simply because of the limits of Nvidia’s own GPU inventory, but CEO Pat Gelsinger has much larger ambitions than just Nvidia’s leftovers.
Finally for the data center space, it’s worth mentioning the cloud service providers and the expected growth there. Microsoft Azure, Google Cloud and Amazon AWS are the three biggest players that could see significant increases in compute utilization by AI companies large and small, offering off-the-shelf solutions from AMD, Nvidia and Intel as well as their own custom chips in a fully verticalized solution stack. All three of these companies, along with Meta Platforms (META) and others, have publicly committed to these custom silicon and software development roadmaps, but it isn’t clear what percentage of their own cloud service offerings will really utilize them over the next 12 months.
Another risk spot for cloud providers is the need for data security and privacy among enterprises rolling out AI applications and solutions, where it might make more sense to handle some of the AI training and inference workloads in on-premises data centers. That would be a possible win for Dell Technologies (DELL) and HPE (HPE), which could fill that gap.
The client space for chip companies might be even more interesting than the data center segment, despite growing at a much slower clip. The impact of the AI PC, and increases in client computing in general, is more likely to create dramatic winners and losers over the next 12 months, with incumbent leaders either squashing the competition or watching that same competition take noticeable market share.
Microsoft’s potential game-changer
A unique opportunity here lies with Microsoft. The AI PC push, which is still ill-defined and more of a marketing term than anything consumers can understand or depend on, provides Microsoft a chance to redefine what personal computing is across devices, operating systems, content management, generative AI tools, and more.
Much like the smartphone revolution changed how we think about accessing and creating data, how we integrate with AI will be the next big shift. This could bring a renaissance of the PC, taking back mindshare from the smartphone as the preeminent device for getting things done and organizing our massive collections of data and information. And what better way to fight back against Apple (AAPL) and the Mac than with a capability Apple has yet to really engage with? The window might be small, but it’s definitely there.
The three big names around the “AI PC” have been Intel, AMD and Qualcomm (QCOM) to this point. Intel launched its Meteor Lake chips, now branded Intel Core Ultra, in December of last year, promising to bring AI compute acceleration to a whole generation of laptops in 2024. By integrating a dedicated NPU (neural processing unit) on the chip itself, much as a graphics engine can be integrated, Intel is pushing for a balance of performance and battery life. Intel Core Ultra gives Intel the chance to scale up the AI PC space, shipping millions and millions of processors this year, and it hopes to capitalize by establishing thought leadership (Intel = AI on your PC) and probably by pushing up the prices of these chips to its customers to drive revenue.
AMD has had a dedicated AI accelerator in a small segment of its laptop chips since the middle of last year, but sales and market saturation are unknown. In December it announced the next two generations of its Ryzen laptop chips coming in 2024, with improvements in AI performance for both, hoping to take more of this chip market from Intel. AMD can continue to displace Intel at OEMs, and as long as it has a competitive and reliable AI PC strategy, it can draft off any marketing push from Intel and Microsoft.
Qualcomm is also in this race, with its Snapdragon X-series of chips coming in the middle of 2024, announced last October. These processors are based on the Arm architecture, rather than x86, and include a much more powerful NPU for AI processing than either Intel or AMD options. This is all upside for Qualcomm, part of its strategy of moving from a communications company to a compute company, and I expect we’ll see some significant wins with key system partners announced in the next few months. Qualcomm is also the only chip company in the client space that appears to be heavily investing in brand marketing, hoping it can capitalize on the stagnation of its chip competitors, tying Qualcomm and Snapdragon to the biggest advancements in AI computing for consumers.
Not frequently brought up in the AI PC conversation, Nvidia has an interesting position that it plans to emphasize in 2024. Its GPUs, the same ones used in the data center AI segment but with less memory, are a great fit for high-performance, high-throughput AI processing on a PC. For high-end content creators, game developers, and video and 3D animators, the power of a discrete GPU for local AI work dwarfs that of any integrated NPU from Intel, AMD or Qualcomm: the highest-end GeForce products deliver 800+ TOPS (tera-operations per second), compared with just 10 TOPS for the NPU on the Intel Core Ultra. That comes at the cost of power, and of battery life if the Nvidia GPU is in a laptop, but for many AI workloads you just want pure performance. Nvidia’s software stack is also second to none, and developers of nearly all AI applications have been using Nvidia GPUs from day one, giving the company another potential advantage to capitalize on.
3 unknowns
There are three big questions that I will be tracking through 2024 that will affect the above predictions and analysis, and generally set the tone for what 2025 might look like.
- Where will the majority of AI processing take place? In the cloud, at the edge, or on a consumer’s local device? This will determine the balance, to some degree, of how much the data center segment or client segment chip battles affect the companies mentioned in this story.
- How prevalent will the custom silicon options from Microsoft, Alphabet, Amazon, Meta and others become relative to the traditional silicon providers Intel, AMD and Nvidia? These are expensive investments that require billions of dollars and thousands of engineers, both hardware and software, to do right. Will the advantages of verticalization, along with performance and power efficiency, provide enough gain to offset the momentum of traditional chip companies’ products?
- This is kind of a wild one, but will Intel remain a combined integrated device manufacturer, or will it eventually split into a manufacturing company (making the chips) and a product company (designing chips)? We continue to see signs of Intel moving away from non-core businesses like Mobileye, its FPGA group and now even its AI software business. A change here opens opportunities for other chip companies to expand their supplier options and gives Intel’s product groups the chance to engage in new partnerships.
Ryan Shrout is the President of Signal65 and founder at Shrout Research. Follow him on X @ryanshrout. Shrout has provided consulting services for AMD, Qualcomm, Intel, Arm Holdings, Micron Technology, Nvidia and others. Shrout holds shares of Intel.