Three Stocks to Buy as AI Hyperscalers Ramp Spending

Enterprise AI spending is entering its most aggressive phase. Hyperscalers including Amazon, Microsoft, and Google are planning record data center capital expenditures in 2026, with projections pointing even higher in 2027. Nvidia's CEO has forecast that global data center spending could reach $3-4 trillion annually by 2030, a massive expansion from current levels.
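To put that forecast in perspective, a short sketch can back out the growth rate it implies. The baseline spend figure and the 2025 starting year below are illustrative assumptions for the arithmetic, not figures from this article:

```python
# Implied compound annual growth rate (CAGR) to reach Nvidia's
# forecast range by 2030. Baseline spend and start year are
# hypothetical assumptions, used only to illustrate the math.
baseline = 600e9        # assumed current annual data center spend (USD)
years = 2030 - 2025     # assumed five-year runway

for target in (3e12, 4e12):
    cagr = (target / baseline) ** (1 / years) - 1
    print(f"${target / 1e12:.0f}T by 2030 implies ~{cagr:.0%} annual growth")
```

Under these assumptions the low end of the range implies roughly 38% annual growth and the high end roughly 46%, which is why the forecast reads as aggressive even against today's record capex plans.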

This infrastructure buildout isn’t theoretical—it’s happening now with multi-billion dollar commitments already announced. The companies providing the computational hardware that powers AI workloads are positioned to capture disproportionate value as this spending wave accelerates over the next several years.

The investment opportunity is straightforward: identify the semiconductor companies with technology advantages, manufacturing capacity, and customer relationships to dominate AI hardware sales as enterprises move from experimentation to production-scale deployment. Three companies stand out for their distinct approaches to capturing AI infrastructure spending.

Nvidia Corporation (NVDA)

Market Cap: $4.5 trillion | Currently trading around $185 | Dividend Yield: 0.02%

Nvidia has dominated AI hardware since the current boom began in 2023 and has become the world's largest company by market capitalization. The company's graphics processing units set the industry standard for AI training and inference workloads, but Nvidia's advantages extend beyond chip performance to full-stack solutions including software frameworks and system integration.

Wall Street expects 50% revenue growth in fiscal 2027 despite Nvidia's $4.5 trillion market cap, a growth rate that would be impressive for a startup, let alone the world's largest company. This projected expansion reflects the massive scale of AI infrastructure investments planned by hyperscalers and enterprises globally.

Nvidia recently unveiled its Rubin platform architecture, representing the next generation of AI computing hardware. These continuous innovation cycles maintain Nvidia’s technology leadership while creating upgrade opportunities as customers deploy newer, more powerful systems. The rapid pace of improvement means AI infrastructure deployed today will likely require refresh cycles faster than traditional data center equipment.

The company’s CUDA software platform creates powerful switching costs that reinforce hardware dominance. Developers building AI applications on CUDA frameworks face substantial retraining and code migration costs to switch to competing platforms. This installed base of CUDA-trained developers and optimized applications provides defensive moats that hardware performance alone cannot explain.

Nvidia’s data center revenue has grown from a small fraction of total sales just a few years ago to the dominant revenue driver, reflecting the successful pivot from gaming-focused GPUs to AI-optimized hardware. This transformation demonstrates management’s ability to identify and capture emerging opportunities at massive scale.

Supply constraints have emerged as Nvidia’s primary limitation rather than demand weakness. The company sold out of cloud GPUs in recent quarters, leaving hyperscaler demand partially unfulfilled. This supply-demand imbalance supports pricing power and suggests Nvidia could grow even faster if manufacturing capacity expands to meet demand.

Gross margins exceeding 70% reflect both technological advantages and favorable pricing environments created by supply constraints. These extraordinary margins provide substantial cash generation that funds continued R&D investments, shareholder returns, and potential strategic acquisitions to maintain competitive positioning.

The valuation reflects Nvidia’s dominant position and growth prospects, trading at premium multiples that assume continued technology leadership and market share retention. Any evidence of share loss to competitors or slowing demand growth would likely trigger significant multiple compression given how much optimism current prices reflect.

But the bull case remains compelling. If AI infrastructure spending reaches the $3-4 trillion annual levels Nvidia forecasts by 2030, and Nvidia maintains leadership positions across training and inference workloads, the company’s revenue could continue growing substantially from current levels despite its massive size.

For investors who believe AI infrastructure spending will accelerate and Nvidia will maintain technology leadership, the stock offers direct exposure to the secular trend. The combination of 50% near-term growth projections and a multi-trillion-dollar long-term market opportunity justifies consideration despite premium valuations and the company's enormous market cap.

Advanced Micro Devices Inc. (AMD)

Market Cap: $331 billion | Currently trading around $203

AMD has struggled to match Nvidia’s AI success, capturing meaningfully smaller market share in AI GPUs despite strong positions in traditional data center CPUs and gaming graphics cards. But recent developments suggest AMD’s AI trajectory may be inflecting positively as supply constraints at Nvidia create opportunities for alternatives.

Nvidia’s sellout of cloud GPUs creates an opening for AMD. Hyperscalers need computing capacity to meet their AI infrastructure buildout plans, and they won’t simply pause investments because Nvidia cannot fulfill all orders immediately. This supply gap provides AMD with opportunities to win designs and deployments that might otherwise have defaulted to Nvidia.

AMD’s ROCm software platform—the equivalent of Nvidia’s CUDA—saw downloads increase 10x year-over-year in November, suggesting developers are actively exploring AMD as an alternative. Each software download represents potential future hardware sales as developers become familiar with AMD’s tools and optimize applications for AMD GPUs.

Software ecosystem development has been AMD’s primary challenge in competing with Nvidia’s entrenched CUDA platform. The 10x increase in ROCm adoption indicates AMD is making progress addressing this limitation through improved software releases and developer support. As the ROCm ecosystem matures, switching costs from Nvidia decrease, making AMD more viable for production workloads.

AMD's management projects that its data center division will grow at a 60% compound annual growth rate over the next five years, driven primarily by AI GPU sales. While AMD's projected overall growth rate of 35% reflects the drag of slower-growing business segments, the data center projection demonstrates management's confidence in capturing AI market share.
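As a rough check on what a 60% CAGR compounds to over five years, the sketch below applies the rate directly; the starting revenue figure is a hypothetical placeholder, not AMD's reported number:

```python
# Five years of compounding at a 60% CAGR.
# The starting revenue is a hypothetical illustration.
start_revenue = 5e9   # assumed starting data center revenue (USD)
cagr = 0.60
years = 5

multiple = (1 + cagr) ** years
print(f"Growth multiple over {years} years: {multiple:.1f}x")  # ~10.5x
print(f"Hypothetical ending revenue: ${start_revenue * multiple / 1e9:.0f}B")
```

A 60% CAGR roughly 10.5x's the starting figure in five years, which shows why the data center projection dwarfs the company-wide 35% growth rate.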

AMD’s existing relationships with hyperscalers through CPU sales provide natural pathways for GPU adoption. Cloud providers already run AMD processors extensively in their data centers, creating familiarity with AMD’s reliability, support capabilities, and partnership approach. These relationships lower barriers for GPU trials and deployments.

The company’s competitive pricing relative to Nvidia creates economic incentives for customers to diversify supply chains and avoid single-vendor dependency. Even customers preferring Nvidia’s technology may choose AMD for portions of their workloads to maintain negotiating leverage and supply security.

AMD’s gross margins of 44% trail Nvidia’s 70%+ margins substantially, reflecting both technology gaps and less favorable pricing power. However, these margins provide room for improvement as volumes scale and AMD gains pricing leverage from supply constraints and improved competitive positioning.

The stock has experienced significant volatility, ranging from $76 to $267 over the past year as investors debate AMD’s ability to capture meaningful AI market share. Current levels around $203 reflect cautious optimism that AMD can grow AI revenue while acknowledging execution risks and Nvidia’s continued advantages.

For investors seeking AI hardware exposure with potential for multiple expansion if AMD captures more share than expected, the stock offers an interesting risk-reward profile. AMD doesn't need to match Nvidia's dominance: capturing even 15-20% of the AI GPU market would represent transformative growth for the company.
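The share math can be sketched directly; the total-market figure below is a hypothetical assumption chosen for illustration, not a number from this article:

```python
# Revenue implied by a 15-20% share of a hypothetical AI GPU market.
market_size = 500e9   # assumed annual AI GPU market (USD), illustrative only

for share in (0.15, 0.20):
    revenue = market_size * share
    print(f"{share:.0%} share of a $500B market -> ${revenue / 1e9:.0f}B/yr")
```

Even at the low end of that share range, the implied annual revenue would be a multiple of AMD's current data center business, which is the core of the bull case.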

Broadcom Inc. (AVGO)

Market Cap: $1.6 trillion | Currently trading around $345 | Dividend Yield: 0.7%

Broadcom pursues a fundamentally different strategy from Nvidia and AMD, focusing on custom application-specific integrated circuits (ASICs) rather than general-purpose GPUs. This approach targets a specific subset of AI workloads where customized chips deliver superior performance and economics compared to flexible but less efficient GPUs.

ASICs excel at repetitive, well-defined workloads where algorithms and data flows are standardized rather than constantly changing. AI inference—running trained models to generate predictions—often fits this profile once models reach production deployment. While training new models benefits from GPU flexibility, serving predictions at scale can be more cost-effective with purpose-built ASICs.

Broadcom provides design services to help hyperscalers develop custom ASICs optimized for their specific AI workloads. This collaborative approach allows customers to achieve better performance-per-watt and lower costs for high-volume inference tasks while maintaining ownership of chip designs. Broadcom manufactures these custom chips and captures both design service revenue and ongoing production volume.

Several major customers have already deployed Broadcom-designed ASICs with additional launches planned throughout 2026. Management expects AI semiconductor revenue to surge 100% in the coming quarter, demonstrating the rapid growth trajectory as customers ramp production volumes.

The ASIC strategy positions Broadcom to coexist with rather than directly compete against Nvidia and AMD. Different workload types require different computational approaches, and the AI infrastructure market is large enough to support multiple chip architectures serving different use cases. Customers often deploy both GPUs for training and ASICs for inference, creating complementary rather than competitive dynamics.

Broadcom’s 64.71% gross margins reflect the value of custom design services and specialized manufacturing capabilities. These margins trail Nvidia’s but significantly exceed AMD’s, demonstrating favorable economics from the ASIC approach. The combination of design service fees and manufacturing revenues creates multiple margin capture points.

The company’s diversified business model extends beyond AI semiconductors to include networking chips, enterprise software, and broadband infrastructure. This diversification provides stability if AI ASIC growth disappoints while allowing AI upside to drive outperformance if adoption accelerates.

Broadcom’s 0.7% dividend yield provides modest income while the company pursues AI growth opportunities. The dividend reflects Broadcom’s commitment to shareholder returns even while investing heavily in emerging markets like AI infrastructure.

The stock has appreciated substantially, ranging from $138 to over $414 in the past year as investors recognized Broadcom’s positioning in AI infrastructure. Current levels around $345 reflect strong growth expectations while providing some valuation buffer compared to the 52-week high.

For investors seeking AI hardware exposure through a differentiated technology approach and more diversified business model than pure-play GPU companies, Broadcom offers compelling characteristics. The ASIC strategy may capture less total AI market share than GPUs, but it provides access to high-margin, high-growth segments with less direct competition from Nvidia.

Hyperscalers have announced record data center spending plans for 2026, and AI workloads drive a substantial portion of these investments. The companies providing the chips that power AI computing should see accelerating growth as these capital expenditure plans materialize into actual hardware purchases.

For investors seeking AI exposure through established semiconductor companies with proven execution capabilities, these three stocks offer different risk-reward profiles within the same secular trend. Portfolio diversification across all three provides broad AI hardware exposure while allowing individual position sizing to reflect conviction in specific strategies or valuations.


