The Oracle Thesis

The Neutral Hyperscaler


Oracle, the $550 Billion Reservation Book, and the Cascade from Paper to Power

I. The Floor: Why $550 Billion Already Settles the Thesis

The contracted AI demand alone settles the thesis. Oracle holds more than half a trillion dollars in remaining performance obligations. The task in front of the company is delivery — bring the capacity online. The capital is committed, the partners are contracted, the sites are under construction.

It is also the floor.

Before AI, Oracle was growing at roughly 7% a year — the standard arc of a legacy enterprise software business converting its installed base of databases and applications to the cloud. Steady. Unremarkable. A business priced for exactly what it was doing.

That Oracle no longer exists. Three things aligned at once: a product portfolio that mapped cleanly onto AI workloads, an installed base that concentrated the end market AI would need to reach, and a cloud architecture that — for reasons predating AI — happened to suit the physical design of AI data centers. Alignment of that kind is rarely engineered. It gets recognized after the fact.

The opportunity compounds from there. Oracle captures revenue from two directions at once: selling the infrastructure on which AI is built, and selling AI-transformed versions of its own database and software products to the enterprises already running them. The first is the half-trillion-dollar story. The second is a separate story stacked on top of it.

Exhibit 1 — Oracle: A Business in Transition
The shape of Oracle before AI, and the shape of Oracle after. A 7%-grower has become something else entirely.

II. OCI at an Inflection: From 8% to 29% in Eleven Quarters

Oracle Cloud Infrastructure — OCI — is Oracle’s public cloud platform: the compute, storage, networking, and managed services that customers rent on a consumption basis, in direct competition with AWS, Azure, and Google Cloud.

OCI was built for a specific job: serve as the cloud destination for Oracle Database, Fusion, and the enterprise workloads the installed base had been running for forty years. Every design choice reflects that purpose. The network is built on RDMA, optimized for database traffic. The compute is bare-metal. The regional footprint is built for repeatable enterprise deployments. The sales motion is top-down — reference-account selling into existing Oracle relationships, not bottom-up developer adoption. And the capex reflects it most of all: Oracle spent what it took to support the migration of its own customers, and not meaningfully more.

That posture is exactly why Oracle has long been described as the “distant fourth” among hyperscalers. By the conventional scoreboard — total cloud revenue, developer mindshare, breadth of services — AWS, Azure, and Google Cloud occupied the first three places, and OCI sat well behind. The label stuck for a decade. It is the lens through which most analysts and investors still approach the company today.

The “distant fourth” framing reads the resulting gap with AWS, Azure, and Google Cloud as a verdict. It is not. It is the outcome of a race Oracle was not running. Oracle did not build a developer ecosystem, chase consumer-internet workloads, or commit capital ahead of demand through the 2010s — because its opportunity lay in the franchise it had spent forty years building. Measured against what OCI was designed to do, the strategy worked.

We saw this old lens repeatedly in our primary research. The most skeptical voices on OCI’s AI opportunity, across our interviews with almost a dozen former Oracle employees, were the ones with the longest tenure in the traditional data center business. Their evidence was internally consistent and reflected real expertise — but it was expertise calibrated to the OCI of the prior decade, in which Oracle’s footprint, sales motion, and capex were sized to a database migration opportunity. Set against the actual demand picture for AI infrastructure today — where capacity is constrained, GPUs are anything but commodities, and the buyer is a frontier lab rather than an enterprise IT shop — that evidence did not carry. The mental model could not update.

That OCI is not the OCI of the AI era. The financials show the shift clearly, and anyone still reading through the old frame will misjudge both 2025 and what follows.

$900M
IaaS, Q1 FY23
$4.9B
IaaS, Q3 FY26
~29%
Share of Total Revenue, Q3 FY26
85%
IaaS YoY Growth, Q3 FY26

A note on terminology: IaaS — infrastructure-as-a-service — is the segment line on Oracle’s income statement that captures consumption-based cloud infrastructure revenue. In practice, this line is OCI: the compute, storage, and networking that customers rent on Oracle Cloud, including the GPU capacity sold to frontier AI labs and the multicloud database services running inside AWS, Azure, and Google. When the IaaS line moves, OCI is what is moving.

In Q1 FY23, IaaS generated roughly $900 million — under 8% of total company revenue. Eleven quarters later, in Q3 FY26, the number is $4.9 billion in a single quarter, approaching 29% of total revenue.

The path between those two points is not linear. Through FY23 and FY24, growth was steady but modest — the segment was still small in absolute terms and capacity was being added incrementally. It stepped up through FY25 as multicloud regions came online and the first large AI contracts converted into billable consumption.

The sharpest inflection of the period comes at its end: $3.0 billion in Q4 FY25, $4.1 billion in Q2 FY26, $4.9 billion in Q3 FY26. At the current pace, the year-over-year increase in a single quarter of IaaS now exceeds what the entire segment generated in any quarter of fiscal 2023.

The annual view tells the same story at lower resolution. $4.5 billion in FY23. $6.8 billion in FY24. $10.2 billion in FY25. $15.4 billion on an LTM basis through Q3 FY26. IaaS has moved from roughly 9% of Oracle’s business in FY23 to nearly 24% on an LTM basis — and the quarterly share is now running materially above that annual figure, which means the ratio keeps climbing. What was once a supporting act in Oracle’s financials is, in the most recent quarter, approaching one dollar of every three the company takes in.
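Those annual figures imply a strikingly steady growth rate. A quick arithmetic check, using only the numbers above (all values in $B; the FY26 figure is the LTM through Q3):

```python
# Annual IaaS revenue as stated above, in $B; "LTM" is the trailing
# twelve months through Q3 FY26
iaas = {"FY23": 4.5, "FY24": 6.8, "FY25": 10.2, "LTM": 15.4}

periods = list(iaas)
for prev, cur in zip(periods, periods[1:]):
    growth = iaas[cur] / iaas[prev] - 1
    print(f"{prev} -> {cur}: {growth:+.0%}")  # each step lands near +50%

# Cumulative multiple across the whole window
print(f"FY23 -> LTM multiple: {iaas['LTM'] / iaas['FY23']:.1f}x")
```

Three consecutive steps of roughly 50% compound to a ~3.4× multiple across the window, which is the arithmetic behind "more than tripled in three years."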

Exhibit 2 — Quarterly IaaS Revenue — Eleven Quarters of Acceleration
The slope is visibly steepening in the last three prints, not flattening. At the current sequential pace, Q4 FY26 alone is on track to exceed Oracle’s entire FY23 IaaS revenue in a single quarter.
Exhibit 3 — IaaS Year-over-Year Growth Rate
After decelerating predictably against a growing base through FY24, YoY growth has reaccelerated to 85% on a materially larger denominator — a pattern consistent with contracted backlog converting faster than the base is expanding. Exceptionally rare for a segment of this scale.
Exhibit 4 — Quarterly IaaS Share of Total Revenue
The curve is not just rising but steepening — quarterly share has added roughly 10 percentage points in the last four quarters alone. At the current slope, IaaS crosses one-third of Oracle’s quarterly revenue within the next few prints.
Exhibit 5 — Annual IaaS Revenue (LTM through Q3 FY26)
Annual IaaS has more than tripled in three years, and the LTM figure still excludes a Q4 FY26 that is on pace to be the largest IaaS quarter on record. The full-year FY26 print will land materially above the $15.4 billion shown here — and FY27 begins from a base that didn’t exist twelve months ago.

III. Pulled Into the Market: The Case for a Neutral Hyperscaler

Oracle did not push its way into the AI cloud market. It was pulled in by the two parties with the most leverage in the entire AI value chain — OpenAI and Nvidia — because doing so rewrites the terms on which those two deal with the incumbents.

The structural problem for both companies was concentration. From OpenAI’s seat, the supply side of frontier-scale training capacity collapsed to three counterparties: AWS, Azure, and Google Cloud. From Nvidia’s seat, the demand side for the largest GPU orders collapsed to roughly the same three. When supply or demand consolidates that tightly, each side of the table holds outsized leverage over price, allocation, and pacing — and that leverage shapes every contract written within the triangle.

Layered on top of the concentration are the conflicts. OpenAI’s largest landlord, Microsoft, sells products built on OpenAI’s own models. Amazon is now Anthropic’s primary cloud and equity partner. Google runs Gemini in direct competition with the labs it would otherwise host. Nvidia faces the mirror image: each of its three largest customers is also building silicon designed to displace Nvidia’s — Trainium at AWS, Maia at Microsoft, TPU at Google. Without alternative buyers of comparable scale, threats to accelerate internal programs carry real weight at the negotiating table.

The neoclouds proved the appetite was real. CoreWeave, Lambda, Nebius, Crusoe — pure-play GPU operators who buy Nvidia systems, rack them in leased data center space, and rent the capacity to AI customers. No general-purpose cloud services. No foundation model of their own. No silicon program. The business begins and ends with GPU hosting. Before Oracle was a serious AI cloud, OpenAI was signing with CoreWeave, and Nvidia was allocating scarce product — and roughly $6 billion in equity — to CoreWeave, Lambda, Nebius, and Crusoe. Neither party was a charity. OpenAI wanted a supplier that was not also a competitor. Nvidia wanted a buyer that was not also building a replacement. The neoclouds offered both — and could not finish the job. Too small. Too thinly capitalized to counterbalance three trillion-dollar incumbents. They proved the leverage was wanted. They did not prove it existed at scale.

And supply is finite. Nvidia’s top-end systems are bottlenecked upstream by TSMC advanced packaging and HBM memory from a handful of Korean and U.S. suppliers — physical constraints that no amount of Nvidia revenue can unlock in any given quarter. Demand runs well ahead of what the fabs can deliver, and has for years. That makes allocation, not manufacturing, the true lever. Every system Nvidia ships to Oracle is a system it does not ship to Microsoft, AWS, or Google. When the company commits hundreds of thousands of its most constrained product to a fourth cloud, it is pulling that supply directly out of the three customers it is trying to discipline. That is what gives the move weight at the negotiating table.

The principals have said so plainly. Jensen Huang describes Nvidia as working “like mad” to expand OpenAI’s capacity “not only on Microsoft Azure but also on Amazon Web Services and Oracle Cloud Infrastructure.” OpenAI paired its $300 billion Oracle commitment with a $38 billion AWS agreement and negotiated its Microsoft exclusivity down to a right of first refusal. What was once a contractual monopoly is now a queue position.

Oracle has not replaced anyone. A new market structure has been built. For as long as OpenAI keeps buying compute and Nvidia keeps selling chips, a fourth hyperscaler-scale operator that competes with neither of them sets the ceiling on what the incumbents can extract. Not a datacenter. A permanent change in the balance of power.

IV. The Backlog: A Half-Trillion-Dollar Reservation Book

The revenue you see in any given quarter is the tip of the iceberg. The leading indicator of Oracle’s infrastructure trajectory is its remaining performance obligations — RPO — which represents contracted, legally committed future revenue.

RPO is money customers have legally agreed to pay. It is not a forecast or a pipeline estimate. It is a contractual commitment. Think of it as a restaurant with a reservation book so full that every table is booked for the next several years. Oracle’s RPO of $553 billion means the company has over eight years of current annual revenue already under contract. The question is not whether the revenue will arrive. It is how fast Oracle can build the infrastructure to deliver it.
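The reservation-book arithmetic can be reproduced from figures stated elsewhere in this piece. A back-of-envelope sketch (the 29% IaaS share is approximate, and the annualization is naive, so treat the output as a rough check rather than a model):

```python
rpo_b = 553.0      # Q3 FY26 remaining performance obligations, $B
q_iaas_b = 4.9     # Q3 FY26 IaaS revenue, $B
iaas_share = 0.29  # approximate IaaS share of total revenue in the quarter

total_quarter_b = q_iaas_b / iaas_share  # implied total quarterly revenue
annual_b = 4 * total_quarter_b           # naive annual run rate

print(f"Implied annual run rate: ~${annual_b:.0f}B")
print(f"RPO coverage: ~{rpo_b / annual_b:.1f} years of current revenue")
```

The implied run rate lands near $68 billion and the coverage near 8.2 years, which is what "over eight years of current annual revenue" cashes out to.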

$38B
RPO, FY20
$138B
RPO, FY25
$553B
RPO, Q3 FY26
$65B
New Commitments in 30 days (Q2)
Exhibit 6 — Remaining Performance Obligations Over Time
From $38B in FY20 to $553B in Q3 FY26. FY23 was the first break in the pattern; fiscal 2026 exposed the magnitude. The early signs were in the 2023 data — read at the time as a stronger version of the old story, rather than as the first evidence of a new one.

The shift began in fiscal 2023, not fiscal 2026. From FY20 through FY22, Oracle’s RPO grew from $38.0 billion to $46.6 billion — the kind of steady, incremental build that reflects normal enterprise cloud adoption. Then FY23 closed at $67.9 billion, a $21.3 billion single-year addition that exceeded the prior two years’ gains combined. FY24 added another $30 billion to reach $98.0 billion. FY25 reached $138 billion.

By then the acceleration was visible, but the market still read it as ordinary cloud scaling. That was the analytical mistake. The underlying demand regime was already changing — fiscal 2026 just exposed the magnitude. RPO jumped to $455 billion in Q1, $523 billion in Q2, and $553 billion in Q3. In a single 30-day stretch during Q2, Oracle signed $65 billion of new infrastructure commitments across seven deals from four customers, excluding OpenAI. That is the tell: a small number of AI-driven buyers can now move Oracle’s backlog by tens of billions in weeks. The early signs were in the 2023 data. They were read as a stronger version of the old story, rather than as the first evidence of a new one.

V. What the Buyers Are Actually Buying

Five Strategic Consequences of Secured Compute

To understand what is sitting inside Oracle’s contracted backlog, we have to step inside the buyer’s head. These are not companies purchasing a commodity. They are companies purchasing capability — and the question is: what does that capability unlock once they have it? The answer is that secured compute activates a set of strategic and product moves that no unsecured competitor can make. The dominoes fall in predictable directions. Naming them is how we see what the contracts are actually worth to the people who signed them.

Secured compute makes a product roadmap credible

An AI lab’s commercial value depends on whether it can promise its customers that a better model is coming. Without secured compute, every roadmap commitment is contingent — “we will ship next-generation capabilities if we can get the chips.” With secured compute, the contingency disappears. The domino effect is commercial: credible roadmaps convert into longer-term enterprise contracts, deeper developer ecosystems, and higher valuations. Customers who would have hesitated to build critical workflows on an uncertain supplier will commit once the supplier can commit first. The compute doesn’t just enable the product; it enables the promise of the product, which is often what actually closes the sale.

Secured compute determines the shape of what can even be built

Product imagination in AI is bounded by compute. A feature that costs ten times more compute per interaction than today’s baseline — real-time personalized video, always-on agentic workflows, multi-hour reasoning, continuous per-user model updates — is not a feature a compute-constrained lab can plausibly ship. It is a feature it cannot plan for. Once capacity is secured, the feasible product set expands, and the expansion is asymmetric: the lab with compute can ship products its rivals literally cannot build — not because the rivals lack the ideas but because they lack the substrate. Some categories of product become exclusive to the buyers who secured compute early, and that exclusivity persists as long as the capacity advantage persists.

Secured compute is the new distribution

In prior technology waves, distribution meant control of a layer between producer and user — the browser, the operating system, the app store, the search box. In AI, the equivalent primitive is compute. If you can serve a billion people inference cheaply and reliably, you own the relationship with those people. This is why the buyers signed contracts that look disproportionate to their current user bases — they are not sizing for today’s traffic, they are sizing for the distribution they intend to own tomorrow. User acquisition strategy, pricing strategy, and geographic expansion all key off this reservation. In AI, compute is not just how the product is delivered — it is the channel through which the market is reached.

Secured compute creates a research flywheel that compounds over years

AI research is compute-hungry: every new architecture, training recipe, or data mixture must be tested at scale to know if it works. A lab with secured compute can run more experiments in parallel, learn faster, and iterate more frequently than a compute-constrained rival. That acceleration feeds talent acquisition — top researchers go where they will actually be given compute to use — which produces better models, which justifies more compute contracts, which attracts more researchers. The domino is a self-reinforcing loop that tightens over the life of the contract. A lab that is two years ahead on the flywheel does not stay two years ahead; the lead widens. This is why the simultaneity of the signing matters: everyone at the table understood that falling behind the flywheel once may mean falling behind permanently.

Secured compute unlocks horizontal expansion into new modalities

Language models are the current commercial surface of AI, but the same underlying capability scales to video, robotics, biological design, scientific simulation, and domains that have not yet been named. Each of those modalities requires enormous training compute to establish a frontier model. A buyer with secured capacity can decide, mid-contract, to allocate some portion of it to a new modality and emerge as a leader in a category their competitors cannot enter. The domino here is the reshaping of the company itself. A lab that secures compute is not just buying capacity for its current product; it is buying the optionality to become a fundamentally different and larger company over the contract’s life. The buyers know this. Some of the contract value is not for what they will do — it is for what they might do, and what they want to reserve the right to do before anyone else can.

VI. In Their Own Words: The Buyers on Compute

The strongest validation of an analytical framework is corroboration from the people whose decisions it is trying to explain. The framework we have built — that securing compute purchases strategic capability, not just server time — finds exactly that. The people who signed these contracts have, at various points, said publicly what we have concluded independently, and their language tracks our framework almost exactly. We weight their direct perspective heavily; what follows is what they have said in their own words.

On compute as the binding constraint — not one input among many, but the input

“Increasing compute is the literal key to increasing revenue. We are so compute constrained, and it hits the revenue line so hard … every unit of capacity the company brings online can immediately be put to revenue-generating use.” Sam Altman, OpenAI — September 2025

Dario Amodei told Lex Fridman that even at a hundred-billion-dollar scale of investment, “that’s still not enough compute, that’s still not enough scale.” In his own public commentary on Anthropic’s operations, he has described a company whose demand curve has moved faster than the physical world can accommodate, and characterized the pressure to scale infrastructure as existential rather than optional. Demis Hassabis chose the same framing in reverse, years earlier, when he explained why he sold DeepMind to Google rather than pursue an independent path: Google could guarantee the compute access his research required, and the alternative could not. For three competing lab leaders to independently converge on the same conclusion — that compute is the resource on which everything else depends — is itself a data point about the nature of the demand.

On insatiable scale — the conviction that more compute always converts to more value

“It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude … We see no reason for exponentially increasing investment to stop in the near future.” Sam Altman, “Three Observations” — February 2025

Amodei, on Dwarkesh Patel’s podcast, described ambitions to build hundred-billion-dollar training clusters and added, “I think all of that actually will happen.” In a separate conversation, he discussed a scenario in which a single company might purchase a trillion dollars of compute by 2027. Hassabis, at the Axios AI+ Summit in December 2025, put it most directly:

“The scaling of the current systems, we must push that to the maximum.” Demis Hassabis, Google DeepMind — December 2025

These are not hedged projections. They are statements of belief, by the people committing the capital, that the relationship between compute and value has no visible ceiling.

On compute determining what can be built

“Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on earth. If we are limited by compute, we’ll have to choose which one to prioritize; no one wants to make that choice, so let’s go build.” Sam Altman, OpenAI

That framing — that compute is the gate between what is possible and what is not, and that the menu of what can be attempted expands with each gigawatt secured — is precisely the argument we made about horizontal expansion and product exclusivity. The buyers are not securing compute for what they plan to do. They are securing it so they do not have to choose.

The fear is not overspending. The fear is having the model capability arrive and not having the infrastructure to deploy it. That asymmetry — where being one year late on capacity is existential but being one year early is merely expensive — is the revealed risk calculus behind every long-dated contract in the RPO. Altman said the same thing from a revenue perspective: concern about spending would be justified only “if we had large amounts of computing we could not monetize profitably.” Until that moment arrives, every dollar spent on compute is, in his framing, a dollar that immediately converts. The buyers are telling us that the downside of overbuilding is manageable and the downside of underbuilding is fatal. That is why the contracts are the size they are.

VII. The Revenue Bet Inside the Backlog

Oracle’s cloud infrastructure revenue runs at roughly $20 billion a year on the most recent quarterly run rate. Its contracted backlog stands at over $550 billion. That ratio — backlog at nearly twenty-eight times current annual infrastructure revenue — looks, at first glance, like a number that needs explaining. But the explanation is not inside Oracle. It is inside Oracle’s customers. The labs that dominate the RPO are not contracting against revenue they already have. They are contracting against revenue they expect to generate — and to understand whether the backlog is credible, you have to ask whether their growth assumptions are credible.

Start with the numbers as they stand. OpenAI’s annualized revenue reached roughly $24 billion in early 2026, up from $6 billion in 2024 and $2 billion in 2023 — a sustained 3 to 4× annual growth rate, at a scale where such multiples mean adding billions of dollars every quarter. And yet its committed compute spend with Oracle alone runs at approximately $60 billion per year over five years. That single contract requires OpenAI to generate substantially more revenue than it currently does, every year, just to service the Oracle commitment — before accounting for its $250 billion Azure obligation, its AWS deal, its own custom silicon program, and every other cost of doing business.
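A hedged sketch of the bet embedded in that contract, using only the figures above. The assumption that the recent ~3× annual growth multiple persists is precisely the conviction being underwritten here, not a forecast:

```python
revenue_b = 24.0        # OpenAI annualized revenue, early 2026, $B (as stated)
oracle_annual_b = 60.0  # approximate annual Oracle commitment, $B
growth = 3.0            # recent annual growth multiple, assumed to persist

year = 2026
while revenue_b < oracle_annual_b:
    year += 1
    revenue_b *= growth

print(f"Revenue first clears the ~$60B/yr Oracle line in {year} (${revenue_b:.0f}B)")
```

One more year at the historical multiple clears the Oracle commitment alone; the Azure and AWS obligations stack on top. That is why the growth assumption, not the headline contract size, is the real variable.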

Anthropic’s trajectory is, if anything, steeper: from roughly $1 billion in annualized revenue at the end of 2024 to $30 billion by April 2026, with multi-gigawatt infrastructure agreements of its own in place. These companies are not spending out of current cash flow. They are committing capital on the conviction that their revenue curves will continue to compound at rates that have no precedent in enterprise technology.

The question is whether that conviction has evidence behind it, and on this point the data is unusually direct. OpenAI published a chart in January 2026 showing that its compute capacity and its revenue have tracked each other almost exactly over three years — compute grew roughly 9.5× from 2023 to 2025, and revenue grew roughly 10× over the same period. The company’s own conclusion was explicit: “we firmly believe that more compute in these periods would have led to faster customer adoption and monetization.”
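The lockstep claim reduces to a ratio. A minimal check of the stated multiples (2023 to 2025 is a two-year span, so the per-year figures below are geometric means):

```python
compute_mult = 9.5   # compute growth, 2023 -> 2025, as stated
revenue_mult = 10.0  # revenue growth over the same two-year span

print(f"Compute per year: ~{compute_mult ** 0.5:.1f}x")
print(f"Revenue per year: ~{revenue_mult ** 0.5:.1f}x")
print(f"Revenue per unit of compute: {revenue_mult / compute_mult:.2f}x")
```

Revenue per unit of compute moved about 5% in two years: effectively flat, which is what "lockstep" means quantitatively.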

Exhibit 7 — OpenAI: Compute and Revenue Moved in Lockstep
Compute grew ~9.5× from 2023 to 2025; revenue grew ~10× over the same period. Revenue was not leading compute — compute was leading revenue. Every gigawatt that came online was monetized.

VIII. From Paper to Power: Converting the Backlog into Energized Gigawatts

The backlog establishes what has been promised. More than half a trillion dollars of revenue now sits in Oracle’s remaining performance obligations, legally binding and anchored by the creditworthiness of the largest AI customers in the world. The rest of this section is about what happens between the paper and the power.

An RPO is a contract. A gigawatt is not. A gigawatt is land that has to be acquired, a substation that has to be energized, a building that has to be constructed, cooling loops plumbed, fiber runs completed, and racks of chips delivered, installed, and activated. The credibility of half a trillion dollars of contracts depends entirely on whether the company holding them can convert that paper into physically energized compute — at scale, on schedule, and at a pace no enterprise technology company has ever sustained before.

The answer to that question is not just yes. The supply Oracle is preparing to bring online over the next three years is roughly an order of magnitude larger than the supply it has delivered to date. What follows explains how that is possible.

The Order-of-Magnitude Gap

Through the first three quarters of fiscal 2026, Oracle has handed over approximately one gigawatt of AI infrastructure capacity to its customers — growing sequentially from a roughly 270-megawatt quarter in Q1, to 400 megawatts in Q2, to more than 400 megawatts in Q3. One gigawatt is a serious number. It is the first real moment in Oracle’s history at which its infrastructure business has begun to physically deliver at the scale its revenue ambitions have implied.

Then comes the pipeline. Through its partners, Oracle has secured more than ten gigawatts of power and data center capacity to come online over the next three years — more than ninety percent of which is already funded, not by Oracle, but by the partners operating each layer of the stack.
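The order-of-magnitude framing is checkable from the quarterly figures. A sketch (Q3 is stated only as "more than 400 megawatts," so 400 is used as a floor):

```python
# Delivered AI capacity by quarter, in MW, as stated above; Q3 is a floor
delivered_mw = {"Q1 FY26": 270, "Q2 FY26": 400, "Q3 FY26": 400}
pipeline_gw = 10.0  # secured partner pipeline over the next three years (a floor)

delivered_gw = sum(delivered_mw.values()) / 1000
print(f"Delivered through Q3 FY26: >{delivered_gw:.2f} GW")
print(f"Pipeline vs delivered: roughly {pipeline_gw / delivered_gw:.0f}x")
```

With both the Q3 figure and the 10-gigawatt pipeline stated as minimums, the ~9× ratio is the conservative reading of "an order of magnitude."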

Exhibit 8 — Delivered Capacity vs. Secured Pipeline
The picture is not subtle. Oracle’s AI infrastructure business is, today, delivering from a base of roughly one gigawatt. In three years, it will be delivering from a base of more than ten — the great majority of it already funded by partners carrying their own layer of the build.

Oracle’s infrastructure business is currently running at approximately twenty billion dollars of annualized revenue. In the third quarter of fiscal 2026, OCI grew 85% year over year — the steepest print the segment has ever posted. The AI-specific layer inside it grew 243% year over year. The segment is approaching one-third of Oracle’s total revenue, and it is generating that result on the smallest delivered capacity base it will ever operate from. Every incremental gigawatt that comes online lands into a market that is, by every available indication, still supply-constrained — which means the capacity converts directly into revenue, without the lag or ramp that defined earlier infrastructure cycles.

RPOs as Gravity

On a balance sheet, an RPO is a line item. In the physical world, it is a gravitational mass. Once it crosses a certain size, and once the demand behind it is credible, it begins to bend the behavior of every actor in the supply chain around it. Nothing moves because Oracle is paying for it. Things move because the existence of the contract has made each actor’s own self-interest point in the same direction.

The gravity is only as strong as the underlying demand is credible — and on that front, the evidence is unusually complete. The frontier labs are operating on scaling laws that have held across four orders of magnitude of compute, with no published ceiling and no sign of diminishing returns. Their own CEOs say so, in the plainest terms they are willing to use. Altman frames compute as the gate on what humanity can attempt. Amodei discusses trillion-dollar training clusters as something that “will actually happen.” Hassabis says the scaling of current systems must be pushed to the maximum. These are not projections from analysts. They are statements of intent from the buyers.

And the intent is not one-sided. Training is episodic; inference is continuous. On top of the training buildout sits agentic AI being deployed across the millions of private enterprise applications the world already runs — a workload that is cumulative, growing with every user, every workflow, every embedded feature. Both curves are headed in the same direction, at the same time.

What completes the picture is the proof from the people already operating at scale. AWS, Azure, and Google Cloud are now reporting tens of billions of dollars of AI infrastructure revenue, accelerating quarter over quarter, against their own admissions that they remain supply-constrained. This is the ecosystem’s evidence that the compute being contracted is not just being bought — it is being monetized, at price points that support real margins, by customers who keep coming back for more. Every site developer, power provider, chip supplier, and capital provider who evaluates an Oracle RPO can see the same signal reflected in the hyperscalers’ earnings reports. The money is real. The buyers are solvent. The unit economics work. That is the mass behind the gravity.

The Six-Actor Cascade

The mechanism has six actors. Each one makes an independent decision, in its own domain, based on its own self-interest. None is being commanded by Oracle. None is acting on hope. What aligns them is the same upstream signal — the RPO — which has made each actor’s self-interested calculation favorable at roughly the same moment.

Exhibit 9 — The Six-Actor Cascade
Frontier lab → Oracle → site developer → power provider → chip supplier → capital provider. Each actor moves on the credibility of the layer above. The RPO does not command the chain. It organizes it.

The frontier lab signs the initial commitment. These are companies — OpenAI, Anthropic, Meta, xAI — with conviction that their revenue is compute-gated, and evidence to support that conviction. OpenAI’s own chart shows compute and revenue moving in roughly 10× lockstep over three years. Their treasuries are large enough to backstop gigawatt-scale, multi-year contracts. Their competitive stakes make securing forward compute the defining strategic priority. When they sign, they sign with intent, and the downstream signal is unambiguous.

Oracle receives the commitment and records it as a remaining performance obligation. This is the pivotal act in the system. Up to that moment, the customer’s demand exists as a stated intention — credible to those inside the conversation, but unexercisable by anyone else. Once Oracle reports it as an RPO, the demand becomes something the rest of the supply chain can build against: a named, quantified, legally binding, and in many cases prepaid claim on future infrastructure. The RPO is the instrument that converts intent into a financeable object. Everything that happens downstream is happening against that object.

The site developer responds by breaking ground. Crusoe is executing Abilene and Shackelford. Vantage and others are executing the newer campuses in Wisconsin, Michigan, and New Mexico. In almost any other era, a data center developer would have to acquire land and build on speculation, betting that a tenant would eventually arrive. Here the sequence is inverted — the tenant has signed before the shovel enters the ground. The financing that funds construction is underwritten not against a demand forecast but against a named, contracted counterparty. Leasing risk, the defining uncertainty in commercial real estate, has been removed before the first permit is filed.

The power provider allocates capacity to the site. New generation is itself a capital-intensive, multi-year commitment, and utilities have finite transmission to assign. When a provider decides which AI campus to prioritize, the RPO is the strongest available signal that the megawatts brought online will be consumed. At Abilene, that commitment has taken the form of a new 1-gigawatt substation plus 300 megawatts of on-site gas turbines — physical capital deployed because the offtaker is no longer speculative.

The chip supplier ships scarce inventory to Oracle ahead of competing destinations. This is Nvidia primarily, AMD increasingly, with networking, storage, and optical suppliers arriving in their wake. Nvidia has a structural reason to favor Oracle that goes beyond the size of the order. Its three largest hyperscaler customers — Microsoft, Amazon, Google — are also building their own silicon programs. Oracle is not. From Nvidia’s perspective, allocating scarce product to Oracle does not fund the development of a future substitute for Nvidia’s own product. This is why Jensen Huang names Oracle Cloud Infrastructure from the GTC stage.

The capital provider funds Oracle’s portion of the stack — a role worth drawing out clearly, because it is easily conflated with the partner-funded figure. The 90%+ partner-funded pipeline refers to the partners covering their own layer of the build: the site developer financing the building, the power provider financing the power infrastructure, each with its own capital stack underwritten against the Oracle RPO. Oracle’s own expenditure is concentrated on the compute equipment that sits inside the data center — the GPUs, the networking fabric, the storage. That portion is funded jointly: Oracle’s operating cash flow, customer prepayments, bring-your-own-hardware arrangements where the customer ships its own silicon, and — for the balance — debt raised in the public capital markets. On that last layer, the lender’s calculation is as favorable as project finance gets. The cash flows supporting the debt are contracted to counterparties of strong credit quality. The underlying assets have a demonstrated and active secondary market. And Oracle is publicly committed to maintaining its investment-grade rating throughout the buildout. The paper prices like investment-grade debt because that is, in substance, what it is.

Read across the chain, the point lands: the system coordinates not through a central planner but through a single financial instrument heavy enough to pull every actor independently into motion. Each layer acts on the credibility of the layer above. The RPO does not command the chain. It organizes it. And the loop closes on itself. Every gigawatt delivered becomes recognized revenue. Every quarter of recognized revenue further strengthens the credit profile supporting the next RPO. Every successful delivery cycle makes the next cascade run faster, on larger commitments, among actors who have already worked together and trust the mechanism. The gravity compounds.

Oracle’s Strategic Choice: Coordinator, Not Sole Funder

Read the cascade in that order and the strategic choice becomes visible. Oracle did not attempt to self-fund a ten-gigawatt buildout. It distributed the buildout across an ecosystem of experts, where each actor finances its own layer against the same upstream RPO.

The counterfactual makes the logic stark. Self-funding ten gigawatts would have required hyperscaler-scale capital — hundreds of billions of dollars of new debt or meaningful equity dilution — while simultaneously running internal construction, power, and land acquisition organizations that Oracle did not have. It would have taken longer, cost more, and concentrated execution risk on a single balance sheet. Instead, the site developers handle buildings, the utilities handle power, the chip suppliers handle silicon — each doing what they have spent decades becoming the best at doing. The RPO is what makes this division of labor possible. Without it, none of the partners would be willing to carry their own capital. With it, they are eager to.

The same ethic carries into the portion Oracle does fund. Bring-your-own-hardware lets customers ship their own silicon and bypass Oracle’s capex entirely. Upfront customer payments let the contracted revenue arrive before the equipment purchase does. Supplier arrangements with Nvidia and AMD — including the option for the chip vendor to lease rather than sell — further synchronize Oracle’s outflows with its inflows. More than $29 billion of contracts have been signed using this combined model since the Q2 earnings call alone. Oracle’s guidance is explicit: the company now expects to need substantially less external capital than the roughly $100 billion Wall Street had been modeling.

The Sites Themselves

Numbers lose their edge over a long document. Ten gigawatts secured. Ninety percent partner-funded. $29 billion in new-model contracts. The next five subsections are where those numbers stop being accounting abstractions and start being Texas dirt, Wisconsin earthwork, and New Mexico concrete. Five sites. Five satellite images. Each one is a place on American earth where the cascade described in the previous sections is currently pulling megawatts into existence that did not exist two years ago.

Abilene, Texas — OpenAI’s anchor campus

Exhibit 10Abilene — The Proof of Concept at Gigawatt Scale
Exhibit 10 — Abilene — The Proof of Concept at Gigawatt Scale (data) Exhibit 10 — Abilene — The Proof of Concept at Gigawatt Scale (aerial)
Eight-building campus in Abilene, TX, operated by Crusoe, anchored by OpenAI. 1,100 acres. 6,400 construction workers on site at peak. Construction began May 2024; satellite image dated March 2026. 295 MW already energized and serving live workloads. Trajectory to ~1.2 GW with a new 1 GW substation plus 300 MW of on-site gas turbines. OpenAI workloads went live less than twelve months from groundbreaking.

OpenAI’s workloads went live at this site less than twelve months from the date Crusoe broke ground. One year from groundbreaking to production — at gigawatt scale. That is the cascade compressing the schedule. Crusoe did not wait for Oracle’s cash. The utility did not wait for the building to be finished before commissioning the substation. Nvidia did not wait for the substation to be fully energized before rack deliveries began. OpenAI did not wait for the full campus to come online before moving workloads onto the earliest completed phases. Each actor moved on the credibility of the contract upstream, and the sequence telescoped into a delivery that no single party could have produced on its own.

Every one of the six actors from the cascade is visible in this single frame. The buildings (site developer). The substation and gas turbines (power provider). The racks arriving inside (chip supplier). The workloads running for a named anchor tenant (frontier lab). The contracts underpinning all of it — invisible in the image but legally binding (Oracle). And the operating cash flows and debt financing Oracle’s equipment, also invisible but materially present (capital provider). What you are looking at is not a data center under construction. It is the cascade operating in three-dimensional space.

Shackelford County, Texas — the scale-up

Exhibit 11Shackelford — Same Playbook, Double the Scale
Exhibit 11 — Shackelford — Same Playbook, Double the Scale (data) Exhibit 11 — Shackelford — Same Playbook, Double the Scale (aerial)
Same anchor tenant as Abilene, on the same kind of Texas campus, with one defining difference: planned capacity runs toward roughly 2 GW, nearly twice Abilene’s endpoint. Five major buildings under construction at various shell-completion stages, with Building 1 visibly further along than the others.

The significance is in the comparison. Abilene proved the mechanism works at gigawatt scale. Shackelford proves it generalizes — started later, scoped larger, executing concurrently with Abilene rather than waiting for Abilene to finish. Oracle did not have to rediscover the playbook to run it at two gigawatts. It applied more of the same, in parallel. That is the pattern worth tracking across the remaining three sites. Not one thing at a time. Several things at once.

New Mexico — the desert campus

Exhibit 12New Mexico — Geographic Diversification
Exhibit 12 — New Mexico — Geographic Diversification (data) Exhibit 12 — New Mexico — Geographic Diversification (aerial)
Same anchor tenant, similar planned capacity of roughly 2 GW, situated on a desert site. February 2026 shell-stage image: multiple building skeletons up, foundations poured, perimeter infrastructure visible across the parcel.

The interesting detail here is cooling. Oracle’s standardized campus design uses a closed-loop liquid cooling system, which means annual water consumption for a one-hundred-megawatt building runs lower than that of a single-family home in the same region. That is the design choice that makes a two-gigawatt AI campus in the New Mexico desert possible without competing against municipal water demand. Geographic diversification is not just a hedge against concentration in Texas. It is a demonstration that the cascade runs wherever the power and land math works — and that Oracle’s campus design meaningfully expands where that math works.

Wisconsin — bare earth

Exhibit 13Wisconsin — The Mechanism Moves Before the Site Exists
Exhibit 13 — Wisconsin — The Mechanism Moves Before the Site Exists (data) Exhibit 13 — Wisconsin — The Mechanism Moves Before the Site Exists (aerial)
The earliest site in the pipeline. Bare earth in motion at the date of the image. Full-buildout trajectory toward ~1.2 GW — comparable to Abilene’s endpoint — but no shells yet, no substation visible, no racks arriving. Dirt being graded. Vehicles on site. Construction staging taking shape.

The thing worth noticing is what is already in place despite the appearance. Land is owned. The tenant has signed. The RPO sits on Oracle’s books. Power allocation is underway with the local utility. Equipment is on order. The six-actor cascade has already run its course — months before the first building rises. That is the proof that the mechanism is not speculative. The site does not have to exist first for the cascade to move. The cascade moves first, and the site follows.

Michigan — in parallel

Exhibit 14Michigan — Simultaneity Across Regions
Exhibit 14 — Michigan — Simultaneity Across Regions (data) Exhibit 14 — Michigan — Simultaneity Across Regions (aerial)
Earthwork complete, foundations poured, construction vehicles mobilized, no shells yet. Planned capacity ~1.4 GW at full buildout.

The observation that matters is not the site in isolation but the site alongside Wisconsin. Two northern campuses, similar scale, roughly the same stage of completion, in regions hundreds of miles apart, with different developers, different utilities, different local labor pools — advancing simultaneously. That simultaneity is what makes a 10+ GW pipeline delivering over three years arithmetically possible. If the pipeline were serialized — one site at a time, one region at a time — the math would not work. Because it runs in parallel, across regions and partners who each run their own layer independently against the same upstream RPO, it does.

The Factory Floor: Execution Velocity

The preceding subsections showed what Oracle is building. This one shows how fast it is building it — and more importantly, how fast the cadence itself is accelerating. Oracle disclosed the following operational metrics from the last twelve months on its Q3 earnings call.

Exhibit 15Operational Metrics — Trailing Twelve Months
Exhibit 15 — Operational Metrics — Trailing Twelve Months
Manufacturing sites up 3×. Rack output per site up 4×. Rack-to-revenue time cut 60%. On-time delivery sustained at 90%+ across several quarters. AI infrastructure gross margin at 32%, above the 30% long-run guidance floor.

The Timing Alignment with Nvidia’s Next Generation

Oracle’s ten gigawatts do not come online in 2026. They come online across 2026, 2027, and 2028 — precisely the window in which Nvidia transitions from Blackwell to Rubin to Vera Rubin + LPX. At Nvidia’s Ultra workload tier, published revenue per gigawatt runs roughly $30B on Blackwell, $150B on Rubin, and $300B on Vera Rubin + LPX. A tenfold productivity expansion from today’s baseline silicon to the silicon that will fill the back half of Oracle’s buildout.
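The tenfold figure is direct arithmetic on the per-gigawatt revenue numbers quoted above. A minimal sketch, using only the figures stated in this section (the generation names are Nvidia’s; the dollar values are the published Ultra-tier figures cited here):

```python
# Published revenue per gigawatt at Nvidia's Ultra workload tier,
# as quoted in this section (billions of dollars per GW).
revenue_per_gw = {
    "Blackwell": 30,
    "Rubin": 150,
    "Vera Rubin + LPX": 300,
}

baseline = revenue_per_gw["Blackwell"]
for gen, rev in revenue_per_gw.items():
    # Each generation expressed as a multiple of the Blackwell baseline.
    print(f"{gen}: ${rev}B per GW ({rev / baseline:.0f}x Blackwell)")
# Vera Rubin + LPX works out to 10x Blackwell per gigawatt.
```

The point of the sketch is only that the "tenfold" claim is the ratio of the two endpoints of the three-generation arc, not a separate forecast.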

Exhibit 16The Pipeline, Redrawn by Silicon Generation
Exhibit 16 — The Pipeline, Redrawn by Silicon Generation
The same 10+ GW pipeline, colored by the silicon generation filling it. A gigawatt energized in 2028 running Vera Rubin + LPX is not worth what a gigawatt energized in 2024 running Hopper was worth. It is worth several times as much, on hardware that was specifically designed to extract that value.

Oracle’s ten gigawatts are not the same economic gigawatts the industry delivered yesterday. They will be filled with the most productive computing hardware ever manufactured, at the moment that hardware is manufactured. This is the multiplier sitting on top of the capacity expansion. Oracle is not just bringing online roughly ten times the capacity it operates today. It is bringing online capacity that will be several-fold more economically productive per watt than today’s standard. Two ramps compounding in the same window. Neither under Oracle’s control — which is the point. Oracle’s role is to be standing, ready, with energized gigawatts to receive both, at the moment both arrive.

The Margin Question, Resolved

Oracle’s AI infrastructure gross margin ran at 32% in Q3 FY26, within a stated long-run range of 30–40%. That is structurally below where AWS, Azure, and Google Cloud sit on parts of their own books. The skeptical read: Oracle is the marginal supplier with the worst unit economics. The honest read starts with a question — what would a higher margin actually require?

Higher margin on Oracle’s AI infrastructure business would require Oracle to own more of the stack. To build its own real estate, own its own power generation, carry more of the financing, run internal organizations across construction, utilities, and procurement. It would also require the scale to buy chips on the same terms as the largest hyperscalers, which Oracle does not yet have, and the balance sheet to absorb hundreds of billions of dollars of capex.

Exhibit 17The Margin Trade, Visualized
Exhibit 17 — The Margin Trade, Visualized
On the left, real margin points — the difference between what Oracle books and what a pure-play hyperscaler might book on an equivalent contract. On the right, what those margin points actually buy: capital-light growth, speed to market, a $550B+ RPO reached in three years, and a durable orchestrator position that strengthens with every delivery cycle.

Oracle chose a different path. The margin points it concedes are the price of an ecosystem that moves faster than any single company could. Partners execute the layers they know best. Oracle orchestrates. The RPO coordinates. Capital gets deployed only where it is most efficient.

The critical observation is not that the trade favors Oracle. It is that the trade favors everyone in the chain. The site developer earns on the building. The utility earns on the gigawatt. Nvidia earns on the chip. The customer receives capacity on credible terms, ahead of competitors who could not sign as fast. The capital markets earn contracted yields on investment-grade paper. And Oracle earns a margin that reflects its role — orchestrator and credit anchor — not sole funder of every layer. No one is losing. Every actor earns what they bring.

This is the design. And it is the design that makes the buildout possible. A higher-margin Oracle in this market is a smaller Oracle. A smaller Oracle does not have a $550B+ RPO. A smaller Oracle does not fill five simultaneous gigawatt-scale sites. A smaller Oracle does not share the stage with Nvidia at GTC. The margin Oracle concedes is the margin that buys its seat in the game at the scale the game is being played — and at that scale, 32% on a vastly larger base generates dramatically more absolute profit than 60% on a base Oracle could not have built alone.
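The absolute-profit point can be made concrete. The revenue bases below are hypothetical, chosen only to show the shape of the trade; the 32% figure is Oracle’s disclosed margin, and the 60% figure stands in for the richer margin a self-funded, smaller build might have earned:

```python
# Illustrative comparison only. Revenue bases are invented for the
# example; neither is a disclosed Oracle figure.
large_base_revenue = 100.0   # $B, hypothetical orchestrator-model base
small_base_revenue = 30.0    # $B, hypothetical self-funded base

orchestrator_profit = large_base_revenue * 0.32   # disclosed 32% margin
self_funded_profit = small_base_revenue * 0.60    # hypothetical 60% margin

# The lower margin on the much larger base yields more absolute profit:
# 32.0 vs 18.0 in this example.
assert orchestrator_profit > self_funded_profit
```

The conclusion is not sensitive to the exact inputs: any base more than roughly 1.9× larger flips the comparison in favor of the 32% margin.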

One more point, and this is the one investors should linger on. Today’s 30–40% margin range is, in all likelihood, the lowest range Oracle’s AI infrastructure business will ever post. It is the range on the first contracts in a new market structure — signed while Oracle was still proving it could deliver at gigawatt scale, and signed against silicon that itself was still arriving. Two things change from here. The hardware filling each gigawatt becomes dramatically more productive per watt — the roughly tenfold revenue-per-gigawatt expansion across Nvidia’s three-generation arc, which lifts the revenue earned from the same physical footprint without a proportional rise in cost. And Oracle’s negotiating position strengthens with every cycle: more customer relationships established, more campuses operating, more partners who have already delivered alongside Oracle, more operational data proving the delivery. Better economics on the underlying asset, combined with stronger leverage at the table, in a market with no signs of softening demand. The investor who anchors on today’s margin is anchoring on the floor. The franchise being built is one whose margin profile should expand as the next set of deals replaces the first.

From Backlog Servicer to Compute Provider in Perpetuity

What gets built alongside the infrastructure is the real asset. Every campus delivered adds a repeatable execution template — a known developer, a known utility, a known financing arrangement, a known chip supply channel. Every quarter of on-time delivery deepens customer trust ahead of the next negotiation. Every RPO converted to revenue improves Oracle’s credit profile, lowering the cost of capital for the next cycle. Every phase energized alongside Nvidia’s generational transition teaches the entire chain how to execute the next generation’s buildout faster. None of this shows up in the 10+ GW pipeline figure. All of it shows up in what comes after.

IX. The Floor, Not the Ceiling: Oracle’s Own Forecast

Everything discussed so far — the backlog, the cascade, the sites, the execution velocity, the silicon alignment, the margin trajectory — converges on a single question we can now reasonably ask. What does Oracle itself think this is worth?

The answer was published, twice, roughly five weeks apart. And the delta between the two publications is the most revealing part of the story.

On September 9, 2025 — the Q1 FY26 earnings call — management shared a long-range OCI revenue plan running from $18B in FY26 to $144B in FY30, implying a 71% five-year CAGR. Those are not modest numbers. Most companies would publish them and let them sit for a year.

Oracle raised the plan thirty-seven days later.

At the October 16 Financial Analyst Meeting, the FY30 endpoint moved from $144B to $166B. The implied five-year CAGR moved from 71% to 75%. Read across the years, the upward revisions are not trivial: FY27 rose $2B, FY28 rose $4B, FY29 rose $15B, and FY30 rose $22B. The revisions compound the further out they go — which is what you would expect when a business is signing larger contracts for later delivery faster than its own planning process can keep up with.
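The pattern in the revisions can be checked directly from the disclosed figures. A quick sketch using only the numbers quoted in this section (the per-year values are the disclosed upward revisions, not full plan lines):

```python
# October revisions to the September OCI plan, by fiscal year ($B),
# as quoted in this section.
revisions = {"FY27": 2, "FY28": 4, "FY29": 15, "FY30": 22}

old_fy30_target = 144
new_fy30_target = old_fy30_target + revisions["FY30"]
assert new_fy30_target == 166  # matches the revised FY30 endpoint

# Each year's upward revision exceeds the previous year's: the
# revisions grow the further out the forecast goes.
bumps = list(revisions.values())
assert all(earlier < later for earlier, later in zip(bumps, bumps[1:]))
```

The strictly increasing sequence is the quantitative form of the claim in the text: the plan is being outrun fastest at its far end.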

The second point is what these numbers mean for Oracle overall. OCI is a segment inside a larger company, and the company’s total revenue target for FY30 was also published at the October Financial Analyst Meeting: $225B, up from roughly $67B this year. Set the OCI plan alongside that total and the transformation becomes visible.

Exhibit 18Oracle Total Revenue Path, FY26–FY30
Exhibit 18 — Oracle Total Revenue Path, FY26–FY30
Oracle’s total company revenue target moves from ~$67B in FY26 to $225B in FY30. The segment composition changes even more than the total.
Exhibit 19OCI as a Share of Oracle Revenue
Exhibit 19 — OCI as a Share of Oracle Revenue
OCI moves from ~27% of Oracle’s revenue in FY26 to ~74% by FY30. By FY28 — two years from now — OCI crosses 50%. By FY29, it is already 70%.

OCI moves from roughly 27% of Oracle’s revenue in FY26 to roughly 74% by FY30. By FY28 — two years from now — OCI crosses 50%. By FY29 it is already 70%. That is not a segment growing inside a mature company. It is a new Oracle emerging inside the old one — one whose center of gravity is compute infrastructure rather than enterprise database licenses, and whose growth rate is set by the pace at which frontier AI demand can be served rather than by the pace at which enterprises migrate from on-premise. The database business remains large and highly profitable, but it is no longer the dominant line. By FY30, on Oracle’s own forecast, more than seven of every ten dollars of revenue come from the infrastructure business the preceding sections have described.
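The endpoint shares follow directly from the two published plans. A minimal check, using only the figures quoted in this section (intermediate-year totals are not disclosed here, so only the endpoints are computed):

```python
# OCI revenue and total company revenue at the plan's endpoints ($B),
# as quoted in this section.
oci = {"FY26": 18, "FY30": 166}
total = {"FY26": 67, "FY30": 225}

for fy in ("FY26", "FY30"):
    share = oci[fy] / total[fy]
    print(f"{fy}: OCI is {share:.0%} of total revenue")
# FY26 works out to ~27%, FY30 to ~74% -- matching the exhibit.
```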

“These figures are as of this moment in time. If we see additional demand that enables us to grow revenue and profits faster, we will accelerate near-term investments in order to capture additional market share.” — Oracle management, Financial Analyst Meeting, October 2025