(Yahoo! Finance) - Nvidia (NVDA) is winning the global AI explosion. The company's chips are the most wanted in the world, with hyperscalers ranging from Amazon (AMZN) and Google (GOOG, GOOGL) to Meta (META) and Microsoft (MSFT) spending billions to pack them into their data centers.
That's been a boon for Nvidia's bottom line. Its full-year revenue has jumped from $26.9 billion in 2022 to $215.9 billion in 2025 and is expected to top $358.7 billion in 2026. That has also given the company’s stock price a major boost.
Since OpenAI's (OPAI.PVT) ChatGPT debuted in November 2022, Nvidia stock has skyrocketed nearly 990%. And though the exuberance has slowed over the past year, shares still rose 46% over the past 12 months.
But Nvidia isn't the only company driving the AI build-out. While it develops the chips that are at the center of the AI world, Nvidia doesn't actually put them into data centers. Sure, you've likely seen Nvidia's sleek black-and-gold servers at its events, but those are reference designs, not the actual servers that go into data centers.
Nvidia's partners, including the likes of Dell (DELL), Hewlett Packard Enterprise (HPE), and Foxconn, actually build the massive arrays of computers that power AI models and services.
"What over the years Nvidia has brought to the table has been the GPUs, obviously, then moving into [data processing units] and [network interface cards] ... all the drivers, [software development kits], some of the toolkit that needs to be delivered for those silicon technologies," said Chris Davidson, vice president of high-performance computing and AI customer solutions at HPE. "But really, at the end of the day, without a solution integrator to put it all together, those are just bits and bobs. Those are just the basic components."
Early conversations and deploying engineers
Standing up a data center isn't as simple as dumping a few computers in a warehouse. Companies that string together servers work with customers well before any systems are plugged in.
"What maybe a lot of people don't realize is there's a lot of work that goes into the site planning long before any [server kit] or product shows up," Davidson said.
"We're engaged with many customers long before even maybe the data center is built, or if they have a data center, we're typically aware of where it is, what kind of power is left, what kind of cooling capacity it has," he explained.
Arthur Lewis, president of infrastructure at Dell, told Yahoo Finance that the company uses forward-deployed engineers as part of teams that include data center architects, network architects, thermal architects, compute architects, and storage architects to determine how best to meet customers' data center needs.
Part of the reason for such white-glove service is that no two data centers are ever truly the same.
"Nvidia has amazing technology, and they've enabled an ecosystem to build almost anything that they want," Lewis said.
"Every customer wants something a little bit different, because their software is a little bit different," he added. "It depends on the workload. You know, a lot of customers have training workloads, but even within the training workload, their software is optimized in different ways."
And all of that has to happen as quickly as possible. After all, the longer an order of Nvidia-based servers is sitting dark, not crunching data, the more money a customer is losing on their initial investment.
According to Lewis, Dell has developed its own strategies to bring data centers online in as little time as possible.
"We're at a point now where the semi pulls up to a customer, pulls the rack off the semi with a forklift, plops it onto the cement slab … plugs it in, and we can turn it over into production in 24 hours, which is really unheard of in the industry," he said, adding that the company managed to work with one customer to deploy 100,000 GPUs in six weeks.
Nvidia's software advantage
Nvidia doesn't simply hand off its chips, though. A major part of the company's appeal is its software offerings, including its CUDA platform, which enables customers to take full advantage of its GPUs' processing power.
"A detail that's lost on a lot of the world [is] that, you know, the majority of Nvidia's employees are software engineers," Nvidia vice president of enterprise platforms Justin Boitano told Yahoo Finance.
"It's having great developer documentation, it's having great developer tools, over decades of building this accelerated computing platform, is what drives the application developers to this platform," he added. "The ubiquity of the developer platform makes it easier for app developers to start anywhere, and then ultimately build these next-generation apps on this architecture."
The company is expected to provide greater insight into its next-gen software and hardware at its annual GTC event, which kicks off in San Jose, Calif., on March 16.
By Daniel Howley - Technology Editor