NVIDIA Is Still Undervalued — High Margin, High Growth, Coexistence-Driven Economic Zone, and a Bright Outlook
NVIDIA’s stock has not been moving as smoothly as the market expected for a while now. The closing price on April 8, 2026 was $182.08. The all-time closing high was $207.02 on October 29, 2025, and the intraday high was $212.19. So the current price is about 12% below the closing high and about 14% below the intraday high (Macrotrends).
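As a quick sanity check, the drawdown figures follow directly from the quoted prices. A minimal sketch, using only the prices cited above:

```python
# Prices cited in the text (Macrotrends): close on April 8, 2026,
# all-time closing high, and all-time intraday high.
close = 182.08
ath_close = 207.02
ath_intraday = 212.19

# Drawdown = (high - current) / high
gap_close = (ath_close - close) / ath_close        # about 12%
gap_intraday = (ath_intraday - close) / ath_intraday  # about 14%

print(f"{gap_close:.1%} below closing high, {gap_intraday:.1%} below intraday high")
```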
This is not just a short-term correction. Since July 2025, the stock has roughly stayed in a $165–195 range, and Barron’s reports that post-earnings reactions have been weighed down by worries about rising competition and the durability of hyperscaler capex. So for almost half a year, the most central AI name has not really pushed higher.
The market is probably worried about a few things: slowdown in the AI capex cycle, Google’s TPU and other ASIC strategies, the encirclement that includes China, and energy and macro risk from the Middle East.
Press coverage cites competition and capex sustainability as the main weight on the stock, while also noting that the Middle East situation is a broader market risk.
Even so, I think pessimism on NVIDIA has run a bit too far. This company still keeps extremely high profitability, still has high growth, and is not using its profits to play defense. It is using them to invest aggressively and expand the next layer it dominates.
What I think is even more important is the style of that expansion. NVIDIA is avoiding head-on collisions and expanding its economic zone while coexisting with others. It feels like a very Asian idea of “coexistence”, and it is executed quite cleverly.
High margin, but not playing defense
First, I want to highlight how strong NVIDIA’s earnings structure is. For full-year FY2026, revenue was $215.9 billion, up 65% year over year. GAAP gross margin was 71.1% for the year, and 75.0% in Q4 alone. Q4 revenue was $68.1 billion, up 73% year over year, and data center revenue was $62.3 billion, up 75% year over year. So even though the company is already huge, it is still delivering very high margins and high growth at the same time (NVIDIA Newsroom).
The important point is that this is not “mature company” high margin. Usually, very high margin companies eventually slow down in growth, or start protecting their profits by cutting investment. NVIDIA is different.
Not only is revenue growth still very high, the company itself lists the Q4 cost increase drivers as compute and infrastructure costs, engineering development, new product introductions, and headcount. That tells me they are not maintaining the status quo. They are putting capital and operating expenses into the next stage of growth.
In short, NVIDIA is not “a company protecting its high margins”, it is “a company using high margins to attack further”.
It is not just profitable. The way it uses invested capital is unusually strong.
Another interesting angle is CFROI from HOLT.
CFROI stands for Cash Flow Return on Investment. It was originally developed by HOLT Value Associates and is now used inside UBS’s HOLT framework. UBS describes HOLT as a method that adjusts for accounting bias and distortion, and converts income statement and balance sheet data into CFROI, “a cash-based return that is closer to the real economics of the company”.
Investopedia also explains that CFROI focuses on cash flow rather than accounting profit, and gives a real return on the total invested capital of the company — close to a company-wide IRR view.
The Information notes that the average non-financial company has a CFROI of about 6%, while NVIDIA is at 73% — among the very top of all companies.
CFROI is not just a question of whether the accounting margin is high. It is a measure of how strongly a company generates cash from the capital it has actually deployed. From this angle, 73% means NVIDIA is not simply profitable. It means invested capital is turning at an extraordinarily high efficiency. And the article also notes that NVIDIA is not just high CFROI — it also ranks near the top in asset growth.
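Since CFROI is close to a company-wide IRR, the mechanics can be illustrated with a small sketch. Everything below is hypothetical and heavily simplified for illustration, not HOLT's actual adjustments or NVIDIA's real inputs: in the HOLT framing, CFROI is roughly the IRR that equates a firm's gross investment with its gross cash flows over the asset life, plus non-depreciating assets recovered at the end.

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Find the discount rate where NPV of cash_flows is zero (bisection).

    Assumes a conventional pattern (one sign change), so NPV is
    monotonically decreasing in the rate over the search interval.
    """
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical firm: 100 of gross investment today, 30 of gross cash flow
# per year over a 5-year asset life, and 20 of non-depreciating assets
# (land, working capital) recovered in the final year.
flows = [-100] + [30] * 4 + [30 + 20]
print(f"CFROI-style IRR: {irr(flows):.1%}")
```

The point of the sketch is only the shape of the measure: it is a cash-on-cash return over total deployed capital, not an accounting ratio, which is why a 73% reading is so far above the roughly 6% typical for non-financial companies.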
In other words, NVIDIA is not only a “high margin company”. It is a company that keeps a very high cash return while also investing very aggressively.
This matters a lot. If it were only high margin, you could say “they are just harvesting today’s market”. But if high margin, high growth, and high investment efficiency are all true at the same time, it is not defense. It looks like they are taking huge profits and pouring them into the next high-growth areas.
From “GPU company” to “AI infrastructure dominance”
If you still see NVIDIA as just a company that sells GPUs, I think you miss what is actually going on. NVIDIA is reaching well beyond compute chips into networking, software, rack design, AI factory design, and even into how customers fund and expand their facilities.
Reuters reported that NVIDIA’s $2 billion investment in Marvell is meant to make it easier for customers to combine Marvell’s custom AI chips with NVIDIA’s networking and CPU. NVIDIA itself explains that Marvell will provide NVLink Fusion compatible XPUs and networking, while NVIDIA provides Vera CPU, ConnectX NIC, BlueField DPU, NVLink, Spectrum-X switches, and rack-scale compute infrastructure.
This is not just an investment. Even if customers move toward ASICs, NVIDIA is making sure that the surrounding standards, interconnects, and operating layers stay inside its economic zone.
NVIDIA is also pushing on the inference, interconnect, and operations layer. As AI shifts from training-heavy to inference-heavy, what matters is no longer just single-GPU performance. It is how you connect and efficiently run a huge mass of GPUs, CPUs, NICs, switches, and memory. In 2025 NVIDIA announced Spectrum-X Photonics and Quantum-X Photonics, making clear that silicon photonics is part of how they intend to raise interconnect efficiency and power efficiency across the whole AI factory.
According to Reuters, GTC 2026 was expected to focus on inference, with NVIDIA pushing inference and agentic AI as competition heats up.
NVIDIA is also reaching downstream into data centers and AI factory build-outs. Reuters reports that NVIDIA-backed Nscale is raising large capital to expand data centers in line with AI compute demand. Nscale also announced, with Microsoft, NVIDIA, and Caterpillar, that they will deploy a 1.35GW Vera Rubin NVL72 platform at a flagship campus in West Virginia (Why Nvidia-backed Nscale is going after 8GW of data center sites).
A long-term partnership with Thinking Machines Lab includes deployment of at least a 1GW-scale next-generation Vera Rubin system and additional investment. So NVIDIA is no longer just a parts supplier. It is embedded inside the equipment plans of the next generation of huge AI players (NVIDIA strengthens its grip on AI infrastructure through startup investments).
Here is the part I think is important. Dominance in the AI era will not be decided by the GPU alone. It will be decided by who controls the standard configuration of the AI factory. Once NVIDIA’s way becomes the default across chips, networking, racks, software, build partners, and even power capacity, the competition is no longer a simple chip-to-chip comparison.
NVIDIA’s cleverness is that it does not fight head-on; it brings rivals in by coexisting
I think this is the most important point. NVIDIA’s strength is not just that the GPU is strong. Jensen Huang is not trying to crush competitors and customers head-on. He is doing something more clever: even when the other side grows, NVIDIA makes money around them.
This is a strategy of coexistence. By not competing head-on, NVIDIA absorbs the growth of others while expanding its own area of dominance.
The Marvell investment looks like a coexistence-style absorption of Broadcom and the ASIC market
The clearest example is the $2 billion investment in Marvell. According to Reuters, NVIDIA invested $2 billion in Marvell and is moving forward with a partnership in which Marvell provides chips and interconnect platforms compatible with NVIDIA technology. NVIDIA brings NVLink Fusion, Vera CPU, ConnectX NIC, BlueField DPU, Spectrum-X switches, and so on. Marvell provides XPUs and surrounding chips that fit naturally with that stack.
What is interesting here is that NVIDIA is not trying to fight Broadcom or Google’s custom chip path head-on, and is not trying to crush them with a competing product either. Reuters reports that Broadcom and Google have signed a long-term agreement to keep co-developing TPUs through 2031, and TPU platforms are also expanding for Anthropic. The push toward ASICs and customization can no longer be stopped.
So instead of trying to take all of the ASIC business back, NVIDIA seems to be making sure that even if ASICs grow, the interconnect, surrounding chips, networking, and operating layers around them stay inside the NVIDIA economic zone (NVIDIA Newsroom).
This is not a head-on fight. It is more like: “you can grow on your own terms, but NVIDIA becomes essential around your growth.” I find that to be a very refined coexistence strategy.
In autonomous driving too, NVIDIA does not sell an “NVIDIA FSD”. It lets each automaker build their own.
The same shape shows up in autonomous driving. Instead of pushing a finished, Tesla-FSD-style self-driving experience to the world, NVIDIA is expanding the foundation that lets automakers develop their own models. In January and March 2026, NVIDIA announced the Alpamayo family of open models and tools, simulation infrastructure, and DRIVE Hyperion, and said they have been adopted by BYD, Geely, Isuzu, Nissan, and others.
The idea is the same. If NVIDIA pushed an “NVIDIA FSD” onto every carmaker in the world, it would compete head-on with its own customers. Adoption would get harder, and accident liability and regulatory questions would become heavier. Instead, NVIDIA provides teacher models, training environments, simulation, and in-vehicle compute, so each automaker can keep its own character and still build its own model.
So instead of commoditizing the carmaker, NVIDIA leaves room for differentiation while still owning the foundation.
I think this is very clever. Because NVIDIA does not take away the customer’s leadership, it is easier to be adopted. But NVIDIA enters the entire development flow, so its chips and software end up being the natural choice. Here too, the pattern is coexisting on the surface while quietly expanding the toll road.
The same shape is spreading into physical AI
In robotics and physical AI, NVIDIA is again not just selling semiconductors. It is going for training data generation, world models, simulation, and robot foundation models all in one stack. NVIDIA positions Cosmos as a world foundation model for physical AI, and provides Omniverse as the platform for digital twins and robotics simulation.
Instead of trying to monopolize the finished product market, NVIDIA supplies the training environment, simulation, world-understanding model, and foundation model, and lets each company productize on top of that.
In both autonomous driving and robotics, the pattern is “leave application diversity to the customer, but make the foundation an NVIDIA layer”.
I think this is the real moat of NVIDIA, and it is still not fully priced in by the market today.
So what are the concerns?
There are real concerns, of course.
First, the effort to break the CUDA moat. The Information reports that TensorWave held a “Beyond CUDA” event last year, and renamed it to “Beyond Summit” this year out of consideration for sponsors and attendees.
The rename itself shows how big NVIDIA’s influence is. But it also shows that more and more players are seriously trying to escape CUDA at the compiler, kernel, and optimization layer.
Second, customer custom silicon. As mentioned, Broadcom and Google signed a long-term agreement to co-develop TPUs through 2031, and they will supply a 3.5GW TPU compute platform for Anthropic.
There are also reports that AMD signed a deal with Meta for up to $60 billion of AI chip supply over five years. Big customers are clearly diversifying away from “NVIDIA only”.
Third, geopolitics and power costs. The Middle East situation is expected to have a large impact on AI data center economics through energy prices.
AI factories are heavy power consumers, so an energy shock could delay customer investment decisions (Reuters).
Fourth, memory and component constraints. HBM price increases are already feeding into AI server costs, and rising server memory prices are starting to squeeze data center budgets.
There are also ongoing concerns about HBM4 supply delay risk for the Rubin generation.
NVIDIA has strong purchasing power, but the supply chain can still become a bottleneck.
I still think NVIDIA looks promising
The reason I still see NVIDIA positively is that this is not just “a company selling high-performance GPUs”. They are designing the compute foundation of the AI era itself, and investing across everything around it. And that expansion is not reckless investment that sacrifices profit. A 71.1% full-year and 75.0% Q4 gross margin, an extraordinary 73% CFROI, and continued high growth — they are building a virtuous cycle of using high profits to invest aggressively and produce even higher returns.
The stock is still more than 10% below the all-time high, and it has been stuck in a dull range for the past half year. I understand why the market is putting a lot of weight on competition, geopolitics, and the capex cycle.
Even so, NVIDIA is
- keeping an extremely high return on investment
- still growing fast
- not playing defense, but continuing to invest aggressively
- and expanding its economic zone by coexisting with others rather than fighting them head-on
That is, in my view, a one-of-a-kind company.
I think this is where the gap with the current market valuation lies. As long as high margin, high growth, and a coexistence strategy all hold at the same time, I believe NVIDIA is still undervalued, and there is room to grow further.
Join the conversation on LinkedIn — share your thoughts and comments.