Source: Smart Investor
After last week's GTC 2026 conference, Jensen Huang participated in at least four consecutive interviews.
Three of these interviews were selected for detailed review. The interviewers were different, and the questions chosen varied significantly.
That said, Jensen Huang consistently demonstrates an ability to bring very different questions back to a central theme, clearly illustrating his comprehensive understanding of how NVIDIA's products and businesses address complex ecosystems.
For instance, he always emphasizes that NVIDIA is not merely building computing power but defining a set of survival principles that enable everything to be accelerated. He also skillfully addressed questions about why NVIDIA ventured into CPUs, how AI is helping NVIDIA enter numerous new industries, and why he believes the market potential is much larger than what Wall Street analysts estimate.
He also discussed the highly anticipated $20 billion acquisition of Groq at the end of last year, describing it as strategy rather than coincidence: a significant complement to NVIDIA's 'accelerated computing' vision.
Each interview had its highlights, but the All-In session stood out.
All four hosts have founder backgrounds, with deep insight into industry, investment, and policy, and their exchanges in each episode were quite engaging. With Jensen Huang seated in the middle, there was an amusing sense of him being strategically 'surrounded.'

Almost all the key issues currently concerning the market were addressed in this dialogue.
Can AI’s revenue curve keep pace with its capability curve? Will agents destroy the software industry? How strong is China’s competitiveness in models, robotics, and supply chains? Could conflicts in the Middle East, supply chain security, and U.S. policies alter the trajectory of this race?
Jensen Huang’s responses diverged somewhat from prevailing market views.
His optimism about the commercialization of AI far exceeds that of the market.
In Jensen Huang's view, when AI is merely 'answering questions,' its value primarily remains at the informational level; however, once intelligent agents are introduced and AI begins to utilize tools and execute tasks, its acceleration of industries such as software, industry, biology, and robotics will transition from isolated breakthroughs to systematic diffusion.
Precisely because of this, Jensen Huang repeatedly emphasized that what the market underestimates is not a particular generation of chips, but the amplification effect after intelligent agents re-mobilize the entire industrial chain.
This also directly points to another popular but dissenting view: many people believe that intelligent agents will completely destroy the enterprise software industry.
However, Jensen Huang holds the opposite view. He believes that those mature software tools, whether databases, design software, or various professional platforms, will not lose value due to the emergence of intelligent agents; instead, they will become more important than ever as they are utilized more frequently and on a larger scale.
For Chinese investors, one aspect of these interviews that cannot be overlooked is his evaluation of China.
Jensen Huang not only explicitly praised Chinese models such as DeepSeek, Kimi, and Qwen, but also acknowledged that China's contributions in open-source projects, talent reserves, and original architecture carry far more weight than many in the U.S. market are willing to admit.
Especially when discussing robotics, he frankly stated that China holds a globally leading advantage in the most fundamental areas such as microelectronics, motors, rare earths, and magnets, and that the global robotics industry will largely depend on this ecosystem and supply chain. (It doesn’t sound like mere politeness.)
He also mentioned the geopolitical conflicts in the Middle East, and while Jensen Huang acknowledged that NVIDIA has many employees and their families based there, he stated that NVIDIA would not withdraw from the Middle East but would “absolutely remain rooted” in the region. In addressing geopolitical issues, what matters more is the long-term layout of the supply chain.
Three years ago, Jensen Huang made a statement: 'You won't lose your job to AI; you'll lose it to someone who knows how to use AI.' Looking back now, it was remarkably prescient. In this conversation, he reiterated: regardless of your educational background, you must ensure one thing: be extremely, extremely proficient in using AI.
Smart Investor has compiled this dialogue and shares it with everyone below.
01. Transitioning from the 'large language model processing' era to the 'intelligent agent processing' era.
Host: Your momentum this time is truly astonishing, and the event was organized on such a grand scale. People from almost every industry showed up, every technology company was present, and every AI company attended as well.
One of your most significant moves in the past year has been the acquisition of Groq.
Jensen Huang: Many of our strategies were openly discussed at GTC years ago, long before they were actually implemented.
Two and a half years ago, I first proposed the concept of an 'operating system for AI factories.' It's called Dynamo.
The term Dynamo originally referred to a machine invented by Siemens that converted mechanical power into electricity. It powered the factories of the previous industrial revolution, so I believe it is also a fitting name for the operating system of the 'new factory' in the next industrial revolution.
Among the core technologies within Dynamo is decoupled inference (also known as disaggregated inference).
The background is that today's inference pipelines have become extremely complex. Frankly, this might be one of the most complicated computational challenges in the world right now: it involves massive scale and mathematical computations of many forms and sizes.
Later, we thought, why not break down the inference process and assign different stages to different GPUs for execution? Some parts run on one set of GPUs, while the remaining portions are processed on another set of GPUs.
Going further, we realized that if inference can be decoupled, perhaps the entire computing system should be decoupled as well. This means allowing different types of heterogeneous computing units to handle the tasks that best suit them.
It was this line of thinking that later led us to Mellanox.
Look at the present: NVIDIA’s computing landscape is no longer confined to GPUs but extends across GPUs, CPUs, switches, scale-up switches, scale-out switches, and network processors.
Now, we must also incorporate Groq 3 LPUs to ensure the most suitable tasks run on the most appropriate chips.
Host: This is my strongest impression from being on-site. You are no longer just a GPU company but rather an AI factory company.
In the past, people discussed single GPUs, but now we see an increasingly decoupled and complex combination of systems, with even more options expected in the future.
You made a specific statement on stage that I believe everyone involved in high-value inference should listen to carefully.
You said that about a quarter of data center space should be allocated to the combination of Groq 3 LPUs and GPUs. In other words, pair approximately 25% Groq with Vera Rubin in data centers.
Could you elaborate on how the industry should interpret this? Specifically, the next-generation architecture that separates 'prefill and decode.' How do you think people will respond?
Jensen Huang: Let’s take a step back. When we integrate this system, the world has essentially transitioned from the era of 'large language model processing' to the era of 'agent processing.'
Once you run an agent, it is no longer simply outputting tokens. It needs to access working memory, long-term memory, invoke tools, and place significant pressure on the storage system.
One agent will also collaborate with another agent. Some agents are backed by large models, some by smaller models, some by diffusion models, and others by autoregressive models.
In other words, within a single data center today, there already exist various types of entirely different models. Our work on Vera Rubin is aimed at handling such an extremely diverse and complex set of workloads.
My feeling is that we were essentially a 'one-rack' company before, but now four new types of racks have suddenly been added.
In other words, NVIDIA's TAM, or Total Addressable Market, has expanded significantly from its original scale—roughly speaking, it may have grown by one-third, or even half, compared to before.
Within this newly expanded market space, a significant portion will be storage processors, namely BlueField; another part, I hope, will consist of Groq processors; there will also be CPUs and a large number of network processors.
This entire suite of components will ultimately work together to operate the computer in the AI revolution, which is the agent.
Host: What about looking into modern industrial and embedded applications? For example, if my daughter’s teddy bear wants to talk to her, what should go inside it? Would it be a custom ASIC?
Or, with the expansion of edge and embedded applications, will the overall TAM become even larger, with different tools emerging for different scenarios?
Jensen Huang: We believe that there are actually three computers involved in this question.
Taking a higher-level perspective, the first computer is the one used to train AI models and create AI itself.
The second computer is used to evaluate AI. Because different problems require different methods of validation. For instance, if you look around now, you'll see systems like robots and cars everywhere.
You need to first place these robots in a 'virtual gym' for evaluation, and this virtual world must represent the real world as realistically as possible, adhering to physical laws.
So, this is the second computer. We call it Omniverse.
The third computer is the one at the edge, which is the brain of the robot itself. It could be an autonomous vehicle, a robot, or even a very small device, like a teddy bear.
Among these edge devices, I think the most important category, and something we're currently advancing, is turning telecom base stations into part of the AI infrastructure. This way, the entire telecom industry, a two-trillion-dollar sector, will gradually transform into an extension of AI infrastructure.
Radio waves will become edge devices, factories will become edge devices, and warehouses will also become edge devices. In short, all sorts of real-world scenarios you see will be integrated into this system.
So essentially, it comes down to these three computers. Each of them will be indispensable in the future.
02. What It Means for an Inference Factory to Cost Tens of Billions of Dollars
Host: Jensen Huang, last year you were actually one of the earliest people to say that the scale of inference would grow a thousandfold.
Jensen Huang: Brad, those words of yours are kind of breaking my heart.
Host: So, is it time to change our tune now? Is it a million times? A billion times?
Jensen Huang: Yes, exactly.
Host: I think at the time, many people thought your claim was an exaggeration because the entire world was still focused on pre-training. But look now—reasoning has really exploded, and everyone is now constrained by reasoning capabilities.
This time, you’ve unveiled another 'reasoning factory,' and the next-generation factory appears to improve throughput by another tenfold. But if I listen to outside discussions, there are market voices saying that your reasoning factory will cost between 40 and 50 billion US dollars, while those custom ASICs, AMD, and other solutions might only cost 25 to 30 billion US dollars.
So the result is that you might lose market share.
Why don’t you just explain directly how you view this matter? How do you see market share? Is it really necessary for everyone to pay nearly twice the premium for your system?
Jensen Huang: The key point here is that you cannot equate the price of the factory with the cost of tokens.
It’s very possible that the 50-billion-dollar factory could ultimately produce the lowest-cost tokens. I can prove this. The reason is that our efficiency in producing tokens is astonishingly high, leading by orders of magnitude.
Think about it—out of this 50 billion, 20 billion goes to land, electricity, and data center infrastructure, which you’d have to spend regardless of who you choose. Beyond that, you’ll still need to buy storage, networking, CPUs, servers, and cooling systems.
So the real difference isn’t ‘50 billion versus 30 billion,’ but more like ‘50 billion versus 40 billion.’ Pick any number—it’s roughly on this scale.
The issue, however, is that this data center, valued at 50 billion, has a throughput ten times greater than the competitor's. In light of the overall return on the system, this price difference is negligible.
This is why I have consistently emphasized that for most chips, if they cannot keep pace with technological evolution or the rhythm at which we are advancing, then even if they were free, they would still not be cheap enough.
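As a back-of-envelope check on this argument, using the round numbers from the exchange (the figures are the speakers' own, and "relative throughput" is a simplification for illustration):

```python
# Relative capital cost per unit of token throughput (lower is better).
# Capex in billions of USD; throughput is relative to the competitor's system.
def cost_per_throughput(capex_billions, relative_throughput):
    return capex_billions / relative_throughput

ours = cost_per_throughput(capex_billions=50, relative_throughput=10)
theirs = cost_per_throughput(capex_billions=40, relative_throughput=1)

print(ours, theirs)   # 5.0 40.0 -> the pricier factory yields cheaper tokens
print(theirs / ours)  # 8.0 -> advantage despite the higher sticker price
```

This is the core of the claim: once the shared ~$20B of land, power, and infrastructure is netted out, a 10x throughput gap swamps a ~25% price gap.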
03. How CEOs Make Strategic Decisions
Host: I’d like to shift to a more general strategic question. You currently run the world’s highest-valued company, which may generate over $350 billion in revenue next year and potentially $200 billion in free cash flow, all while compounding at an astonishing rate.
How do you decide what to focus on? How do you gather information? How do you form judgments about the market? When do you make bold bets, when do you scale back, and when do you enter entirely new greenfield areas? Ultimately, how are these decisions made?
Jensen Huang: That is the job of the CEO.
Our responsibility is to define the vision and strategy. Of course, we listen to the brilliant computer scientists, technical experts, and outstanding colleagues across various teams within the company.
But ultimately, it is our role to shape that future.
One criterion is whether the task is incredibly difficult. If it is not challenging enough, we should probably distance ourselves from it.
Because if something is easy to accomplish, there will obviously be many competitors.
The second question is whether this is something that no one has ever accomplished before.
The third question is whether this is an extremely difficult task, but at the same time, it precisely leverages our company’s most unique capabilities.
Therefore, I must seek a convergence point: something unprecedented, extraordinarily challenging, yet perfectly capable of unleashing our 'superpowers.' Only such endeavors are worth our investment.
Of course, we also understand clearly that behind every truly great achievement lies immense pain and torment. No great invention succeeded effortlessly on the first attempt simply because it was easy.
Truly significant achievements have never come about that way.
Host: So if something is particularly hard to do and no one has ever succeeded at it, it likely means you will endure a great deal of pain and suffering. Given that, you might as well find some enjoyment in the process.
Why don’t you pick three or four longer-tail directions and tell us whether they could eventually take off in the long run? For example, space-based data centers, what you’re doing with ADAS and automobiles, or the biotech line.
Give everyone a sense of how you evaluate these more long-term businesses and determine when they will start trending upward significantly.
Jensen Huang: That's an excellent question.
Let me start with Physical AI. We strongly believe this will be a major category. As I mentioned earlier, we are now seeing three computing systems, along with the entire software platform built on top of them.
The significance of Physical AI lies in the fact that it represents the first opportunity for the technology industry to tap into a sector worth 50 trillion US dollars—a sector that, for a long time, has hardly been transformed by technology.
Therefore, we must invent the entire suite of technologies required to achieve this goal.
I have always believed that this is a decade-long journey. And we embarked on it ten years ago. Now, we are finally beginning to see it take off. For us, it has already become a business generating billions of dollars annually, approaching 10 billion US dollars per year.
Thus, it has already grown into a substantial business and continues to expand exponentially. That is the first point.
Now, let’s turn to digital biology. I genuinely believe that we are approaching the ChatGPT moment for digital biology.
We are on the verge of truly mastering how to model genes, proteins, and cells within computational frameworks. As for chemical molecules, we have already begun to understand them.
This means that we are approaching a phase where we can not only model the most fundamental building blocks of biology but also comprehend how they dynamically interact with one another.
I believe this breakthrough will occur within the next two to three years, or perhaps within five years at most.
If we look five years ahead, I am fully confident that digital biology will reach a true inflection point in the healthcare industry.
These directions, I believe, are all critically important. And you can already sense these changes happening around you. The same applies to agriculture. Agriculture is currently undergoing an inflection point, and I have no doubt about it.
04. Open-Source Tools, Ecosystem, and Policy-Driven Initiatives
Host: Your company largely grew out of enthusiasts, video gamers, and early graphics card users. Today, speaking to an audience of about ten thousand, you mentioned Claude, Claude Code, and OpenClaw. You said that agents have become a real revolution. There is now tremendous energy among those enthusiasts, and many innovative breakthroughs are landing on the desktop first.
I want to ask: What does this power emerging from developers and grassroots communities—doing open-source agents on the desktop and using open-source tools—mean to you? Where do you think it’s heading?
Jensen Huang: This is absolutely fantastic, truly amazing.
Let's take a step back. Over the past two years, we've actually witnessed three inflection points.
The first inflection point was generative dialogue. ChatGPT brought AI into the public eye for the first time. Suddenly, everyone realized that AI had reached this level.
But in reality, this technology had been there all along. It wasn’t until ChatGPT gave it a user interface, making it accessible to ordinary people, that generative AI truly took off.
And as everyone knows, generative AI produces tokens—not just for external use but also internally. Externally, it’s output; internally, it’s essentially thinking.
And when this internal thinking progresses further, it evolves into reasoning capabilities.
Later, models like o1 and o3 continued to push this wave forward.
Moving forward, AI began to establish a more solid grounding in reality. This means that AI not only answers questions but does so in a more substantiated and evidence-based manner, truly becoming useful.
It was also from that point on that we began to see OpenAI's revenue and its underlying economic model start to show a clear upward inflection.
The third inflection point initially occurred within industry circles, with the advent of Claude Code. It was the first truly useful agent system, which, in a sense, was nothing short of revolutionary.
However, Claude Code was only available to enterprises at the time, and most outsiders had no idea what it was.
What truly brought public awareness to the question of 'what can an AI agent actually do' was OpenClaw. For this reason, OpenClaw holds immense cultural significance.
The second reason for its importance is that, although it is open-source, it also successfully structured an entirely new computational model. In essence, it reinvented the very concept of a 'computer.'
It has a memory system. A scratchpad (temporary workspace) serves as its short-term memory, while the file system acts as its long-term memory.
Host: Were you referring to skills or scales just now?
Jensen Huang: Skills.
First, it has resources and knows how to manage them; it handles scheduling and task initiation; it can break down a problem and assign parts of it to be solved; it also has input-output subsystems, capable of both receiving inputs and delivering outputs; it can integrate with WhatsApp and provides an API that supports running various types of applications, known as skills.
And these elements are precisely the most fundamental components of a computer.
Therefore, what we have now is, in fact, humanity's first truly 'personal artificial intelligence computer.'
It is open-source and can run almost anywhere.
So this is no longer just a small tool. It is actually the blueprint for modern computing and the operating system for the next generation of computing. Eventually, it will be everywhere.
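The "computer" analogy above (scratchpad as short-term memory, file system as long-term memory, skills as installable applications) can be illustrated with a toy sketch. All class and method names here are invented for the example, not OpenClaw's actual API.

```python
# Toy model of the "agent as computer" analogy. Illustrative only.
import json
import tempfile
from pathlib import Path

class AgentComputer:
    def __init__(self, storage_dir: str):
        self.scratchpad = []              # short-term memory (working notes)
        self.storage = Path(storage_dir)  # long-term memory (the file system)
        self.skills = {}                  # "applications" the agent can run

    def install_skill(self, name, fn):
        self.skills[name] = fn

    def run(self, skill, *args):
        # Record the action in working memory, then dispatch to the skill.
        self.scratchpad.append(f"ran {skill}{args}")
        return self.skills[skill](*args)

    def persist(self, key):
        # Flush short-term memory into long-term storage.
        (self.storage / f"{key}.json").write_text(json.dumps(self.scratchpad))

agent = AgentComputer(tempfile.mkdtemp())
agent.install_skill("add", lambda a, b: a + b)
result = agent.run("add", 2, 3)
agent.persist("session1")
print(result)  # 5
```

The point of the analogy is that these pieces (memory, scheduling, I/O, installable skills) map one-to-one onto the components of a classical computer.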
Of course, there is one thing we must ensure: as soon as we enter the era of intelligent agent software, we need to guarantee that these agents are governed, constrained, and protected when accessing sensitive information, executing code, or communicating externally.
We must ensure that the entire system is secure, has clear boundaries, and includes a strategy where agents can access two of these capabilities simultaneously but cannot unlock all three of the most dangerous capabilities at once.
Thus, we have also invested significantly in governance.
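The guardrail described here, allowing an agent any two of the three dangerous capabilities but never all three at once, can be expressed as a simple policy check. This is an illustrative sketch; the capability names are assumptions, not a real governance API.

```python
# "Any two, never all three" capability policy. Names are illustrative.
DANGEROUS = {"read_sensitive_data", "execute_code", "communicate_externally"}

def grant_allowed(requested: set) -> bool:
    """Allow a capability set unless it includes all three dangerous ones."""
    return len(requested & DANGEROUS) < len(DANGEROUS)

print(grant_allowed({"read_sensitive_data", "execute_code"}))     # True
print(grant_allowed({"execute_code", "communicate_externally"}))  # True
print(grant_allowed(DANGEROUS))                                   # False
```

The design intuition: any single pair is recoverable, but an agent that can read secrets, run code, and talk to the outside world at the same time can exfiltrate, so that one combination is the hard boundary.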
Host: Will this paradigm shift render many artificial intelligence regulatory bills already passed by various states or currently under discussion quickly obsolete, or even entirely irrelevant?
Can you elaborate on how fast this paradigm shift is occurring? Because while political discussions around AI regulation are heating up, the technology itself is already rapidly rewriting the original regulatory premises.
Jensen Huang: This is precisely where we must continue to engage with policymakers.
We must always stay ahead of them, letting them know the true state of this technology today—what it is and what it is not.
It is not a living organism, alien life, or a conscious being. Ultimately, it is software—computer software—and nothing more.
Furthermore, we often hear the claim that we know almost nothing about it. This is not entirely accurate. It’s not that we know everything, but we certainly do not know nothing. In fact, we have already understood quite a lot about this technology.
Therefore, I believe that first, we must continue to help policymakers understand this technology; second, we must not allow 'doomsday narratives' or extreme perspectives to sway their judgment regarding this technology.
Of course, on the other hand, we must also acknowledge that this technology evolves too rapidly. Therefore, policies should not get too far ahead of the technology itself.
If regulations are established too early, the greatest risk is that other countries may already be adopting this technology while we, driven by anger, fear, or some form of paranoia, fail to fully leverage AI for our own industries and society.
Thus, what concerns me most right now is actually the insufficient speed at which AI adoption is progressing domestically in the United States.
05. Suggestions for Public Statements by Anthropic's Founder
Host: Let me follow up with another question. Suppose you were on Anthropic’s board during the controversy with the U.S. Department of Defense. To some extent, this incident continues along the same line as what we discussed earlier—that many people already struggle to comprehend AI, and on top of that, layers of aversion, fear, or distrust toward AI software are added.
If you had been in that position, would you have advised Dario and his team to handle things differently so that the eventual outcome and public perception might have turned out somewhat different?
Jensen Huang: Let me say this first—Anthropic's technology is truly remarkable. We ourselves are one of the key users of Anthropic’s technology.
I deeply admire their emphasis on safety, as well as their pursuit of technical excellence. The culture they have cultivated internally to develop these capabilities is also quite impressive.
Moreover, I agree with their intention to remind the public about the boundaries of this technology’s capabilities, which is fundamentally correct.
However, we must also understand that the world exists on a spectrum. It is good to caution others, but if it turns into intentional fear-mongering, then it becomes less constructive.
Because this technology is too important for us.
I believe predicting the future is fine, but our expressions should be more restrained, more cautious, and more humble. After all, we cannot see the future with absolute clarity.
If we make extremely extreme or catastrophic statements without solid evidence to back them up, the harm caused may be greater than many people realize.
Moreover, today, the influence of our technology companies has changed significantly compared to the past.
In the past, perhaps no one paid much attention to what we said, but that is no longer the case. Technology has become embedded in the social structure and is now an extremely critical part of national security.
Therefore, what we say does have real consequences.
We must be more cautious in our speech. We need to exercise more restraint, maintain greater balance, and think more comprehensively.
06. The Imaginable Space of a $10 Trillion Market
Host: You have already seen efficiency and productivity improvements within the company. But there has been ongoing debate outside about whether AI truly delivers a return on investment, right?
You and I were actually discussing this earlier this year. The biggest question is: Will revenue actually materialize? Can the revenue growth curve keep pace with the expansion curve of intelligent capabilities?
Then in February, Anthropic experienced its own 'Oppenheimer Moment' at the level of tens of billions of dollars.
Do you feel that we are now truly on that curve? In other words, will revenue start to accelerate and amplify alongside intelligent capabilities?
Jensen Huang: I would like to address this question from several perspectives.
Look at the people here today. Of course, Anthropic and OpenAI are present, but in reality, 99% of what is related to AI here is neither Anthropic nor OpenAI.
The reason lies in the fact that AI itself is a highly diverse world.
If we look at model categories, the most popular ones are still open-weight and open-source systems; only after that does Anthropic come into play, and the gap is not small.
This fact alone indicates that the AI companies present at today's event, in terms of both quantity and scale as well as breadth of distribution, have reached a very significant level.
Therefore, this point is crucial. We must first recognize that AI is far more than just a few companies.
But returning to your previous question, I would like to address two points.
First, when we transition from generative AI to reasoning-based AI, the computational demand increases approximately a hundredfold.
When we move from reasoning-based AI to agent-based systems, the computational requirements increase by roughly another hundredfold.
In other words, within just two years, computational demand has grown nearly ten thousand times.
At the same time, while people are certainly willing to pay for information, more often than not, what they are truly willing to pay for is the completion of tasks.
Exchanging a few words with a chatbot and receiving an answer is certainly helpful; having it assist with research is already impressive; but if it can genuinely complete tasks, then I am willing to pay earnestly.
And this is precisely where we stand now. Agent-based systems are no longer just answering questions—they are actually accomplishing tasks and are already helping our software engineers complete their work.
So consider these two factors together: on one side, computational demand has increased ten thousandfold; on the other, users' willingness to pay has grown approximately a hundredfold.
And we haven't even really begun large-scale expansion yet.
So I would say we are absolutely already at the million-fold magnitude now. And that, coincidentally, brings us back to the imagined space of a $10 trillion market.
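The "million-fold" figure is simply the two factors multiplied. Spelled out with the conversation's round numbers:

```python
# Generative -> reasoning: ~100x; reasoning -> agentic: ~100x.
compute_growth = 100 * 100          # ~10,000x compute demand in two years
pay_growth = 100                    # ~100x willingness to pay for completed tasks

print(compute_growth)               # 10000
print(compute_growth * pay_growth)  # 1000000
```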
07. In the Future, There Will Be a Hundred Agents Around Every Engineer
Host: Speaking of which, does your company currently have twenty to thirty thousand employees?
Jensen Huang: We currently have 43,000 employees.
Host: Among them, I guess about 38,000 might be engineers. Everyone is now lamenting how quickly token usage in their companies is growing. Some have even started asking, when joining a company, how many tokens the company will allocate to them, as this will directly determine whether they can become an efficient employee.
If we work backwards, has NVIDIA effectively allocated approximately $75,000 worth of tokens per engineer? In other words, have you already spent one or two billion dollars internally on purchasing tokens for the engineering team?
Jensen Huang: Let me give you a thought experiment.
Suppose you have a software engineer or an AI researcher, and you give him $500,000 a year. This kind of thing happens every day now.
Then by the end of the year, I would ask this person: How many tokens did you actually spend this year?
If he told me it only cost $5,000, I would go ballistic on the spot.
To put it more bluntly, if an engineer earning $500,000 a year fails to use tokens worth $250,000 in a year, I would be extremely alert.
It's like a chip designer coming to me and saying, 'I've decided to use only paper and pencil and won’t use CAD tools.' Fundamentally, there is no difference.
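Working backwards through the numbers in this exchange (all figures are the speakers' rough estimates, not reported financials):

```python
# The host's back-of-envelope: total annual token spend for the engineering team.
engineers = 38_000             # the host's guess at NVIDIA's engineer headcount
tokens_per_engineer = 75_000   # USD of tokens per engineer per year

total = engineers * tokens_per_engineer
print(f"${total / 1e9:.2f}B")  # ~ $2.85B in annual token spend

# Jensen's rule of thumb: token spend should be at least half of salary.
salary = 500_000
min_tokens = salary // 2
print(min_tokens)              # 250000
```

So the host's "one or two billion dollars" guess is, if anything, on the low side of his own per-engineer figure.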
Host: This really represents a paradigm shift.
This reminds me of the NBA. In the past, when people heard that LeBron James spent a million dollars a year maintaining his body, they thought it was excessive. But look, he’s still playing at the age of 41.
If these individuals are truly top-tier knowledge workers, then why shouldn't we equip them with ‘superhuman’ capabilities?
If this trend continues for another two or three years, to what extent will the productivity of a top employee at a company like NVIDIA increase? What will they be capable of achieving?
Jensen Huang: First, all the notions we once took for granted will disappear.
The idea that 'this is too difficult' will vanish.
The notion that 'this will take a long time' will also disappear.
The idea of 'needing to hire a lot of people' will also disappear.
It’s like after the last industrial revolution, no one looks at a building and says, 'Wow, this building is too heavy.'
Nor does anyone look at a mountain and say, 'This mountain is too big to move.'
All limitations such as 'too big,' 'too heavy,' or 'too long' will gradually exit your thinking framework.
In the end, the only thing that truly remains is creativity.
Host: Exactly. In the end, it comes down to what you can come up with.
Jensen Huang: Yes, exactly. The next question becomes: How are you going to work with these intelligent agents?
After all, this is just a shift in the way we program.
In the past, we wrote code; in the future, we will be writing ideas, architectures, and specifications.
We will organize teams and help these intelligent agents understand how to evaluate results, what constitutes good or bad, and what qualifies as an outstanding outcome.
We will iterate with them and conduct brainstorming sessions together.
What truly matters will transform into these capabilities. And I believe that in the future, there will be a hundred intelligent agents around every engineer.
08. Agents Will Not Destroy the Software Industry but Strengthen It
Host: Returning to the automated research system itself. What is your take on this? A weekend, 600 lines of code, able to run locally, and achieving such results across so many different types of datasets. What does this really indicate? Does it also mean that we are actually still at a very, very early stage in terms of algorithm optimization and hardware optimization?
Jensen Huang: The fundamental reason why OpenClaw is so astonishing lies in two aspects.
First, it happened to emerge at a 'convergence point.' Its breakthrough coincided almost perfectly with that of large language models.
The timing was so precise that it was almost unbelievable.
Many things might not have occurred to Peter (the founder of OpenClaw) if models like Claude, GPT, and ChatGPT had not reached today’s level of advancement.
Second, it has unlocked a new capability—enabling these models to learn how to invoke tools and utilize those that we have accumulated over many years. For instance, web browsers and Excel spreadsheets.
Furthermore, in the field of chip design, there are Synopsys, Cadence, Omniverse, Blender, and Autodesk. In the future, not only will these tools continue to be used, but they will also be utilized more frequently.
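The tool-invocation pattern described above can be sketched as a minimal dispatch loop: the model emits a structured call naming a tool, and a thin runtime routes it to the matching function. The tool registry, tool names, and the shape of the model's "decision" below are illustrative assumptions, not any specific vendor's API.

```python
# Minimal sketch of agent tool-calling. The registry and the structured
# call format are hypothetical stand-ins, not a real vendor API.

def spreadsheet_total(values):
    """Stand-in for a spreadsheet tool: sum a column of numbers."""
    return sum(values)

def search_web(query):
    """Stand-in for a browser tool: return a canned result string."""
    return f"results for: {query}"

TOOLS = {
    "spreadsheet.total": spreadsheet_total,
    "web.search": search_web,
}

def dispatch(tool_call):
    """Route a model-emitted tool call to the matching function."""
    fn = TOOLS[tool_call["name"]]
    return fn(*tool_call["args"])

# A model that has learned tool use emits structured calls like this:
call = {"name": "spreadsheet.total", "args": [[120, 80, 50]]}
print(dispatch(call))  # -> 250
```

The point of the pattern is that the model never reimplements the spreadsheet or the browser; it only learns when to invoke the mature tools that already exist.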
Some people today claim that the enterprise software industry will ultimately be destroyed by AI. However, I hold a completely different view.
In the past, the essence of the enterprise software business was selling 'per seat.' At its core, it served individuals sitting at their workstations.
However, the situation is about to change completely. In the future, these tools will not only be used by humans but also by a much larger number of intelligent agents. They will operate SQL, call vector databases, use Blender, and even edit in Photoshop.
Why do I believe these software tools will not be replaced?
First, these tools are already highly mature and powerful.
Second, more importantly, they serve as a bridge between us and the final output. Even if AI completes the work, the results must still return to a place where I can see, modify, and control them.
These tools happen to be the familiar and truly controllable environments for me.
Therefore, I hope all outcomes eventually return to Synopsys and Cadence.
Because only there can I truly inspect, verify, and confirm whether the results are correct.
You could interpret this as AI potentially changing how enterprise software is used, but it may not necessarily destroy these tools. On the contrary, it could make them even more important than before.
09. Almost every venture company we invest in today starts with an open-source approach
Host: I would like to ask you another question about open source based on this topic.
Now we have these closed-source models, which are very powerful. We also have open-weight models, and many Chinese models are performing exceptionally well—really exceptionally well.
Where do you think open source will ultimately lead? Do you think that not only will the architecture become increasingly decentralized, but computational power itself will also become more decentralized, thereby supporting open weights and enabling a truly comprehensive open-source path, allowing AI to be widely used by everyone in the end?
Jensen Huang: I have always believed that fundamentally, we need to treat models both as first-class products—proprietary products—and as open-source models.
It’s not an either-or situation, not A or B, but rather that we need both A and B. I have no doubt about this.
The reason is that a model is first and foremost a technology, not a final product. It is a technology and not just a standalone service.
For the vast majority of users, at the horizontal layer of general intelligence, I actually don't want to fine-tune a model myself. I'm quite happy to continue using ChatGPT directly, and I also really like using Claude, Gemini, and xAI's products.
They each have different 'personalities,' as you know, which to some extent depends on my mood at the time and also on what problem I’m trying to solve.
Sometimes, I might prefer to work on X; other times, I might prefer to work on ChatGPT.
Therefore, this segment of the market will develop well on its own and continue to thrive.
At the same time, there are many industries whose domain knowledge and expertise must be genuinely consolidated and captured in a way that they can control.
And this can only come from open models.
We are also heavily investing in the open model ecosystem. It is already very close to the cutting edge. Frankly speaking, even if it reaches the very frontier, I still believe that world-class models existing in the form of products and services will continue to thrive.
Almost every venture we invest in starts with an open-source approach and then gradually integrates proprietary models.
The most beautiful aspect is that as long as you have a sufficiently good router connecting the two, from day one and every day thereafter, you can access the best models in the world.
At the same time, it gives you ample time to reduce costs, fine-tune, and specialize.
Thus, almost every move you make can possess world-class capabilities.
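The "router" idea above can be sketched as a simple policy: requests that match your specialized domain go to a fine-tuned open-weight model you control, and everything else goes to a frontier model. The backend names and the keyword-matching rule here are hypothetical placeholders for whatever routing signal a real system would use.

```python
# Minimal sketch of a model router: specialized requests go to an
# in-house fine-tuned model, everything else to a general frontier
# model. Names and the keyword rule are hypothetical.

SPECIALIZED_DOMAINS = {"netlist", "timing", "floorplan"}  # e.g. chip design

def route(prompt: str) -> str:
    """Pick a backend for a prompt; returns the backend's name."""
    words = set(prompt.lower().split())
    if words & SPECIALIZED_DOMAINS:
        return "in-house-chip-model"   # cheaper, specialized, controlled
    return "frontier-api-model"        # best general capability

print(route("check timing closure on this netlist"))  # -> in-house-chip-model
print(route("draft a launch announcement"))           # -> frontier-api-model
```

With a router like this in place from day one, the product always has access to the best general model while the specialized path is gradually made cheaper and better.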
10. NVIDIA has lost a significant share of the world's second-largest market
Host: Let me ask from a different perspective. No one wants the U.S. to win the global AI race more than you, right? Give it a grade. Where are we in terms of the diffusion of U.S. AI technology globally? Are we at A, B, or C? What are we doing well, and where do we fall short?
Jensen Huang: First, President Trump hopes that the U.S. industry will lead, that the U.S. technology sector will lead, that the U.S. technology sector will win, and also that U.S. technology will spread worldwide. He wants the United States to become the wealthiest country in the world. These are all things he wishes to achieve.
But as we speak, NVIDIA has already lost its original 95% market share in the world’s second-largest market, which is now at 0%.
And President Trump hopes that we can return there.
The first thing right now is to get licenses back for those companies to whom we can sell our products. Many companies have applied, and we have submitted license applications on their behalf. Moreover, Lutnick has approved some of these licenses.
Following this, we have also informed Chinese companies about the matter, many of which have placed procurement orders with us. Therefore, we are now working to restart the supply chain and prepare to begin shipping.
However, when viewed from a higher level, I believe we must first acknowledge one thing.
When we cannot access small motors and rare earth minerals, our national security is actually being weakened.
When we cannot control our own communication networks, our national security is being weakened.
When we cannot provide sustainable energy for this country, our national security is similarly being weakened.
Each of these industries represents what I do not want the AI industry to become.
If we look ahead and ask: What would it look like for the U.S. technology industry and the U.S. AI industry to truly lead the world?
We all actually know that there cannot be a situation where the whole world ends up with only one AI model. This is simply unreasonable and unrealistic.
However, we can fully imagine another scenario: The entire U.S. technology stack—from chips, computing systems to platforms—being widely adopted around the world. As for whether they use this technology stack to build their own AI, use public AI, or private AI, that's all fine; they can also build their own applications within their societies based on this technology stack.
I would very much hope that the U.S. technology stack could account for 90% of the global market.
11. Still 100% committed to the Middle East
Host: How closely are you following these global conflict situations? Do these issues concern you? And how much energy are you devoting to them?
Jensen Huang: First of all, the Middle East is not just a region on the map for us. We have 6,000 families there.
NVIDIA has many Iranian employees whose families are still in Iran. Therefore, we have many, many families tied to that region.
Right now, they are very anxious, very worried, and very afraid. We have been thinking about them and keeping a close eye on their situation. They will receive our full support.
Some have also asked me whether, under such circumstances, we will continue to stay in Israel.
My response is that we are one hundred percent committed to staying in Israel. We fully stand behind our employees and their families there. We are deeply rooted in the Middle East.
Some have also asked me whether, after these events in the Middle East, we still consider it a region worth expanding artificial intelligence into.
In my view, this war has occurred for specific reasons. And I also believe that after the war ends, the Middle East will be more stable than before the conflict.
Therefore, if we believed it was worth investing in before the war, then after the war, we should certainly give it even more serious consideration. My stance on this matter is one of full commitment.
As for supply chain issues, I believe we need to do three things.
First, we must accelerate the re-industrialization of the United States as quickly as possible. Whether it's semiconductor fabrication plants, computer manufacturing facilities, or AI factories, these initiatives must be expedited.
Host: How are we doing right now?
Jensen Huang: Very well.
Thanks to the strategic support from Taiwan’s supply chain, we have been able to make remarkable progress in Arizona, Texas, and California at an astonishing pace.
Second, we must further diversify the manufacturing supply chain. Whether it's Korea, Japan, or Europe, we should promote supply chain diversification to make it more resilient.
Third, as we enhance both diversification and resilience, we must exercise restraint. Avoid applying unnecessary pressure. We need to be patient and more prudent.
Host: Could helium become a problem? Many reports suggest helium might face issues.
Jensen Huang: I think helium could potentially become a problem, but it's also possible that there is already some buffer space built into the entire supply chain.
Such things usually have a certain amount of buffer. So for now, I would say it deserves attention, but it hasn't reached a level of significant concern for me.
12. Open Solutions for Autonomous Driving
Host: You've made significant progress in autonomous driving, and there are many new partners this time, including BYD. I want to ask, what’s your overall strategy here? Because it seems like you have a deep stack, with some areas being competitive and others collaborative.
Jensen Huang: Taking a step back, we believe that anything that moves will eventually become automated—either fully or partially—at some point in the future.
Second, we don’t intend to manufacture autonomous vehicles ourselves, but we want to enable every automaker in the world to have the capability to build autonomous vehicles.
So, we've developed all three computers: the training computer, the simulation computer for validation and evaluation, and the onboard computer.
We’ve also developed the world's safest autonomous driving operating system.
At the same time, we have also developed the first autonomous driving system with reasoning capabilities. It can break down a complex scenario into several simpler ones and then navigate through them step by step, just like the reasoning systems we see now.
This reasoning system is called Halos. It has enabled us to achieve astonishing results.
Our approach is to make the foundational capabilities open. We optimize deeply in the vertical direction and continuously innovate horizontally, then let each customer decide: do you only want to buy one of our computers?
Like Musk and Tesla, do you only want to buy our training system? Or do you want to buy both the training system and the simulation system? Or would you like us to integrate all three systems, or even install the onboard computer into your vehicle?
Therefore, our attitude has always been very simple: we want to solve problems. We are not determined to impose a specific solution. We are happy however you choose to cooperate with us.
13. On the Issue of Clients Continuously Becoming Competitors
Host: You have indeed built such a platform that allows a thousand flowers to bloom. But at the same time, some 'flowers' are starting to take root deeper, re-entering the lower stack, and wanting to compete with you in certain areas.
Google has TPU, Amazon has Inferentia and Trainium. Now almost every company is developing its own version, more or less with the idea of 'I can outdo NVIDIA,' even though they are simultaneously major clients of yours.
How do you view this relationship? In the long run, how will these developments evolve? In your view, what role do they play in the overall strategic landscape?
Jensen Huang: This is an excellent question.
First, we are the only company in the world that is truly an AI company. We are working on cutting-edge foundational models ourselves and advancing simultaneously across many different fields. We have built every layer and every stack.
We are also the only AI company in the world that collaborates with every other AI company globally.
They never tell me what they are doing, but I always lay out exactly what I am doing right in front of them.
Therefore, our confidence stems from this.
First, we are very willing to compete on the question of 'what is the best technology.' And as long as we can maintain this pace, I believe that for them, buying NVIDIA remains one of the most cost-effective choices.
I have very strong confidence in this.
Second, we are the only architecture that can be deployed on every cloud. This gives us some very fundamental advantages.
We are also the only architecture that can be taken out of the cloud and deployed locally, in vehicles, in any region, or even in space.
A significant portion of our market is something many people do not realize: approximately 40% of the business requires the full CUDA stack. If you cannot build an entire AI factory, customers simply do not know how to use your product.
This is because they are not looking to make chips or simply buy chips; what they truly want to build is AI infrastructure. Therefore, they need you to come in with a full-stack capability.
And we happen to have the entire stack.
So if you look at the situation today, you'll find an interesting fact: NVIDIA is actually still gaining market share.
Host: So what you're saying is that these companies first try it on their own, and eventually realize, 'Oh my gosh, this is so complicated,' and then they come back. And as a result, your market share continues to grow. Is that the logic?
Jensen Huang: There are several reasons for the increase in market share.
First, we are moving faster. We are also making more and more people realize that the real issue has never been about making a chip, but about building a system.
And building a system is extremely difficult. So their scale of cooperation with us is expanding.
For example, AWS. I remember they just announced yesterday that they plan to purchase one million chips in the coming years. That's already an incredibly large number. And this doesn't even include what they've already purchased before.
So of course, we are very happy to do this business.
But more importantly, the reason our market share has been growing over the past few years is not only because of these cloud vendors, but also because Anthropic is moving towards NVIDIA, Meta’s Llama ecosystem is shifting to NVIDIA, and the growth of open models has been astonishing—all running on NVIDIA.
Therefore, the reason our market share is growing is because the number of models is increasing.
At the same time, much of the growth is already happening outside the cloud. It is expanding in regional markets, enterprise markets, industry-specific markets, and at the edge. And if you just focus on building an ASIC, it's actually very difficult to penetrate this entire segment.
14. Analysts fail to comprehend the scale and breadth of AI
Host: Following up on that question, without going into specific numbers, it seems the analysts don't really buy into your narrative.
You mentioned that computing power might grow a millionfold, right? But if you look at consensus market expectations, they project about 30% growth for next year, 20% for the following year, and by 2029 — which should be a peak year — only 7%.
If you apply their growth rates to your TAM (Total Addressable Market), the implicit conclusion is that your market share will decline significantly. So, when you look at your future order book, do you see anything that could justify such skepticism?
Jensen Huang: First of all, they don't understand the scale of AI, nor do they grasp its breadth.
When many people think of AI, they only picture the top five tech giants or just a few leading companies. But AI goes far beyond that.
Host: There’s also a typical perspective that likes to bring up the 'Law of Large Numbers' to challenge you. When they return to the investment bank’s risk committee, they need to present a model that looks 'reasonable.' They simply cannot fathom a $5 trillion company growing to $15 trillion. For them, that exceeds the limits of comprehension, so they cap it at $7 trillion. In many cases, it's just this kind of ingrained thinking.
Jensen Huang: Exactly. Because first, you have to redefine what you're actually doing.
Just the other day, someone asked me, 'Jensen Huang, how could NVIDIA possibly surpass Intel in the server market?'
The issue is that the entire data center CPU market was only about $25 billion annually in the past. But now, frankly, while we are sitting here talking, our business has already reached nearly that scale.
But this is not a performance forecast.
The second point is that how big a company can grow depends on what exactly you are doing.
NVIDIA is not selling chips. And now, merely having chips can no longer solve the problems of AI infrastructure. It is far too complex.
The third point is that most people’s understanding of AI remains too narrow. What they see and hear is just the most prominent small fraction.
OpenAI is remarkable and will become extremely large. Anthropic is also remarkable and will become extremely large as well.
But AI will be much, much larger than these. And the broader world we serve is precisely that vast expanse.
15. On Space Data Centers and Healthcare
Host: Tell us about space data centers. How should ordinary people understand this? What is the difference compared to the large-scale data center constructions we see on the ground?
Jensen Huang: First, we must focus on what is happening on the ground because we are currently living on Earth. That is the first point.
Second, we should also prepare for entering space. After all, there are abundant energy resources in space.
Of course, the biggest challenge is heat dissipation. In space, you cannot rely on conduction or convection as you would on Earth; it mostly depends on radiative cooling.
Radiative cooling implies that you need a very large surface area.
This is not an entirely unsolvable problem. After all, space does offer a lot of room.
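The "very large surface area" point follows directly from the Stefan-Boltzmann law: in vacuum, a radiator can only shed heat as power proportional to its area and the fourth power of its temperature. A back-of-envelope sketch, where the emissivity and radiator temperature are assumed illustrative values and absorbed sunlight is ignored:

```python
# Back-of-envelope: radiator area needed to reject heat in space via
# radiation alone (Stefan-Boltzmann law). Emissivity and radiator
# temperature are assumed values; absorbed sunlight is ignored.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w, temp_k=300.0, emissivity=0.9):
    """Area in m^2 needed to radiate `power_w` watts at `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Even a modest 1 MW compute load needs on the order of a few
# thousand square meters of radiator at room temperature:
print(round(radiator_area(1e6)))  # -> 2419
```

Scaling this to gigawatt-class AI factories makes clear why radiator area, rather than available volume, is the binding constraint for data centers in orbit.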
Nevertheless, the costs remain a significant barrier at present.
However, we have already begun exploration. In fact, we are already operating in space. Our systems have been radiation-hardened, and many satellites worldwide are already running CUDA.
They perform tasks such as imaging, image processing, and AI-based imaging. Many of these tasks should ideally be completed directly in space rather than transmitting all raw data back to Earth for processing.
Thus, there is already a substantial amount of work that logically should be conducted in space.
Meanwhile, we will continue to explore what the architecture of a data center in space should look like if it were to be deployed there.
This will take many years, but that's fine—I have plenty of time.
Host: I would like to follow up on healthcare. How exactly do you take care of yourself? You look great. Is this an advantage unique to Asians?
Jensen Huang: Squats, push-ups, and wall sits.
Host: Alright, that sounds reasonable.
But back to the main topic, where is the overall development in healthcare heading? What progress have we made so far?
I think we all know that large language models like ChatGPT are already performing more consistently and reliably in initial consultations.
So, what else needs to happen for AI to truly break through regulatory and systemic constraints and make a substantial impact on the entire healthcare system?
Jensen Huang: We have several directions in the field of healthcare.
The first direction is AI biology, which involves using AI to understand, model, and predict the behavior of biological systems. This plays a crucial role in drug discovery.
The second direction is intelligent agents, which assist in diagnosis, decision-making, and the entire medical process. Open Evidence is an excellent example, and Hippocratic is another good one. We really enjoy collaborating with these companies.
I genuinely believe this is an area where agent technology will completely transform how we interact with doctors and the healthcare system.
The third direction is Physical AI.
Where the first direction uses AI to understand and predict the laws of living systems, this third direction enables AI to truly comprehend the rules of the physical world, and it will be applied in robotic surgery.
There have already been significant advancements in this area.
In the future, every piece of equipment you encounter in a hospital—whether it’s ultrasound, CT, or other devices—will possess agent-like intelligence capabilities.
In a sense, a secure version of OpenClaw will eventually be integrated into every device.
Thus, from many perspectives, these instruments will interact with patients, nurses, and doctors in significantly innovative ways in the future.
16. The global robotics industry will deeply rely on China's supply chain
Host: How far are we from robots truly entering our daily lives? For instance, robot chefs, robot nurses, robot housekeepers, or more broadly, humanoid robots—when will they achieve large-scale adoption in the real world?
Especially in China, they appear to perform as well as we do, and in some areas, perhaps even faster.
Jensen Huang: To a large extent, this industry was pioneered by us. The U.S. was the first to establish it. You could also say that we entered too early.
Later, we became somewhat exhausted and could no longer endure the wait. This was because the key enabling technology would not truly emerge for another five years.
Host: In other words, the 'brain.'
Jensen Huang: Yes, the 'brain.' We were just slightly ahead of it, and as a result, we ended up exhausting our own patience first.
But now, it has truly arrived.
Therefore, the question now is no longer 'whether it will arrive,' but rather how long it will take to transition from the current stage—where feasibility and functionality have already been proven and are quite robust—to a product that can be widely accepted by the market and achieve scalable implementation.
Typically, progressing from this stage to a fully mature product takes no more than two or three iteration cycles.
And two or three cycles roughly equate to three to five years. Therefore, within three to five years, we will see robots in many places.
As for China, I believe it is a very strong competitor. The reason is simple: its microelectronics, electric motors, rare earths, and magnets are precisely the most fundamental components of robotics, and in these areas, they are the strongest globally.
Thus, from many perspectives, our robotics industry actually depends deeply on their ecosystem and supply chain. Not only us, but the global robotics industry will rely significantly on it.
Therefore, I believe you will soon witness this industry begin to accelerate noticeably.
Host: Then how far will it ultimately go? Musk seems to believe that in the future, there will be one robot for every person. 7 billion people will have 7 billion robots, and 8 billion people will have 8 billion robots.
Jensen Huang: I actually hope it will be even more than that.
First, a large number of robots will stay in factories, working around the clock. There will also be many factory robots, which may not move around everywhere but will perform various automated tasks.
In the end, almost everything will become robotized.
Host: To me, the most important aspect of robots is that they will open up opportunities for upward mobility for every ordinary person.
When everyone had cars, they were suddenly able to do many jobs that they couldn't do before. In the future, when everyone has a robot, their robot will be able to do many things for them. They can open an Etsy store or a Shopify store, using robots to manufacture, execute, and complete tasks that they originally couldn’t do alone.
I think robots may eventually become the strongest 'universal prosperity unlocker' in human history.
Jensen Huang: I completely agree.
Moreover, from the simplest arithmetic perspective, we are already short of millions of workers today. We currently have an urgent need for robots. Many companies could grow even faster if they had more labor.
Additionally, there are many particularly interesting extensions within the scenarios you just described.
For example, with the advent of robots, we will possess a kind of 'virtual presence' capability. While I am on a business trip, I can directly connect to the robot at home and remotely control it to move around the house.
Host: Walking the dog.
Jensen Huang: Walking the dog might still be a bit challenging, but at least you can take a tour of your home, check the situation, chat with the dog, or talk to the kids.
Looking further ahead, in the future, humans may even send their 'presence' out at near-light speed. By then, of course, we wouldn’t send ourselves first; instead, we would send robots to scout for us.
Host: And then upload my AI over there.
Jensen Huang: This is almost inevitable.
Host: This would essentially open up the Moon and Mars. They would become real colonizable targets, which implies virtually unlimited resources. Especially transporting materials back from the Moon, where theoretically, energy consumption could approach nearly zero since you can use solar power for acceleration.
In other words, in the future, it is entirely possible that factories could be built on the Moon to manufacture everything Earth needs. And robots would be the key to unlocking all of this.
Jensen Huang: Exactly. At that point, distance will no longer matter.
17. Advice for entrepreneurs: Master your vertical field thoroughly
Host: The more revenue models and agents generate, the more we can continue to invest in infrastructure; the better the infrastructure, the further the capabilities of models and agents will be amplified.
Dario recently mentioned on a podcast that by 2027 or 2028, model companies and agent companies will achieve hundreds of billions of dollars in revenue, and by 2030, he believes this figure will reach one trillion dollars. And this is not even counting infrastructure revenue, only the revenue from the AI application layer itself.
Jensen Huang: I think he has been quite conservative in his estimates. I believe Dario and Anthropic will perform far better than that.
Host: Wow. From 30 billion to one trillion?
Jensen Huang: Yes. And the reason is that he may not have fully factored in another layer of power.
I believe that in the future, every enterprise software company will become a distributor, or even a value-added distributor, for model companies like Anthropic and OpenAI. They will repackage their own services on top of these models and then sell them to customers.
Host: Yes, this will lead to an almost exponential expansion.
Jensen Huang: Exactly. Their market reach will be greatly amplified.
Host: Then, in such a world, what will be the ultimate moat that remains?
Frankly speaking, some of your moats are already so strong that they are almost insurmountable. Perhaps the most underestimated one is CUDA, which is an astonishing strategic advantage.
But in the future, if a model can create something particularly powerful, the next-generation model might overturn it. So how do you view these application-layer companies? Where exactly lies their competitive moat? What will they rely on to differentiate themselves?
Jensen Huang: Deep specialization.
I believe that in the future, there will certainly be some general-purpose models integrated into the intelligent systems of these software companies. Many of them will be closed-source models on the cloud, but at the same time, there will also be many highly specialized sub-agents trained by these companies themselves.
Therefore, my advice to entrepreneurs is quite simple: thoroughly master your vertical field, understand it better than anyone else, and delve deeper into it.
Then wait for these tools to catch up. Because they are catching up with you. Once they do, you'll be able to fully inject your knowledge, your judgment, and your marginal advantage.
Moreover, the earlier you connect your intelligent agent to customers, the sooner that flywheel will start spinning. Once it starts spinning, your intelligent agent will grow stronger at an incredibly rapid pace.
Host: This actually runs counter to the logic we use today when developing software.
Today, we usually first develop a piece of software, then think about which parts can be generalized, try to sell it horizontally to more users, and finally customize around it to lock clients into this system.
Jensen Huang: Yes, absolutely correct. We first create a horizontal platform.
But pay attention: in reality, there have always been many GSIs (global systems integrators) and consulting firms, which are essentially experts in various industries. They take your horizontal platform and apply deep customization to each industry.
Moreover, it can be said that this entire 'customized' market may be five to six times larger than your general-purpose platform itself.
Therefore, I believe these platform companies actually have the opportunity in the future to grow into that 'expert layer,' becoming experts in their vertical fields and true owners of industry knowledge.
18. Become an expert who truly knows how to use AI
Host: I really have to give you credit here. I remember three years ago, you said something: you won't lose your job to AI; you'll lose your job to someone who knows how to use AI.
Looking back now, our entire conversation has essentially revolved around this judgment. Agents are turning people into 'superhumans,' business opportunities are expanding, and entrepreneurial opportunities are also growing. You did indeed foresee this early on and very clearly.
So, have you changed your perspective now?
Jensen Huang: I agree, jobs will change.
For example, many of today's drivers may not disappear in the future. On the contrary, I think many of them will still sit in the car, but they won't be responsible for driving anymore; instead, they will sit behind the steering wheel while the car drives itself.
Because ultimately, what a driver does is never just 'driving.'
These drivers are essentially there to assist you. They are your assistants. They help you with your luggage and handle various small but important tasks.
Therefore, I would not be surprised at all if, in the future, drivers actually become your 'mobility assistants,' taking care of many other tasks during a journey.
Host: Exactly. For example, they could check you into a hotel while the car drives itself.
Jensen Huang: Precisely. Look, after the introduction of autopilot systems in airplanes, pilots were not removed from the cockpit; instead, the aviation industry required even more pilots.
Although, most of the time today, airplanes are flying on autopilot.
Similarly, when cars drive autonomously in the future, those drivers will not sit idle either. They will handle many other tasks on their phones, such as coordinating arrangements and running errands for you, and continue earning income by doing so.
Ultimately, the entire pie is actually growing, rather than simply cutting someone out.
So, there is one thing I am very certain about: every job will change, and some jobs will indeed disappear. However, at the same time, we also know that many, many new jobs will be created.
If I were to give one piece of advice to young people who have just graduated from school and are feeling anxious because of AI, it would be this: become an expert in truly using AI.
We all want our employees to be experts in using AI, and this is actually not as simple as it sounds.
You need to know how to set requirements, but you cannot make them too rigid. You must leave enough room for AI to innovate and generate, while still guiding it toward the results you truly desire.
There is craftsmanship involved in this.
Host: When you were at Stanford, you once said something famous: 'I wish you suffering and hardship.' Do you remember that?
Jensen Huang: I do.
Host: So what advice would you give to young people now? What should they study?
Especially those about to graduate from high school who haven’t yet gone to college. This generation is truly the first AI-native generation. They haven’t decided whether to attend university or what to study. If it were you, how would you guide them?
Jensen Huang: I still believe a solid foundation in science, strong mathematical skills, and language proficiency are all extremely important.
As you know, language has now become the programming language of AI—it could be considered the ultimate programming language.
So in a sense, people who study English or other languages might end up among the most successful of all.
So I would say, regardless of the education you ultimately pursue, make sure of one thing: you need to become extremely, extremely proficient in using AI.
Additionally, regarding work, I’d like to add one more point, and I hope everyone takes it to heart.
At the onset of the deep learning revolution, a top computer scientist whom I hold in extremely high regard (namely Geoffrey Hinton) predicted that computer vision would completely replace radiologists. He even advised that radiology was the last field anyone should enter.
A decade has passed, and his prediction regarding the first part has proven to be one hundred percent correct. Computer vision has indeed been fully integrated into nearly all radiology equipment and platforms worldwide.
However, the truly unexpected outcome is that the number of radiologists has not only failed to decline but has instead increased, with demand surging even further.
The reason is that everyone’s work essentially has two layers: one is the tasks they perform daily, and the other is the true purpose they are meant to fulfill.
The task of radiologists, of course, involves examining scans and interpreting results. However, their real purpose is to assist doctors, help patients, and contribute to disease diagnosis and treatment.
Ultimately, you will find that as scanning becomes faster, hospitals can conduct more scans. Medical efficiency improves, and patients can be admitted and treated more quickly.
And let us not forget, hospitals need to generate revenue. They will perform more scans, accept more patients, detect diseases earlier, and see an increase in income. What is the result? The result is that they end up needing even more radiologists.
This is a quintessential example.
A country that grows faster and achieves higher productivity will ultimately become wealthier. And a wealthier nation can certainly afford to place more teachers in classrooms rather than fewer.
You could even equip every teacher with a personalized teaching plan tailored to each student. This would make teachers more effective rather than replace them.
In the future, every student will have AI assistance, but each student will still need excellent teachers.
Host: Brilliant. You are truly a guide that this era needs. You’ve been consistently emphasizing the positive aspects of this technology, and I believe there is already too much doomsday narrative out there.
Moreover, the fact that you can remain so humble after achieving such tremendous success, stating that it all comes down to developing software, serves as an important reminder for many people.
Jensen Huang: We have done this kind of thing before. In the past, we created new categories and even entire industries.
Therefore, there’s absolutely no need to resort to fear-mongering to drive discussions. It doesn’t help at all.
Moreover, in the end, we have the autonomy to make our own choices. We possess agency. Ultimately, the path forward will be determined by us.
Editor/Rice