
Nvidia - Q & A | NVidia's logic

2024-03-19


Question: After Nvidia (NVDA) recently touched a high of $1,000, the divergence between bulls and bears turned fierce. Many people compare its chart with Cisco's peak during the dot-com bubble. What do you think?

Answer: In most cases it is meaningless to use old data to predict the stock market in the short term: there are too many variables, and the probability of history repeating itself exactly is extremely low. When analyzing technology stocks, however, one must analyze their development logic over the next few decades, and that logic is consistent.


Question: However, few people in the market can explain Nvidia's logic clearly. At least I haven't seen anyone do it.

Answer: Nvidia's logic is indeed demanding to understand. It has at least three levels:

  • Is the total addressable market (TAM) for AI big enough? For how many years can this business stay profitable?

  • Will Nvidia's current boom be short-lived? Once the major players finish their hardware arms race, will they stop buying?

  • Is Nvidia's moat deep enough? How much share can competitors take away in the future?


Question: I know that most technology giants share one characteristic: a so-called "moat." But what exactly is Nvidia's moat? All sorts of pundits talk about it at random, and I really don't know whom to listen to.

Answer: I have always believed that the moats of technology companies mostly come from integrating software with hardware, which is how they can last for more than half a century. There are certainly companies that succeed on pure software or pure hardware, but that does not change my core view.


Question: Can you give some examples?

Answer: I will pick one example from each decade of the past 50 years to illustrate.

The most important software technology of the 1970s was the relational database; call it SQL for short. SQL enabled enterprise informatization, turning handwritten ledgers into electronic ones and creating the first trillion-dollar market. The integrated software-and-hardware company here is IBM. IBM's job, simply put, was to run SQL on IBM mainframes for enterprises. In the 1990s, everyone heard about the legendary CEO who taught the IBM elephant to dance. In fact, that was mostly self-promotion by professional managers. The so-called dance was cutting the hardware business in favor of services, but in reality the mainframe was IBM's true moat, and it was never cut at all. The "service" business of installing SQL was exploding market demand, not something the professional managers invented. IBM's mainframe business has endured for 50 years, and most of our savings are still managed by IBM mainframes. Although many people thought IBM was doomed, its stock price is actually at a new high.

The most important software technology of the 1980s was the graphical user interface, GUI for short. The GUI made the PC explode into a tool for everyone. The representative here is "Wintel." Microsoft and Intel are indeed two separate companies, but as the de facto setters of the PC standard, their software and hardware are deeply bound together. Their success continues to this day: although Intel has run into many problems in recent years, it is still the leader in the PC field.

The most important software technology of the 1990s was the World Wide Web, WWW for short. The most important integrated software-and-hardware company of the WWW era is Cisco (CSCO), which remains the most important company in the Internet's backbone from then until today. Excluding the bubble peak of 2000, Cisco's stock price has actually risen steadily. In other words, much like IBM, Cisco's core business has kept making money ever since, but the business is not as big as everyone imagined, and equipment is replaced relatively infrequently. Cisco did make one huge mistake: its WRT54G home router shipped with Linux, and under the GPL it was forced to release the source code. That suddenly let every company build ordinary routers. Had Cisco chosen FreeBSD, as Apple did for macOS, it would have made far more money from software-hardware integration.

The most important software technology of the 2000s was virtualization, also called the hypervisor. The most famous company here is VMware, whose core product is essentially a bare-metal operating system for servers. VMware's software is very, very good, but it lacks deep hardware bindings, could not become a super-giant on its own, and has instead been bought and sold by hardware companies. The companies that finally used virtualization to integrate software and hardware successfully are Amazon, Google, and Microsoft: they turned the Internet into essential infrastructure for work and life, delivering all kinds of information and products.

The most important software technology of the 2010s was the mobile operating system: iOS and Android. Apple integrated iOS software with its own hardware and captured roughly 90% of the mobile phone industry's profits. No need to elaborate here.

The most important software technology of the 2020s is obviously the large language model, LLM for short.


Question: I'll grant that the software-hardware integrations you listed all made enormous money continuously for decades. But can the LLM really be compared with those predecessors?

Answer: Yes. This is the first layer of the bull case for Nvidia, and I believe Wall Street has reached a consensus on it, which is why it is so excited.


Hinton has said that generative AI (the LLM) marks the victory of connectionist intelligence (the connection school) over symbolic-logic intelligence (the symbol school). That victory suddenly drew a clear roadmap for almost everything AI will do to replace human work, and even sketched how machines might do things humans cannot.

Jensen Huang has said that the human DNA sequence is also a language. We still do not know what it means or what the various proteins it encodes actually do, but the LLM will most likely be able to tell us in the future. This opens a huge door for medicine.

Simply put, within a few years the LLM will be an indispensable companion for everyone, and it will become pointless for ordinary people to spend a decade or more, at huge cost, learning foreign languages, mathematics, physics, and chemistry.


Question: Stop! You've been talking about LLMs for a long time, but NVIDIA doesn't make LLMs. Didn't you say software and hardware must be integrated? And plenty of GPUs can train LLMs. Besides, Nvidia's results beat expectations over the past few quarters because the major players rushed their purchases: they urgently need AI compute to catch up with companies like OpenAI. Once their purchasing is complete and the data centers are built, NVIDIA's numbers will suffer, right?

Answer: Indeed, the AI industry is currently in an arms race: buy enough equipment first. Judging from analysts' data, the top two major players have the largest purchase volumes and the others are far behind, so it is impossible to conclude that demand has peaked. Judging from NVIDIA's guidance, delivery is still the bottleneck, and this arms race will continue for at least another year. Rebuilding data centers for inference will also continue for many years. And by the time this round of equipment is in place, it will be time for the next generation, because current hardware still has obvious performance limits: we have seen that training a model like GPT-4 takes more than a year. By Jensen Huang's estimate, AI computing power will increase a million-fold over the next ten years. That is the interesting part: it forces the major players to keep upgrading. As Huang himself put it, Nvidia competes with itself.


Question: I understand the second level. Let's look at the third. These increases in computing power are not necessarily exclusive to NVIDIA. How deep is NVIDIA's moat, really? CUDA has been praised to the skies, but isn't it just a software library? Competitors have competing products: AMD has ROCm and Intel has oneAPI.

Answer: Have you noticed how hard it is to find genuine head-to-head benchmarks online? Why? Because the gap between them and NVIDIA is much larger than you perceive.


Question: I saw Intel CEO Pat Gelsinger say, "We think the CUDA moat is shallow and small." And the silicon sage Jim Keller said, "CUDA is a swamp, not a moat." These big names obviously look down on CUDA.

Answer: I admit that after reading those comments, you will feel CUDA is nothing special. But in fact they deliberately use vague wording to create that illusion. Gelsinger actually added a qualification: he believes CUDA matters only for training, not for inference, and that Intel's AI processors can handle inference. Jim Keller never explained carefully what he meant by "swamp"; he also called x86 a swamp. In fact, it is precisely a swamp accumulated over more than a decade that competitors cannot copy. You know how to pave an asphalt road, but you do not know how to build an identical swamp. It is just like Microsoft Office: the design and code are a mess, but it is a swamp of backward and forward compatibility.


Question: That is still not clear enough. Manufacturers like AMD that already have powerful GPGPUs do not need to copy CUDA. They can simply build a new standard library of their own, which is like paving a fresh asphalt road.

Answer: This gets at what NVIDIA's software-hardware integration actually consists of. In fact, 15 years ago the manufacturers jointly created a computing framework called OpenCL. But the market was too small, each member had its own agenda, and bugs went unfixed for ages, so today it is half-dead. ROCm, AMD's answer to CUDA, has been around for more than seven years, but it too has been under-resourced, and its many problems drove users to despair, losing it almost all of them. NVIDIA, by contrast, calls itself a software company: Jensen Huang claims it employs more software engineers than hardware engineers.


Question: So does CUDA have no decent competitors at all?

Answer: Intel, seeing both OpenCL and ROCm stuck in the quagmire, decided to step around it and pave a new road: oneAPI. Objectively speaking, oneAPI does have lofty ambitions. As a high-level abstraction platform, it tries to cover GPUs, CPUs, FPGAs, and more.


Question: I don't quite follow. AMD can't handle even one kind of hardware, but Intel can handle every kind?

Answer: By analogy, what Intel has built is a bit like Google's Android on Java, which runs on different hardware from many manufacturers; CUDA, like iOS, runs only on NVIDIA GPUs but delivers the best performance. Intel acquired a very capable company called Codeplay, aiming to achieve cross-platform support and portable libraries through the SYCL language. The challenge is that SYCL is far less popular than Java was back then, with nowhere near its wealth of programmer talent.
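To make the analogy concrete, here is a minimal sketch of what CUDA code looks like: a vector-add kernel in NVIDIA's CUDA C++ dialect. It compiles only with NVIDIA's nvcc toolchain and runs only on NVIDIA GPUs, which is exactly the iOS-style lock-in described above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory: accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax are CUDA-specific extensions; porting even this toy program to ROCm/HIP or SYCL means rewriting exactly these vendor-specific parts, and that is before touching the tuned libraries that real workloads depend on.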


Question: Got it. So has CUDA finally met a challenger?

Answer: Far from it. High-performance computing requires deep, end-to-end binding of the hardware, drivers, cluster interconnects, low-level libraries, and upper-layer applications (PyTorch, compilers, and so on). CUDA has no weak link in that chain. Its competitors may match it on the raw GPU silicon, but the rest of their stacks lag far behind, and a single bug-ridden driver can ruin everything. AMD's MI300 has strong standalone performance, but that alone means little in practice. As Jensen Huang put it, even if their hardware were given away for free, the total cost of standing up an LLM would still be higher than on NVIDIA, because customers cannot afford to waste time on endless errors.
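A hedged illustration of why the stack matters: real training code rarely writes raw kernels like the one above; it calls NVIDIA's tuned libraries. The sketch below multiplies two tiny matrices with cuBLAS, a library that competitors must match not just in API surface but in years of per-architecture tuning.

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 2;                 // tiny 2x2 example
    float hA[] = {1, 2, 3, 4};       // column-major: A = [[1,3],[2,4]]
    float hB[] = {5, 6, 7, 8};       // column-major: B = [[5,7],[6,8]]
    float hC[4] = {0};

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, all n x n, column-major.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    printf("C[0][0] = %f\n", hC[0]);  // 1*5 + 3*6 = 23.0

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The point is not this one call but the ecosystem behind it: frameworks like PyTorch sit on cuBLAS, cuDNN, and NCCL, so a competitor must replicate the whole chain, driver to framework, before its faster chip matters.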


Question: You are really talking NVIDIA up. Does it have no weak point that can be attacked?

Answer: In the US, where the H-1B work visa lottery constrains supply, programmers are a scarce resource. Combined with a highly developed Internet industry and weak basic education, the United States has a huge programmer shortage, and most American programmers dislike unglamorous low-level work like drivers and compute libraries.


In stark contrast to the United States, China has massive basic education and a torrent of programmer talent. Given the China-US high-tech decoupling, China will inevitably go all-out to develop its own new productive forces. The libraries that AMD and Intel are struggling with are all open source; with enough investment and effort, the gap with NVIDIA's CUDA can certainly be narrowed significantly.


