
NVIDIA in the course of history

2023-10-02


On April 5, 1993, Jensen Huang, who had just turned 30, sat in a chain restaurant in San Jose, California, discussing a startup with Curtis Priem and Chris Malachowsky.

This Denny's sat next to an overpass, noisy and shabbily decorated; the storefront was riddled with bullet holes because passers-by would take shots at the police cars parked out front. The three founders ate sausage and eggs and drank cheap, burnt coffee. It was in this setting that Nvidia was born.

Jensen Huang was born in Tainan, Taiwan. Less than 20 kilometers northeast of his birthplace stands what is today TSMC's Fab 18.


Construction of this production base began at the end of 2017, and total investment is expected to exceed US$100 billion this year, making it TSMC's most expensive fab; the company's most advanced 5-nanometer and 3-nanometer processes run here. Whether it is Nvidia's RTX 40-series gaming graphics cards or the hard-to-find H100, the key transformation from silicon to chip happens here. The two companies have lifted each other up, and together they now command cutting-edge technology with a combined market value of US$1.5 trillion.

Counting from the company's founding, Jensen Huang has served as NVIDIA's CEO for thirty years. In terms of tenure, few of his Silicon Valley peers can match him.


In the standard Silicon Valley narrative, successful technology companies grow at breakneck speed into global giants under young founders, or companies that have become giants decline just as quickly because they cannot keep up, or, having declined, become great again on the back of some flagship product. It is a cycle of boom and bust, and at its core is a single word: fast.

Seen from this angle, Nvidia is clearly different. Apart from a brush with "sudden death" in its earliest days, NVIDIA spent two decades living a quiet life: never dazzling, but never struck by a major disaster either.

Then, in the past few years, Nvidia suddenly took off as if it had cheat codes enabled: from artificial intelligence to cryptocurrency, from the metaverse to ChatGPT. It was not that Nvidia was chasing the trends; it was more as if the trends had lost their minds and crashed into Nvidia.

As the chief architect of it all, Jensen Huang himself clearly disagrees with that framing. To him, Nvidia's story should read as a textbook case of technical insight, business vision, and long-termism.

With, at most, a little luck added in.


01 The wind from East Asia got in Jensen Huang's eyes

When Jensen Huang and his two partners finally decided to start the company, their thinking was simple yet profound. Movies, television, printed books, newspapers, music: these are the media humans use to express ideas and tell stories.


3D graphics, then just beginning to take shape, might become the next mass medium thanks to its real-time rendering and interactivity.

The only problem was that 3D rendering at the time demanded enormous amounts of computation and could only be handled by huge, expensive professional workstations. What Jensen Huang and Nvidia wanted to do was make the technology cheap enough to popularize.


In fact, when Nvidia was founded in 1993, it was among the first companies trying to bring 3D graphics to ordinary consumers, but it was not the one that succeeded first. The original PlayStation, which Sony launched a year and eight months later, was far more successful: within nine years of launch it had shipped more than 100 million units.

Meanwhile, Nvidia's early first-mover advantage soon evaporated. Within just two years, some 90 competitors had sprung up in geek-filled Silicon Valley. In those same two years the market itself changed beyond recognition, and the problem lay in DRAM.

From a technical perspective, storing images inside a computer takes considerable memory. Without compression, a 1080p frame contains more than two million pixels (1920 × 1080). At 8 bits per color channel, each pixel takes 4 bytes, so a single frame occupies more than 8 MB. And to create the impression of continuous motion, the on-screen image has to be refreshed many times per second.
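As a rough illustration of that arithmetic, here is a minimal back-of-the-envelope sketch in plain C++. The 1920 × 1080 resolution and 4-bytes-per-pixel figure come from the paragraph above; the 60 Hz refresh rate is an assumed, illustrative value, not a claim about any particular hardware.

```cpp
#include <cstdio>

int main() {
    // One uncompressed 1080p frame: 1920 x 1080 pixels,
    // 8 bits per channel x 4 channels (RGBA) = 4 bytes per pixel.
    const long long width = 1920, height = 1080, bytes_per_pixel = 4;
    const long long frame_bytes = width * height * bytes_per_pixel;

    // To look like continuous motion the frame must be redrawn
    // many times per second; assume 60 Hz here (an illustrative value).
    const long long refresh_hz = 60;

    printf("Pixels per frame : %lld\n", width * height);        // ~2.07 million
    printf("Bytes per frame  : %.1f MB\n", frame_bytes / 1e6);  // ~8.3 MB
    printf("Bytes per second : %.1f MB/s\n",
           frame_bytes * refresh_hz / 1e6);                     // ~500 MB/s
    return 0;
}
```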


Screens 30 years ago were nowhere near today's specifications in resolution or color, but memory was still a key bottleneck for 3D graphics, alongside computing power. In 1993, when Nvidia was founded, DRAM cost on the order of US$50 per MB, and mainstream VGA monitors ran at 640 × 480.

Even with the most advanced solutions of the time, a 3D display setup required 4 MB of DRAM, so the memory alone cost about US$200.

Add the controller chip and the other components on the board, plus a profit margin, and the final product would be priced upwards of US$1,000. Gamers today may happily spend thousands on a graphics card, but that is because a rich ecosystem has grown up around it. Thirty years ago there were no so-called AAA blockbusters, and no one in the consumer market was willing to spend that much money on a display component.

So Jensen Huang and his team innovated around DRAM, drastically reducing the memory their product required. A large part of the US$10 million raised from Sequoia went into this "far ahead of its time" solution. Huang expected it to give the product differentiated competitiveness, but in the company's second year the price of DRAM collapsed from US$50 per MB to around US$5 per MB.


This meant that Nvidia's huge early investment in memory optimization was almost entirely wasted.
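A back-of-the-envelope calculation using the article's own figures (4 MB of DRAM, roughly US$50/MB at founding and US$5/MB a year later) shows how quickly the value of that memory optimization evaporated. This is a sketch of the numbers above, not Nvidia's actual bill of materials.

```cpp
#include <cstdio>

int main() {
    const double dram_mb_needed   = 4.0;   // DRAM needed for a 3D display solution (per the text)
    const double price_1993_perMB = 50.0;  // roughly US$50/MB when Nvidia was founded
    const double price_1994_perMB = 5.0;   // roughly US$5/MB a year later

    double cost_before = dram_mb_needed * price_1993_perMB;  // ~US$200
    double cost_after  = dram_mb_needed * price_1994_perMB;  // ~US$20

    printf("DRAM cost at founding : $%.0f\n", cost_before);
    printf("DRAM cost a year later: $%.0f\n", cost_after);
    // Engineering spent shrinking that 4 MB now saves tens of dollars
    // instead of hundreds -- the differentiation is gone.
    return 0;
}
```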


Almost every later account attributes the NV1's failure to its incompatibility with the triangle-based rendering of the OpenGL route, but incompatibility was not the real cause. The fatal blow was that plummeting memory prices turned the NV1 from advanced into not-so-advanced, and a not-so-advanced product got no seat at the table when industry standards were shaped. The eventual incompatibility was the result, not the cause.

The DRAM price collapse was caused by oversupply, and the main contributors of new capacity were in East Asia. In the 1990s the economies of South Korea and Taiwan were both growing explosively; both were export-oriented, with electronics and semiconductors as pillar industries. In South Korea, for example, several chaebols, backed by government support and bank loans, undertook massive capital expenditure and expanded production at breakneck speed. South Korea's memory production capacity alone grew 8.3-fold between 1991 and 1995.

Because of the industry's economics, memory fabs must keep capacity utilization high to amortize fixed costs, so manufacturers keep producing even when they know supply far exceeds demand, pushing prices down further. But those aftershocks no longer mattered to Jensen Huang, because Nvidia was putting the past behind it and starting over.


02 Intel is not an opponent, it is a black hole


Two years after the NV1, NVIDIA shipped its first successful product: the RIVA 128. It marked a major shift in NVIDIA's technical direction: the company abandoned its earlier, industry-incompatible approach and fully embraced the Direct3D and OpenGL specifications.


By then, 3dfx's Voodoo card had become the industry benchmark, and the newly launched RIVA 128, hampered by driver problems, clearly lagged Voodoo in image quality.

Through subsequent driver updates, however, NVIDIA quickly brought the RIVA 128's image quality up to, and in some cases beyond, Voodoo's. At that point, two design advantages of the RIVA 128 came into play.


One was the technical groundwork laid during the earlier, independent route: NVIDIA designed a special memory architecture for the RIVA 128 that let it output higher-resolution images. The other was that, unlike the Voodoo, the RIVA 128 integrated 2D and 3D graphics on a single chip, so a computer with a RIVA 128 did not need a separate 2D graphics card for display output.

In the years that followed, the RIVA 128 and its derivatives helped Nvidia gain a firm foothold in the graphics card market. In 1999, NVIDIA released the first product in its GeForce line: the GeForce 256. Similar concepts had existed before, but NVIDIA still calls the GeForce 256 "the world's first GPU," and this is where the term GPU began to spread.


The GeForce 256 was indeed an epoch-making GPU: for the first time, it lifted the geometry work of transform and lighting (T&L) off the CPU's shoulders.

A brief primer on chip types is useful here. Roughly speaking, all chips fall into two camps: generalists and specialists. The archetypal generalist is the CPU, which can handle a wide variety of general-purpose computing tasks. GPUs are closer to specialists: like master craftsmen devoted to a single craft, they excel at the tasks within their focus.

A long-running trend in the chip industry is that, thanks to Moore's Law, CPUs can do ever more and keep getting faster. Many computing tasks therefore start out on dedicated devices, but once those tasks become common and stable, folding them into the CPU is a natural next step.

In this process, CPU makers strengthen their products' competitiveness and consumers benefit from the cost savings of integration. Only the original specialist manufacturers get swept into the dustbin of history: generalist chips are like black holes, swallowing specialist chips.


Cryptography, for example, a key piece of infrastructure for modern network communication, was first implemented in separate integrated circuits and later became a handful of CPU instructions. Sound cards, video-decoding cards, and similar products have traced the same path.

Against this backdrop, the GeForce 256's move of taking T&L work away from the CPU carried extraordinary symbolic weight, because it ran exactly opposite to the "generalists absorbing specialists" process described above.

As a newcomer to the chip industry who had watched the grim fate of his specialist peers, Jensen Huang understood that only by creating unique value could a company hope to exist for the long term.


Fortunately, compared with cryptography or run-of-the-mill audio and video decoding, 3D graphics has a much higher ceiling. Users always want finer image quality and higher frame rates, which drives the pursuit of GPU performance; and each time the GPU meets existing demand, new demand is stimulated, forming a long-running positive feedback loop.

Even now, flagship GPUs barely meet the performance demands of the most punishing AAA games, while newer ambitions such as the metaverse remain out of reach precisely because they run into today's compute ceiling.

So the reality has always been that by the time integrated graphics catch up with the discrete GPUs of a few years earlier, Nvidia's newest GPUs have already opened up another wide gap. As for Intel, it long failed to become a true rival to Nvidia, not because it wanted to enter and dominate the GPU industry as a competitor, but because it was waiting to absorb and eliminate the category altogether, as it had done many times before.


03 CUDA, grown through the long season


In its 30 years so far, apart from its earliest days, NVIDIA has almost never faced a life-or-death moment. After Nvidia acquired the former giant 3dfx, the GPU industry entered a duopoly of NVIDIA and ATI. Nvidia's revenue grew steadily over the following decade or so, but after reaching roughly US$4 billion in 2008 it merely fluctuated and inched upward, finally passing the US$5 billion milestone in 2016.


Correspondingly, before 2016 Nvidia's stock price stayed below US$10, rising and falling with its results. It was a long season, during which Jensen Huang led his colleagues through one diligent attempt after another, and the foundations of Nvidia's later empire were quietly laid.

As the previous section argued, to avoid being integrated away, Nvidia, the GPU maker, had to keep running faster than Intel, the CPU maker. It did so by building ever more powerful chips and by giving the developer community an ever larger stack of tools with which to exploit them.

The largest group among those developers was game developers, so Nvidia invented technologies such as programmable shaders to give game development more flexibility and expressive power in how scenes are rendered.

Jensen Huang himself regards programmable shaders as one of the most critical innovations in NVIDIA's history: they expanded the boundaries of the industry, and it is this technology that allowed GPUs to keep absorbing more transistors and more computing power, which in turn kept them from being absorbed into the CPU or the motherboard chipset.


But Jensen Huang's real technical foresight was that he not only pushed NVIDIA down the road of computational graphics to its end; he also saw, very early, the essential difference in computing paradigm between the GPU and the CPU, and prepared for it with great patience and courage.

Not long after the turn of the millennium, it became clear that Moore's Law was gradually faltering, and multi-core CPUs became the trend. Today, top consumer CPUs have 16 physical cores, and server CPUs can have as many as 128, so multi-threaded parallel programming for CPUs has become commonplace.

However, the starting point of CPU multi-core parallelism is completely different from that of the GPU. Multi-core CPUs emerged largely as a workaround for the difficulty of continuing to improve single-core performance, whereas the GPU's methodology from the very beginning has been to break a large problem into as many small problems as possible and then throw as many "weak" computing cores as possible at them.

For example, where today's high-end consumer CPUs have 16 cores, a high-end consumer GPU such as the RTX 4090 has more than 16,000. A thousand-fold quantitative difference is, obviously, a qualitative one.

Technologies such as programmable shaders are simply this model applied to graphics. Extending it to general-purpose computing opens a far broader world, and NVIDIA's key to that world is CUDA.


In Jensen Huang's words, the first step was to "make graphics programmable," and the second was to "open up GPU for programmability for all kinds of things."

Before CUDA, programming a GPU was a cumbersome affair that required writing a great deal of low-level code. CUDA's ease of use allowed a much broader population to become GPU developers, unlocking the potential of the GPU as a computing platform. But building CUDA support into every Nvidia graphics card is expensive, and developing and maintaining the CUDA software stack also demands enormous, sustained investment.
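To make the "many small problems" model and CUDA's ease of use concrete, here is a minimal, generic CUDA C++ sketch (a standard vector-add pattern, not any particular NVIDIA sample; the one-million-element size is purely illustrative). The large problem, adding two arrays, is split into one tiny task per element, each handled by one GPU thread, while the host code reads almost like ordinary C.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one array element: the "many small problems" model.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // about one million elements
    float *a, *b, *c;

    // Unified memory keeps the example short; the runtime migrates data as needed.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                  // wait for the GPU to finish

    printf("c[0] = %.1f, c[n-1] = %.1f\n", c[0], c[n - 1]);  // expect 3.0 and 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```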

Yet no matter how earnestly Jensen Huang explained what a great innovation CUDA was, Wall Street analysts did not buy it, and Nvidia's stock hovered in the single digits for years. Looking back, it is easy to scold those analysts for failing to recognize a treasure; but the analysts had a point too. Over the years, technology companies have promised plenty of great things that never materialized.


04 Tegra failed, Orin succeeded


On September 5, 2013, Lei Jun unveiled the third-generation Xiaomi phone at the China National Convention Center in Beijing, with Jensen Huang appearing as a special guest. Xiaomi had no PC product line at the time; Huang had come to Beijing to promote its phone SoC. The China Mobile version of the Mi 3 used NVIDIA's quad-core Tegra 4.


Although the two laughed and joked on stage, Lei Jun did not bet everything on Nvidia: the China Unicom and China Telecom versions of the Mi 3 used Qualcomm processors.

This was the high-water mark of NVIDIA's push into mobile computing, and mobile was the next star Jensen Huang was seeking for NVIDIA. After the smartphone wave took off, Huang believed the booming demand for mobile computing would set off a revolution that might eventually upend even the PC and server markets.

In early 2011, Jensen Huang told the technology outlet VentureBeat that the Tegra line would expand Nvidia's addressable market sixfold.

In the same breath, however, Huang also said that Nokia's transition from Symbian to Windows would be an "excellent opportunity" for Nvidia. Besides Nokia, the other key Tegra customer was Motorola, whose Android tablet, the Xoom, used the Tegra 2. While Apple's iPad 2 started at US$500, the Xoom was priced at US$800.


In hindsight, these details already foreshadowed the failure of Nvidia's Tegra line in the mobile chip market.

Still, Nvidia's investment in Tegra was not money down the drain; it was more a case of losing at one thing and unexpectedly gaining at another. After baseband problems forced a complete retreat from the phone market, the Tegra line shifted its design goals from power consumption and efficiency to outright performance.

A typical example is the Tegra X1, the chip inside Nintendo's Switch console, which is credited with delivering a high-quality visual experience.

Moreover, in developing the Tegra line Nvidia accumulated deep SoC experience. That experience failed to crack the mobile market, but it helped Nvidia move quickly into the smart car era: before switching to fully in-house software and hardware, Tesla's cars shipped with the Tegra X2.

Meanwhile, the Orin chip now standard in high-end smart cars also belongs to the Tegra family. In fact, both "NVIDIA Drive" for driver assistance and "NVIDIA Jetson" for embedded automation are built on Tegra-series silicon.


Autonomous-driving chips are a key element of intelligent driving platforms, and besides Nvidia, Qualcomm and Intel also harbor ambitions here. For now, though, Nvidia holds a clear lead.

On Intel's side, the US$15.3 billion premium it paid for Mobileye bought it a place in the first tier of automotive suppliers, but in high-end performance the gap with Nvidia has widened. According to Mobileye's SEC filings, its valuation has fallen sharply to about US$16 billion from US$50 billion in March of last year. On Qualcomm's side, after the US$44 billion acquisition of NXP fell through, the company has focused mainly on cockpit chips, typified by the Snapdragon 8155.


05 Algorithms, Hardware and Lottery


Edison built the world's first phonograph in 1877, and the invention, together with records, spread among the music lovers of the era. Edison himself was frustrated and disappointed by this, since his original purpose for the phonograph had been to record the last words of the dying; next to that vision, using it to play music seemed a rather lowly use.


The history of technology is full of inventions that refused to follow the script. Another well-known example is sildenafil, which Pfizer developed as a cardiovascular drug. Sometimes this twist of fate is a cruel joke; other times it turns out to be a stroke of luck, as it was for Nvidia.

To say this is not to deny Jensen Huang's technical vision and business talent, but to stress that, besides individual striving, the course of history also has to be taken into account.

Of course, Huang always knew that Nvidia's GPUs had potential beyond video games; understood as a new computing paradigm, the GPU opens far larger possibilities. But according to a 2016 Forbes article, he did not actually foresee that deep learning would become the GPU's explosive application.


The groundwork for deep learning was laid as early as the last century: the backpropagation algorithm dates back to the 1960s, and deep convolutional neural networks existed by 1979. But for decades these ideas went nowhere, until we finally had enough data and enough computing power.

In fact, a Google Brain researcher named Sara Hooker describes the progress deep learning has made on modern GPUs as winning the "hardware lottery." The core point of her paper is to remind readers that the research ideas that succeed in academia and industry today probably do so not because they are intrinsically better at solving their problems than the ideas that failed, but because they happen to fit the existing hardware environment.

She argues that deep learning's success on parallel hardware such as GPUs may be exactly such a case. From her perspective, we can also glimpse the contingency involved in the GPU ending up as deep learning's chosen platform.



When we talk today about the origins of deep learning and the turning point in Nvidia's fortunes, one landmark event is unavoidable: in 2012, Hinton and his doctoral students Krizhevsky and Sutskever entered the ImageNet image-recognition competition with a convolutional neural network and cut the error rate from the previous year's 25% to about 15% in one stroke. To train the model, Krizhevsky and his colleagues used two NVIDIA GeForce gaming graphics cards to learn from 1.2 million images.


But Krizhevsky and his colleagues were not the first scholars to train deep neural networks on GPUs. Andrew Ng's Stanford team had published "Large-scale Deep Unsupervised Learning using Graphics Processors" in 2009, noting that GPUs could dramatically accelerate the training of neural network models.

Yet at Nvidia's GTC conference in 2013, Jensen Huang's keynote barely mentioned AI. Only at the following year's GTC did he declare artificial intelligence Nvidia's most critical business.

This shows that Huang did not go all in on AI from the start; he did so only after the industry trend had clearly formed. Even so, measured from that point, Nvidia moved remarkably early.

And as noted above, failing to predict years in advance that AI would become the foundation of Nvidia's empire takes nothing away from Jensen Huang's achievement.


To a degree, from the early innovations in graphics computing to the technology ecosystem later built around CUDA, NVIDIA's groundwork was destined to bear remarkable fruit. Huang could not predict exactly what the fruit would be, but he knew something would grow.

The latest second-quarter report shows quarterly revenue of a record US$13.51 billion, up 101% year on year. The biggest contributor was the data center business, whose quarterly revenue of US$10.32 billion was up 171% year on year, also a record. Even though the prior guidance had already signaled that revenue and profit would surge, neither management nor Wall Street expected a surge of this magnitude.

Since data center revenue first overtook gaming in the second quarter of fiscal 2021, it has increasingly become the ballast of Nvidia's results. This quarter, the data center share of total revenue rose to 76%, from 35% in the same period last year.


If there is one setback Nvidia has suffered in recent years worth mentioning, it is the 2020 attempt to acquire Arm, which collapsed on regulatory grounds.

As noted in Part 2, on the generalist-specialist spectrum Intel and NVIDIA sit at opposite ends. The current trend in data centers is toward ever higher levels of system integration, moving steadily toward SoCs; chip companies need to combine CPUs and GPUs, much as Apple has done with its M-series chips on the consumer side.

As the data center business grows more important to both companies, each is trying to shore up its weaknesses.

Intel has always been a CPU company, so it needs to lean toward the specialist side, which is why it acquired companies such as Altera, Mobileye, and Habana Labs over the past few years. Nvidia has always been a GPU company, so it needs to lean toward the generalist side, which was the basic logic behind its bid for Arm. Seen from this angle, AMD, which acquired ATI long ago and has years of hands-on experience with both CPUs and GPUs, in theory holds some unique advantages.

Of course, the biggest loser from the deal's collapse may be Masayoshi Son, even though he has just recouped something from Arm's listing.


The US$1.25 billion breakup fee SoftBank collected when the deal was terminated is nothing next to what it might have gained. Nvidia's 2020 offer was US$12 billion in cash plus US$21.5 billion in Nvidia stock; valued at today's market prices, that package would run into the hundreds of billions of dollars. Seen that way, Masayoshi Son's Vision Fund might have had the chance to make up for the US$32 billion it lost last fiscal year.


06 The end


Intel's founding employee and third CEO Andy Grove once said: "Success breeds complacency. Complacency breeds failure. Only the paranoid survive."

Whether this explains why Intel missed the new wave is hard to say, but it is certain that Jensen Huang's paranoia is key to Nvidia's success today. He is a rare talent in both business and technology.

As far as the eye can see, Nvidia currently has no rival that can match it. But the simple fact a trillion-dollar Nvidia illustrates is that it sits on a track whose future is both bright and, for that very reason, extremely attractive to others. Major players such as Google, Amazon, and Microsoft are all designing their own AI acceleration chips; startups targeting autonomous driving and other AI computing workloads keep springing up; and AMD and Intel, left behind for now, are all the more eager to stage a comeback.

