Source: Content compiled from wccftech
Intel is reportedly looking at disaggregated GPU architectures, as the blue team has filed a patent that would allow it to utilize specialized logic "chiplets."
Moving away from monolithic designs to smaller, more specialized chiplets is an ambitious goal, and one that enthusiasts certainly see a future in. With Intel filing new patents in this area, it is exciting to see the company embracing disaggregated GPU designs, though we don't know how far away an actual implementation is.
According to the patent application, Intel is currently exploring the use of specialized logic chiplets to handle GPU workloads, and we would love to see the market adopt the approach sooner rather than later.
For those who don't know, a disaggregated GPU architecture is an innovative approach to GPU design that moves from a monolithic configuration to small, specialized chiplets linked by high-speed interconnect technology. Dividing the GPU into chiplets allows manufacturers to fine-tune each one for a specific use case, such as compute, graphics, or AI, and then scale them up for larger applications.
Another huge advantage of disaggregated GPU architectures is power efficiency: the individual chiplets allow for power gating, meaning a chiplet that is not in use can be switched off entirely to save energy. The design also brings several other benefits, such as workload customization, modularity, and flexibility, which is why this technique is widely seen as the future of GPU design.
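To make the power-gating idea concrete, here is a minimal sketch of how a scheduler might gate idle chiplets. All names (`Chiplet`, `DisaggregatedGPU`) and power figures are invented for illustration and do not come from Intel's patent:

```python
# Hypothetical model of per-chiplet power gating in a disaggregated GPU.
# Class names and wattages are illustrative assumptions, not patent details.

class Chiplet:
    def __init__(self, role, idle_power_w, active_power_w):
        self.role = role              # e.g. "compute", "graphics", "ai"
        self.idle_power_w = idle_power_w
        self.active_power_w = active_power_w
        self.powered = True
        self.busy = False

    def power_draw(self):
        if not self.powered:
            return 0.0                # power-gated: draws nothing
        return self.active_power_w if self.busy else self.idle_power_w


class DisaggregatedGPU:
    def __init__(self, chiplets):
        self.chiplets = chiplets

    def gate_idle_chiplets(self):
        # Switch off any chiplet that currently has no work scheduled.
        for c in self.chiplets:
            c.powered = c.busy

    def total_power(self):
        return sum(c.power_draw() for c in self.chiplets)


gpu = DisaggregatedGPU([
    Chiplet("compute", idle_power_w=5.0, active_power_w=80.0),
    Chiplet("graphics", idle_power_w=5.0, active_power_w=60.0),
    Chiplet("ai", idle_power_w=5.0, active_power_w=70.0),
])
gpu.chiplets[0].busy = True           # only the compute chiplet has work
print(gpu.total_power())              # 80 + 5 + 5 = 90.0 before gating
gpu.gate_idle_chiplets()
print(gpu.total_power())              # 80.0 once the idle chiplets are gated
```

A monolithic die, by contrast, generally has to keep shared power rails up even when large blocks are idle, which is exactly the inefficiency chiplet-level gating avoids.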
Interestingly, we saw AMD file a similar patent earlier.
AMD's patent application shows that the company is exploring "multi-chip" GPU design options, which indicates that the next-generation RDNA architecture may undergo a huge change.
The concept of MCM (multi-chip module) is not completely new to the graphics field, but the industry's inclination towards MCM is definitely growing due to the limitations of monolithic design.
AMD's new patent focuses on the use of chiplets and on managing the individual units in three different operating modes, similar to a multi-chip module structure. The race for GPU chiplets is definitely one worth watching, with both Intel and AMD now working hard to innovate in this field.
The patent describes three chiplet utilization "modes," which differ in how resources are allocated and managed. The first is a "single GPU" mode, which closely resembles how modern GPUs operate: all onboard chiplets act as a single, unified processing unit, sharing resources in a collaborative environment.
The second, "independent mode," has the individual chiplets operating on their own, each managed by a dedicated front-end chiplet responsible for scheduling tasks for its associated shader engine chiplet. The third and most flexible is "hybrid mode," in which chiplets can operate either independently or together, combining the strengths of unified and independent processing to provide scalability and efficient resource utilization.
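The three modes above can be sketched as a toy task scheduler. This is only an illustration of the concept described in the patent; the mode names, the `schedule` function, and the pinning scheme are assumptions, not AMD's actual terms or implementation:

```python
# Toy scheduler illustrating the three chiplet operating modes from AMD's
# patent. All identifiers here are invented for illustration.
from enum import Enum

class Mode(Enum):
    SINGLE_GPU = 1    # all chiplets pooled into one unified GPU
    INDEPENDENT = 2   # each front-end schedules only its own work
    HYBRID = 3        # chiplets may be pooled or run independently

def schedule(mode, tasks, chiplets):
    """tasks: list of (name, pinned_chiplet_or_None).
    Returns a dict mapping chiplet -> list of task names."""
    plan = {c: [] for c in chiplets}
    rr = 0  # round-robin cursor for the unified pool
    for name, pinned in tasks:
        if mode is Mode.SINGLE_GPU or (mode is Mode.HYBRID and pinned is None):
            # Unified behavior: spread work across every chiplet.
            plan[chiplets[rr % len(chiplets)]].append(name)
            rr += 1
        elif mode is Mode.INDEPENDENT and pinned is None:
            raise ValueError("independent mode requires every task to be pinned")
        else:
            # Independent behavior: the task stays on its own chiplet.
            plan[pinned].append(name)
    return plan

tasks = [("draw", None), ("compute", "fe1"), ("ai", None)]
print(schedule(Mode.SINGLE_GPU, tasks, ["fe0", "fe1"]))
# {'fe0': ['draw', 'ai'], 'fe1': ['compute']}  -- pinning ignored, pooled
print(schedule(Mode.HYBRID, tasks, ["fe0", "fe1"]))
# {'fe0': ['draw'], 'fe1': ['compute', 'ai']}  -- pinning honored, rest pooled
```

The point of the sketch is simply that the same hardware can present itself either as one large GPU, as several small independent ones, or as a mix, purely through the scheduling policy.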
The patent does not reveal the details of AMD's MCM design approach, so we cannot say whether the red team will actually adopt the ideas it describes. And while multi-chip configurations do offer performance and scalability advantages, producing them is a more complex task, requiring high-end equipment and packaging processes that ultimately increase costs. Here is the patent's description of the multi-chip approach:
By partitioning a GPU into multiple GPU chips, a processing system can flexibly and cost-effectively configure a certain number of active GPU physical resources according to the operating mode.
In addition, a configurable number of GPU chips are assembled into a single GPU, making it possible to assemble multiple different GPUs with different numbers of GPU chips using a small number of tapeouts, and to build a multi-chip GPU using GPU chips implementing different generations of technology.
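The "small number of tapeouts" point can be illustrated with a trivial sketch: one chiplet design, taped out once, can yield a whole product stack just by varying the chiplet count. The function name and numbers here are invented, not from the patent:

```python
# Hypothetical illustration of deriving several GPU SKUs from a single
# chiplet tapeout by varying how many chiplets are assembled per package.
def sku_lineup(shader_engines_per_chiplet, chiplet_counts):
    """Map each chiplet count to the total shader engines of that SKU."""
    return {f"{n}-chiplet": n * shader_engines_per_chiplet
            for n in chiplet_counts}

print(sku_lineup(2, [1, 2, 4]))
# {'1-chiplet': 2, '2-chiplet': 4, '4-chiplet': 8}
```

With a monolithic approach, each of those three performance tiers would instead need its own die design and its own tapeout, which is the cost the patent's language is getting at.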
Currently, AMD does not have a true multi-GPU chiplet solution for the consumer market. The Navi 31 GPU still uses a single monolithic GCD, though the MCDs carrying the Infinity Cache and memory controllers have been moved onto chiplets. With the next-generation RDNA architecture, we can expect AMD to go further, with multiple GCDs each having their own dedicated shader engine blocks. AMD had planned one such GPU in the RDNA 4 series, codenamed Navi 4X/Navi 4C, but the plan was reportedly scrapped in favor of a more mainstream chiplet package, so perhaps we will see it return in a future RDNA 5 design.
However, implementing a multi-chiplet GPU is not as simple as it seems, as it brings manufacturing complexity as well as the need for proper interconnect technology. Chiplet designs have already appeared on the market, such as AMD's EPYC CPUs, but a "real" chiplet-based GPU has yet to arrive, and Intel's patent does give us hope that disaggregated GPUs will become a reality after all.
Adoption of MCM designs may also grow with the availability of High-NA lithography tools and rapidly evolving packaging technologies, and considering that the red team has already experimented with multi-chiplet packages, a shift away from monolithic designs is certainly something to watch for in a future RDNA architecture.