Historically, CPUs have increased in performance rapidly, in accordance with the informal "Moore's law": the observation that the number of transistors in a processor, and with it the processor's performance, doubles roughly every two years.
Moore's law held remarkably consistently for decades after it was first posited in 1965, primarily because processor manufacturers kept making advances in how small they could make the transistors. Shrinking the transistors increases performance because more of them can fit into the same space and because smaller components are more power-efficient.
Moore’s law is dead
Realistically, though, Moore's law was never going to hold forever: the smaller components get, the harder it becomes to shrink them further. Since 2010, at the 14 and 10-nanometre scales – a nanometre is a billionth of a metre – processor manufacturers have started to run up against the limits of what is physically possible. They have struggled to shrink the process size below 10 nm, although as of 2020 some 7 nm chips are available and 5 nm chips are in the design stage.
To compensate for the lack of process shrinkage, processor manufacturers have had to find other ways to keep increasing performance. One of these is simply making bigger processors.
Yield
One of the issues with building an incredibly complex processor like this is that the yield of the manufacturing process is not 100%: some of the processors come out faulty and have to be thrown away. The bigger the processor, the larger its area, and the higher the chance that any given chip contains a flaw that makes it unusable.
Processors are made in batches, with many processors on a single silicon wafer. For example, if these wafers contain 20 defects each on average, then roughly 20 processors per wafer will need to be thrown away. With a small CPU design there could be, say, a hundred processors on a single wafer; losing 20 isn't great, but an 80% yield should still be profitable. With a larger design, however, you can't fit as many processors on a wafer – perhaps only 50. Losing 20 out of those 50 leaves a 60% yield, which is much more painful and much less likely to be profitable.
Note: The values in this example are only used for demonstration purposes and are not necessarily representative of real-world yields.
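To make the arithmetic concrete, here is a minimal sketch of that comparison in Python. It uses the same illustrative numbers as above (100 or 50 dies per wafer, 20 faulty dies) and assumes each defect ruins exactly one die.

```python
def wafer_yield(dies_per_wafer, defective_dies):
    """Fraction of usable dies on a wafer, assuming each defect
    lands on a different die and ruins it."""
    return (dies_per_wafer - defective_dies) / dies_per_wafer

defects = 20  # illustrative average number of faulty dies per wafer

small_design = wafer_yield(dies_per_wafer=100, defective_dies=defects)
large_design = wafer_yield(dies_per_wafer=50, defective_dies=defects)

print(f"Small die yield: {small_design:.0%}")  # 80%
print(f"Large die yield: {large_design:.0%}")  # 60%
```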
Chiplets
To combat this issue, processor manufacturers have split some of the functionality and components out into one or more separate chips, which still sit inside the same overall package. Each of these separate chips is smaller than a single monolithic chip would be; they are known as "chiplets".
The individual chiplets don't even need to use the same process node: it's entirely possible to have both 7 nm and 14 nm chiplets in the same package. Using an older, larger node for some chiplets can help save costs, as larger nodes are easier to manufacture and their yields are generally higher because the technology is less cutting-edge.
Tip: Process node is the term used to refer to the scale of transistors being used.
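To illustrate why mixing nodes can pay off, here is a rough sketch comparing the effective cost of each working chiplet on an older node versus a newer one. The wafer costs, die counts, and yields are made-up placeholders chosen only to show the shape of the trade-off, not real manufacturing figures.

```python
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    """Effective cost of each working die once the faulty ones are discarded."""
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies

# Hypothetical numbers for illustration only; real wafer costs and
# yields vary widely and are rarely published.
older_node = cost_per_good_die(wafer_cost=3000, dies_per_wafer=300, yield_fraction=0.95)
newer_node = cost_per_good_die(wafer_cost=9000, dies_per_wafer=300, yield_fraction=0.75)

print(f"14 nm chiplet: ${older_node:.2f} per working die")  # cheaper per die
print(f"7 nm chiplet:  ${newer_node:.2f} per working die")
```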
For example, in AMD's second-generation EPYC server CPUs, the processor cores are split across eight separate chiplets, each built on the 7 nm process node. A separate 14 nm chiplet handles the I/O (input/output) for the core chiplets and for the overall CPU package.
Intel is designing some of its future CPUs to have two separate processor chips, each built on a different process node. The idea is that the older, larger node can handle tasks with lower power requirements, while the CPU cores on the newer, smaller node take over when maximum performance is needed. This split-node design should be especially helpful for Intel, which has struggled to achieve acceptable yields with its 10 nm process.