At the heart of every computer, you will find the CPU. The Central Processing Unit is critical hardware. It runs the operating system and all the programs on your computer. CPUs are designed as general-purpose processors. By their very nature, they’re supposed to be able to handle everything.
However, CPUs aren’t very good at some types of workloads, because their general-purpose hardware can’t be optimized for specific tasks without losing its general-purpose nature or becoming hopelessly large, complex, and expensive. Additionally, any CPU can only handle so much data and processing at once. A coprocessor is a second processing unit explicitly designed to address one or both of these problems.
A coprocessor is simply a second processing unit within a computer. In some scenarios, this can be a second physical CPU on the same motherboard, as in some servers. In High-Performance Computing and supercomputing scenarios, these general-purpose coprocessors can also be found on PCIe add-in cards. More often, though, a coprocessor is focused on a specific task rather than being a general-purpose processor. These task-specific processors can be attached directly to the motherboard or included on a separate daughterboard such as a PCIe add-in card.
The First Coprocessors
The first coprocessors were relatively simple. They were designed to handle I/O, or Input and Output, for mainframe computers. The problem was that I/O processing was a very time-consuming task for the CPU. The actual processing involved, however, was relatively simple, so it was cheap enough to build a dedicated processor to handle it. With the coprocessor handling I/O efficiently, the CPU only had to issue simple I/O commands, which freed up processor time and increased overall system performance.
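The offload pattern described above can be sketched in software. This is a minimal illustration, not real coprocessor code: a worker thread stands in for the I/O coprocessor, and `time.sleep` stands in for a slow device transfer, so the main "CPU" thread can keep computing instead of waiting.

```python
import threading
import time

def slow_io(result):
    """Stand-in for an I/O coprocessor performing a slow device transfer."""
    time.sleep(0.1)               # simulated transfer time
    result.append("data ready")

result = []
helper = threading.Thread(target=slow_io, args=(result,))
helper.start()                    # the CPU issues a simple I/O request...
busy_work = sum(range(1000))      # ...and does useful work in the meantime
helper.join()                     # synchronize once the transfer completes
print(result[0], busy_work)
```

The key point is the overlap: the expensive wait happens off the main thread, just as mainframe I/O happened off the main processor.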
The original IBM PC included an optional floating-point arithmetic coprocessor. CPUs of the day performed this type of math in software, which was slow but functional enough for the rare cases most users needed it. However, Computer-Aided Design, or CAD, systems used this type of math constantly. By separating floating-point arithmetic onto a coprocessor, not only were speeds increased when needed, thanks to hardware acceleration, but users who didn’t need it could save money by buying a system without the coprocessor.
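To see why software floating point was slow, consider what the CPU had to do without the coprocessor. This is a toy sketch, not real IEEE 754 emulation: numbers are represented as (mantissa, exponent) pairs, and a single multiply becomes several integer operations.

```python
def soft_float_mul(a, b):
    """Multiply two (mantissa, exponent) pairs using only integer math,
    as a CPU without a floating-point unit effectively had to."""
    m = a[0] * b[0]            # multiply the mantissas on the integer unit
    e = a[1] + b[1]            # add the exponents
    while m and m % 10 == 0:   # normalize the result
        m //= 10
        e += 1
    return (m, e)

# 1.5 * 2.0, represented as 15*10^-1 and 2*10^0
print(soft_float_mul((15, -1), (2, 0)))
```

Every floating-point operation expands into a sequence like this; a coprocessor performs the whole thing in dedicated hardware instead.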
Ultimately, these simple coprocessors had their functions integrated into the CPU architecture. This is partly a natural result of continuous CPU development, but it is also related to the difficulty of maintaining simple synchronization as CPU clock speeds increased. While these CPUs and coprocessors ran well enough at 75MHz, there would be massive timing-delay, power-consumption, and radio-frequency-interference issues at the GHz frequencies of today. These issues necessitated more complex signaling systems between CPUs and modern coprocessors.
The GPU, or Graphics Processing Unit, is probably the best-known form of coprocessor. GPUs are optimized for the highly parallelizable workload of graphics rendering. CPUs can perform this task in software or with an integrated graphics chip. To offer the performance of a modern discrete GPU, though, a CPU would need to integrate the entire GPU die into the CPU die.
This would massively increase the cost and complexity of a CPU and significantly increase the heat production too. Integrated graphics chips already take up a fair amount of CPU die space. They can reduce the overall speed of the CPU because of their heat output.
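The reason rendering parallelizes so well is that each pixel can be computed independently of every other pixel. A minimal sketch, using a hypothetical per-pixel shader function: on a CPU the loop below runs serially, while a GPU maps one hardware thread to each pixel and computes them all at once.

```python
def shade(x, y, width, height):
    """Hypothetical per-pixel shader: brightness ramps from 0 to 255
    across the image. Depends only on this pixel's own coordinates."""
    return (x + y) * 255 // (width + height - 2)

width, height = 4, 4
# Each iteration is independent, so the work is trivially parallelizable.
image = [[shade(x, y, width, height) for x in range(width)]
         for y in range(height)]
print(image[0][0], image[height - 1][width - 1])
```

Because no pixel needs the result of any other, thousands of simple GPU cores can outpace a handful of fast CPU cores on this workload.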
Historically, CPUs could process audio signals but weren’t fantastic at it. The resulting audio artifacts and static led to the creation of sound cards. These provided audio input and output ports and performed the actual audio processing on the sound card itself. This significantly improved signal isolation and the quality of the sound output. While some sound cards are still around, they are largely unnecessary in modern computers, as the sound processing integrated directly onto motherboards and CPUs is much better than it was in the heyday of sound cards.
A relatively recent type of coprocessor is the NPU, or Neural Processing Unit. These are designed to perform or accelerate AI workloads. At a high level, NPUs are pretty similar to GPUs, just with optimizations specific to AI workloads. As AI workloads become more common for ordinary users on smartphones and computers, NPUs will likely become more widespread.
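The workload NPUs optimize for is dominated by multiply-accumulate operations: a neural-network layer is essentially a matrix-vector product followed by an activation function. A hypothetical two-neuron layer, written out longhand to show the repetitive arithmetic that NPU hardware accelerates:

```python
def dense_layer(inputs, weights, biases):
    """Toy fully-connected layer: one weighted sum per output neuron,
    followed by a ReLU activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w          # the multiply-accumulate an NPU hardwires
        outputs.append(max(0, acc))  # ReLU: clamp negatives to zero
    return outputs

print(dense_layer([1, 2], [[1, 1], [2, -1]], [0, 0]))
```

Real networks run millions of these multiply-accumulates per inference, which is why dedicating silicon to them pays off.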
Modern CPUs integrate many forms of coprocessor directly into the overall CPU die or architecture. This can be easily seen with integrated graphics chips etched into the same silicon as the rest of the CPU. However, the actual processing isn’t performed by the CPU cores. In AMD’s Ryzen CPUs, there’s also a separate I/O die that handles communication between chiplets and the rest of the computer. Some modern mobile devices also come with NPUs for AI processing.
A coprocessor is a secondary, tertiary, quaternary, etc., processor in a computing device where the CPU is the primary processor. There is no limit to the number of coprocessors in a system. However, software/hardware support, heat dissipation, physical space, and cost will all play a role.
A coprocessor handles tasks on behalf of the CPU, boosting overall performance in two ways: in the specific task, by performing it in an optimized fashion, and in other tasks, by freeing the CPU from wasting processing power performing that task in an unoptimized fashion. Over time, many coprocessors get integrated into CPUs as technology advances, though power and thermal limits restrict this in some scenarios.