A CPU typically has several distinct parts, but at its heart are the CPU cores, which provide its actual processing power. A modern CPU core contains a small array of memory split between cache, registers, and buffers; these offer high-speed storage, though their capacity is minimal. A CPU will have one or more cores, typically an even number. Each CPU core has the same selection of supporting infrastructure: the same amount of cache, the same number of registers, and so on.
Early CPU designs used a single core and were fundamentally sequential in operation: the core completed instructions in the order it received them, and it fully completed each instruction before moving on to the next. This is a simple design, but it is not very efficient. The actual circuitry of a CPU core has several distinct parts, and each part corresponds to one stage of an instruction's execution.
However, just because these circuits are distinct doesn't mean they must be used in strict sequence. If each stage works on a different instruction at the same time, all of the segments of the pipeline can be in use at once. This approach is called pipelining, and it achieves a significant performance increase over completing each instruction in full before moving on to the next. Making good use of a pipeline does require a little more effort in designing the CPU, but the performance boost is more than worth it.
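The effect of pipelining can be sketched with some simple arithmetic. The model below assumes an idealized five-stage pipeline with no stalls; the stage names and count are illustrative, not taken from any particular CPU.

```python
# Illustrative model only: an idealized 5-stage pipeline
# (fetch, decode, execute, memory access, write-back) with no stalls.
STAGES = 5

def sequential_cycles(instructions: int) -> int:
    """Each instruction completes all stages before the next starts."""
    return instructions * STAGES

def pipelined_cycles(instructions: int) -> int:
    """Once the pipeline is full, one instruction finishes per cycle."""
    return STAGES + (instructions - 1)

print(sequential_cycles(100))  # 500 cycles
print(pipelined_cycles(100))   # 104 cycles
```

For 100 instructions, the pipelined core finishes in roughly a fifth of the cycles, which is exactly the appeal: the same circuitry, used all at once instead of one stage at a time.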
Making the Best Use of the Pipeline
The pipeline is designed to keep instructions flowing through the CPU as efficiently as possible. Unfortunately, like everything, it’s not quite that simple. Instructions in the pipeline may need to be paused for one or more clock cycles while the CPU cache is consulted for data. This leads to a pipeline bubble or a pipeline stall.
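Extending the same kind of back-of-the-envelope sketch (the five-stage pipeline here is hypothetical), each cycle spent stalled waiting on the cache inserts a bubble that pushes back every instruction behind it:

```python
# Illustrative model only: a hypothetical 5-stage pipeline where each
# stall cycle adds one "bubble" that delays all following instructions.
STAGES = 5

def cycles_with_stalls(instructions: int, stall_cycles: int) -> int:
    """Pipeline fill time + one cycle per instruction + bubbles."""
    return STAGES + (instructions - 1) + stall_cycles

print(cycles_with_stalls(100, 0))   # 104 cycles with no stalls
print(cycles_with_stalls(100, 20))  # 124 cycles after 20 stall cycles
```

Every stall cycle is a cycle in which the execution hardware does no useful work, which is precisely the gap a secondary pipeline tries to fill.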
However, this issue can be mitigated by adding a secondary pipeline. The secondary pipeline supplies instructions that can run immediately whenever the main pipeline is stalled. This adds little complexity to the CPU but increases performance by filling in the gaps in the main pipeline. The concept is marketed as SMT (Simultaneous Multithreading) in AMD CPUs and as Hyper-Threading in Intel CPUs.
The secondary pipeline is essentially advertised to the operating system as another CPU core. In the underlying CPU architecture, however, only one physical CPU core exists. This conceptual second core is referred to as a virtual core.
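You can observe this from software. The rough sketch below is Linux-specific, and the /proc/cpuinfo parsing is an assumption about that file's usual layout: Python's os.cpu_count() reports logical cores (physical plus virtual), while counting unique core IDs gives the physical count. On an SMT/Hyper-Threading CPU, the first is typically double the second.

```python
import os

def logical_cores():
    """Logical cores as the OS sees them: physical + virtual."""
    return os.cpu_count()

def physical_cores():
    """Count unique (physical id, core id) pairs in /proc/cpuinfo.

    Returns None if the file is missing or lacks topology fields
    (e.g. on non-Linux systems or some virtual machines).
    """
    cores = set()
    phys = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    cores.add((phys, line.split(":", 1)[1].strip()))
    except OSError:
        return None
    return len(cores) or None

print("logical:", logical_cores())
print("physical:", physical_cores())
```

On a machine without SMT, the two numbers match; with SMT enabled, the logical count is higher because each virtual core is advertised as a core in its own right.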
Note: All consumer CPUs that utilize virtual cores add only one virtual core to each physical core. The concept can be pushed further, though, with some specialized processors offering four or even eight “cores” on a single physical core.
Performance and Issues
Given the minimal extra circuitry needed to add another pipeline, it may not surprise you to learn that the increase in power draw directly attributable to a virtual core is minimal. It is worth noting, however, that because the virtual core keeps the physical core busy more of the time, it also increases power draw in a secondary manner. Intel’s research indicates that adding a virtual core generally increases performance by around 30%, though this figure can vary significantly depending on the application. Some applications see a near doubling of performance, while others show a small amount of negative scaling. That negative scaling is generally only seen in software that has not been optimized for virtual cores.
There are also some security risks, as the virtual core shares most of the same resources as the physical core. Monitoring cache hits, access times, and the Translation Lookaside Buffer can enable the leaking of encryption keys. In practice, these side-channel attack vectors are fairly limited: the attacker must already be able to execute arbitrary code on the machine, and that code must run on the virtual core paired with the physical core running the thread being monitored. Nevertheless, there are serious recommendations to disable Hyper-Threading and SMT for security reasons.
A virtual core is not a CPU core at all. Instead, it is a second pipeline connected to the same physical CPU core. When the actual processing unit would otherwise be sitting idle, it takes instructions from the second pipeline. This is an efficient use of resources, increasing the utilization of the actual CPU core for essentially no cost. However, the shared resources open up some interesting potential security issues that are difficult, if not impossible, to protect against.