In modern CPUs, two statistics dominate the marketing. The first is the clock speed in GHz, and the second is the number of processor cores. While both are important to performance, many other factors matter too. This menagerie of elements means comparing the performance of two different CPU architectures is difficult.
For example, you can’t directly compare the performance of a CPU from Intel and a CPU from AMD purely by comparing the stats. The only reliable way to tell the difference in performance is to perform real-world benchmarks. Frustratingly, you’ll find that the performance comparison can vary somewhat between benchmarks. Each CPU architecture has strengths and weaknesses that suit or struggle with how specific programs are coded.
To get the best performance for your needs, it's best to review benchmark results for the specific program or programs you intend to use. If that software hasn't been benchmarked, your next best bet is to check overall average performance differences, ideally in programs that do similar things.
As long as the overall CPU architecture doesn’t change too much between generations, you can generally compare the stats to see which is fastest. This works only when there have been relatively small incremental architecture changes. Significant changes to CPU architecture often make these comparisons significantly more difficult.
For example, it’s relatively simple to compare the performance of AMD’s Ryzen CPUs across generations. It is, however, more difficult to directly predict performance differences between AMD’s Ryzen family CPUs and the Bulldozer family CPUs as the architectures have significant differences.
Single-Threaded Performance
Single-threaded performance measures how fast a single CPU core can run. Within a single generation of CPUs, the main performance differentiator is the clock speed. As discussed above, direct comparisons between CPU generations are a little more complicated due to architectural differences, and comparisons between CPU manufacturers are more complex still.
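Since stats alone can't settle the question, single-threaded benchmarks simply time a fixed, CPU-bound workload on one core. A minimal sketch of the idea in Python (the workload and iteration count here are arbitrary illustrations, not a standard benchmark):

```python
import time

def workload(n: int) -> int:
    """A fixed, CPU-bound task: the sum of squares below n."""
    return sum(i * i for i in range(n))

def benchmark(n: int = 1_000_000) -> float:
    """Time the workload on a single thread and return elapsed seconds."""
    start = time.perf_counter()
    workload(n)
    return time.perf_counter() - start

elapsed = benchmark()
print(f"single-thread time: {elapsed:.3f}s")
```

Running the same fixed workload on two different CPUs and comparing the elapsed times gives a like-for-like single-threaded comparison, regardless of architecture.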
A CPU's clock speed is measured in MHz or GHz, the standard abbreviations for megahertz and gigahertz. Mega is the SI prefix for million, while giga is the SI prefix for billion; on hard drives, for example, a megabyte is a million bytes, while a gigabyte is a billion bytes. Hertz is a unit of frequency, with 1Hz meaning once per second. A clock's second hand ticks at 1 Hertz, while a standard computer monitor refreshes at 60 Hertz.
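The arithmetic behind these units is straightforward: multiplying a GHz figure by a billion gives cycles per second, and taking the reciprocal gives the duration of a single cycle. A quick illustration (the 3GHz figure is just an example):

```python
# Converting a clock speed in GHz to cycles per second and to the
# time one cycle takes. The 3.0 GHz input is purely illustrative.

def cycles_per_second(ghz: float) -> float:
    """1 GHz = one billion cycles per second."""
    return ghz * 1_000_000_000

def seconds_per_cycle(ghz: float) -> float:
    """Cycle time is the reciprocal of the clock frequency."""
    return 1 / cycles_per_second(ghz)

print(cycles_per_second(3.0))  # 3,000,000,000 cycles per second
print(seconds_per_cycle(3.0))  # roughly a third of a nanosecond
```

At multi-GHz speeds, each individual cycle lasts well under a nanosecond, which is why thermal limits become such a pressing concern.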
The first commercially available CPU, the Intel 4004, released in 1971, had a clock speed of 740KHz (0.74MHz, or 0.00074GHz). Over time, as CPU design and manufacturing capabilities improved, clock speeds increased. In 1974, Intel released the 8080, which reached the heady clock speed of 2MHz. AMD's Athlon, released in 1999, became the first CPU to hit the 1GHz mark in early 2000.
Not long after this, CPU designs ran into thermal constraints, struggling to dissipate the heat generated in such a small area. To achieve further performance increases, a multi-core approach was adopted, allowing two parallel processes to operate simultaneously. This doubled the work that could be done in a set timeframe but crucially didn't increase the speed of completing a single process. Despite the difficulties, modern CPUs are just about becoming capable of reaching a clock speed of 5.5GHz.
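This distinction between throughput and single-task speed is easy to demonstrate. In the sketch below, two identical CPU-bound tasks are handed to a pool of two worker processes; the pair finishes in roughly the wall-clock time of one task, yet each individual task runs no faster than it would alone (the workload itself is illustrative):

```python
# Two independent CPU-bound tasks on two cores roughly doubles
# throughput, but each single task still takes just as long as
# it would on its own.
from concurrent.futures import ProcessPoolExecutor

def task(n: int) -> int:
    """One CPU-bound unit of work: the sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Both tasks run simultaneously on separate cores; total
        # wall-clock time is close to that of one task, not two
        # run back to back.
        results = list(pool.map(task, [2_000_000, 2_000_000]))
    print(results[0] == results[1])  # True: identical workloads
```

This is why a program written as a single sequential process gains nothing from extra cores: only workloads that can be split into independent pieces benefit.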
Conclusion
CPUs use a clock to govern their speed. This clock goes through many, many cycles every second. To make it easier to read and understand, CPU manufacturers present the clock speed of CPUs in GHz, a unit that means billions of cycles per second. A 5GHz CPU runs through five billion clock cycles per second.
All things being equal, it would be five times as fast as a CPU running at 1GHz. CPU architecture is exceedingly complex, though. Over time, CPU architecture has improved substantially, meaning that a modern 5GHz CPU will be more than five times faster than a 1GHz CPU from back when that was cutting edge.
Additionally, GHz isn't a measure unique to CPUs. For example, GPUs now have clock speeds measured in GHz. Electromagnetic radiation with a frequency in the GHz range falls somewhere between the super-high-frequency microwave and far-infrared portions of the electromagnetic spectrum.