One of the core features of early computers was that they were sequential in their operation. Every task was completed in the order in which it was added to the queue of instructions to be executed. This is relatively easy to code, conceptualize, and execute. However, it presents some performance-limiting issues that eventually necessitated its replacement with out-of-order, pipelined designs.
A sequential processor works on the concept of FIFO, or First-In-First-Out. All processes to be completed are added to a queue, which is then worked through in arrival order. This is simple to process, understand, and model but has potentially severe performance limitations.
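The FIFO model above can be sketched in a few lines. This is an illustrative toy, not real processor logic: the instruction strings and the queue are hypothetical, but the pop-from-the-front behavior is exactly the First-In-First-Out discipline described.

```python
from collections import deque

# Hypothetical instruction queue: a sequential processor takes the
# oldest entry and runs it to completion before touching the next one.
queue = deque(["load A", "add A, B", "store C", "jump loop"])

executed = []
while queue:
    instruction = queue.popleft()  # First-In-First-Out: oldest item first
    executed.append(instruction)

# Instructions finish in exactly the order they arrived.
print(executed)
```

Because nothing can jump the queue, completion order always matches arrival order, which is the property the rest of this article pokes holes in.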
Needing to Wait for Data
The single most significant performance-limiting factor of sequential processing comes down to unavoidable delays. Most tasks need to access data in some form, which takes time. The data can be accessed quickly, within a few CPU cycles, if stored in the CPU cache. Data in system RAM takes longer to access, around 50 cycles. Hard disk drives were the only option back when sequential processors were used, and data stored on an HDD is much slower to access still, taking hundreds or thousands of CPU cycles.
Because the CPU can only complete its instructions in order, it must wait for this data before finishing each instruction. While a single CPU cycle is very short (a 1GHz CPU has a cycle time of one nanosecond), these delays are CPU cycles where nothing can be done. While waiting for data, the CPU must sit idle because it can't put that instruction on the back burner and pick up something else in the meantime. The wasted time quickly stacks up, and the CPU can end up spending an appreciable percentage of its time sitting idle, waiting for data.
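A little arithmetic shows how badly the waiting dominates. The cycle counts below are rough, illustrative figures in the spirit of the ones quoted above (a few cycles for cache, around 50 for RAM, thousands for an HDD), and the five-instruction stream is made up for the example.

```python
# Illustrative latencies in CPU cycles (rough figures, not measurements).
CACHE, RAM, HDD = 4, 50, 10_000

# A toy instruction stream: each entry is (compute_cycles, wait_cycles).
instructions = [(1, CACHE), (1, CACHE), (1, RAM), (1, HDD), (1, CACHE)]

compute = sum(c for c, _ in instructions)   # cycles doing useful work
waiting = sum(w for _, w in instructions)   # cycles stalled on data
total = compute + waiting

idle_fraction = waiting / total
print(f"{idle_fraction:.1%} of cycles spent idle")
```

One disk access is enough to make the processor idle for well over 99% of the run, which is why a single slow instruction poisons everything queued behind it.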
Unable to Prioritize
Another issue that sequential processors face is that they can't prioritize or deprioritize tasks depending on their urgency or importance. For a good user experience, you want the computer interface to be smooth to use. When you move the mouse, you want the cursor to move. When you press a key on the keyboard, you want the letter to appear on the screen. Each of these actions takes processing time, and a sequential processor simply adds them to the back of the queue, however urgent they are.
This means that if several slow instructions, all reading data from the HDD, sit in the queue ahead of your mouse-movement instruction, the cursor will lag. One slow instruction shouldn't cause a noticeable delay, but several in a row can.
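The queuing delay is just a sum. The millisecond figures below are hypothetical, chosen only to show how a handful of disk reads ahead of an interactive event adds up to visible lag.

```python
# Hypothetical latencies in milliseconds for queued work.
HDD_READ_MS = 10       # one slow instruction reading from disk
MOUSE_MOVE_MS = 0.1    # the cursor-update work itself is tiny

# The mouse-move instruction sits behind several HDD reads in the queue.
queued_ahead = [HDD_READ_MS] * 5

# In a strict FIFO, the cursor update cannot start until all of them finish.
delay_ms = sum(queued_ahead)
print(delay_ms)  # 50 ms of lag before the cursor work even begins
```

Fifty milliseconds is comfortably inside the range people perceive as sluggishness, even though the cursor update itself would take a tenth of a millisecond.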
The solutions to these problems were out-of-order processing and pipelining. With a pipeline, it becomes possible to run parts of several different instructions simultaneously, because each pipeline stage uses separate hardware. It even becomes possible to configure a superscalar pipeline architecture. In out-of-order processing, a scheduler tries to optimize the ordering of tasks. It can also set a task aside if it needs to wait on something.
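The payoff of pipelining can be shown with the classic back-of-the-envelope comparison: with an idealised pipeline (one cycle per stage, no hazards or stalls, which is an assumption for the sake of the sketch), a new instruction finishes every cycle once the pipeline is full.

```python
# Toy comparison of sequential vs pipelined execution, assuming an
# idealised pipeline with one cycle per stage and no stalls.
stages = 5          # e.g. fetch, decode, execute, memory, write-back
instructions = 100

# Sequential: each instruction occupies all stages before the next starts.
sequential_cycles = instructions * stages

# Pipelined: stages overlap, so after filling the pipeline (stages cycles),
# one instruction completes per cycle.
pipelined_cycles = stages + (instructions - 1)

print(sequential_cycles, pipelined_cycles)  # 500 vs 104
```

Real pipelines lose some of this speedup to hazards and mispredictions, but the nearly five-fold gap is the reason every modern CPU pipelines.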
Sequential processing was a style of processor design in early CPUs that processed all instructions in the order in which they were queued. This design was simple and easy but led to performance problems due to delays and the inability to prioritize tasks. Sequential processors were eventually replaced with out-of-order processors that could adjust and optimize the order of execution.
These processors also included pipelines, allowing better use of CPU resources and letting an instruction be set aside partway through execution. All modern CPUs are out-of-order processors. While they are a lot more complex to design, they offer a significant performance benefit that more than makes up for that complexity.