DRAM is a form of computer memory that is used as system RAM. All modern computing devices use one flavor or another of Synchronous DRAM as their system RAM. The current generation is DDR4, though DDR5 has just made it to market.
Before DDR RAM, though, there was SDR RAM. Technically, SDR RAM is a retronym, as it was initially referred to as SDRAM, short for Synchronous Dynamic Random Access Memory. This made it distinct from previous forms of DRAM, which were asynchronous.
In asynchronous DRAM, unlike its synchronous successor, the memory clock is not synchronized with the CPU clock. This means the CPU is unaware of the speed at which the RAM operates. The CPU issues commands and provides data to be written to RAM as fast as the command and I/O buses allow, with the expectation that the memory controller will handle them at an appropriate speed. It also means the CPU requests data without knowing how long it will have to wait for the response.
In practice, this meant the CPU could send commands no faster than the timing specification allowed. If a second command was sent too quickly, its operation could interfere with the first, leading to data corruption or nonsensical responses. Even so, the system worked and was the standard for DRAM from its inception in the 1960s until synchronous DRAM demonstrated its superiority and became the dominant form of DRAM.
History of Asynchronous DRAM
The first iteration of asynchronous DRAM had a built-in inefficiency. All DRAM is addressed by providing the row and column of a group of memory cells. Once that address is provided, data can be written to or read from those cells, depending on the command issued. To interact with any memory cells, the row must be provided first, and opening a row is the slowest part of the read or write process. Only once the row has been opened can a column be selected to access specific memory cells.
The first iteration of asynchronous DRAM required the row address to be provided for every interaction. Crucially, this meant the slow process of opening the row happened every time, even when the interaction was with the same row. The second iteration, called Page Mode DRAM, allowed a row to be held open so that multiple read or write operations could be performed on any of the columns in that row.
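As a rough illustration of why holding a row open helps, here is a tiny Python timing model. The cycle counts are made up for demonstration and don't correspond to any real part; only the ratio matters.

```python
# Hypothetical cycle counts, chosen only to illustrate the idea.
ROW_OPEN = 5    # cycles to open (activate) a row -- the slow step
COL_ACCESS = 2  # cycles to select a column and transfer its data

def classic_dram(accesses):
    """First-generation DRAM: the row is re-opened for every access."""
    return sum(ROW_OPEN + COL_ACCESS for _row, _col in accesses)

def page_mode_dram(accesses):
    """Page Mode DRAM: the row stays open across same-row accesses."""
    cycles, open_row = 0, None
    for row, _col in accesses:
        if row != open_row:        # only pay the row-open cost on a row change
            cycles += ROW_OPEN
            open_row = row
        cycles += COL_ACCESS
    return cycles

# Four reads from different columns of the same row:
reads = [(0, col) for col in range(4)]
print(classic_dram(reads))    # 28 cycles: 4 * (5 + 2)
print(page_mode_dram(reads))  # 13 cycles: 5 + 4 * 2
```

The more consecutive accesses that land in the same row, the more the single row-open cost is amortized, which is exactly the benefit Page Mode introduced.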
Page Mode DRAM was later improved upon with Fast Page Mode DRAM. In Page Mode DRAM, the column address could only be specified after the row was opened, and a separate command was then issued to select the column. Fast Page Mode allowed the column address to be provided before the column-select command, yielding a minor latency reduction.
EDO DRAM
EDO DRAM, or Extended Data Out DRAM, added the ability to select a new column while data was still being read out of the previously selected column. This allowed commands to be pipelined and provided a performance boost of up to 30%.
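The pipelining gain can be sketched with another toy model (again with invented cycle counts): without EDO, each access pays for the column select and the data output in sequence; with EDO, the next column select hides under the previous data output.

```python
# Hypothetical cycle counts, for illustration only.
COL_SELECT = 2  # cycles to latch a column address
DATA_OUT = 3    # cycles to drive the data onto the bus

def fpm_reads(n):
    """Fast Page Mode: column select and data out happen back to back."""
    return n * (COL_SELECT + DATA_OUT)

def edo_reads(n):
    """EDO: each new column select overlaps the previous data out."""
    if n == 0:
        return 0
    # Only the first access pays both steps; later selects are hidden.
    return COL_SELECT + n * DATA_OUT

print(fpm_reads(8))  # 40 cycles
print(edo_reads(8))  # 26 cycles -- roughly a third faster in this toy model
```

The saving per access is the column-select time, so the longer the run of same-row reads, the closer the overall gain approaches that fraction of the access cycle.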
Burst EDO DRAM was the last asynchronous DRAM standard. By the time it reached the market, synchronous DRAM was already well on its way to becoming the dominant form of DRAM. Burst EDO allowed a single column address to start a burst: after selecting a column, up to the next three columns in the row could be read without issuing further addresses, decreasing latency.
Conclusion
Asynchronous DRAM was an early form of DRAM that didn't synchronize the memory clock with the CPU's clock. This worked well enough while CPU frequencies were low, but as they increased, the approach started showing its weaknesses. Synchronous DRAM eventually came to dominate the market, and its efficiency and scalable performance continue to improve. Today, essentially no asynchronous DRAM is actively manufactured, as nothing really uses it, and it's unlikely to ever make a comeback.