Both SRAM and DRAM are forms of volatile memory, which means they need a power supply to retain the data they store. You may have heard that data is deleted from RAM when your computer shuts down, but that isn’t quite what happens. The data isn’t explicitly deleted; the charge that represents each binary 1 or 0 in the memory cells simply drains away. The mechanism is different, but the effect is the same: the data becomes inaccessible.
How a cell holds on to, or loses, its charge is central to how RAM works. In fact, it’s the distinguishing feature between SRAM and DRAM. Static Random Access Memory (SRAM) cells use six transistors wired as a pair of cross-coupled inverters, a structure that holds its state indefinitely as long as the cell has power. Dynamic Random Access Memory (DRAM) cells use a single transistor paired with a capacitor; the capacitor constantly leaks its charge and needs to be refreshed regularly.
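If it helps to picture the difference, here’s a minimal Python sketch of the idea. It’s a toy model, not a circuit simulation: the leak rate and read threshold are made-up numbers chosen for illustration, not real device characteristics.

```python
# Toy model: a DRAM cell behaves like a leaky capacitor, while an SRAM cell
# actively holds its value for as long as it is powered.

LEAK_PER_MS = 0.001   # hypothetical leak rate per millisecond (illustrative only)
READ_THRESHOLD = 0.5  # below this, a stored 1 would be misread as a 0

def dram_cell_after(initial_charge: float, elapsed_ms: float) -> float:
    """Charge left on a DRAM cell's capacitor after elapsed_ms with no refresh."""
    return max(0.0, initial_charge - LEAK_PER_MS * elapsed_ms)

def sram_cell_after(stored_bit: int, powered: bool = True) -> int:
    """An SRAM cell keeps its value indefinitely while powered."""
    return stored_bit if powered else 0

# A DRAM cell storing a 1 (full charge) slowly decays toward an unreadable state.
for ms in (0, 100, 500, 1000):
    charge = dram_cell_after(1.0, ms)
    print(f"after {ms:4d} ms: charge={charge:.2f}, read as {int(charge >= READ_THRESHOLD)}")
```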
This structural difference also explains how SRAM and DRAM are used. DRAM offers significantly greater storage density, and while it needs more complicated refresh circuitry, that overhead isn’t enough to offset the density advantage. SRAM, however, is faster than DRAM. That’s why SRAM is used in small quantities for processor caches, while DRAM provides high-capacity system RAM.
The Anatomy of a Refresh
To understand how DRAM is refreshed, it’s helpful to know how it’s read. DRAM is read in rows, with a whole row being read at once. To read a row, its word line is charged, which causes that row of memory cells to discharge onto their respective bit lines. The resulting voltages on the bit lines are fed into sense amplifiers, which amplify each one to the minimum or maximum level depending on the state of its bit line.
The sense amplifiers then latch those values and hold them, ready to be read. Data from each requested column is then placed on the memory bus and transferred to the CPU. Once the required data has been read from the row, the row’s word line and the sense amplifiers are switched off, and the bit lines are precharged again.
While all of this is quite complex, you may have noticed something important: the reading process discharges the memory cells. With the cells discharged, rereading them would return all 0s, and the data would be lost. Reading DRAM is destructive, yet the data stays in your RAM when you read it. There’s a missing step that explains this discrepancy. While the sense amplifiers are latched, their state is fed back into the memory cells they read from, keeping low cells low and recharging high cells. This happens automatically on every read and is, in effect, a refresh operation.
A dedicated refresh operation works on the same basis, except that no data is transferred to the memory bus: the sense amplifiers simply recharge the memory cells and then switch off again.
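Here’s a simplified sketch of that sense-and-restore sequence in Python. It’s a toy model, not how a real memory controller works: cells are just 0s and 1s, the `DramBank` class is made up for illustration, and the analog details of bit lines and precharging are glossed over.

```python
class DramBank:
    """Toy model of a DRAM bank: reads are destructive, then restored by write-back."""

    def __init__(self, rows: int, cols: int):
        self.cells = [[0] * cols for _ in range(rows)]

    def _sense_row(self, row: int) -> list[int]:
        # Activating the word line dumps each cell's charge onto its bit line,
        # which empties the cells (the destructive part of the read).
        latched = self.cells[row][:]          # sense amplifiers latch the values
        self.cells[row] = [0] * len(latched)  # cells are now discharged
        return latched

    def _restore_row(self, row: int, latched: list[int]) -> None:
        # While latched, the sense amplifiers drive the bit lines and
        # recharge the cells with the values that were just read out.
        self.cells[row] = latched[:]

    def read(self, row: int, col: int) -> int:
        latched = self._sense_row(row)
        self._restore_row(row, latched)  # write-back happens on every read
        return latched[col]              # only the requested column goes to the bus

    def refresh(self, row: int) -> None:
        # A refresh is the same sense-and-restore cycle, minus the data transfer.
        self._restore_row(row, self._sense_row(row))


bank = DramBank(rows=4, cols=8)
bank.cells[2][5] = 1
print(bank.read(2, 5))   # 1, and the cell is recharged as part of the read
print(bank.cells[2][5])  # still 1, thanks to the write-back
```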
Why Is a Refresh Necessary?
It’s easy to understand why a memory cell needs to be refreshed after a destructive read. It’s less intuitive why any other refreshes are needed. Unfortunately, the tiny capacitors that store each cell’s charge aren’t perfect at holding it; the charge simply leaks away, and it does so pretty quickly. Current JEDEC memory standards require every row in a DRAM chip to be refreshed at least once every 64ms.
To limit the performance impact, these refreshes are spread opportunistically across the 64ms window rather than done all at once. Rows that are read get refreshed as a side effect of the read, while unread rows are refreshed in the background, typically while the DRAM is sitting idle.
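As a rough illustration of that schedule, DDR3 and DDR4 parts typically spread 8,192 refresh commands across each 64ms window, which is where the commonly quoted refresh interval of about 7.8 microseconds comes from:

```python
# Back-of-the-envelope numbers for the refresh schedule.
RETENTION_WINDOW_MS = 64   # every row must be refreshed within this window
REFRESH_COMMANDS = 8192    # refresh commands typically spread across the window

t_refi_us = RETENTION_WINDOW_MS * 1000 / REFRESH_COMMANDS
print(f"one refresh command roughly every {t_refi_us:.2f} microseconds")  # ~7.81
```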
Research has shown that most DRAM cells can actually retain their data for around 10 seconds without being refreshed. Some statistical outliers can even hold their data for up to a minute. Unfortunately, there are also outliers in the other direction that can’t hold their charge for even a second. A very conservative refresh interval is chosen to avoid data loss or corruption. Still, modern DRAM is fast enough that refreshing everything every 64ms doesn’t impose an appreciable performance penalty.
Tip: Researchers have found that charge retention can vary significantly between cells, even within a single DRAM chip. Occasionally, good cells suddenly become worse at holding their charge, so you can’t reliably cherry-pick the good ones either.
Research has also found that temperature plays a significant role in how quickly charge decays. Above 85 degrees Celsius, charge can decay significantly faster, so the refresh interval is halved. Conversely, cold DRAM can hold its charge longer. This effect is well enough known that “cold boot” attacks exploit it, cooling the RAM to try to recover data that was “lost” from it at shutdown.
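Here’s a quick sketch of how that temperature rule plays out, assuming the usual 85-degree threshold and the 64ms / 8,192-command schedule from above:

```python
BASE_T_REFI_US = 7.8125        # normal-temperature refresh interval (64 ms / 8192)
HIGH_TEMP_THRESHOLD_C = 85.0   # above this, refresh twice as often

def refresh_interval_us(temp_c: float) -> float:
    """Refresh interval to use at a given DRAM temperature."""
    return BASE_T_REFI_US / 2 if temp_c > HIGH_TEMP_THRESHOLD_C else BASE_T_REFI_US

print(refresh_interval_us(40.0))  # 7.8125 microseconds
print(refresh_interval_us(90.0))  # 3.90625 microseconds
```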
Conclusion
DRAM cells need regular refreshing to store data long-term for two reasons. Firstly, the read operation is destructive. Secondly, each cell’s stored charge decays over time. To prevent data loss, read data is written back to the same memory cells, and cells that haven’t been read recently are refreshed in the background. For most cells, a refresh would only be strictly necessary every few seconds. However, all rows are refreshed on a very conservative schedule to protect the cells that are statistical outliers in how quickly their charge decays.
It would be possible to reduce how often refreshes are needed by using temperature sensors and retention-aware techniques that prefer cells that are good at holding a charge, avoiding where possible the statistical outliers that force such conservative tuning. However, such technologies aren’t generally used, as they add cost and complexity to solve a problem that has minimal performance impact. Share your thoughts in the comments below.
Did this help? Let us know!