You might think that all your documents and photos are stored neatly in a logical folder structure on your computer. You'd be wrong, though: that's merely the view the computer shows you. In reality, if you're using an SSD, the data is spread out all over the drive.
HDDs work best if you occasionally put them through a defragmentation process. This sorts the chunks of data on the HDD so related bits sit close together and can be read from the drive sequentially. That matters because HDDs are much faster at reading sequential data from their platters than at making random reads.
SSDs are much better at random reads because they don’t have to wait for the read head to get to the right place first. They’re also much faster in general, and there are plenty of other reasons to prefer them.
The thing is, SSDs suffer a lot more from wear. Each time data is read from a memory cell, and especially each time data is written to it, the cell degrades slightly. To minimize wear and increase drive longevity, SSDs use a process called wear leveling: when writing data, the SSD places it on the least-worn cells first.
This results in oddities like data technically remaining on the drive after you overwrite a file, simply because the new version is saved to different memory cells. The "deleted" data is marked as "can be overwritten" rather than actively erased, since erasing it would consume one more of the limited number of writes the affected memory cells can endure.
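The write behavior described above can be sketched in a few lines. This is an illustrative model only, not a real SSD controller API: the class, its fields, and the cell-granularity bookkeeping are all assumptions made for the demonstration.

```python
# Toy model of wear-leveled writes: rewrites land on the least-worn
# free cell, and the old copy is merely marked overwritable ("stale").

class WearLeveledDrive:
    def __init__(self, num_cells):
        self.wear = [0] * num_cells   # writes performed per cell
        self.mapping = {}             # logical address -> physical cell
        self.stale = set()            # cells holding "deleted" data

    def write(self, logical_addr):
        # Pick the least-worn cell that is neither in use nor stale.
        used = set(self.mapping.values())
        free = [c for c in range(len(self.wear))
                if c not in used and c not in self.stale]
        cell = min(free, key=lambda c: self.wear[c])
        if logical_addr in self.mapping:
            # The old copy stays in flash, just marked overwritable.
            self.stale.add(self.mapping[logical_addr])
        self.mapping[logical_addr] = cell
        self.wear[cell] += 1
        return cell

drive = WearLeveledDrive(num_cells=8)
first = drive.write("file.txt")
second = drive.write("file.txt")   # rewrite lands in a different cell
print(first != second)             # True: old data is still physically present
print(first in drive.stale)        # True: old cell is marked overwritable
```

Real controllers work at page and block granularity and must erase whole blocks before reuse, but the core idea is the same: new writes avoid the most-worn cells, so overwritten data lingers until its cell is recycled.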
Keeping Track
To work efficiently, SSDs keep a table of where everything is saved and what can and can't be overwritten. This table doesn't take much space, but any operating system constantly makes small write operations, each of which updates it. Storing the table in flash would mean many writes concentrated in one section of the SSD, which would reduce its lifespan.
To avoid this, most SSDs include some onboard DRAM. DRAM doesn't suffer from the same wear that flash memory does, so it can be updated as often as needed. Incidentally, it's also faster. So, when you request a file, an SSD with DRAM returns the result slightly faster because the lookup time is reduced.
Some budget SSDs forgo DRAM as a cost-saving measure, though. This hurts performance and reduces the lifespan of the drive.
Enter HMB
HMB, or Host Memory Buffer, was designed to reduce the performance and lifespan penalties associated with DRAM-less SSDs. It uses another source of DRAM to store at least a partial logical-to-physical map of the drive. The great thing about this is that every computer already has an abundant source of DRAM: the system's main RAM.
SSD drivers allow the SSD to request that a small portion of system RAM be set aside to store the lookup table. While SSDs with onboard DRAM typically feature 1GB of DRAM per 1TB of flash memory, the HMB is typically nowhere near that size. Exact implementations vary between manufacturers and drives, but around 100MB is standard. This lets the most commonly accessed data have its location mapped for faster access; other data has to be looked up the slow way.
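Keeping only the hot part of the map in host RAM behaves like a cache: lookups for frequently used addresses are fast, while cold addresses fall back to reading the map from flash. Here is a minimal sketch of that idea, with hypothetical names and sizes; real HMB implementations are firmware-specific and not structured exactly like this.

```python
# Sketch of a partial logical-to-physical map cached in host RAM.
# Hot entries are served from the cache; misses read the full map
# from flash (the slow path) and evict the least recently used entry.

from collections import OrderedDict

class PartialMapCache:
    def __init__(self, capacity, full_map_in_flash):
        self.capacity = capacity            # e.g. entries fitting in ~100MB
        self.flash_map = full_map_in_flash  # slow path: stored on NAND
        self.cache = OrderedDict()          # hot entries in host RAM
        self.slow_lookups = 0

    def lookup(self, logical_addr):
        if logical_addr in self.cache:
            self.cache.move_to_end(logical_addr)  # fast path: HMB hit
            return self.cache[logical_addr]
        self.slow_lookups += 1                    # miss: read map from flash
        physical = self.flash_map[logical_addr]
        self.cache[logical_addr] = physical
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used
        return physical

full_map = {addr: addr * 4 for addr in range(10)}
hmb = PartialMapCache(capacity=4, full_map_in_flash=full_map)
hmb.lookup(1)
hmb.lookup(1)             # second lookup of a hot address is a cache hit
print(hmb.slow_lookups)   # 1
```

This is why HMB helps most with frequently accessed data: the first access to a cold address still pays the flash-lookup cost, but repeat accesses are served from host RAM.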
This results in improved latency in most workloads compared to a straight DRAM-less SSD, though performance is not entirely in line with onboard DRAM. It also helps reduce some of the wear on the SSD itself, but this benefit is difficult to measure and likely minimal.
Conclusion
HMB is a helpful addition to DRAM-less SSDs, and it comes at literally no extra monetary cost. It alleviates a good portion of the performance degradation associated with DRAM-less designs, though it still doesn't offer the same level of performance as onboard DRAM. It also results in slightly higher system RAM usage, which could be an issue on budget computers with minimal RAM.
The RAM allocated to HMB is typically small, and the system can offer less than the SSD requests if needed. All in all, HMB is essentially a win with no real downsides. In a direct comparison between a DRAM-less SSD with HMB support and one without, go for the HMB model, all other factors being equal. We still recommend SSDs with onboard DRAM, though, as these offer the best performance for only a slight increase in cost. What are your thoughts? Share them in the comments below.
Xenek Stoehr says
Can HMB be used on linux? Can it be set or increased to reduce wear and make more complex tasks functional (using lots of data at once) by increasing the HMB manually to 1GB for the 1TB disks?