Released in 1981, the IBM PC was a game-changer. It took the market by storm and became utterly dominant. With that much success and openly documented hardware, clones soon appeared. A huge amount of software was written for the PC, and it had to work within the machine's limitations; in some cases it came to rely on them. That reliance on specific quirks, combined with the PC's market dominance, left an indelible mark on the home computer marketplace.
The PC's main limitation was that its CPU could only address 1 MiB of memory. That single address space had to hold everything: RAM for the operating system and programs, the BIOS ROM, and any memory-mapped expansion hardware such as video adaptors. This wasn't much of a problem at release, when memory prices were sky-high and few machines came anywhere near the limit, and software vendors fell over themselves to stay compatible. As memory prices fell and more memory-hungry programs appeared, the limit started to bite.
Successors to the IBM PC shipped with newer CPUs that could address more RAM. Much software, however, had been tailored precisely to the PC's memory layout and couldn't take advantage of the extra address space. Conversely, many people still owned PCs with limited RAM but wanted to run software that needed more. The solution was expanded memory.
Expanded Memory
Expanded memory refers to bank switching through a window in the upper memory area to offer more memory in the same address space. The PC split its 1 MiB address space into two areas: conventional memory, the lower 640 KiB used as RAM, and the upper memory area, the remaining 384 KiB reserved for the BIOS ROM and expansion cards. It was already possible to reclaim some unused parts of the upper memory area as RAM, such as one of the two regions allocated for graphics, but that only yielded a few extra KiB. Expanded memory was needed to add real capacity.
Expanded memory takes an area of the upper memory region that isn't in use, such as one reserved for a graphics device, and treats it as a window. A system of banks then pages sections of extra memory in and out through that one window. This required a dedicated driver and, at least at first, an expansion card carrying the extra physical memory. Later generations of CPUs made it possible for software to emulate expanded memory by remapping extended memory, though that still required a capable CPU, extra memory to map, and software configured to use it.
How Did It Work?
Expanded memory worked by treating a section of the address space as a window. At any moment, that window was mapped one-to-one onto a single bank of a much larger pool of memory. A fixed one-to-one mapping on its own doesn't give you any more memory, so when needed, the driver would switch the mapping to another portion, or bank, of the expanded memory. It's a bit like changing your desktop background: you still have the same monitor, just a new picture on it. The software had to keep track of which bank of expanded memory held which data, a critical task if it ever wanted to get that data back.
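To make the idea concrete, here is a minimal sketch in C. It is purely a simulation of the concept rather than real EMS code: a single fixed-size window aliases one bank of a larger pool at a time, with the bank size and count chosen arbitrarily for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative simulation only: a 16 KiB "window" that can alias any one
 * 16 KiB bank of a much larger pool, the way an EMS page frame aliased
 * banks of memory on the expansion card. Sizes here are assumptions. */

#define BANK_SIZE   (16 * 1024)   /* one bank/page: 16 KiB               */
#define BANK_COUNT  64            /* 64 banks = 1 MiB of "expanded" RAM  */

static unsigned char expanded_pool[BANK_COUNT][BANK_SIZE]; /* memory on the card */
static unsigned char *window = NULL;                       /* what the CPU "sees" */

/* Re-point the window at a different bank (the driver's map operation). */
static void map_bank(int bank)
{
    window = expanded_pool[bank];
}

int main(void)
{
    /* Write through the window into bank 3, then into bank 42. */
    map_bank(3);
    strcpy((char *)window, "data kept in bank 3");

    map_bank(42);
    strcpy((char *)window, "data kept in bank 42");

    /* The program must remember which bank holds what; mapping bank 3
     * back in makes its contents visible at the same window again. */
    map_bank(3);
    printf("%s\n", window);   /* prints: data kept in bank 3 */
    return 0;
}
```

The key point is that only one bank is visible through the window at any time, so the bookkeeping about what lives in each bank has to be done by the program itself.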
Having to swap banks did mean that performance was worse than with a larger native memory pool. Where possible, using extended memory was the better option. But for systems and software stuck within the 1 MiB limit, expanded memory was the only way to gain more memory.
The first mainstream expanded memory standard was LIM EMS 3.0. LIM was an acronym of the three companies behind it: Lotus Development, Intel, and Microsoft. EMS stands for Expanded Memory Specification. Version 3.0 could add 4 MiB to the PC. By modern standards that's essentially nothing, but it quintupled the memory capacity of the IBM PC. The final version of EMS, version 4.0, supported up to 32 MiB of memory.
Version 3.2 was the first version to see real products hit the market. It used a 64 KiB window, or page frame, split into four independently mappable 16 KiB pages, so data could be switched in and out in smaller chunks rather than remapping the whole window at once.
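For a sense of how DOS programs actually used this, here is a rough sketch of talking to an EMS driver through its INT 67h interface. It assumes a 16-bit real-mode DOS compiler such as Turbo C, whose <dos.h> provides int86() and MK_FP(); the function numbers (41h get page frame, 43h allocate pages, 44h map a page, 45h release) are the documented EMS calls, but the code is illustrative and has minimal error handling.

```c
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS r;
    unsigned int frame_seg, handle;
    unsigned char far *frame;

    /* 41h: get the segment of the 64 KiB page frame in upper memory. */
    r.h.ah = 0x41;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) { puts("No EMS page frame"); return 1; }
    frame_seg = r.x.bx;
    frame = (unsigned char far *) MK_FP(frame_seg, 0);

    /* 43h: allocate 4 logical pages (4 x 16 KiB = 64 KiB of expanded memory). */
    r.h.ah = 0x43;
    r.x.bx = 4;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) { puts("Allocation failed"); return 1; }
    handle = r.x.dx;

    /* 44h: map logical page 0 into physical page 0 of the frame. */
    r.h.ah = 0x44;
    r.h.al = 0;          /* physical page 0..3 within the frame */
    r.x.bx = 0;          /* logical page owned by our handle    */
    r.x.dx = handle;
    int86(0x67, &r, &r);

    frame[0] = 0x42;     /* this byte now lives in expanded memory */

    /* 45h: give the pages back before exiting. */
    r.h.ah = 0x45;
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}
```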
Decline
By the 1990s, graphical operating systems such as Windows were taking over from text-based ones such as DOS, and that put the final nail in the coffin for expanded memory. Expanded memory had always been a bit of a bodge: it was a fix for a limitation that would otherwise have required replacing the hardware outright. Newer generations of DOS-based PCs weren't limited to 1 MiB of RAM, but software still had to cater to that limit because of the vast installed base of older machines.
The switch to entirely new classes of operating system allowed protected mode, with its support for larger memory pools and virtual memory addressing, to be used properly. Once larger memory pools were accessible through protected mode by default, the death knell had sounded for expanded memory, and the whole concept of the old memory areas was essentially rendered obsolete.
Conclusion
Expanded memory was the term for the practice of paging banks of memory in and out through a window address in the upper memory area. The concept was developed to work around the IBM PC's hard 1 MiB memory limit, which was imposed by the CPU. It was impossible to add new address space, but it was possible to reuse one section of it by swapping banks of memory out and back in as needed. Doing so required a special driver and, initially, dedicated hardware. Later implementations could perform the hardware's job in software, though that relied on extra memory already being present to map.
Expanded memory solved a problem created by the IBM PC: the need for software that stayed compatible with it while also using more RAM than the machine could natively address. The advent of graphical operating systems such as Windows changed that landscape. Since they could natively address far more than the original PC's 1 MiB, both the problem and the need for expanded memory evaporated. The concept has been essentially obsolete since the early 1990s.