On 8 August 1953, engineers installed a component that resembled a chain-link fence on the Whirlwind computer, a joint project between the Massachusetts Institute of Technology (MIT) and the U.S. Navy. The project, led by MIT’s Jay W. Forrester, with the memory built by engineer William Papian, entailed a real-time interactive flight simulator and aircraft stability analyzer for Navy flight training. The component was the first magnetic-core memory, and it helped Whirlwind compute at an impressive speed. This month marks the 50th anniversary of that momentous event, which forever changed computer history.
Why was the magnetic-core memory so significant? A recent IEEE Milestone nomination underscores the reason. IEEE’s Santa Clara Valley Section and the Magnetic Disk Heritage Center have proposed recognizing the site where RAMAC, the first magnetic disk drive, was produced (from 1952 through 1958). RAMAC’s contribution echoes the innovation fostered by magnetic-core memory: real-time processing. RAMAC magnetic disk drives allowed computers to move toward processing in real time by incorporating magnetic, random-access storage. The Sabre airline reservation system of 1960, the first truly real-time, online transaction-processing system, represents RAMAC’s first successful application.
Forrester’s earlier magnetic-core memory benefited RAMAC’s disk drive. The magnetic-core memory, a woven mesh of ferrite rings threaded on metal wires, created locations where binary information could be recorded and retrieved magnetically. The ability to pinpoint specific intersections, or addresses, within the core planes, at which information could be stored and then recalled at random, was an unparalleled innovation in computing. The computer’s central processing unit and its memory of stored data, procedures, and programs could now operate interactively. This interactivity boiled down to one major gain: speed. Random-access memory was born.
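The addressing idea described above, selecting one core by the intersection of two wires, can be sketched in a few lines of Python. This is an illustrative toy model, not a description of actual Whirlwind circuitry; the class name and dimensions are invented for the example. It also models one real quirk of core memory: reads were destructive, so hardware had to rewrite the bit after sensing it.

```python
# Hypothetical sketch of coincident-current addressing in one
# bit-plane of magnetic-core memory. Each core sits at the
# intersection of an X wire and a Y wire; pulsing both wires at
# half strength flips only the core where the currents coincide.

class CorePlane:
    """Toy model of a single bit-plane of magnetic cores."""

    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x, y, value):
        # Addressing core (x, y): every other core sees at most
        # half the switching current and keeps its state.
        self.bits[x][y] = value

    def read(self, x, y):
        # Core reads were destructive: sensing the flip cleared
        # the core, so the value was rewritten immediately.
        value = self.bits[x][y]
        self.bits[x][y] = 0       # destructive read
        self.write(x, y, value)   # restore cycle
        return value

plane = CorePlane(16, 16)
plane.write(3, 7, 1)
print(plane.read(3, 7))  # -> 1, and the restore keeps it readable
```

Because any (x, y) pair is reachable in one selection cycle, access time is the same for every address, which is exactly the property that made the memory "random access."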
Although increased computing speed was always a goal, it was simply not feasible in early systems that relied on tape drives for memory access. Magnetic-core memory broke this technological bottleneck. When Forrester began working on the Navy’s flight simulator system in 1945, he quickly realized that the system needed a computer that could respond to pilots’ actions. The system required real-time reaction and lightning-fast access to binary bits of stored memory. Whirlwind became the first digital computer with a magnetic-core memory that could operate in real, interactive time.
Whirlwind was first demonstrated on 20 April 1951, and the core memory was installed in August 1953. Whirlwind’s triumph underwrote the growth of MIT Lincoln Laboratory, the MITRE Corporation, and the SAGE (Semi-Automatic Ground Environment) air defense system, which operated in the United States until the Air Force ended the program in 1983.
Magnetic-core memory’s popularity lasted until integrated circuitry superseded it in the 1970s. But the greatest legacy that Whirlwind, Forrester, and magnetic-core memory left lies in the conceptualization of random-access memory and the instantaneous speed of real-time processing. Where would we be today if we could not withdraw money from the ATM, buy gas, or have our checking accounts updated in real time? Or make a hotel or plane reservation? Or sit down with our laptops and work online while our personal computers encompass storage, memory, real time, and networking all in one immediately gratifying package? Magnetic-core memory spawned the random-access era; its anniversary is one worth noting.
The capacity and speed of magnetic storage devices continue to improve, mainly by increasing areal density through decreasing the storage spot size. Increasing storage density requires smaller read/write heads, greater head sensitivity, closer head-to-disk flying distances, and improved materials. The highest magnetic storage density achieved was 35 Gb per square inch, demonstrated in October 1999 at an IBM laboratory using giant magnetoresistive (GMR) technology. Current thinking is that a limit of about 100 Gb per square inch exists. Other limits that may be encountered are economic, with the advent of competing technologies.
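To put the 35 Gb-per-square-inch figure in perspective, a quick back-of-the-envelope calculation shows what one platter surface could hold at that density. The platter radii below are assumed values for a typical 3.5-inch drive, chosen only for illustration; they do not come from the article or from IBM's demonstration.

```python
import math

# Illustrative capacity estimate for one platter surface at the
# density cited in the text. Radii are assumptions, not measured
# values from any actual drive.
areal_density_gb_per_in2 = 35.0   # IBM's October 1999 GMR demo
outer_radius_in = 1.75            # assumed outer recording radius
inner_radius_in = 0.6             # assumed hub / landing-zone radius

usable_area_in2 = math.pi * (outer_radius_in**2 - inner_radius_in**2)
capacity_gb = areal_density_gb_per_in2 * usable_area_in2

print(f"usable area:   {usable_area_in2:.2f} square inches")
print(f"one surface:   {capacity_gb:.0f} Gb (~{capacity_gb / 8:.0f} GB)")
```

Under these assumptions a single surface holds roughly 300 gigabits, on the order of 37 gigabytes, which makes clear why each incremental density gain translated into dramatically larger drives.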