First-Hand: Engineering the Technology of the Future: Building High-Speed Computing Machines in the 1950s
Submitted by John Alrich
The decision I had to make about my lifetime career was whether it would be in engineering, mathematics or physics. Since in those days physics and math majors were required to be conversant with a foreign language (usually German) and I disliked learning languages, engineering became my choice almost by default. I use the term "almost" because the other positive factors in selecting engineering were its utility, pragmatism and immediacy of application, often showing positive results in a relatively short time span. (Perhaps something much less true today than it was forty years ago.)
I graduated with a BS in electrical engineering from the University of California, Berkeley, in June 1948. Jobs were easy to find, and most of my friends and I had several offers before we graduated. I look back at the last two years I spent at Berkeley as being among the happiest, most carefree, and most stimulating periods of my life (the GI Bill paid nearly all my expenses).
The instructor I remember the most vividly was John R. Whinnery, who was teaching electromagnetic theory while he completed his Ph.D. He didn't know it at the time, but if there was anything that discouraged many of us from going on to graduate work, it was John! He was about our age but had already published in the IRE Proceedings (as it was called) a number of times and co-authored a textbook with Simon Ramo. Quite a few of us thought that if one had to be as bright and disciplined as Whinnery to go beyond a BS, there was little hope for us mortals.
Also, I was twenty-five years old by this time and anxious to get into industry, marry, and raise a family; i.e., plunge into the rather conventional life-style that had been denied most young men like myself, recently returned from the service. Although over the years I did do graduate work from time to time, I never did go back for an advanced degree. This may have been a mistake, but in those days advanced degrees were less necessary than they are today.
I much preferred California, so I looked in the want ads. I spotted a firm (still in Pasadena) that I knew something about from my earlier work for Bendix: Consolidated Electrodynamics Corporation (CEC). It was 1951, and this was where my career really got rolling.
CEC made two major products of fine quality and reputation: photographic strip-chart recorders and mass spectrometers. Their technical staff was an excellent combination of talented engineers and physicists and was well supported by a good manufacturing facility; perhaps three hundred to four hundred people in all. Also, its president, Phil Fogg, was a man of great business courage and vision.
Fogg had been informed by several of his technical people that their method of data reduction for their mass spectrometers, using an analog computer of in-house design, was becoming time-consuming and was of limited accuracy. The answer was to design something like the machine John von Neumann and some of his people were completing back at Princeton, New Jersey. Since back then there were probably fewer than several hundred people in the U.S. (perhaps more in Britain, due to Turing and the Enigma cipher-breakers) who knew a great deal about this new technology, Fogg and his advisors probably underestimated the difficulty of what was proposed. At any rate, CEC decided to sponsor this development without much fanfare and began hiring for the program.
I was the second person hired for this aspiring computer group, and about a half-dozen professionals from CEC worked part or full time on the program initially. These included an excellent mathematician; a physicist, Cliff Berry; and a Program Manager, Martin Shuler, who had worked on radar during the war. They knew no more about digital computers than I did, which was exactly nothing.
Early on, CEC hired two part-time consultants who had computer experience: Dr. Harry Huskey, who was developing the SWAC at UCLA for the National Bureau of Standards, and Dr. Ernst Selmer (from Norway), who had worked with John von Neumann and happened to be teaching for a short time at Cal Tech. Also, as part of our education, there was one text that we knew about from which we could learn the elements of computer design (High-Speed Computing Devices, by the staff of Engineering Research Associates, Inc., McGraw-Hill, 1950). So this book started out as our "bible."
Huskey gave evening seminars once a week, but Selmer did virtually all of the logic design, now usually referred to as "architecture." The rest of us scrambled along as best we could, developing the circuitry and modules.
It was sometime during this period that CEC spun us off as a wholly-owned subsidiary with our new name, "ElectroData." We were on our own, at least as a cost-center.
I became program manager of the arithmetic section, which included everything except the drum memory, the paper-tape reader and punch, the console, the magnetic tape drives, the typewriter control unit, the punched-card equipment, and the power supply. I won't go into a description, since this was published in the IRE Transactions on Electronic Computers (John Alrich, "Engineering Description of the ElectroData Computer," March 1955, vol. EC-4, no. 1), except to say that the mainframe was largely composed of one hundred and seventy-three plug-in vacuum-tube modules, eight tubes per module (usually dual triodes), arranged in an air-cooled cabinet about twelve feet long, twenty-eight inches deep, and seventy-eight inches high. Quite an impressive sight when all the tubes were lit!
It operated as a single-address, fixed-point, binary-coded-decimal machine, with numbers represented as absolute value and sign and shifted serially by half-byte (one BCD digit at a time). I mention this level of detail because of what happened.
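To make "shifted serially by half-byte" concrete: in binary-coded decimal, each decimal digit occupies one four-bit half-byte, so a serial shift moves the word along one whole digit at a time. Here is a small illustrative sketch in modern Python (my own reconstruction for the reader, not the Datatron's actual logic):

```python
# Illustrative only: sign-and-magnitude BCD, one decimal digit per
# four-bit half-byte (nibble), shifted one digit at a time.

def to_bcd_word(n, width=10):
    """Encode abs(n) as a sign flag plus `width` BCD digits, most significant first."""
    sign = 0 if n >= 0 else 1
    digits = [(abs(n) // 10**i) % 10 for i in reversed(range(width))]
    return sign, digits

def shift_right_one_digit(digits):
    """One 'serial half-byte' shift: the whole word moves one digit."""
    return [0] + digits[:-1]

sign, word = to_bcd_word(-1234567890)
print(sign, word)                   # 1 [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
print(shift_right_one_digit(word))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```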
Shortly after shipments of the computer (by then called the Datatron 201; a later model became the Datatron 205) began around 1954, usually to large corporations and universities, a significant advance in capability was requested (in at least one case, demanded!) by the scientific users. Burroughs, which by then owned ElectroData, decided to comply, partly because we thought other firms were also working on this feature.
Virtually all machines in those days operated fixed-point internally, and most were binary machines. Floating-point (FP) operations were done by programming special subroutines for add, subtract, multiply, divide, and conversion from fixed point to floating point and the reverse. In fact, using IBM punched cards and conventional keypunch equipment, the FP word format was already pretty well established, and FP arithmetic was being done on electromechanical calculators. (These calculations were very time-consuming and, hence, used only where absolutely necessary.)
The field making up the word, in our case ten BCD digits plus sign, was divided into a two-digit exponent to the base ten and an eight-digit mantissa, always less than one. Before each floating-point addition or subtraction, the exponents of the two operands were compared. If they were not equal, the mantissa of the smaller number was shifted right and its smaller exponent incremented until both exponents were equal.
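In today's terms, the alignment step looks something like this (a minimal sketch of the idea just described, ignoring signs for brevity and not pretending to be the original hardware logic; a number is an exponent plus an eight-digit integer standing for a mantissa less than one):

```python
# Sketch of decimal floating-point addition with exponent alignment.
# A value (exp, man) stands for 0.dddddddd * 10**exp, where man holds
# the eight mantissa digits as an integer.

MANTISSA_DIGITS = 8

def align_and_add(a, b):
    exp_a, man_a = a
    exp_b, man_b = b
    # Shift the mantissa of the smaller-exponent operand right,
    # bumping its exponent, until the exponents match.
    while exp_a < exp_b:
        man_a //= 10
        exp_a += 1
    while exp_b < exp_a:
        man_b //= 10
        exp_b += 1
    total = man_a + man_b
    # A carry out of the top digit forces one more right shift.
    if total >= 10**MANTISSA_DIGITS:
        total //= 10
        exp_a += 1
    return exp_a, total

# 123.45678 + 4.5, i.e. 0.12345678e3 + 0.45000000e1
print(align_and_add((3, 12345678), (1, 45000000)))  # (3, 12795678)
```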
In a similar manner, multiply and divide were implemented, taking care that the exponent arithmetic was proper and that the mantissa was in normal form after completion of the operation. With this procedure, no change to the mainframe was needed except for the added circuitry. The range of a number was increased by fifty orders of magnitude, with a penalty of two orders of magnitude in resolution. The speed of operation was improved considerably over the classical subroutine method, of course; probably by several orders of magnitude.
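For multiplication, that normalization amounts to adding the exponents, keeping the top eight digits of the product, and shifting left until the leading digit is nonzero; again a sketch of the idea rather than the actual circuitry:

```python
# Sketch of decimal floating-point multiplication with normalization,
# in the same (exp, man) representation as the addition sketch above.

MANTISSA_DIGITS = 8

def fp_multiply(a, b):
    exp_a, man_a = a
    exp_b, man_b = b
    # The product of two eight-digit fractions is a sixteen-digit
    # fraction; keep the top eight digits and add the exponents.
    exp = exp_a + exp_b
    man = (man_a * man_b) // 10**MANTISSA_DIGITS
    # Normalize: shift left while the leading digit is zero.
    while man and man < 10**(MANTISSA_DIGITS - 1):
        man *= 10
        exp -= 1
    return exp, man

# 2.0 * 3.0, i.e. 0.20000000e1 * 0.30000000e1
print(fp_multiply((1, 20000000), (1, 30000000)))  # (1, 60000000), i.e. 6.0
```

The range-versus-resolution tradeoff mentioned above falls straight out of this format: two of the word's ten digits move from the mantissa to the exponent, costing two orders of magnitude of resolution, while the two-digit decimal exponent extends the representable range enormously.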
The scientific programmers, who were the heavyweights, were ecstatic over what this new capability meant for their work. What follows is generally true, but my remembrance of some of the details may have suffered with time:
When a scientific sale was in the offing, usually a Ph.D. in mathematics was sent to the customer's site, where they could discuss the problem one-on-one. These non-commercial customers were relatively rare, and internal FP operations were non-existent so far as we knew at that time. Therefore, I had a completely free hand as to cost and performance, subject only to very mild, self-imposed constraints: the unit should be made of the standard enclosures and modules already in use on the mainframe as much as possible; it should be capable of being retrofitted in the field; and it should satisfy our senior programmers in technique and performance.
It took about a year to build a working prototype. This was one of the last projects within Burroughs (or within any computer company, for that matter) which was not designed by a committee! Our marketing people generally did not know what floating-point operations were, since most of their customers were commercial users. The finished unit was styled exactly like the mainframe and bolted onto one end of the cabinet. It had about thirty-five plug-in modules, most of which were of the same design as those in the CPU. There was enough "space" in our command structure to add the six new commands.
Cost was secondary, since no other FPC was available as a standard unit when we started our design. IBM, Burroughs' great competitor, announced an add-on for its Model 650 shortly after my prototype was finished. I remember that our marketing vice president, who had tentatively set the price of the FPC at around twenty thousand dollars, immediately bumped it up to twenty-two thousand five hundred dollars after IBM announced their version would be available for twenty-five thousand dollars or thereabouts. I think we also shipped our first FPC before IBM did. I don't think many engineers today have as much fun as we did back in the late '50s.