Oral-History:Jan Rajchman
About Jan Rajchman
Rajchman, a computer pioneer, was born in England and educated in Zurich, Switzerland. He graduated with a degree in electrical engineering from the Swiss Federal Institute of Technology in 1935. Rajchman began working for RCA in 1935, and in 1936 began his work on the electron multiplier with Vladimir Zworykin. In 1939, Rajchman began working on the possibilities of computation, developing the idea of the selective storage electrostatic tube and magnetic core memories. Rajchman was involved in Project Lightning and held the position of director of RCA's computer research laboratory from 1957 to 1967. In 1974, he received the IEEE Edison Medal for a career of meritorious achievement.
The interview covers Rajchman's work with electron multipliers and his involvement with the first computation developments in the late 1930s. There is an extended discussion of ENIAC and Rajchman's activities with various people involved in the project, including John von Neumann and Herman H. Goldstine. Rajchman also discusses his involvement with Project Lightning. The interview continues with a discussion of Rajchman's extensive work on computer memory, including the Selectron, magnetic cores, and transfluxors. The development of timesharing software and RCA's pioneering work in superconductive memory is also covered. The interview concludes with Rajchman's remarks concerning what he sees to be the fundamental problem to be resolved in computer technology — the development of an inexpensive, purely electronic memory. He includes a discussion of possible approaches to this problem, including holographic and magnetic bubble memory.
See Also: Jan Rajchman and Albert S. Hoagland Oral History
About the Interview
DR. JAN RAJCHMAN: An Interview Conducted by Mark Heyer and Al Pinsky, IEEE History Center, 11 July 1975
Interview # 024 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.
Copyright Statement
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Dr. Jan Rajchman, an oral history conducted in 1975 by Mark Heyer and Al Pinsky, IEEE History Center, Piscataway, NJ, USA
Interview
INTERVIEW: Dr. Jan Rajchman
INTERVIEWED BY: Mark Heyer and Al Pinsky
PLACE: Princeton, New Jersey
DATE: July 11, 1975
Heyer and Pinsky:
I have jotted down many things that you did that were interesting: electron multipliers, the rubber-sheet model, the dark printer problem, digital computation at the beginnings of World War II, the Computron, the resistive matrix. Maybe you want to talk about Johnny von Neumann a little bit, the selective electrostatic storage tube, core memories, the transfluxor, and Project Lightning?
Rajchman:
Sure.
Heyer and Pinsky:
Then if we want to in the end, perhaps you can discuss your optical storage?
Rajchman:
Of course that is very recent. Let's start with the electron multiplier.
Background
Heyer and Pinsky:
Why don't we start with your education and how you came to the lab?
Rajchman:
Oh, I see. As you know I was born in England; however, my parents were Polish. At a young age, I went to Poland for a couple of years, but then we quickly moved to Switzerland. My entire family moved to Switzerland, so I was brought up in Geneva — in French. I had all my elementary and high school education there, and my college education in Zurich at the Swiss Federal Institute of Technology, where I graduated in electrical engineering in 1935.
At that time, I was seeking to do some research in electronics. I was already interested in electronics, and the opportunities were far greater in America than they were in Europe. There was generally a depression in the world. Industrial research as we think of it in the U.S. was almost a uniquely American idea, although there were some laboratories in Europe. I decided to go, as a classic immigrant, to America, having essentially no country anyway since I was an expatriate in Switzerland. I had heard of Zworykin and his work, and I already had ambitions to work for him. I did hear of RCA, but I must say I heard of Zworykin before I heard of RCA. Eventually, I applied for a job at RCA. I didn't get one when I first arrived, and so I went to MIT for a summer session, where I brushed up on my English. I was very fortunate that at my first interviews at RCA, I met Mr. [Edward W.] Kellogg, who was kind enough to let me know that there was an opening at RCA. I was at MIT, but he advised me not to linger, so I was in Camden the next day. I got the job with the factory at RCA, and that's how I started with RCA. I worked there only a very short time. Quickly after that I got transferred to the laboratory of Dr. Zworykin, where I really wanted to work. That was on January 1 or 2, 1936.
Electron Multiplier
Rajchman:
My first assignment with Zworykin was the electron multiplier. At that time the most interesting thing in electronics was electron optics. That was really the forte of Zworykin's lab. The multipliers had already been conceived. They were of a magnetically focused type that was complicated. They also suffered from large dark current; in other words, when there was no light there still was a fairly large output, which limited how small a light signal you could measure. What I really did was to make them simpler by making them electrostatic, and also to discover the reasons for the dark current problem and try to avoid it. In order to do that you have to understand how electrons move in complex electrostatic fields. While there is no mystery to the equations, for complicated electrode shapes they are unsolvable by analytic means.
To do this we resorted to a rubber model, which exploits the analogy between a little ball rolling on a stretched rubber membrane and electrons moving in a vacuum. That analogy was also known as an idea; I didn't think of it per se. However, I think that we were about the first people to ever use this analogy as an actual tool in design. People had more or less thought of it as a toy to show things in a classroom, but we used it as a very serious tool for the design of the device. By doing this you could use a very complicated shape. And, by having the freedom to use very complicated shapes, you could obtain very good focusing and also master all the problems that really were at the root of dark current, the most significant of which was the so-called ion feedback. This was an effect in which very large output currents produce ions in the tube; even though there is a so-called vacuum in the tube, there is nevertheless some atmosphere left. If a few ions are produced and these few ions find their way to the early stages, they will produce electrons; and if there is a very high gain, these electrons can produce enough ions to create a regenerative feedback. Even when the feedback is only partly regenerative, only partly augmenting the current, it produces great instabilities.
Therefore the main cause of instability and dark current was ion feedback. The way to avoid it of course was to prevent the ions from coming back by making the path sufficiently tortuous. By the same token, you have to still get the electrons to go through. Very briefly, that is the multiplier story. Of course, there is much more to it in detail.
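The regenerative condition Rajchman describes can be sketched numerically: instability sets in once the gain around the ion-feedback loop (ions produced per output electron, times the chance an ion drifts back to the early stages, times the electrons it liberates there, times the multiplier gain) reaches one. The sketch below is a toy model; all of the numbers are illustrative assumptions, not measured values from the RCA tubes.

```python
# Toy model of ion feedback in an electron multiplier (illustrative numbers only).
# Regeneration occurs when one trip around the feedback loop has gain >= 1:
# output electrons -> ions -> electrons at an early stage -> amplified output.

def ion_feedback_loop_gain(multiplier_gain, p_ion, p_return, electrons_per_ion):
    """Gain around the ion-feedback loop.

    multiplier_gain   -- overall electron gain of the multiplier
    p_ion             -- ions produced per output electron (residual gas)
    p_return          -- probability an ion drifts back to the early stages
    electrons_per_ion -- secondary electrons liberated per returning ion
    """
    return multiplier_gain * p_ion * p_return * electrons_per_ion

# A gain of 4 per stage over 9 stages gives an overall gain of 4**9 ~ 2.6e5.
gain = 4 ** 9

# With a straight path back (assumed p_return = 0.5), the loop gain exceeds 1:
print(ion_feedback_loop_gain(gain, p_ion=1e-5, p_return=0.5, electrons_per_ion=2))

# Making the path "tortuous" (assumed p_return = 1e-3) pushes the loop gain
# well below 1, which is exactly the cure described in the interview.
print(ion_feedback_loop_gain(gain, p_ion=1e-5, p_return=1e-3, electrons_per_ion=2))
```

With these assumed numbers, blocking the ions' return path is worth a factor of 500 in loop gain, while the electrons (which are driven forward by the fields) still get through.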
Heyer and Pinsky:
Would you rather talk about some other things?
Rajchman:
I don't know how much detail you want on the multipliers.
Heyer and Pinsky:
I think that was a pretty well-known piece of work.
Rajchman:
Yes, and we are making multipliers today, still of that same design. It hasn't changed since then. The thing that's interesting is that the production line for making them, and all the processing, is completely computerized, and the tubes they are making nowadays are very good. RCA is one of the best suppliers in the world.
Heyer and Pinsky:
Making these very basic devices hasn't changed.
Rajchman:
That's a very rare thing.
Heyer and Pinsky:
So this was roughly from 1936 to 1938?
Rajchman:
Yes, that's right.
Work on Computation
Heyer and Pinsky:
In 1939 you began working on the possibilities of computation.
Rajchman:
That's right. The war actually started in Europe in 1939. The way RCA got into it was that the Frankford Arsenal approached us because our anti-aircraft fire control was notoriously poor and the Germans at the time had a great superiority in the air. So the question was, "Could we make some computers that would direct the anti-aircraft guns in a better way than the mechanical devices existing at the time?" That's how I got into it. I was, in fact, the first man to get into it. Our first work was on what we call today "analog devices," and then later I switched to digital devices. The analog devices were taken over by Arthur Vance, another man in Zworykin's lab. I worked mostly on digital devices, mostly because it was so difficult to get the right accuracy with the analog devices. During that time we developed many basic concepts: how to do logic, how to do read-only memories, and so forth. Perhaps the most notable thing is the read-only memory, because it was an easy thing to put a label on and it was used quite widely afterwards. Many of the circuits developed at the time for logic were tube circuits, and the same concepts were generally carried over later into transistor circuits.
Origins of ENIAC
Rajchman:
We got quite involved with the other groups that were working on computers at the time, including the Moore School of Electrical Engineering at the University of Pennsylvania, which was interested in making computers. Also John von Neumann, who became a consultant for the government on computing generally, used to come and see us. Eventually, as the war progressed, it became evident that the analog techniques could be developed to be used in the field, whereas the digital techniques could be used mostly for computations that were needed for other applications for the war, but not for field use. The most notable one of those was to make ballistic tables. The Moore School was asked to build a huge machine for such a thing, which later became the ENIAC. There was a long period of negotiation, and during that time RCA was asked to undertake the building of this machine. The offer was in fact made to Zworykin, and we turned it down. I was very sorry that he declined the offer, because I thought it would be rather fun to do it and we were by far the most able group, I think, in the country at the time to do it. In fact, we were asked to tell everything we knew to the Moore School, and I went to the Moore School many times. Some of the other people went too; I wasn't the only one. It was all an atmosphere, of course, of great fervor for the war, and nobody worried about patents or priorities or anything like this.
Heyer and Pinsky:
What were some of these reasons for turning it down?
Rajchman:
Zworykin's reasons for turning it down were that he estimated the machine would take about 20,000 tubes. As it turns out, his estimate was about right, because I think it took about 25,000. Also that the mean free path, the time between failures, would therefore be ten minutes or so; and it turned out to be about that. He didn't want to be involved in something as massive and unreliable as that.
Heyer and Pinsky:
It would break down every ten minutes.
Rajchman:
Yes, that's right. You would have to replace a tube or something. But I was very active with all the people at that time: von Neumann, Goldstine, Eckert, Shepherd, Brainerd, and many other people at the time. The ENIAC was at first a machine dedicated to one problem, but as the machine was being built people had ideas of solving other problems.
Therefore the idea was advanced to make a matrix of wires to change the wiring, depending on the problem, and then you changed one matrix for another to change problems. Then the idea was maybe you could have some read-only memories to do that. Then gradually the idea came that maybe you could put in just a plain matrix memory to do that and have a universal machine. The way I see it, the concept of the universal stored-program machine of today was born in a very gradual way — and not by one man. It was after the fact, as far as I can see, that people dug up the fact that Babbage or Ada or whoever had the idea; it was a contraption that they built in the nineteenth century. I mean, reality taught it to us in a very gradual way, and nobody had read the literature of the past, which is usually the way things happen. And so, I frankly don't subscribe to such a clear-cut view that somebody invented the stored-program computer. It is not so, as far as I can see.
However, it turns out that was a tremendously big concept of course, behind the whole art. During that time we also developed a tube that was a computing tube, a "computron" we called it. It was an exercise in integrated vacuum electronics, you might call it today, because it had, I think, a 14 x 14 array of calculating cells. It was like an integrated chip of a little calculator of today, only it was implemented in vacuum technology. Actually, we never made a full tube, but we made a few cells of it and they did work all right. We had a contract from somebody or other in the government to do this.
Computron, Betatron, “Strong Focusing”
Rajchman:
After this, the laboratory decided not to go on to make the machine, which the Moore School eventually did. Gradually we no longer had any government contracts, except the Computron, which I think terminated just about the time that we moved from Camden to Princeton. We were still working on it, I think, when we moved here.
Then microwaves were, of course, a big thing. One of the ideas I had was to make the equivalent of the traveling wave tube. In a traveling wave tube you slow down the wave to the speed of the electron. I thought of the opposite: speeding up the electron to the speed of the wave in a betatron. The idea was to make a very small betatron, where the electron goes almost at the speed of light, so that it would go at the speed of the wave and therefore the coupling would be very good.
We made a small betatron. We actually built one. It worked very well. We got a half million volts, but we never got any microwaves because it turned out that you couldn't capture enough current. As a result we made an analysis of what diminishes the current in induction accelerating machines. I might as well admit that we missed by a hair making what I consider to be one of the greater concepts of accelerating machines, namely the idea of strong focusing.
It turns out by the laws of nature that in these accelerating machines if you try to focus the electrons, say up and down, then you defocus them right and left. So you are stuck. If you want to focus them both ways, then there is a certain limit that nature imposes on how much you can do, and after you've done that you can only put in so much charge. Once the charge that you have captured produces a field equal to the focusing forces, that's it. Actually, you can't even do that well; you only get a certain percentage of that, and then nature won't let you do anything else.
So that's effectively what we did. However, you can ask yourself what happens if I squeeze it up and down, good and hard, and let it escape right and left; and a little moment later I squeeze it right and left and let it escape up and down. If I do this just right, could I play the game fast enough that while that beast is trying to escape I can still contain it? This is effectively the idea of strong focusing. It turns out that it is quite possible. We thought of the idea, but the mathematics was so tough that we gave up. It turned out, of course, to be a great idea. [Nicholas] Christofilos is really the man who thought it through. He did the calculations and proposed it, and this is the principle that all machines use today.
Heyer and Pinsky:
You get alternating gradients.
Rajchman:
Every machine that is built uses that principle. It's a universal principle. So it turned out to be a very interesting project.
Development of a Stored-Program Machine
Rajchman:
Anyway, von Neumann was at the Institute [for Advanced Study]. We had already moved to Princeton. By then the ENIAC was approaching its completion, and it was obvious that the stored-program machine was the thing and the ENIAC was sort of a patched-up stored program machine because it wasn't designed that way to start with.
But the main difficulty with a stored-program machine of course is that you need a memory. So the Institute asked us whether we would be partners with them, whether we would undertake making the memory. That was my part of the job, and I cooked up the idea of the selective storage electrostatic tube at the time. It was already called a "Selectron," only the lawyers didn't let us call it by that name anymore; but everybody still knows it under that name. It is like a cathode-ray tube storage, except that the selection is not done by directing a beam of electrons, like an electron garden hose, to a certain place, but rather by showering electrons all over the place and then excluding them everywhere except the place you wish by means of a grid arrangement and a gating arrangement. By doing this you have absolute certainty of getting to where you want.
The problem with the garden hose idea is that the deflection has an uncertainty: you can't be sure of coming back to the same spot twenty minutes later. The fact that you have the rain of electrons available all over also means that you can use it to keep the information locked in, because you can use a mechanism of storage that depends on having locally a bistable element that can sit one way or the other. So therefore you are also free from losing the information due to insulation losses. Not only are you certain about where you go, but you are also certain not to lose information due to accidentally poor insulation. Therefore it was a very positive digital device, which I always believed would be the only way to proceed. It is the digital, positive way. We made tubes like this.
Magnetic Cores
Rajchman:
However, it turned out of course it was quite a technology because, looking at it from the perspective of the 1960s or 1970s, it was integrated vacuum technology. In our small group we did not really have the advantage of the industrial base that it takes to do integrated work. So it took us some time to do it, even though the idea was very straightforward. Von Neumann actually used another tube, which was an electron-beam tube [the Williams tube]. In the meantime I got pretty annoyed at all these things and dreamed about other ways of doing it, though "dreamed" is really not the right connotation, because that wasn't at all the way it happened. How could it be done some other way? And I thought of the magnetic core.
Actually, the origin of that idea is really quite different. I had thought about using a magnetic core "on and off" for a long time. You can magnetize a magnet one way or the other and, once you have done it, it stays that way forever, so it's obvious to use it as a memory. When you change this magnetization, you get a voltage, so you get a signal when you make the change and can know what the state is, or was. Knowing what it was, you can restore it to where it was and therefore you don't lose the information — that part is fairly obvious.
The trick is when you have many of them: how do you switch one without disturbing the others, and without having to associate each one with a tube or switch of some sort? If you did that, of course, you would defeat the whole purpose. To do the switching you have to add currents. You would intuitively think right away of putting them in an array and adding currents at the selected point, so that the sum magnetizes that core while a partial sum magnetizes nothing. For that you would need a sharp threshold for excitation. It turned out that when you looked at the materials that existed in nature, they weren't sharp, so the idea didn't look good. Perhaps all of these issues weren't set out quite in that logical way, but all those thoughts were in the background.
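The coincident-current selection scheme described here can be sketched as a small simulation. The model below is an idealized square-loop material with a perfectly sharp threshold; the current and threshold values are arbitrary illustrative units, not properties of any real ferrite.

```python
# Sketch of coincident-current selection in a core array. Each row and column
# wire carries a half-select current; only the core at the crossing of the two
# energized wires sees the full current and switches. This only works if the
# material has a sharp switching threshold (a "square loop"), which is modeled
# here as an exact comparison against THRESHOLD.

I_HALF = 0.5      # drive on one selected wire (arbitrary units)
THRESHOLD = 0.75  # switching threshold, between I_HALF and 2 * I_HALF

class Core:
    def __init__(self):
        self.state = 0  # magnetization direction: 0 or 1

    def drive(self, current, value):
        # A half-select current (0.5) is below threshold and does nothing;
        # the coincident current (1.0) switches the core to `value`.
        if abs(current) > THRESHOLD:
            self.state = value

def write(array, row, col, value):
    """Energize one row wire and one column wire; every core sees the sum."""
    for r in range(len(array)):
        for c in range(len(array[0])):
            current = (I_HALF if r == row else 0) + (I_HALF if c == col else 0)
            array[r][c].drive(current, value)

cores = [[Core() for _ in range(4)] for _ in range(4)]
write(cores, 1, 2, 1)
print([[c.state for c in row] for row in cores])
# -> [[0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

If the material's threshold were soft, the half-selected cores along the chosen row and column would be partially disturbed on every write, which is exactly why the idea "didn't look good" until square-loop materials appeared.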
One day I saw in a book or a magazine that the Germans had developed magnetic amplifiers, one of the few things that the Germans had done fairly well in electronics. And I said, "It's obvious: you can make a material with a square loop, so here you are." I got hold of the materials, and we started to work on magnetic memories. I also thought that, surprisingly enough, while it was fairly obvious you could do it with single cores, it would not be all that easy to assemble many cores. Therefore the trick would be to start with a sheet with holes, making it integrated from the start. We made a sheet and put holes in it, but when we tried that it was miserable. So we said, "Why don't we just take a few individual cores, wire them up, and see how it works, even though it's going to take us forever to assemble them." We did it just to see how it works so we would have a little idea of the system. To my great amazement it took only an afternoon for somebody to wire 256 cores. I was totally amazed that it was so trivial a task. Later we thought of using ferrites and so forth. The story is fairly well known.
Heyer and Pinsky:
What were the first cores made out of?
Rajchman:
Thin magnetic ribbons. They were very fragile and very expensive. We went to the ferrite synthesis people, [Humboldt] Leverenz, and we asked him, "Could you make a material which would have a square loop?" and he said, "Well, we can try." They made one in about six months, which I thought was pretty fast.
Then, we made little doughnuts out of it because ferrite is like a powder. I wanted a machine that could make them very fast, and it turned out that the [F. J. Stokes [Machine] Company] in Philadelphia made aspirin and other pharmaceutical tablets for a lot of pharmacists. They made tablets automatically with a very small machine, but very fast. We bought one of these machines and adapted it to make doughnuts instead of pills by making the right dies on the end. That's how we made the cores. Then we made automatic machines to test them because it's obvious that the more uniform you make them, the better they work. The idea was to make them quickly and then to test them and automatically reject some. If you make a very fast testing machine, you get the thing to work. So we made a testing machine.
We did learn by then of MIT's parallel effort in this area. There was a suit, and I don't want to go into the details. The point is that we wanted to have RCA make the cores. When I went to Camden, they weren't very much interested. It's an amusing incident in retrospect because I went to ask them whether they would want to make cores and they asked me, "How many tons of ferrites are you going to use?" I told them, "Ounces would be a better measure." Also, I wasn't prepared to tell them how many cores they would use. They said, "A few million cores and we will saturate the market in the U.S.," which was entirely, tremendously naive. But neither was I prepared to tell them how much the market would be, so I guess we were all naive.
The point is that we didn't go into the commercialization right away, we went into it later on. RCA wasn't the first to get into it. While all this proceeded, I still thought that if you could avoid the making of the cores one by one, instead making the whole block of stuff, then it would be better or cheaper than making the one at a time. The idea was that an integrated approach was better than an individual approach. So, for example, we made plates with holes and asked, "How close can you put the holes?" We started to put the holes very close together. When we put the holes very close together, we found that they started to interact.
Transfluxors
Rajchman:
One thing that was interesting is that they started to interact in ways that were very complex. That's what led to the concept of the transfluxor. The simplest way was just to make a plate with two holes. Then it turned out that the interaction in itself was of great significance, because you could store not only "on-off" information but a gradual amount. Furthermore, you could store it in a non-destructive way. With a core, in order to know what state it is in, you have to reverse it, and momentarily the information is lost; you hold it in a circuit and rewrite it. But in a transfluxor the information that's around one of the holes always remains the same, and the information around the other one depends on what's on the first one, but it can be reversed indefinitely without affecting the other one. Therefore it's completely non-destructive. So it's both non-destructive and analog, and therefore provides all kinds of possibilities. For example, one possibility was to make a flat TV display, which we made at the time. The way that was made is that we put an electroluminescent cell at each point. All of the cells were driven so that if the flux could change around the small hole, the cell would light. You could either block it or not block it.
Let's just take one transfluxor. If you put a very big pulse through the big hole, you saturate the two outer legs around the small hole and therefore the flux cannot circulate around the small hole. If you try to turn clockwise or counterclockwise, in one direction or the other, you encounter a leg that is already saturated, meaning that you cannot pass any more flux. If you threaded the large holes in a matrix fashion, you could selectively block or unblock any transfluxor at will. Furthermore, as it turns out, you could block and unblock them partially.
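The block/unblock behavior described above can be captured in a minimal state model. This is a deliberately simplified binary sketch (the real device, as the interview notes, also supports partial, analog blocking); the class and method names are illustrative, not any established terminology beyond "blocked".

```python
# Minimal model of a two-hole transfluxor. A large pulse of one polarity
# through the big hole saturates both legs around the small hole the same way,
# so no flux can circulate there: the device is "blocked". The opposite
# polarity unblocks it. Driving the small hole produces an output only when
# unblocked, and reading does NOT change the stored blocked/unblocked state,
# which is what makes the readout non-destructive.

class Transfluxor:
    def __init__(self):
        self.blocked = True  # assume delivered in the blocked state

    def big_hole_pulse(self, polarity):
        # Positive polarity saturates the legs (block); negative polarity sets
        # the flux so it can circulate around the small hole (unblock).
        self.blocked = polarity > 0

    def drive_small_hole(self):
        # Output signal appears only if flux can reverse around the small
        # hole. The stored state itself is untouched by the drive.
        return 0.0 if self.blocked else 1.0

t = Transfluxor()
t.big_hole_pulse(-1)  # unblock (set)
print([t.drive_small_hole() for _ in range(3)])  # -> [1.0, 1.0, 1.0]
t.big_hole_pulse(+1)  # block (reset)
print(t.drive_small_hole())                      # -> 0.0
```

The repeated reads returning the same value with no rewrite step is the contrast with the core memory described earlier, where every read destroys the bit and a restore cycle is required.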
Therefore, by putting the scanning signals on these wires, you could put the TV signals on those. You put the video signals and the scanning signals on them. Each one would then be put into the video signal. Then if you light them all, they would all light up. Contrary to the cathode ray tube that lights only at the place where the beam is, all of the places light up at once and each one with the video signal that it received last. We made a gadget like this and it worked fine. The only problem was it was frightfully expensive because at each point there was a transfluxor.
The advantage of the transfluxor is its utter reliability, because there is nothing but a chunk of ferrite and wires. The drive circuitry needed, however, is exceedingly sophisticated, and you cannot drive one from another to do logic and so forth. After the concept of the transfluxor, there was a flurry of work all over. Bell Labs and Stanford Research [Institute] were the two main places — much more work than here, as a matter of fact. The result of that work was that some industrial uses were made. Notably, the New York subways use transfluxors today, as do some industrial controls such as elevators and the Buick motor assembly lines. They are also used in satellites and many other places. But people are using only one or two. They never fail and they cost very little. The net result is that it's a very small market. It's one of those things that is partly defeated by its very virtue: because it's very cheap and very good, once you have it, it doesn't bring you much money. People use it and that's it. The general use was completely supplanted by transistors, which came about at that time. General logic is much better with transistors because it takes much less power.
Heyer and Pinsky:
How would you use the transfluxor in an industrial use such as an elevator?
Rajchman:
They would replace relays. In an elevator or in a subway, if certain buttons are pushed, you want certain other things to happen. A certain logical combination of signals is used to create another signal, and you want to know with absolute certainty what will happen. Today I think that transistors are probably good enough for that, so you can hang your life on them. But in the 1950s and early 1960s, in places where absolute safety was paramount, transistors had not reached that quality, whereas the transfluxor had. For that reason, the transfluxor was used.
Heyer and Pinsky:
I can think of one subway system in San Francisco that still is having a great deal of trouble with the latest transistor technology.
Rajchman:
Yes, right. I don't know how much more I can say about the transfluxor story.
Heyer and Pinsky:
What was the group that you were working with here at RCA at that time?
Rajchman:
I guess I was still in Zworykin's group at the time. By that time RCA had a division that was making cores, so we tried to interest them in making transfluxors. I guess they made some, but other companies made them.
Project Lightning
Heyer and Pinsky:
Yours was a research group?
Rajchman:
That's right.
Heyer and Pinsky:
From 1957 to 1967 you were director of research?
Rajchman:
That's right.
Heyer and Pinsky:
It sounds like you were moving more into management.
Rajchman:
That's right. But before that there was Project Lightning. I think it was 1957 when the government approached the laboratory, or rather RCA generally, with the proposition to work on the so-called 1,000 mega-cycle computers. In those days, the highest speeds achieved were one megacycle. That seemed to be a fantastic increase in speed. The idea was that any technology would be alright to pursue except integrated circuits because the government believed that integrated circuits would be pursued by industry in any case.
As it turns out, looking in retrospect, integrated circuits turned out to be the only technique that prevailed. IBM, Sperry-Rand, and RCA were the three contractors. At RCA we did most of the work in the labs at first. Later much of the work was done in the computer division. Our approach was first with parametric amplifiers using varactor diodes, which can be made to oscillate with two stable phases. If you put varactor diodes in a tuned circuit, it turns out that there are two stable phases to which you can make them go. This is somewhat analogous to the parametron, which was invented in Japan and used a non-linear inductor resonator circuit. We got the elements to work reasonably fast that way. We could make elements that would switch in 10 nanoseconds, that is, ten times slower than what was desired. Perhaps they could go a little faster, I'm not too sure.
During that period, the announcement of the tunnel diode by Esaki came out. We noticed this very fast. We were one of the first people in this country to make some. Those switched enormously fast. In fact, the limit was obviously measuring instruments and not the device. Among other things, we improved the measuring instruments. We were some of the first people to make so-called sampling scopes to improve the measuring instruments — and many of the techniques that go with high speed and transmission lines, transmission circuit lines and the like.
So while the project did not focus on the technology that eventually gave high speed to the computers, it nevertheless focused the whole industry towards high speed, produced the right technology, the right approach, the right wiring, the right concepts for coupling, the right instrumentation and so forth. So that when the technology of transistors slowly evolved towards it, these other things were no longer the bottleneck. I, for one, consider Project Lightning a good thing. Many people don't consider it that way, but I do. It did produce a great boost that would otherwise have not happened. Let's not dwell on it much more.
Heyer and Pinsky:
Have transistor circuits gone up to 1,000 megacycles?
Rajchman:
Yes. In fact, even higher. They go into microwave frequencies. But there were people at the time of Project Lightning that said you could never go to those frequencies. They were right and wrong. They were right in principle except that the conditions they put in their equation were not right. These people made things smaller and they made materials more pure, this, that, and the other thing. There were papers written that you could never go more than a megacycle. We achieved a megacycle, then it was more than five, and so forth. Every time the limit was beaten, there was a higher limit established.
Directing Computer Research
Heyer and Pinsky:
Seems like a classic situation. Because the magnetron goes faster than 25 megacycles. Then you were a director of the Computer Research Laboratory?
Rajchman:
Right.
Heyer and Pinsky:
Did that lab have a different schedule or a special function or was it an ongoing thing?
Rajchman:
When I became the director it was already an ongoing function. It was more or less making official what essentially existed. By that time the company had decided to go into the computer business in a big way. It was obvious that the laboratory should engage in computer research even more than it had before. We were of course pioneers. I was trying to get us into doing this, to consider computer research as the upcoming technology of the age. But I wasn't the only one, of course. There were lots of people in the company who were doing this. It was just the recognition of the fact that we needed to increase the effort to make it more efficient.
The one significant thing at that time was that we had done most of our work in so-called "hardware" and not so much in software. We had a computer in the laboratory that had a service organization associated with it. It was not doing any advanced programming — although it actually was doing some, subtly, but that wasn't really its function. That was one area where we had some software strength. One of the main things we thought of doing at the time was to get some software strength, so we created one group. Eventually we took the group that was a service group and we made it a bona fide research group. That group eventually became much stronger than the one that we had created especially for the job. Eventually, that service group was the one that developed the timesharing software, first for the laboratory. Timesharing is an interesting case in point. The timesharing idea was in the air in the late 1950s and early 1960s. MIT was a pioneer in the area. They wanted the companies to help put them on their feet. We were approached, but we weren't interested as a company. However I, for one, thought it was a very interesting thing to do.
Sarnoff and Timesharing
Rajchman:
As a personal anecdote, I don't mind saying that when General Sarnoff once visited us — it was approximately at that time. We showed him around because the computer research was being expanded. It was at that time when I was already quite anxious to do timesharing and so was that group that eventually joined us and became very active in timesharing. We showed him around and at the end of the day he was very happy. But I told him, "This is all very fine, but you know that there is one thing that we are not doing — that's timesharing." And he said, "What is it?" So I explained it to him. He said that's very interesting.
By the way, he asked a very good question: "If I understand, timesharing is important because the processor is so expensive that you want to share it among as many users as possible. The day that the processor becomes very cheap, it may not pay to share it." I think that was a very foresighted thing of him to say, because that's precisely what's happening today. Very much as a result of that conversation we got the mandate to do something about this. The corporation and eventually the laboratory came in to do something about this.
Then the group under [Nathan L.] Gordon — this service group that in the meantime had joined the computer laboratory — undertook to do it. The idea was to do it for the laboratory; it wouldn't be a product. But it turned out that it became really the basis for the timesharing that the company used in its products. It was an interesting episode, I must say. I had very little to do with it except originate it and make it possible. It was mostly done by that group under Gordon.
Heyer and Pinsky:
They set up a system for the lab?
Rajchman:
For the lab, which is still the system that we use today. This system may be replaced by an IBM system one of these days, but until then it's still in use. We're using it today.
Heyer and Pinsky:
It became part of the system of the computers that come from all over?
Rajchman:
There was another system that was developed in Canada. There was a lot of discussion between the two. Eventually our system became the company system. Eventually we went out of the business, so it became Univac's system. These things grew. If we eventually obtain an IBM machine, we'll have to have an IBM system. So it's all history now.
Timesharing may become less important as stand-alone machines become so much cheaper because their microprocessors become so much cheaper. In some of these smaller machines, it simply doesn't pay to share the processor part. It pays to share the database or large files, but there are many cases where you don't need to do it. The concept is certainly being revised — it's quite a different thing.
That was one main, novel thing that happened in the lab that became computer research. The other was that we recognized the semiconductor technology was the main thing that was going on, so we took what we considered to be the very good work that was dying in the other parts of this lab — the COSMOS [Complementary Silicon Metal Oxide Semiconductor] technology, or MOS technology. We recognized that this was just the thing that we needed for computers, so we took this stuff and pioneered in the development of computer circuits with MOS.
For example, we were great believers in the so-called CMOS, i.e. complementary MOS, in which you have both p-type and n-type transistors, so that you could make circuits without resistors and have all the flexibility. As a result we have many basic patents for basic concepts in this area. We became very proficient in this area. Much of our RCA strength, I think, comes from the practical combination of effort in semiconductor technology work as well as in circuit and system work. We provided that second part, the first part being provided by the other parts of the team. We worked very well with them. Jerry Herzog was in the group at that time. He worked very well with Charlie Mueller and the other people in the lab. It was an excellent cooperation.
Advances in Core Memory
Rajchman:
(Audio: 024 - rajchman - clip 2.mp3)
That was one of the great things that we did at the time. We built up a lot of work in the semiconductor area. We continued in magnetics. It was obvious after a while that magnetics was. . . I still wanted to believe that integrated memories would be better than cores; however, I gradually lost that belief because it became evident that as the core industry grew and grew, it was always more difficult to displace it. Furthermore, you always had to promise much more. Every time you made an advance — and in fact we did make fantastic advances, one miracle after another — it wasn't enough, because in the meantime the core memory had become that much better.
We then started to work on superconductive memories with the idea that you could make a much bigger jump in capacity. By the way, the superconductive technology was used by IBM. It was conceived by the people at MIT, and it was exploited by IBM during Project Lightning, but we thought for the wrong reason — for logic, for which we thought it was too slow; transistors were much better anyway.
However, we thought that for memory it would be quite the ticket because it would be a way of making huge planes. The fact that you have to provide liquid helium wouldn't be so painful since you amortize it over something that is worthwhile. We in fact did make huge planes. In fact, we made planes that even today make the biggest plane of semiconductors look like peanuts. We made planes of a quarter million bits, whereas the biggest ones in semiconductors are only 16,000 bits.
But looking with the perspective of time, I see that we were somewhat quixotic, because to get 16,000-bit memory planes without errors, there is a whole industry behind it with millions of dollars and crews of thousands and thousands of people working. Here we were with a tiny little group trying to do a technology that is somewhat easier, I think, than semiconductors, but nevertheless quite a fancy technology. And we had to do a perfect job, because nothing but perfection wins, on a much bigger scale. I think the fact that we made a few planes without error was a fantastic achievement. That was another thing that we did during that time.
Heyer and Pinsky:
Maybe you can tell me just a little about the peculiar advantages of the superconducting devices?
Rajchman:
Well, the superconducting elements provide very low power, very small size, and very fast switching, and have inherent storing properties. In other words, once you start a current going in a loop it flows forever, so in that sense it's like a core. You can also make a memory like you make with a transistor; that is, a bistable circuit. Either way you can make a memory. In either case, it's a very tiny device that consists merely of a thin film of lead that is made by photolithography. You can switch it from one state to another by momentarily making it go normal, that is, by making it non-superconductive momentarily. The superconductive material momentarily becomes normal, so you can switch it. Therefore you can make gates, switches, memory devices and so forth. You need only to evaporate two different metals, very thin — less than a micron thick — separated by an insulator, into patterns. The patterns are simply geometrical patterns. In principle that's all there is to it. You stick this in helium and there you are.
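The storage mechanism Rajchman describes — a persistent current trapped in a superconducting loop, rewritten by momentarily quenching the film to its normal state — can be caricatured in a few lines of Python. This is purely an illustrative toy model of the principle, not anything from RCA's actual designs:

```python
# Toy model of persistent-current storage in a superconducting loop.
# A circulating current persists indefinitely (zero resistance); to
# write, the film is momentarily driven "normal" (resistive), letting
# a new current be trapped. The sign of the current encodes the bit.
class SuperconductingLoop:
    def __init__(self):
        self.current = 0.0            # persistent circulating current (toy units)

    def write(self, drive_current):
        # Momentarily quench the film, then trap the new current.
        self.current = drive_current

    def read(self):
        # Sense the sign of the circulating current as the stored bit.
        return 1 if self.current > 0 else 0

cell = SuperconductingLoop()
cell.write(+1.0)
print(cell.read())                    # prints 1
cell.write(-1.0)
print(cell.read())                    # prints 0
```

The point of the model is only that the device is its own store: no refresh, no standing power, just geometry and a trapped current.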
Heyer and Pinsky:
Do it on any kind of a substrate?
Rajchman:
Any kind of a glass substrate. In principle, it is a very easy thing to do. In reality, we found it very difficult to do well.
Heyer and Pinsky:
You were talking about superconductive memory.
Rajchman:
All I needed was indium and lead. The difficulty was mostly technological. We had a hard time making good planes with big enough signals and so forth. There was nothing wrong with the principle.
The Problem of Electronic Memory
Heyer and Pinsky:
Did you try to put this in the market?
Rajchman:
No. No one has. The only thing I can say is that I believe the memory of the computer is still something that needs great improvement. It's still a fairly expensive device, one of which you would like to have more if you could afford it. By and large, it's not well solved. You would like to take the storage that's mechanically retrievable and merge the stuff that is archival with the stuff that you play with in microseconds. There is a sort of continuity in all of that. What you would really like is something in which you could store for centuries, yet have access in nanoseconds, and it wouldn't cost you anything to do it — putting it in very blunt terms.
You have the feeling that if you had such a thing you would have an enormous effect on civilization. As an example, I can say that the magnetic disk used for computers in which you store information — a rotating disk with a moving arm like this, which is the mainstay of the computer art — happens to represent a large fraction of IBM's business in its own right, just the plain device itself. If you consider its implications on the business, i.e. the implications of the software and the whole business, it's essentially their business. So that gives you an idea of the importance of a storage device.
To my mind, logic — which is simply taking some electrical signals and combining them to produce new ones — we know how to do at any speed practically for nothing. You put it in watches, and you do it cheaper than you could with levers even though you don't need to do it fast. But you do it electronically because it's cheaper. In fact you give it away when you buy soap. The logic is nothing; you can have as much of it as you want for nothing. However, memory is very expensive, and the fact is you still resort to mechanical motion, which is like resorting to an ax in the days of electronics.
There is no way of storing large amounts of information purely electronically. If you want to make transistor memories, even if you believe the greatest visionaries in conferences, you still have to pay a tenth of a cent or so per bit; and if you want a million million bits you can do the arithmetic and see you won't be able to afford it. If I want to transcend the problem beyond the point of view of myself or even RCA and consider the problem on a national basis, finding a memory — a storage device that is a very cheap way of storing information accessible electronically — is the outstanding problem of electronics. Maybe not electronics, but some other science or technology will do it.
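The arithmetic Rajchman invites the reader to do is quick to check. The tenth-of-a-cent-per-bit figure is his; the rest is plain multiplication, sketched here only for illustration:

```python
# Rajchman's back-of-envelope point: at ~0.1 cent per bit, a
# "million million" (10^12) bit transistor memory is unaffordable.
cost_per_bit_dollars = 0.001          # a tenth of a cent, as quoted
bits_wanted = 10**12                  # a million million bits
total = cost_per_bit_dollars * bits_wanted
print(f"${total:,.0f}")               # prints $1,000,000,000
```

A billion dollars for a single memory, at a time when that figure was comparable to an entire company's revenue, makes his conclusion obvious.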
You may ask, "What kind of problems can the great laboratories of this country work on?" People say, "Maybe we have too much technology. What are all these people good for?" And many people ask, "Is this related to the GNP? Shouldn't we do better things?" And so on. I propose this is a problem of enormous intellectual challenge, with incredible leverage from civilization's point of view. It has an importance that I can compare only to the printing press or some such thing. Because it would put the entire intelligence of man at the command of everybody, always, at any time, and in such a way that it can be manipulated in the most sophisticated way. But in order to do all these things you need much more than the memory itself. You would need afterwards ways of retrieving it, manipulating the information, knowing what to do, and so on. The only thing is, you can't dream all of these other things until you have the memory. That's the outstanding problem.
We have made a smaller attempt in that direction with the holographic memory in recent years, which attempted to produce a memory that would have the capacity of, say, disks (which is of the order of several billion bits), would be archival, and at the same time would be accessible at electronic speeds. We had some success. We demonstrated the principle in a small mock-up that had only a million bits. It was pretty awkward because it was a laboratory set-up and many difficulties were there. However, it did demonstrate the principle, and it also demonstrated that there were lots of shortcomings, the storage material being the worst of all. It required a better storage material.
Heyer and Pinsky:
Were you using photographic film?
Rajchman:
No. We want to get a material in which you can read and write at very high speed without processing, so photographic material is out. We do have a material now (lithium niobate) that is only a factor or so away from what it might be — maybe ten times too slow and maybe too expensive — but it's not that far off from being usable. So that was one approach. There may be other approaches to that problem. I think it is a very important problem — still unresolved.
Heyer and Pinsky:
I can see that as the processors become more common and cheaper, the demand for memory goes up.
Rajchman:
That's right. My own feeling of what's going to happen is that in reality it's much easier to be gradual than to make a big step. So, in reality, people are going to improve semiconductor memories because the whole industry is going that way — and they are going to make them cheaper and bigger. That's going to become like the core memory used to be. It's going to be the yardstick that you have to beat by a big factor in order to come in. It's already the yardstick because the core memory has already been effectively displaced by this semiconductor memory. So the semiconductor memories are going to become bigger and cheaper.
Heyer and Pinsky:
It may take as long as it took to displace the core memory to displace the semiconductor memory.
Rajchman:
That's right. On the other hand, it's very difficult to visualize that the technology of the semiconductor can be that good that you will be able to make 10 to the 12th bits.
Heyer and Pinsky:
Especially if you want archival storage; that seems to be the basic drawback. It seems when you turn it off, it is off.
Holographic Memory
Rajchman:
Not necessarily. There are some types where it stays, at least for a good while, like a year. So it might be possible to do it. The semiconductor memory is by definition the kind of thing that is completely digital. You make wires come to every cell. Eventually you can't make it any cheaper than the cost of making the wires, because even if you put nothing at the intersection you still have to have the wires; so the minimum cost is the cost of the wires. If you imagine something that's a hundred million bits, the cost of the wiring alone is a fantastic problem. So you can ask yourself: will any type of memory that depends on making cells, even if you make the cells for nothing, ever be cheap enough?
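The wiring floor Rajchman describes can be sketched for a square, coincident-selection plane: even if the cells themselves cost nothing, an N-bit plane still needs roughly one wire per row and one per column. The function below is only an illustration of that scaling, assuming a perfectly square array and ignoring sense and digit lines:

```python
# Sketch of the wiring-cost floor for a cell-addressed memory plane:
# an N-bit square array needs about 2*sqrt(N) row/column wires even
# if every cell at the intersections is free.
import math

def wires_needed(bits):
    side = math.isqrt(bits)           # side of an assumed square plane
    return 2 * side                   # one wire per row + one per column

for bits in (10**4, 10**6, 10**8):
    print(f"{bits:>11,} bits -> {wires_needed(bits):>6,} wires")
```

At Rajchman's "hundred million" bits, that is already some 20,000 precisely fabricated wires before a single storage cell is counted, which is the floor he argues cell-addressed memories can never get under.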
We thought in the case of the holographic memory that we had an answer, because we didn't do that. We merely directed the light to that position. We had the advantage of digital addressing because, even though we directed the light to it, we didn't have to direct the light accurately. The reason for that comes from a sort of magic: when you direct two beams to a certain spot, where they intersect they produce interference patterns, which is like producing tiny little fingers at the end of an arm — and however long the arm is, the fingers are still very fine. It turns out that the exact position of the finger doesn't matter, because it's only the frequency that matters. Consequently, however far you are and however crudely you do it, the precision with which you read at very far distances is still preserved.
Therefore, philosophically you can have a huge thing that you read from very far and still retrieve a very small detail for free. Therefore, philosophically it seemed to be the right way to approach the very large storage. Whereas, if you try to do it, say, with an electron beam and direct it to a certain point, then you direct that electron beam to that point but you really have to do it just there. If you use an array of physical things to do it, that's equivalent to making the array of physical things. Forget the beam and use the array of stuff to do the storage. If you do it with a beam from far away like a garden hose, then you have to be damned nimble in your hands to make it steady so that you don't miss. That's the problem. There are some stunts; in between you may place some components to do the direction twice. But the general problem of addressing is a big problem.
Heyer and Pinsky:
Are any of the other unique characteristics of holograms applicable — e.g., being able to cut off a section of the memory and still be able to use it?
Rajchman:
That's right, that's another feature so that you don't have to make it perfect. That's another feature that was very good. The holographic approach I think philosophically is very good. The only thing is that we found when we went into it in great detail there are other limitations. First of all, the size of the material is not adequate. Then we found when we tried to do it that the optics are not simple.
Heyer and Pinsky:
Any movement or vibration problems?
Rajchman:
Not so much that as the fact that the lenses turned out to be fairly expensive and the energy you get from the laser is not sufficient, because you spread the energy over many bits and therefore at any one location you get very little energy.
Heyer and Pinsky:
So then the problem is photographic.
Rajchman:
Yes. The laser is very inefficient to start with; i.e., it gives very little power from the circuit to start with. So it's just a losing game.
Heyer and Pinsky:
How do magnetic bubble memories fit into this? They're relatively slower, I guess.
Rajchman:
They are relatively slow, but not all that bad. But they are serial, of course. To my mind that is a great drawback because even though there is a huge number of applications where serial information is alright — in fact is natural — still there are many where it isn't. It's very, very difficult to master too.
That is, I think, the main drawback: the fact that it's serial, and the fact that, with respect to transistors, the advantage is not much — at best a little, if you really went into it in a big way. But the question is, will you go into it in a big way? Because the advantage occurs only if you do. So it's a self-fulfilling prophecy whether you will or you won't. But the performance of the transistor memory might be better. The one thing it has, of course, is permanence. It stays, so that in applications where you don't mind a serial part and you absolutely want to stay without power. . . .
Heyer and Pinsky:
Maybe the main idea would be a combination with the bubble serial memory, where you could use that as your database, put it into a large, fast semiconductor memory, read it in serially, and then operate on that.
Rajchman:
Well, the only problem is that you have to be smart, and that says that it is expensive. Because the fact that you are smart means that you are writing a program, and the fact that you are writing a program means, first of all, that somebody had to pay somebody smart to write it. Secondly, after he wrote it you have to store it, so part of the memory is taken by the program. If you are getting to be that sophisticated, there is no memory left for the data.
In fact, in the early MIT timesharing machine only one-tenth of the memory capacity was left for the data; nine-tenths was used for the administration of the machine and the memory. But it's not unusual today that half of the memory is used for administrative purposes and operating procedures only. It's not at all unusual, and sometimes it is much more than that — showing that it's totally idiotic. Not only do you use that amount of hardware, which is to my mind not so bad, but it also means that you used a lot of smart people who spent their intelligence doing the wrong thing, namely worrying about the foibles of the machines rather than solving problems. Right? I mean that's really what it means. To put it bluntly, right? Because the machines weren't good enough, they had to be clever to avoid them — to avoid them by the stunts that you just mentioned, or other stunts that are very, very ingenious, e.g. the cache and the virtual memories and this, that, and the other. There is an exceeding number of clever stunts, one on top of the other. They are so complicated that after a while you don't know what you have got in the machine.
Continued Application of Timesharing
Heyer and Pinsky:
It really says something that timesharing was so important that even when nine-tenths of the machine was used for administration, it was still feasible to do it.
Rajchman:
Yes, of course timesharing is still very important in many applications. For example, the typical ones are airline ticket reservations, banking applications, and others where you do want to share the database to start with, by definition. But the point is, if you have a lab like this, where, by and large, people don't have a common database — there are some databases and programs which are common — then it's just as easy to walk down the hall.
Heyer and Pinsky:
It seems like if you've got your own processor, the necessity for high speed becomes less. If you are doing your own problems — if you are the computer designer — you don't have to worry as much about getting the information in and getting it out so that the next guy can use it. I can stand to wait an hour for a program to run, which would be intolerable in a large timesharing system. That brings the cost down. So maybe that is another mitigating factor.
Rajchman:
I don't know. It is a good problem. All you have to do is come up with a solution.
Heyer and Pinsky:
Maybe they will reinvent the book.
Rajchman:
Yes.
Further Reading
Hittinger, William C. "Jan A. Rajchman," Memorial Tributes: National Academy of Engineering, Vol. 5 (National Academies Press, 1992), pp. 229-231.
Oral History of Jan Rajchman by Richard R. Mertz, 26 October 1970, National Museum of American History Computer Oral History Collection.