Oral-History:Martin Graham
About Martin Graham
Martin Graham is Professor Emeritus of Electrical Engineering and Computer Science at the University of California at Berkeley. Beginning his work with the Brookhaven and Los Alamos Laboratories, Graham later moved to Rice University, before settling down at Berkeley. In this interview, Graham speaks about his early work in building computers at various laboratories and universities and the different machines he worked with and created. He also reflects on debates around the place of Computer Science within the engineering sciences at Berkeley, as well as debates about the feasibility of building computers in universities. While making a case for autonomy in decision-making with respect to building computers and using research funding, Graham also weighs in on the bureaucratic problems associated with National Science Foundation grants. He also speaks at length about his research interests, which include computer systems, medical instrumentation systems and electrocardiogram analysis.
About the Interview
MARTIN GRAHAM : An Interview Conducted by Andrew Goldstein, Center for the History of Electrical Engineering, August 9, 1991
Interview #131 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc.
Copyright Statement
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.
Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Martin Graham, an oral history conducted in 1991 by Andrew Goldstein, IEEE History Center, Piscataway, NJ, USA
Interview
INTERVIEWEE: Dr. Martin Graham
INTERVIEWER: Andrew Goldstein
DATE: 9 August 1991
PLACE: University of California at Berkeley
Work at the Brookhaven and Los Alamos Laboratories
Graham:
I met a fellow when I was working at Los Alamos Laboratory for one month, in the summer of 1957, and his name was Zevi Salsburg. He was a theoretical chemist on the faculty at Rice, and they wanted to build a computer. I ended up going to Rice eventually because of that initial contact with him. Then, later, when I came to Berkeley, he was a very close personal friend of Bernie Alder's. I think they were both graduate students together at Caltech. And that is how I got to know Bernie. And Ken is his son. So it was that route to Uvall's computer on the contact.
Goldstein:
Let’s talk about the Rice computer.
Graham:
Now that one, I was talking to my wife about it. I believe that one was financed by the Atomic Energy Commission. Have you run into the names Milt Rose and John Pasta?
Goldstein:
John Pasta. Yes.
Graham:
Okay. I used to be at Brookhaven Laboratory. I was in the instrumentation division. And Brookhaven Laboratory had no electronic computers. Somebody in the physics department who was doing experiments on the Advantograph machine ordered a computer. Then all hell broke loose, because the computer was expensive at that time, and they thought it should be handled and purchased and administered by the central administration. They, in fact, yanked it out of that guy's hands, and the administration started looking at it. They asked, I guess, the instrumentation division for some advice. The guy in charge of instrumentation was J.B.H. Kuper, and he was a physicist from Princeton, and his wife was Marietta Kuper, who had been John Von Neumann's wife. And she is Marina Whitman's mother. So there was a connection sort of that way.
At any rate, under Kuper there was a fellow named Willie Higginbotham, who had done electronics instrumentation at Los Alamos. He was the main electronic technical person there. They ended up buying a computer from Remington Rand. It had ten storage registers, all vacuum tube, and it was a mess to program. They hired a programmer who was an expert on it. And it was a sort of central facility. Then they needed something bigger, and it was a question of what to buy. The machines that you could buy at that time were rotating drum machines, where the stuff came out in serial. You did your arithmetic and put it back on the drum. I think they were in the range of $50,000. And just at that time the Illiac-I was getting finished, and that and the machine at Oak Ridge were essentially identical. I went to see the machine at Illinois, and I think it was Meacham who was running that project. He got hired by IBM shortly thereafter. But, at any rate, it seemed that Brookhaven could build a machine like that in the same general cost range as buying a drum machine. And they decided to build their own machine.
Then I got sent to Los Alamos for a month. That was where I met Zevi Salsburg. They were building a machine called Maniac-2, which was an improvement over Maniac-1. The person in charge of that was Nick Metropolis. I spent the month there. The head engineer was Jim Richardson. They had the arithmetic unit built, and they were having problems with the control circuitry. There was a circuit I had worked out at Brookhaven in connection with some other instrumentation that did the AND function in a peculiar way. Normally you did the AND with two diodes: to get an AND, both conditions had to be there to do the next thing. In those days you were still using vacuum tube diodes; it was just switching over to solid state. But you could get the same answer if you added two voltages together, essentially, instead of two currents: added them linearly and then went through a diode. One of the voltages has to be ungrounded in order to add it in series with the other, but you could get that with a transformer secondary. Most electronic designers never worked with transformers, but you could make small transformers at those speeds. And we tried out a circuit using that technique when I was at Los Alamos. It worked very nicely. The main advantage is that with a transformer you can match impedances. So you can have a low output impedance and get your 10-volt one, let's say, from a 100-volt one out of a vacuum tube. That's a 10 to 1 change in voltage, a 100 to 1 change in impedance. And it means those wires can be long and yet have no problems. In the arithmetic circuits you could build things localized for each bit, but in the control circuits the wires got long. So this turned out to be a very nice arrangement. And they used that circuitry in the Maniac-2.
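The summing AND Graham describes can be modeled numerically. This is a toy sketch with assumed logic levels (10 volts for one, 0 volts for zero) and an assumed threshold, not the actual Los Alamos circuit values:

```python
def series_sum_and(v1: float, v2: float, threshold: float = 15.0) -> bool:
    """Toy model of the transformer-coupled AND.

    One input is floated through a transformer secondary so the two
    voltages add in series; the sum must exceed a diode-plus-bias
    threshold for the output to register as true. With 10 V logic
    levels and an assumed 15 V threshold, only 10 + 10 = 20 V clears it.
    """
    return (v1 + v2) > threshold


ONE, ZERO = 10.0, 0.0

# Truth table of the summed-voltage AND:
for a in (ZERO, ONE):
    for b in (ZERO, ONE):
        print(a, b, series_sum_and(a, b))
```

Only the (10, 10) case exceeds the threshold, so the linear sum followed by a diode behaves as an AND gate.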
Then when I went back to Brookhaven we were building a new machine there with some name, Merlin was the name of the machine. And Milt Rose was head of the applied math group. And I was in the instrumentation division. And there were jurisdictional fights over who had control over what.
Goldstein:
Responsibility for design?
Graham:
Architecture and circuits. And there are two [inaudible word]. There are textbooks today that say they teach top-down design: you do the architecture and then you start implementing. That's bullshit. What you can do depends on the elements you have. You have to be able to go back and forth, and sometimes the elements determine what the architecture should be. This was true to a fairly large extent on the machine at Brookhaven, because it turned out you could build the arithmetic unit using the same technology as the control circuitry I had worked on at Los Alamos. And when you do these transformers, it depends whether you have somebody else make them or you make them yourself. If someone else makes them, you are limited by how many pins come out of the package: they are these ferrite cores potted in with the windings. But if you make it yourself, you can put the core out in the open, and then you run the windings through it, and it is part of your wiring of the machine. And you can put in a lot of windings, and that gives you a richness in what you can do in the architecture. That was done to some extent at Brookhaven. Then when I went to Rice we used the same technology and explored it even further.
Work at Rice University
Goldstein:
At Rice you were working on a machine that was funded by AEC you say.
Graham:
Yes. Now let me tell you a little more about that. There were three people: Zevi Salsburg and John Kilpatrick, who were the physical chemists, and a physicist named Larry Biedenharn. Now Zevi Salsburg died some years ago, but Biedenharn and Kilpatrick are still alive, and Kilpatrick is retired. I saw him two years ago in Houston. They had put in a proposal to the AEC to build a machine, and nothing was done. Then John Von Neumann died; at least this is what I was told. Some questions were asked about what the government was doing to continue the research in the area of his interest. So they found their proposal and then wanted to fund it. But they would only fund it if they had an engineer. So I got asked if I was interested.
At the time the idea of leaving New York seemed very alien to me. And I remember telling Zevi Salsburg I didn’t want to come down because it was only a 25 percent chance I would take the job. He said that was enough for them to have me come. They used to do their recruiting from the northeast usually in February, when it was miserable in the northeast but nice down there. I remember my wife and I, I had a heavy tweed jacket, and roasted as soon as I got off the plane. It was hot and humid. But they clearly intended to build the machine, and the people seemed very sharp. But they wanted to do an exact copy of Maniac-2 in Los Alamos. Of course, all three of them worked at Los Alamos during the summers on occasion, and they knew how much trouble it was to write software, and they wanted it identical so they could use the software that Los Alamos developed. And I said I was not interested in coming and building an exact copy, for one thing. Then we split off and discussed things and decided I should come back for further discussions. And the president of Rice, who would have some role, was a physicist who had been at Caltech. So there was a very scientific, technical group involved in these decisions. And they agreed that I did not have to build an exact copy of Maniac-2 and that the decision should be made by a committee of the four of us as to what to do.
Autonomy in Decision-Making at Rice
Graham:
There was also a close friend of theirs from the psychology department who was usually at the social things in the evening. So I said I would not go along with a committee making the decisions. Generally committees cannot make technical decisions. The way that was resolved was the committee would make the decision, but I could out-vote the other three. And that worked out extremely well. There were a tremendous number of technical decisions that had to be made. There was only one in the whole construction and design of the machine where there was a disagreement, and I remember it was on a technical point. When you do a floating point addition, if there is any kind of remainder left you normalize so that there are no leading zeros. Now what do you do if you do the subtraction and the result is all zeros? Well, there are two possibilities. One is that if it is all zeros you do not bother normalizing. The other possibility is that you normalize as if there were a one in the least significant place and get it up to the front; if there is not a one, you still shift by that amount and still have all zeros, but you have done the normalization. My expertise is not in numerical analysis, but it just seemed to me that there was a discontinuity in the algorithm, depending on whether you had a one or a naught in the least significant place. So we wired it up the way they wanted it, which was not to normalize. Then they ran some test problems, and the errors were too great, so [inaudible phrase]. But in retrospect it is remarkable to me that there was only one disagreement like that. And it was resolved, I think, exactly the right way. Not arbitrarily.
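The two normalization policies Graham recalls can be illustrated with a small sketch. The bit-list representation and word width here are invented for illustration; the Rice machine's actual word format differed:

```python
def normalize(mantissa_bits, exponent, force_on_zero=False):
    """Left-shift a fixed-width fraction until the top bit is 1,
    decrementing the exponent once per shift.

    For an all-zero subtraction result, the two policies differ:
      force_on_zero=False: leave the zero alone (no shifting at all)
      force_on_zero=True:  shift as if a 1 sat in the least significant
                           place, adjusting the exponent by width - 1
    """
    width = len(mantissa_bits)
    if all(b == 0 for b in mantissa_bits):
        if force_on_zero:
            return list(mantissa_bits), exponent - (width - 1)
        return list(mantissa_bits), exponent
    bits = list(mantissa_bits)
    while bits[0] == 0:
        bits = bits[1:] + [0]   # shift one place toward the front
        exponent -= 1
    return bits, exponent
```

An 8-bit result with a one in the least significant place normalizes to exponent -7, while an all-zero result left unnormalized keeps exponent 0: that jump as results pass through zero is the discontinuity Graham objected to, and normalizing the zero as well removes it.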
Goldstein:
One disagreement between you and the three of them on the committee, or one disagreement between all four individuals?
Graham:
No, it was between me and one of the three. But they were all very strong-willed people, some stronger than others. And I guess that happened about halfway into the project, which took about four years altogether to get built. There was never any personal animosity between the people involved, which is really remarkable. There was an animosity between myself and the chairman of the electrical engineering department, because I was hired as an associate professor of electrical engineering, and I had come down and spent three or four days there meeting with the president and these people on the committee and had not met the chairman of the department that I was hired into. So he had a right to be really angry about it, and it took a while for that to mellow. But Houston was a very nice place to live, and the faculty were very friendly, and they looked after newcomers, and it was one of the best decisions I think I ever made, to go from Brookhaven to Rice. Now at the time I had one other offer, to go to the University of Chicago to build Maniac-3 under Nick Metropolis. I would have been an assistant professor of electrical engineering, and they had no electrical engineering department, which seemed kind of wild. I had visited Chicago. It did not seem nearly as pleasant a place to live as Houston, and at the time I picked Houston. I think that was 1957.
Goldstein:
And you made that choice yourself in 1961?
Graham:
And I made that choice. Yes.
Work Funded by the National Science Foundation
Goldstein:
When did you become involved with the National Science Foundation?
Graham:
Milt Rose went to the National Science Foundation. But let me go back to the AEC first. When I was at Brookhaven working on the Maniac machine, all the AEC contractors would get together for a meeting; I forget whether it was once a year or twice a year. And there were the people from Illinois, Oak Ridge, Westinghouse, GE, Los Alamos and Brookhaven. There must have been one or two more. That was when I got to know a lot of these people. I was not involved with the National Science Foundation, because Milt did not go there until later. My direct involvement with the National Science Foundation was probably when the University of California wanted to put in a proposal to enlarge the computer center and put in a Control Data 6400 system.
Goldstein:
This was under the facilities program they wanted to establish?
Graham:
Yes. In fact, I was cleaning out some cartons at the Richmond field station, and I ran across that proposal. Abe Taub was the Principal Investigator. The proposal was about an inch thick, about what kind of computations would be done and why that was a good machine. So that was, let's say, my first direct involvement with the NSF. And I was not even the Principal Investigator or the proposal writer.
Debates on the Place of Computer Science at Berkeley
Graham:
I had come to Berkeley from Rice, and there was a bitter fight going on at Berkeley at the time. I don’t know, did I mention this to you on the phone? The computer center needed a director. And they got Abe Taub from Illinois to come to be director.
Goldstein:
Here at Berkeley.
Graham:
At Berkeley. One of the things he wanted as part of the package was the establishment of a department of computer science in the College of Letters and Science, sort of along with the math and statistics departments they had. Math is very big, statistics is small; this would have been small, but it would have had jurisdiction over computer science. The electrical engineering department, the chairman at least, who was Lotfi Zadeh, believed that computer science should be in electrical engineering. There were all kinds of debates around it. Suppose you built a machine that did not use electronics but was a pneumatic digital computer. Should that be in electrical engineering? I am not sure how that ever got answered. You could do it optically, you could do it pneumatically, and you could do it electrically.
Goldstein:
I have never heard of pneumatics. Optical is apparently making —
Graham:
Optics is in now.
Goldstein:
Yes, it is making headway.
Graham:
Pneumatics, it was an interesting idea. You could make regenerative flip-flops. And it was insensitive to nuclear radiation and things like that. But it was very slow compared to the electronic circuits. It never got very far, but you could use it when you were arguing this.
Goldstein:
Right.
Graham:
Philosophical questions. And the proposal, would you believe, had a firm commitment from the chancellor, who was Professor Strong, from philosophy. That was his name, Professor Strong. And between the time Taub came and the time they were going to establish the department they had the student uprisings, and Strong was still the chancellor, I think, and the next chancellor did not feel he had to honor the commitment that Strong had made. Before they formed the departments and all of that, there was a fellow in electrical engineering named Dave Evans who was associate director of the computer center and professor of electrical engineering. He told them he was resigning to take a faculty position at Utah. He was one of the two founders of Evans & Sutherland. They make large computers particularly oriented to graphics. So Berkeley was left needing a faculty member and an associate director of the computer center. The associate director of the computer center would be under Abe Taub, and the professor would be under Zadeh. They were having this argument. Well, I had been here on a year's sabbatical leave from Rice in 1964, and I seemed to be acceptable to both of them. I went back to Rice for a year and then came here in both those positions. This is just the way events happened. The computer center had an electrical engineer. So my involvement with the NSF was initially through the computer center, through upgrading the services.
Work Funded by the National Science Foundation (Contd.)
Graham:
Now, let me tell you my own personal views in retrospect how much the NSF contributed in that period.
Goldstein:
Yes, but when you are done with this, I would like to discuss the research you did under NSF support in greater detail.
Graham:
Yes. Actually there was no research. No, that is not true. I did do some research. I did some the year I was here on leave. But let me put this down, and it is just as well that it be recorded.
Goldstein:
Alright.
Graham:
It was very important that the universities get large computers for people to use. The people that are still the heaviest users are the chemists and physicists, like these days in pharmacology, doing custom drugs and so on. It is sort of an aspect of chemistry that they need very large computers. And there were two ways they could finance it. I think the way that most heavily influenced the installation of computers was that IBM gave a 60 percent discount. That was a very big subsidy. Then somehow the universities felt they were getting a good deal and raised the other 40 percent. I suspect if they could not raise the 40, IBM did something else to help them further. But in terms of total dollar amount, I am not sure whether NSF or IBM put more in. And around that time IBM also hired what seemed like all the people with bachelor's degrees in math and sent them out to work with customers. So they were involved; they built the software and the hardware, and there was a lot of hand-holding. And I think they largely dominated computing for many years because of that 60 percent discount.
NSF-Related Conflicts and Problems
Graham:
The other support was from the NSF. That one I think was very good in that it helped the universities do the financing. I think it was horrible in one other respect. There was a condition put on it that you could not charge the people with government contracts more per hour, or whatever your unit of time was, than you charged anyone else. In other words, if you charged the people with government grants $100 an hour, everybody had to pay at least $100 an hour. So if you wanted students to use it, they had to pay $100 an hour. And so what happened — Go ahead.
Goldstein:
Well, I’m wondering where the —
Graham:
Where did the money come from?
Goldstein:
Well, whether the department would open accounts for students and pay the money for them?
Graham:
Sure. It was a big money laundering operation. The money was big when you get into a large computer and a lot of hours. So it was normally not done within a department. There were arguments about whether a department could have its own computer or everybody had to use the central facility. It was administered at least indirectly by a vice president or a provost. Berkeley was the second one. So there was a central computer facility that was worried about how it was going to avoid a deficit, so they would set up a fund and they would put money in it. It was supposed to flow out, pay for student computing, and then go back in. But if you had a deficit, you did not want to put it all back in. The way you did not run a deficit is you would take some of it, not put it back in, and pay your bills. It was a horrible way to get the work going. And I think a lot of people around the country who were running centers then were mainly engaged in the accounting and how you managed all of that.
Goldstein:
Making policy decisions, or taking care of bookkeeping?
Graham:
Taking care of bookkeeping. It was on everybody's mind all the time. If the computer had a lifetime of, say, six years, which is long, at the end of six years you had not accumulated enough capital to replace anything. At the end of six years you were in the same bag you were in when you went for your first grant. Then of course, what have you done to run this in a decent business-like way? You could not run it in a decent business-like way, because the rules did not permit it. These are my opinions of it. The other thing was that since the money had to re-circulate, if somebody bought a smaller machine and had their own program and was running a good operation for their research, the computer center in general wanted to take it over, because it was competition. So there were these fights between different groups.
Now, Berkeley had an IBM system. It was a 7044/7090. The 7044 was the front end, and it got the data in and out, and the 7090 was the main computing machine. And then the proposal to the NSF was to put in a Control Data system. So there had been a lot of discussion about whether the choice should be an upgraded IBM machine or a Control Data machine. And I remember several of those discussions, in which Control Data made an offer of lowered prices and so on. And it was an interesting machine, because it was a 6400. Not the biggest, not the 6600, but it was to have the so-called extended core memory, which gives a very fast memory transfer from that memory into the machine, so you could be running a job and then very quickly switch to another job. Now our machines were always being compared with Lawrence Berkeley Lab and Lawrence Livermore Lab, which were funded by the AEC and did not have the bookkeeping problems. So Berkeley was unique in a way among universities, in that the money you would usually get on grants to pay for your computing did not come to the campus center: the faculty members on this campus would do their work at Lawrence Berkeley Lab. So it is like phone companies that skim the long distance tariffs; such a company can run a profitable operation because it does not have to handle local subscribers. Berkeley had the local subscribers, but it did not have the most profitable subscribers. It made it a very difficult economic problem for the computer center.
Goldstein:
So the center was just constantly running at a loss?
Graham:
It was harassed on all sides. And yes, I believe it was always at a loss. You cannot help it. If you do not want to show a loss, you make up another fictitious account and you put the money from that into the first, and there is no loss. So it depends on who is at the upper echelon and how they want it to look, and then they can make it look that way. But in terms of the people running it, it was a concern to them from beginning to end. There was one project, financed by the NSF, building a computer system. It was under the electrical engineering department. There was a concern as to how, when it started running, it would sell its time. Would it be in competition, or would its administration in that respect come under the computer center? Well, the computer center has one kind of goal, and the person doing the research on the computer has a different goal. He would like it to be used, to tell whether he really achieved it or not, instead of having only the people that designed it use it. And they say it is free. The way you tell is you get people that did not design it to use it and see if they can get their work done. And if you are doing that, then it is in competition with the other center. So the overall set of rules, and the way the financing worked, did not really help, I feel, in letting the university get its act together.
Goldstein:
And all that was a result of NSF policy? It was not some combination between — ?
Graham:
I doubt that the NSF thought up this policy all on its own, but my recollection is it came to us from the NSF. So I think that that part was not good. Now, there was another part. They financed a lot of research around that period, but I was not doing NSF-financed research in that particular period. My financing came just out of my salary being paid by the computer center, and a teaching assistantship or research assistantship was something like $3,000 a year, and it was not hard to take somebody and give them that kind of support. And there was enough money involved so that you did not have to go out and get a grant for the two or three of those, if that is what you wanted. There was one fellow at the time for whom support was not available in the electrical engineering department, and I got him support from the computer center. I guess you could say it was through the NSF, because they were supporting the purchase of so much equipment. And he did a thesis on interleaving memories and how that improved performance. He had come to Berkeley from, I believe, Minneapolis Honeywell, where he had done similar work. I did some work on how you organize memories, the time sharing operations, and what you could do and what would be economical.
And the other thing I did which you might say was NSF research, is they wanted to be able to connect remote terminals up to the computer system. The remote terminals were teletype machines at the time. We put up a network that I think had about 100 or 200 machines on it. It ran over the telephone wires. We used one twisted pair of telephone wires, because that was what was installed, and it is a very low data rate, so we used one wire with the earth ground to go to the teletype, and the other wire with the earth ground to come back. We had cards that handled the noise and buffered it so that it would accumulate a character. Then there was an interface to the Control Data 6400 peripheral processor, and it would poll these. Actually, not poll them: it would take the smallest-numbered address. If that had a character that was ready, it would take it and get it into the 6400. If it had something in its buffer that should go out, it would send it out. It was built with RTL logic, resistor-transistor logic, where you could get either two AND circuits in a package, or an inverter, or a buffer, and the whole thing was built out of that. Afterwards people asked why we bothered building it, why we didn't just buy one. I think we built it for $20,000. You could not buy anything at all at the time. As time went on you could buy it, but it was still far more expensive than $20,000. That got used for a number of years, and some of the channels were upgraded.
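The scanning scheme Graham describes, in which the peripheral processor visits line buffers in address order and moves any waiting character in or out, might be sketched like this. The names and data structures are illustrative assumptions, not the actual 6400 interface:

```python
class LineBuffer:
    """One terminal line's buffer card, identified by its address."""

    def __init__(self, address):
        self.address = address
        self.inbound = []   # characters accumulated from the teletype
        self.outbound = []  # characters waiting to go to the teletype


def scan(buffers):
    """One pass over all line buffers, lowest address first.

    Returns (to_computer, to_terminals): characters moved into the
    main machine and characters sent out to the teletypes, each as
    (address, character) pairs.
    """
    to_computer, to_terminals = [], []
    for buf in sorted(buffers, key=lambda b: b.address):
        if buf.inbound:                       # a character is ready
            to_computer.append((buf.address, buf.inbound.pop(0)))
        if buf.outbound:                      # something should go out
            to_terminals.append((buf.address, buf.outbound.pop(0)))
    return to_computer, to_terminals
```

The design point is that the processor never waits on any one line: at such low data rates, a fixed scan in address order services every terminal well within a character time.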
Goldstein:
Who was using it? These 100 or 200 terminals, where were they stationed?
Graham:
All over the campus.
Goldstein:
The different professors and that?
Graham:
Professors, research groups. That was really quite successful.
Goldstein:
The facilities grant to install the Control Data machine did not specify that a network would be established?
Graham:
I do not believe so.
Goldstein:
So who did you have to clear it with in order to install the network?
Separation of Computer Science from Electrical Engineering at Berkeley
Graham:
That did not get cleared. Let me tell you about when I had a grant on my own. A couple of years later we had two separate departments. There was a computer science department in the College of Letters and Science, and the electrical engineering department in the College of Engineering. So what Taub had wanted actually got established. Electrical engineering I think had about 60 faculty members, and computer science had about seven or eight. And there were about three or four of them that had gone over from electrical engineering into computer science, and most of them were either theorists or software people. I was probably the only one that was dominantly hardware. There were a couple of other faculty members who just at that time left Berkeley and formed a company to build a time sharing system. That was an outgrowth of research.
I guess the NSF supported the time sharing operation. No, the NSF did not; it was a DARPA project. And the machine that they developed was built by Scientific Data Systems and became the SDS-940. They left to build a machine that was really oriented for time sharing. It never worked properly. The company went bankrupt. But those people did not go in as regular faculty members to the other department. I was the main hardware person. I was interested at that time in what the machine architecture would look like if you made use of block-oriented random access memory. You know, on disks you can get a very high transfer rate, but there is a big latency until you start getting the block. And at that time there was a memory that I guess RCA was building called BORAM, block-oriented random access memory. It was a very exotic technology: a magnetic film on a wire, with a sense wire that would pick up magnetic changes. And the film had a B-H curve that depended on stress. So you sent a sonic wave down the wire, stressing each spot, one after the other. As the wave went by you could read out, pick up a signal, and tell what all the bits were. When you wanted to store, the B-H curve was such that by the current you put on the single wire you could write each bit as the sound wave went past.
Goldstein:
Maybe I am not understanding right, but that does not sound like random access. It sounds like you have to proceed serially and —
Graham:
It is sequential in the block, but when you decide you want a certain block, you do not have to wait for it to physically come by a head. You can pulse the ultrasonic transducer and read the block. See, so it was a sequential memory: not random access by word, but random access by block.
Goldstein:
I see. And how big were the blocks?
Graham:
A couple hundred words. It sort of fit in with the overall memory size you had at that time. So if you have a memory and you can get any block you want, but it has to be the whole block, what kind of problems can you do? It seemed there were a couple of problems that really could be oriented to that kind of organization. If you were working on images and you stored your images in sub-squares, so you could get a square at a time, then you could work on the details of the square. Actually, a lot of this is now important on the highly parallel machines, because the machine is partitioned the same way these sorts of data were partitioned at that time. And you are faced with the problem of how do you handle the edges? You can do a gross analysis of each block and then take the whole thing and worry about the edges. In the end you have to worry about how you handle the edges in detail. And I think I put in for something like $500,000 to build a whole machine that would use that kind of memory.
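The image-in-sub-squares idea can be sketched as follows. The block size and the tiled layout here are assumptions for illustration, not the BORAM device's actual geometry:

```python
def store_in_blocks(image, block=4):
    """Split a 2-D image (a list of rows) into block x block tiles,
    keyed by (tile_row, tile_col). Each tile is one addressable block,
    fetched only as a whole."""
    tiles = {}
    for r in range(0, len(image), block):
        for c in range(0, len(image[0]), block):
            tiles[(r // block, c // block)] = [
                row[c:c + block] for row in image[r:r + block]
            ]
    return tiles


def fetch_block(tiles, tile_row, tile_col):
    """Random access by block: any tile on demand, but never a
    single word out of it."""
    return tiles[(tile_row, tile_col)]
```

A program would fetch a tile, work on the pixels inside it at full speed, and handle the seams between tiles as a separate edge pass, which is the edge problem Graham mentions.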
Goldstein:
This was to the NSF?
Graham:
Yes.
Goldstein:
In what year?
Graham:
It must have been the early 1970s. It was about the time this other department existed, and that was the early 1970s, I believe. I received a grant for about a tenth of what I asked for, for an exploratory step. And there is a whole difference between people using a theoretical machine and simulating what will happen, and using a machine which will really solve their problem quickly. So the whole character was grossly different. And I was told that they did not want to finance building a machine, because people were not able to build machines at universities that work. Which was largely true. But the machine I built at Rice worked and was used for years after I left, so I did not think it was right to turn me down because of other people's shortcomings. You are supposed to judge a grant based not on the average proposal, but on that proposal. I was also later told that one of the higher-ups in electrical engineering had gone to the National Science Foundation and said that they did not want the NSF to finance that one, because it was a hardware project that should have been in the Electrical Engineering department. They were having this argument about where computer science should be. And my personal belief is that that contributed at least somewhat.
Goldstein:
Yours being reduced to the —
Graham:
To being reduced. Maybe I should be grateful it was not reduced to zero. But at any rate, I had one student working on it, and then he did not maintain interest, because it was not going to be, I felt, a real machine. There were other things he could work on.
Involvement in Medical Instrumentation Research
Graham:
And there was something else I got very interested in, which was analyzing electrocardiograms in a way that gave you something quantitatively measured, so that when you had a yearly physical you could look at the numbers and say there was a significant change from year to year. Cardiologists have a hard time doing that, unless you have a heart attack in between; then there is a big difference. But in terms of telling predictively whether things will deteriorate, they do a terrible job.
Goldstein:
Was that a computational problem, or is it — ?
Graham:
I had an idea on how to look at it as a computational problem. It has to do with how you decompose the cardiogram into components. Almost all the work had decomposed it into Fourier components, because that is the software that is available. There was a lot of work supported by IBM in analyzing it by putting 100 electrodes on the chest, assuming that you had 20 dipoles, and doing an inverse solution: knowing the surface potentials, you could tell what the 20 dipoles were. You can do that if you can tell the coupling coefficient between each of the 20 dipoles and each of the 100 electrodes. And that involved modeling the chest either as a uniform conducting cylinder, or putting in the lungs, or getting more advanced and putting in the heart, because each of those has a different conductivity. IBM supported this mainly at the University of Alabama and in one of their research labs.
Goldstein:
I really want to focus on the research that the NSF supported.
Graham:
Yes. I will get to that. Let me tell you about the algorithm. I was using three-lead electrocardiograms, so-called vector cardiograms. And my [inaudible word] position was not that there were 20 dipoles in the heart; if I had only three leads, not a hundred, I should be able to determine, at any instant of time, three things. So I could look at three components that varied in time. And for a dipole, since I am not looking at the repolarization, only at the action when the part contracts, that would be one polarity. Therefore I see a component that can go up and come down, and I am looking at that thing and three projections of it. And after it comes back down to zero it is gone, but I can now pick up another one. So I was looking for a decomposition with the constraints being: a component has only one polarity, and once it went back to zero it stopped; and at any instant of time I could have only three. And it does not lead to a nice closed form, or a unique one, because depending on how you program certain determinations (did it hit zero, are you going to call it quits, are you going to go on to the next one?) you get slightly different results. I spent a lot of hours in the middle of the night working on it and trying things. I can give you a reprint for that, and the algorithms are in it.
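The constraints Graham states (each component keeps one polarity, rises from zero, returns to zero, and is then finished) can be illustrated with a toy peeling procedure. This Python sketch is an editor's illustration only, not Graham's published algorithm: it peels non-overlapping positive runs, and does not implement his harder case of up to three simultaneously active components.

```python
# Toy decomposition of a sampled waveform into single-polarity components.
# Each component here is a contiguous positive run of the residual signal:
# it starts at zero, stays one polarity, returns to zero, and is done.

def extract_component(signal, start):
    """Peel one nonnegative component beginning at index `start`."""
    comp = [0.0] * len(signal)
    i = start
    while i < len(signal) and signal[i] > 0:
        comp[i] = signal[i]
        i += 1
    return comp, i  # the component, and the index where it hit zero

def decompose(signal):
    """Peel single-polarity components until the residual is exhausted.
    As Graham notes, choices like "did it hit zero? call it quits?" make
    the result depend on the programmed determinations; this toy makes
    the simplest such choice at every step."""
    residual = list(signal)
    components = []
    i = 0
    while i < len(residual):
        if residual[i] > 0:
            comp, end = extract_component(residual, i)
            components.append(comp)
            residual = [r - c for r, c in zip(residual, comp)]
            i = end
        else:
            i += 1
    return components
```

For a waveform with two separated bumps, the sketch returns two components whose sum reconstructs the input; the interesting ambiguities Graham describes arise only once components are allowed to overlap.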
I was interested in what happens with time. And I was able to get some pathological cases. This is a child with the arteries transposed. And this is what the raw data looked like. These are the components that I got for it, and how they projected. And then this was a year later; these are my components from this, which was some months after. And a lot of it looks similar; though you can see there are differences, the angles of the dipoles had not changed. So I think I was really looking at the same thing as time went on. I got some enlarged data for the little p-wave, and yes, this is it. And on autopsy they could tell whether it was the left atrium or the right atrium that was enlarged. And my components were large and small for both [inaudible phrase]. There was only one guy in the whole cardiology business that would even comment on this. His comments were so good they were published, and then he died a little while later.
Goldstein:
Who was that?
Graham:
Brodie.
Goldstein:
D.A. Brodie?
Graham:
Now these guys were using characteristic oscillations as waveforms, and there was a guy who worked for IBM. I think their internal publications were much more honest than what they ever published in the journals, because they showed how the details change as you go through these different models of the heartbeat. And for a while afterwards he was the director of the computer center here. He was also director of the computer center at Rice after I left Rice. I am not sure where he is now. But at any rate, that was NSF supported.
Goldstein:
How did you get involved in this? It seems out of the blue.
Graham:
Oh. No, I have been interested in medical instrumentation for years, and I did a lot on instrumentation for NASA when I was at Rice, in connection with monitoring the electroencephalograms of astronauts and what you could tell from that. That was joint between Methodist Hospital and the UCLA Brain Research Institute. I am not sure who was funding all of it; it was NASA funding on the part that I did there. What I should do is mail you my biography.
Goldstein:
CV?
Graham:
Yes, CV, so you can see these other publications and so on. So this was sort of related to that. I was doing some instrumentation for Multiphasic Exams, a private company doing the kind of work Kaiser does. And that is when I got particularly interested in what you could do on the cardiogram if you had a Multiphasic exam. And I tried doing some more on that, and had one student that decided to go back to a lucrative job analyzing 24-hour cardiograms. The analysis I did here at OPEC is as good as you can do using [inaudible phrase] compositions, which is a scheme where you do not assume what the waveform will be, but it tells it to you. And if you use that program, they look a little like this, but they do not have the constraint of not being able to go negative. So you get things that do not match a dipole in the heart but match a mathematical concept better. My current thought is that if you take those, look at the four or five new components, and look at their variability beat to beat, you would be able to diagnose or predict arrhythmias. Because when you have an oscillator that is on the ragged edge, there is more cycle-to-cycle variation. And this would let you look at that. I never got around to doing it, although I would like to.
Work Funded by the National Science Foundation (Contd.)
Graham:
Now, there was a fellow running the NSF computing stuff who had been before that at Lawrence Berkeley Lab. I do not remember his name. This was in I guess — what was the date on this?
Goldstein:
1976, I think.
Graham:
Yes. So it was the early 1970s. I wanted to do two things. One is I wanted to switch it from computer architecture to data analysis. At that time I was in the computer science department.
Goldstein:
When you say computer architecture, is that the work you were doing on the block oriented machine?
Graham:
Yes, that sort of thing. Which was not panning out, because I wanted to build a big machine, and people were not interested in the simulations.
Goldstein:
Right. So you had that grant, I mean architecture, and you wanted to renew the grant but move to this?
Graham:
No. I just wanted to take what I had left. I think it was a few years for $60,000.
Goldstein:
And you had not spent it all.
Graham:
I had not spent it all, and I wanted to spend it on this.
Goldstein:
I just wanted to clear that up. The reason why you did not pursue that other grant was because you were not going to get to build the actual machine and the interest was not in simulation?
Graham:
Yes. It was not going to be like it was at Rice. Unless you really build a machine and it works, you do not know what you have done. There are lots and lots of papers that say this is a great architecture and that is a great architecture. What really matters in the end is whether it works. And the final question is, does the company that is doing it go bankrupt or not. If it is a really good idea and it is cost effective, people make a fortune out of it. And if everybody says it is a great idea but it is not economically good, what happens is the companies go bankrupt and the people that have written the papers end up as professors. Overall it is not a good situation. But at any rate, I wanted to be able to use the remainder of the $60,000 on this, and the NSF let me do that. That part was very nice. Then, at the very tail end, Evans, which is the math building, was opening, and the university had put aside money to equip laboratories. They were putting in I think a PDP-45 or something like that, and they had neglected to order a disk system, which was necessary if they were going to be able to use it the way they wanted in a time-sharing operation. I think that was it. And I wanted to give what I had left, I think it was $10,000 or so, so that they could buy the disk and the statistics department and some other people would have a viable system. I had a terrible time. I had to get permission from the people administering the grant; they did not want to spend the money that way without the NSF's permission. So I had to go on up the line. But that worked out also. I guess in retrospect it was fine. I did not get the $600,000. Probably if I had gotten the $600,000 I would have been dead ten years ago from the work involved.
Goldstein:
You said that you did not want to pursue that because there was not enough money and you wanted to actually build the machine. Had you completed the test, do you think that was supposed to be seed money to allow you to demonstrate feasibility, and then perhaps they would come across with the big money?
Graham:
No. That is what they said. But it does not work that way.
Goldstein:
You mean because you cannot prove the effectiveness of design in simulation?
Graham:
No. Because the effectiveness is not in the performance you measure; it is in the performance that the users perceive.
Goldstein:
Right. Okay.
Graham:
There are a whole bunch of machines like that. Livermore built a machine that was going to be just great. What came out of it was that the people designing it developed computer-aided design tools and went into business selling those, and the machine, I believe, was built but never really used. If you have a machine and the people that build it use it and say it is just great, that does not mean anything. It has to be a different group that uses it and says it is great.
Now, what was so unique about the machine at Rice is that the design was copied by Burroughs and considered one of the best architectures: the Burroughs B-5000. I think I told you about the students that were involved. And the programming techniques, the segmentation of memory instead of the block orientation, were exploited very heavily at Project MAC. The two programmers that were working on the machine at Rice spent the summer at MIT, and they are the ones that told them how the segmentation should work. And segmentation works well only if you have an indirect addressing scheme that lets you do the translation from a name to a memory location. That was one of the things that was put into the machine at Rice. That is part of the architecture I did, and it was put in mainly because I wanted a certain symmetry in the circuit design and the architecture. I knew about indirect addressing. There were two things that made it work.
There was index modification, and after you finished the index modification you did the indirect address and got a new address. In the IBM 704 there were initially three bits for index modification, and they had three index registers, and you could add in none, or one, or two, or all three. But then they changed it to interpret the three bits as any one of eight. In our machine we put in eight bits, so you could add in one of the eight, two of the eight, three of the eight, and that lets you manipulate two- or three-dimensional arrays in the software very nicely. But the other big thing was that when you got the new address, you brought out eight new index modifiers and a new indirect address bit. So you could modify, look it up, bring it back, modify, and look it up again. In fact, you could do an infinite number of levels, and if you made a programming error it would get caught in the loop. So we had to add a check; I think there were only 39 levels before the machine squatted. And it really just squatted. You would look up and see the lights and figure out what you had done. But it was that indirect addressing with the index modification at multiple levels that had a tremendous effect on all the software.
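The multi-level scheme Graham describes can be sketched as an address-resolution loop. This Python model is an editor's illustration; the word layout (address, eight-bit modifier mask, indirect bit), the function names, and the exact level limit are assumptions loosely matching his description of the Rice machine.

```python
# Illustrative model of indirect addressing with index modification at
# every level: each fetched address word carries its own 8-bit modifier
# mask and a new indirect bit, so resolution can chain through many
# levels, with a cap like the ~39 levels Graham mentions.

MAX_LEVELS = 39  # assumed value for the loop-catching limit

def resolve(memory, index_regs, addr, modifier_mask, indirect=True):
    """Follow indirect-address words, applying index modification at each level.

    memory maps address -> (new_addr, new_modifier_mask, new_indirect_bit).
    index_regs holds the eight index registers.
    """
    level = 0
    while True:
        # Index modification: add in every index register the mask selects.
        for bit in range(8):
            if modifier_mask & (1 << bit):
                addr += index_regs[bit]
        if not indirect:
            return addr  # final effective address
        level += 1
        if level > MAX_LEVELS:
            # A programming error chains forever; the machine "squats".
            raise RuntimeError("indirect-addressing loop caught")
        # Fetch the next address word and continue at the new level.
        addr, modifier_mask, indirect = memory[addr]
```

A chain of one indirect level with two index registers added in resolves to the final word, while a self-referencing address word trips the level cap, the behavior Graham says let you "see the lights and figure out what you had done".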
Now, that was an outgrowth of the circuitry. It was too expensive to do that with vacuum tubes; with this kind of gating with the multiple windings we could do it economically enough to incorporate it physically. And then once it was in and people used it, it got used. There was another great feature in that machine where you could tag every word. There were two bits to tag. So if you are doing some kind of operation on an array, you are going along the rows. Normally you always do a rectangular array. You do a row up to the end, you know you are at the end, and then you start the next row. So you count to see whether you are at the first or the second or the third or the fourth, up to the fifteenth, and then you start again, because you compared it to fifteen. The idea was that you could tag the data words that were at the end of your array. So you could have a ragged edge. You put like an x on it, and there were orders in the machine that would test on that. So you did not have to count to fifteen. You could go along and ask, "Is this the end?" and go. I thought that was absolutely great. It never got used.
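The tagged-word idea can be shown in a few lines. This Python sketch is an editor's illustration, not the Rice machine's actual encoding: representing words as (value, tag) pairs and the particular tag value are assumptions.

```python
# Illustrative sketch of end-of-row tag bits: a loop tests the tag on
# each word instead of counting to a fixed row length, so ragged rows
# (rows of different lengths) need no per-row bookkeeping.

END_OF_ROW = 0b01  # assumed tag value marking a row's last word

def sum_rows(tagged_words):
    """Sum each row of a ragged array stored as a flat stream of
    (value, tag) words, using the tag as the end-of-row test."""
    rows, current = [], 0
    for value, tag in tagged_words:
        current += value
        if tag == END_OF_ROW:  # "Is this the end?" instead of counting
            rows.append(current)
            current = 0
    return rows
```

With rows of lengths 3, 1, and 2 in one flat stream, the loop finds every row boundary from the tags alone.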
On Building Computers in Universities
Graham:
Now, you do not know that something will never get used until you deal with the community of users, with their perceptions of what is good and what is not. In fact, I still currently have arguments about how you teach engineering. To my mind, you build it and you test it and you see if it works or not; you do not just simulate it. But that is not in line with the current view of what you should do.
Goldstein:
Right. In some cases I could see it being unwieldy to build. That is an example of when one might try to simulate a particular design numerically.
Graham:
Do you fly on airplanes that were simulated and not tested? No, you do not. Fortunately, the FAA does not approve airplanes until they fly. There was also a whole culture at that time that said you do not build airplanes in universities, and therefore you should not build computers. It has changed. They build computers in universities now, and they even integrate the circuits. The reason is that it is economically feasible. It was economically feasible at that time, too. I think it was very shortsighted not to have more of them in universities. There was a machine built at Berkeley called CALDIC, which was a drum-type machine. As far as I know it never really ran, never really computed anything. But the guy that worked on the magnetic memories is one of the outstanding people in that field; he worked for IBM on the development of magnetic disks. I think he got a big spurt from that at the beginning, even though it did not achieve the goal that it was originally funded for.
Goldstein:
Was this work mostly hardware implementation?
Graham:
No, this was all software.
Involvement in Medical Instrumentation Research (Contd.)
Goldstein:
So is that unique for your career?
Graham:
It was unique. The part that would have made this really work is building the electrocardiogram system and getting the electrodes on so you got finer detail of what was going on. I had a student that was going to work on that. I needed data, so he was going to work with UC San Francisco to get the data from the cardiology group. This is one of the things I am still angry about 15 years later. He went over. I had him build the pre-amplifiers, which he never wanted to do, but he has made a lot of use of what he knows about hardware since then. We were all ready to take data and analyze it. They had a PDP-45, I think, that they had just bought, and they did not have the software to take real-time data and get it on the disk. They had the A-to-D converters, but not that software. He did all that software. We were just about ready to do our experiments when somebody over there asked him to write up a manual for them so that they could use it on their dog experiments. It would only take a month. I told him it would not take a month, it would take much longer, and you can forget this. He would not believe me. So he wrote that, and his thesis was on the software, and it never went any further. The involvement of personalities in these things is much deeper and more important than a lot of the technical things. And so, you said you were working with Aspray?
Goldstein:
Yes.
Graham:
I noticed the Aspray name, and I noticed the cover for this book in the library, so I got it out. I have not had a chance to really look at it, but it seems that it really goes into personalities a lot more, and I think that is great. Now, there was a multi-volume set of Von Neumann's collected works edited by A.H. Taub, and I understand that he left out some of the computing stuff because it was not high class enough; that Von Neumann was really a mathematician, and that is how you should think of him.
Goldstein:
Right.
Graham:
And this has a whole different flavor.
Goldstein:
Right. Well, that is a theme to the book, that —
Graham:
Yes. I think books of this type are very good. I think some of the people that he spoke to are looking out for their own reputations rather than what really happened. I guess by now I should get used to that. It still bothers me.
NSF-Related Conflicts and Problems (Contd.)
Goldstein:
Are there any other research highlights in your career that you just want to mention to me so I can pick up later perhaps? Research that was sponsored by the NSF?
Graham:
Yes. The other thing is the people at the NSF that were deciding what to fund, I felt, were not first rate people.
Goldstein:
You mean the program officers? Not the top?
Graham:
Not the top. The ones I had to deal with. Because when they tell me that they are not going to fund building a machine because nobody gets them to work, and I built one and it worked, that does not increase my respect for their statements or integrity. And then after you get the money, Berkeley is very bureaucratic, and you have to fight about the accounts. The sheets that I would get from the accounting group would not tell me what I wanted to know about how much had gone out on this and that. You can tell what went out to a research associate, but you could not tell what went to a machine shop and what they did for it. Which does not matter if you are not doing hardware, but if you are doing hardware you work in the dark economically. And about that time I decided that the research I would do would be smaller research. And I would not get the money from the NSF or the government. I would go out and consult, and I would make enough money to finance the research assistants. And I did that. I did that for a number of times. I could do it because RAs did not get paid much. RAs are now up to about $12,000. And I find it hard to do without one.
Also, I have gotten interested lately in two things. I am interested in high-speed data on telephone lines; I finance some of that myself, and it gets exploited commercially. The other thing I have gotten interested in is how you teach engineering to include design, and whether you can do it at the freshman/sophomore level. I have been teaching some courses in math the standard way, and I feel the answer is a resounding no. It is a disaster. So this semester I am going to try it a drastically different way. I went to a meeting of the United Engineering Fund, and there were a number of speakers from the NSF and from these consortiums on design and so on. I had the feeling that they are still putting money out not quite the right way, not quite to the right people, but their heart is in the right place. And I came back feeling it was really worth working on that. It does not take that much money to do. So, after the one on the heart, my interactions with the NSF did not have a lot to offer. There is a lot of money, and you end up administering it and not doing the work. I feel being a faculty member has given me the luxury of doing what I wanted to do by myself, or with one or two others. The whole thing at Rice had a group of about five people. That is a whole different flavor from a research project that has twenty. And none of them were graduate students; there were a couple of undergraduate students, a few staff members, and the other faculty members. I guess if you include the faculty maybe it is eight or ten people. But that was a lean operation that was very satisfying. The way a lot of research gets done currently does not appeal to me. So I have, in a number of ways, dropped that.
Work Funded by the National Science Foundation (Contd.)
Goldstein:
I see. Let me just clear up one last thing that I am unsure about. The work you did on networking in the Control Data machine here at Berkeley, was that sponsored by the NSF?
Graham:
I cannot tell where everybody came from because of the way the bookkeeping was done. I would say in large measure, yes.
Goldstein:
In the sense that the NSF contributed to the university that sponsored the project?
Graham:
Yes. Now, whether there was a $20,000 fund, you probably could not trace where the $20,000 came from. But they were certainly a contributor.
Goldstein:
Right. But you did not apply for a grant for this specific purpose?
Graham:
No. In retrospect the only NSF grant I applied for was that one on architecture, and I told you quite a bit about how that one went.
Goldstein:
You started to say that the concepts are useful in parallelism. Is that right?
Graham:
Yes.
Goldstein:
Have you noticed any other product of that work?
Graham:
Yes. There is a whole business these days about RISC computers.
Goldstein:
Right.
Graham:
I think that in some respects it is a fraud, because the speed does not come from the reduced instruction set; it comes from the bandwidth in the interconnections of memory to the processor. And that comes about because you can fabricate integrated circuits. It is hard to fabricate random logic, which is the control logic, so what you fabricate is the orderly logic, which is the buses. So you put in caches, big memories, lots of registers that are directly addressable. The machine at Rice had four registers that were directly addressable. The architecture was such that you could store four instructions there, not only four pieces of data, and you could actually get very fast loops doing that. There was another order that the theoretical chemists used, where the ones in a word represented not numbers but locations in molecules. And you wanted to be able to tell if a molecule was next to another one. You would do it by taking the word and the next word in the lattice and [inaudible word] it. If you had a one, it meant that there were two next to each other. Or you would shift and do it. So you could tell how many they had that were next to each other. Then after they did that they would want to know how many there were. So you wanted to count how many ones there are in the word. We put in an order for that. And that machine ran half as fast as the IBM Stretch on a problem where that was the basic thing. Of course, to do it with their instructions [inaudible word] a sequence of orders. Now, putting in special orders for what you really need to do always speeds up a machine. In fact, from the 701 to the 704 they put in floating point. That was no reduced instruction set.
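The chemists' trick Graham describes, combining two lattice words and then counting ones, can be sketched briefly. This Python illustration is an editor's sketch; the AND operation (where the transcript has an inaudible word) and the function names are assumptions, though AND-then-popcount is the natural reading of "if you had a one, it meant that there were two next to each other".

```python
# Illustrative sketch: bits in a word mark occupied sites in a molecular
# lattice. Combining a word with its neighbor (or a shifted copy of
# itself) flags pairs of adjacent sites, and a population count tallies
# them -- the special order added to the Rice machine in one instruction.

def adjacent_pairs(word_a, word_b):
    """Count sites occupied in both of two neighboring lattice words."""
    return bin(word_a & word_b).count("1")

def horizontal_pairs(word):
    """Count occupied sites adjacent within one word, via shift-and-AND."""
    return bin(word & (word >> 1)).count("1")
```

Without a population-count order, the final tally would take a loop of shifts and adds, the "sequence of orders" Graham contrasts with the single special instruction.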
In fact, all the RISC machines that you buy, if you are going to do anything with them, have another chip on them to do the floating point. So they are not talking about not having the orders at all; it means taking them out so they have room for the bus structure. In a sense the BORAM stuff was related to that, because it had to do with how you transferred data to the CPU and then out, and whether you could do it efficiently. And the orientation there had to do with the blocks. Some of the bottleneck in the highly parallel machines is not in the CPUs but in the communications between them. And you can always do the communications faster as a burst than as random single words. So that same concept and the same constraint still exist, because the wiring still exists; the concept there is not new. There is a guy named Austin Hogatt who is writing a proposal to IBM on how to organize an array of processors to do parallel processing where the communication is of one order of speed and the CPU of a different order of speed, and you arrange enough computation so the bandwidth is adequate. If your question is what difference it would have made if I had gotten $600,000: I would not be complaining about it being only $60,000. But in terms of what impact it would have had, the impact that the Rice machine had was different from what I think we first thought it would be. The impact on memory segmentation and time sharing was not on my mind at the time particularly.
Goldstein:
Okay. Thank you for talking to me.
- Bioengineering
- Biomedical computing
- Biomedical equipment
- Biomedical measurements
- Electroencephalography
- Biomedical monitoring
- Computing and electronics
- Instrumentation
- Computerized instrumentation
- Integrated circuits
- Computer architecture
- Profession
- People and organizations
- Research and development labs
- Universities