Oral-History:Arthur Burks
About Arthur Burks
Arthur Burks received a B.A. in Mathematics from DePauw University in 1936 and his Ph.D. from the University of Michigan in 1941. In 1943, as part of the war effort, he was recruited to work as a major designer of what was to become the ENIAC, the first general purpose electronic computer. He also collaborated with John von Neumann, co-authoring several scholarly papers and a book with him. He joined the University of Michigan's Department of Philosophy in 1946 and founded the Logic of Computers Group in 1949. He helped start the graduate program in Communication Sciences in 1957 and the Department of Computer and Communication Sciences (CCS) in 1967. He served as the first chair of CCS in 1967-1968. Professor Burks passed away on May 14, 2008 at the age of 92.
In the interview, he describes the work he and his colleagues did in the areas of cellular automata (directly inspired by John von Neumann), the simulation of natural systems (including heart fibrillation), artificial (and "natural") intelligence, and other topics that link logic, computation and biology. He also discusses the role of the National Science Foundation in supporting this work.
About the Interview
ARTHUR BURKS: An Interview Conducted by Andrew Goldstein, Center for the History of Electrical Engineering, July 29, 1991
Interview #116 for the Center for the History of Electrical Engineering, IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.
Copyright Statement
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Arthur Burks, an oral history conducted in 1991 by Andrew Goldstein, IEEE History Center, Piscataway, NJ, USA.
Interview
INTERVIEW: Arthur Burks
INTERVIEWER: Andrew Goldstein
DATE: 29 July 1991
PLACE: Telephone interview
Logic of Computers Group
Goldstein:
I am calling about the National Science Foundation. I just want to remind you of my purpose for the book we want to write about the research that was supported by the National Science Foundation. I am interested in the work that you did and your goals, the things you were trying to achieve, the impact that you had both in terms of applications and in terms of stimulating further research. I want to know about some of your collaborators and people working in the same area. I know that we talked about this briefly once, and I have my notes from that conversation, so I guess we should start with the origins of your research. In 1948 you began the Logic of Computers group?
Burks:
That's right. Well, let's see. I guess technically it was '49. I started to consult for what was then Burroughs in '48. Irv Travis, who was one of the pioneers in computing — built differential analyzers, for example, for the proving grounds in Aberdeen — he was the chief consultant for Burroughs, advising them on how to make the transition from mechanical computers to electronic computers, and he asked me to start the group in '49. It was supported by Burroughs until the end of '54, when I went off to Harvard for a year. After I came back it was supported, and has been supported ever since, by government agencies.
Goldstein:
So the group continued while you were away.
Burks:
No. It was really reconstituted after I came back.
Goldstein:
I see. In the early years between '49 and —
Burks:
Jesse Wright, for example, and Don Warren, who had been my chief investigators for Burroughs, came back with me, and then I added John Holland and Cal Elgot, who later went to IBM.
Goldstein:
Okay. I remember you telling me that you were working in the beginning on models of heart fibrillation.
Burks:
Of what?
Goldstein:
Models of heart fibrillation. Is that right?
Burks:
Well, that came a little later. That came, let's see, when was that? That would have come when we established connection with Dr. Henry Swain of the med school, who was studying heart fibrillation in dog tissue. Larry Flanagan, who is on the faculty now, wrote his thesis on that, and then later we had another thesis on that.
Goldstein:
All right.
Burks:
It was a matter of simulating fibrillation on a computer, using a hexagonal or a von Neumann cellular array.
Goldstein:
All right. I want to know when you started with that work, but why don't you tell me what you were working on from the very beginning. It was simply —
Early Government and NSF Support
Burks:
Let's see, my first NSF grant was what? Fifty-eight?
Goldstein:
Yes, '58.
Burks:
At that stage we were working on logical networks that were relevant to hardware. Switching systems. Earlier we worked on switching and switching trees. We were interested in networks as related to computer hardware. Then not long after that, members of the group proved some theorems about logic, about the application of quantification theory to networks. The others and I designed a logic machine that used the Polish notation and therefore used a push-down store. I guess that was actually done for Burroughs, and I think it was the antecedent of the first push-down store in the Burroughs machines of the '60s.
Goldstein:
Okay.
Burks:
It was really all of one piece. A way of viewing it is that first Burroughs funded it, and then the government came in and funded it.
Goldstein:
What attracted government funding in '55? What happened then?
Burks:
Well, it started when Hao Wang, who was a logician then at Harvard, and I were talking about automata and logical networks for automata. We got a grant from the Air Force. And then the people that I had had in my Logic of Computers group were interested in getting money, so we asked the Office of Naval Research and they gave us money. There were two Ph.D.s, Jesse Wright and me. Later Richard Büchi, a logician, came in. He was a Ph.D. Then we had students who started getting Ph.D.s, and John Holland was the first of those.
Goldstein:
Did Burroughs withdraw its support and that's why you turned to the government, or did you have joint funding?
Burks:
Burroughs seemed less interested in the more theoretical stuff we were doing. When we were doing more hardware oriented stuff and also some work on automatic programming languages for business machines, they were interested in that, but my colleagues and I wanted to go to more theoretical stuff. It was really pretty easy to get money from the government then, I would say. Much easier than it is now.
Goldstein:
I see. So you were government funded for a few years before you went to NSF. Did they suddenly open up?
Burks:
Well, the grant that Hao Wang and I got was in '56, after I was at Harvard the calendar year of '55. According to this letter you sent me, our NSF grants started in '58. But that's right. We had Air Force support, and we had Office of Naval Research support, then NSF support. We also had Army support.
Goldstein:
You were one of the first to receive NSF support, and I wonder whether the money became available and it was publicized, or had you been aware of NSF money and only in '58 turned to them?
Burks:
I don't think there was much publicity. There weren't a great many people doing research in computers then, and so there was a lot of communication by word of mouth. The people that had worked for me went out to Willow Run, which was a center attached to the University of Michigan that did government research, and they worked on what later became SAGE, for example, and they also developed holography out there, and things like that. These people had gone to work in operations research and they wanted to come back to the main campus and do more theoretical research, so it was a matter of looking around and seeing where we might get some money for it.
Logic of Switching Networks
Goldstein:
Okay. You say that you were working in some logical theorems. Were these outstanding problems in logic, or were they —
Burks:
Were they what?
Goldstein:
Outstanding problems in the field of logic, or were these computer problems?
Burks:
These were related to computer problems. Sometimes they were derived from computer problems, so we'd get a result in pure logic, but its origin was computers, and sometimes they were applications to computers. For example, showing that you could measure the complexity of cycles in switching networks was one theorem that Bernie Zeigler proved. He's now a professor in computer science. What he proved in his thesis was that you must have larger and larger, more complicated cycles to get more complicated behaviors.
Goldstein:
Okay. Maybe I'm not clear on what you mean when you say switching networks. I have an understanding, but I don't know if it's consistent.
Burks:
Alright. Well, this goes back to Von Neumann.
Goldstein:
Right.
Burks:
He designed and worked out the logical design of the network of the first stored-program computer, the EDVAC, using little logical switches and little memory elements. You make a network of these. The reason he did that was he didn't know the details of circuitry. When we designed something on the EDVAC, we paid attention to the fact that there had to be four different voltages coming into a tube, and how long the wires were and things like that, because we had to make a workable computer. When he did it, he wanted to just give the basic logical design. So a logical net as we developed it consisted of a structure of primitive switches and primitive memory elements connected by lines according to certain rules.
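To make this concrete, here is a minimal Python sketch of such a logical net, an editorial illustration rather than Burks's own formalism: a layer of primitive switches (and, or, not) feeding a unit-delay memory element, stepped through discrete time. The set/reset cell and wire names are invented for the example.

```python
# Minimal sketch (not Burks's formalism): a logical net as a structure of
# primitive switches (and, or, not) and a unit-delay memory element,
# simulated synchronously over discrete time steps.

def step(state, inputs):
    """One synchronous step of a tiny net: a set/reset memory cell.

    state  -- current value of the single memory element
    inputs -- dict with boolean 'set' and 'reset' wires
    """
    # Switch layer: combinational logic, computed instantaneously.
    hold = state and not inputs["reset"]
    new_state = inputs["set"] or hold
    # Memory layer: the result appears on the output one time step later.
    return new_state

state = False
for t, wires in enumerate([{"set": True, "reset": False},
                           {"set": False, "reset": False},
                           {"set": False, "reset": True}]):
    state = step(state, wires)
    print(t, state)   # True, True, False
```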
Goldstein:
Okay. And you actually executed these in hardware.
Burks:
Well, we didn't build our own. We studied the theory, but when people built hardware then, they first drew up those networks and then converted them into electronics.
Goldstein:
Okay. But you were just working theoretically.
Burks:
We were working theoretically, yes. There was an issue of the Proceedings of the Institute of Radio Engineers in 1953 which was devoted to computers, and Jesse Wright and I had an article there, for example, entitled "Theory of Logical Nets."
Simulations of Natural Systems
Goldstein:
Okay. What other progress did you make? Any significant breakthroughs or results?
Burks:
Well you mean going on up to the present?
Goldstein:
Okay, I mean in that era, but you can start to push forward with where your research went.
Burks:
Yes. Well, you see, that original work we did at a time when there were no computers easily available. Then when it got to our association with the Willow Run branch at the University, they got an IBM machine. We did simulations of neural networks following Donald Hebb, who was a Canadian psychologist who proposed that learning would take place by assemblies, feedback assemblies of neurons in the brain. This is a subject that's heavily studied now. People work on neural nets and try to get them to learn, and so forth. Two of our graduate students wrote theses on that, and John Holland had already worked on that at IBM on the 701. They had done the first simulations of nerve nets.
Goldstein:
I see. And when did Hebb publish?
Burks:
Hebb published around 1948.
Goldstein:
And you guys picked it up?
Burks:
Holland had picked it up at IBM around 1950, and he was working on the 701. He and Nate Rochester used the machine at night to run simulations, and then, after he came out here and worked with me, we started that activity again. Then by the middle '60s we were able to buy our own machine, an IBM 1800 for example, and a modified PDP-78 machine. We started simulations with those. Then we were interested in simulating natural systems and studying the general theory of modeling natural systems on computers.
Goldstein:
All right. And by natural systems you mean —
Burks:
Well, the heart fibrillation would be an example.
Goldstein:
I'm wondering whether "natural" has a precise technical definition. Did you mean simply something found in nature, or are there certain properties of a natural system?
Burks:
Well, there are a lot of important properties, and this is what we've emphasized. There are feedback systems, there are learning systems, there are adaptive systems, and more recently the work on the current last few grants has to do with algorithms for learning and discovery that are based on natural principles, including principles of evolution.
Goldstein:
I see. So the neural networks are a subset of the natural system.
Burks:
Yes. Right.
Goldstein:
In what ways are natural systems more general? Do they simply have wider applicability?
Burks:
What's more general about natural systems than —
Goldstein:
Than the neural networks.
Burks:
Oh. Well, the kind of neural networks that people study now, the ones you see in the literature, are pretty highly limited in the number of layers, for example. The neural networks of Hebb are not so limited in the number of layers.
Von Neumann and Cellular Automata
Burks:
Heart fibrillation is a kind of cellular automaton system, so it's related to things like the Game of Life and Von Neumann's self-reproducing automata. A lot of the work that I did in the period from '60 to '70 was working on Von Neumann's model of self-reproducing automata. He had written some manuscripts, but he hadn't finished them.
Goldstein:
Right.
Burks:
So my own work consisted of finishing them and then working on alternative designs and simplifications.
Goldstein:
Well, tell me about that work. What did you show, or where did you take Von Neumann's ideas?
Burks:
Well, he had laid down the fundamental building blocks, you might say. That is, he defined a cell with twenty-nine states, he showed how to build switches and delays and counters, and he started to show how to embed an indefinitely long storage tape in the system, but he hadn't finished. Actually, he went down kind of a blind alley. It was my task to take this incomplete manuscript, and also some lectures he delivered and notes on them, and to figure out how far he had gotten and then complete that. Jim Thatcher, for example, who is now at IBM, was a Ph.D. student, and he worked out some improved designs on how to make a self-reproducing automaton. This was a kind of logical problem and a logical result, only it's in the context of cellular automata, something people study today under the title artificial life.
Goldstein:
Right, right. And I'm looking forward to getting to that topic, but when you say designing, again you're talking theoretically. You didn't implement it in hardware.
Burks:
No, no. Yes, this might have been simulated, but it wouldn't have been built.
Goldstein:
I see. So what were some of the results from that?
Burks:
We got different designs again, proving certain theorems, and I edited a book in '70 published by the University of Illinois Press, Essays on Cellular Automata. Various people wrote papers, and John Myhill, a logician, related this to Turing machines. Stan Ulam had some results, early computer simulations in three dimensions as well as two dimensions, of what could happen in the cellular automata, which are published in that book. So I guess today it would be called artificial life. And that's essentially the same thing as we do now, only now, of course, with big, powerful computers you can do so much more with simulation.
Goldstein:
Then what happened? Well, I guess what I'm looking for is maybe an example. You say you demonstrated certain theorems or achieved certain results in the automata. Do you have any examples?
Burks:
You're thinking about cellular automata.
Goldstein:
That's right.
Burks:
Or the earlier logical networks related to hardware.
Goldstein:
No, no. I'm thinking about the cellular automata. Your work in the '60s.
Burks:
Well, these would be different results in the sense of different constructions. For example, Dick Laing showed how such a machine could read itself. It didn't have to have a description on a tape, but it could read itself and reproduce itself.
John Holland & Self-Organizing Systems
Goldstein:
Okay. I see. And then you say your research interests changed in the '70s?
Burks:
Well, yes. There's been a gradual shift. But Holland wrote his thesis on cycles in logical nets, his Ph.D. thesis, under me here at the University, and then he gradually shifted. He worked on parallel architectures. For example, he published a paper laying down a basic design and having the communication channels in this design develop according to the needs of the computation. This was a theoretical, suggestive sort of paper, but it influenced people.
Goldstein:
When did he do that?
Burks:
Pardon?
Goldstein:
When did he publish that and finish his thesis?
Burks:
Oh, he started our Ph.D. program in '57, and he finished his thesis in '59.
Goldstein:
Okay.
Burks:
Then he worked on parallel architectures, and part of his results were published in that Essays on Cellular Automata. Then he gradually shifted to studying self-organizing systems, self-adapting systems. Of course, that's another way of talking about natural systems.
Goldstein:
Right.
Burks:
And so I would say by the mid-'60s we were all working on adaptive systems and artificial life, as you would call it today.
Goldstein:
Okay. I'm trying to remember where we were in the '70s. You brought up John Holland to describe something.
Burks:
Yeah, and then he went on and by '75, he published his classic book, Adaptation in Natural and Artificial Systems, which incidentally is in the process of being reprinted by MIT Press.
Goldstein:
Okay.
Burks:
We had an M.D., no I guess he was a Ph.D. in Biology, working on simulating E. coli systems.
Goldstein:
Was John Holland supported by the NSF? Did he work with you on research grants that you had, or did he have independent funding?
Burks:
Over our history we have worked together. First he was my student and he worked on grants that I got. Then we got grants together. At one stage for several years we had the National Institutes of Health grants because of the relation of our work to biology and life. As time went on and he became a regular member of the faculty, he and I would have joint grants, and the grants we have now are joint grants. The one we're working on now is joint. He got one at one time I think directly from NSF so he could go away on a sabbatical. But by and large our grants in the last decades have been joint grants.
Goldstein:
You were talking about simulations that you were doing of natural systems, of E. coli for instance. How successful were these simulations?
Burks:
Well, I guess it depends how you measure it, because over the years we produced a bibliography, but John and I continue now. I'm emeritus, so we don't really have any Ph.D. students at the moment. Chris Langton, who is now at Los Alamos and created the term "artificial life" in his organizing sessions on artificial life, was my last Ph.D. student. But taking our productivity over all this time —
I could get the bibliography, there must be something like 250 papers published in good journals, including biological type journals as well as engineering and computer journals. And I don't know, two or three books.
Goldstein:
Can you think of any features you added to simulations of natural systems that were new?
Burks:
Any features?
Goldstein:
Right. Any new responsiveness to certain environmental conditions that may not have been in the simulation previously.
Genetic Algorithms
Overview
Burks:
Well, the systems that we're working on now have come out of John's work, but others of us have collaborated with him on what he calls classifier systems and, more originally, genetic algorithms. Genetic algorithms came out of our research. Now, as I say, we've had other support as well besides the NSF, but we've always, since '58, had NSF. And, for example, when the National Institutes of Health had a cutback and it was much harder to get grants, John Pasta from NSF came out and visited us and agreed that we should continue the work that the National Institutes of Health was not going to fund anymore, and gave us extra money and so forth. So NSF has been, I'd have to say, the core of support of everything that we've done. The genetic algorithm of John Holland comes out of this and then these classifier learning systems come out of this. The genetic algorithm is now publicized pretty often in various places, even The Economist.
Goldstein:
Could you describe the genetic algorithm?
Burks:
The idea is this. Well, it can take many different forms, but let's take a standard form. And I've written about this too. Some of my NSF reports have been about this. Let the instructions of your computer be conditional statements, statements of the form "if A and B, then do C."
Goldstein:
All right.
Burks:
And let the data of the system be simple messages, things of the same length as A, B and C. The messages are specific. They are just binary sequences. The A and the B are more general. They have what are called in the trade "don't cares." Are you familiar with that?
Goldstein:
Yes.
Burks:
Okay.
Goldstein:
It's a logical result. It's either true/false or don't care?
Burks:
Yes.
Goldstein:
Right.
Burks:
So that an A can be satisfied by many different messages. And you can actually use the fraction of don't cares to measure the generality of a condition like A. We use this parameter of generality to help drive the system. Think now of a robot which is receiving messages through its sensors, and which has effectors that can respond to messages, so that the data of the system consists of a bunch of messages. There are input messages, there are output messages, and there are also internal messages that you use in computation. And the program is an unordered set of these conditional rules. So there is no sequencing in this system, and you can't then upset the system by changing the order of the program.
Goldstein:
Okay.
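As an illustration of the rule format just described, here is a small Python sketch. It uses the conventional '#' symbol for a don't care (the transcript does not name a symbol), and the field names are invented; the matching rule and the generality measure follow Burks's description.

```python
# Sketch of classifier-style conditions over binary messages, with '#'
# standing for "don't care" at that position.

def matches(condition, message):
    """A condition like '1##0' is satisfied by any message that agrees
    with it on every position that is not '#'."""
    return all(c in ("#", m) for c, m in zip(condition, message))

def generality(condition):
    """Fraction of don't cares: 0.0 = fully specific, 1.0 = matches anything."""
    return condition.count("#") / len(condition)

# A rule of the form "if A and B, then do C", with invented field names.
rule = {"condition_a": "1##0", "condition_b": "0###", "action": "1010"}
print(matches(rule["condition_a"], "1100"))   # True
print(generality(rule["condition_b"]))        # 0.75
```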
Burks:
And actually this system operates with an excess of rules (in this sense it's derived from evolution), and in a moment we'll see how the rules compete. So if some rule is knocked out, there typically will be other rules that can do the job. This is highly parallel.
Connection Machines
Burks:
Now the best way to understand how it operates is to think of it in terms of the Connection Machine. Are you familiar with that?
Goldstein:
Sure. To my knowledge that's the largest parallel machine. Is that not right?
Burks:
Well, it depends on how you measure large. I guess the big number cruncher people like the Crays and so forth wouldn't like it.
Goldstein:
Right.
Burks:
But in a sense, you see, the Connection Machine is a cellular automaton. That is, it has a whole bunch of little processors, and they can communicate through an n-cube connection with one another, and there's also a central processor. So the Connection Machine consists of a kind of cellular automaton at the bottom, a central processor up above, and then a broadcast network so that the central processor can talk to all of the cells at once and all of the cells can talk to it at once.
Goldstein:
All right.
Burks:
We've run problems on the Connection Machine. You put the classifiers, these rules, in the cells, and then you send a list of messages to every cell. The instruction in that cell reacts to some of these messages and produces new messages.
Goldstein:
Is it a different message for each cell?
Burks:
No. Think of it in terms of cycles. In a major cycle every instruction looks at every message.
Goldstein:
Okay.
Burks:
And then the instructions produce a new message set and you throw the old message set away.
Goldstein:
Right.
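A sketch of one such major cycle, simplified to single-condition rules (Burks's rules have two conditions, A and B): the unordered rule set scans every message, the matched rules emit their actions, and the old message set is discarded.

```python
# One "major cycle": every rule looks at every message; matched rules
# emit their action messages, and the old message set is thrown away.

def matches(cond, msg):
    return all(c in ("#", m) for c, m in zip(cond, msg))

def major_cycle(rules, messages):
    new = [r["action"] for r in rules
                       for m in messages if matches(r["condition"], m)]
    return new   # the previous message set is discarded entirely

rules = [{"condition": "1#", "action": "01"},
         {"condition": "00", "action": "11"}]
print(major_cycle(rules, ["10", "11"]))   # ['01', '01']
```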
Burks:
So you are constantly transforming the message set. That doesn't mean you can't remember things, because there can be instructions that just preserve messages. Okay, so that's the lowest level of operation of the system. Now at the next level of the system it's an economy. Every instruction or classifier has some capital, or wealth or strength, we call it. We call these classifiers because they are classifying the messages into those that they use and those they do not. Before, I said that at each major cycle a new message set is generated and sent over to the next cycle. At this intermediate economic learning level, there is economic competition for messages being sent over to the next cycle. Every instruction has some capital or strength, and after it produces a new message, it makes a competitive bid to get its new message carried over.
Goldstein:
Okay.
Burks:
In other words, instead of letting the size of the message set just grow indefinitely, as it could at the first level I described, it's now constrained quite naturally to a certain memory size. If a message is carried over, then the instruction that created it has to pay those instructions which at the previous cycle gave it the input messages that it worked on. So you can think of messages as coming into an instruction as raw material and going out as a finished product. And the instruction has to pay for any raw material that it uses to make a successful message. Now of course if its message is very good and is used by others, then it will get a lot of payback.
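A simplified sketch of this economic layer (known in the classifier-system literature as Holland's bucket brigade). The bid fraction, the capacity limit, the `auction` function, and the data layout are illustrative assumptions, not the exact published algorithm.

```python
# Sketch of the economic layer: winners of the auction carry their messages
# over, pay their bid, and the payment goes to the classifiers that supplied
# their input messages on the previous cycle.

BID_FRACTION = 0.1   # assumed: each bidder offers a fixed share of its strength
CAPACITY = 2         # assumed: the carried-over message list is this size

def auction(matched, suppliers):
    """matched:   list of (classifier, new_message) pairs active this cycle.
    suppliers: maps a classifier's name to the classifiers that produced
               the input messages it matched on the previous cycle."""
    bids = [(c["strength"] * BID_FRACTION, c, msg) for c, msg in matched]
    bids.sort(key=lambda b: b[0], reverse=True)
    carried = []
    for bid, clf, msg in bids[:CAPACITY]:      # only the high bidders win
        clf["strength"] -= bid                 # winner pays its bid ...
        payees = suppliers.get(clf["name"], [])
        for s in payees:                       # ... to its raw-material suppliers
            s["strength"] += bid / len(payees)
        carried.append(msg)
    return carried

a = {"name": "A", "strength": 10.0}
b = {"name": "B", "strength": 6.0}
carried = auction([(a, "0110"), (b, "1001")], {"A": [b]})
# A pays its bid of 1.0 to B, which supplied A's input on the last cycle.
print(carried, a["strength"], b["strength"])   # ['0110', '1001'] 9.0 6.4
```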
Goldstein:
Okay.
"Natural" Intelligence
Burks:
With current computers like the Connection Machine — also we are using now a workstation that costs ten thousand dollars — you can make lots of runs of these. So over these cycles, you get a competitive economy based on an auction system. The strength of the instruction goes up and down with the success in getting messages carried over. Now suppose you put this in an environment of solving some problem, say getting from one end of a room to another end with barriers in between.
Goldstein:
An artificial intelligence problem.
Burks:
Yes. Well, I'd call it a natural intelligence problem, because we think more in terms of natural intelligence than artificial intelligence.
Goldstein:
Okay.
Burks:
They are two different names for the same thing. The artificial is a means, but the intelligence is natural.
Goldstein:
I see.
Burks:
So now think of this all in a robot which wanders around the room and meets barriers and so forth and finally gets to the other end and recognizes a certain target that it's supposed to position itself in front of, a piece of furniture of a certain shape. Then, when it is successful, you reward those instructions that were active at the end. Here you meet the problem of what's called allocation of credit. When something is solved, you know the instructions that worked at the end are likely to have been important. Now, of course, the instructions that worked earlier were also important, but in this system of feedback, each time a classifier is successful in selling its message to the next cycle, it has to pay those classifiers that fed it input messages from the previous cycle. Those payoffs at the end of a problem's solution gradually work back. And actually, one of the things we're working on now is putting in ways of getting the payback faster than step by step; in other words, having some conditional rules that can make a jump over many cycles at the same time.
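A sketch of the end-of-problem reward step just described; the payoff size and the even split are assumptions. The earlier contributors are reached only indirectly, cycle by cycle, through the supplier payments sketched above.

```python
REWARD = 100.0   # assumed size of the external payoff

def pay_reward(active_classifiers):
    """Split the external payoff among the rules active when the goal is
    reached; credit then works backward through the bucket-brigade
    payments on subsequent runs."""
    share = REWARD / len(active_classifiers)
    for clf in active_classifiers:
        clf["strength"] += share
```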
Goldstein:
It sounds like there's an enormous amount of bookkeeping to be done.
Burks:
Yes, and that's why you need a rather good machine to do this.
Goldstein:
It sounds like all the credit is allocated at the end. So how do they compete during the operation?
Burks:
Well, initially the credit is allocated at the end because you don't know what's successful until you get there, right?
Goldstein:
Right.
Burks:
You can't allocate it a priori, earlier. It's like in chess or checkers: somebody early on makes a very critical move, say a triple jump in checkers. That will in the end lead to success, but you have got to find it in an artificial system. You've got to be able to pick out that earlier triple jump and get credit back to it.
Goldstein:
And this is the way it learns, because as it experiences it assigns credit to things that are at least hopefully important.
Offspring and Selection
Burks:
Yes. Okay, now I've given you two layers of the system. The bottom layer, which consisted of these rules all operating at the same time on a data set, and the middle economic system, where they compete to get their output messages used. Now the genetic algorithm comes into play. Periodically the system looks at all of the rules, it chooses the most successful, i.e., strongest or wealthiest, rules, and it breeds them by genetic operators to produce new rules. So these become the parents. But let's say there are two instructions here. We don't need to distinguish [inaudible word], but there is a rule 1 and rule 2, and rule 1 can mutate to make offspring, and rule 1 and rule 2 can combine by what's called crossover in biology —
Goldstein:
Right.
Burks:
— by taking the front part of one and the tail of the other and vice versa. So the most successful rules have offspring, and those offspring are used to wipe out the poorest rules and replace them. All of this is done probabilistically so that the thing doesn't get stuck. That's the genetic algorithm that I started out to describe. It can be used in many other contexts, and it's used by people in many different ways now.
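A sketch of the genetic step just described: fitness-proportionate selection of parents, crossover of the front of one with the tail of the other, occasional mutation, and offspring replacing the poorest rules. The mutation rate and the starting strength of offspring are assumptions; the transcript gives no numbers.

```python
import random

MUTATION_RATE = 0.02   # assumed rate

def crossover(a, b):
    """Front of one parent joined to the tail of the other, and vice versa."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(rule):
    """Occasionally flip a position to another symbol of the rule alphabet."""
    return "".join(random.choice("01#") if random.random() < MUTATION_RATE
                   else ch for ch in rule)

def genetic_step(population):
    """population: list of (rule_string, strength). Parent selection is
    probabilistic, so the system doesn't get stuck on current winners."""
    pop = sorted(population, key=lambda p: p[1])          # weakest first
    rules = [r for r, _ in population]
    strengths = [s for _, s in population]
    p1, p2 = random.choices(rules, weights=strengths, k=2)
    c1, c2 = crossover(p1, p2)
    mean = sum(strengths) / len(strengths)
    # Offspring replace the two poorest rules, starting at average strength.
    return pop[2:] + [(mutate(c1), mean), (mutate(c2), mean)]

pop = [("1##0", 5.0), ("0#1#", 1.0), ("##00", 3.0), ("01##", 2.0)]
print(genetic_step(pop))
```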
Goldstein:
Now it sounds like this is work that you are doing currently.
Burks:
This is the work that we are doing currently, but I'm trying to emphasize that everything that we did before was a background for this work, even the work in parallel networks for computer architectures. The parallelism comes out in the system. The work in biology, biological simulations and nerve net simulations, that's the natural intelligence side of the system.
Goldstein:
Right, right. And that's why I want to be sure I have all the links at every stage of your career, so I can see where they fit in. In the '70s you were working on, let me see where I wrote this down — We have the natural systems, right?
Burks:
Yes.
Goldstein:
And then did that just glide into your current work, or was there a different, another stage?
Burks:
No, because you can think of the systems that I am describing and their various ramifications and alternatives that we consider as being an amalgam of what's known about natural systems and what's known or what we found out about artificial systems, computers.
Artificial Life
Goldstein:
All right. And how does this relate to the work in artificial life?
Burks:
Well, artificial life — what Chris Langton was doing in his thesis — he had started out by discovering Ted Codd's work on cellular automata when he was an undergraduate. Ted Codd was one of our research assistants. Now he wasn't paid by us, because he was sent here by IBM. He had worked on the IBM Stretch machine, and he and some others at IBM decided that they could use more training, so they sent these people to various universities to get Ph.D.s. Ted came here and he took as his thesis problem finding a simpler construction than Von Neumann's construction of a self-reproducing automaton.
This will give you a good example of the kind of theoretical, logical automata result that we did. And this would have been in the, what, in the early '60s. We had one other IBM student sent in the same way. These were people already with degrees who had done very useful work at IBM, and IBM wanted them to get more training as a kind of reward for the work that they had done on that computer. Codd took as his problem finding a more naturally based set of primitives for the cell than the 29 states that Von Neumann had chosen.
Goldstein:
By more natural, what were his criteria?
Burks:
Yes, I'll explain that.
Goldstein:
Okay.
Burks:
Von Neumann had essentially used the kind of building blocks that we had used in designing machines like the first stored-program computer. Ted Codd wanted to take ideas from biology, such as the laying down of a nerve and then sheathing the nerve to isolate it from the surrounding organism, and that was his object. His methodology was an exploratory use of the computing system. We didn't have our own at that time, but we had available, I think it was a PDP-1, in the physics department.
So Codd's procedure would be to partially define a cell and see what behaviors he could get out of that partial definition by running simulations. To get it to do something more, he would make a modification and see if the modification could, for example, enable the system to have a hand when it didn't have a hand before, or to send out an insulator when it couldn't send an insulator before. If that failed, then he would back off and try a different line of definition in the truth table he was building. He ended up with a naturally based automaton, we would say, that used eight states per cell.
Goldstein:
Okay.
Burks:
That should not be compared with Life, which uses only two, because Life has a larger neighborhood.
Goldstein:
Okay.
Burks:
Yes, it has a 9-cell neighborhood.
Goldstein:
That's right. And that's not the case with these?
Burks:
Von Neumann always used a 5-cell neighborhood.
Goldstein:
Which —
Burks:
In other words the square and —
Goldstein:
And its top, left, right and bottom.
Burks:
[inaudible phrase] boundaries.
Goldstein:
Right. Okay.
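The two neighborhoods under discussion, in a small Python sketch: von Neumann's 5-cell neighborhood (a cell plus its four edge neighbors) versus the 9-cell Moore neighborhood used by the Game of Life. The wrap-around edges are an assumption for simplicity.

```python
# The states visible to a cell's transition rule under the two neighborhoods.

VON_NEUMANN = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]          # 5 cells
MOORE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]      # 9 cells

def neighborhood(grid, r, c, offsets):
    """States visible to the transition rule of cell (r, c)."""
    rows, cols = len(grid), len(grid[0])
    return [grid[(r + dr) % rows][(c + dc) % cols]   # wrap-around edges
            for dr, dc in offsets]

grid = [[0, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(neighborhood(grid, 1, 1, VON_NEUMANN))   # 5 states
print(neighborhood(grid, 1, 1, MOORE))         # 9 states
```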
Burks:
Also, the construction of a self-reproducing automaton in Life is terribly complicated. Codd came up with a fairly simple one. To get back to Langton, he read Codd's book and he borrowed some money and went out and bought an Apple so he could work on this. When Langton came to me, he had finished his undergraduate degree at the University of Arizona. He wanted to work on this, and we were delighted to have him, and so then he went on from that. The kind of simulations that he did for his thesis were — I don't know whether you are familiar with the work of [Stephen] Wolfram on cellular automata?
Goldstein:
I've heard of it, but I'm not technically savvy about it.
Burks:
Wolfram was a physicist, and so he wanted to see the classes of automata that would relate to certain physical states. Langton went ahead and did this to get analogies of cellular automata behavior to gaseous states and to solid states and to liquid states and to transitions between one state and another.
Goldstein:
Working in collaboration with the physicists for his description of these states?
Burks:
He was working by himself. Well, he knew of Wolfram's work, but no, he was doing it himself under my direction, with Holland's advice, too.
Goldstein:
All right.
Burks:
You can see that it's a different kind of natural system connection. The connection of automata to physical systems does not specifically solve some problem like the weather problem or an astronomical problem, but talks about the general states of physical systems: solid, liquid, and gas.
Applications of Research
Goldstein:
Then applications are worked out later or perhaps never.
Burks:
Or perhaps never. I don't know what that artificial life will produce.
Goldstein:
All right.
Burks:
I think if you look, if you demand too many applications, too much applicability up front, you restrict possibilities.
Goldstein:
Have you been approached by more applied scientists who want to try to take your software and apply it to a certain problem?
Burks:
Yes. There are certainly a lot of people interested in using classifiers and genetic algorithms for practical problems, predicting the stock market and, well, say, controlling pipelines, oil pipelines. There was a thesis written under Holland, by David Goldberg, that had to do with the classifier system, although actually it's a genetic algorithm; there weren't classifiers in it. It was about a simulated oil pipeline system or gas pipeline system learning how to route the material over different paths, and it was able to detect when a breach in the line was introduced into the simulation.
Goldstein:
When was that work done?
Burks:
Not only did it detect the breach, but it automatically rerouted the oil around that portion of the line.
Goldstein:
When did Goldberg do that?
Burks:
Oh, five years ago, maybe.
Goldstein:
Did you, or people in your group, work closely with him when he was working on it?
Burks:
Well, John Holland was his supervisor.
Goldstein:
I wonder whether you're ever approached by —
Burks:
Not in our group. He was in mechanical engineering.
Goldstein:
Right. Is that just an example of something that happens fairly commonly, where people —
Burks:
Yes. We have always been highly interdisciplinary, and we have worked with people in the med school. I mentioned Hank Swain, and we've worked with other people in the medical school.
Graduate Program at U of M
Goldstein:
In terms of the structure of the University and your group, you say that the Ph.D. program began in '57. Now that's early. Was that a Ph.D. program in computer science?
Burks:
Yes. We didn't call it computer science. The name we adopted for the department and for the program earlier was Computer and Communication Sciences. It was started by myself and a physicist named Gordon Peterson, who was a speech analyst at Bell Labs and came out to Michigan in the speech department. He was a physicist who was trying to synthesize speech, for example, but doing basic research on speech to find the characteristics of the different phonetic sounds and things like that so all this could be automated. He had some students that didn't fit in the speech department. I had students like Holland that didn't fit into philosophy, so he and I got together with other faculty and started this program. I think it's fair to say that it's the first separate Ph.D. program in computers and communications that was ever established. It was not the first department. It didn't become a department until ten years later, and by then there were other departments, including Stanford, for example.
Goldstein:
Right.
Burks:
And of course there were Ph.D.s in computers before that in the more standard fields of electrical engineering and mathematics and physics.
Goldstein:
Do you feel that NSF had any interest in supporting you because you were establishing an educational system?
Burks:
Oh, I think so. We always emphasized in our proposal that we were training graduate students.
Goldstein:
And they responded positively to that.
Burks:
They responded positively, yes.
Interactions with NSF
Goldstein:
How much interaction with the program officers did you have in describing your research objectives? Did they have any input?
Burks:
Yes. Of course, we put in proposals, and they visited us a few times, and we visited them a few times to explain what we were doing.
Goldstein:
I see.
Burks:
I wouldn't say it was as much as once every year, but there were certainly visits. Of course, some of those people, like John Pasta, I had known from way back; we'd known each other a long time.
Goldstein:
Now you say that they encouraged your training of graduate students. Were there other things that they would like to see in your progress reports or the summaries you'd write at the end of a grant?
Burks:
Say that again?
Goldstein:
I'm just wondering if there were any particular results that they felt they would like to see in the project reports you would write or the project summaries when a grant came to its conclusion?
Burks:
Oh, they definitely wanted to know what we thought our most important results were. On some occasions, at least a couple occasions, they would ask us to write a statement of what our results were that would be understood by more of a lay audience, so that they could publicize their work generally and in particular with Congress.
Goldstein:
I see. That's largely the purpose of this chapter, not to publicize it but to describe it. I may make an effort to get hold of those reports you wrote. Do you feel they are on file at the NSF?
Burks:
Well, we certainly submitted —
Goldstein:
And then it was out of your hands?
Burks:
Well, our Logic of Computers group files went to the historical library here. I wouldn't have the ability to search them, but I would think the people at NSF who are giving us our present grant ought to have some kind of historical record. I don't know. I mean you've got to ask them, I guess.
Goldstein:
Are there any highlights of your research that we haven't covered? Is there anything that you feel you'd like to say?
Burks:
Okay. Let me look through my notes a minute, but first of all, tell me what it is you will produce. A book?
Goldstein:
Yes. Well, we're generating a full-length book. It will be somewhere between three and five hundred pages, somewhere over three hundred pages. This section in particular is a chapter describing the research that the NSF supported, and we have computer science broken down into seven sub-categories. Computer theory is one of them, and we're going to do — they're not quite case studies, because they are a little too brief for that — but we're picking some key researchers, and just in a few pages we'll describe their research, what the research concerned, some of the impact that it's had. Does that answer your question?
Burks:
Something that's commissioned by NSF?
Goldstein:
That's right. NSF is producing a full-sized history of their operations. One volume of the full history is computer science. There is also one in the biological sciences, and really every area of their operation. I think none of them are delivered yet, let alone printed. This will be one of the first.
Burks:
Is one of the motives to ascertain how much influence the NSF research has had on industry, for example?
Goldstein:
I don't know if this is necessarily a research effort to discover their significance. It's not a study, if you see what I mean. I think it may be supposed to be a straight history. But I'm not sure about that. I really can't speculate as to their motives.
Burks:
Let me make a couple points in answer to your query. First of all, I mentioned John Pasta as someone we worked with a long time.
Goldstein:
That's right.
Burks:
I don't know whether you've ever heard of Kent Curtis.
Goldstein:
Sure.
Burks:
Yes. We worked with him quite a long time. The other people we worked with were more transitory, as is the nature of the institution. That's one.
Santa Fe Institute
Burks:
But the other point I want to make concerns an activity of John Holland, and I'm also associated with it. Have you heard of the Santa Fe Institute?
Goldstein:
No, no I haven't.
Burks:
Yes. Well, this is a new research institute that's being established in the town of Santa Fe by Murray Gell-Mann, who is a physicist, as you know; Ken Arrow, an economist; and Phil Anderson, who is a physicist at Princeton. John is one of the leaders in this, and the goal of the Institute is to study complex nonlinear systems, including adaptive systems. It's interdisciplinary in that they seek common ground between physical systems and biological systems, and given the background that I have just been describing of our research, with emphasis on what John has done, you can see why he is a leader in that activity.
Goldstein:
Right. In what ways, what significant ways, is it different from the Center for Nonlinear Studies?
Burks:
The Center for Nonlinear Studies is a center of Los Alamos.
Goldstein:
That's right.
Burks:
Los Alamos is near Santa Fe, but not in Santa Fe. There are lots of informal connections. The heads of the Santa Fe Institute have been senior fellows from Los Alamos, so Los Alamos has a definite interest and plays a strong role in the Santa Fe Institute. The Santa Fe Institute is a privately endowed institute still seeking money, but it has been supported by the MacArthur Foundation as well as by the government in grants. It's looking to build up a permanent financial foundation, an endowment, and be a kind of center for this area, in some respects similar to the Institute for Advanced Study, but in other respects not. It's more deliberately interdisciplinary along the lines that I have been describing. Well, both our research and what they are doing.
Relations to Industry
Goldstein:
All right. A minute ago you asked if the NSF was interested in their influence in industry, if that's why they were sponsoring this work. Do you think so?
Burks:
Not the only reason, but one of their interests might be.
Goldstein:
Right. It may very well be. Have you ever been sponsored by industry, or have you had an impact in industry?
Burks:
Well, of course that original research was from industry, from Burroughs.
Goldstein:
Right.
Burks:
Other than that, well, John has consulted for industry off and on and so have I, but no, the kind of funding that we have had to support our research has been grant money, mostly from the government. Recently we had some money from the Kellogg Foundation, for example, and I was mentioning the Santa Fe Institute, and there's a kind of a Michigan branch of it, and it has received a small grant from a Chicago endowment. So there's been a little industrial money, but most, practically all, our money is government money.
Goldstein:
I can see some industrial interest though, in your description of Goldberg's work on oil pipelines.
Burks:
Yes.
Goldstein:
These adaptive systems. Has industry, if not supported you, at least tried to apply some of your work?
Burks:
You could learn more by talking with John, because he's closer to that than I am. I guess it's fair to say that our work has always been far enough out in front that industry would not put up money for it as ongoing research.
Goldstein:
Right.
Burks:
I'm sure they'll be using it.
Goldstein:
All right. Well, I'm very pleased with what I got here today.
Burks:
Okay.
Goldstein:
Let me again offer you an opportunity to express anything that you feel ought to be said.
Burks:
Well, I think I said everything I want to. Any portion that you wrote that relates to what we have been talking about, I wouldn't mind getting a copy if that's appropriate.
Goldstein:
Okay, thank you.
Burks:
All right, thanks. Nice to talk to you.
Goldstein:
You too.
Burks:
Yes.
Goldstein:
Goodbye.