# Oral-History:Murray Wonham

## About Murray Wonham

Murray Wonham received his B.Eng. in Engineering Physics from McGill University (Montreal, Quebec, Canada) in 1956, and his Ph.D. in Control Engineering from the University of Cambridge (Cambridge, England) in 1961. Following graduation, from 1961 to 1969, he was involved with several control research groups in the United States, including the Control and Information Systems Laboratory at Purdue University, the Research Institute of Advanced Studies (RIAS), Brown University's Department of Applied Mathematics, and NASA's Electronics Research Center. In 1970, he became a faculty member at the University of Toronto in the Department of Electrical and Computer Engineering, Systems Control. Additionally, he has held several visiting academic appointments, including at MIT, Washington University, University of Bremen, the Mathematics Institute of Academia Sinica (Beijing), the Indian Institute of Technology (Kanpur), and the Universidade Federal de Santa Catarina (Florianopolis).

Wonham's research interests include stochastic control and filtering, geometric theory of multivariable control, and discrete-event systems. For his work, he has received several awards and honors, including becoming a Fellow of the Royal Society of Canada and the IEEE, and receiving the Brouwer Medal from the Netherlands Mathematical Society in 1990.

In this interview, Murray Wonham discusses his career in control. He outlines his education at McGill University and Cambridge, as well as his research work and contributions at Purdue, RIAS, Brown, NASA, and the University of Toronto. He reflects on the development of the Wonham Nonlinear Filter, his work on discrete-event systems, and the future challenges and applications of the field. Additionally, he provides advice for young people interested in control.

## About the Interview

MURRAY WONHAM: An Interview Conducted by Peter Caines, IEEE History Center, 15 December 2014.

Interview #755 for the IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.

## Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to Oral History Program, IEEE History Center at Stevens Institute of Technology, Samuel C. Williams Library, 3rd Floor, Castle Point on Hudson, Hoboken NJ 07030 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Murray Wonham, an oral history conducted in 2014 by Peter Caines, IEEE History Center, Hoboken, NJ, USA.

## Interview

INTERVIEWEE: Murray Wonham

INTERVIEWER: Peter Caines

DATE: 15 December 2014

PLACE: IEEE CDC, Los Angeles

### Early Life and Education

**Caines:**

Hello everyone. My name is Peter Caines and it's my honor and pleasure to interview Murray Wonham today for an IEEE oral history documentary.

**Wonham:**

Well, thank you, Peter. It's a pleasure to be here with such an old friend. I think we first met in 1969. It was something like that.

**Caines:**

It was something like that.

**Wonham:**

Yeah, so pleasure to be here.

**Caines:**

Off we go with our questions in a roughly chronological order. I think my first question is do you recall your first encounters with mathematics, science, or engineering?

**Wonham:**

Well, in the engineering department I think I owe my first exposure to matters electrical to my father, who had been trained as an electrical engineer in the early 1920's at McGill University, and started his working life in that capacity. I don't remember quite how it happened originally but I do remember constructing a device which I called an Electrobox. Certainly it was electrical. It was housed in a small metal box. It had many wires tortuously interconnected with various bits and pieces. I've forgotten whether there were any resistors or capacitors. I don't think there could have been at that stage. In any case, the last item to be installed in this Electrobox was a plug-in cord. So I took this and showed it to my father proudly. I think I was about ten years old at that point. And he looked at it gravely and pronounced that it was a very interesting construction but felt that perhaps it required a little more development before we actually plugged it in.

**Caines:**

Into the wall?

**Wonham:**

For which I've been everlastingly grateful.

**Caines:**

As it had no insulation?

**Wonham:**

That's right. So that's really one of the earliest things that I can recall. And then I think at that point he got me a couple of beginner's books on electricity, sort of electricity for junior high school students, something like that. And I found them fascinating, so developed an interest in amateur radio, which became quite consuming. And I can remember building my own receiver and transmitter and actually qualifying for amateur radio status and getting on the air and becoming hopelessly tongue-tied and terrified on my first--I think they--I think we called it a QSL. That was amateur radio jargon for making a contact with somebody else. It was all done in Morse code I should say.

**Caines:**

Oh.

**Wonham:**

Morse code was something that we learned at school in the cadet corps. So all these things sort of came together. I can remember building an oscillator for use at school in Morse code practice. And it was very much a sort of hands-on thing. You know, it was the fun of building things, getting them to work. In those days of course there were no transistors. We had vacuum tubes and once you'd built something you'd have the pleasure of plugging in the tubes.

**Caines:**

Tubes.

**Wonham:**

That was the last stage of the procedure and seeing them sort of glow to life and you'd hear a hiss from the speaker. And, you know, it was a very real living kind of organism, this thing that you were working with.

**Caines:**

Right.

**Wonham:**

So that was great fun. I think the other part of your question had to do with sort of--

**Caines:**

[Interposing] Mathematics and science.

**Wonham:**

Mathematics and science?

**Caines:**

In general--

**Wonham:**

Well, again, and, you know, this developed in high school. I had forgotten just what--we had good teachers in mathematics and physics and so on. So I tended to find that side of human endeavor more interesting than Latin and history, although I did find almost everything at school very interesting. I think probably Latin and history came last not because of their intrinsic merits or demerits but rather because of the teaching quality and the fact that you couldn't quite play with Latin and history the way you could with bits of electronics. As for mathematics--so that would have come later, probably as an undergraduate. I took engineering physics at McGill, which is basically a blend of electrical engineering, and mathematics and physics in kind of one package. And, well, I remember reading wonderful classics like Birkhoff and Mac Lane’s book called I think Modern Algebra.

**Caines:**

Modern Algebra.

**Wonham:**

So back in, this would have been between 1952 and 1956. And just being struck by the beauty of that work and particularly the style of Birkhoff and Mac Lane, which really is incomparable. And then, you know, books like G.H. Hardy's Pure Mathematics.

**Caines:**

A gem.

**Wonham:**

Again a real classic.

**Caines:**

A gem, yes.

**Wonham:**

It just brings out all the beauty of the subject. So all these things tended to come together. As to systems and control, well, there wasn't any in the early 1950's at McGill. There were no courses in control. Really what we took was the basic stuff, the physics. And on the electrical engineering side I can remember studying the construction of electric motors and generators in quite some detail. We had a fairly lengthy sort of algorithm for designing these things which you went through point by point. Unfortunately we never got to build one. That would have been fun but again it might well have blown the circuits had we been allowed to do so. Well, those were some of the things that happened--

**Caines:**

[Interposing] So…

**Wonham:**

--I can remember.

**Caines:**

And directly from that, were there any authors whose writings made a special impact upon you as an adolescent or a student? I can remember us reminiscing several years ago about the influence of Norbert Wiener's Cybernetics--

**Wonham:**

[Interposing] Yes.

**Caines:**

--it had upon us.

**Wonham:**

Well, I did discover Wiener's Cybernetics in my probably senior year as an undergraduate at McGill. I didn't understand it very well but it certainly struck me as, you know, a very exciting kind of area to potentially go into. Nobody on the McGill staff whom I consulted had even read the book I think. They did tell me that somewhere in darkest Ontario there was a course being given in control but of course that wouldn't have done me any good. There was nothing where I was. But when I started thinking about graduate work and from there decided, well, it might be a nice idea to go to Cambridge where a lot of other pretty good people had gone, or I should say some pretty good people because I had certainly no certainty of being accepted at Cambridge. In any case, they did have a graduate program in control which listed among other things self-organizing systems. And I thought, wow, this is what I want to do. So the combination I guess of reading but not really understanding Norbert Wiener's book, looking at the Cambridge catalogue and reading but not understanding what self-organizing systems might mean, although one of my professors at McGill did comment when I consulted him on this that the more self-organizing systems we have in the world the better, he said. So anyway with that encouragement I did apply to Cambridge and got in and then started the next chapter that was my period of graduate study. The most influential book at just that point I think was Paul Halmos' Finite Dimensional Vector Spaces. At McGill I hadn't even had a course in matrices as an engineer.

**Caines:**

Oh, right.

**Wonham:**

It wouldn’t have been on the program and you just didn't have time to do much outside the program.

**Caines:**

Yeah.

**Wonham:**

I did a bit of classical Greek at that point.

**Caines:**

Oh, I remember that--

**Wonham:**

[Interposing] Just for fun.

**Caines:**

--anecdote of a classmate of yours that you took some of your notes in Greek or Latin. Was that apocryphal or--

**Wonham:**

That is definitely apocryphal.

**Caines:**

It shows how far a good reputation goes.

**Wonham:**

Well, exactly, yes. The, how should we say, instability of information content is something that is--

**Caines:**

[Interposing] Which we illustrated by this--

**Wonham:**

--in transmission from one human brain to the next.

**Caines:**

--by this story.

**Wonham:**

So, well, no, but I certainly enjoyed--it was then that I discovered Homer and the wonderful rhythm and driving power of Homeric verse. And that was just as interesting an experience to me as an undergrad at McGill as my proper courses in mathematics and physics and so forth.

**Caines:**

So who were the particular individuals who inspired or influenced you over this period before being--let's just say before you became a professional--

**Wonham:**

[Interposing] You mean before I--during my graduate study?

**Caines:**

Yes. Well, you mentioned your undergraduate years and--

**Wonham:**

Well, I would say that in my undergraduate years the most influential person was--what was his name now? P.R. Wallace, he was a mathematical physicist and he talked to us about people he knew, he knew directly. He knew this person, that person. And another one was Terroux. Terroux had actually been a contemporary of my father and so he was a fairly elderly man at that point, but he had worked under Rutherford at Cambridge. And told us many an anecdote about counting alpha particle scintillations on the screen and so forth, the kind of drudge work that in his day a graduate student in physics under Rutherford was assigned to do. But this was fascinating and it sort of painted a picture of Cambridge as a place where just about everybody on the street was a walking genius. And then the whole tradition and style of the place. So all these influences sort of came together and the fact, the material fact that there was a scholarship available that would take me to Cambridge for I don't know how many years as a graduate student. So applying for that and being awarded it made it all possible and so to Cambridge I went.

### University of Cambridge

**Caines:**

I remember, in a conversation we had many years ago, you made some remarks about your exposure to probability theory at Cambridge. They were a little bit skeptical, the remarks.

**Wonham:**

Oh, I see. All right. Well, there is another case where I suppose things get slightly changed in the maturing of this sort of wine. Anyhow, I had started reading up on probability theory before I got to Cambridge and thought it really quite fascinating but certainly Cambridge immediately reinforced this. Cambridge, in control under a person by the name of John Coales--

**Caines:**

[Interposing] John Coales, right.

**Wonham:**

--was very much under the influence of Wiener. It was just as, for example, MIT was at the time. So it was considered that the main business of a control system was to anticipate and compensate for random disturbances, to track random target signals. And so all this of course required that the first book you should read was Wiener's monograph, Extrapolation, Interpolation, and Smoothing of Stationary Time Series. So I dutifully bought a copy of that on day two and began to try to read it. Realized very quickly I didn't know nearly enough complex function theory to read Wiener. Now, a person from McGill, a very brilliant person, I don't know actually what his subsequent career was but he went to work with--went to Cambridge to work with--now who was the fellow who was doing radio astronomy? You know, very, very well known.

**Caines:**

Who got the Nobel Prize? [Martin Ryle]

**Wonham:**

Anyway he was--so I encountered him and complained to him that, gosh, I didn't know enough complex function theory to read Wiener's monograph. And he immediately said, "Well, nobody knows enough complex function theory to read Norbert Wiener," although he would have had a lot less trouble than I did with it. In any case, I plugged along. There was no point taking courses. Courses at Cambridge didn’t really exist at the graduate level. What did exist were lecture series. You could go and you could hear some master of the subject expounding the subject but unless you had already become quite familiar with the subject there was no hope of understanding anything.

**Caines:**

[Interposing] Beforehand.

**Wonham:**

So you were forced to read. And so the first thing really I learned at Cambridge was how to read. Up to that point I never really had to read. I simply took lectures, wrote down notes, looked at the textbook now and then, and made my way through. And that was the style of teaching in those days in Canada and probably the US.

**Caines:**

Well, you want to--the special problem with that introduction to probability theory by Norbert Wiener's book is that it doesn't contain the Kolmogorov axioms or anything like that because it's--

**Wonham:**

[Interposing] No, no, no, it doesn't. It just assumes.

**Caines:**

--all in terms of long range--

**Wonham:**

[Interposing] It just assumes that you know probability theory.

**Caines:**

Yeah, and long range averages [to yield correlation functions].

**Wonham:**

But it's in particular second-order theory, and so you can work your way through the theory of the Wiener-Hopf equation.

**Caines:**

Right.

**Wonham:**

Just without any problems.

Well, I mean all these contour integrals and things, I'd seen something like that before but had never actually seen anything like that depth. So it was quite exciting to discover that really I had an enormous amount to learn before I could even start to read what I was supposed to know about, before beginning research which was as yet undefined. The advice I got from Coales at our first interview was something like this. He gave me a list of books of which Wiener's book was one. I've forgotten the others, maybe Bartlett's. Not Bartlett's Familiar Quotations, another Bartlett who wrote a book on stochastic processes. It's quite unreadable I should say. Coales' advice was “Take a look at some of these books and come back when you have a bright idea.” Well, you--

**Caines:**

[Interposing] Very high level advice.

**Wonham:**

Yeah, excellent advice, yeah. I mean we're--but with British understatement that I hadn't quite yet learned to read, take a look at--gentlemen took a look at books. They didn't sort of sit down and take notes and try to figure out what the books were. They took a look at them. And then they came back with a bright idea, presumably acquired the night before between the first and second pint at the local pub. So this went on for a while and it became more and more difficult for me to figure out what in the world I was supposed to do there. Of course nobody ever really defined a research problem. That was the bright idea I was supposed to come up with.

**Caines:**

You were supposed to have.

**Wonham:**

Other students seemed to have research problems defined for them but presumably they were a lesser breed of some sort. Oh happy breed I thought to myself. But so, well, things went on. I went to some lectures by Paul Dirac, the great Paul Dirac who swept in, in his long black gown and so forth. And stood at the front of the class facing the blackboard, drawing little bras and kets, and muttering completely unintelligibly at the blackboard, not at the class. I mean it was fun to see him at work but I can't say that I learned anything from it. It was clear that everybody else in the class had long since mastered Dirac's book. So exactly what role they had there I don't know. But it was fun to see that for a while until I quit quantum mechanics with Paul Dirac. One of the next people I learned--I went to hear was Zeeman; E.C. Zeeman who later became of course a great proponent of catastrophe theory but at that time he was simply teaching algebraic topology, and in particular in this course knot theory. And what was fascinating about Zeeman is that he would sort of draw a knot in three dimensions and then say, well, to untie this knot I need five dimensions. And then he would proceed to untie it on the blackboard in five dimensions. So all that was great fun to watch.

**Caines:**

Great stuff.

**Wonham:**

It was like going to the circus. I mean you really had no idea what was going on. I finally got a textbook on knot theory, realized I really had to know a lot of group theory before knot theory could make any sense. So I thought, well, maybe it would be better to start with elementary group theory, which I didn't really know anything about there. So what with one thing and another, and you'd go into hall, the great hall at Trinity College founded by Henry the Eighth in 1540 something I think, and there was Isaac Newton. Well, first of all of course there was Henry the Eighth himself over high table - - and that was impressive. And then the portraits of people. I mean there was Newton and Maxwell I think, and a few--and then literary luminaries and so forth. And, well, I sort of realized that this should be terribly inspiring but in fact it was a bit at that stage demoralizing. Somehow what were you doing there was the sort of question that came to mind. Well, things would have gone from bad to worse until I met Tom Fuller. Tom was a wonderful person, a real scholarly--he was a student, Ph.D. student, but about ten years older than I. In other words he would be in his early thirties at that point. And had had, who might have known, six years' experience in industry. And so he came back to do his Ph.D. but with some half good ideas in mind as to what would be significant from the viewpoint of applications. He was trying to figure out the notion of state, what was a state space. So and he was a real scholar. He went right back and sort of started at Maxwell and worked up through Boltzmann and all these people with statistical mechanics, and then started reading the literature on control. This was the late '50's now, sort of '58, '59. And decided that people simply didn't understand what they were talking about. And it's true. You know, there was a lot of confusion that was just from the notion of state that was coming in. 
I mean people knew about the phase plane and so on but when it came to actually modeling a physical system, a control, something you wanted to control with a state description and putting in the control technology as well as the actual dynamics of the state of the motion in state space, there was a lot of confusion. People were trying to sort of follow in the path of Donald Bushaw who had done that wonderful work on relay control in the phase plane.

**Caines:**

Right, right, yeah.

**Wonham:**

Relay control of linear systems but it was a highly nonlinear kind of control of course, bang-bang control.

**Caines:**

Right.

**Wonham:**

So bang-bang control, in fact what they really wanted you to do was to combine bang-bang control with stochastic processes somehow. And so it began slowly to emerge that maybe that was something that you could get into without having read for example, J.L. Doob’s treatise from cover to cover. I think that appeared about, I don't know, 1954 I think it was.

**Caines:**

'54?

**Wonham:**

Yeah, and it was virtually unreadable by the unwashed. I mean you really had to be a mathematician. You had to have a solid grounding in measure theory. You had to know lots of stuff I hadn't a clue about. But it was there. It was something to strive for.

**Caines:**

What about the Russian literature?

**Wonham:**

There was a wonderful--sorry?

**Caines:**

What about the Russian literature at that time? Because the notion of state in principle was being developed from Lyapunov and through Pontryagin.

**Wonham:**

Well, that was just--

**Caines:**

Was it filtering through?

**Wonham:**

We were just beginning to realize that the Russians knew an awful lot about this stuff, and therefore we had to learn Russian. Well, that was sort of fun.

**Caines:**

Which you did?

**Wonham:**

Yes. Learning enough Russian to read technical Russian. I can remember my first Russian translations. They were, how should we say, creative. But in any case, that was sort of a challenge on the side. You know, a language learning exercise which I found sort of a relief from the technical stuff. So, anyway, this idea that bang-bang control maybe was a place where you could get into all of this suggested itself. And the final sort of kick in this direction was provided by Tom Fuller's telling me that they had a device in the lab. Tom never set foot in the lab. He would take you as far as the lab instead but he would never actually go in. Remember there were no computers.

**Caines:**

Right.

**Wonham:**

No computers. Well, there was a computer but that's an interesting story that comes in a bit later. There was analog equipment. They needed a source of random noise. So they had this wonderful contraption, as Tom put it, probably left over from Rutherford, consisting of a kind of brass cylinder, which went back and forth on a threaded screw. There was a handle to move this thing back and forth. At the other end was a Geiger counter. So you could adjust the rate. Oh, yes, the brass cylinder contained a lump of radioactive substance.

**Caines:**

Material.

**Wonham:**

Alpha particles [impinged on] a Geiger counter, so you had more or less a process, a series of clicks. And then the clicks triggered a multivibrator. So what you got was a random square wave with exponential holding times and a Poisson-distributed number of changeovers per unit time--

**Caines:**

So you actually generated a random telegraph wave.

**Wonham:**

Yeah, so you had Rice's random telegraph signal, which by that time I had already read about in that wonderful collection of--

**Caines:**

[Interposing] Right, in the Dover series.

**Wonham:**

--Dover series stochastic processes I think it was called with that great paper by S.O. Rice from Bell Labs.

**Caines:**

Right.

**Wonham:**

And a whole bunch of other people. There was one by Doob but I can skip that. And there was one, Ornstein and Uhlenbeck I think, and a whole bunch of people who, I realized later, were leaders in their field. But it was Rice's paper that really got me excited. Anyhow, so Rice talks about the random telegraph signal. So we have the Rice random telegraph signal and now we have this contraption that generates something that should be the Rice random telegraph signal. Well, what they wanted, I mean the people in the lab wanted, was Gaussian noise. This was far from being Gaussian but the reasoning was that if you passed it through a stage or two of RC low-pass filtering, then by waving your hands over the central limit theorem, what would come out should be Gaussian distributed, a nice sort of continuous Gaussian distributed. So I said, well, has anybody actually computed this and checked it out and nobody had. So that was really the first problem that I managed to solve in part. I took the Rice random telegraph signal, put it through one stage of RC low-pass filtering, and tried to compute the steady state distribution at the output, hoping that it would be something like Gaussian. Well, I had no idea how to do this but I could see that, well, maybe I at least could compute the moments. So after struggling for quite a while with massive multiple integrals I was able to compute all the moments of this thing. So I had an analytic expression for all the moments of this thing. Of course they didn't look anything like a Gaussian distribution at all but there they were. And so I took this to Tom Fuller and he had an uncanny ability to guess right answers. So he looked at these moments and he sort of thought about it, and after a couple of days he came back actually with the distribution. With the probability density.
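
The calculation Wonham describes is easy to check numerically today. Here is a minimal Python sketch of the same signal chain (a plus/minus-one telegraph wave whose sign flips at Poisson clicks, then one stage of RC low-pass filtering); the flip rate `mu`, time constant `T`, and step size are illustrative choices, not values from the original work:

```python
import numpy as np

def random_telegraph(mu, dt, n, seed=0):
    """Rice's random telegraph signal: a +/-1 square wave whose sign
    flips at the clicks of a Poisson process with rate mu."""
    rng = np.random.default_rng(seed)
    flips = rng.random(n) < mu * dt  # P(flip in one step) ~ mu*dt
    # Cumulative parity of the flips gives the current sign.
    return np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

def rc_smooth(x, T, dt):
    """One stage of RC low-pass filtering, T*dy/dt = x - y, by forward Euler."""
    y = np.empty_like(x)
    y[0] = 0.0
    a = dt / T
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (x[k - 1] - y[k - 1])
    return y

dt = 1e-3
x = random_telegraph(mu=2.0, dt=dt, n=500_000)
y = rc_smooth(x, T=1.0, dt=dt)
# The square wave's second moment is exactly 1; smoothing pulls it well below 1.
print((x**2).mean(), (y**2).mean())
```

As Wonham found analytically, the moments of the smoothed output are nothing like those of the Gaussian the lab was hoping for unless the smoothing is very heavy.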

**Caines:**

Gosh, that's impressive.

**Wonham:**

And so it was really fun. And so there we had this thing and then of course you had the parameter of the changeover rate times the time constant of smoothing in that product. That's a one-parameter family of probability densities and you could show that as the parameter tended to infinity, in other words the rate of the relative smoothing became infinitely heavy, indeed you were able to derive the Gaussian distribution in the limit. In fact, you were able as--gosh, what do they call that way of approximating Gaussian distributions? Gosh, it's hard to remember now. There is a name to it that's the name of it that's in every book on statistics. [Edgeworth expansion.] It was in Cramer's book on statistics which was one of the bibles I found that I could both read and profit from. And, in any case, we were able to get the sort of asymptotic behavior of this probability density as the mu-T product approached infinity. And we found a whole family--I mean depending on the value of mu-T. At the very beginning of course you would have essentially two spikes because it would be a plus one or minus one. And then as the smoothing got heavier and heavier you had sort of a concave thing like this and then a square characteristic, and then a parabolic literally square, literally parabolic. And then gradually that melted into the Gaussian as you--

**Caines:**

[Interposing] Gosh.

**Wonham:**

--went to the limit. So we wrote this up as a joint paper, about 1958, in the Journal of Electronics and Control and that was a fun thing to do. Actually, it turned out that this result had been discovered previously but not written up by a graduate student in the same department there, Doug Lampard in Australia, who had gone back to Australia and had unfortunately a rather short career there. I think he died rather young. But in any case that didn't matter. You know, we'd found it so that was okay. We published it. In the 30 years that followed there must have been half a dozen different people who independently--

**Caines:**

[Interposing] Rediscovered this?

**Wonham:**

--rediscovered this result. And each time we became aware of this of course we would send them a reprint.

**Caines:**

Right.

**Wonham:**

We never got any acknowledgements from any of these people about how pleased they were to see that we had anticipated their discovery of this thing, this phenomenon. I went on and said, well, okay, this is fine. Now, what do we have--can I compute the transition function, the time-variable transition function. And so that turned out to--then we discovered that we could compute these things by differential equations. The computation by moments was impossible really but by differential equations it turned out that the initial problem was solved in sort of two pages quite easily. And what happened next? Yes. Well, I struggled and struggled. I never--I mean I knew what every other graduate student learns about partial differential equations, which is that they always separate. And each separable component is readily integrated. Well, in this case, this was far from being the case. It didn't separate. It was sort of a bizarre hyperbolic differential equation, partial differential equation, a bit like the telegraph partial differential equation that had been solved by Kelvin. But unlike Kelvin's case it didn't separate in anything like that way. So I went to the books on partial differential equations which were very hard to read. The best one was Sommerfeld's Partial Differential Equations, in which in the first or second chapter he had said, well, one of the ways you can solve a hyperbolic PDE is by developing the Riemann-Green function. Well, that's--you know, how on earth to develop the Riemann-Green function if you're not Riemann or Green. So I played with that week after week. And it turned out that the trick was to find, to convert the PDE to an ODE by finding the right argument to substitute into the PDE. And then the whole thing would reduce. Well, this is sort of a magic thing, sort of a cross ratio of variables and so forth. And it wasn't particularly obvious how to do this.
Now, in Sommerfeld's book there were about 12 different ways of doing it and I tried each and every one. And the last one worked.

**Caines:**

[Laughs]

**Wonham:**

Turned out you could reduce the thing to an ordinary differential equation which was easily recognizable as a hypergeometric equation. And so you can then write down a hypergeometric series with this cross ratio thing involving four variables as an argument. And lo and behold that was your solution. I had been brought up at McGill in the school of thought that says if you can write down some sort of expression involving an infinite series with coefficients that are gamma functions, something else, and something else, then you've somehow solved the equation. Of course there were no computers. You had no idea what this thing looked like. And unless you were an expert analyst you had no idea how it was going to behave if you wanted to, say, take a limit of some parameters. Well, in any case I was very proud of this solution and Tom was very complimentary about it. So we wrote that up too. And so then we had these two papers on the smoothed random telegraph signal.
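
The one-parameter family Wonham describes, from two spikes at plus and minus one through flatter shapes to the Gaussian limit, can be seen in a rough simulation. The sketch below uses excess kurtosis as a crude distance from Gaussianity (about -2 for a two-point distribution, 0 for a Gaussian); the particular mu-T values are arbitrary illustrative choices, not taken from the original papers:

```python
import numpy as np

def smoothed_telegraph(mu, T, dt, n, seed=1):
    """Random telegraph signal with flip rate mu, passed through one RC
    stage with time constant T (T*dy/dt = x - y, forward Euler)."""
    rng = np.random.default_rng(seed)
    x = np.where(np.cumsum(rng.random(n) < mu * dt) % 2 == 0, 1.0, -1.0)
    y = np.empty(n)
    y[0] = 0.0
    a = dt / T
    for k in range(1, n):
        y[k] = y[k - 1] + a * (x[k - 1] - y[k - 1])
    return y

# Sweep the mu*T product (here T = 1, so mu*T = mu) and watch the
# stationary distribution's shape change.
for muT in (0.2, 2.0, 20.0):
    y = smoothed_telegraph(mu=muT, T=1.0, dt=1e-3, n=400_000)
    z = y - y.mean()
    print(muT, (z**4).mean() / (z**2).mean() ** 2 - 3.0)
```

The printed excess kurtosis climbs from strongly negative in the two-spike regime toward zero as mu-T grows, which is the hand-waving central-limit argument made visible.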

**Caines:**

And when was the second paper published?

**Wonham:**

The second paper would have been about, probably published about, let's see, 1960 probably.

**Caines:**

Uh-huh. And where?

**Wonham:**

International Journal of Control--it was called at that time the Journal of Electronics and Control. It later morphed into the International Journal of Control. But that wouldn't have been enough for a Ph.D. I had to get control into this somehow. So I thought, well, what we're going to do is use this random telegraph signal as the signal to be tracked. And we're going to take something simple like just a first order integrator with a relay control. Okay, we're going to track this thing, so you can see that your plant is sort of a first order lag. I mean at that point trying to deal with second order--because of course each time you increase the order of smoothing you increase the order of the differential equation. So you can start off with an ordinary differential equation, but at the next stage it would become partial, and so on. So it becomes pretty hopeless. Anyway, I was able to solve some of these problems at least at the ordinary differential equation level, and you could plot the results and compute the steady-state probability of zero error. And I guess the pièce de résistance in all this was a double integrator, where you had kind of a wonderful phase plane situation with bounding parabolas for the limiting--for the boundary trajectories corresponding to putting the relay full on one way and full on the other way. And then in between you'd be tracking this square wave. And, as I say, you could work out an expression for the probability of zero error. The problem was to compute this expression. Well, we simply didn't have the tools for computing anything, but luckily there was on campus a computer which was presided over by a gentleman by the name of Wilkes in the computing laboratory, in sort of the heart of Cavendish--

**Caines:**

[Interposing] Cavendish.

**Wonham:**

--lab land. But you couldn't just sort of go over there and tinker. You had to get permission from Mr. Wilkes to have your problem solved. And you couldn't do it yourself because you didn't know how to program. FORTRAN had just come in. There was one person in the department, a lady from Idaho, who knew how to--

**Caines:**

[Interposing] Idaho?

**Wonham:**

Idaho, who had been hired to come there and do FORTRAN programming for this incredible machine. The machine of course was famous in the annals of computer science. It was called the EDSAC. Maybe EDSAC 2 at that point. So it was one of the very early computers, and it probably occupied sort of this room kind of thing. And it would be equivalent to about this machine here I think in terms of computing power. And she was able to program my problem in FORTRAN. I developed an integral equation method for her to work with--you can turn the problem into an integral equation of Volterra type, and the iterations converge pretty nicely. And so she was able to do this, and she got this beautiful graph up, and that agreed fairly closely with an approximation I had already computed. And so it all worked out in the end and I finally got this wretched thesis done.
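
The numerical approach described here, iterating a Volterra equation of the second kind until it converges, can be sketched in a few lines. The particular equation below is a made-up example with a known closed-form solution, chosen so convergence is checkable; it is not the thesis problem itself.

```python
import numpy as np

# Successive substitution (Picard iteration) for a Volterra integral
# equation of the second kind. Illustrative example with exact solution
# f(t) = exp(t):
#     f(t) = 1 + \int_0^t f(s) ds
t = np.linspace(0.0, 1.0, 201)
h = t[1] - t[0]
g = np.ones_like(t)           # the forcing term g(t) = 1

f = g.copy()
for _ in range(60):
    # trapezoidal cumulative integral of the current iterate
    integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * h / 2)))
    f = g + integral          # f_{n+1}(t) = g(t) + \int_0^t f_n(s) ds

err = float(np.max(np.abs(f - np.exp(t))))
```

For Volterra kernels these iterations converge for any bounded kernel, which is what made the scheme safe to hand over for machine computation.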

**Caines:**

Marvelous.

**Wonham:**

Oh, I'll have to tell you the story of my Ph.D. oral.

**Caines:**

Right, okay.

**Wonham:**

I think that's where your tale came from, a repetition of that through several retellings.

**Caines:**

And so what happened at your Ph.D. exam?

**Wonham:**

Well, Cambridge was a very informal place in a certain sense. I should say that up to that point in my Cambridge career, about four and a half years, I hadn't had to sit for a single examination, take any kind of examination whatever. Every now and then Coales would make ominous noises about my apparently taking quite a long time. I had no defense against that. Time had passed. Anyway, got the thesis done, handed it in. And then I set up a committee consisting of two gentlemen. One was Coales and the other was a mathematician by the name of Friedlander, who was a specialist in partial differential equations and some of whose lectures I'd attended. And there I'd learned something about the characteristics of hyperbolic equations and so forth, dimly related to my problem. It was mainly Sommerfeld's book that saved my life, not these lectures. And I should say that was no fault of Friedlander's. Anyhow, so the day came, and there was Friedlander, essentially my external examiner, Coales being--and they started chatting, and I was sitting there silently waiting for a question, and it was clear that each was sort of trying to impress the other I think, and I was just sort of an incidental bit of furniture there. But, anyway, Friedlander finally got around to asking me two questions, and the first one was about this iterative method for solving the integral equation that I mentioned: "How do you know it converges?" And so I said, "Well, that's proved in appendix 17 or something." I muttered something about Volterra equations. And so he was satisfied with that. And then a moment or so later he asked, "Well, what do you think of probability theory?" It sort of threw me. You know, it's like what do you think of gravity? Well, I mean probability theory is there, isn't it? I muttered something about it having been useful for certain purposes. I mean it's a totally inane answer, but frankly I thought the question a little bizarre.

**Caines:**

An inane question.

**Wonham:**

And that was it. That was it, finished. So the whole process lasted maybe 40 minutes. That was the only exam I took in four and a half years. I don't think they're quite as laissez-faire now.

### Purdue University

**Caines:**

I'm sure they're not. So after graduation with that, with your Ph.D., then there was a period at Brown, Johns Hopkins, NASA, I believe, and tell me your reflections on that period.

**Wonham:**

Well, the first--since we're going chronologically, the next thing that happened--it turned out that at Purdue University there was a professor in the EE department by the name of Jim McFadden who taught probability theory, and he taught it in a very intuitive way, which apparently, as I learned later, created some kind of political problems with his colleagues and so on. But he had superb intuition. And the other person at Purdue whom I related to had actually visited Cambridge during my final year. That was John Gibson. And we had chatted, and anyway the result of these discussions was that I was invited to Purdue. It's sort of bang in the middle of the Midwest, in Lafayette, Indiana. Well, culturally I would say it was like going from the north pole to the south pole, going from Cambridge to Lafayette, Indiana. When was this? This was 1961, the fall of 1961. Either '61 or '62. In any case, anticommunist fervor was at its height in the US at that time. I think the McCarthy era had just about ended then, or thereabouts; I can't quite remember the dates. But I was sort of lighthearted about this kind of thing. I mean at Cambridge we didn't take these things terribly seriously. And I remember talking to a lady at some sort of staff party there, and the lady was saying, "Well, we're having this seminar," she said very earnestly, "on anti--on communism. And there is going to be this speaker and this speaker invited to it." So I sort of asked casually, "Well, have you invited any communists to address your group?" And of course she was shocked beyond measure.

**Caines:**

Horrified.

**Wonham:**

Was horrified. You know, where had I come from? What?

**Caines:**

Moscow Cambridge.

**Wonham:**

Undoubtedly under suspicion immediately. Well, I must say I found Lafayette, Indiana a bit culturally difficult. I had gotten down there by bus I think--I couldn't afford a car at that point--and I couldn't even order a beer. I mean on the campus there was no liquor at all, so you couldn't have a beer in the student union. Or you--

**Caines:**

[Interposing] Gosh.

**Wonham:**

--had been on the road for 24 hours but you couldn't get anything to drink. You could buy beer and wine, but you had to go way out of town somewhere. And you could only consume it on the east side of Lafayette. Lafayette is divided by the Wabash River into east and west Lafayette. The university is on the west side, which is the tonier side of the river. The east side is a bit slummy. That's where I had my digs. And I remember very well a staff member there, Sid Sridhar, who sadly a few years later died of kidney failure, but a very kind person. He helped me move into the little house where I was staying on the east side. And a day or two later the old farming couple who owned the house and leased out a room approached me and said, "Well, you know, we can't have any black people here."

**Caines:**

Good grief.

**Wonham:**

And I explained to them that, well, in the first place, this man was not black in their sense. He was an Indian. Well, it didn't matter. See, he was black enough for them.

**Caines:**

Yes.

**Wonham:**

And so, well, I explained that he was a prominent scientist and so forth and so on. It didn't make any difference to them. So we let that matter drop, and, well, it was clear there was a kind of cultural divide.

**Caines:**

Yeah, yeah.

**Wonham:**

But anyhow the group itself was fun. There was Sid and somebody called Zeno Rekazius. There was George Szego, an Italian, and others whose names I've forgotten. One of their big activities was trying to find Lyapunov functions. That was a kind of cottage industry in those days. People had all kinds of funny little methods of trying to generate Lyapunov functions to prove the stability of this or that kind of weird differential equation. And what you did of course was invent a Lyapunov function and work backwards to find the differential equation whose stability it would prove--

**Caines:**

[Interposing] It's reverse engineering.

**Wonham:**

Yeah.

[Laughter]

**Wonham:**

So anyway I can remember giving some lectures there on sort of Norbert Wiener-ish optimization theory, kind of in MIT style I guess. And that seemed to go down well. All this was really quite new to them at that point. So it was a fun place to be, a lively place. Although of course in retrospect it seems to have had a rather limited agenda.

Describing functions was a big thing and that was a main interest. But I guess the most significant thing to my personal history was that George Szego had spent part of the previous summer at RIAS [Research Institute of Advanced Studies] in Maryland, which I hadn't heard of really. Well, I'd heard of Kalman at that point. Knew something about the filter but not really the details. But George was going to make a trip back to RIAS at some point in the fall or early spring. Maybe it was the winter of, I guess it must have been '62. And he said, well, come along, you might find this place interesting. So I was--we went there and I was introduced to Rudy Kalman. And of course he was a very impressive figure. I think I gave a little talk on what I had been doing at Cambridge or something. Anyway, the result was that an invitation to go to RIAS the next academic year materialized.

### Research Institute of Advanced Studies (RIAS)

**Wonham:**

And so I guess in the fall of whatever it was, 1962, I found myself at RIAS in Baltimore. And that was of course a completely new world. Here was a place where there was something called system theory, control science, and it had to be pursued mathematically. That was the key to everything. So you had Rudy Kalman, Joe LaSalle, Jack Hale, an expert in stability theory, and a continual inflow and outflow of visitors who would come for anything between a day and six months. One of them was Donald Bushaw, who, it turned out--I hadn't known this--was quite an expert in Chinese studies and read Chinese. He corrected me one day when I got a name backwards in Chinese by showing me what the proper Chinese order was. People didn't know, you know.

**Caines:**

Yes.

**Wonham:**

They would publish papers and the editor of the journal would get the name backwards. So things like that. So it was all at a pretty rudimentary level. The Xerox machine had just made its appearance. So we had a Xerox machine. Of course that just made everything so much simpler. I mean that was, as they say, a game changer when it came to collecting literature. So that was exciting but of course the intellectually exciting thing was that you were expected to use mathematics properly. Well, I had then really for the first time to sit down. I had done some analysis and stuff at Cambridge for fun, everything for fun. It had to be for fun. And at RIAS in the same spirit I acquired Halmos' book on measure theory and worked my way through it doing all the exercises. So then I was in a position to--

**Caines:**

[Interposing] Another gem.

**Wonham:**

--assess the probability theory properly. So that was really the beginning of doing research in systems and probability, stochastic processes, stochastic control as it began to be called, at a proper level I guess.

**Caines:**

In a mathematically principled way.

**Wonham:**

So to be respectable and publishable in respectable mathematical journals like the SIAM Journal on Control. But maybe I can just interpolate here a sort of funny incident that happened. When I was at Purdue one day the gang decided to drive over to the University of Illinois, an hour's drive or so, in Champaign. And, well, the one thing I knew about the University of Illinois was that the great Doob was a professor there. And I had come away from my Ph.D. experience with a lingering question about how to prove that a certain process was Markov, a process which intuitively had to be Markov--it was, as we might say today, a hybrid process. It had the discrete square-wave state component together with a continuous component, its smoothed version. And it seemed to me that this pair--well, not the output itself but the pair--had to be a Markov process. But I really only had an intuitive argument for that, although it seemed quite compelling. So I thought, well, I'm going to ask if I can find Doob. I will ask him how one might prove this, if indeed he thought it was true in the first place. So...

**Caines:**

[Interposing] And you managed to track him down?

**Wonham:**

So I can remember tracking down his office and knocking and going in. And the person who was there was not Doob but apparently one of his graduate students. This graduate student was, as far as I could see, clearly a genius-level East European, a Pole perhaps, or a Czech. And this august person sort of looked me up and down and inquired rather severely what I wanted with Professor Doob. And so I sort of muttered something and muttered something. And, well, he wasn't at all sure that it would be appropriate for me to stay in the office any longer, but it wasn't in his power to throw me out. So I stood my ground and eventually Doob appeared. And he couldn't have been nicer. I explained as best I could this problem. He said it must be trivial. So he went to the blackboard and, you see, he began, "Hmm, no, trivial, trivial." And then his student got into the act too and I just sat there watching. You see, the two great minds at work. And after about three-quarters of an hour of this, "Well, I'll send you the answer," or I'll send--you know, he never did, but it did--

**Caines:**

[Interposing] Prove that the problem was not trivial.

**Wonham:**

--drive home to me the fact that from a rigorous point of view maybe this was not just a simple thing to do after all. And I've never actually tried to do it formally and rigorously since then but I'm convinced it could be done.

**Caines:**

Wow. What about that? Yeah.

**Wonham:**

But it was just great to see him working. And his super-fast mind.

**Caines:**

Yes.

**Wonham:**

And you could--although I couldn't follow his--the ramifications of his thinking, it was certainly very impressive. And what was also very impressive was the fact that he was so patient and tolerant of this ignoramus who had wandered in off the streets, so to speak, with no credentials at all, but did seem to have in his funny brain a problem of some kind. So I remember that very vividly.

**Caines:**

Yes. He--I think Doob really--I believe he's passed away--was a scholar and a gentleman.

**Wonham:**

Yeah.

**Caines:**

And in fact Sean, Sean Meyn said that he had actually formally qualified to be able to marry people. So he had a--

**Wonham:**

[Interposing] Oh, really?

**Caines:**

He had a magistrate status.

**Wonham:**

Definitely involved, sort of, on the human side of the community.

**Caines:**

Yes, that's right, yes.

**Wonham:**

Something you'd never guess from reading his work.

### Wonham Nonlinear Filter

**Caines:**

Reading his book, yes. So on the system theory developmental path of the subject and your career in it would you like to make some remarks about your work on the Riccati equation? Which I remember very, very well from my period as a Ph.D. student studying your paper amongst many on the Riccati equation. And maybe would you like to make a remark on what we call the Wonham Nonlinear Filter? Although you--

**Wonham:**

[Interposing] Yes, well, okay. Yes, the Riccati equation--

**Caines:**

Which you mainly refer to as the nonlinear filter yourself.

**Wonham:**

This was still on the stochastic side, and it was a generalized Riccati equation because I needed it to study, I think, the linear quadratic optimization problem with state-dependent noise. So you had white noise multiplying the state. So you'd have quite large noise for large deviations from zero and so on, small otherwise. I mean it was a potentially doable problem anyway. But it turned out that you had to develop some sort of iterative method of proving that the Riccati equation could be solved, by quasi-linearization, Bellman's method adapted to this particular situation. The interesting thing about that was actually that it led directly to the pole assignment problem. I needed to stabilize the linear system. I could assume the linear system was controllable. And then I wondered, well, if it's controllable, what can I do with feedback, and in particular can I stabilize it? And this seemed like such a simple-minded problem that I assumed almost everybody else in the world had solved it or knew the solution. So I went around to--where was this now? Yes, I guess I must have been at Brown at that point, because RIAS itself folded after two years, the two years that I was there, for purely political reasons I think, having to do with the zoning of the area where the--

**Caines:**

[Interposing] The zoning? Ah-hah.

**Wonham:**

--RIAS institute was located and that's kind of another story. Anyhow, getting into the Riccati equation and trying to solve this and then realizing I would like to have this story about stabilizing something if it was controllable. So I went around and talked to--I guess I was--where was I? Would I have been at Brown at that point? Went up to Boston and sort of poked around MIT. I think I talked to Mike Athans and one or two other people. And they simply didn't know that that was the case. Nobody seemed to have addressed this problem. So I sort of put the Riccati equation for a moment aside.

**Caines:**

Yes.

**Wonham:**

Worked out the pole assignment.

**Caines:**

Oh, was that a problem then?

**Wonham:**

Yeah.

**Caines:**

Yeah?

**Wonham:**

And went back to the--and then I could use the result to stabilize--because what you had to do at each stage of the quasi-linearization was stabilize a certain pair, A plus BF1 let's say, to get an F2 that would stabilize the thing. Anyway, you needed this result. And so it was quite interesting that that's where the pole assignment idea came from, having nothing to do with linear design as such but being more a technical requirement to--

**Caines:**

[Interposing] Well, that's fascinating.

**Wonham:**

--get the quasi linearization business--

**Caines:**

[Interposing] I never knew that.

**Wonham:**

--to work out for the Riccati equation.
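
The scheme described here (stabilize with the current gain, solve a linear equation, update the gain, repeat) is essentially what is now called the Newton-Kleinman iteration for the algebraic Riccati equation. A minimal numpy sketch on an illustrative double-integrator plant, not a system from the interview:

```python
import numpy as np

# Quasi-linearization for the algebraic Riccati equation
#     A'P + PA - P B R^{-1} B' P + Q = 0.
# With the current gain K_k stabilizing A - B K_k, each step solves only
# a *linear* Lyapunov equation, exactly the point Wonham makes above.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
n = 2

def lyap(F, C):
    """Solve F' P + P F = C via the Kronecker/vec identity."""
    M = np.kron(np.eye(n), F.T) + np.kron(F.T, np.eye(n))
    return np.linalg.solve(M, C.flatten(order="F")).reshape((n, n), order="F")

K = np.array([[1.0, 2.0]])               # a stabilizing starting gain
for _ in range(20):
    P = lyap(A - B @ K, -(Q + K.T @ R @ K))   # Lyapunov step
    K = np.linalg.solve(R, B.T @ P)           # K_{k+1} = R^{-1} B' P_k

# For this plant the Riccati solution is known in closed form.
P_exact = np.array([[np.sqrt(3.0), 1.0], [1.0, np.sqrt(3.0)]])
err = float(np.max(np.abs(P - P_exact)))
```

Pole assignment enters exactly where Wonham says it does: to run the iteration you need some gain that stabilizes A minus BK in the first place, which controllability guarantees you can always find.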

**Caines:**

That's--

**Wonham:**

[Interposing] Well, no. I mean it was never said, but that's how it happened. And so there was that, and all that was done I guess sort of in the mid 1960's. That was the Brown University stage. And then somehow all that seemed so interesting in itself. It led to the question, well, are there subspaces of the state space of a finite-dimensional linear time-invariant system on which you can do pole assignment? And that led to the idea of a controllability subspace. And so all that I started to develop before, or at about the same time as, my first move to NASA, as an NAS [National Academy of Sciences] Fellow on leave from Brown University. So the idea was I'd go to NASA for a year or two and come back to Brown.
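
For readers unfamiliar with the geometric language: the most basic object in that theory, the controllable subspace, written <A | im B>, is the smallest A-invariant subspace containing the image of B, and it can be computed directly. A small sketch with made-up matrices:

```python
import numpy as np

# <A | im B> is spanned by the columns of [B, AB, ..., A^(n-1) B];
# the system is controllable exactly when this subspace is the whole
# state space. A chain of integrators driven at one end (illustrative):
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

n = A.shape[0]
blocks = [B]
for _ in range(n - 1):
    blocks.append(A @ blocks[-1])
ctrb = np.hstack(blocks)                  # [B, AB, A^2 B]
dim = int(np.linalg.matrix_rank(ctrb))    # = dim <A | im B>
```

A controllability subspace in Wonham's sense is the refinement of this idea: a subspace on which, after suitable feedback, you can assign poles freely.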

### NASA

**Caines:**

And where was NASA then? That was in--

**Wonham:**

[Interposing] It was--

**Caines:**

--physically? Oh, that was in Cambridge?

**Wonham:**

In Cambridge.

**Caines:**

Oh, right.

**Wonham:**

Yeah. So I was at Brown but I moved up to Cambridge.

**Caines:**

Massachusetts we should add.

**Wonham:**

Cambridge Mass I should say.

**Caines:**

Yes, for the viewership.

**Wonham:**

In the backyard of MIT--Tech, what was it? Technology Square--and they had this office building on several floors which had sort of been turned into a control research institute under Hugo Schuck's leadership. Hugo had come from Honeywell, I think, where he'd been a vice president or something. I don't remember exactly, but he was retired from Honeywell and was doing this kind of as a retirement project I guess. And that was a fun place. I mean there was George Zames and Peter Falb, and then Steve Morse walked in one day, and Falb and--let's see. Falb and somebody, Marvin--what was his name? [Friedman] I don't remember. He was a young researcher and has had I think a distinguished career. I can't quite remember, but I haven't seen him for many years. So I was playing with these geometric ideas at that point, sort of inspired by Halmos' notion of doing things in a coordinate-free way. And the beauty of the geometry, you know--

**Caines:**

Yes.

**Wonham:**

I mean it was just beautiful to think that you could create, by feedback, a subspace in which the motion of the system would be confined.

**Caines:**

Right, right.

**Wonham:**

On which the motion of the system would be invariant. So you could sort of see it, you know. And the visual aspect of this was just quite, quite--

**Caines:**

[Interposing] Entrancing.

**Wonham:**

--compelling. And so as I say it turns out that Falb and Marvin [Friedman] were working on the decoupling problem using quite different methods. Steve Morse came in, who was there sort of--I think he was doing his military service somehow. So he came into my office and told me what those guys were doing. And he defined the problem that they were working on. I thought, well, look, we can do this geometrically, you see. So we did that, and so Steve and I published our first paper on geometric control theory. And then a second paper, to which Steve contributed a most ingenious analysis of doing things in a minimal-state compensation way, maximally economical. And so we got a lot of structure into all this and published those two papers on the decoupling problem. So it's sort of fun to think of how one almost fortuitous thing leads to another. And, well, you're just playing a kind of treasure hunt. One step leads to another and you don't really know what its ultimate objective is but it--

**Caines:**

[Interposing] Right, right.

**Wonham:**

They seemed interesting things to do at the time and so you just simply track it along.

**Caines:**

Right. And the one on filter?

**Wonham:**

Well, that was because--that I guess I did at Brown because stochastic filtering, nonlinear filtering which had been pioneered by someone called--someone whose name--R.L., I don't know his name but those were his initials. You know, the Russians were always known by their initials.

**Caines:**

Oh, right, yes.

**Wonham:**

R.L. Stratonovich in Moscow. And he had developed a stochastic differential equation for conditional probabilities. And of course nobody knew how to solve this thing, or even if it made any sense. But then a number of other people sort of jumped in and said, well, let's take this Stratonovich equation and turn it into an Ito-type stochastic differential equation. It would be sort of a partial differential equation I guess. And nobody quite knew what that meant either. But we became aware of the Ito stochastic differential equations in the case of ODEs, and so it occurred to me, well, look, why don't I try to do this? Develop a stochastic differential equation for the a posteriori probability density of the output of this famous little low-pass filter with the Rice random telegraph signal input. And we put some white noise, just for fun, on the output. So we're observing the output of the filter with the white noise added, and what we'd like to know is the conditional probability that the input square wave is at plus one or minus one. See, so it's a kind of tractable problem in the sense that everything remained nice and finite dimensional.

**Caines:**

Finite dimensional.

**Wonham:**

We didn't get into sort of partial differential equation difficulties. So I was able to--so I did this kind of as a fun exercise. There was no reason to be particularly curious about this particular configuration. It's just that I had the tools to deal with its various components. And of course I had known about the smooth random telegraph signal for years and years.

**Caines:**

For a long time.

**Wonham:**

A long time.

**Caines:**

Right.

**Wonham:**

I suppose you could say I've spent more time on that than anybody else in the world. So, anyway, with that credential, whatever it might have mattered--I had no ulterior motive in doing this, but it turned out that it was beautiful. It did work, you know, and we were able to actually compute things from the result and so forth. And so after that was published I sort of lost touch with it altogether; there wasn't any follow-up from me. And I learned quite a few years later that some people in, I think, photonics, or anyway quantum something, quantum optics maybe, had discovered that this had relevance to their interests. And I've never actually known--

**Caines:**

[Interposing] Right.

**Wonham:**

--why or how or what. And it was they who dubbed this thing the Wonham filter. It was like, oh, gosh, I didn't know I had a filter named after me. So I mean it was certainly not on the level of interest of the Kalman filter but, well, you don't turn it down if they're going to name something after you I guess.
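
The filter just described admits a compact simulation: a two-state telegraph signal observed in additive white noise, with the conditional probability propagated by an Euler discretization of the filtering equation. The rates, noise level, and step size below are illustrative choices, not values from the original paper (which treats the smoothed signal rather than direct observation of the square wave).

```python
import numpy as np

# Hidden telegraph signal s(t) in {+1, -1}, switching at Poisson rate nu,
# observed as dy = s dt + sigma dW. The conditional probability
# p = P(s = +1 | observations) obeys the filtering SDE
#     dp = nu (1 - 2p) dt + (2 / sigma^2) p (1 - p) (dy - (2p - 1) dt),
# which stays finite dimensional, the point Wonham emphasizes above.
rng = np.random.default_rng(2)
nu, sigma, dt, n = 0.5, 0.5, 1e-3, 50_000

s, p, correct = 1.0, 0.5, 0
for _ in range(n):
    if rng.random() < nu * dt:           # hidden state flips
        s = -s
    dy = s * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    p += nu * (1 - 2 * p) * dt \
         + (2 / sigma**2) * p * (1 - p) * (dy - (2 * p - 1) * dt)
    p = min(max(p, 1e-9), 1 - 1e-9)      # keep the discretized p in (0, 1)
    if (p > 0.5) == (s > 0):             # does the MAP guess match s?
        correct += 1

accuracy = correct / n
```

The drift term pulls p toward the stationary value 1/2 between observations, while the innovation term (dy minus its predicted value) pushes p toward the true state.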

**Caines:**

So the development of linear theory began in the coordinate-free setting. You had traced it back to NASA and--

**Wonham:**

[Interposing] Yes, yes, that--

**Caines:**

--and then fully developed it in Toronto?

**Wonham:**

Essentially the initial work was done with Steve Morse at NASA and then I went on and worked with other people like--I mean I left NASA. NASA in fact--I wanted to stay on as a permanent employee. They wouldn't let me do that because they said I was a Canadian and therefore a security risk.

**Caines:**

Clearly.

**Wonham:**

Yeah. So they wouldn't take any chances. It turned out the real reason was that they knew, but nobody else did, that the center was going to be closed down anyway and the whole place turned into a facility for the Department of Transportation, which is what happened three months after I left. So I left at the end of 1969.

**Caines:**

Uh-huh, yes, yes.

### University of Toronto

**Wonham:**

We moved up to Toronto at that point. And, as I say, for quite a time afterward I got many compliments about my insight and perspicacity in moving out of NASA. Of course in fact I was quite disappointed, having had no idea that it was under threat. Anyhow. But so then I went on working on this geometric stuff with people like Boyd Pearson. We worked out the regulator problem. And it was a lot of fun because essentially it was simply beautiful stuff. First of all, it was mathematically quite elementary. You didn't have to know any functional analysis. It was all finite dimensional. You didn't really use any topology. It was just ordinary linear algebra done in a coordinate-free way. And there were people who had talked about subspaces and provided some of the algebra, like Jacobson in his book on algebra. And so you worked out a lot of stuff that you needed on your own, which wasn't in any textbook, including Jacobson's. But it was very satisfying.

**Caines:**

And then I believe when I was at Toronto that the first-- was it the soft cover edition of--

**Wonham:**

[Interposing] 1974. Yes.

**Caines:**

That the textbook came out.

**Wonham:**

That's right, it came out in soft cover, yes. That was on the advice of Bal--A.V. Balakrishnan--at that point in time at UCLA, who suggested taking that route and made the wise remark that it could always be polished up later and turned into hard cover after it had seen the light of day for the first time and probably been revised. His advice indeed was very sound. So that came out in '74, and then I guess the second edition, which was the first hard cover edition, a few years later, about '78. And then finally the third edition in '85. But it seemed to catch on. There were translations into Russian and Chinese and some other language I was told about at some point which I don't remember now. But anyhow that was fun to see. And people started using it as a textbook in of course those graduate courses in control. It drove most of them crazy. You see, they were all used to doing matrices. And the idea, which I'd picked up from Halmos probably, was that you should really separate the issues of computing on the one hand and conceptualizing on the other; in linear spaces you didn't need to have a matrix until you wanted to compute numbers. So if you wanted the concepts and you wanted the algebra you didn't bother with the matrices. But of course in the engineering literature everybody was doing matrices. And as they thought about more and more complicated problems these matrices got more and more cumbersome and frankly ugly. Whereas you could say something in one line of simple abstract linear algebra which these other guys, you see, would take half a page and draw a matrix for. And so I figured, well, they could do it their way. I'll do it mine.

### Discrete-Event Systems

**Caines:**

So here we are in the transition between the '70's and the '80's, let's say. So what would you say was the key impetus that led you from linear systems to the development, with Peter Ramadge, of discrete-event systems?

**Wonham:**

Well, it--you know, very mundane. I mean there was no road to Damascus here. I think I mean it was that I was--I had spent ten years doing linear systems and I thought, well, perhaps there are other things in the world and there was more to life than linear systems. And then the other thought I had was that, well, I was teaching at that point, you see, and graduate students. I thought, well, these guys want to move closer to computers. They want to be more computer-oriented. You see? I mean it's all very well to do abstract linear algebra but they knew where the excitement was at least to them. It seemed to me that you should be a bit computer science-y I think.

**Caines:**

Right, yeah.

**Wonham:**

So it seemed to me to be quite a valid viewpoint. It was like Peter Ramadge was asking yesterday, "What are students interested in doing?" And so computers [were] the big thing.

**Caines:**

Yes, yes, yes.

**Wonham:**

So I thought, well, what can we do that's discrete, and can we take some of these ideas from linear system theory into simple automata models? So we started doing that. And then I thought, well, this is not quite working as well as I think it should. And then Peter Ramadge turned up by very good fortune. This must have been about the end of, well, probably September of '79. Yes, I guess--was that when it was? September?

**Caines:**

Yes.

**Wonham:**

September of--

**Caines:**

[Interposing] Because that was after he actually--that was after he left Cambridge where he had been with Peter--

**Wonham:**

[Interposing] Maybe it was about early 1980.

**Caines:**

Graham Goodwin and Peter Ramadge were visiting me at Harvard.

**Wonham:**

Yeah. Well, I remember and then Peter was doing that interesting stuff on the adaptive--

**Caines:**

[Interposing] And so that was his adaptive control.

**Wonham:**

--control. Yes, that's right.

**Caines:**

It was the three of us.

**Wonham:**

Yeah.

**Caines:**

And then that was really his master's degree. And then he came up to - - with you.

**Wonham:**

Well, it was a question he had--yes. He would have stayed where he was if he'd had the support, I'm sure. But it was my good luck if not his that he was on some Australian fellowship that compelled him to continue on in a Commonwealth country.

**Caines:**

Oh.

**Wonham:**

The nearest Commonwealth country being--

**Caines:**

[Interposing] Being Canada.

**Wonham:**

--Canada. And at that point there was not a lot of control research going on in Canada anyway. So the nearest point in Canada was Toronto, and so he turned up in Toronto. And so we sat down one day--it must have been--I'm confused about dates now. Anyway, I can remember quite clearly we had this discussion one day in the classroom about where he might go on--what sort of project he wanted to take for his Ph.D. And so I said, sort of vaguely, well, I think we should think about trying to control automata models. And then I had this hazy idea of Petri nets and the idea of firing transitions. And that led us to the notion of the simple technology of transitions that could be enabled or disabled. So we had that as some kind of technology. And then of course we saw that we could think of a language as a boundary of legality, as a specification device. And so with the idea of an automaton as the physical device to be controlled with this technology, and the languages providing the more abstract but neater and cleaner notion of an optimization problem of some kind, Peter took that up at once. And over the next--I don't know how long it took, two, three years, not all that long--he was able to pretty much develop the monolithic DES control theory on the basis of regular languages, the technology of enablement and disablement, and the idea of these automata as the physical things to be controlled.

And so, yeah, it turned out, I think to the surprise of both of us, that there was enough in this kind of rudimentary or primitive setup to develop a nontrivial theory. So it was a wonderful success, and it was very exciting to discover that a lot of the thinking that had gone into the linear multivariable control--Peter had taken my course on that before he started this project--was applicable in a somewhat changed, altered form, but in essence the same kind of idea. Instead of working with lattices of subspaces--actually semi-lattices of subspaces--you worked instead with semi-lattices of sublanguages, but you did essentially the same thing with them. And just as in the linear theory you found a supremal, let's say, controllable subspace--a subspace of the kernel of some output map or something that defined a legal specification--in exactly the same way you could attack the discrete-event problem. So it was just a happy happenstance that everything worked out, and worked out very beautifully. Now of course it turned out that, gosh, for anything like a realistic problem you'd have zillions of states, and we didn't have a ghost of an idea really whether we could compute anything. But anyway more students came in. This would have been about 1980-81. The Chinese were beginning--the Cultural Revolution had finished in about '78.
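The supremal controllable sublanguage Wonham describes can be computed by a simple fixpoint iteration on a finite automaton: repeatedly remove states from which an uncontrollable event escapes the legal set, or from which no marked state remains reachable. A minimal sketch in Python, using a toy encoding of my own (a transition map, an uncontrollable-event set, marked and legal states), not the notation or software of the actual theory:

```python
def synthesize(trans, uncontrollable, marked, legal):
    """Supervisor synthesis sketch in the Ramadge-Wonham spirit.

    trans: dict mapping (state, event) -> next state.
    Returns the surviving legal states, i.e. the state set of a
    recognizer for the supremal controllable non-blocking sublanguage.
    """
    good = set(legal)
    while True:
        # Controllability: no uncontrollable event may exit the good set.
        bad = {q for (q, e), t in trans.items()
               if q in good and e in uncontrollable and t not in good}
        # Non-blocking: every surviving state must still reach a marked state.
        reach = {m for m in marked if m in good}
        changed = True
        while changed:
            changed = False
            for (q, e), t in trans.items():
                if q in good and q not in reach and t in reach:
                    reach.add(q)
                    changed = True
        bad |= good - reach
        if not bad:           # fixpoint: nothing left to remove
            return good
        good -= bad


# Toy plant: at state 0 the controllable event 'a' leads to state 1,
# from which the uncontrollable 'b' leads into the illegal state 2.
trans = {(0, 'a'): 1, (1, 'b'): 2}
print(synthesize(trans, {'b'}, {0}, {0, 1}))   # only state 0 survives
```

Here the supervisor must disable the controllable event 'a' at state 0: once 'a' has occurred, the uncontrollable 'b' leads inevitably into the illegal state, so state 1 fails the controllability test and is removed.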

### Observability and Feedback

**Caines:**

Right, right, there was the opening up.

**Wonham:**

They were opening up. They were sending out some of their brightest students, of whom Lin Feng was a distinguished representative. And so he got into this too, because we saw, well, look, we've got this idea of controllability. What about observability? It would be very Kalman-oriented, I'm pleased to say. I mean what Rudy has contributed by way of an intellectual guide to the control world, no matter what you're doing, is astounding.

**Caines:**

[Interposing] Yeah.

**Wonham:**

I think his insight here is a sort of genius-level thing that he's done. Because it suggests at once what kind of question you should ask next. What can you do with feedback? If you observe certain things, how do you process these observations to make decisions about the next step in control, and so on? So, anyway, Lin Feng turned up. I remember it was a freezing cold day. He had just arrived, early January of whatever year it was. '80? Yes, I guess it would have been 1980. I can remember--so, yes, he turns up for our class. I was running a little class in mathematical logic, because we realized we had to know about these things.

**Caines:**

Right.

**Wonham:**

So there was some crazy British textbook that we were using, by a person by the name of Gries, G-R-I-E-S. A very nice little textbook, but with a lot of British humor in it, which was fun. And the exercise in this class was to take sentence after sentence from the list of exercises in the book and go up and translate them into, I don't know, Boolean logic, whatever it was. And so it came to be Lin Feng's turn at this exercise, and here he is, freezing cold and sick with jetlag, just arrived. So he's faced with the sentence "If it rains cats and dogs, I'll eat my hat."

**Caines:**

[Laughs] Translate.

**Wonham:**

Traduisez en chinois. So now he had no trouble with the cats and dogs but the eating of the hat was something he hadn't encountered before.

**Caines:**

Right, right.

**Wonham:**

Well, to his credit, he just plugged ahead, and we said, well, pretend it's something else and just do the mathematical logic, which he had no problem doing. But that's something I remember about Lin Feng's first experience of Western education.

**Caines:**

But without existential quantifiers.

**Wonham:**

Yeah.

### Future Developments and Challenges

**Caines:**

So I think what people would like to hear, as part of the culminating questions, is: what do you think is the relationship, in the future, between discrete-event system theory and the development of computer science, computer engineering, and control science? So discrete-event system theory first; and then secondly, of course, we'd like to know what you think are the most significant challenges to, and prospects for, systems and control theory and its applications in the next few years. And you made of course some remarks about this yesterday at the workshop in your honor.

**Wonham:**

Well, okay--I mean what we're doing in the current period of discrete-event systems I think is fun to do and interesting, because you can get a lot of cybernetic mileage at a relatively low cost of technical intricacy. What put me off--I mean after doing the linear system stuff one could have gone the nonlinear differential equation route and worked on differentiable manifolds and so on, as people like Alberto Isidori have done with brilliant success. And we did do a little bit of that. I had a very good student from South Africa, John Hepburn, who worked out a version of the internal model principle in that setting. But it seemed to me that, gosh, it's going to get very technical, and you're going to have to worry about every kind of singularity and this thing or that thing, which had nothing to do with my main interest.

What is cybernetically interesting here? Cybernetics--if you like, the interaction of control with information: can't we look at this in a technically simple setting where we can get some basic answers without paying such a heavy price in mathematical preparation, which might in fact not buy us anything in the end anyway? So that was a major motivation for going the discrete-event route. And it's turned out really quite successfully so far in terms of ideas of control architecture: distributed control, decentralized control. The issues that come up there still involve the same criteria that motivated the original monolithic control that Peter Ramadge and I worked out, where you had basically the qualitative optimization problem of maximally permissive behavior, together with the sort of quasi-liveness criterion of maintaining the reachability of target states, the non-blocking property. And these two criteria have turned out to be in themselves sufficiently challenging, and at the same time basic enough, to warrant carrying on with the theory in more architectural settings and in settings where you have to deal with partial observations. So I would say that the story really hasn't been fully worked out yet, even in those terms. Now, the other thing is that there have been applications of this quite independently of the group in Toronto or any group that's been exploring the theory. The notable one has been done at--it's a university in the Netherlands. I'm trying to think.

**Caines:**

Eindhoven?

**Wonham:**

Not--yes, it is in Eindhoven. It is Eindhoven, yes, and they have a group there who worked on something called a--what is it--a patient support system. All this thing does--it doesn't sound very glamorous--is move a little bed up and down and adjust the angle and everything, so that a patient strapped to it can be moved into the interior of a nuclear magnetic resonance apparatus, where apparently you have to orient them carefully and keep them quite rigid so that you can interpret the results afterwards. So this is a non-trivial control problem. A whole bunch of little motors are needed to do this. And so they wrote up a very interesting paper, and they used our stuff.

**Caines:**

Great.

**Wonham:**

For all I know there have been others. But certainly one of the inhibiting factors has been that when you do have billions and billions of states you have to have some decent way of computing things. And I'm not quite sure how they handled this problem at Eindhoven, but anyhow the computing side of it has held up the development of this from any kind of industrial point of view--even assuming that the model is adequate in the first place for industrial purposes. I mean you may want to do much more in an industrial setting than this model can deal with. You probably have quantitative features that would have to be superadded to develop any kind of practical design theory, but that's never really been a major objective of ours. However, we have tried to face the computing problem. I guess the best success there has been the work of another student from China, Ma Chuan, who adapted Harel's state--what do they call it?

**Caines:**

The model checking? The state machine?

**Wonham:**

Harel's paradigm was--what did he call it? Something like a state graph. That's not quite right.

**Caines:**

State chart?

**Wonham:**

Statechart. Yes, maybe it's the statechart. Anyhow, he adapted that into something that we called the state tree structure. Harel's exposition, I think, was exciting but not quite what we wanted for control purposes, and Ma Chuan was able to make a very successful adaptation of it. So we named it a bit differently so it wouldn't be confused with Harel's actual model. And with that he could indeed compute, by also adjoining BDD technology for the representation of control functions. So you had the state tree idea, which provided an ordering of the variables that was effective in obtaining an economical BDD representation of the control functions that finally emerged. So he was able to take our control theory and adapt it right into this structure. We're working at the moment, in fact, on a timed version of this, where you have different levels of clock speed, so to speak, at the different levels of the state tree.
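The point about variable ordering is the crux of why the state-tree idea helps: the size of a reduced, ordered BDD for the same Boolean function can differ dramatically depending on the order in which variables are tested. A toy illustration of my own (not Ma Chuan's software, just a minimal hash-consed Shannon expansion) on the classic example of paired variables:

```python
def build(f, order, assign, table):
    """Shannon-expand the Boolean function f over the variables of
    `order`, hash-consing reduced (var, low, high) nodes in `table`.
    Returns a node tuple or a bool constant; len(table) is the ROBDD size."""
    if len(assign) == len(order):
        return f(assign)                      # fully assigned: a constant
    v = order[len(assign)]
    lo = build(f, order, {**assign, v: False}, table)
    hi = build(f, order, {**assign, v: True}, table)
    if lo == hi:                              # redundant test: skip this node
        return lo
    node = (v, lo, hi)
    table.setdefault(node, node)              # share structurally equal nodes
    return table[node]


# f = (x0 & x1) | (x2 & x3) | (x4 & x5)
f = lambda a: (a[0] and a[1]) or (a[2] and a[3]) or (a[4] and a[5])

good, bad = {}, {}
build(f, [0, 1, 2, 3, 4, 5], {}, good)   # pairs adjacent in the order
build(f, [0, 2, 4, 1, 3, 5], {}, bad)    # pairs split by the order
print(len(good), len(bad))               # the split order needs more nodes
```

With the pairs kept adjacent the diagram stays linear in the number of pairs; with the pairs split, the diagram has to "remember" the values of x0, x2, x4 and grows exponentially as pairs are added. A state tree that groups related variables is, in effect, a recipe for a good ordering.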

**Caines:**

And you produced a monograph together?

**Wonham:**

We produced a monograph, and I don't know how influential it has been--I don't track these things--but anyhow it was capable of computing enormous monolithic structures. And recently Kai Cai, who originated the idea of supervisor localization, has been getting into state trees. So we have timed state trees coming up, and state trees for supervisor localization--the combination of the state tree setting and the distributed architecture, which I think is very powerful. So I think that pretty soon we will have a working model with computational support that can certainly deal with problems of industrial size. Now, whether at that point they are the problems that anybody in industry wants to solve is of course maybe a bit problematic. But you can't do everything at once, so you kind of have to wait and see what's going to happen. I sometimes ask my Petri net acquaintances whether anybody uses Petri nets, particularly, let's say, in French industry--France being the sort of epicenter, as people say, of Petri nets. And the answer is no. I mean, at least not to any great extent. And in any case you can't quite do these things in Petri nets, although a lot of progress has been made recently by Chen Yufeng in Xi'an at [Xidian University]. So the Petri net people and we have sort of swapped ideas back and forth a bit, and they're working on this as well. It does seem you can solve very large problems, but now it's a question of whether anybody is going to use it, because the people who are actually doing industrial design in industry are still pretty much using cut-and-try methods, and have not really read all this fancy literature, nor are they using these computational techniques, which need dedicated programmers to really implement.

### Hybrid System Structures

**Caines:**

Would you like to make a remark about the interface between discrete-event and continuous systems? That is, the integration of DES theory and methods into what we would call hybrid system structures?

**Wonham:**

Well, I guess my first reaction to the term and concept of hybrid systems is that everything in the world is a hybrid system. My first reaction when I discovered that people were talking about hybrid systems was probably a bit prejudicial. It seemed to me that, well, the term of course was invented by computer people. Which is fine. And the scenario that I imagined was that one day they woke up and said, "Ah-ha, we can use our computers for fun and profit, but in order to do so we have to move to the continuous world." So they discovered that the continuous world is governed by differential equations, of which the favorite one is x-dot equals u in one dimension. Now, I know people have moved well beyond that, but that was sort of the first thing that happened.

So I kind of looked askance at all this and said, well, sure, you can probably do this, yes. And certainly hybrid systems--mixtures of discrete and continuous systems--almost any system in the world is a mixture of discrete and continuous. So can you have a scientific theory of something that's absolutely totally universal and all-pervasive? Well, okay, I admit to a bit of prejudice there, and I really haven't been following the developments in hybrid systems. People are pursuing what they call hybrid systems, and since you can model just about anything in the world with a hybrid system--well, I know they're very involved in Zeno-like problems at the moment, like Tom Fuller's interesting problem of convergence down a switching curve with an infinite number of switches, which the computer people have renamed a Zeno problem. So they like to look at these Zeno problems. And all that's fine, but no, I don't really have an educated view of that, because I haven't tried to do it myself. I have enough going on in the purely discrete area. I wouldn't discourage a student who wanted to work in that direction. In fact, Mireille Broucke in our group does do that, and quite legitimately so.

**Caines:**

I just wonder if you would like to comment on my remark about the discretization or quantization or abstraction of the continuous world that one apparently needs to make in order to use DES methods?

**Wonham:**

Well, I think that this has been with us for some time, hasn't it? I mean, as soon as people got hold of a digital computer and wanted to solve Maxwell's--let's say wanted to solve Laplace's equation in some funny domain where you had to do it numerically--that was okay. There you faced the problem for the first time, and I don't see that there is any great qualitative difference in going from there to what you've just described. But, as I say, I have no experience in that area, so it would be foolhardy, I think, to try to express any opinion about it. Obviously it's a technical problem of numerical analysis that has to be faced at some point, yes.

**Caines:**

On the philosophical level, concerning systems and control theory: you remarked yesterday about the possibility of finding applications of dynamical system theory--perhaps, through robustness and enhanced notions of feedback, that there could be some more fundamental use, or some fundamental principles still to be discovered in cybernetics, which would rest upon general principles of dynamical systems. I'd like to insert that I asked Smale, when I happened to meet him for effectively the first time last year, whether he thought that dynamical systems theory had been adequately used, for instance, in systems and control. In fact, I may have put it the other way around and said, as you remarked, that systems and control theory had not yet sufficiently exploited dynamical system theory. And he made an allusion to the work of Leon Chua at Berkeley. But apart from that, I think he didn't know about the gap.

**Wonham:**

Well, I don't think it's a mainstream activity yet. That's the thing.

**Caines:**

That's right. There is a possibility--

**Wonham:**

I mean there are a host of interesting things to do. I mean in your talk yesterday of course you talked about why don't we model the planet properly for once and decide from that what we should do next. And I totally agree with that sentiment. I wouldn't quite know how to put a graduate student on it but I totally agree with it.

**Caines:**

It's a bit ambitious.

**Wonham:**

Oh, you'll do it?

### Information and Control

**Caines:**

And so--oh, and again remaining at the philosophical level: concerning the duality between information and control, which is such a fascinating idea that you referred to in the workshop yesterday, I wonder what you think the prospects are of making this more concrete, and whether you regard the progress that's been made over the last ten years in relating channel capacity to feedback stabilization as a promising avenue.

**Wonham:**

Well, that would certainly be one aspect of this. I haven't been following that work closely, so I hesitate to pronounce on it. It's a very natural thing to do--clearly, how much information you can push through from one place to another, and how fast--but it doesn't really give you an information theory where the information is married to the control objective. I mean, if you're an engineer, you ask, well, how much stuff and how many bits do I have? Then you figure out what it is you want to do first, then how you're going to encode it, and then you set up a communications link to transmit it. And you have, presumably, error thresholds which you need to meet by way of specifications. All that is true, but it doesn't seem to me quite to answer the sort of question that tends to obsess me, which is: what are we really doing when we look for and process information for a certain purpose? I mean, we know certain things at the simplest level: if you want to evaluate a function and you can make certain measurements, then on page two of every book on algebra there is the commutative diagram with a kernel inequality condition that tells you when you can do that. So that seems to me to be more in the direction of relating information to control than these other, more engineering, issues.
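The kernel condition Wonham alludes to is concrete: a function f can be evaluated from a measurement g exactly when g(x) = g(y) implies f(x) = f(y) (ker g refines ker f), in which case f factors as f = h ∘ g through some h closing the commutative diagram. A small sketch of my own over a finite domain, with illustrative functions chosen by me:

```python
def factor_through(domain, f, g):
    """Return h with h(g(x)) == f(x) for all x in domain, or None
    if the kernel condition (ker g <= ker f) fails."""
    h = {}
    for x in domain:
        gx, fx = g(x), f(x)
        if gx in h and h[gx] != fx:
            return None          # two points agree under g but not under f
        h[gx] = fx               # record the value f must take on this fiber
    return lambda y: h[y]


# The parity x % 2 is computable from the measurement x % 4
# (congruence mod 4 refines congruence mod 2) ...
h = factor_through(range(8), lambda x: x % 2, lambda x: x % 4)
print(all(h(x % 4) == x % 2 for x in range(8)))

# ... but x % 3 is not computable from x % 2: the measurement
# identifies points (e.g. 0 and 2) that f must distinguish.
print(factor_through(range(8), lambda x: x % 3, lambda x: x % 2))
```

In control terms: the measurement g carries enough information for the purpose f precisely when this factorization exists, which is the sense in which the information is "married to" the objective.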

### Advice to Young People

**Caines:**

And I think the last question should be the classic one. What advice if any would you like to give a young systems scientist?

**Wonham:**

Well, I do get questions like that whenever I go to China of course.

**Caines:**

Of course.

**Wonham:**

They don't seem to ask those questions here. Well, maybe because they're here you'll just tell them what to do anyway, but in China, of course, what they really want you to tell them is how to write three papers a month [laughs]. And so the kind of answer I give them is really quite useless to them and very disappointing. The first requirement here is to be curious. If you're not curious, go and sell used cars, but don't meddle in research, because research is not fun on the whole. It's agonizing. It's painful. That's the first thing. So be prepared to find it painful. It has its rewards, but you have to suffer the pain first. But the main thing is you have to be curious. I think you have to say to people that the most important things in doing research are curiosity and doggedness. I mean, there are no flashes of genius that most of us ordinary mortals are ever going to experience. Any intuitive hunch you might get about the next step to take on something is the result of simply being curious enough--foolish enough, if you like--to give up this weekend, when other people are going out and having fun, and sit in your office struggling over something you don't understand. So what you have to do is this. You have to say, "What is it I don't understand that I want to understand?" And you have to be very honest with yourself and say, "Well, look, I am very stupid. Everybody else in the world understands this except me. I have to sit down and try to understand it for myself in my own way." So you have to be curious about something. You have to want to understand it. You have to recognize that indeed everybody else may understand it but you don't. And so you'd better sit down and try to do that. And if you're willing to pay the price of, as I say, foregone weekends for that apparently bizarre purpose, then that's the essence of research.

Now, people don't like to hear this, you see. They want to know how to write papers. I say, don't even ask that question. Don't even think about that.

**Caines:**

Right.

**Wonham:**

Until much later. So that's, I have found, the only honest thing I can tell them.

**Caines:**

Thank you very much indeed.

## Contents

- 1 About Murray Wonham
- 2 About the Interview
- 3 Copyright Statement
- 4 Interview
- 4.1 Video
- 4.2 Early Life and Education
- 4.3 University of Cambridge
- 4.4 Purdue University
- 4.5 Research Institute of Advanced Studies (RIAS)
- 4.6 Wonham Nonlinear Filter
- 4.7 NASA
- 4.8 University of Toronto
- 4.9 Discrete-Event System
- 4.10 Observability and Feedback
- 4.11 Future Developments and Challenges
- 4.12 Hybrid System Structures
- 4.13 Information and Control
- 4.14 Advice to Young People