First-Hand:Reminiscences on My Career in Control
Seminar given by Elmer Gilbert, March 24, 2017
Thank you, everyone. I'm going to start the introduction just a tad early so that Elmer has a full complement of time. I guess everybody knows why we are here, because we don't get the absolute record attendance at this seminar by accident.
To hear Elmer Gilbert reminisce on his career, and what a career it has been. So many of us here in the room are fighting for these little bitty prizes, the low-hanging fruit. Some of them, like Semyon, recently made the National Academies [Foreign Member of the Russian Academy of Sciences]. Elmer was probably in the National Academy before I was born. I was born in 1957, and that is the year Elmer got his PhD here. He has all of his degrees from the University of Michigan. This is Go Blue 100%. Something that's been very in vogue for the past 15 years is what? Doing startups, entrepreneurship. Elmer created Applied Dynamics, which is still a very successful company. And it was '57 or '58. [It was 1957] I was either one year old or zero. I'll just read a few of his awards. He won the Bellman Control Heritage Award, which is just this gigantic thing, in 1994 and 1996. He's a member of the National Academy of Engineering. He's a Fellow of the Institute of Electrical and Electronics Engineers, which is something that many of us treasure as something we fought hard to obtain. Elmer just won things along the way. MPC is everywhere. All of the fundamental math for that was done by Elmer. Search engines are using optimization all the time to do so. His students developed that. All of these things that almost nobody knows about because they're kind of running behind the engines of everything, Elmer thought about them 20 to 25 years before anybody else knew that they were important. So with that, Elmer, I'll let you come to the stage. And we'll get going.
You'll pardon my use of the stool. It's not a barstool, but I need some kind of, so to speak, structural support.
We will take you to the bar joints later tonight.
Static equilibrium is required here. So that's why I'm sitting here. Can everybody hear me okay?
Well, it's a delight to be here. And I hope I can say something that will be of interest to you. This seminar has been going on, on a regular basis, for over 30 years. And I have been a good attendee of that seminar. And it's nice to be up front for a change. Probably with this crowd, maybe I don't have to worry about ever doing it again. I want to thank Jessy for pushing me a bit into this. People have been thinking about having Elmer say a few words about his career for a long time, and I kind of resisted doing anything about it. But Jessy pushed me over the precipice, so to speak, and so here I am. And I also want to thank Ilya Kolmanovsky for helping me put all these materials together. It certainly makes the show run more smoothly. And I want to acknowledge the support of Lois Verbrugge, my wife. She's a distinguished social research person, a demographer. And she always has good things to say about my talks and writings. And so, she's contributed to that. I must say, however, that there are occasions still where she wonders why she ever married an engineer. But I treasure her and what she has done to support me in all of my various things. I want to get started by going back 70 years, that's 1947, and what was high-tech then. First of all, I wanted to briefly say what instrumentation engineering is, because that's what my doctorate was in. It was a non-departmental degree program that grew out of the various things being done at MIT under Dr. Draper, the head of the lab in those early years and the guy who first flew airplanes from Boston to Los Angeles with no inputs other than getting started. So these ideas of measurement, communication and control were in that area. And as you can see, if you look at the courses available in 1953, they involve all the kinds of things that we now consider to be the control area. And so, it was my great good fortune to have found this group of people that I could work with. And there are three there that I want to mention.
Myron Nichols, who was the first person; he came in '47, as you'll see, 70 years ago. And Larry Rauch, who worked under Solomon Lefschetz at Princeton, the world-famous, very distinguished mathematician. He is the person who really got me started thinking mathematically instead of kind of straight engineering and such. And Robert Howe is the other person who contributed very much to me as a mentor during those years. His enthusiasm about analog computers and learning about the dynamics of aircraft and such was all a part of my beginning. So, I just wanted to give you some feel for where it all began.
Here's high-tech in 1947. It's an analog computer, a four-amplifier analog computer. And it's at the same time that most other people first learned about this particular way of solving differential equations. There was a famous paper that came out of Columbia University, Ragazzini being the lead author, where he proposed these particular ways of solving differential equations in 1947. The research group here was kind of motivated by this guy Myron Nichols (he was on the previous slide). They built these things and played with them, and that was enough to get things really going. And so, it first turned into small computers for laboratory use by students. And people would come and look at this small box. We had maybe four or five of them around. You'd go to that small box and you could solve differential equations, and the solution appears right there. And some people couldn't believe it. It's a miracle. Well, it was a miracle in the sense that it was an inexpensive way of really solving differential equations in real time, which is something the digital computer couldn't come close to doing at that time. As time went on, we built up facilities in our department. And this is where they were in 1955, eight years later. And you can see a lot of stuff, almost all of it made in the department. And I had a lot to do with that. And if you look at these circular dials that you see there, those are servo multipliers. You put one variable into a servo and then you have a multi-ganged potentiometer. And that way, you can take a single variable and multiply it by many others. I was involved in the design and building of that particular piece of hardware. There are other pieces of it that I can see there, but that gives you some idea of where we were at. And that facility was big enough, close to a hundred amplifiers I believe, to solve the full six-degree-of-freedom dynamic equations of an aircraft.
So, Bob Howe, my close friend, was very much involved in that sort of research with this sort of facility. I just want to say a few things about analog computers and what they meant during that time period. It's when, for the first time, people could simulate dynamic systems of some complexity and learn from that process. No longer did they have to try to get some kind of approximate solution or very low-order approximation or whatever. They could get realistic simulations of real systems. And so, it's during this period of the '50s and '60s, when the analog computers were way ahead, that people got interested in dynamic system simulation, which now, of course, is everywhere; every technology we can think of uses that kind of approach. And the analog computers of that period had a profound effect on control engineering because, for the first time, you could really see, as part of the design process, where you were going. You could get real solutions of proposed designs and such.
And then I want to just at the end say how you program an analog computer. Well, you have a bunch of integrators. And each integrator has to have a single input. Well, that input is x dot equals f of x and u. So, people who did analog computers already knew about state space. They didn't have to get it by some other, so to speak, motivation. Because all of this went well and we learned how to do both the technology and its applications, in 1957, some of us founded Applied Dynamics International. At that time, it was Applied Dynamics Incorporated. But several name changes, with respect to the last word at least, took place. And it was Bob Howe, Edward Gilbert, Elmer Gilbert and Jay King, a lab person in the department at that time. And yes, there are two Gilberts. Ed Gilbert is my identical twin. We worked together on all kinds of engineering-like problems from our high school years right up through the work at the university, until Ed joined Applied Dynamics as chief engineer in 1963. The company still exists. It does the same kind of thing, hardware-in-the-loop sorts of things, and provides services to people who are interested in that kind of activity. Its initial objective was to design computers for educational purposes. This computer up here was the first one that we produced. I designed much of the stuff that's in there, along with my brother and others. And then we got into bigger computers that had patch boards so you could remove your programming interconnections. And then we got to even bigger computers, an 80-amplifier machine. And then we got up to 256 amplifiers in a big computer, the sort that was used primarily in the aerospace industry to simulate and put into the loop real hardware. It certainly was a big step forward.
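The analog-programming observation above, that each integrator's input is x dot equals f of x and u, translates directly into digital simulation of a state-space model. Here is a minimal Python sketch of that idea; it is an illustration, not anything from the talk, with a forward-Euler loop standing in for the integrators and a made-up damped second-order system as the example.

```python
# Sketch of the analog-computer view of simulation: one integrator per
# state variable, wired so that x' = f(x, u). A forward-Euler loop stands
# in for the integrators; the system f and input u are illustrative.

def simulate(f, x0, u, dt, steps):
    """Integrate x' = f(x, u(t)) with forward Euler from x0."""
    x = list(x0)
    t = 0.0
    for _ in range(steps):
        dx = f(x, u(t))
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

# Example: a damped second-order system x1' = x2, x2' = -x1 - x2 + u.
def f(x, u):
    return [x[1], -x[0] - x[1] + u]

# Zero input, initial condition (1, 0), 10 seconds: the state decays.
x_final = simulate(f, [1.0, 0.0], lambda t: 0.0, 1e-3, 10000)
```

Each state variable is exactly one "integrator", which is why analog programmers already thought in state-space terms.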
That big computer I just showed you was developed around 1960, and this one came out around 1966. It was the top technology ever built for analog computers. It was an amazing machine. I was involved in all kinds of things in that machine, such as this area here where you patch together logic components to form interfaces with hardware and digital computers and such. So, a big computer, really a nifty thing. It lasted on the marketplace for perhaps another 10 years, but that was about the extent of its livelihood.
We developed other things too, like the terminal that you see there. I had things to do with that design as well. And then eventually, ADI developed a line of digital computers that were especially adapted to the real-time simulation world. At the time, they were the fastest thing going for that sort of thing. Around the end of the 1990s, they ended the development of special-purpose computers of that sort. Well, enough of that. I guess that's just to give you a feel for where we've been.
I'd like to say a little bit about one of the results that I'm most well-known for. It involves both me and Rudolf Kalman, who sadly passed away about a year ago. I started out in '59 thinking about multivariable control systems, with more inputs and outputs than single-input, single-output. And I saw people throwing transfer function matrices around randomly. They were doing any manipulation you might want. And somehow that didn't make sense to me. Something had to be wrong with that. So that's what I was doing in '59. Kalman was just getting ready to publish his first Kalman filter thing, discrete time. And he also submitted a paper to this very famous 1960 Moscow Congress on Automatic Control. And that was primarily emphasizing observability and controllability. So, I read that early stuff in early 1960 or sometime around then, and I tried to figure out what it meant to me. As I have commonly done ever since, I tried to find a real simple example where I could see very clearly what's going on. What I decided to do was to take the A matrix and let it be diagonal. And we know that that's okay if you have distinct characteristic roots, by changing the coordinate system, which doesn't change anything qualitative. You can look at that kind of system and understand it for all systems making that kind of assumption. So, everything gets simple when you do that. Controllability means you have to have a nonzero coefficient that connects the control input in one way or another. And to observe, you have to have a nonzero coefficient as well. So, you can then compartmentalize the system into these four distinct entities having to do with observability and controllability, the canonical form. Kalman, at the same time, and I'm not quite sure what got him started on this, did things very abstractly and generally. I give the guy a lot of credit for setting up a structure which is really profound.
And what he did is he partitioned the state space: what part is controllable, what part is observable, and then what part is both observable and controllable, et cetera. So, you go from two to four pieces, and it's that four-piece thing that gives the canonical form. So, it was a paper done with time-varying systems. It was done in great generality. Then what happened is in 1961 (well, let me actually go over to the Kalman side first). Kalman submitted a paper to the National Academy of Sciences, and it was done through Solomon Lefschetz. I don't know how papers were submitted during that period, but apparently it helps to have it done by somebody who's already a member of the National Academy of Sciences. That paper was submitted in early 1961 and published in 1962. Then Kalman arranged a special session of the first CDC meeting in 1961 for the kind of stuff that he and I were doing. And sadly, it got canceled because it wasn't done under quite the usual standards of how you set up a session for paper presentations. So, I didn't get my work published as soon as he did. He got his work published in April of '62. And because of that delay, we both submitted papers in 1962 to the SIAM Journal on Control, and they appeared together in the same issue. So, the question some people ask is who did it first. And I don't care who did it first. What really counts is that we had two attitudes for looking at these issues. And what I did was very much favored by people new to the field because they could understand it in simple terms. And many people really don't know about the National Academy of Sciences paper of Kalman because for them it was in an obscure place. Kalman's different attitude added very much to the literature.
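The four-part compartmentalization described above can be made concrete in a few lines. This is a minimal Python sketch under the diagonal-A assumption from the talk (distinct eigenvalues), where mode i is controllable exactly when row i of B is nonzero and observable exactly when column i of C is nonzero; the B and C matrices below are illustrative, chosen so all four parts appear.

```python
# For x' = Ax + Bu, y = Cx with A diagonal and distinct eigenvalues:
# mode i is controllable iff row i of B is nonzero, and observable iff
# column i of C is nonzero. That splits the state into the four parts
# of the canonical form.

def classify_modes(B, C):
    """B: list of rows (one per mode); C: list of rows. Returns a label per mode."""
    labels = []
    for i in range(len(B)):
        c = any(abs(b) > 1e-12 for b in B[i])        # row i of B nonzero?
        o = any(abs(row[i]) > 1e-12 for row in C)    # column i of C nonzero?
        labels.append({(True, True): "controllable & observable",
                       (True, False): "controllable only",
                       (False, True): "observable only",
                       (False, False): "neither"}[(c, o)])
    return labels

# Illustrative 4-mode system exhibiting all four parts.
B = [[1.0], [1.0], [0.0], [0.0]]
C = [[1.0, 0.0, 1.0, 0.0]]
labels = classify_modes(B, C)
```

Only the "controllable & observable" part shows up in the transfer function, which is the point of the canonical form.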
But a lot of people have told me that their initial motivation for going into control systems was that paper, because it gave them some clue about some of the more abstract things that could be done that were of interest. Now I'd like to jump into the area that I did a lot of work in with students over the years, having to do with computation of optimal controls. And what I'd like to point out here is that I'll talk about an optimal control problem that is linear in the state, but everywhere else is as nonlinear as you want it to be. No particular assumptions on the control set except that it's compact. And you want to minimize the terminal error, just to make it simple. There are many optimal control problems that can be set up in a similar fashion.
One of the things you begin to think about when you think about where am I going to go with control is the reachable set, the set of all states you can go to at a given time, let's say greater than zero. And if you can understand what goes on there, maybe you understand things in a more general, perfect way. So, what turned out is that this reachable set is compact and convex, and you'd never quite expect that from the fact that the g function is nonlinear as you would wish. So that makes it amenable to some sort of rigorous approach to computing optimal controls. So, in fact, if you think about it, if you look at the reachable set and just call it X, all you want to do is to minimize the norm of x squared on a given compact, convex set. Well, that's a lot easier than solving the optimal control problem, because that's in a finite-dimensional space where the optimal control problem is in an infinite-dimensional space. The question is, what can you do to learn about the optimal control problem that will give you the tools to be able to compute in an n-dimensional space rather than an infinite-dimensional space. The clue is you want boundary points of this reachable set because, clearly, they're going to be where the optimum will occur. If you have a normal vector eta to a hyperplane in n-dimensional space and then find the particular point where that hyperplane is farthest out on the convex set, that's the support point s of eta for the set X. Well, if you can compute that, you at least have a clue about how you can use that particular bit of information, maybe, to solve an optimal control problem. I called it a contact function. I had to look at how you compute the solution of this quadratic programming problem on a weird set where you could compute the support mapping, but you couldn't compute anything else. So, I developed a whole algorithmic procedure with convergence and all kinds of error measures and such. Then if you take that and apply it to the original control problem, you can do it.
The only question is how you compute that support mapping. Well, it turns out it comes from Pontryagin's maximum principle. And the nifty thing about the linear dynamics is you can turn the two-point boundary value problem into an initial value problem and, therefore, you can solve it very easily. So, it's very easy using maximum-principle-type controls to generate these contact points in a set that's otherwise unknown. And so, it then led to computational procedures. Basically, we had a nifty way of solving a two-point boundary value problem with an iterative procedure that was rigorously guaranteed to converge. So, then I should say that the way in which the algorithm works is you start with a point x sub k. You then find the support point for that x sub k. You then have two points, x sub k and the support point. Take the line segment that joins them and then find the nearest point to the origin on that line segment. And that's the algorithm. It's very easy to state and kind of intuitively understand. What I'd like to emphasize is that getting at something that seems abstract in the beginning is a pretty good idea. That is to say, to think about the reachable set and to think about what you might compute on it, and it changes the whole nature of this problem. So, this idea of abstraction and looking for something weird, like the support mapping, is how one can sometimes make progress.
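The iteration just stated can be sketched in a few lines of Python. This is an illustrative version for the special case where the convex set is a polytope given by vertices, so the support mapping simply picks the vertex maximizing the inner product; the vertex data are made up, with the nearest point to the origin at (1, 0).

```python
# Sketch of the iteration described above: minimize |x|^2 over a compact
# convex set known only through its support mapping. For a polytope, the
# support mapping just picks the vertex maximizing <eta, v>.

def support(vertices, eta):
    return max(vertices, key=lambda v: sum(e * vi for e, vi in zip(eta, v)))

def nearest_on_segment(a, b):
    """Point on the segment [a, b] closest to the origin."""
    d = [bi - ai for ai, bi in zip(a, b)]
    dd = sum(di * di for di in d)
    if dd == 0.0:
        return list(a)
    t = max(0.0, min(1.0, -sum(ai * di for ai, di in zip(a, d)) / dd))
    return [ai + t * di for ai, di in zip(a, d)]

def min_norm_point(vertices, iters=2000):
    x = list(vertices[0])
    for _ in range(iters):
        s = support(vertices, [-xi for xi in x])  # support point toward origin
        x = nearest_on_segment(x, s)              # step to nearest point on segment
    return x

# A square shifted off the origin; the nearest point to the origin is (1, 0).
x_star = min_norm_point([(1.0, -1.0), (1.0, 1.0), (2.0, -1.0), (2.0, 1.0)])
dist = sum(xi * xi for xi in x_star) ** 0.5
```

In the optimal control application the support point comes from a maximum-principle control instead of a vertex scan, but the iteration is the same.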
If you want to do something else, compute the distance between two compact convex sets, X and Y, as indicated here in this little figure. You can do that using what's called the Minkowski set difference. The difference set is the set of differences x minus y, for all x in capital X and all y in capital Y. And moreover, what ends up being a happy consequence is that you can get the support mapping for this new set, this difference set, by computing the support mappings of each of the two sets and then taking their difference. It means you can compute the distance between two compact convex sets using the algorithm that I developed for the optimal control application. How is that useful? Well, what about robotics and computer graphics? You've got objects in three-space, described by some kind of functional form, the most common one being vertices. You have the set of vertices. If you want to compute the support function of a polytope determined by a set of vertices, it's easy to do. It just involves computing the inner product between the eta and all of the possible vectors that you have in the vertex set, and then you get your support function. So, it's easy to compute. That suggests that maybe we can solve problems in robotics and computer graphics. And yes, the answer is we can do that sort of thing.
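That difference-set trick can be sketched directly: the support mapping of Z = X - Y is obtained from the two sets' own support mappings as s_X(eta) - s_Y(-eta), so the min-norm iteration runs on Z without ever constructing it. A self-contained illustrative Python sketch, with two made-up unit squares two units apart:

```python
# The support mapping of the difference set Z = {x - y : x in X, y in Y}
# is s_Z(eta) = s_X(eta) - s_Y(-eta), so the distance between X and Y is
# the distance from Z to the origin, and Z never has to be built.

def support(vertices, eta):
    return max(vertices, key=lambda v: sum(e * vi for e, vi in zip(eta, v)))

def support_diff(X, Y, eta):
    sx = support(X, eta)
    sy = support(Y, [-e for e in eta])
    return [a - b for a, b in zip(sx, sy)]

def distance(X, Y, iters=2000):
    x = support_diff(X, Y, [1.0, 0.0])           # any starting point of Z
    for _ in range(iters):
        s = support_diff(X, Y, [-xi for xi in x])
        d = [si - xi for si, xi in zip(s, x)]
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break                                # support point equals iterate
        t = max(0.0, min(1.0, -sum(xi * di for xi, di in zip(x, d)) / dd))
        x = [xi + t * di for xi, di in zip(x, d)]
    return sum(xi * xi for xi in x) ** 0.5

# Two unit squares two units apart; their distance is 2.
X = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
Y = [(3.0, 0.0), (4.0, 0.0), (4.0, 1.0), (3.0, 1.0)]
d = distance(X, Y)
```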
The result was this paper that is now popularly known as GJK: Gilbert, Johnson and Keerthi. And so, we used that set difference thing that I showed you and all of the things we knew about speeding up the algorithm, that is, the distance algorithm that I talked about, and then being very perceptive about how you could best use all these features, and doing it in a rigorous way, very carefully [creating] the paper. Well, the devil is in the details of that one. But it does show that if you work hard on something like that, you can often go much farther than you'd ever expect that you could go. So GJK has been in common use. I don't know how commonly used it still is. My guess is it's still the standard way of computing distances between objects in three-space. Well, it didn't end there. That was 1988 [when] we did that. In 2001, we did still a further extension, and that was, suppose you know what the adjacency of the vertices is. You have a table. For every vertex, you can find all the other vertices that are connected to it by a line segment. If you have a table of that, you can basically reduce the cost of evaluating the support functions very much, particularly when you have very many vertices. So that, again, speeded things up. And again, I hear that it's been very successfully used. I just wanted to give some idea about how you can begin with something, and by focusing on what you don't know and thinking about variations on that, one can make progress in solving difficult problems of a type that you didn't really start out trying to solve. There they are, Gilbert, Johnson and Keerthi. They're both good friends. And Keerthi now works for Microsoft in the San Jose area. Johnson is retired, and I'm retired.
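The adjacency-table speedup mentioned above can be sketched as a hill climb over the vertex graph: move to any neighbor with a larger inner product until none improves, which on a convex polytope reaches the global maximizer without scanning every vertex. A small illustrative Python sketch, with a made-up square and its edge adjacency:

```python
# Sketch of the adjacency-table idea: evaluate the support function by
# hill-climbing from vertex to neighboring vertex while <eta, v> improves.
# On a convex polytope a local maximum over neighbors is the global one,
# so for many vertices this is far cheaper than a full scan.

def support_hill_climb(vertices, adjacency, eta, start=0):
    def score(i):
        return sum(e * v for e, v in zip(eta, vertices[i]))
    best = start
    improved = True
    while improved:
        improved = False
        for j in adjacency[best]:
            if score(j) > score(best) + 1e-12:
                best = j
                improved = True
    return best                       # index of the support vertex

# Illustrative square with its edge adjacency.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
idx = support_hill_climb(vertices, adjacency, (1.0, 1.0))
```

In a tracking setting, starting the climb at the previous answer makes each new query nearly free, which is where the big savings come from.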
There are other things you can do with this distance stuff. If you have a set that is non-convex but compact, you still have a well-defined support function, as you do for any compact set. What happens if you use that support function and run the distance-minimization algorithm? Well, you'll get the closest point not in the set X, but in its convex hull. If you were worried about things bumping together, you'd just as soon have the convex hulls not bumping together. That gives you a little margin, but it still means you can apply these algorithmic things to objects that might not necessarily be convex. Then there's the issue: if I have math models for two sets, X of t and Y of t, that are moving around, such as an arm of a robot or something of that nature, and I want to determine the distance between them as a function of time, how do you know how smoothly that distance is going to vary? Well, that's something we had to do. You have to prove this. And it turns out, if X of t and Y of t move in a smooth way, then the distance also varies in a smooth way. So, you can begin to think about changing trajectories of objects and doing things about obstacle avoidance and such. I have a little pride here because that paper was published in the IEEE Journal of Robotics and Automation, volume one, issue one, near the front of the particular issue as well. I want to also say that there are ways of doing other kinds of optimal control problems using these tools besides the one that I've been emphasizing. Well, interesting things happen at a great university. And so, we had in '65 two people at the university. Lamberto Cesari, the world-famous differential equations person, did all kinds of things, a very eminent person, a wonderful person.
We got to know each other and enjoyed each other's company. And then there was Lucien Neustadt, who was the fellow that wrote the paper that I used early on in what I've been describing. He became a very close friend of mine. And I convinced him to come and spend a year at the AERO department in '64, '65. So those two people were here. And then, and I think it was Cesari who must've been the leader on this, they convinced Gamkrelidze and Mishchenko, two of the four authors of the famous maximum principle book, to come to the University of Michigan for a stay of, let's say, four or five months. I've forgotten just how long it was. So, indeed, they came. And while they were here, they convinced Pontryagin to come for something less than a week. Everybody wanted to see Pontryagin and what he could say about the world of optimization and control and such. And so, we couldn't have a big meeting on the spur of the moment, so we said, well, why don't we make a list of people, and that list of people will perform at least a greeting for Pontryagin. Well, once you put out a list, everybody wants to be on the list. So that was a bit of an issue. But it was great to have Gamkrelidze and Mishchenko here. I got to know them both very well, and in particular, for out-of-the-university things at least, Mishchenko and I would get into my car and we'd drive around. Well, at that time, I had a Jaguar sedan. He thought that was one of the best things in the world. And he said, Pontryagin, when he comes, has to take a ride in your Jaguar sedan. So, indeed, it happened. So, I had a vehicle that could interest anybody who knew about control and the maximum principle and such: oh, there's a distinguished vehicle, it had Pontryagin in the back seat. There were other stories too, and it was nice to be able to talk with Gamkrelidze and Mishchenko in particular about how the maximum principle came to be. The book came out in '62.
The work actually was more or less finished in '55 or '56 or thereabouts. But, of course, it changed the world, because it put down necessary conditions for optimal control problems in a form that you could read, and that's no joke. I mean, it was put together very carefully so that you could understand exactly what had to be done in order to apply the necessary conditions.
Now, I would like to talk about a number of things more quickly, just to give you a better view maybe of all the things I've been involved with. There is this famous problem. You're given a linear system, constant coefficients, with m inputs and m outputs, and then you have a feedback law that you put around it with two matrices, F and G, connecting to the state of the system and also to an external input v. And what you'd like to do is to choose F and G, sounds all simple, in such a way that the transfer function matrix becomes diagonal. So, you've decoupled the inputs and outputs, which, of course, is what people would normally in many cases want to do. Well, that sounds simple, but people struggled with that. You'd like to know, is it possible to decouple, and if so, what's the correspondence between the F and the G and the results that you get. By the way, this problem of decoupling by this scheme was first proposed by Bernard Morgan, one of my doctoral students, about three or four years before the time that is shown here. People struggled with that, and I struggled with it. And I was able to figure out what the solution was, at least a form of a solution with the one-to-one correspondence and how things turned out to be. So, my paper really created enough interest that people started counting the number of people reading the paper. And it got to be pretty large, distinctively so. Anyway, where was that paper presented first? It was in 1968 at the JACC, which happened to be in Ann Arbor. Some of you might like to go to the third floor of the Rackham building and go into the big social room that they have on the third floor and look at the easternmost room, kind of a side room. That's where I first presented the decoupling results. So, every time I go to Rackham for something or another, I have the pleasure of thinking about that wonderful circumstance. You can also apply it to nonlinear systems. And Professor Grizzle and several others worked on that problem.
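The decoupling setup can be illustrated numerically in the very easiest case, where C times B is invertible: then F = -(CB)^{-1} C A and G = (CB)^{-1} give y dot = v, a diagonal transfer matrix. The Python sketch below, with made-up matrices, shows only that simplest instance, not the general solution discussed in the talk.

```python
# Minimal numerical sketch of static state-feedback decoupling in the
# easiest case: y = Cx with CB invertible. With F = -(CB)^{-1} C A and
# G = (CB)^{-1}, the closed loop satisfies y' = v, so the transfer
# matrix is diag(1/s). Matrices below are illustrative.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[1.0, 0.0], [1.0, 1.0]]
C = [[1.0, 0.0], [0.0, 1.0]]

CB_inv = inv2(mat_mul(C, B))
F = [[-x for x in row] for row in mat_mul(CB_inv, mat_mul(C, A))]
G = CB_inv

# Check: closed-loop transfer matrix T(s) = C (sI - A - BF)^{-1} B G at s = 2
# should be diag(1/2), i.e. off-diagonal terms vanish.
s = 2.0
BF = mat_mul(B, F)
Acl = [[A[i][j] + BF[i][j] for j in range(2)] for i in range(2)]
sI_minus = [[(s if i == j else 0.0) - Acl[i][j] for j in range(2)] for i in range(2)]
T = mat_mul(C, mat_mul(inv2(sI_minus), mat_mul(B, G)))
```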
And I think Grizzle's results are the most impressive. But it certainly was a topic of interest. Why don't you see it anymore? That's a good question. It looks like a great idea. Well, you're very constrained if you ask things to be that tightly decoupled. And so, I'm sure there are people in this room who know the modern H-infinity type stuff well enough that they would tell you that that's a pretty stupid way of going. I haven't seen much in the way of publications. Maybe it's because they've all been finished anyway. Then this is something that I did when I had a year's leave at Johns Hopkins University. And it involves this business of differential equations and their input/output characteristics and how they might be expressed in an alternative way, namely by a Volterra series, where you have a series of integral terms, first of all being linear in the control, and then being quadratic in the control, and such.
If you look at the first one, the one involving w superscript 1, that's standard linear theory. That's your impulse response. That's something you're very familiar with. But then how do you compute this? If you have a system of nonlinear equations such as the one above, how do you find what the second variation is? Because some people really want to know what the second variation is, since it gives you a way of specifying some nonlinear characteristics, getting bounds on them and such. So, there's good reason to want that kind of Volterra series. There are questions here. Does the Volterra series converge? How are the two representations, the differential equation and the series representation, related? And what mathematics is best? This is a problem that's been studied for a long time. Roger Brockett, in particular, is one who has done a lot of work on this. The results all involved some kind of special cases that you'd rather not have to worry about. And then it just seemed to me that something should be different. So, I started digging into the literature with the spare time I had and found out about a paper that was written in 1927 by a well-known mathematician named Graves. And basically, it's a variational series, or power series, if you wish, from one Banach space to another Banach space. I worked hard on that one. But I was able to apply this Graves-type variational expansion to this problem of understanding the interconnection between these two representations. And I was able to do more general problems than people had worked on before, with a more elegant theory. And it showed -- you could actually see, if you have the differential equations that are nonlinear and you start taking all kinds of partial derivatives and such of the data in the problem, you could actually start building up a differential equation that gives you the second variation. And then you see the structure of it and so on.
It was a real, so to speak, departure from my interest in problems that had been coupled to applications, to doing something mathematical. And I really am proud of that paper, and I think it won one of the paper awards; 1977 IEEE Transactions, or 1976 CDC -- I forget. Makes no difference. In any case, it did get an award for the best paper of that particular conference.
There is now this issue of stability of model predictive control. Model predictive control is now everywhere. Keerthi was my doctoral student at that time. And he also worked for Applied Dynamics on the side. We got to talking. I talked with him about this idea that analog computer people had always been kind of pushing. If you can solve differential equations a hundred times or a thousand times faster than real time, then you'll have an implicit way of having a nonlinear feedback that does the implementation of the optimal control. You just say, here is my state, and almost instantaneously I can solve the optimization problem. I know what the control is. I stick it into my system and I go on. That sounds all wonderful, but it doesn't have any real mathematical proof. And moreover, if you're working with discrete-time systems, it's not clear how the whole thing works either. And so, in the late 1970s, '78 or thereabouts, Kwon and Pearson, Pearson being a well-known person at that time, attacked this problem for linear systems and quadratic measures, things such as that. And so, I introduced Keerthi to this and we talked about it for a while and so on; he went away for a while and came back with a complete solution: fully nonlinear, any cost function, state space constraints, the whole thing. And he could prove stability of the model predictive control. I really do believe that it was the first paper that did model predictive control stability with such generality. You could say maybe the Kwon and Pearson stuff on linear systems was the first time it had been done. Keerthi is a very bright guy, and he was just great to work with. And so, I claim very little about the authorship of this paper, because he basically went away for a while and came back with what essentially was the complete solution. Great guy. Then there are other areas in which I have done work that I actually really don't have time to get into. One is periodic optimal control.
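Going back to the receding-horizon idea above, it can be sketched in toy form: at each step, solve a finite-horizon problem with a terminal constraint that drives the state to the origin, apply only the first input, and repeat. The Python sketch below uses a discrete double integrator with horizon 2, where the horizon problem reduces to a 2-by-2 linear solve; this is only an illustration of the mechanism, not the nonlinear, constrained generality of the result being described.

```python
# Toy receding-horizon loop with a terminal constraint: at each step,
# find the input sequence driving the state to the origin in N = 2 steps,
# apply only the first input, then re-solve from the new state. The plant
# is a discrete double integrator (an illustrative choice).

A = [[1.0, 1.0], [0.0, 1.0]]   # discrete double integrator
b = [0.0, 1.0]                 # input column

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def horizon_controls(x):
    """Solve [Ab, b] (u0, u1)^T = -A^2 x: terminal constraint x_2 = 0."""
    A2x = mat_vec(A, mat_vec(A, x))
    rhs = [-A2x[0], -A2x[1]]
    Ab = mat_vec(A, b)                       # first column of the controllability matrix
    det = Ab[0] * b[1] - b[0] * Ab[1]
    u0 = (b[1] * rhs[0] - b[0] * rhs[1]) / det
    u1 = (-Ab[1] * rhs[0] + Ab[0] * rhs[1]) / det
    return u0, u1

x = [3.0, -1.0]
for _ in range(2):                           # two receding-horizon steps
    u0, _ = horizon_controls(x)              # apply only the first input
    x = [xi + bi * u0 for xi, bi in zip(mat_vec(A, x), b)]
```

The terminal constraint is what makes the horizon cost a Lyapunov function in the stability arguments; here, with an exactly solvable horizon, the loop simply drives the state to the origin.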
You can think of steady-state operation for a system at a constant operating point and optimize that kind of problem, but you could also optimize over the class of controls that produce periodic motion and then pick the best of such things. Well, that's what optimal periodic control is all about. And it has applications in aircraft cruise; I'll mention a few in just a minute. But it got started in the chemical engineering area, because there were a lot of chemical processes for which this kind of periodic operation did, indeed, yield substantially better results in industrial problems.
I've done some theoretical things, some with Dennis Bernstein, who sits over here, when he was a student here at Michigan. There were many kinds of necessary conditions that could be used for optimal periodic control. I wrote a lengthy paper on all of those conditions and how they intermeshed. And again, I had some spare time, being alone for a couple of months, to work on that. The applications are of interest in aerospace because if you take an airplane such as the U-2, which is more like a glider than a real airplane, you can, indeed, get better fuel efficiency by turning the engine on and off. You go through some kind of cyclic motion like this, and you can take advantage of all the nonlinearities that are present in this problem. There are other problems that turn out to be this way too. In 2013, Sara Spangelo was a student working with me in aero, and she and I talked about a loitering problem, where you have an aircraft going around in circles. You have a solar-powered aircraft, and you want to get the best efficiency again. When you're going down and the sun is in that direction, you get better exposure on your solar cells. When you turn around and go back the other way, you'll also get an improvement. So the question is, if you have up-and-down motions that have a circular trace on the ground, can you improve the fuel performance, in other words, the amount of battery power that's required? And the answer is yes. You can make improvements in the area of 20 to 30% for the kinds of problems we talked about. I get great pleasure out of finding something like that, where you are looking at an application as the primary interest.

Then there's another kind of distance. I've been talking about distance and all these calculations we can do. There's another distance we introduced, Chong-Jin Ong and I, which we called a growth distance.
You take a center point in an object's model and you either contract or expand the object about that center point, and that forms a kind of distance measure of how far out or how far in, let's say, an obstacle goes. So, if you're initially trying to find a connected path through a field of obstacles, what you can do is, to begin with, shrink the obstacles. You'll have lots of free space in which to move around. You can figure out a trajectory, and then you can start pushing on that trajectory by expanding the obstacles, until either the desired result takes place, or you fail and try something else from the beginning.
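The idea can be made concrete for disk-shaped obstacles, a deliberately simple stand-in for the convex object models in the Gilbert-Ong work: scaling a disk about its center just changes its radius, and the shrink-first planning step becomes a collision check against the scaled radii. Everything below, names and numbers, is illustrative.

```python
import math

def growth_distance(c1, r1, c2, r2):
    """Growth distance between two disk objects: the common factor by
    which both must be scaled about their centers to just touch.
    A value > 1 means separated; < 1 means they overlap, so the same
    number doubles as a penetration measure."""
    return math.dist(c1, c2) / (r1 + r2)

def segment_clear(p, q, obstacles, shrink=1.0, samples=50):
    """Check a straight segment from p to q against disk obstacles
    (center, radius) that have been shrunk by the given factor: the
    'shrink the obstacles, plan, then expand' idea described above."""
    for i in range(samples + 1):
        t = i / samples
        pt = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        for (c, r) in obstacles:
            if math.dist(c, pt) < shrink * r:
                return False
    return True
```

A path blocked at full obstacle size (`shrink=1.0`) can be clear at a smaller factor, which gives the planner a starting trajectory to push on as the obstacles are grown back.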
That work has been used by other people. This kind of distance measure has been used by other people in recent times; I've seen references to it in the last few years.

There is an issue in discrete-time systems of what's the maximal constraint-admissible set. So, give me a linear discrete-time system, and you have some kind of constraint imposed on the motion of that system, like an inequality condition on an output, for instance. Then ask the question: what is the largest set of points I can start out at and converge to the stable equilibrium, let's say the origin, without ever violating the constraint? That's what we call maximal constraint-admissible sets. And that was work I first did with a student, Kok Tin Tan, and then continued with Ilya Kolmanovsky. The two of us have worked for a long time on such things. There were two primary papers: the first one with Kok Tin Tan, and the second one with Ilya Kolmanovsky, quite a bit later. They both have achieved general approval as outstanding ways of getting all the details of how to compute these admissible sets.

What caused me to get involved in this whole thing in the first place? I started turning pages in the CDC Proceedings and saw a paper by Kapasouris, Athans and Stein. I didn't know the first guy, but I sure knew the last two. They were well-known people; they must be doing something moderately interesting. And so, I looked at that. The only problem was it took a huge amount of computer speed to implement, because what you do is, instead of applying the full control demand to the linear system, you pull back on it, not using the full thing, trying to be sure that you always stay in a safe region. I thought about that. I said, there's got to be something simpler. So, discrete time is the first thing; it's what people nowadays use anyway. And the other thing is these constraint-admissible sets must somehow be involved in that.
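The object just described can be pictured with a small membership test. For a stable system x+ = A x with an output constraint |c·x| ≤ 1, a state belongs to the maximal admissible set exactly when the constraint holds along its entire free motion; since the motion decays, a long finite horizon suffices for a brute-force check. This is only an illustrative stand-in: the actual Gilbert-Tan algorithm determines the exact finite set of constraints, and the matrices below are made up.

```python
def matvec(A, x):
    """2x2 matrix times 2-vector, tuples in, tuple out."""
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def in_admissible_set(x, A, c, bound=1.0, steps=200):
    """Brute-force membership test for the maximal output admissible
    set of x+ = A x under |c . x| <= bound: follow the free motion and
    check the constraint at every step."""
    for _ in range(steps):
        if abs(c[0] * x[0] + c[1] * x[1]) > bound:
            return False
        x = matvec(A, x)
    return True

A = ((0.5, 0.5), (0.0, 0.8))   # stable: eigenvalues 0.5 and 0.8
c = (1.0, 0.0)                 # constrain the first state component
```

The instructive case is a state like (0.5, 1.5): it satisfies the constraint initially but violates it a couple of steps later, which is exactly why the set is defined over the whole motion rather than pointwise.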
And so, indeed, that's what led me to the work with Tan in the first place. But then, this is an area which is called reference governors. So, Ilya and I have been working in that area for some time now, doing all kinds of application problems and developing theory, doing extensions that allow compensation and that improve the size of the admissible set. So, the moral of the story is: keep your eyes open. You don't have to understand everything in a paper to get excited about what's going on. Think about it in your own way, and then maybe read the paper more carefully, or whatever. But there's nothing wrong with doing quick scans, keeping your eyes open for something novel.
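The pull-back idea mentioned above, don't pass the full command through when doing so would leave the safe region, is the essence of a reference governor. Here is a minimal scalar sketch, with an invented first-order plant, limit, and horizon; real designs replace the brute-force prediction with the precomputed admissible sets.

```python
def governed_step(x, v_prev, r, a=0.5, limit=1.0, horizon=100, grid=100):
    """One step of a scalar reference governor: move the applied
    reference v from v_prev toward the requested r by the largest
    factor kappa in [0, 1] such that the predicted response of
    x+ = a*x + (1 - a)*v (with v held constant) stays in |x| <= limit."""
    best_v = v_prev
    for i in range(grid + 1):
        kappa = i / grid
        v = v_prev + kappa * (r - v_prev)
        xp, ok = x, True
        for _ in range(horizon):          # predict with v held constant
            xp = a * xp + (1 - a) * v
            if abs(xp) > limit:
                ok = False
                break
        if ok:
            best_v = v                    # kappa grid is increasing
    return best_v
```

Asking for r = 5 with limit 1 from rest yields v = 1: feasible references pass through untouched, while infeasible ones are pulled back just to the edge of the constraint.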
Okay, well, I think that means I'm pretty much at the end of my talk, and I probably talked too fast about too many things, I don't know. So, these are just a few things you might want to look at, things that could perhaps be of value to you. And if you want to explore some of my papers, they're all available in my vita on my website. So, I guess I will end on that.