About Carl Ruoff
Carl Ruoff was born in Los Angeles, California, and grew up in Compton. He began his undergraduate education at Graceland College, dropping out briefly before enrolling at California State University, Long Beach, where he completed a B.S. in Mathematics in 1967. He then attended UCLA for an M.S. in Physics, which he received in 1971, and later completed a Ph.D. in Mechanical Engineering with a minor in Computer Science at Caltech in 1993. Ruoff held several positions prior to his employment at the Jet Propulsion Laboratory in 1978: Mechanical Designer at Arrowhead Metal Products, Philco-Ford, TRW, and Purex (1961-1967); Scientific Programmer at McDonnell-Douglas Aircraft Company (1967-1970); Systems Programmer for Information International (1971); Teaching Assistant in the UCLA Department of Physics (1968-1969 and 1970-1971); Staff Engineer at Bendix Corporation (1972-1973); and Technical Staff at the General Electric R&D Center (1977-1978). He has also been a Lecturer in Mechanical and Aerospace Engineering at UCLA (1995-Present) and a Visiting Associate in Mechanical Engineering at Caltech (1996-2000). Starting as a Member of the Technical Staff of the Robotics and Teleoperator Group in 1978, his current position at JPL is Division Technologist of the Observational Systems Division and Science Division (2001-).
Ruoff's research interests include robotics, advanced computing, intelligent systems, and technology development and planning. For his work he has received several awards and honors, including the JPL Outstanding Mentor Award in 1996 and the NASA Exceptional Service Medal in 2008.
In this interview, Carl Ruoff discusses his work in robotics, focusing on his research at Caltech and JPL. He describes his involvement in projects such as the Sojourner rover and comments on the challenges and breakthroughs he faced. Additionally, he describes the environment of the lab and the various collaborations he engaged in, and reflects on the future of robotics in space exploration.
About the Interview
CARL RUOFF: An Interview Conducted by Selma Šabanovic with Peter Asaro, IEEE History Center, 2 June 2011.
Interview #746 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center at Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 USA or email@example.com. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, firstname.lastname@example.org.
It is recommended that this oral history be cited as follows:
Carl Ruoff, an oral history conducted in 2011 by Selma Šabanovic with Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.
INTERVIEWEE: Carl Ruoff
INTERVIEWER: Selma Šabanovic with Peter Asaro
DATE: 2 June 2011
PLACE: Pasadena, CA
Education and Career Overview
Okay, so just the –
Some of the future challenges.
A brief bio.
Okay, I was born in Los Angeles, California, and I grew up in Compton, California, which nowadays is kind of a heavy place, but it was good when I was a kid. We had music lessons in school and all that kind of stuff. I graduated from high school, Compton Senior High School, in 1960, and then I worked for a while, and then I went to a little school in Iowa called Graceland College because I had a lot of friends that went there. It was a religious school, and then I decided I wasn't going to go to school anymore, but within a semester I was back at school. I went to Long Beach State and got a degree in mathematics. After that I went to UCLA and got a degree in physics. I was going to be a particle physicist, but concluded that the employment opportunities were very grim. So I thought, well, I'll take a year off. This was back in 1972, so I resigned from school and started looking for a job, and jobs were pretty hard to find in 1972.
Fortunately, I had had a summer job in the summer of 1971 at Information International, where I learned to do machine language programming. Then, when I was looking for a job in early '72, I saw an ad in the "LA Times" that said, "Come to Dayton, Ohio and do machine language programming," and having lived in Iowa at this little college for a year and a half, I thought, well, I kind of like the Midwest. Rolling hills, pretty green trees, not as much cactus and that kind of thing. So I went and interviewed with this fellow, and I asked, "What are you going to do?" He says, "We're going to build a smart robot." I didn't have any experience with robots at that point other than reading science fiction and stuff like that, but at any rate, he said, "We're going to build a smart robot," and I thought, that's kind of interesting. I was planning to take a year off of school, and I had kind of figured out what I wanted to do: do a little programming or something and then go back to school. But he said they were going to build a robot, and I thought, well, that sounds kind of interesting, so I went back, interviewed with them, and concluded that yeah, I wanted to do that. It was also true that it was pretty hard to get jobs then. I had one other job offer here in LA, but it was programming missile systems or something, and I just thought it'd be much more interesting to build a robot.
So as a matter of fact, I got married, and the day we got married, we got in the Volkswagen and drove to Ohio. The interesting thing was that I started asking around: we're going to build a robot; how does it work? And nobody knew. I mean honestly. There was this big, neat idea that they were going to build a smart robot, and the plan was that by the 1974 Machine Tool Show, they were going to have two of these Bendix robots. They were called PACS, which stood for Programmable Automation Cybernation System, which is kind of a pompous title, but at any rate, the idea was they were going to have two Bendix robots on a conveyor belt. One of them would put together Chrysler Airtemp air conditioning compressors and put them on the conveyor. The other one would take them apart and put the parts back in a bin, and they would just continue to do that. The idea was it would have computer vision and "adaptive reactive intelligence," and I remember asking, "What the heck does that mean?" But the thing is nobody knew how it worked, and I'd go around asking everybody. I said, "Well, how does this work?" "Well, I don't know. I'm working on interfaces," or, "No, I'm designing the arm."
So I was the only professional programmer on this project, and I thought, well, okay, this actually can be kind of fun, because I had always wanted to write an operating system and that kind of thing. So I just very gently started going around saying, "Well, would you mind if the angles increased counterclockwise?" "Oh, sounds good to me." "Would you mind if the radius increases from zero?" Anyway, we defined the coordinates of the robot in a very gentle way, and I just asked questions: "Could we do this? Could we do that?" Finally, within a couple of weeks, I was the project manager of the control system, because nobody knew. Then I began to wonder, since they didn't really have any hard requirements specified in any kind of mathematical way. The idea was that if there was a problem, the machine would look at it, figure it out, and fix it, and I thought to myself, I don't have the slightest idea how to make something that smart. But I knew that we could design and build something with a reasonable BASIC-like language that you could program to do things kind of like machine tools do. So that's what we set out to build, and as a matter of fact, we were able to build it.
But it was actually a couple of years after I started working there that I saw a write-up of what was supposed to happen. They were going to hire two so-called intelligence engineers; they only hired one, which was myself. Within six months, they were going to build this smart robot that could look at stuff and figure out what was wrong and that kind of thing, and I'll have to say that still today we can't do that. I mean we're a long way from having a robot that you could say, "Hey, go get me a beer out of the fridge and when you get back we'll talk about who's running for governor," or something. That is such a difficult problem that we simply can't do that currently, but that doesn't mean that robots aren't very useful, because your cars are put together with robots. All kinds of machinery and appliances and things like that are put together with robots, which in fact do very useful work. But there's an interesting thing about it.
I remember I was pretty fascinated with building this robot, because basically we started off with an empty mini-computer. This was 1972, and it took several nanoseconds to do a register add — it was a pretty fast mini-computer, but very slow by comparison to what we have today. But we filled it up: we started with an empty computer and provided it software so it could actually respond to the environment and do things, move around. That was very fascinating to me — motor control — and I learned an enormous amount from that, and we wrote a lot of software. This machine didn't even have a pushdown stack, so we had to write pushdown stack software and linked list software, and it was fortunate that I had had that summer job previously, because I learned all about that kind of thing in that job. So we were actually able to build this robot. I began to realize that you wouldn't just want a robot that could pick and place. You wanted a robot that could pick and place and adapt — so that if you had taught it to pick up that tape cassette holder and then somebody moved it, the robot could actually adapt its motions to do that. But this was all without path planning, so we had to be kind of smart. These were industrial robots too, so they were in a pretty organized environment. But in those days I thought, well, it's really fascinating if you can make a robot solve problems, and that's true, especially if, as we do at JPL, you have an environment that's unstructured and you're not sure what you're going to accomplish. But in the case of an industrial robot, you don't really want that. You'd like them to be smart if it was affordable, but fundamentally you don't want it — I always thought this would be a good fiducial problem: you say, make me an envelope.
Maybe it traces an envelope pattern on a piece of paper, picks up scissors, cuts it out, gets a glue bottle, glues it, and makes you an envelope. That would be a real tour de force from the standpoint of dexterity and problem solving, but nobody wants to buy envelopes that expensive. You want to have a roll of paper that unrolls at 60 miles an hour and goes into a machine that just spits these things out and automatically puts them in boxes. That's how you can get envelopes that cost — what do envelopes cost now? A nickel, a dime? Something like that.
So anyway, it's really very different. After a while, I could tell that Bendix was interested in making profits right away, and back in the early '70s, companies weren't yet convinced that adaptive, reactive robots were going to be really very useful — I mean, they were using robots to weld cars together and things like that, very, very precise things. I could sense that Bendix was going to get out of the robotics business, so I went to the General Electric R&D Center in Schenectady, because they were very interested in robotics for their appliances and that kind of thing. I worked there for a year, but while I was there I ran into some people from JPL at a robotics conference, and they said, "Well, why don't you come out for a visit if you're ever out here?" I happened to go to Stanford Research Institute, now SRI International, to import a computer vision system — it was just a laminar computer vision system for our lab at General Electric — so I stopped here just as a professional courtesy, to find out what everybody was doing, and the supervisor at the time said, "Oh, when are you coming here to go to work?" I said, "This isn't an interview. I just came here to talk about what everybody's doing," but they were a little bit persistent, and my wife hated Schenectady. I mentioned to her that these people from JPL kept calling me up to come here and go to work, and she said, "Go back to California?" Sure enough, when she said that, I thought, I'll bet you in six months I'm living in Pasadena — and it wasn't quite true. It was La Crescenta.
But anyway, we moved to JPL partly because I had really become fascinated with this whole neurological question of what it means to be smart, what it means to be conscious — issues which we really don't understand, by any means. I thought if there was an organization in the world that really needed capable robotics, it was going to be a place like JPL, because if you're going to go to Mars, it's going to be a long time before we put humans on Mars if we want to get them back alive. So I came to JPL, and we were just winding up the Mars 1984 rover project — we thought maybe we were going to send a rover in 1984, which we never did — and then NASA funding for robotics started to roll off, but I just kept pushing. I thought this was going to be good for us. Then the supervisor left and went to Boeing, so I became the supervisor, and that was an interesting problem, because I thought, if I become the supervisor, that means I'm going to do a lot of administrative stuff and I won't really be doing a lot of research; but if nobody who has a passion for it is a supervisor, it will just die. You need entrepreneurs. You need people who are willing to push things.
So I became the supervisor, and I pushed this and I pushed that. It's kind of interesting: at the time, Caltech did not do robotics, although Marc Raibert [former Prof. Carnegie Mellon University and MIT] — you've probably interviewed him — worked here when I first arrived. He had been at Caltech, but they didn't really start a robotics program. So one day I just called up the head of the Division of Engineering and Applied Science — a cold call; I remember I was kind of nervous — and I told him that we do robotics at JPL and Caltech doesn't, but most universities were now starting robotics programs, and wasn't there some way we could work together to do some robotics? He said, "Well, it's interesting. We've been thinking we ought to do that," but the way it works at Caltech is that you have to get a professor who's really interested in it and championing it. He said, "You need to talk to Fred Culick [Emeritus Prof. Caltech, Aeronautics and Jet Propulsion]." Okay, so I called up Fred, whom I didn't know. Well, I had seen him before, but I didn't know who he was. Feynman [Richard Feynman, Prof. Caltech, Physics], Carver Mead [Emeritus Prof. Caltech, Engineering and Applied Science], and John Hopfield [Prof. Princeton University] were conducting a course in 1982 called the physics of computation, which I took along with a few other people, and I saw somebody who looked like he must be a professor or an older student — he was in there all the time. It turned out that was Fred Culick. Anyway, I talked to him and he says, "Yeah, why don't you come down and eat lunch with us and we'll talk about it?" So I went down to the Athenaeum, met with four professors there, and we talked about how we might collaborate.
The great thing was getting the campus involved, since we're all Caltech employees and JPL pays a lot of attention to what campus people think. The next thing you know, they said, "Well, can you teach a robotics course?" which I did. We set up a little robotics lab, and then Fred and I wrote a proposal in 1984 — simultaneously to JPL and Caltech. We said, "Look, we need to do robotics. If JPL's going to prosper in robotics, we need to have a good investment, so why don't you give us 500K a year to do robotics, and we'll do it part on campus and part here?" They didn't give us the 500K, but what they did do, through our internal research grants, was start what they called a large DDF — that stands for Director's Discretionary Fund — and they said they would give us 300K a year. So I made the executive decision that we would not do manipulation, as a lot of people were doing because of industrial robotics; we ought to just do rovers, because the idea of a rover was that it had to be able to rove autonomously. So we took that money — as a matter of fact, Brian had a big role in that, a very big role — and did the first-ever, that we were aware of, autonomous mobility demonstration in natural terrain, out here in the arroyo next to JPL. That was in 1989. There had been a lot of mobility demonstrations where people were going down the hall and avoiding wastebaskets and stuff like that, but we actually went out and started the rover, and it took pictures and navigated over behind some bushes all by itself, and that was really a first.
But then the interesting thing was — so I got to know Fred, and I taught this robotics course for one year, and then, to be frank, I got really disappointed at the lab here, for the following reason. We had busted our hump putting together a proposal for DARPA to do rover-type work — late at night, writing the proposal, this and that — and you had to write it in a fashion where you wrote kind of a proposal to be able to propose. They were being very selective, and there were only six organizations in the country chosen to be able to propose later. I happened to be in Washington, DC at a meeting after we had submitted this proposal, and I was just itching to know how it went. So I called up our program office here and said, "Well, how did we do?" and they said, "Oh, we were accepted. We can propose, but we withdrew." I said, "We withdrew? Why?" He says, "Well, we can't compete," and a bunch of stuff. I happened to be standing next to a fellow I knew, David Nitzan from SRI. He and Charlie Rosen had run multi-client studies on robotics, which I had participated in when I was at Bendix and General Electric. SRI had also written a proposal, and they were also selected. So — the old days of no cell phones; we're on the landlines, a payphone — he says, "How'd you do?" I knew that they had made it, and I said, "We made it, but they withdrew. They withdrew. I feel so disappointed. I feel like quitting," and he said, "Well, why don't you come and work at SRI?" I said, "Hmm. Well, yeah, but I want to finish my Ph.D.," because I never did — I got involved in robotics and never finished my Ph.D. at UCLA in physics. I said, "But I want to finish my Ph.D." He said, "Oh, you can go to Stanford."
So I applied to Stanford. I knew Bernie Roth, who was a professor there, and he agreed to be my thesis advisor, so I actually sold my house, went in and resigned — told my boss I was leaving here at JPL — and then I called Fred Culick up and said, "Well, I won't be able to teach a robotics course this spring term because I'm moving up to Stanford and going to go to school there." I knew the people at Stanford — Tom Binford [Thomas Binford, Emeritus Prof. Stanford University], and a lot of people at Stanford had been very active, Lou Paul and so on. But then a couple of days later, Fred Culick calls up and says, "Well, if you could go to Caltech, would you stay?" I said, "Boy — I already sold my house and everything. Well, I would really be tempted," because I always had a soft spot in my heart for Caltech, and I had helped put the lab together and this and that. The net result was that I stayed here, Fred became my thesis advisor, and I got a Ph.D. in robotics from Caltech, which had kind of just started its robotics program. Joel Burdick [Prof. Caltech, Mechanical Engineering and Bioengineering] is a professor there now; he had graduated from Stanford, I guess. Anyway, it was kind of an interesting way for things to happen. I was over on campus for five years, and then I had to come back here and work, so I finished my thesis at night, working at home. But by getting campus involved, JPL got much more involved too: because of that grant we had, both campus and JPL decided autonomy in robotics was really something they wanted to do. And because of that grant doing the autonomous mobility, and because we had gotten some money, David Miller — whom you may interview; he's at Oklahoma — was working on small robots with what they called behavioral control.
The idea is really simple: if you run into something, you back up, turn left a little bit, and then go forward until you finally get out — you can kind of get stuck that way, but it was a useful paradigm. Because of that experience, we were able to get the Pathfinder project, which carried the Sojourner Mars rover. It was actually a flight experiment — not hardcore, absolutely rigorous science — a flight experiment to show that robotics, or at least mobility, would be useful on Mars. The funny thing was that before we actually flew it, there were a lot of people here at JPL who said, "Oh, we don't want these robotics. We don't want those damned robots. They're going to take up mass and displace instruments. We would rather have the instruments there," or, "Let's put an arm on the lander and it could reach out and grab some stuff." But Sojourner was so successful that now everybody pretty much believes that mobility is a good thing to do with robots for planetary exploration.
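The bump-and-turn behavior described above can be sketched in a few lines. This is a hypothetical grid-world toy, not David Miller's actual code; the grid, headings, and goal are all illustrative assumptions.

```python
def step_forward(pos, heading):
    """One grid step in the current heading (0=E, 1=N, 2=W, 3=S)."""
    dx, dy = [(1, 0), (0, 1), (-1, 0), (0, -1)][heading]
    return (pos[0] + dx, pos[1] + dy)

def bump_and_turn(start, goal_x, obstacles, max_steps=500):
    """Drive east; on a bump, turn left a little and try again.

    The "back up" part of the described behavior is implicit here, since the
    bump is detected before entering the obstacle cell.  Note there is no map
    and no path planning -- which is also why, as the interview says, a robot
    like this can get stuck (e.g. inside a concave obstacle).
    """
    pos, heading = start, 0                  # prefer heading east
    for _ in range(max_steps):
        if pos[0] >= goal_x:
            return pos                       # made it past the obstacle field
        nxt = step_forward(pos, heading)
        if nxt in obstacles:
            heading = (heading + 1) % 4      # bump: turn left, try again
        else:
            pos = nxt
            heading = 0                      # clear: resume preferred heading

# A wall at x = 2 forces the robot to feel its way around the top.
print(bump_and_turn((0, 0), 4, {(2, 0), (2, 1), (2, 2)}))  # -> (4, 3)
```

With no internal state beyond position and heading, the robot still escapes simple obstacles, which is exactly the appeal of the paradigm.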
So anyway, I did finish my Ph.D. at Caltech, but then I became the section manager of the robotics section. While we were doing Sojourner, Donna Shirley was the project manager for the Sojourner rover. The people who were running Pathfinder didn't necessarily want that rover taking up mass, but higher pay grades intervened, so we actually flew it. But it was kind of interesting — just a story that still makes me chuckle. I mentioned that at Bendix we had this empty computer. All it had was an assembler, a Fortran interpreter, and a link loader. You certainly couldn't run a robot very fast with a Fortran interpreter, so we decided to do it all in assembly language, which meant we had to write all our own math routines and everything — it really was an empty computer. We figured out a way to do fast sines and cosines using interpolation. Instead of using degrees, we organized angles as binary fractions of turns, so a full turn was all ones, and that made it very easy to index into a table; we could get a sine or a cosine in seven microseconds, which was pretty fast. So we wrote this routine to do sines and cosines, and I asked the fellow who was helping me do programming — he didn't have a background in programming and didn't have much college education in mathematics, but he was a good, hard worker — I said, "All right, I want you to do a test. I want you to go all the way through the first quadrant: get the sine, square it, get the cosine, square it, add them, and print the results out." So he came to me and says, "There's something wrong." I said, "What?" He said, "It always adds up to one." I said, "Oh. That's exactly right."
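The angle representation described here — binary fractions of a turn, with the high bits indexing directly into a table — can be sketched as follows. The word width, table size, and interpolation scheme are illustrative guesses, not the original Bendix values, and floating point stands in for the fixed-point arithmetic of the era.

```python
import math

BITS = 16                   # angle word: 0x0000 = 0 turns, 0xFFFF ~= one full turn
TABLE_BITS = 8              # high bits used as the table index
TABLE = [math.sin(2 * math.pi * i / (1 << TABLE_BITS))
         for i in range((1 << TABLE_BITS) + 1)]   # extra entry for interpolation

def bsin(angle):
    """Sine of an angle given as a binary fraction of a turn."""
    angle &= (1 << BITS) - 1
    idx = angle >> (BITS - TABLE_BITS)                # table index from high bits
    frac = angle & ((1 << (BITS - TABLE_BITS)) - 1)   # low bits: interp fraction
    f = frac / (1 << (BITS - TABLE_BITS))
    return TABLE[idx] + f * (TABLE[idx + 1] - TABLE[idx])  # linear interpolation

def bcos(angle):
    # cos(x) = sin(x + quarter turn); a quarter turn is just 2**(BITS-2)
    return bsin(angle + (1 << (BITS - 2)))

# The test Ruoff assigned: sweep the first quadrant and check sin^2 + cos^2.
worst = max(abs(bsin(a) ** 2 + bcos(a) ** 2 - 1.0)
            for a in range(0, 1 << (BITS - 2)))
print(worst)  # stays tiny -- "it always adds up to one"
```

The appeal of the representation is visible in `bsin`: wraparound is a free bit-mask, and the table index is a single shift.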
Anyway, it was very interesting when people started in robotics. I remember hearing a story that back around 1965 at the Stanford AI lab, they thought that a couple of grad students could program a robotic vehicle to drive from Palo Alto to San Francisco autonomously, and do it in six months. I will say that there's a lot of work going on — Mercedes and various companies are trying to develop that — but that's pretty dangerous, and it's not really the kind of thing where you're going to easily solve the problem. If you took all the cars off the road and painted a yellow line down the street, or some fluorescent line, and you just had a sensor, yeah, you could do that, but that's very simple. That's not recognizing features in the environment and trying to decide what their intent is. But anyway, at JPL we did Sojourner, and that was successful. We did MER, and one of the MER rovers has kind of crapped out, but the other is still going — it's gone about 30 kilometers, I just read today, I think it was. So now I think people believe that robotics has a place in lots of activities, and certainly the military is interested in all kinds of robotics. I mean, cruise missiles are really robots. They aren't trying to do dogfights or evade people shooting at them, necessarily, but they still track the environment and listen to GPS or look at topographical signals and get very precisely to their targets.
So I guess robots will be with us for a while, but I think it's going to be a long time before there are really dexterous, smart robots. A good example: I remember going to automotive assembly plants, and a guy working on a line can grab a handful of fasteners right out of the bin, go like this, and magically they kind of come out of his hand with the right end pointing forward; he installs them, then grabs an air-powered screwdriver and screws them in. The dexterity to be able to do that and recognize the shape and the orientation of something — or even, if I gave you that pen and let's say it was really cold, you picked it up, you'd go, "God, what's wrong with that pen?" You have all this built-in knowledge, and it would be very difficult to convey that in a way a machine could reason about and anticipate. Again, that doesn't mean that robots can't be very, very capable and really enhance productivity. But the basic problem — how, over four billion years, we got these neurons that can think about things like how we are here and how the universe started — makes us recognize what a difficult and subtle problem that is. But that's really interesting.
Just to cover some of the historical elements, what year was it that you went to GE and what year did you come to JPL?
Okay. I came to JPL in 1978 in October and I went to GE in 1977 in September and I went to Bendix in April of 1972.
Ph.D. at Caltech
And then you went back to Caltech. What year was that?
I started in 1985 and went back as the oldest graduate student there. I was 43 at the time. When I finally graduated, I was one of the two graduate students with pretty graying hair, but –
What did you work on when you were doing your Ph.D.?
As a matter of fact, okay — Fred was interested in robotics, and as I say, he was the champion who got the robotics program going, but what I did was driven by my interest in neural networks and learning, so I took a lot of biology classes. My major was mechanical engineering, with Fred; my minor was computer science, and Carver Mead was my minor advisor. Well, I had kids by this time — as a matter of fact, I had four kids when I went back to Caltech for grad school — and I observed something watching my daughter when she was just an infant. She had this red bunny, and she was just hugely — intrinsically, for genetic reasons — interested in the red bunny, and she wanted to grab it. She couldn't even sit up yet, but she was watching this thing and she reached — I could tell she was trying to grab it — and she'd put her hand over here. I watched her for maybe 40 minutes, and she'd put her hand here, and then she realized that she could go like this and get her hand on the bunny, and she began to get closer and closer. Finally, after 40 minutes, a half hour or so, whatever it was, she got so she'd see that bunny and she would reach for it. She couldn't yet manipulate it, but she could get her hand on it — she was, again, intrinsically solving problems. I thought that was pretty interesting, because how do kids learn to control themselves?
You look at babies — I have a new grandson, just two months old — and you watch them, and they're going like this, and that's how you learn to control your arms: you issue commands and you see what happens, and there's something intrinsic that says that wasn't quite right. Or — I played the violin for years, still do occasionally, and it was always interesting to me how you could learn to play the violin. It's true that your teacher beats you up if you don't do it right, but you listen to yourself and you say, "Oh, that sounds awful," and you learn somehow to change the stiffness of your muscles or the amount of force, and you make it sound better and in tune. There are all kinds of loop-closure things: your nervous system has the capability to recognize when something isn't right and how to correct it. The interesting thing is, let's say you're a concert violinist or a concert pianist, and you're going to be in a skit, so you want to play badly. You actually have to learn to play badly, because your knee-jerk reaction is to play very, very well. You have to say, "Okay, what am I going to do? Okay, this and this." So the point is you've got internal models that you can adapt, whether toward good or bad, and you're optimizing some figure of merit that says what you're trying to do is acceptable.
But, anyway, my thesis project was two things. The first was learning hand-eye coordination, which basically means you reach for something — I actually had to do this in simulation, because we didn't have any money to build hardware — and learning hand-eye coordination is a common robotics problem. When I worked at Bendix, the first time we did a robotics demo, people thought, well, you could just take a tape measure and measure from the middle of the robot to the vision station and this and that, and as long as you put the part in exactly the right place, it would pick it up. But if the part rotated, the arm would go to the wrong place. I began to realize, oh, we've got to have the ability to do those coordinate transformations I mentioned. But then you still have the problem that when the computer vision system sees something, it's going to see it in its own coordinates, and you need to translate that into the coordinates of the robot so it can grab it. How do you do that? We ended up with little things, spinners, and we figured out that if you always poked a piece of clay, you could look at the deviations. If you rotated the piece of clay — you'd have pointers on it so you could tell how it was oriented — it would form a little circle of mistakes, but there was a well-defined relationship between the last point you made and where it ought to be. So you could go in there and tweak your tables to make the vision calibrated.
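The calibration problem described here — translating what the vision system sees into robot coordinates — is commonly posed today as a least-squares rigid fit over corresponding points. This is a minimal sketch of that standard 2-D fit, not the table-tweaking procedure actually used at Bendix.

```python
import math

def fit_rigid_2d(vision_pts, robot_pts):
    """Fit (theta, tx, ty) so that R(theta) @ v + t ~= r for each point pair."""
    n = len(vision_pts)
    # Centroids of each point set.
    vcx = sum(p[0] for p in vision_pts) / n
    vcy = sum(p[1] for p in vision_pts) / n
    rcx = sum(p[0] for p in robot_pts) / n
    rcy = sum(p[1] for p in robot_pts) / n
    # Cross-covariance terms of the centered point sets give the rotation.
    s_cos = s_sin = 0.0
    for (vx, vy), (rx, ry) in zip(vision_pts, robot_pts):
        ax, ay = vx - vcx, vy - vcy
        bx, by = rx - rcx, ry - rcy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the vision centroid onto the robot centroid.
    tx = rcx - (c * vcx - s * vcy)
    ty = rcy - (s * vcx + c * vcy)
    return theta, tx, ty

def vision_to_robot(p, theta, tx, ty):
    """Apply the fitted transform to a point seen by the vision system."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
```

Poking a few known points (the "spinners") gives the correspondences; the fit then replaces the hand-tweaked tables with a single rotation and translation.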
But I wanted to learn, again, how does my daughter do this? Well, she just fiddles with it. So I built a system that could do that. And then I was also interested in, how do you learn to catch? Because you know yourself, if you have a well-defined ball and somebody throws it to you, yeah, you can learn to catch that. But then if somebody throws you a beach ball instead – you know, they go like this. They just float down, so to speak. And so the simulation there was a learning neural network that you could teach, that would learn just by watching. It would watch the ball, decide where it’s going to land based on experience, decide this is a light ball or a heavy ball, and then it learned to move the arm just by moving the arm back and forth and watching it a lot. So it knew what command it issued and it knew what it did, so then it learned to associate the appropriate command with what it saw the ball doing and predicting where it was going to land. But the interesting problem is it’s not always possible to catch a ball. This is a two-dimensional problem. Sometimes it’s just not possible to get there, or you’re gonna get there at the wrong angle and it’s gonna bounce out. There’s no way you can get it into the cup. And so that ends up being kind of a subtle – it’s not just the number of successes versus the number of tries, but it’s the number of successes versus the number of possible catches, which is actually a much more difficult problem.
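The learning scheme sketched here – watch flights, build an internal model from experience, predict the landing point – can be illustrated with a toy stand-in: learn the one free parameter (gravity) from observed trajectories, then extrapolate a new throw. (The thesis work used a neural network; this ballistic reduction, the sampling scheme, and the function names are assumptions for illustration only.)

```python
import math

def estimate_gravity(trajectories, dt):
    """Learn the internal model's one parameter just by watching: second
    differences of the sampled heights recover the downward acceleration."""
    samples = []
    for traj in trajectories:
        for i in range(len(traj) - 2):
            y0, y1, y2 = traj[i][1], traj[i + 1][1], traj[i + 2][1]
            samples.append(-(y2 - 2.0 * y1 + y0) / (dt * dt))
    return sum(samples) / len(samples)

def predict_landing_x(p0, p1, dt, g):
    """From two early observations of a new throw, predict where it lands."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    x, y = p1
    t = (vy + math.sqrt(vy * vy + 2.0 * g * y)) / g   # time until y = 0
    return x + vx * t
```

A light beach ball would violate this model (drag dominates), which is exactly why the learned, experience-based predictor in the original work had to classify light versus heavy balls first.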
So I learned a lot about trying to simulate physical systems doing that. Because you don’t – you start trying to – it bounces and it’s got elasticity and damping and all that kind of stuff. And so you end up, you know, with very interesting problems, but that gave me an appreciation for the kind of things people do here when they’re trying to do very sophisticated control of a spacecraft with sloshing fuel and stuff like that. Because, you know, spacecraft aren’t rigid. They wobble all over the place and if you’re trying to stay pointed on a star or a galaxy or something like that, there are a lot of very subtle problems involved in doing that.
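The simulation headaches mentioned here – elasticity, damping, bounces – show up even in the simplest time-stepped model of a dropped ball. A minimal sketch (the single restitution coefficient, step size, and function name are illustrative assumptions):

```python
def rebound_apexes(h0, restitution=0.7, g=9.8, dt=1e-4, n=3):
    """Drop a ball from height h0 and return the apex height after each of
    the first n bounces; each impact keeps only `restitution` of the speed."""
    y, vy = h0, 0.0
    apexes = []
    current_max = None                 # None until the first impact
    while len(apexes) < n:
        vy -= g * dt                   # semi-implicit Euler: velocity first...
        y += vy * dt                   # ...then position
        if y <= 0.0 and vy < 0.0:
            if current_max is not None:
                apexes.append(current_max)
            y, vy = 0.0, -vy * restitution   # lossy bounce
            current_max = 0.0
        elif current_max is not None:
            current_max = max(current_max, y)
    return apexes
```

Energy arguments say each apex should be restitution² times the previous height, and watching the numerical apexes drift from that analytic answer as the step size grows is a small taste of the subtle problems in simulating flexible, sloshing spacecraft.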
So at Caltech, I took two control classes and that kind of thing gave me a much broader appreciation of what it means to try to control a physical system. And so, but, then, you know, you become a section manager and then you become a division technologist. <laughs> And as you get older you realize more and more that you kind of become a tribal elder and you provide a very useful service by helping people write good proposals and helping them focus their scientific ideas and technological ideas well. So that’s the kind of stuff I do now. I read lots of proposals. And it’s funny: When I was in high school I told my high school English teacher in tenth grade – she gave us an assignment where every day we were supposed to encounter a new word and make a dictionary entry for it, and we were supposed to do it every day. Well, of course, I didn’t do it till the end of the semester. And I was so frustrated I said, “You know, I’m gonna be a scientist or an engineer. I don’t need to do this.” And the hilarious thing is mostly what I do is write text. And it’s very important to write it in a lucid fashion. And I tell my students that. I teach at UCLA and I tell my students, you know, “You think you’re gonna be an engineer, but you’re gonna be selling your ideas in writing a lot.” And matter of fact, we joke about the scientists and engineers here. I say, “Hey, when you were in graduate school, did you think you were gonna spend most of your time writing?” You know, people write papers, they write books, this and that. So it’s a lot of writing. And so, matter of fact, I think the best indicator for success – whatever that really means – if you’re talking about success in an organization or something, is, given that you’re reasonably smart, which you are at a place like JPL, that you’re able to express yourself in a lucid manner and sell your ideas.
Because if you don’t sell it, you know, job one, if you’re gonna do science is to be funded. And if you aren’t funded you aren’t gonna do a –
So what else would you like to know?
Can we know a little bit about the group here? So when you came here, how many people were here? How did it grow? How did you – what were some of the difficulties you were working on?
Well, okay, when I first came here – this is, again, 1978 – we had been working on the 1984 Mars Rover, and Brian may have showed you some pictures of that. But it was basically a chassis – a metal chassis with plywood on it. Had a big PDP-40 computer, which was a huge thing in those days. Sitting on the chassis it had Volkswagen wheels. It had a laser scanner and stereovision cameras. And it had a specially built Scheinman arm. Vic Scheinman [Victor Scheinman, Visiting Prof. Stanford University] had built it. It had to be longer so it could reach the ground. And so the idea there was that we had actually built some automatic path planning algorithms. I mean, that was done before I arrived. But the funding started to dry up. But they actually were able to do kind of what we do with the rovers today. If you looked at, in the video image, let’s say, a rock you were interested in picking up, the robot would actually plan a path and go over and pick up the rock. And it could actually execute three-point turns to get itself in an attitude where it could pick up the rock. And, as a matter of fact, Mike Griffin [Born: 1949, Former Administrator of NASA, Prof. University of Alabama] was one of the people that worked on it. He actually published a paper at a conference about three-point turns with a JPL rover. But the funding kind of dried up for that.
But at that time NASA was very much interested in the space shuttle manipulator, because they hadn’t flown the shuttle yet when I first arrived. And so we – I became a supervisor of the group, and Marc Raibert was interested in hopping machines. He was doing some of that on campus – mostly not here, mostly over on campus, but campus is very close, as you know. And – oh, John Craig was here. There were several people. We had people in an AI group; they were doing path planning and that type of thing. We had people in a computer vision group, which was actually in Division 38, which is the division I’m now in. And we had people in the robotics group, but they all worked together very well. As a matter of fact, the rover lab was a helicopter hangar, which was in the place where that building, that big green building, now is. They tore it down a few years ago and built the Flight Projects Center. But we fiddled. We had a little bit of funding here and there. We’d get some – in those days NASA had what are called RTOPs, which were Research and Technology Objectives and Plans. So every year – we had pretty close working relationships with our headquarters sponsor at that time – we’d negotiate with the sponsor about what we’d work on. And so we had funding to keep going, but then that started to dry up.
But then the space shuttle – you know, people – it wasn’t really the space shuttle, but back in the early eighties people started thinking very seriously about the space station. And Congress had said that NASA should consider something like ten percent of the space station study budget to be devoted to developing automation technology. And so we thought, “We’re gonna get a windfall. Three hundred and some million dollars a year to do robotics,” which didn’t quite happen. But NASA started funding research that was oriented toward doing space servicing. And this is actually kind of funny: We had what was called a telerobotic inter-center working group, and all the NASA centers were involved in it. And so we had a meeting at headquarters. And this is right – matter of fact, this is when I thought I was still leaving to go up to Stanford. I didn’t realize I was – but I had told them, “I’m gonna resign,” but my boss still made me go to the meeting. <laughs> So the headquarters sponsor, a fellow named Ron Larson – he ended up leaving NASA after, I guess, he put in his twenty years, and he went to the University of Maryland to run their computer center. But I always liked Ron, liked working with him. But he was trying to put together a program to deal with space servicing. And this wasn’t really robotic mobility so much as it was, you know, could you replace modules that have failed, that type of thing. And so just before lunch Ron said, “Well, you know, when we come back I’d like to hear some specific ideas for what we could do.” And I thought, “My god, I’ve gotta talk first or we lose.” I mean, this is the kind of thing where even if you have the same ideas, if you say it second, you’re second fiddle.
So I spent <laughs> lunchtime drawing the outline basically of a pragmatic proposal. A lot of times people who propose robotics – and I get irritated at this today. I still see this. People say, “Oh, we’re going to build a bio-inspired robot that’s gonna be fully autonomous and fully integrated.” And so I always say, “Okay, question: Could you tell me how it works at all? What does that mean? How are – you’re not gonna just build a neuron here. You got a bunch of things interacting. How is it going to be bio-inspired and how is it going to work?” And then I say, “So don’t say that. Say that you’re gonna build something that’s better and that’s still useful, but don’t say you’re gonna go mimic some kind of a dog or something.” But anyway, so the point is that I put together a proposal where the idea is for well-defined local tasks that once you’ve defined it well enough a robot can do it. Like, if you told a robot, “Okay, pick that up,” okay, and the robot could actually take an image, enhance the edges and take a light pen or something or mouse and draw a picture and when I say “Pick that up” – and if a robot knows it’s supposed to pick up something that looks like a little parallelepiped, you know – okay, it knows that it can plan it or you can have the motions pre-coded or something. It can relocate them and do that. But if this were a harder problem where you had a whole bunch of stuff, a bunch of rocks or something – you’re on the surface of Mars or maybe you had a wreck on a space station and things no longer look like themselves – so you say, “Okay, now the robot can no longer figure out what to pick up. But if I can tell it what to pick up, it can do it.” And the robot, you know, you’re not gonna be out there; the robot’s got to manage the forces and this and that. So those are doable control issues. So we proposed what we called “local autonomy”, which means that for high-level stuff, the people supervise, and low-level stuff the robot does it. 
And that, in fact, really enhances your productivity.
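The “well-defined local task” idea – the robot, told only “pick that up,” works out the grasp on its own from the image – needs at least a centroid and an orientation of the designated object. A sketch of that low-level piece using second moments of the object’s image points (the function name and 2-D point-blob input are illustrative assumptions, not the proposal’s actual design):

```python
import math

def grasp_pose(points):
    """Centroid and principal-axis angle of the designated object's image
    points: where to put the hand and how to turn the wrist."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # principal axis
    return cx, cy, angle
```

The human supervisor supplies the hard, high-level part – deciding *which* blob of pixels is the thing to pick up – and this kind of computation is the “local autonomy” that the robot handles itself.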
So the funny thing was, I put this down during lunchtime <laughs>; as soon as we got back in the room I said, “Hey, I’ve been thinking about this at lunchtime. I have an idea.” So Ron says, “Ah, okay, Carl, go ahead.” And so I put this – and I could tell all the other NASA people were so pissed at me. Boy, they were irritated, but the thing is you gotta – it’s the old saw, “Hit while the iron is hot.” And so what happened is JPL, we got the telerobotics project. That was starting in 1985, and we ended up with something like thirty people here working on manipulation – we call it “man-machine interchange” or “man-machine interface” or “human-guided telerobotics.” So it’s got a robotic aspect where the robot’s capable, and it’s got a human aspect where it requires – “Pick out the guy who’s gonna run for mayor and go over and give him a beer.” Okay? That’s the kind of thing: a high-level, a human-level decision and planning. But it was very useful and we worked on that for several years, and, finally – the space station still doesn’t have a great deal of automation on it. The space shuttle, of course, is stopping now, but that doesn’t have a great deal of automation. They do a lot of human-interactive – the astronaut’s helping. But that’s exactly the way that we control the rovers on Mars. We get back images, stereo images. You know, we can reason about them a little bit, but the human operators help, say, “Okay, we like that rock.” Because you need some scientists who have some smarts. We don’t have robotic-grade geologists yet. And so they’re using multispectral Raman spectrometers and multispectral images and things. You can tell a lot about what an interesting rock might be, so the scientists say, “That’s really an interesting rock. Let’s go abrade the surface and see what’s inside.” And you know, we went to Ford a few years ago.
Since we build rovers and they’re computer-controlled, we thought, “Yeah, let’s go talk to the Ford Research Lab people,” because your car today has lots of computing. And so we found that the typical 2005 Ford was gonna have something like 12.5 megabytes of powertrain-control computing. This controls the engine timing, the fuel injectors, the transmission. And it senses the air temperature, the air pressure, all that kind of stuff, and it was absolutely necessary to do that to be able to meet the pollution requirements. The biggest breakthrough in cars in the last fifty years, anyway, has been computerized controls of various sorts. Even controls to keep you from tipping over in your SUV. It’s just remarkable. And a lot of different processors and different local area networks. All this kind of stuff is just remarkable. And if you look at an old car – I have a Model T at home – basically all of the computing happens – you know, you decide how to advance the spark, you decide what the throttle is, you step on the pedals –
Those extra pedals.
Yeah, yeah, yeah. It’s just fascinating. They’re a lot easier to work on. Because it’s hard: You see, the computer chip looks just fine, but you know somehow there’s a short in it or something, and it’s easier to debug an old car. But anyway – what were we talking about?
Telerobotics. The human machines.
Yeah. The human machines and there was something specifically we were dealing with.
The Telerobotics Project
Well, so who are some of the people you brought in for the telerobotics project?
Oh, okay, well, let’s see. Paul Schenker [Field Service Engineer at BC Technical], who actually is not here anymore. Brian Wilcox – I hired Brian in probably about 1982. I thought he was very capable, which he is, of course. But he happened to read an article I had written about sines and cosines in an old robotics journal that no longer exists. So one of the guys originally at JPL, Alan Thompson, started – when I first came – started this magazine. I think we called it “Robotics Today” or something. Anyway, so we wrote an article about a fast way to calculate sines and cosines and indicate into a table – or index into a table. And Brian happened to read that and he sent me a letter saying, “You know, I read your article on such and such.” He says, “I thought you might be interested in my résumé.” And I could tell – he had started a little robotics business, which failed when the first Apple IIs came out, because suddenly there was a real thing that you could buy. But so he had experience, you know, dealing with the business aspects and stuff. But he had built a little rover – not a rover, but a little robot that could crawl along the bottom of a river and sift for gold. He said, “Well, the idea is you power it by the water and it would try –” And the idea was he’d build it and after a few days he’d go back and see if there was any gold there. And I thought, “This is a creative guy that we oughta have here.” When I went to Caltech to go to grad school, he became the supervisor. I says, “You really ought to –” He says, “I want to do research.” And I said, “Yeah, but you know, if there’s not a good supervisor, a good entrepreneur – a person with ideas – it doesn’t go any place.” So he became – and he was supervisor for years. He’s no longer supervisor of the group, but he’s continued. I mean, like I told you, he’s an athlete and a lot of things, and he had a central role in building the Sojourner rover as well. Okay, what else?
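The article Brian read isn’t available to check, but the general trick of trading a little accuracy for a lot of speed by precomputing sines and indexing into a table looks something like this (the table size and wrapping scheme are illustrative assumptions, not the article’s actual method):

```python
import math

TABLE_SIZE = 1024   # power of two, so the angle wrap is a cheap bit-mask
SIN_TABLE = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]
_SCALE = TABLE_SIZE / (2.0 * math.pi)

def fast_sin(theta):
    """Table lookup: one multiply, one truncate, one mask - no series expansion."""
    return SIN_TABLE[int(theta * _SCALE) & (TABLE_SIZE - 1)]

def fast_cos(theta):
    """Cosine is just sine shifted a quarter turn."""
    return fast_sin(theta + math.pi / 2.0)
```

With 1024 entries the worst-case error is on the order of 2π/1024 ≈ 0.006, which is plenty for servo-rate kinematics, and it is far cheaper than evaluating a series on the processors of that era.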
I still haven’t remembered what we were talking about before your tape ran out.
Local Autonomy and Mars Rovers
We were talking about the local – you mentioned local autonomy and that a lot of stuff that was going on was not so much autonomous robots per se but, for example, the MERs [Mars Exploration Rovers] were –
That’s traded man-machine control.
And – oh, I know what. Okay, thank you. So when we – the first MERs, at first, you’d say, “Okay, yeah, I see this rock. We want to go to the rock,” and so every day you uplink some commands – you interrogate the robot and then you uplink some commands. Well, at first, to get over to the rock, they had to kind of drive it. They figured, “Now go over here and then go here.” And so it would take, like, three days to get the rover to a point where it could actually use the rock abrasion tool, the RAT, to abrade the surface so they could look beneath the skin of the rock. But we knew that, okay, once you define where the rock is – and clearly you’re not gonna drive over there unless you can get there; I mean, the human operators were kind of saying, “Yeah, that’s okay” – but toward the end of the mission we uplinked software that would allow us – you’d say, “Okay, go take a sample of that rock.” And then the rover itself can go over, estimate the local surface normals, do the correct positioning and do that operation itself. And it saves a lot of time. You can really improve mission throughput just by automating little stuff. You know, like if a robot could put gas in your car or something; you’d go up there, it puts gas in your car, and it takes a few minutes’ worth of – rather than a long time to gas your car up. So that kind of thing we actually have implemented. And we’re trying to implement things like, if a rock looks interesting, maybe pick that rock up, or decide based on local information that you have, using SHEER spectrometers – you know, we have all kinds of instruments we put on the rovers as well. And we’re continually miniaturizing them so they get less and less massive and more and more capable. So you can give the robot – give them high-level commands. You can give a robot an ability to do a lot of stuff autonomously. It doesn’t have to be brilliant.
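The “estimate the local surface normals” step mentioned here can be sketched as a least-squares plane fit to the stereo points around the target (a generic reconstruction, not the flight software; the z = ax + by + c parameterization assumes the rock face isn’t vertical):

```python
import math

def solve3(A, r):
    """Cramer's rule for a 3x3 linear system A x = r."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = r[i]
        out.append(det(M) / d)
    return out

def surface_normal(points):
    """Least-squares plane z = a*x + b*y + c through 3-D stereo points;
    returns the unit normal (oriented with positive z, i.e. 'up')."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # normal equations for the coefficients [a, b, c]
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]], [sxz, syz, sz])
    nx, ny, nz = -a, -b, 1.0
    m = math.sqrt(nx * nx + ny * ny + nz * nz)
    return nx / m, ny / m, nz / m
```

Once the rover has the normal, positioning the abrasion tool square to the rock face is a well-defined local task in exactly the sense described above.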
It just – it’s like, “My boss said to go do so and so.” It’s a little bit like in 1803 when the people went up to explore the Louisiana Territory and nobody could talk to them. They said, “Go up and find out if you can find the Northwest Passage, what’s there, what did we buy? Document the flora and fauna and come back.” So we’re moving in that direction. And over a long period of time it will happen partly because the deeper we go into space the longer it takes to close a loop. It’s one thing if you’re on – and as a matter of fact, this is an interesting story about Brian. When he was twelve years old his father was working at the General Motors facility up in Santa Barbara – Goleta, I think it is. And they had built a rover, which was supposed to go to the moon. And the idea there – it’s only like a second or two delay. So, you know, you could actually almost control a rover from the ground because you don’t have to wait for twenty minutes or forty minutes depending where you are. But he actually drove one. They had an open house. But it would have an image and then it would go dark, then it would have an image and then go dark. He said he actually drove it into a ditch because he wasn’t getting enough feedback. But the funny thing was that when I was the supervisor and Brian was in the group, we had a trailer parked out behind the Building 198, which is where our robotics lab was, and it had a bunch of old junk in it. And so where Division 34 was, that’s the controls division. But the division administrator desperately wanted to get that trailer out of there and all that junk gone, because he wanted to –
But he wanted parking space and he told me, “If you get rid of that trailer, I’ll let you have the parking place.” In those days at JPL you could have a space with your name on it, which was, you know, cool. <laughs> But anyway, so I said, “I wonder what’s in that trailer?” So I said, “Hey, Brian, let’s go over and look in the trailer and see what’s in there.” And the first thing we see is this rover. He says, “God, is that –?” Because GM had built it for JPL. He says, “You know, I think that’s the rover that I drove when I was a kid.” And so I said, “Wait, we can’t get rid of that. We’ve gotta save that.” So we took it out, we put it in the lab. We talked with the program offices. They bought some new wheels, actually off of a Honda ATV. We put the new wheels on it, and Brian did a lot of work with it, painted it blue, and he did some stuff out in the arroyo. This wasn’t the autonomous demo one, but he did a lot of kind of human – man-machine interface type of control work with that. But I thought it was really hilarious that his old rover returned to haunt him. But it was very good because we – at that time NASA wasn’t funding rover work. And I remember one of the NASA sponsors, Mel Montemerlo, accused us at one of the <inaudible> meetings (54:05). He said, “All you at JPL, you just want to build a rover.” And I thought, “Of course we want to build a rover. What’s wrong with that?” <laughs> “Rovers will really benefit science and benefit NASA.” But, anyway, that’s a kind of funny little story about Brian and things that kind of return to haunt you later.
Why weren’t they building rovers at the time?
Well, because they thought they were gonna do the ’84 rover, but it was much too expensive. You know, these flagship missions. In recent years, of course, NASA has started – we end up competing for everything, and they put these cost caps on missions, and the idea of “Hey, let’s have a five million dollar mission” – those just aren’t – you know, the United States, we’re having a lot of economic difficulty. And so you start – what can you really afford to buy? But at any rate, it was gonna be a very expensive mission and we didn’t really have the technology in place to do it at that time. Sojourner was a much cheaper mission. Again, it was a flight experiment, which meant that the flight certification wasn’t nearly as rigorous. ’Cause the idea was, okay, they thought if it lasted for thirty days, or a few days, it was gonna be fine. And it did last a lot longer than that and it really captured the public’s imagination. But, yeah, it ends up being a matter of funding and a matter of – you know, how does NASA decide what it’s going to do? Now, every one of the main NASA science directorates, you know – earth science, planetary science and astrophysics – they have decadal studies, and the broad-based scientific community weighs in on what is relevant to do. And if we discovered something that looked like life on Mars, there’d probably be a lot of funding to do Mars stuff. If we discover some new thing about dark energy, I mean, we’re of course gonna fly some missions that are related to dark energy and that kind of thing. But if we found some really interesting work, really interesting scientific tidbits, there probably would be more funding. But, for example, you know, the Allan Hills meteorite was found in Antarctica, which they do believe was a Martian meteorite.
I remember Dan Goldin [Daniel Goldin, NASA Administrator, 1992-2001] got on a press conference and said, “You know, we think these might be fossils of life.” Well, that really influenced the later funding, because then people were curious and they say, “Well, okay, that’s interesting to do.” Some of the astrophysics things, people think, “Well, the universe has been here, if we understand it correctly, at least thirteen billion years. And maybe the universe will be here in a couple hundred, and maybe some astrophysics missions aren’t the most important things to do.” But if you have a strong enough constituency that can talk to their congressman and NSF and this and that, then it ends up being very political, and I don’t mean that in a negative way. That’s what humans do. You know, how do we decide what we’re gonna do? We all get together and we talk about it, and whoever’s got clout has the loudest voice, and if somebody’s senator has the chair of a certain committee, then <laughs> maybe they win. But that’s the kind of way that politics works. But nowadays the country can’t afford quite as much as it did, and so they try to make sure that we’re doing things that are relevant. And now – some people notwithstanding, who think that climate warming is a big humbug – there are enough people that think, “You know, there are some things going on that maybe we oughta know about.” And so earth science seems to be in the ascendancy now. But that’s – you know, if you gotta talk about not being able to raise food versus dark energy, maybe being informed so you could be reasonable about your decisions on climate change and that kind of thing is a useful thing to do.
Relationship Between Roboticists and Scientists
How is the relationship of the robotics group with the scientists that are also part of the missions over time?
Well, let me say, I think it’s actually very good. I know that at first, before we flew Sojourner, a lot of – I won’t mention names, but some of the people who sit there on those panels and talk about the stuff that the rover picked up – they were just kind of opposed to it because, well, a) it was unproven technology, and b) they knew they could put instruments in that mass that might make for some good science. And it would, of course, be very frustrating if there was this really interesting rock that you couldn’t quite reach. But my feeling has been that the relationship is very good. There are people in – I also work in Division 32. I’m a division technologist in both this division, the instruments division, and the science division. But, you know, of course, you do science with the instruments. But there are people like geologists – Bob Anderson, for example, is a geologist in 32, but he works very, very closely with the MER people. He worked very closely with the Phoenix people. And he was one of the people that discovered, you know, you can pick up sand and you think you’re gonna put it in the instrument, but it doesn’t always go in the hole. <laughs> So he learned a lot about how you actually get samples and get them into the instruments so you can make the measurements you want to make. And as an institution we’ve learned a lot about that, too.
But my feeling is that planetary scientists have worked quite well with the technologists. And, as a matter of fact, in Division 38 we have people who are really expert at building superconducting detectors, and they work so closely with the people in Division 32, who do the astrophysics, that sometimes you can’t tell what division our groups are in. You have to have the really sophisticated superconducting detectors to be able to measure things like the cosmic microwave background, where you’re talking about differences in temperature of ten to the minus five degrees. And so they watch out for each other and they work very profitably and productively together.
Challenges and Accomplishments of the Rover Projects
So what were some of the biggest technical challenges or technical accomplishments along the way of developing these various rover systems?
Well, let me make – you just reminded me of something that I think was – it dawned on me when I was still at Bendix. I was really pleased that we were able to build a robot that – if you could throw something out one at a time on a table, it could look at it and decide if it was right side up or upside down and what orientation it was, and then the robot could go pick it up and then put it down where it was supposed to. We even, you know, had some force sensors and you could make little tests. I was very proud of it; we built a whole bicycle brake. But, well, you know – is the part really there? Because humans, when you’re doing an operation – say, like, you say, “Pick up the pen.” There’s no pen there, or that’s not a pen, that’s a top stick, or, you know – people would be able to handle that. But robots – you know, if you say pick up the pen and it isn’t there, it would just pick up nothing. And I’ve watched rovers – or not rovers, but robots – self-destruct. In one case we were doing a demo where the robot was supposed to pick up a washer with a hole in it and then put the washer over the axle of this bike brake. But the thing is that the washer falls out of the hand; the hand is force controlled, so it closes, and then when the robot tries to stuff it down on the axle, it destroys the hand because there’s no washer there. The thing is, you start really appreciating the kind of stuff that human beings can do.
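The failure described here – the force-controlled hand closing on nothing and then being driven onto the axle – is exactly what a pre-insertion sanity check is for. A hypothetical sketch (the sensor readings, thresholds, and function names are invented for illustration, not the Bendix system’s actual logic):

```python
# Assumed threshold values, purely for illustration
EMPTY_CLOSED_WIDTH = 0.002   # meters: fingers touching, nothing held
MIN_GRIP_FORCE = 0.5         # newtons: minimum squeeze on a real part

def grasp_ok(finger_gap, grip_force):
    """The washer is really in hand only if the fingers stopped on something
    and are actually squeezing it."""
    return finger_gap > EMPTY_CLOSED_WIDTH and grip_force >= MIN_GRIP_FORCE

def place_washer(finger_gap, grip_force):
    """Gate the force-controlled insertion on a verified grasp."""
    if not grasp_ok(finger_gap, grip_force):
        return "abort: washer lost, re-acquire"
    return "proceed: press washer onto axle"
```

A human does this check without thinking; building it in explicitly is the price a robot pays for not having our perception.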
But, at any rate – so we built this robot, and provided everything was in reasonable shape, it worked. Or you could program it so you could test, but that of course really slows things down. And human beings just look and say, “Ah, it’s not there,” before they reach for it. But a robot, you end up going over and nudging, and it’s like having thick gloves on and trying to decide what you’re going to do. So that slows it down, and then people don’t want to buy it because it’s too slow. But – so we had a guy at Bendix. We wanted to be able to make this system that would do assembly, and so, you know, I could do all of these nice clean coordinate transforms – yeah, I could do that, and the robot was really very good at doing that. But then we hired a guy who was supposed to work on the big problem: let’s suppose you have a bunch of springs in a bin and you want to pick a spring out. For a human being it doesn’t matter if the springs are pretty hard to deal with because they get wound up. Or imagine you have to pick up nuts and bolts: you reach in and then you grab some and you put them up. But having a non-human try to get nuts and bolts out one at a time, especially if they tangle up – it’s a very, very hard problem.
And so this fellow, his name is Bob Malique, and I used to call him Rube because a lot of the stuff he’d try to come up with was kind of Rube Goldberg-ish, and he never liked it – I was just teasing him – but he was a pretty smart, creative guy too. But I finally realized one day that the hard problem isn’t dealing with an object that’s out there nicely isolated on a plane; the real problem is you’ve got a bunch of junk and you’ve gotta figure out what to do with it and deal with it one at a time. So, you know, we’d been solving the easy problem; the hard problem is getting those doggone parts out one at a time. But then, you know, if you were in space, for example – one of the Russians did this: they ran their vehicle into their space station, into Mir, right? And it messed it up. Well, things no longer look like their graphics models, and say if this is all smashed, you know, human beings will say, “Oh yeah, because it looks clear and I can see these little corners and stuff – yeah, you smashed that.” So that’s what it is: it’s a smashed cassette case. But, you know, a robot isn’t necessarily going to know that, because it doesn’t have all of our capabilities.
So one of the big problems is dealing with uncertainty, and I think that’s still – matter of fact, the problem of, like, trying to drive to Palo Alto on the freeway under computer control – that’s a problem where there’s a lot of stuff going on, a lot of things. A human being can’t do that until you’re, like, what, 16 years old – maybe if you’re really good, at 12 you can drive the tractor or something. But handling those kinds of problems – and dexterity – those are incredibly difficult. One thing I did work on – it was with Ken Salisbury [Kenneth Salisbury, Prof. Stanford University], whom you may have interviewed – he and I built a multi-fingered hand. He actually designed it and built it as his Ph.D. dissertation, but I remember we had thought, “Hey, we should build a multi-fingered hand,” and I went up to talk to Tom Binford at Stanford, and I says, “Does that mean I can work on this?” And they said, “Oh yeah.”
So, Ken Salisbury, that became his career, but the idea there was: could you actually program something to do dexterous things? And it turns out that’s very difficult. You know, you can kind of preprogram it, but the kind of really effortless dext - or even a squirrel, you know, I feed squirrels in my yard at home, they pick up peanuts and they nibble on them and they throw them down, and they don’t have a great deal of cognitive capability, but they know enough that it’s a peanut and they can eat it and they can peel it and they can get rid of the shells. But those kinds of problems, dealing with uncertainty and dealing with situations that are not really well defined, those are the big challenges. You know, I wonder to myself, how is it that I know things?
Like something I hadn’t thought about for years - it occurred to me and I thought about it and I right away picked up the names and this and that, and all I’m thinking is, how is it that in your brain you formulate a query and you don’t even know the name, but maybe you know, yeah, that was the thing, that was that weird-looking thing that we saw down at the museum in San Diego, and what was it, oh yeah, that was the Solda - how does that work? We really don’t understand that. Matter of fact, if you talk to a neuroscientist they will admit that we mostly don’t understand the brain at all. We know a lot about the chemistry and the physiology of neurons and how you get signals, but how you go from a neuron - maybe your spinal cord neurons talk to 200,000 other neurons - to how information gets gated and represented and how you learn, we don’t know. You learn to walk, and after you learn to walk you don’t think about it. If the floor is smooth, it’s a flat floor, yeah, you talk to your buddies and walk down the hall; if somebody has left a banana peel there you might slip, and then suddenly you have an interrupt and you recover with reflexes, but after you’ve decided the world is organized again you start walking. If you’re walking down even stairs, it’s straightforward. If you’re trying to go across a creek and not get your feet wet and there are rocks there, you don’t sit there and solve physics problems while you’re doing that. You might solve physics problems while you’re walking down the hall, but if you’re going to get across a creek and not get your feet wet, for one thing you have to test, is this rock stable? There’s all kinds of stuff that you do, and it typically interrupts conversations because it requires a great deal of thinking and organizational skill to make that happen.
Yeah, I used to think that, well, we’ll be able to make these smart machines and they’ll get smarter and smarter, but they aren’t very smart yet. And I think, again, that doesn’t mean they’re useless, because they do lots of really interesting and useful stuff, but the whole kind of, you know - how do you know? Somebody touches you with this and you’d say, oh yeah, that’s the end, you know, I expect this to be the point of a pencil, but you could also take a scorpion and, you know, say okay, yeah, that’s okay, but then, “oh God, it’s a scorpion.” You get more and more reverence - especially when you see images of neural tracts and all that - about how does that even work? And how is it that you end up being conscious? I can imagine programming a robot, a very sophisticated robot that we can’t build yet, so it can look like it was conscious. It says, “oh yeah, that’s me. I call that me,” but does it have the feeling of being conscious? You know, you taste coffee; you can run a spectrum on a mass spectrometer and say, that’s coffee, and it’s Arabica beans or something like that. But does it taste it, and does it have the sensation of taste? Or that’s red, or that’s green, or whatever. Those are really very fundamental problems that I - matter of fact, I don’t even know if humans are smart enough to answer them yet. But we may experiment with genetic engineering and build some people with really more facile brains that can do it. But anyway.
Challenges of Man-Machine Interaction
What about the challenges of that, actually? You mentioned the man-machine interfaces, and really it’s interaction between humans, who have all these capabilities, and robots, who have different kinds of capabilities but not the same ones as humans. What are the challenges in that?
Well, you know, the challenge - well, okay, here’s a good example. I remember years ago people were trying to work on implantable gait assists for people who were paraplegic. Okay, so then I thought, well, okay, suppose you’re going to build a really smart robot that - say the floor is flat and this and that - maybe you could have an exoskeleton that controls a person so they can walk. But if you wanted to have something that actually helped the person to interact, how do you convey the information, and in what form, from the person’s nervous system to the control system of the robot? Because if people have severe, you know, ALS, or some kind of a severe injury, they’re cognitively very capable. We just had a colleague here, John Klein, who just died last week. A year and a half ago he woke up one morning and he couldn’t walk, and that kind of came and went, but - he had ALS. Okay, but his mind was crisp. Like Stephen Hawking - the mind is crystal clear, they can hear, they can understand, but their motor control capability is just gone; those diseases really degenerate motor neurons.
So how would you get the information? You know, now people put electrodes in people’s visual cortex and they can kind of see, and you can put in these neural implants and allow people who have cochlear damage to hear to a certain extent - you know, stimulate the right kind of nerves. But then how would - if you wanted to give the person a little pack that could stimulate their motor neurons, or even directly, maybe you could innervate the muscles with artificial neurons, how do you get the information out in a form that would make sense to a machine? Those are really hard, you know, because we don’t know. Matter of fact, when I taste coffee, does it taste the same to you, the quality of it? I don’t like cheese; a lot of people like cheese, so there’s a difference there - one person says, “oh, this is delicious cheese,” and I say, ugh, I hate it. So what is that? Ultimately we can all agree that this is red, but you might be perceiving blue or green or purple or something like that.
So I think those are very fundamental issues. Also, people want to build robots that are self-healing, but at the genetic level we heal ourselves - you cut yourself and genetically it knows what it’s supposed to be providing; if it’s not a terrible injury, if it’s a little cut, it fixes itself. But if you lose a finger, you don’t grow a finger back, unless you’re a starfish or something. So how would you make a robot that could heal itself? Unless you started providing very biological things, like biochemicals and amino acids in a blood stream and this and that, and enough information that it can grab them and reformulate itself, those are very fundamental issues. Let’s say you wanted to send a robotic colony to Mars: are you going to send a whole warehouse of parts, or are you going to be able to maybe dissociate native materials and fabricate new parts with more advanced, like, 3-D lithography kind of stuff that people now do? Or even wiring up a brain - okay, I don’t mean that as a trivial task, but remember, several years ago we were doing a task here for DARPA, and what we were trying to do was active vibration suppression.
So the idea was we had a stalk, with a focal plane mounted on the stalk, and we were exciting it; we wanted to look at the vibrations, but we wanted to be able to actively damp it to stabilize the focal plane so you could get a clear image. Well, the guy responsible for building this was a fellow group supervisor in my section at the time, and I noticed that we were way behind schedule, and every time they’d do some testing they would find a problem, so they would open up this box to fix the problem, and they’d fix the problem but they’d create a new problem because they’d break a wire or something. So I began to think, and I said, you know, “Didn’t you design it so you could open it?” “No, we figured we’re going to build it and then you’re going to fly it.” I said, “You have to test it.” And it occurred to me that being able to wire something up as subtle as a nervous system, with enough information content that it can do something subtle, is a remarkably difficult problem. And if you’ve ever talked to a brain surgeon - we had a brain surgeon come by here a couple of years ago; he was interested in whether he could use robots to do brain surgery. But it turns out that, you know, your brain is not designed to be worked on, and this is actually kind of chilling in a way, but he said if you have to operate on somebody’s inner brain, you basically have to take their head all apart. Unless it was, you know, cancer or something that you might be able to zap with very directional radiation, if you’ve gotta get in there and try to fix something like an aneurysm, you take people’s faces apart and their skull apart, and it’s all unfolded, and you get in there and you do the operation and then you put it all back together again. So the idea of being able to wire up something with the kind of tools that we have today for doing micro devices, even really sophisticated computer chips, it’s just really a challenge.
So I think being able to make really capable robots that have enough nervous-system capability to make very informed and subtle decisions is going to be a long-term challenge.
But one thing that robots can do - let’s say you want to reach into a furnace and you want to pick up a piece of hot iron and then you’re going to put it in a forge and you’re going to stamp it. Well, robots don’t hurt. Of course, if you wear iron gloves with enough cooling, people don’t hurt either, but.... So those are issues - robots do very useful stuff. Welders, you know - your car, if you buy a car today, it’s welded together, and they have big welding machines that humans had a hard time coping with, but they’re very consistent. Your cars are pretty consistently welded together unless there’s a glitch of some sort - much more consistently than in the old days when people had welding guns on spring supports and had to move them manually to weld your car together. So I think we’ll see - and matter of fact, you look in the military now: a fighter pilot, if you’re going to see offensive planes miles away, it’s radar and all kinds of displays that try to assimilate and distill what the operator’s going to see and make it minimal for the operator, so then they can make a high-level decision. And you see it in the movies all the time: lock on, okay, you’re locked on; once you’re locked on you can track, and then you fire. But trying to find it in the first place can really be a challenge, and so I think we’ll see more and more of that, where people do more and more high-level things and robots do more and more.
But then, you know, you look at the way the world economy is - people in Spain are rioting because there are no jobs. We historically said, “well, we’ll use robots to free people to do more productive stuff,” and that’s great if the people are really capable, but you’re not going to take every person and teach them general relativity; I mean, there are very few people in the world that can really comprehend those very esoteric things. So basically you end up with people that don’t have much to do that’s within their capability of doing, and I think that’s going to end up being a significant problem. We aren’t organized socially yet to let people just live without working - they say, oh, I’ll go get a job - so maybe you need robots to make lots of stuff for people, but then you’ve got the problem of the carrying capacity of the Earth. And yeah, so I just think sometimes, my goodness, you always think you’re going to solve these problems, but there are always going to be new ones. And sometime, maybe a billion years from now, there will probably be some kind of sentient life, and you wonder what kind of issues they will deal with.
Future of Space Robotics
What would you say is the future of robotics in space exploration?
Oh, that we’ll do more and more of it. For one - people often talk about doing an interstellar mission. Well, for one thing, you’re going to have to fly really, really fast to even make it possible to get there in a reasonable amount of time; you’re going to have to use ion propulsion and go faster and faster, but you aren’t going to send people there. You know, maybe sometime, maybe we’ll discover some new physics. I mean, in 1899 a guy named Duell [Charles H. Duell] was head of the patent office, and supposedly he said, “we should just close the patent office because everything’s been invented that can be invented.” That certainly hasn’t been true; matter of fact, we invent more and more stuff. You think about the disc drive on your computer - now you have 80 gigs, 100 gigs, 200 gigs in a tiny little package, and that was because some people fiddled around in a lab and they discovered giant magnetoresistance, which suddenly really shrank - I remember when I had my first computer with a disc drive I had 250K. And so now we discover new things all the time. Maybe we’ll discover some interesting physics that will allow us to - wormholes or - this stuff all sounds like science fiction now, and nobody knows what a wormhole really is or whether it really goes other places. But, you know, we’ve discovered lots of interesting physics that people didn’t know in the Middle Ages, and you take your cell phone with images on it - if you were by yourself they’d probably burn you as a witch in 1600 or 1500 because it would be stuff that somehow wasn’t mentioned. But we will do more and more robotics, and it will be more and more capable, but for a long time we will need human capabilities to govern the overall task.
Maybe what you really want - you don’t want something that’s totally autonomous, because something totally autonomous says, “Heck with you, I’m going to do what I want.” You want a very loyal agent that says, “Yeah, boss, this is what we’ll do, and we’ll do the best we can to meet your needs.”
So if we could just go back to some of the other people that you’ve worked with over the years –
Okay well I’ve –
You’ve touched on already.
Let’s see.... Well, at MIT there were Dan Whitney [Daniel E. Whitney, Mechanical Engineering and Engineering Systems] and Jim Nevins [James L. Nevins, contributor to the Apollo program; passed away 2012]; they had a multi-client study. As a matter of fact, we gave them our first engineering-model robot that we built at Bendix, the one we did a lot of our exploratory work on; when Bendix built some more product-oriented robots, we gave them our first one. Then, actually, when I left Bendix, as I mentioned, I went to GE, but one reason I went to GE was because I wanted to go to RPI, because they had a professor there who had been - let me see, Hilbert Herbert - he had been working on puzzles and stuff. So I talked him into being my thesis advisor. Then Bendix was getting out of the robotics business, so I called them up and said, “hey, why don’t you donate your robot to RPI,” so they got a robot. But then I enrolled in the computer science curriculum at RPI, and then I left to come here. I always thought I would go back to UCLA and finish my Ph.D. in physics, but that just never happened. But I get to do a lot of physicsy kind of stuff in the science division now, so it’s interesting. Other people - it’s Nevins and Whitney, then at Stanford there were Tom Binford and Bernie Roth, and at SRI there were Charlie Rosen and David Nitzan; they had a multi-client study going, and those are, you know, the principal people. Well, okay, there were other people. Matt Mason was a grad student at MIT when I was - matter of fact, I think he was a grad student when we first went - we made a presentation of this PACS system, because my boss at the time wanted to get out and find out what people were doing in robotics; we were pretty isolated there in Dayton. I remember we showed a movie of the programmable automation cybernation system, and all the grad students laughed when we said that.
That’s what made me think that’s kind of a highfalutin-sounding thing to say. Oh, and Tomás Lozano-Pérez [Prof. Computer Science and Engineering] - I think he’s still at MIT, I haven’t seen him for a while. Ruzena Bajcsy - she was running the GRASP Lab at the University of Pennsylvania, and then she left and she’s been at NSF –
She’s in Berkeley.
Berkeley, yeah. She got a Ph.D. - she already had a Ph.D., but she got a Ph.D. from Stanford in computer vision about the time, you know, back in the early-to-mid ’70s, when I was being active in those multi-client studies and things. Then there were some people within NASA that I didn’t work that closely with; each NASA center, it seems, started a robotics group of some sort. All of a sudden, when the space station started to be real and they started to fund this telerobot project that we had here, the NASA centers all started - matter of fact, JSC, which changed its name from the Manned Spacecraft Center, in one year started several AI and robotics groups, and you’ve probably talked to them - what are they called, Robonaut and things like that. It’s interesting, I think, that a lot of times, you know - one fellow here at JPL, Bruno Jau, built a multi-fingered robot, and the funny thing is that even though it had a lot of fingers and stuff, when you wanted to pick something up it still couldn’t do this kind of stuff. You know, it would hold something and then it would slip, and it didn’t know how it slipped and it couldn’t tell how it was actually holding the object. So, yeah, it’s been a lot - of course, I’ve been doing robotics for almost 40 years now, it’s kind of stunning, and now I have grandchildren and all that stuff. Time passes and you realize that you don’t have forever to do things.
Advice for Young People
What’s your recommendation for young people who are interested in robotics? Who want to pursue a career in it?
Well, it’s interesting; you have to decide what you like to do, because robotics is one of those system-level activities. If you do control theory, people work on really precisely formulated problems in control theory. If you’re doing detectors, you can do very precisely formulated problems in solid-state physics and that kind of thing. And as a matter of fact, it’s really easy to say, “Hey, look, this has got a trillion transistors on it” - looks like a piece of silicon to me. But robotics has got all of that stuff. It’s got the whole computer organization: how do you integrate sensory information with the overall control of your task? Matter of fact, how do you even remember what you did? Robots don’t remember what they did; it’s almost like they have a hippocampal lesion, so they can’t remember what they just did. But, you know, a human explorer would walk through and say, “oh, I’ve been here before. I remember that little red rock. We weren’t successful when we tried to use that last drill bit because it had a little crack in it.” Humans remember all those details and they put them together, and they still manage to get the job done. A human being, you know - if you tied somebody up and they needed to get out of something, if they could grab something with their jaw, they might even pull themselves up by just the force of their jaw and using their neck. So human beings are very versatile in solving those kinds of things. And if you try to solve an overall robotic problem in all of its complexity, it’s too big. So if you’re interested in looking at overall robotic projects, then I think you should take some controls, and you need to take some signal processing, and you certainly need to take software, and you need to know about detectors and sensors and things like that. If you want to get down to the details of controlling an arm, then you need to have a good basis in control theory itself.
Sometimes you have the idea that you can control a mechanism, but if you don’t - let me call my daughter real fast, because I didn’t know it was so late. Well, no, this is fun, but anyway, I think that, you know, if you’re really interested in the subtlety of neural-based systems and stuff like that, then you need to look at neurobiology, but in the end there’s so much stuff, it just really gets complicated. There are a lot of things to do, and it’s a field that will be growing - being able to make smarter household assistants or hospital assistants, things like that. It’ll be interesting to see what things are like in 50 years compared to now, and maybe we’ll have robotic-biological hybrids. Or maybe we’ll use biological ideas to be able to build the circuits - I mean, you know, your brain gets built from amino acids and things like that over a long period of time, so maybe we’ll do things like that. But then of course, if you don’t feed your robot it’ll die; you can’t just turn it off and put it in a drawer.
Robots that have various organic materials.
It’s a fascinating problem. So okay so it’s been interesting talking to you –
Yeah, that was our final question.
So you got a fair amount of robotic philosophy in this and that.
People always kind of have thought of me as a robotic philosopher. Vic Scheinman once - there’s this funny story. We had another group here - actually it was in the same group - Tony Bejczy [Antal Bejczy] used to be here, and he was very much interested in teleoperators, and Vic had done some consulting work with them, because Tony built a big wrist sensor for the shuttle. He actually went down to Texas and was using it to show that you could minimize overloading orbital modules and things like that. I remember that one day I was taking someone on a tour of the teleoperator lab and I mentioned something about Vic Scheinman, and Vic walked in. And I said, “Vic, what are you doing here?!” - he was consulting with Tony. But I remember once he introduced me to someone and he said, “well, he’s really a robotic philosopher.” But these are fascinating issues, the kinds of things like being able to start with inanimate matter and program it so that it can actually do things and respond in a reasonably effective way. Well, you look at Marc Raibert’s BigDog, his walking machine - you go kick it, and it invokes reflexes and it tries to stay upright. It’s almost like a headless horseman; it isn’t doing a lot of planning and high-level reasoning, but it does a good job of climbing and walking on things that are difficult. As a matter of fact, I should have mentioned it: it was Marc Raibert who actually kept calling me up - well, I met him at a conference, and he kept calling me up and saying, why don’t I come to work at JPL. So, yeah, I neglected to mention that, but Marc was here when I first arrived, and then he decided to leave and go to CMU, and after that he went to MIT, and I guess he’s now at Boston Dynamics - he maybe got tired of academia, I don’t know, but he was always a successful mover and shaker. Well, good talking to you; I should probably go pick up my daughter.