Oral-History:Robert Ambrose

About Robert Ambrose

Robert Ambrose was born in Boston, MA in 1964 and grew up in Houston, TX. He received his B.S. and M.S. degrees in Mechanical Engineering from Washington University in St. Louis, and his Ph.D. from the University of Texas at Austin. After graduating, Ambrose secured a position at the NASA Johnson Space Center in Houston, TX where he is currently the Principal Investigator of the Game Changing Development Program. Ambrose also heads NASA's Robonaut project and has been instrumental in the development of human-robot interaction.

In this interview, Ambrose discusses his Ph.D. work and NASA projects, focusing on his developments in the field of human-robot cooperation. He also briefly discusses his initial interest in engineering and the future of robotics.

About the Interview

ROBERT AMBROSE: An Interview Conducted by Selma Šabanović and Matthew R. Francisco, IEEE History Center, 28 September 2011

Interview # 662 for Indiana University and the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center at Stevens Institute of Technology, Castle Point on Hudson, Hoboken NJ 07030 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Šabanović, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Robert Ambrose, an oral history conducted in 2011 by Selma Šabanović and Matthew Francisco, Indiana University, Bloomington Indiana, for Indiana University and the IEEE.

Interview

INTERVIEW: Robert Ambrose
INTERVIEWER: Selma Šabanović and Matthew Francisco
DATE: 28 September 2011
PLACE: San Francisco, CA

Early Life and Education

Q:

Okay so if we could start with your name and where and when you were born.

Ambrose:

Okay. I’m Robert O. Ambrose. I was born in 1964 in Boston, United States.

Q:

And did you stay in Boston? Did you go to school there?

Ambrose:

We moved from Boston very soon after I was born. My parents were in school there. And we moved to Houston, Texas.

Q:

And that’s where you went to college, or no?

Ambrose:

No. So we moved to Houston. I went to elementary school in the Houston area. And then we moved to Chicago. I went to elementary school there, as well, then back to Houston. And I finished through high school in the Houston area. Then in college I went to Washington University in St. Louis, which is in St. Louis. And I got a Bachelor’s in mechanical engineering and a Master’s in mechanical engineering. Then moved to Austin, and I got married. And my wife and I both went to graduate school in Austin at the University of Texas. I continued in mechanical engineering with a specialty in robot system design.

Q:

How did you decide to go into mechanical engineering in the first place?

Ambrose:

Well, I actually wanted to do three majors, and it wasn’t possible in the time that I wanted to – I decided I would rather do graduate work than three majors. But what I really wanted was physics, mechanical engineering, and electrical engineering. And I compared the syllabus that was available for all three of those degrees, and they were basically mutually exclusive. So there was almost no overlap, not even the freshman year. So I realized it was going to take twelve years to get all three degrees. And I decided I would rather get a different set of three degrees in less than twelve years. So I looked at the syllabi and I saw that the parts of physics that I was most interested in were well covered in mechanical engineering. And the mechanical curriculum required that I take a lot of electrical engineering classes, whereas the electrical curriculum had no mechanical engineering classes. So I decided to go mechanical. But since then I’ve always been interested in what I consider to be mechatronics. And as I got into my Ph.D. work I spent a lot of time on electrical design and software design in addition to mechanical engineering.

Q:

What got you interested in the three areas there; I mean that’s a pretty ambitious undergraduate – ?

Ambrose:

So the three areas of physics and – well I like physics. And the parts of physics that I like the most are mechanical and electrical in nature. So I’m really into electro-mechanical systems. And so I was looking for a degree that would allow me to work in that area. And then I started to develop an interest in the software engineering aspects of robotics. And I feel those are the three key things, mechanical, electrical, and software engineering that brings robotics together.

Q:

So it was just something in high school, maybe like a subject in high school--

Ambrose:

No I’ve been building things my whole life. And I always liked motors, and electricity, and gears, and mechanisms. And I’ve been building things with electric motors in them since maybe age five. So--

Q:

What kinds of things have you built?

Ambrose:

Well I’ve built a number of robotics systems, more recently.

Q:

I mean--

Ambrose:

Back when I was-- boats, airplanes, cars, and at age five I would build things and turn them on and watch what happened. There was not a whole lot of closed loop control going on. By the time I got into middle school, I didn’t understand what it was, but I built a closed loop control system for a solar collector that was pointing at the sun using a photoresistor with a tube. And when the sun would move and cast a shadow, it would turn on the motor. And it would repoint it. And that’s my first experience with closed loop control systems. And I didn’t understand the math. I was just completely winging it, obviously. But – and still do, maybe. But the experience had a lot of impact on me. I remember playing with friction, to add friction into the system, which caused things to be stable. But then I noticed that it would stick. And it would develop an error – and it would take a certain amount of error before it would break free and move again. And that experience, playing around with the physics of what was going on there, came back to me later when I was in college taking a linear controls class. And it was kind of intuitive to me what was going on, and I was now trying to figure out the math that explained it. So I was able to put those things together. And it was a good feeling to understand the math behind something I’d previously experienced in kind of an ad hoc way.
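The tracker Ambrose describes – a photoresistor in a tube switching a motor on whenever the sun's shadow falls across it – is a bang-bang controller, with static friction acting as a deadband. A minimal sketch of that behavior (all names, thresholds, and step sizes here are illustrative, not from the interview):

```python
# Bang-bang solar tracker of the kind described above: a photoresistor
# in a tube switches the motor on whenever the collector's pointing
# error casts a shadow. Static friction adds a deadband: the error must
# grow past a breakaway threshold before the collector moves again.
# All values are hypothetical stand-ins.

def track_sun(sun_angles, deadband=2.0, step=1.5, start=0.0):
    """Return the collector angle after each sun position (degrees)."""
    collector = start
    history = []
    for sun in sun_angles:
        error = sun - collector
        # The motor only switches on once the error exceeds the
        # breakaway threshold, then moves one step toward the sun.
        if abs(error) > deadband:
            collector += step if error > 0 else -step
        history.append(collector)
    return history

# Sun drifts from 0 to 20 degrees in half-degree increments; the
# collector follows in discrete steps, lagging by up to the deadband.
angles = track_sun([s * 0.5 for s in range(41)])
```

The friction he noticed is what makes this stable: without the deadband the motor would chatter around the setpoint, but with it the collector sticks until the error builds up enough to break free – exactly the stick-slip behavior he later recognized in his linear controls class.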

Q:

And in your undergraduate did you have a chance to work in any laboratories, or participate in research?

Ambrose:

Not really. I TA-ed classes a little bit, but never really needed lab classes to get hands-on – I could just build things. We built things in dorm rooms. We built things at home on summer breaks. And I like to build things and didn’t need to go to school to do that. Garages are great.

Q:

What kinds of things did you build in college, other than potato guns?

Ambrose:

I didn’t build a potato gun. Built a submarine, that was kind of fun. Built airplanes, rocket powered airplanes, walking machines, a lot of boats, and a lot of rockets.

Q:

And so were these – where did you find – what kinds of materials did you use and where did you find them?

Ambrose:

Well my ability to use machine tools was not so great. And I picked that up in graduate school. And I really wish I’d learned how to use a mill and a lathe early. It would have really improved the quality of my prototypes. So I was doing a lot with wood and plastic and composites glued together. I didn’t really get into metal until graduate work. And I really wish I had done more earlier, but I didn’t have a machine shop readily available. And I wish I had. My kids do. So-

Q:

What kinds of materials were available then? There were kits and things, where else did you find the other components, motors, gears, that kind of stuff?

Ambrose:

So I found a lot of shops that salvaged equipment. And that was great. I could go in and I could buy a squirrel cage fan that had a big motor in it and an impeller. And I could find a power supply that had been taken out of a washing machine or – refrigerators have great electric motors in them. I was able to find a lot of actuators through salvage like that, power supplies. And you can find them in salvage being pulled out of heavy machines and durable goods. Even back then, the prehistoric era when I was a kid, you could find a lot of electric motors that were, for their time, excellent. Like a starter motor – the Apollo rover was built, basically, with starter motors and very simple electric actuators in today’s terms. But they’re pretty torque-y.

Graduate Work and Introduction to Robotics

Q:

So when you got to graduate school what type of environment did you find there, did you start working with?

Ambrose:

Well I did a theoretical Master's where I was doing linkage synthesis. It was the first time I really started to use computers to do design work. I’d always been into CAD, and liked drafting, just always enjoyed drawing. I grew up in a household with a lot of architecture and drawing. And design was always being discussed. And I got into the CAD work early. But then I started to use computers to do a more formal synthesis – I started in my senior year and learned a few software packages that were just coming out that were extremely painful to use, but you were able to do so much more than you could do by hand. They subsequently led to packages that are more common like the ADAMS mechanical design package. We were using some early versions of that where you would build a file that a VAX would read. And that file had a structure where you would define mechanisms and parameters, and then you could have the program iterate through variables and optimize. So as a Master’s candidate I started doing a lot of formal optimization, design theory, design methodology. I took a couple really good classes from professors who were really knowledgeable in that field, mechanical engineers with very strong computer backgrounds. So they really knew how to use computers to do design work. And I took a class – we were in St. Louis, and there’s a very large aerospace industry there. And one of the classes that was arranged was to use a brand new CAD system that they had developed and were using for the Harrier and F-18 programs. And they kind of used us as guinea pigs to learn how to use this CAD system and watch how we learned it, and used us as beta testers of their in-house CAD, which was a great system, really very capable. And then coupled into that were these more analytical design tools. So that had a big impact.
And I went into linkage design, and the process of stating a requirement and then optimizing to get the best results was something that I really embraced. And then I took that into my Ph.D. work at UT Austin, where I came up with a design methodology specifically for robot manipulator design. And that was the main thrust of my Ph.D. work. I really wanted an experimental component to that. So what I did was I built a set of modular joints and links. And I tested them thoroughly so I got a lot of empirical data on what their performance was. And then I built a database using all that data and wrote a design program where you would state a requirement for the arm. And it would hunt through every possible permutation of putting every piece in the tool chest together in every possible configuration, and then tell you which one was best for the requirements that you just specified. And it was a great design problem. And I, obviously, used a lot of my Master’s work in linkage synthesis and optimization. But I think the most important thing was I actually built those parts. And when I got to NASA, I realized that by the time I was done formulating that design problem, I knew what the answer was before I ran the program. To formulate the design space and write it all down so you could run the synthesis, I had already figured out which corner of the design space was best. So what I started doing at NASA was focusing more on visualization of the data, and not trying to wrap the synthesis loop around it, just building visualization tools so I could try out different ideas and then visualize the results. And then I would just, in what I call wet-ware, synthesize and play around with the parameters until I got the design where I wanted it. That was a lot of fun, too. I like visualization – data visualization – and came up with ways to see the design of the robot stretch and morph very physically.
Rather than bar charts, it was the actual shape of the robot, which changed in front of my eyes as I would tweak parameters. And the payload would shrink, and you’d see if you were making progress or not.
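The design program Ambrose describes amounts to an exhaustive search: enumerate every ordering of the modules in the tool chest, score each candidate arm against the stated requirement, and report the best. A toy sketch of that structure (the modules, scoring rule, and requirement below are hypothetical stand-ins for his empirical database):

```python
# Exhaustive search over a "tool chest" of modular joints and links,
# in the spirit of the design program described above: try every
# permutation, score each candidate arm against a stated requirement,
# and return the best. Module data and the scoring rule are invented
# for illustration.
from itertools import permutations

# (name, reach_m, payload_kg, mass_kg) for each module in the chest
MODULES = [
    ("shoulder", 0.30, 40.0, 9.0),
    ("elbow",    0.45, 25.0, 6.0),
    ("wrist",    0.15, 12.0, 2.5),
]

def score(arm, required_reach, required_payload):
    """Lower is better; infeasible arms score infinity."""
    reach = sum(m[1] for m in arm)
    payload = min(m[2] for m in arm)   # the weakest joint limits the arm
    mass = sum(m[3] for m in arm)
    if reach < required_reach or payload < required_payload:
        return float("inf")
    return mass  # among feasible arms, prefer the lightest

def best_arm(required_reach, required_payload):
    candidates = []
    # Every subset size and every ordering of the tool chest.
    for n in range(1, len(MODULES) + 1):
        for arm in permutations(MODULES, n):
            candidates.append((score(arm, required_reach, required_payload), arm))
    return min(candidates)

cost, arm = best_arm(required_reach=0.6, required_payload=10.0)
```

With only a handful of modules the whole space fits in memory; his observation that formulating the problem already reveals the answer follows from how small and structured such a space is once it is written down.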

Q:

What year was this when you were doing a lot of this stuff?

Ambrose:

So my Ph.D. work started in the summer of ’87. And I finished in the summer of ’91. So I was using a PC and a Silicon Graphics. Discovering the Silicon Graphics was a great eye-opening experience for me. It’s a great computer system. The GL graphics library, which then became the OpenGL graphics library that’s used a lot today, was a great experience. Being able to visualize a design in three dimensions and move it – see how it moved and see how it performed. That was a great design tool.

Q:

How is that different from playing with the materials in your hands and shaping them while you’re designing versus looking at something on a computer screen?

Ambrose:

It’s cheaper. <laughs> But it’s not as satisfying to me. So for my Ph.D. work I wanted to do both. I wanted to build modules and then be able to more virtually synthesize the combination. I didn’t want to have to put every possible sequence of the joints and links together every time I wanted to evaluate something. I didn’t need to do that. But I think having designed and built the parts was very important for me to understand. And it led to some important things that happened later in my career, having actually designed and built robotics systems early. And those early robots that I built were really, really bad, horribly bad, very wrong. And I was free to do that. And that was important to be able to get that out of my system. And it allowed me to set a lot higher standards for later when I was doing it professionally, to know what could be done and have an expectation that it will be done better every time.

Q:

What do you mean when you describe bad and wrong in terms of these robots?

Ambrose:

Come up with some more descriptive terms. Clunky, fat, weak-

Q:

You can just describe the robot or one of them.

Ambrose:

Well I built robots that were not deeply integrated. So I was buying components like actuators and sensors with their own housings and then putting those together, rather than starting with more basic components like the stator and rotor of a motor, or just the optical disc and reader head of an encoder. Buying the whole encoder with a package around it means you’ve got this big thing sticking out the side in a clunky looking way. And it violated my sensibilities of what design should be. But I was learning and getting a feel for the componentry that had to go into the designs. And I’m big on iteration. And I can just look at designs and know that they were a first rev or that the person didn’t care. It might be a tenth rev, and they just don’t care. But you look at it and it’s clear that they were putting things together for the first time and hadn’t really thought it all out yet. Once you get all the pieces together, you realize, oh, if I’d known the nth piece I was going to add was that way, I would have made a much different choice on the first and second pieces that I selected. And it’s clear when a prototype is not well thought out like that early in the design; it just has kind of a clunky look to it. That or they were just starting to understand the design.

Q:

And who did you work with during your Ph.D.? Who was your advisor?

Ambrose:

My chief supervisor was Del Tesar at the University of Texas at Austin. And Del really liked design and liked actuators. He came up in mechanism design and got into robotics after he graduated, went to the University of Florida and worked with some great people like Joe Duffy there, and then came to Texas before I did and started a robotics team there. So Del was great. But even better than Del were the grad students. We had a great group of people that were just really into robotics. There were a lot of students – like thirty students there. And we all were just really into robotics, and we all learned from each other and were always finding new things. And back then, that was a long time ago, pre-Google. So if you have thirty people with fairly crude information gathering abilities, that might be as good as one person today who’s got the world at their fingertips. With that many people all finding things the old fashioned way, where you had to go into the library and read, it was better than just a person by himself. So all those people finding new things and new ideas and hanging out at the lunch table discussing them, that was really important. That’s where most of the learning was going on, with the students all working together to figure out this world of robotics.

Q:

And who were some of the students?

Ambrose:

Well one of them was my wife. So I was very – I’m definitely married up. She did her Master’s in bio-medical engineering, and then came back to mechanical for her Ph.D. and started working in the same team, same supervisor professor. So she was about a year behind me. But there were a number of other students. And several of them now are in – work with me at NASA. A lot of them went into academia and manufacturing. We were pretty heavy in manufacturing, thinking about robots to do manufacturing, some very high tech manufacturing like semiconductor manufacturing and a little bit – and some of the folks went into military robotics.

Q:

And what kinds of projects were they generally also working on?

Ambrose:

So our group’s thrust was manipulator design, and manipulator dynamics, and manipulator control. We had basically nothing going on in mobile robotics. We were very focused on manipulation – all aspects of it: design of manipulators, dynamics, control. And then a secondary thrust in human robot interaction. That was primarily force feedback joysticks, which are essentially manipulators themselves – able to apply forces back on the human’s hand or arm. So we built a lot of force feedback joysticks and connected them to force feedback manipulators and built force-reflecting systems. And we had a number of consortiums where we worked with other universities, which was a great experience. University of Florida, Tennessee, and Michigan as part of a Department of Energy consortium. And then a little bit more locally we had a consortium with NASA, and I volunteered for it because I wanted to go work at NASA. So I volunteered for this project that had little to do with my thesis, but I just wanted to learn about NASA. And there we had UT Austin, UT Arlington, Texas A&M, and Rice. And we were operating robots across the Internet back before that was cool, and semi dangerous. So we were learning a lot about remote displays and common interfaces between robots. We built one back around ’89, ’90, so the robots from all the different labs could take the same commands and display the same data despite everything else about the robots being totally different. And you know that still persists today. There are just standard after standard after standard for robot developers. And many people would think I was a newcomer coming to that in 1990. But it’s a constant evolution of commonality. And we’d made our own attempts back then in the group we were a part of. And I got to operate a robot at NASA from my lab in Austin, and that was fantastic; it led to later aspects of my career.

Q:

What kind of robot was it?

Ambrose:

It was a Robotics Research K-1607. It was a very nice manipulator, a seven degree of freedom arm, way ahead of its time, built by the Robotics Research Corporation. Keith Kowalski and Paul Eissen did a great job designing that manipulator. And we operated it from Austin with a joystick that looked nothing like it. So our goal was to build what were called universal joysticks, in that they didn’t have to be mapped to look just like the machine they were controlling. They could be designed to work well with you, rather than have to mimic the machine. So we had force feedback joysticks that looked very different than the Robotics Research arm.

On Working at NASA

Q:

And how did you decide you wanted to go work at NASA even during grad school?

Ambrose:

Well I’d already decided I was going to work at NASA. So I was in Houston during the moon shot. And I decided summer of ’69 I was going to go work at NASA. So there was no fuzz on that. And there were a few speed bumps along the way. After I made that decision, I then entered kindergarten and discovered I had a bunch of education I had to <laughs> work on before I could go work at NASA. So it was kind of a harsh reality to a five year old. But I assumed I could go work there right away, but apparently they like you to get an education first. So, but yeah I’ve wanted to work at NASA for a while. I’m living the dream.

Q:

And so what was it like when your dream came true? Or how did you-?

Ambrose:

It’s still coming true every day. It’s coming true every day. I really like working at NASA. It’s a fantastic place. It’s I think the ultimate place to be an engineer. The challenges that we’re given are literally out of this world. There is no preconceived notion that we’ve done it this way we’ll always do it that way. The presumption is that the only way we could ever do it is if we come up with a new way to do it because it’s such a new problem. And there’s something about NASA being kind of for the betterment of all mankind. If you take a lot of our U.S. agencies’ acronyms and you bounce those alphabet soup acronyms off of people in various parts of the world, they won’t react very positively to our various agencies that we have. And you know that’s not good or bad, it’s just the truth. And, yet, if you ask anybody around the world, “What’s your perception of NASA?” It’s a very positive one. I mean you can’t fault NASA. It’s doing work for all of mankind. And when we go into space we always do it with other countries. And it’s a very international kind of endeavor. So it’s a great feeling to be a part of that. It’s very – it’s humbling to get to be a part of that story. And every generation at NASA gets to write a new chapter. And we’re entering some interesting new times at NASA. I’m hoping that our robotics engineers are going to be the ones writing those chapters.

Early Robotics Work at NASA

Q:

And so what was the first thing that you started working on when you got to NASA? What group were you in?

Ambrose:

So I went to work for a division at the Johnson Space Center that I now lead. And that division had a number of branches in it. I went to work for the technology branch, which was the branch that was kind of looking at the far out future stuff. And there was a gentleman named Charlie Price that I worked for. I can’t say enough about Charlie. He was very forward thinking, always had a gaze that was down range, always was thinking ahead. And as far as I know the only mistake he made was hiring me. He brought me in as a contract employee, and the first year there I went to over twenty places. He had a very limited travel budget, but because I was a contract employee, and he was always getting invitations to come to places like this conference we’re at today and couldn’t go, he’d send Rob. My first year I went to over twenty places, got to go to basically every robotics lab in the country. And it was a great kind of drinking-from-the-fire-hose experience. But I got to meet a lot of people and it really helped me build a network that I continue to use today. The first project I was on – well, no one gets to work on one thing. I had a project that I led, where I was designing a set of – imagine this – manipulator joints that you could put in a thermal vac chamber. I wanted to learn about the space environment. So I’d been thinking about how to build a manipulator in air, and now I wanted to think about what would be different in space. I didn’t want to just keep building fun robots where the only reason they would be NASA robots is they’d have a NASA sticker on them. I wanted to do what was unique to NASA, and I think that’s the environment. So I tackled thermal and vacuum first. And I started over on the design, thinking about the components that would be able to work in the space environment, built a bunch of parts and took them into a thermal vac chamber for some early experiments.
And that was a great experience for me, to get to see the way that a robot’s performance changed as a function of temperature. Pulling a vacuum didn’t have a big effect on it, per se. If you design the materials so they can take vacuum, when you pulled the vacuum they still worked. Now you had to have made those choices upfront, but temperature is a very big deal. And I set as my objective to build a robot that could work in what was called the EVA touch range. EVA is extra-vehicular activity. It’s space walking with an astronaut wearing a space suit. I wanted to be able to build a robot that could go anywhere a person in a space suit could go. So I researched what the thermal range was, and it was about minus 50 to positive 100 degrees Celsius. And if it’s colder or hotter than that we don’t like astronauts to touch it because they could get burned or frostbite. So I said okay, that’s the range for us. If everything’s got to be designed to be in that range to work with the astronauts, we’ll build a robot that could work in that range, too. And minus 50 is mighty chilly. And a hundred degrees is boiling hot. Most robots can’t take that. And what I immediately found, even though the robot worked, is that it worked so much differently at cold and hot temperatures that I had to change the software. I had to have the software adapt the robot to its new temperature. So that was a major breakthrough for me: to have the robot read its own temperature constantly, and constantly update itself so that it always had the same response. So if you asked it to go from zero to five degrees range of motion, the way it went there – the rise time, the settling time where it would overshoot slightly and then stop, the amount of time it took to stop – I wanted all that to be exactly the same no matter what temperature the robot was at. And at cold temperatures when you would give it a command it wouldn’t move at all because the lubricants were sludge.
So you had to have a different set of control gains. And I found what those were. And then at high temperature it was completely out of control, because at high temperature the lubricants were so loose that the friction the control system was counting on wasn’t there anymore; that had gone away. Now when it cooled down it was back. So what I wanted was a control system that would be invariant to the temperature. So no matter what, when you commanded the robot you’d get the same result. And you didn’t have to care about what temperature it was at, which is a very unusual idea. And you don’t see that even today. Normally people just warm the robot and try and keep it at a constant temperature because they’re afraid of all those changes. So that was a good project for me. I built those joints. I also was on a second project where there had been an enormous – I think probably the biggest robotics project ever had just been cancelled. It was a project called the Flight Telerobotic Servicer, FTS – maybe five hundred million dollars. And it had been cancelled just as I arrived at NASA. And so my branch chief sent me around the country. We called it the FTS technology capture effort. I would go around the country and find every component that had been developed at great expense for this incredible robot and find out what was good and bad about that component, so that we wouldn’t lose that information. We’d spent so much money coming up with that design that we wanted to take that into future designs. So I went to every vendor – every motor, every sensor, every bearing, the people that built the cable harness – and just interviewed them and looked at the designs. And about four or five years later, we got all of that equipment excessed to us. I got it all delivered to me. At that point I was further up in the team structure. So I actually was the one who took delivery of all that equipment. They had built one flight manipulator.
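The temperature-invariant behavior Ambrose describes is, in control terms, gain scheduling: tune the controller at a few discrete temperatures, have the joint read its own temperature, and interpolate the gains between the tuned points. A sketch of the idea (the gain values and table structure here are invented for illustration, not NASA's numbers):

```python
# Gain scheduling against temperature, as described above: gains tuned
# at a few discrete temperatures (cold, sludgy lubricant needs high
# gain; hot, loose lubricant loses the friction the controller counted
# on, so gain comes down). The joint reads its own temperature and
# interpolates, so the commanded response looks the same at -50 C and
# +100 C. All gain values are hypothetical.
import bisect

# (temperature_C, proportional_gain, derivative_gain), tuned per point
GAIN_TABLE = [
    (-50.0, 8.0, 1.2),
    (  0.0, 4.0, 0.8),
    ( 50.0, 2.5, 0.6),
    (100.0, 1.5, 0.5),
]

def gains_at(temp_c):
    """Linearly interpolate (kp, kd) from the schedule."""
    temps = [t for t, _, _ in GAIN_TABLE]
    if temp_c <= temps[0]:
        return GAIN_TABLE[0][1:]
    if temp_c >= temps[-1]:
        return GAIN_TABLE[-1][1:]
    i = bisect.bisect_right(temps, temp_c)
    (t0, kp0, kd0), (t1, kp1, kd1) = GAIN_TABLE[i - 1], GAIN_TABLE[i]
    f = (temp_c - t0) / (t1 - t0)
    return kp0 + f * (kp1 - kp0), kd0 + f * (kd1 - kd0)

def torque_command(error, error_rate, temp_c):
    """PD control law whose gains follow the temperature schedule."""
    kp, kd = gains_at(temp_c)
    return kp * error - kd * error_rate
```

The payoff is exactly what he states: the same commanded move produces the same rise time and settling behavior whether the lubricant is sludge at minus 50 or nearly frictionless at plus 100, because the gains compensate rather than the environment being held constant.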
And I’ve still got that in storage – protected and bonded storage – so that if I needed to go do something with it I could. But I ended up using a lot of the spare parts in my own experiments, because they were these incredible components, and did a whole series of additional experiments on thermal vacuum performance. We developed some custom lubricants that could work in the space environment, worked with a number of people that had been searching for these materials also, and we came up with some concoctions that we thought would do well in space. And all of that led into my thinking in the design of the Robonaut system that we started around ’96. We’d just come off this technology capture program, built our own joints and a bunch of other experiments. At that point we felt we were really ready to design our own robot. We’d invested in all sorts of hands; Charlie Price was really into robot hands and we were buying the best. We have an incredible museum of robot hands. It’s about the best in the world, and we continue to help any little company that’s got a new hand. I tend to buy it just to help them, because there are so few out there, and we add it to the collection, but we also learn from it. And part of NASA’s mission is to help get our economy moving, and so giving a contract to a little company has been an important part of NASA’s history. We didn’t invent the microchip, but we bought 90 percent of them in the mid-’60s, and that was really good for little startup companies like Texas Instruments; we were giving them these contracts, and if we can do that in robotics, it’d be great. So we’ve been buying a bunch of prototypes. We’d been doing experiments kind of piece by piece, studying the componentry, and by about ’96 we decided we were ready to build our own robot in-house, and that’s when we started the Robonaut project.

Q:

So why was the Flight Telerobotic Servicer canceled?

Ambrose:

It involved Congress and the White House and strategic objectives for the country. And it was obviously canceled before I joined NASA, so I don’t really remember all the details. When I got there it already had been canceled, so I was part of this effort to try and make sure we got the most out of it. But it was a big project. It had been designed to go to the space station, but at that point the space station was going through some redesign. Its assembly schedule was being moved to the right, and so they decided to put more money into the construction of the station itself than into robots and other things to go with it. At that point I think the space station was scheduled to be assembled starting around ’94-’95, and it ended up being much later in the decade than that, but they had to reprioritize, and it ended up on the chopping block.

Q:

And before we go to the Robonaut, I was also curious. You mentioned in your first year you ended up meeting a lot of people and going to a lot of different places. What were some of the people and places that you--

Ambrose:

Well, I met my personal hero, Red Whittaker, Carnegie Mellon. That was great. I had met him a couple times before, but I had never gone to CMU, so I went there for a couple meetings and you know Red’s incredible and to this day – I mean a lot of what we’ve got in our program at the Johnson Space Center is based on what I saw as the right way to do things at CMU and his students might disagree with that, but they’re wrong. The way he does it is the right way to do it and sometimes you just have to do a 100-day death march to be successful and he’s not afraid to do that from time to time. He also has a long-term vision for the technology and doesn’t get too lost in the moment. He’s seen the evolution of the profession and he knows where we are in that profession and what he can expect as improvements. I remember I was talking to him about students and today’s students coming into robotics are so much better than they were even a decade ago and he said yeah, it’s really becoming a true profession and that kind of hit me that robotics was at a turning point, going from kind of an oddly formed collection of anecdotal rules and guidelines to truly becoming a profession, and that was probably late ‘90s when he said that to me over an adult beverage and it’s true. Robotics really has matured into a profession. Who else? I met a lot of great robotics engineers at JPL. Guys like Paul Schenker and Samad Hayati, Brian Wilcox, just incredible robotics engineers and in particular for me, space robotics engineers. Rich Volpe, Paul Backes, Chuck Weisbin, some great thinkers in the field of robotics and space robotics. There’s a pretty good guy up at the Ames Research Center, Butler Hine, and then universities all around the country. Oussama Khatib at Stanford, Bob Cannon at Stanford, retired, Steve Rock at Stanford. A number of professors at MIT like Steve Dubowsky and Rod Brooks and Gill Pratt. I got to meet all those.
Marc Raibert, great robotics researchers that were very inspirational. You could see in them their own visions for where this robotics story was going and they taught me a lot.

The Robonaut System

Q:

And so how did all of this then feed into the Robonaut system and how did also people decide that that was the kind of platform; that they were going to build something more humanoid?

Ambrose:

Well, Robonaut started with an idea to build something called a work system and it was an idea that was formulated by a guy named Dave Lavery at NASA headquarters and Charlie Price. They outlined, in I think a brief white paper, three levels of a work system. The first was something that could fly around – again very focused on space station and working in a micro-gravity environment with people as opposed to like a Mars rover. So that team was very focused on supporting human spaceflight with robotics and they defined three levels of a work system. One was a free flyer that could just inspect. It could provide camera views. You never have a camera in the right place. You always want it somewhere else looking back at some angle that you just don’t get and as you’re working you always want a new angle and a new camera and a new place, so being able to have a flying eyeball was kind of a minimal functionality. It doesn’t have to do a lot of work, it just needs to help you see or perceive. Maybe not even cameras. Maybe some other kind of sensors. We built one of those and we flew it, called the AERCam, but the original name was this great name. It was EWS Number One, External Work System Number One. It’s kind of a class robot. AERCam’s a little nicer. EWS2 we didn’t build, but it was generally regarded as a mule that could move near you and keep stuff for you and you could use it as a mobile workbench or a mule that could just – a pack mule. It could carry stuff for you. So astronauts are pretty encumbered with what they carry. They’re having to use their hands to climb typically so they’re kind of out of hands for carrying stuff and they’re cognitively challenged climbing because they’re in a spacesuit and they’re in free – it’s kind of like rock climbing where the rock is falling in freefall and you have to climb in a vacuum so you’re wearing a suit, but other than that it’s pretty easy.
So since they’re challenged, they wanted something that could carry stuff for them so that they could focus on themselves and not have to carry equipment. We never built one of those, but it was a good class of machine and EWS3, External Work System Number Three was one that could actually do work, so it could work with the astronaut very likely in kind of a junior apprentice role or assistant and then when the astronaut would go in, it could keep working and maybe work at some level, not as good as an astronaut, but it’s persistent. It could stay out all day long and you could leave it behind and that was kind of the niche that we targeted Robonaut for. Now we didn’t call it Robonaut. It went through a bunch of renames, but after a couple of years by about ‘98 we called it Robonaut. It didn’t exist. There was none built at that point, but after a couple of years we had some of the key pieces together and I became the Robonaut project leader in August of ‘99 and at that point we still didn’t have a Robonaut, but we had a hand and we had an arm and we were running out of time. So it was kind of a critical moment. I’d been the arm subsystem lead and had built the manipulator down to about right here based on some of those experiments that I’d done early in my career and we really needed to put it together so I challenged the team to – let’s get this thing together. I knocked out a neck pretty quickly with a head with some cameras and we put the head subsystem and the arm subsystem and the hand subsystem together and I remember the first day we turned it on and we had a teleoperator. We decided we were going to start with teleoperation because we could and the teleoperator put on a VR helmet, the same helmet we were using to command another robot we’d built and to do VR training for the astronauts, gloves and a helmet, and we had a guy put that on. 
We turned the robot on and he reached out and he grabbed something just perfectly the first time and we said okay, we’ve got something here. Then reaching out and grabbing a tool and getting it in a really nice grasp where you could actually use it and it was strong and it could apply forces just instantly with zero training. That said to me, okay, we’ve got something here. Now I knew for it to really meet its potential it would have to be autonomous, but since that’ll take 50 years, in the meantime it’s – a person could step inside the machine and just be that good, I’ll take it. But clearly over the evolution of the Robonaut system we’ve gone more and more autonomous and with the Robonaut Two we started autonomous only and it’s a great teleoperator robot, but we’ve hardly done any teleoperation with it. But when we reached out and we grabbed a tool for the first time on like the first try, we knew we really had something that was pretty capable and that really summed up pretty much every experiment we ever did and continued to do with Robonauts, especially with teleoperators. Almost every video you’ve ever seen of a Robonaut One was the first or second take and that was it. We didn’t need to do a whole lot. We actually found if we kept doing it, it got worse. So it was actually better to just do a first take – we’d try something, videotape it, and it would work perfectly and we would say okay, done. Let’s try the next thing. And there’s something about the way we describe teleoperating Robonauts – it’s stepping into the robot – and when you put the helmet on and you move your head around and the view moves so well, that’s probably 90 percent of convincing you you’re inside the machine and then when you look down at your hands and you wiggle your fingers and you see them, it just looks like you’re wearing astronaut gloves, right? And you do this and you see your hands, you’re there.
We’ve had some really funny experiences where a person is – they may be in another room or down at the other end of the same room and they’re teleoperating the robot and holding like a big chunk of metal or something and they drop it. The teleoperator way over there moves his foot out of the way because the operator doesn’t want it to hurt their toe, right? But then the teleoperator looks down and realizes well, Robonaut doesn’t have toes and actually I’m not in this thing, am I? But it was really telling the way they felt they were there. All their reflexes were there. We played around with audio and being able to talk to your friend in this robot is just amazing and I did a bunch of experiments where I would work side-by-side with the robot being teleoperated and it was just I was standing next to Fred and we were just talking about what we were going to do and I’d say here, hold this, and we would try something for the first time building some assembly together and it was just like I was talking to a colleague and we were putting something together for the first time and it was just that easy and I could point at things and say here you put your left hand here. I’m going to put the tray of parts on your right and he’d look over on his right and see it and we would just start building things together and that human-robot team was clearly very powerful.

Q:

So was it like 2001 did you say when you first started where you did that first experiment?

Ambrose:

The first time we reached out and we grabbed something was September ‘99 and we went from nothing to having that ability in about two months. So we had a neck with a head on it, some cameras, one manipulator with the Robonaut, one hand on the end of it, and a telepresence control system off to the side and that came together. Now those pieces existed in July and by end of September we had put them together for the first integrated test and that was ‘99. In 2000, we put another arm on, a lefty, and we took the system and went upright with an upright torso with a waist joint that could move the torso and then in 2001 we repackaged the torso with a better computer system and some new software and went into skin design. Started looking at tactile sensors and some overlays for improving the vision system. In 2002 we built a second Robonaut unit. So from the very beginning I called the first one we built Robonaut 1A1, R1A1, and everybody made fun of me that, you know, this is just Robonaut. Why are you calling it R1A1? But then we built Robonaut 1B1, R1B1, and then they started to get the picture that we were going to be building these for a while. R1B1 was very similar to the first, but all the electronics that were in a rack behind the robot were integrated in the torso and it was the first time we had built a lower body and we built a single leg for climbing in zero gravity and we did a number of experiments on a flat floor with hovering, climbing, stabilizing with the leg while working with the hands, being able to move the body on the leg. We later – because the torso was self-contained, we could do that and we later started swapping out other lower bodies. Got a great project with DARPA about that time and they gave us – Doug Gage – actually Mark Swinson was our first DARPA program manager. He believed in us. If he hadn’t given us our first contract, we would’ve been absolutely dead.
He believed in us and gave us our first real grant and then Doug Gage came in and teamed us up with a great group of university researchers thinking about autonomous control and manipulation. And so there I got working with Rod Grupen, University of Massachusetts and Alan Peters at Vanderbilt and Maja Mataric, USC, and Cynthia Breazeal at MIT and their students. Great team. Doug Gage gave us a Segway and we put R1B1 on the Segway which was terrifying having a multibillion-dollar robot balancing. Wow. But we really learned a lot about mobile manipulation, bringing mobility together with manipulation and we learned how that design was really bad; that the torso on a Segway was not the right answer. It could not reach the ground, so if it dropped something it had no ability to recover it; even with Robonaut’s arms being pretty long, it still could not reach the ground. So at that point I realized that just bolting manipulators on rovers was not the right thing to do. It’s easy to do, but it’s clunky. It’s just that you’ve got this part and you’ve got that part. They weren’t designed to go together and if the mobility system had ever seen the manipulator when it was being designed, it would’ve been designed differently and if the manipulator had ever been designed thinking about mobility, it would’ve been different. So you hook them together because you can, but then you realize oh, arm can’t reach the ground and the arm can’t reach – you know there’s that spot in the middle of your back you can’t quite reach? Most mobile manipulation systems, that spot is huge. That’s a majority of the robot that it can’t reach and that’s really wrong. Mobile manipulation should be able to scratch an itch wherever it’s got it or get a stick. The last thing we did with the Robonaut One series was we built a custom rover and we solved this problem. We went after a number of things. We built a mid-waist joint that would allow the upper torso to bend all the way down, get close to the ground.
It had a spine rotation where it could rotate around and we put a little workbench on the tail end of the rover chassis where it could set things down so it didn’t have to always carry it. It could pick things up and set things down. It could have tools that it could use on its own little workbench and I was just looking at some robots downstairs and in the exhibit hall that had a little tray where it could set things on itself. What a practical idea. We did that in 2006. Had a nice size workbench that Robonaut could turn around to, so the torso had an additional mid body so that that tray was not always in the way. It could turn around and it could work, but when it needed to get to the tray, it would reconfigure the torso and turn around and so I started thinking a lot about waists. I think the waist is the key in robot design to bring mobility and manipulation together. So I started studying primates and my wife and I wrote a paper on the primate waist for – it was I think the first issue of the “International Journal of Humanoid Robotics” and we looked at primates and small primates have an incredibly athletic waist. Like lemurs and monkeys, so we defined a waist as a gap between the pelvis and the ribs, with musculature to flex it. It seems simple, right? All primates have that, right? Well it turns out not. If you look at the great apes, there’s only one great ape that’s got a waist and that’s us. If you look at bonobo chimps and orangutans and gorillas, the ribs touch the pelvis. There is no gap. There is no range of motion. There is no musculature to flex. Their body is just a rigid puck that’s got incredible limbs coming off of it, but their torso is just very rigid. They don’t spin. If you look at their skeletons, it’s very different from us, and so if you compare the skeletons: little monkeys with a waist, great apes with no waist, and then humans with a waist again. What happened?
It’s interesting, but if you look at the way great apes move and manipulate, they tend to do one or the other. They’re using their arms to manipulate, swinging, or knuckle-walking, but they don’t tend to do a lot of sophisticated manipulation on their own. They’re super strong. Their arms are very long. So one of the things I came to was that if you have a waist, your arms don’t have to be as long. You can have a small zone called a dexterous workspace and if you have a waist joint you can just move that zone around. If your shoulders are fixed and you can’t move them, you discover how short your arms are very quickly. So the ability to move a sweet spot of manipulation around was important and I’ve got some – in that paper we made a few fairly naïve projections about why we might have a waist and I think the clear evolutionary endpoint is the roller desk chair. That’s really a great augmentation. A very wide range of motion for swiveling, X/Y motion. It allows a person in a work cell to have a smaller set of arms that can get anywhere and just be very – it’s a great combination of mobility and manipulation and I think that was a lot about it. People tend to sit and manipulate for hours and so if you were going to build something and you would collect raw materials next to you, you wouldn’t want the raw materials right in your way. You’d put them over here and you might have a fire there, tools over here, and being able to move around that workspace while stationary, essentially, it’s great to have a waist. If you look at chimps, they tend to move, sit, manipulate, eat, get up, move, sit, manipulate, eat. They don’t stay in one spot working like we do. We can hang out in one spot and get a lot of work done very quickly and then we can throw while running and that’s a really powerful thing to be able to throw on the run. You don’t see that in other great apes.
Being able to control your upper body, stabilize it and orient it and do something athletic with your upper torso while you’re running is pretty powerful.

Q:

That’s interesting. I just watched “Planet of the Apes” and I think in that movie they’re throwing while they’re running.

Ambrose:

That’s a scary thing, isn’t it? <laughter>

Q:

Yeah, <inaudible>

Ambrose:

I think they were evolved in that movie.

Q:

They probably <inaudible> that way. You know they grew a waist.

Ambrose:

They got a waist, and since a lot of those were done using human actors, it's hard to get rid of a waist in an actor. So that was an interesting design revelation and we exploited that in the Robonaut One design.

Human Parameters in the Design of the Robonaut

Q:

Actually what I was going to ask is about you mentioned that even at the very beginning, the Robonaut was really intuitive. How did you actually manage to design it? I mean what was the design process or how did you go about it to make it so intuitive from kind of the very beginning?

Ambrose:

Well, I would first say we got lucky, but the design objective was not to make a machine just like a person. It was to make a machine that could use all the same equipment that was designed for humans. So we had to fit through all the same access corridors that were designed for humans in space. So it was – we accepted the human as the de facto design standard for performance and function on the jobs that the robot would do. It didn’t have to do them in the same way that a person did them, but it was going to have to do the same tasks and if we could do it faster or with more endurance, that would be great, but we had to use the same tools and fit through the same passageways that were designed for the astronauts. So when we – that drove the hand design. The Robonaut hand has more fingers than the minimum that you need just to hold an object and a lot of people, their first thought is well let’s simplify this by only having three fingers and I’m all for other people designing robots that only have three fingers, but we’re going to go for five and with five you can handle tools that are designed for humans where you have to stabilize the tool and articulate buttons or switches or triggers or latches on the object and a three-fingered robot has to control things with two hands. So imagine you’re holding an object that’s got a thumb latch that you’ve got to flip, but you can’t do it. You’ve got to reach over with the other arm and you’ve got to control the latch. It’s a little clunky. So being able to operate all these tools that were built for astronauts wearing gloves, we had to be able to do it. So the ring and pinky are mainly just for stabilization of the object. We had also decided we were going to go after a hand so it can interface with all those same tools and the hand has a feature that very few robot hands have. If you look at any of the hands that are in our museum or in the exhibit hall, they don’t have a palm.
The only ones that do are the anthropomorphic hands, right? The palm is actually one of the most important features of a hand and most robot hands don’t have palms. Just the whole algorithm for using the hand becomes different where if you’ve got a palm you use the arm to put the palm on an object and then you grasp it and that is such a simple and effective way to grasp an object rather than to reach out and try and make a pincer grasp on it. That’s much more delicate, slower, it’s less robust. The palm is typically the first point of contact, or you even envelop the object with it. So the grasp is an enveloping grasp that’s very strong and rugged. That’s such a useful grasp and most robots can’t even begin to do it. It’s just like the most basic thing and an infant has a grasp reflex, a curl reflex: you touch their palm and they do an enveloping curl. That is really simple and powerful and most robots can’t do it. So we wanted to be able to just put the hand on an object and have it grasp it. So that was one of the first autonomies we built for Robonaut. The grasping was hard to control. So much of what happens when we as humans grasp things is done where we can’t see what we’re doing. That our – we tend to go overhand on objects and one of our astronauts noticed – Dr. Nancy Currie noticed that teleoperators of Robonaut tended to go underhand to grab objects so they could see what they were doing and when she said that, it was just obvious to me how we were doing it all wrong. Humans reach overhand and they grab things, but at some point they transition from vision to tactile. We were trying to do it all with vision with the teleoperator. So we could do it, but it was taxing the operator. They were seeing forces, which is a computationally intensive way to get force data, but humans are great at it. I was – well, we had novice teleoperators able to do all kinds of things with Robonauts because humans are just crafty.
They would see things flexing and they would know that there must be a force there and they would see contact because something would bend or flex and they could feel it kind of in an intuitive way, but it was clear we needed to get into tactile force control and where we went was we turned that over to the robot: all force management was up to the robot and the teleoperator just thought they were really good; they never broke the arm, they never crushed the object, and they just kind of wiggled like this and it would just get it and that’s because we had the robot protecting itself from the teleoperator, not doing exactly what it was told, and contextually deciding how hard to grab objects based on what the vision system saw. If it saw a delicate object, it would grab it differently than something like a cue ball that it could possibly crush. So that was our first major break in autonomy, starting by turning over all forces to the machine and letting it handle the forces and then once you do that, you can automate the higher level actions because the robot’s going to be safe. It’s going to not hurt itself and not hurt the environment. So getting to that was a major breakthrough because then we could start automating everything.

The Human Robotics Systems Project

Q:

I’m imagining in your office you have all these skeletons of apes and things like that. So how do you educate yourself on some of these? Even just figuring out that the palm is important and you mentioned a doctor who is … Nancy or Dr. Nancy, but she noticed <inaudible> sounded like you have the whole team there, people looking at, biologists and so on.

Ambrose:

Sure, we've got a great group of people at NASA, and I've basically surrounded myself with people that are a lot smarter than I am and they make observations like that all the time and you never know where it might lead but it's that same kind of team approach to figuring out a domain. Everybody comes in with a little different perspective. Nancy has flown in space four times. She's a robot operator <inaudible>. She trains other astronauts how to use robots in space. She has a Ph.D. in Human Factors and is really into that intersection between humans and robots working together, and it turns out that is our niche at the Johnson Space Center. Well, okay, space, but no one can do everything in robotics; it's a big enough profession now. Our niche is space robotics obviously, but my buddies at JPL build robots that go to planets without people, and we help them a little bit. And at JSC, Johnson Space Center, we build robots that work with people in space with some help from JPL and other centers too, because there's a lot of commonality, of course, but really our niche is building robots to help humans in space. So for the last six years I've been leading a project called the Human Robotics Systems Project, and it funds work at the Johnson Space Center plus JPL, Ames, Glenn, Goddard, Langley, and Kennedy, so we've got seven centers all around the country part of this team, and we're really focused on building machines that can help people work in space. And when we turned to the moon in about 2005, 2006, I was told to stop horsing around with Robonaut, we need to build a rover. So I said, alright, we're going to get into mobility, and I focused the team on designing a rover for people, which is a different kind of a problem. The Apollo Lunar Rover was an incredible machine and it was very different than Sojourner or Spirit or Opportunity.
If you just look at the design, everything about it, it's almost exactly the same mass as Spirit, but totally different performance spec and not at all smart because you'd load this cognitive subsystem on board called the human, but boy it was fast and pretty torque-y and could carry a lot of payload, carry two astronauts and unfolded pretty remarkably off of the side of the lunar lander. But when we decided we were going to build kind of the next generation of lunar rover for astronauts I put that picture up on the screen and said, this isn't it; we're going to challenge every aspect of this design. We're not just going to go build a modern version of an Apollo Rover, we're not just going to swap the aluminum out with carbon and say okay this is the update. So much has changed, let's rethink this. Plus the mission's different. The mission we were given was to stay on the moon for many weeks, and the design spec for that rover just was not what we wanted. They were never allowed to drive that rover further than they could walk back to the lander. They just fundamentally didn't trust it; if it broke down they would die, there was no hope, they would run out of oxygen before they would get back, so they just never drove it very far. They drove kind of spirals around the lander, and in the last mission they allowed it to get up to 10 kilometers away because they said that was pretty much the furthest they could run before they ran out of oxygen, and they started out there and they worked their way back. So it was all about getting the crew back to the lander so they could return to Earth. So I said, okay, that's the job. We're going to design a rover that no matter what will get the humans back to the lander, absolutely count on it, and that way we'll trust that we can drive further than we can walk. And that's what we did. We built an incredible machine.
It's a rover called Chariot that we built in 2007, and it is the ultimate off-road vehicle and the people that told me that were the people that built the Hummer 1, and the people who built the Hummer 2 and 3. When they test drove it they said, you know, this is the ultimate off-road machine, and it is. It's designed to carry two astronauts and live in it for a couple weeks, maybe even four weeks. We've only tested it up to two, and it can basically drive anywhere, it's just an incredible beast. Twelve wheel drive on six legs, active suspension, it can turn on a dime, it can drive sideways any direction, it's got a pressurized cabin so you drive it in short sleeves; you don't have to be in a suit the whole time. And we built two of them now, Chariot 1A and 1B, so you know where I'm going with that. We had crew in it for 14 days, two astronauts in each of two of them, and last year we emulated a long mission on the moon, driving in the volcanic region of Northern Arizona, and we each did about 200 kilometers driving together and we tested them by having a science team that only had overhead imagery, but they didn't know what had happened in that valley and so the science team had to figure out what was the sequence of events that occurred geologically in the valley. So they used the astronauts and the robots to go explore and there was a ground science team that had done their Ph.D. on that valley and knew everything about that valley and they kind of graded the remote science team and so we wanted to see how effective they were at exploring and using this rover, pair of rovers, totally redundant <inaudible>. It's got redundant drive in each leg, redundant steering, redundant suspension and it's designed so it can do everything it needs to on five out of six legs.
If one leg completely breaks, it can lift that paw and keep driving so it doesn't drag the leg, and then if absolutely everything goes wrong we have two of them and we showed that in an emergency, all four could climb into one of the rovers and spend the night driving back to the lander. So that's the way to trust it, you build a completely redundant, very capable machine, put everything into making it able to survive failures, and then you build another one, too, and that's what you send to the moon. And the idea was they would stay on the moon and the crew would use them. They would land, the rovers would drive up like a valet, they would use them for a couple weeks, maybe a month, and then they would go home to Earth and the rovers would drive to a new valley and wait for the next crew to come and then explore that valley. We called it leapfrog exploration, where it might take six months or a year to drive robotically to the next valley and then wait for the next set of explorers to come and explore that valley. Those rovers are incredible and we're now working on the next generation that's going to be even better and I joke with the guys, if you look at the rover designs, this is what happens if you ask somebody who designs manipulators to design rovers: it's got legs and active suspension and the body can control where it needs to go, and along the way we brought in some ideas from JPL, Brian Wilcox, totally new ideas about putting wheels on the ends of legs as opposed to just bolting on some wheels onto a box and calling it a rover; just really new thinking about how to design rovers. And along the way we discovered that if you did that it would change the way manipulators could be mounted on the rover. And it would change the way the waist could be built. If the lower body can bend down, then the waist can be even better or different.
So when we started the Robonaut 2 project, we knew we were going to build a new rover for that and we did, we built the Centaur 2, which has incredible articulations as a rover, and when you mount a Robonaut on it as its upper body, it can just do all sorts of things. We reprised our design with a little workbench on the back and the Robonaut 2 is so much more capable than the Robonaut 1 that it could just do all sorts of things. And the Centaur 2 rovers are much more – than our first Centaur, we just tested that combination of Robonaut and Centaur out in Arizona a month ago and we commanded it from Houston going out and picking up rock samples across time delay and very autonomous where you just designate a rock and say go get me that rock, and since it's a government robot, after it got the rock, we told it, wrong rock, but it didn't mind. Robots don't complain.

Q:

Yeah, we saw the rover part because we were at ISR, but the <inaudible> hadn't <inaudible>.

Ambrose:

You're right, yeah, it hadn't gotten there yet. So you saw the Chariot rovers?

Q:

Uh huh.

Ambrose:

And the Centaur 2 with the digger on it, the digging blade. Yeah, that's a very capable design. What the hands can't handle, the bulldozer usually finishes.

Q:

Yeah, we actually have a video of it so we put it with your…

Ambrose:

Oh very good.

Q:

I ventured out there and got some footage of it.

Ambrose:

I'm sorry I didn't recognize you.

Q:

No, no, no, don't worry, there were so many people there.

Ambrose:

There were a lot of people on that bus, yeah. That was a big bus. Well good. Well, the Robonaut 2 project's been great. We formed this team with GM that was just fantastic, they were a great partner, and we built that team based on an aligned vision. They wanted robots that could safely work next to people and handle all the same things that were designed for people. So cars are designed to be put together by people, right, and spacecraft are designed the same way. Human spacecraft are designed so everything can be serviced by people; it was all put together by people originally anyway, but it's all designed so humans can fix things and service it. So where we were kind of accepting that as the de facto standard for a robot, GM is looking at that same idea. We're not trying to get rid of humans. A robot that's designed to safely work around humans kind of assumes that there are going to be humans. If you're going to design a robot that would do it by itself, you design a totally different kind of robot; it could be sharp and fast and lethal, you wouldn't go near it. You put it in a cage, you never let people in as it's doing its job, but if you're designing a robot that's going to work shoulder to shoulder with people, it's got to be a totally different kind of machine. It's got to move at a pace that's not going to surprise the people. It's got to be smooth and soft and be able to fit in the same places that people fit in and handle all the same objects that people handle. It's a lot harder. If you could redesign all the world to be handled by a simple robot, go for it, because that's definitely a better option as far as the cost of the robot, but redesigning the world is kind of expensive. So we decided, okay, the world's designed for people, let's just accept that rather than fight it, and step up to the challenge of building a robot that can work in a human's world. That's what we've done with the Robonaut system; well, we're on our way.
We've been working on Robonauts for 15 years, and I think it's going to take about 50 to get it right. So by the end of that we'll probably be pushing Robonaut 10, and we will have made just incredible progress. I can't even imagine what the next 35 years will be like, but it's going to be great. Even the jump from Robonaut 1 to Robonaut 2 is just huge; this robot is so much more capable, more reliable, faster, stronger, better, and I expect the jump from R2 to R3 to be equally impressive. We've got a lot of pent-up ideas now about what it might be, and the team's eager, I think, to get going on the R3. We're not done with the R2s yet, we're still learning.

Q:

What are some of both the technological and the conceptual changes that made that jump possible?

Ambrose:

Well, just all sorts of ideas. When we built the R1 it was a long time ago, it was '96, '97 when we were designing it, and I like to design robots to implement an algorithm. So if you understand the control algorithm, then you can make choices about how fast it should be, what kind of sensors are needed, where the sensors should be, what kinds of motions are appropriate, and you can optimize. But you have to understand the algorithm that you're trying to implement, and we found some new algorithms. So the algorithms for autonomy are very different than those for telepresence control, and that drives you to very different kinds of sensors, and you need more sensors, and different kinds of sensors. Sensors that might be hard to display to a human, but that are ideal for the robot. It's really hard to show force, but the robot feels it much better than a human could ever understand; so all kinds of force sensing, very important. There was an algorithm, an idea more than an algorithm, that Gill Pratt had at MIT called series elastic actuation, and it was, I think, one of the most important breakthroughs in robotics. He was putting elasticity into the drive trains rather than sensing strain gauge signals, very micro strain, where metal is stressed and it strains very minutely and a strain gauge picks that up. The problem is a strain gauge picks up everything. They're much better temperature sensors than they are strain sensors, and they'll pick up all kinds of electromagnetic noise, and they age poorly and they're fragile. So what Gill found was, with macro strain, where we added a lot of elasticity in the drive, you could measure that deflection with position sensors that were seeing a lot of strain, a lot of wind-up, and it was easier to do without as much noise in the signal. Plus, when the robot's even just turned off, minding its own business, and you bump into it, it's soft. So a great new idea, and it had a huge impact on the design of Robonaut 2.
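The series elastic idea described above can be sketched in a few lines. This is an illustrative sketch only, not NASA or Robonaut flight code; the function names, gain, and spring constant are all hypothetical. The core point is that joint torque falls out of a simple position measurement across a deliberately compliant element.

```python
def series_elastic_torque(theta_motor, theta_joint, k_spring):
    """Torque across the elastic element: tau = k * deflection.

    Two ordinary position sensors (motor side and joint side) replace a
    fragile strain gauge; their difference is the spring's "wind-up".
    """
    deflection = theta_motor - theta_joint  # radians of wind-up
    return k_spring * deflection            # newton-meters

def force_control_step(tau_desired, theta_motor, theta_joint,
                       k_spring, gain=0.1):
    """One step of a minimal torque-tracking loop built on that signal:
    move the motor to wind the spring toward the desired torque."""
    tau_measured = series_elastic_torque(theta_motor, theta_joint, k_spring)
    error = tau_desired - tau_measured
    return theta_motor + gain * error / k_spring  # new motor setpoint

# Example: a 300 N·m/rad spring wound up by 0.05 rad carries 15 N·m.
print(series_elastic_torque(0.05, 0.0, 300.0))  # 15.0
```

The side benefit mentioned in the interview falls out of the same design: even unpowered, the mechanism is soft, because the compliance is physical, not simulated by the controller.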
So I looked at all the implementations of series elastic, and most of the implementations involved linear springs and cables and pulleys, like at the MIT Leg Lab, but I was looking for something more inline and direct, and so we came up with a number of very novel series elastic embodiments for the Robonaut 2 project, and we're over 40 or 50 patents now coming out of the project. So I can't go into all the details, but we came up with a series elastic design that was basically one part that could be made fairly inexpensively, and it just kind of simplified the whole packaging down. With the Robonaut 1s we had already started to go to a very svelte design of the manipulator. I didn't want things sticking out: sharp edges, pinch points. Cylinders are nice, but when they start sticking out at orthogonal angles they form sharp edges. I didn't want to hurt people, and I also liked the idea of skin coverings that are, again, soft, and got a patent on skin design. There are a lot of other benefits to skins on robots, for cleanliness and EMI, Electromagnetic Interference control, but safety's a really good one. Skins can protect the robot from a dirty environment and vice versa. So we started covering our robots in soft fabrics, space materials, Teflon, Kevlar, Vectran, and we've continued that with the Robonaut 2, but with a soft core in the drive trains you don't need as much compliance in the skin because it's already soft. But again, if you look at the design of the Robonauts, they're smooth. There are no sharp edges, protrusions or corners, and if you look at, like, a welding robot, it's lethal; it's got 90 degree angles, sharp, and steel bolts sticking out of it, and if it hit you with that bolt it might as well be a knife blade. We don't want any of that. Everything's got to be smooth and svelte for working around people, and we've really pushed that in the Robonaut design; it's extremely smooth and compliant now also, and I think that's essential for working around people.
You just don't want any sharps or pinch points, and that goes to the NASA design philosophy: before space hardware is certified, we go over it with a swatch of cloth and look for any burrs or sharp edges that could cut an astronaut's space suit and kill him. We don't want that, so everything's got to be smooth, and I think that's really important for the future of robots working around people: they just need to be soft and smooth and easy to bump into and not cause a problem. It also helps when the robot bumps into things that are not people. When it bumps into itself, when the robot's elbow touches its torso, it's usually a disaster. It's got a scrape on its elbow, it's got a scrape on its torso, and it probably ripped its shoulder apart because that's where all the torque was, so it's got typically three injuries, just from bumping its torso, and humans do this all the time. It's actually comfortable, because it's the end of the stroke of your arm; it's perfectly reasonable for your arm to touch your torso, it shouldn't be a disaster, and yet it is in most robots. The robots have their elbows out like this because they don't want to touch anything, and that's horrible. If an arm reaches in and bumps something, that should be no big deal, but if it's got a bunch of wires or sharp edges or snag points, keep it away from me, because it'll hurt us.

On the National Robotic Initiative

Q:

So with the NRI there's a lot of focus now on these kinds of co-robotic applications. So do you have a feeling for how, over time, this interest in more human-oriented systems has developed in the community, versus focusing on the ones that are more autonomous and have their own closed environment?

Ambrose:

Well, that's what we've always been focused on at the Johnson Space Center. Robots for working with people is our deal. So we're really happy the rest of the world now thinks it's a good idea too. I'm the NASA point of contact for the National Robotics Initiative, and I've been working with the Office of Science and Technology Policy to get it moving. I read some of the early <inaudible> papers on co-robotics. It was like I'd written them myself. They were wonderful. A lot of really solid thinking had gone into some of the ideas. I asked, let me write an extra section; it was like co-worker and co-protector and co-inhabitant, and I said, let me write one on co-explorer. So I wrote a few pages on that, and as I then started to talk to several of the other federal government agencies about the initiative, they could all kind of relate to it; they all had some aspect that was important to them. The NIH has great applications for robots working with people, obviously, and NASA, I've already gone over a lot of the ideas, but what I wanted was for each agency to kind of see itself in that theme and pick out the aspects of it that were important for that agency's mission, and it was actually pretty easy for everybody to do. And one of the things that was probably the core value that everybody had was safety around people: they all had some job to do, and that varied agency to agency, but let's do that job with the robots next to people without hurting the people. That cuts across every single agency's interest, so it's the easiest thing for us all to focus on, and it's really important right now, where robots have matured and we're now getting them capable enough that we actually might want them to be near us, because they could do jobs that are worth it; now please just don't hurt us while they're doing those jobs, and that's where we are now. There are a number of other aspects.
It would be nice for them to not annoy us, where we could get them to do a job quickly without having it take more time than just doing it ourselves, certainly. And then there are a number of other aspects of the human-robot combination that I think are probably true, and interest all the agencies, but don't hurt people is really very fundamental. We took some major steps in that when we got the Robonaut 2 certified for the space station. That is a very conservative community. We don't just let anything inside the space station with the astronauts. It's got to really be safe. So fortunately we made a lot of really good choices in the Robonaut 2's design, and we have three independent ways to feel a person bumping Robonaut or vice versa, and that's enough that it's trusted. So three independent sensing systems that can all sense forces of contact, each with its own cable harness going back to separate processing. When NASA goes through a design like that, it's just very paranoid: what could possibly go wrong? We'll assume three things go wrong, you know, and what do you get? We went through all of that and proved the design was safe enough that astronauts are allowed to be in the work space of this robot while it's running, and that's a major breakthrough. You see some of that going on, but it's dangerous. People today are in the work space of rovers and manipulators that concern me, because they don't have redundancy and they aren't particularly soft and smooth. A lot of the designs are just really wimpy, so that you know they can't hurt you much because they're not very strong. That's not the right answer either, because what we want is a machine strong enough to actually do work, but safe enough to do it right next to us. So just making the machine extremely incapable isn't the right answer. With Robonaut we typically handle 20 pound dumbbells at full extension and can do this forever, yet we trust it to do it right next to us.
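The redundancy reasoning described above can be illustrated with a tiny sketch. This is a hypothetical toy, not the Robonaut 2 safety system: the threshold, data layout, and function are invented for illustration. The idea it shows is the one stated in the interview: three independent contact-sensing channels, each with its own wiring and processing, so that sensor failures cannot mask a real bump into a person.

```python
CONTACT_THRESHOLD_N = 15.0  # hypothetical trip level in newtons

def safety_stop(channel_readings):
    """Decide whether to halt motion, given three independent channels.

    channel_readings: list of (force_newtons, healthy) tuples, one per
    channel. Any healthy channel feeling a contact force triggers a stop,
    and losing every channel also fails safe by stopping.
    """
    healthy = [force for force, ok in channel_readings if ok]
    if not healthy:
        return True  # all channels dead: fail safe, stop anyway
    return any(force > CONTACT_THRESHOLD_N for force in healthy)

# One channel has failed, another feels a 20 N bump: still stops.
print(safety_stop([(20.0, True), (0.0, False), (1.0, True)]))  # True
# No contact on any healthy channel: keeps running.
print(safety_stop([(2.0, True), (3.0, True), (1.0, True)]))    # False
```

The design choice matches the "assume three things go wrong" review style: the stop decision never depends on all channels agreeing, only on at least one surviving channel noticing contact.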
That's the challenge. Anybody can build an underpowered robot that can't really pick itself up, but building a machine strong enough to do work and be safe is the real challenge, and likewise with mobility. So the co-robot theme isn't just about humanoids at all. Robots come in so many different forms. Cars are going to become robots, and what would it take for you to trust driving on the same roads with robots? What would it take for you to trust being in the car when you're not driving? What would it take to put your kids in a car and have the car drive them to grandma's house, but you're not in it, and the kids aren't driving? That's tough. For me that last one's probably the hardest. What's it going to take to be able to have robots working with us, shoulder to shoulder, driving lane by lane, and trust that they will not do the wrong thing? That's where we are right now. We can build a car capable of driving at any speed we want to drive, but making it able to do it right next to us is the big challenge. So we call that one co-driving; co-pilots was already taken. But to be able to share the roads with these things, huge breakthroughs are needed, I think, to make that happen, but it's going to happen. You know it's going to happen. We are going to have mixed traffic: big trucks, little cars, bikes, pedestrians, and you're going to have robots driving in all of that; it's just a matter of who can figure out how to do it safely first.

Q:

In that case it's also not just a problem of the robot actually doing something wrong; it's the fact that people are somewhat unpredictable as well.

Ambrose:

Oh it will always be the people's fault. Always. I mean robots should never be blamed. You can quote me on that. It's always the people, because at a minimum we designed them and so it will always be the people's fault. Robots have a bad tendency to do exactly what we tell them to do and it's definitely the person that's the problem.

Q:

So one other thing I was going to ask you about quickly, or maybe I shouldn't because I've talked to some other NASA people about funding.

Ambrose:

Thank you for not asking.

On Collaborating with Teams

Q:

<Inaudible> going on two hours too, it looks like we're getting close. So I think you told us a little bit about how you see this stuff developing in the future, so we can get that from there, but are there any other aspects of the projects you worked on that you wanted to mention, or the teams or any of the people that you worked with?

Ambrose:

Well, I mentioned the General Motors Corporation, a great partner in the Robonaut 2, and I mentioned DARPA, another great partner, and a number of universities. I really don't think that there's any limit to our advances in robotics; we're only in the very beginnings of what robots will be able to do. Our optimism mainly comes from watching students build new machines. I do a lot of work with kids in robotics, and programs like Dean Kamen's FIRST robotics program are truly inspirational, and every time I watch kids build some new incredible robot it reinforces, in my mind, how the best robots are in our future, and they will not be built by me. They will be built by…

Q:

<Inaudible>.

Ambrose:

Maybe. They like building robots too.

Advice for Young People

Q:

So that actually kind of nicely brings us to our general last question which is basically if you had some advice for young people who were interested in robotics, what would it be?

Ambrose:

There's no excuse for them not to find a robotics team somewhere. When I was a kid that wasn't an option, but now there are so many out there that they really need to go find a team and get busy. It's relatively easy today to do that. The resources are there. Go find an engineer and volunteer. That's a good way to get a job. I basically volunteered to go work on a project that was funded by NASA because I wanted to go work at NASA. If there's a robotics team working on some problem nearby, go volunteer to help the engineers build it, and it's amazing what you can contribute just by having a good attitude and trying. You'll pick it up and learn to work with others on a team, and that's, I think, the best way to get started. You don't need to wait until you go to college. That's not going to get you where you want to go. Start early and volunteer to get involved. There are probably a lot of ongoing projects in your area; seek them out and volunteer.

Q:

Great, thank you so much.