Oral-History:Michael Sims

About Michael Sims

Michael Sims was born on June 3rd, 1949 in Memphis, Tennessee. He completed his undergraduate and graduate education at Rutgers University studying physics, mathematics, and computer science. In 1990, he received his Ph.D. in Mathematics/Computer Science/Artificial Intelligence/Machine Learning. From 1987 to 2013, he worked at NASA Ames Research Center, where he was a Principal Research Scientist. He also worked on the Pathfinder and MER missions at the Jet Propulsion Laboratory (JPL) from 1997 to 2015. In 2013, Sims joined Moon Express, where he served as Vice President for Software and Chief Robotics Officer until May 2015. In 2015, he became the Senior Research Scientist for the Mars Institute.

Sims' research interests focus on robotics for space exploration and expansion.

In this interview, Michael Sims discusses his career and work in space robotics, focusing on artificial intelligence and human-robot interaction. Describing his research and collaborations, he outlines the applications and challenges of projects such as TROV and MER. Additionally, he comments on the evolution of his research work and of robotics as a whole, and on its future potential and applications.

About the Interview

MICHAEL SIMS: An Interview Conducted by Selma Šabanovic with Peter Asaro, IEEE History Center, 9 June 2011.

Interview #752 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center at Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 USA or ieee-history@ieee.org. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Michael Sims, an oral history conducted in 2011 by Selma Šabanovic with Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

INTERVIEWEE: Michael Sims
INTERVIEWER: Selma Šabanovic with Peter Asaro
DATE: 9 June 2011
PLACE: Mountain View, CA

Early Life and Education

Q:

So, if we can start with just you saying your name and telling us where and when you were born.

Michael Sims:

My name is Michael Sims and I was born in Memphis, Tennessee. I was born on June 3rd, 1949.

Q:

Did you grow up there, and where did you go to school?

Michael Sims:

I grew up [there] till I was eleven in Arkansas and after that point we moved to New Jersey and I lived in New Jersey until – well, I did undergraduate and graduate school at Rutgers in New Jersey.

Q:

What did you study there?

Michael Sims:

Undergraduate, I was a physics major and then in graduate school I did a combination of, a lot of things along the way but my degree is in mathematics and physics, mathematics and computer science.

Q:

So, how did you get interested in physics and then computer science?

Michael Sims:

Physics is easy. I remember as a high school student looking and deciding, “What are you gonna do when you go to college? What are you gonna study?” And I read this book called Near Zero that I just loved, wonderful description of physics, how it works in low temperatures, and that became the stimulus for my going into physics. Still have a large interest in physics, but having done a little bit of graduate work in physics, I decided I wanted to know more mathematics. Having done a bit of mathematics, one thing led to another and I one day passed my qualifying exams and discovered that you’re supposed to do a thesis after that. So, I began working on a thesis, only at some point, late in the game, to realize I didn’t like doing mathematics in the way that it took to be a good mathematician, which was largely sitting in a room for a large number of hours by oneself. So, I managed to morph that into computer science and a marriage of computer science and mathematics.

Q:

And what was your thesis project on?

Michael Sims:

I did discovery in mathematics. So, it was a computer program that began with a corpus of knowledge of some particular subdomain. The domain I began with was surreal numbers, which is a superset of real numbers, and surreal numbers are a nice domain because it’s very succinct, easy to describe and it’s also a powerful domain to work in. So, it was a nice place to begin and beginning with that corpus of knowledge and sets of rules of how you do exploration of a domain, how you check examples, how you postulate theorems, how you prove theorems, how you postulate new concepts to make things look simpler, and then how you rate all of that, whether this is an important concept, important theorem or not important, and so my thesis work was the evolution of that in a system that I called IL.

Q:

And who were you working with?

Michael Sims:

I worked with Saul Amarel, who was at Rutgers.

Q:

How do you spell that?

Michael Sims:

A-M-A-R-E-L. Saul was one of the early AI researchers in the world and founded the Computer Science Department at Rutgers. I also worked with Tom Mitchell, who was on my committee.

Q:

And were there any other students there or other people that you worked with closely at the time or even, not necessarily formally worked but informally had kind of formative conversations with?

Michael Sims:

Yeah, there were a lot. The main one that comes to mind is Rich Keller. So Rich, after we graduated, Rich and I were in a group together and Rich came to Stanford for a few years and then came to NASA Ames after that. So, Rich and I still stay in touch. Patty Riddle, Patricia Riddle, moved to New Zealand but she was also a colleague in that group.

Q:

And did you continue your interest in artificial intelligence after graduate school?

Michael Sims:

Yeah, I did.

Q:

What did you do?

Michael Sims:

So, artificial intelligence in some ways was a partial substitute for robotics. I like both actually, but it was a way to bridge from mathematics toward robotics, the kinds of robotics I wanted to do, which was how do you control robots, how do you get them to do smart things? So, late in graduate school I was a faculty fellow at NASA Langley in robotics – design of real-time operating systems to use for robotic arms on a space station or something, and I wrote some code and some hardware to stop the arm prior to collisions, so detectors that would stop it so that it wouldn’t smash into the Space Station or something like that. And then while I was just finishing up my degree work Peter Friedland came to NASA Ames to form the artificial intelligence group and Peter recruited me and so I was effectively Peter’s first artificial intelligence hire, maybe his second depending on how you count his hires.

Q:

So, just to jump back, what years were you at Langley and what kind of robotics work was going on then, at the time?

Michael Sims:

So, Langley did a bunch of robotics on space station types of robotics at that time and I don’t remember exactly, probably ‘83, ‘84, something like that.

Q:

And was that the first time that you really came in contact with robotics or did you have an interest prior?

Michael Sims:

So, I had an interest but I hadn’t actually done any robotics prior to that.

Q:

What got you interested?

Michael Sims:

Partially, it was two things. Well, one is I had seen a thesis presentation by Terry Winograd where he talked about his work, which represented moving blocks around in a structure and then how did you interact with that with a language, and how did you do the planning to do that movement. So, I liked Terry’s work, that was nice, and it sort of excited me about the overall domain. There are two other parts to it; one other part is science fiction. So, I’d been a longtime fan of science fiction, all the way back to Asimov’s Foundation, and even much earlier than that I read material, and so it’s just exciting, right? From the point of view of NASA and what we’re doing here though, the thing about robotics is that it enables you to do missions cheaper and to allow humans to actually travel much cheaper. So, that’s really been the focus of my work, is how do you use robotics as a tool for allowing exploration to take place more powerfully?

Robotic Exploration Systems

Q:

How does it actually make missions cheaper and allow humans to travel more cheaply?

Michael Sims:

There are many answers to that but one that comes to mind initially is that if you go to Mars, you go to a planetary surface, you have an option of taking everything with you, as we did to the Moon. You take your entire package with you and drop it on the surface and then you leave most of it there and come back. You also have another option, which is you could pre-package and pre-deliver and pre-assemble and pre-deploy those components on the surface. So, we could take a habitat and put it on the surface of Mars before humans ever leave, and in that case you would have it checked out, you’d have redundancy in the life support systems. But that’s typical of the kind of use that you can use robotically. It’s to pre-set things up so that you make the whole, the burden of the humans actually going easier. It also can path-find traverses, you know, you go out and look, is there some place they even want humans to be, and so we began a bunch of field campaigns which looked at the interaction between human beings and robots. We did that starting twenty years ago or so, and what do you do, what does a human do, what does the robot do, how much of the pre-examination of the terrain and traverse does the robot do, do you find this interesting science before the humans begin?

Q:

And so, what were some of the places where you did this work and what were some of the kinds of interactions between the humans and the robots that you were looking at?

Michael Sims:

So, we did it all over the world. We had a lot of field tests. We did them in Russia, Kamchatka, we did them in Hawaii, we’ve done them many places in the United States. The basic interactions, a great deal of it is just understanding how to control the robots, how to have human beings interacting with the robots. So, my background in general was in machine learning, that’s sort of technically the arena I’m in, and I was interested in the idea of, my initial thought was let’s just take a bunch of techniques from machine learning and apply those. But I began with a premise about automation and autonomy, or definition I guess, and that definition is that a system is more autonomous if a human does more of what they want to do and less of what they don’t want to do.

So, given that definition, I looked around at what was the most autonomous thing we could do to these robotic systems and I think the answer that I came to, and I think it’s still the accurate answer, is that the best way to add autonomy to robotic systems was add virtual reality interaction systems. Because what that allowed the human beings to do is actually see the terrain, which the scientists very much want to do, and the rover operators will want to do. So, see the terrain in high fidelity, see it in all kinds of angles, all kinds of circumstances, see it with as minimal data bandwidth between the location that you’re at and where you are, and to allow humans to play the game of “How do you interact with that, how do you respond?” And so it’s a huge lever, that was a really big lever on our exploration part.

Q:

And what were some of the kinds of systems you developed with that or –?

Michael Sims:

So, I’ve worked my whole career at NASA Ames and we have developed a number of robotic systems. I think you’re gonna talk to Terry Fong who followed my, had my job after me, and see some of those systems that we actually built. But for the most part, this is a software world, right, so we actually aren’t trying to pioneer hardware systems, we’re trying to pioneer the software that fits around that and interacts with that. So, in doing that we’ve interacted with many teams. We’ve had a long rich interaction with Carnegie Mellon Field Robotics Center, we worked on Dante and Ambler and many other systems. We took undersea vehicles and we’ve driven them around under the Antarctic. We’ve been part of the operating team on robots that go down into volcanoes. We’ve used a Marsokhod vehicle, which was invented by VNIITransMash Team in Russia, Saint Petersburg, and that robot is an incredibly robust planetary exploration robot and so we used that for a long time as one of our field tests, and as you will see it when you look at the NASA Ames lab, you’ll see there’s a whole suite of newer vehicles that are being used, too.

Q:

And in developing this kind of virtual world that the humans could, that could in a sense translate between what the robot is doing and what the human is actually trying to do, what were some of the important design considerations that played in, and were there times when you were like surprised by the kinds of things that you needed to develop from that <laughs> or...

Michael Sims:

So, one began with the intuition, which is you build this virtual world that represents what’s going on in the terrain and you give it to the robot drivers and they can just drive around. And we do that, and that’s fairly effective, and it turns out that a much harder driver for building these systems is doing science. So, if you can do the science, you can do the robotic engineering operations usually. The science is much more demanding in its requirements. Human operators can get by with horrible resolutions and the models not being very accurate, but scientifically, it’s a much stronger driver. So, one of the surprises was that, early on, was that science actually is a much more powerful driver and if we actually just focused on doing really good science that the robotic control elements actually are relatively straightforward, okay. So, the control parts themselves are not big deals, yes.

<brief interruption>

Michael Sims:

Early on – we are inventing these systems to drive robots, to control what’s going on in the world. The analogy I often use is it’s like driving in a car before they had steering wheels, and there were automobiles that were driven before they had steering wheels and before they had even gear chains, right, in regular ways. If you think of driving a Caterpillar, you have a lever that you pull to sort of speed or slow down one wheel or the other of those two tracks. The steering wheel’s a great invention, it allows you to interact, and we are still often in that kind of realm of major transition in understanding. We took a vehicle to the Antarctic and we had benthic scientists, people that study undersea organisms of various kinds, mostly animals, and we drilled a hole through the ice near the Ross Ice Shelf and McMurdo, and we’d drop a diver down and we’d drop a robot down and drive the robot around and we did this a couple of times. And one year when we did it, we had an operating system we thought was okay, so we operated it for a few months, maybe two months, trying to improve, and we trained operators and the operators got better and better, right. And better and better meant that after someone drove for a week or so, they could sort of tolerate for an hour or two, right. So, it took – but there was drastic improvement over a week, okay, and we were gonna give – we later, a few weeks later, did a show at the Air and Space Museum where we drove the robot from underneath the Apollo 11 capsule in the lobby of the Air and Space Museum and as we were getting close to that we realized that there was a whole other way to do it, right, we could change the interface. But in order to do that we had to throw away all the human factors evaluation we had done previously because we weren’t just gonna change – but we did, right. 
So we said, “Okay, we know a better way to do this.” Effectively, what we did is there are large currents that push cabling, so we effectively gave it mechanisms for taking all of that out, so it allowed you to be relatively stable at a spot, and there’s this picture of me showing a seven-year-old girl how to drive this like in 30 seconds and she could drive the robot around Antarctica fine. So, it was that difference between somebody training a few weeks to be a good driver and someone sitting there and being able to drive it, a seven-year-old girl, with virtually no training, is the kind of differences that we’ve had and we still have. We’re still in that realm of discovering new ways to do it.

Q:

What was that robot called?

Michael Sims:

That was called TROV, T-R-O-V. Telerobotic Undersea Vehicle, I don’t know. <laughs> Something like T-R-O-V though.

NASA Ames AI Group

Q:

Yeah. Just to go back a little bit to when you first came to Ames and Peter Friedland.

Michael Sims:

Mm-hm.

Q:

So, what kind of robotics was going on here then and where did this new sort of AI group fit in with that?

Michael Sims:

Mm-hm. So, the robotics effort here at Ames was really started by a person named Henry Lum and Henry began the effort looking at robotics associated with Space Station and they had a PUMA arm that they used to do various activities. We actually had a robot – NASA is an interesting place in that there are different fiefdoms and, you know, there’s the science code and there’s a space station code. These organizations may or may not talk to each other but they often have different organizations they designate as their lead person or lead center to do these things. So, Ames had an official role in doing some robotics stuff for space station but none whatsoever in science, in fact the science side forbade us from doing any of the stuff. So, science guys would come by and they would lock the doors <laughs> so that they wouldn’t see what was happening. But they recruited me to come into that group at some point, but I had much less interest in orbital robotics than I did in what we call field robotics, planetary surface robotics. Because basically, we as human beings are gonna soon live on multiple places, I can explain what I mean by “soon,” but we’re gonna soon live on the Moon and Mars and Earth, right. So, there are gonna be at least three major habitats of humanity in near term, and robots are part of our major tool in making that happen. So, I was really interested in how we did the surface ops and how we got that ready. So, I effectively moved most of our effort into field robotics and we’ve been doing mostly field robotics since that time and it began with, and has continued with, a very close relationship with the science team here. So, we have repeatedly done field tests. A field test is we do a best analog we can to what we would do on a planetary surface. 
So, prior to the Pathfinder rover going to Mars and prior to the MER rovers going to Mars, we would regularly bring in large teams of people that actually were eventually on those science teams into our site and we would then have a robot in a remote location and then we would see what it took to actually operate that powerfully. Can you see, can you understand what’s going on? Sometimes, no, given the tools we had at the time.

Q:

So would you say, you said earlier that your work was more software oriented, so were you writing a lot of the software that wound up on the Mars explorer robots or developing parts of the platform?

Michael Sims:

So, that’s a great question. I’ve also had many broader roles in NASA since that time. One of the roles I had was as part of, and occasionally lead of, a NASA internal group called the Telerobotics Intercenter Working Group, which is a great organization that once existed in NASA to coordinate robotic activities across NASA but also with several external entities, university entities – I’m sorry, I forgot your question, try it again.

Q:

Parts of your work is the software elements of the Mars exploration…

Michael Sims:

Yeah, great. So, one of the problems that we were trying to address the entire time that this organization existed was, “How do you take this field robotics that we do on Earth and integrate it into missions?” It’s a very difficult process. Partially, there’s a little bit of “not invented here,” which makes it hard to get in. But probably more importantly is that planetary missions are different classes of activities than things you do on the surface, than we do here on the surface of Earth. So, whereas we can build half a dozen robots and take them out repeatedly, if you send a robot to Mars, you spend hundreds of millions of dollars and you don’t want to casually drive off a corner or have your software fail and screw it up. So, there is some sense to this conservatism in letting software get integrated but I can tell you from the point of view of our team, this Intercenter Working Team that was headed initially by Mel Montemerlo at NASA Headquarters and then by Dave Lavery, I can tell you that what we had was a great deal of frustration at the ability to – at the one side you look at, here’s great, kickass technology that we can do on the ground and then you look at what actually gets in a mission and it’s like nothing compared to it. So, that’s been a struggle and continues to be a struggle, “How do you integrate?” And I would say there was a great leap forward in the Mars Exploration Rovers mission and mostly it came from one source, in my view, and that source was the integration of the principal software architect. The person that wrote the software on the robots, effectively, was a graduate from CMU. So, it was person transfer, so by taking somebody that had come through a program of graduate work where he had worked in these leading technolog – and then put them in the midst, integral in a mission was the way to integrate. It worked very well, and that person was Mark Maimone. 
There have since been a number of other good people that have gotten into the group and integrated, but Mark was the critical transition.

Q:

And to the extent that you’re doing research and different elements of software, is it primarily like architecture of the control, or the interface or vision manipulation, or sort of everything?

Michael Sims:

Mm-hm. Yeah, it’s a system. It’s a system and to me it’s important to draw that box of the system around the human beings, right. So, you have this box which includes human beings interacting with a robot. You have a communication system that interacts between – you have a computer system and a communication system interacts between the two and then you have a robotic environment and then you come back and close that loop through a science team, typically. That whole system, how do you build that? And that’s the hard part, is putting it together. Yeah, and there are lots of important parts. I mean, here at Ames we’ve done a huge amount of work historically, have great systems to do stereo modeling, right. So, we’ve long built wonderful models of terrains from stereo images because that’s what we typically have on planetary surfaces, and then we have developed systems for displaying those. But it’s important to realize those are one component of an overall system and to make it work you have to have them all working.

Q:

What are the big challenges in doing that?

Michael Sims:

So, it depends on the time, right. I think in the case of the MER missions, which have been going on now for seven, eight years or something. When did we land, in ‘04, so that number of years. In the case of a MER mission, I think we’ve done a reasonable job at sort of getting a handle on the data but clearly early on, managing data was a big deal. You get back science data and where the hell is it? I don’t know, and it’s interesting the method that we commonly use right now is not one you would often think of. Its primary organizational structure is by the day you took it, and so I can tell you, “Oh, if you want that, it’s on day 39,” and we have more than 2600 days and yet I still mark things in my mind by that day and we can go back and retrieve it, and a lot of it’s organized that way. So, data management, it was a big deal. It’s a hard problem but this kind of organizing by days seems to work fairly well and it’s not because we haven’t thought of other things, it’s because it still seems to work better than anything else people have thought of. The critical – so I proposed a couple of missions, robotic missions, that are called Discovery class, which means they’re a few hundred million dollar missions.

<brief interruption>

Victoria Mission

Michael Sims:

So, I proposed a mission called Victoria, which was a mission that went to the southern polar cap of the Moon, and it was – not polar cap, polar region – and it was to go into a permanently shadowed crater and to do detailed verification and resource analysis of what resources were in that southern region, in that permanently shadowed crater. I also proposed – I was a deputy PI on the second one, but it was a concept I created – a mission called Long Day’s Drive, which was a Mars mission that went to the northern polar region of Mars, and the advantage of that one was both of those missions were designed to do science very quickly. So, it’s a design of a vehicle that is contrary to the designs we’ve used on any prior missions, or any of the currently planned ones, at least short-term plans, because it’s designed to gather the science very quickly, right. How do you gather the science quickly? Not less science but roughly the same science, but how do you do it in a way that you can do transects in an orderly fashion for many kilometers while traversing and moving along. And you choose a different set of instruments, you choose a different set of components and you put them together in a way that actually allows you to do this mission. They’re perfectly viable, it’s just an alternative perspective on how you do that design of the mission.

So, one of the most difficult things in all of these is understanding how to put that system together, how to design – the space is really large so you have to make design choices. One of the criteria that allows you to do those kinds of missions is the ability to have confidence in a vehicle self-saving. So, if I knew that every command I sent my vehicle would be safe, I could do a huge amount of things. But in fact in MER we spend on average six hours a day in uplink, maybe more, verifying that what we’re sending is safe and it’s very carefully laid out. We effectively program MER in assembler language, not what we call it but if you know assembler language it looks like assembler language, you know, and you say, “Do this,” and “If that, do this.” You’re carefully monitoring that, human beings are carefully – eyeballs are watching that whole thing to make sure you don’t make errors. If I actually knew I could send an arbitrary command to that vehicle it would be so much easier and I could do so much more because I wouldn’t be nearly as worried about the process and I could stack things together time after time. But in the case of MER, for example, we don’t have faith in the obstacle detection for use of the arm. So, it’s a mission requirement that we have to see the terrain under the arm before we can move the arm in these environments, right. So, before we can actually try to do placement of the arm on something.

Q:

What do you think it would take to achieve that kind of confidence?

Michael Sims:

I think we’re capable of doing it right now, right, and there are circumstances in which it’s dangerous. So, the question is really, “How well can you detect dangerous circumstances?” On MER we were inside a crater, Endurance Crater, and we were down near the bottom, down near as far down as it went, and there were drifted sand-dune-like piles that we were traveling next to and we wanted to get around this set of rocks to this other set and we gave a command and we said, “Okay, let’s move half a meter that way and see where we go,” and we slid downhill about a meter, right. So, instead of going one meter that way we slid downhill about a meter, and it was not too far down that hill where we probably couldn’t have gotten out of, at least that was an expectation on our part. And we had a ground rule, by the way at that point, that we always needed one of the tires on solid material. So, we need to be able to manage those kinds of circumstances. You need to be able to know that what you’re doing is leading to slippage, that you’re – how do you respond properly to this. If you start falling, a human being has a natural set of reflexes, those are really powerful, but they might do the wrong thing. And sort of, how can you have a high confidence that you first detect the right things that are going on and you respond to them appropriately. And the way we manage it right now, on the MER missions at least, which is typical of how we do this and in my view it’s incredibly limiting of the future, it’s not how we’re gonna do it in the future, but it is that we do these keep-out zones, right. Somebody exclusively lays out a terrain that says you’re gonna travel down this path, if you get outside that way, too close to that rock, stop. Right, yeah.

Uncertainty in Environments

Q:

So, do you think in the sort of long-term development that it’s gonna be a matter of sort of the experience of the operators with the robots or do you think it’s going to be something more like software verification that you know the system is safe?

Michael Sims:

I don’t think it’s either of those, okay. So, you want some sound verification that the software you’re doing is good, right? But it’s more than software in and of itself, it’s the interaction of your hardware, software and the environment that you have to manage, and the hard part about the slippage example, slipping down the hill further than we commanded and in an opposite direction, the good thing about that – the hard thing about that was that the commands I issued could have been exactly the correct ones, right, the system could have acted exactly properly, in fact I’m sure it did, and yet it did something bad for the vehicle because of the situation. So, you have to marry sensors about the world, you have to understand the physics of the world in a way, or you at least have to have really good heuristics. I mean if you start to fall, for the most part your reflexes are appropriate, right, but they aren’t always. Sometimes you do slip the wrong way and you do hit the ground and you break something, or whatever. But for the most part they’re really good, right? So, it’s in the realm of doing that.

When I ran the robotics group at Ames, one of the tasks I gave the team, and we never really got to it to my satisfaction at least, people were worrying about driving this one rover and I asked them, I would like one operator to control a hundred robots on Mars or on the Moon, right, how do we do that? One operator, a hundred robots, or ten, whatever the number is, but it’s not one and it’s not a dozen people doing one in a day. It’s one person doing ten, or whatever the right number is, a hundred. If you take that as where you’re aiming then it’s a whole different game we’re playing than the way we play planetary robotics right now. You can imagine someone doing that on a factory floor, you can imagine one person commanding a hundred robots on a factory floor, right? If we’re going to have the kind of future that we think is possible, if we’re gonna mine minerals off the Moon and asteroids and if we’re gonna create colonies on the Moon and Mars and other places, we’re gonna have to manage large numbers of robots doing a lot of things by initially very small teams, and the traditional metaphor, which is the traditional way of interacting, dozens of people watching one little thing move for six meters, is not how we’re gonna do it.

Q:

So, do you have some ideas about how one would do it? I mean, in the factory they manage it because it’s a more structured environment, but your problem, similar to having those kinds of robots out in the normal world with people, is that it’s an unstructured, partially unknown environment. So, what are some of the ideas about that complicated –?

Michael Sims:

So, I don’t have an answer, right, but I can give you some hints. If you look at some of the DARPA work that looks at legged robots, there’s some very nice work where they have these legged robots that you can sort of shove around, right, and you can put them in random circumstances on the ice and they can start slipping and yet they manage to manipulate themselves, they manage to stabilize in those environments. That’s kind of what you need. You need something so that the vehicle in and of itself is robust with respect to managing what it’s doing, and given that then you can just use relatively simple high-level commands. It’s that level, that sensor-to-response, staying-safe level, where the crux is. When you get around that, the rest of the game is easy, right, because if you’re not so worried about the high-level commands, you can stack lots of high-level commands and command them out and do lots and lots of things. But if you’re worried at that level – you take this action and you’re not sure it’s gonna do the right thing, right – then you’re in a very cautious realm and you’re not gonna be able to command lots of robots.
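The "sensor-to-response, staying-safe level" Sims points at can be sketched as a reflex gate wrapped around every queued command. This is an illustrative toy, not any flight logic: the function name, the sensor inputs, and the threshold values (25° tilt, 0.30 slip rate) are all made up for the example.

```python
def safe_execute(command, tilt_deg, slip_rate, max_tilt=25.0, max_slip=0.30):
    """Reflex-level gate: carry out a queued high-level command only while
    the vehicle's own sensors say it is stable; otherwise override the plan
    and stabilize first. Thresholds are made-up illustrative numbers."""
    if tilt_deg > max_tilt or slip_rate > max_slip:
        return "stabilize"          # reflex response wins over the plan
    return command()                # safe: execute the high-level command

# A hypothetical queued command:
drive = lambda: "drive 6 m"

safe_execute(drive, tilt_deg=10.0, slip_rate=0.1)   # → 'drive 6 m'
safe_execute(drive, tilt_deg=30.0, slip_rate=0.1)   # → 'stabilize'
```

The point of the sketch is that once this layer is trusted, the operator can stack high-level commands freely; without it, every command needs cautious human review.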

Q:

So, basically the machine needs to have more local autonomy and a response and awareness to the environment to make the job easier for the operator who’s actually not there.

Michael Sims:

Right. And that, by the way, is the crux of it and we, in the robotics and the artificial intelligence community, have repeatedly made the mistake in that realm. People think about – well, to begin with, people don’t take autonomy in the definition I gave previously, which is, “People do more of what they want and less of what they don’t want.” People take the definition of autonomy to be, “Oh, you use autonomous technology.” I build planners, I build machine learning systems, you put one of my autonomous planners or my autonomous learners in your system, it’s more autonomous. Well, it might be and it might not, and I can give you a classic example of that. There are people who talk about robotic spacecraft and they wanna fly these robotic spacecraft and they want the spacecraft itself to decide to go to another planetary body. Well, it’s patently absurd, right. You’re never gonna do that because first, the travel time between locations is very large relative to the communication time. So, it’s easy enough to tell them to do the right thing, there’s no necessity of time. And secondly, orbital dynamics is very expensive stuff and people want to decide where they go. They don’t want to give that away, they don’t want some robot deciding, “Oh I wanna go to this other asteroid, I like it better.” That’s not a place you put a planner. So, you have to be careful, it’s not just the system, in fact I would argue that putting in a planner at that level actually makes a system less autonomous, right, because it’s taking away the human’s ability to do what they want to do. So, you have to be a little careful about where you put in your autonomous systems and where you put the smarts into it, to actually get it to do what you’re trying to do.

Q:

But in terms of the confidence in the system, I mean, it’s also about the human, that interaction with that system, having the confidence. So, it’s about them learning how that system operates and developing trust in that way.

Michael Sims:

Yeah, yeah, definitely. I mean, this is not an issue only for robots on planetary surfaces, right. If you’re controlling an automobile, if you’re controlling a nuclear reactor, you would like to have a lot of confidence that these things are gonna respond, in a quick-response mode, in a way that at least does no more damage, right.

MER Mission

Q:

So, to take us back with you to the beginning of the MER mission, I think you were one of the co-PIs on that proposal?

Michael Sims:

Yes.

Q:

Could you tell us a little bit about how it all started, were there alternative visions of what would happen, or competing projects?

Michael Sims:

So, in the Pathfinder mission, which I was part of too, Steve Squyres pulled me aside and asked me if I wanted to be a part of the MER mission and, it was a great opportunity, so I of course said, “Yes.” Prior to that Steve Squyres, the PI for MER, had a proposal for the MER mission, as did one of his colleagues, Ray Arvidson, I believe – this is pre my history – and so those two proposal teams merged into what became a stronger team that became the MER team. So, there were some negotiations at some level in the background that I wasn’t privy to, so I didn’t really know much about what happened on those. We proposed two or three times when I was part of the team – we proposed to Discovery and we did not win, and we proposed once more and that mission went along for a while and then it was cancelled, and then we were accepted the third time and that actually flew. There, of course, were very strong technical issues that come up and, you know, how do you manage this? And some of those, frankly, you manage by default, you just didn’t think about it. Some of them you manage because you have incredible constraints, so it’s, “This is the cheapest thing to do, the easiest thing to do.” Some of them you manage because they’re politically something that you need to do.

As part of winning the MER proposal, and it was what got us back in the game, right, we proposed that we were effectively gonna fly the Pathfinder mission and we were just gonna not have a lander, we were gonna have the rover be the lander, right. So, there wasn’t gonna be a separate lander, and then we’ll just take the mass from that lander and put it into the rover and that’ll be fine. So, we’ll go with the same airbags, we’ll go with the same parachutes, it’ll use the same processor. All three of those that I just mentioned caused us grief. Same parachute, same airbags, same processor. The first time we tested the parachutes, they ripped apart. First few times we dropped the airbags on real things, they ripped apart. The processor, we still fight today, it doesn’t have enough computational power to do reasonable obstacle avoidance on the terrains. It’s just because we’re using the Pathfinder processor and it was old at that time, right. So, we still fight the fact that it’s an old processor. So, a lot of the decisions are made in those kinds of ways.

Squyres is a powerful and great leader, right, and Steve began with very clear rules for how we were gonna work together. We have rules of the road which describe the interaction of the entire team, the science team members, and the relationship between the science team members and the engineering team, and those gave a focus for how we interact, and let me just say one thing about that. A lot of missions, historically, and even since MER, have had this battle – maybe not quite the right word – between the science components of the team and the engineering components of the team. They look as if they’re trying to do different things but in fact they’re not, right, it’s a conversation, you need to be in a conversation about how to get something done that you both want to get done and do. And Squyres and Arvidson and the rest of, you know, the support of that, really insisted from the beginning that there be this close connection between the engineering part of the team and the science. So, we meet every time there’s an uplink, we meet as a single team, and we plan that day with both engineering and science people sitting in the room and talking about what’s happening, how we should do this, how best to do this, what’s most consistent with what we’re trying to do. There is effectively not a separate engineering and science team, it is a single team working for a common purpose. That metaphor – the people that actually want to do something with it and the people that have the ability to turn the screws, working together – is powerful and that probably is the strongest thing that we do well at MER.

Q:

And you had mentioned earlier that practically in the very beginning the science team was cordoned off in its own fiefdom, it seemed – very early on when you just got here, you mentioned there were kind of fiefdoms within NASA where the different parts would be...

Michael Sims:

Ah, yeah so within NASA, organizational hierarchy.

Q:

Okay.

Michael Sims:

There’s a science code, which means all the science missions of any kind come out of that, from a NASA Headquarters point of view. And then there’s an aero code, which controls all the aero missions, and then there is a space station or human space flight code – it depends on what year it is exactly how those are partitioned apart. Each of those has an associate administrator who is in charge of that code, and at levels much above my pay scale they argue about where money goes and who does what, but...

Collaborations

Q:

So, who are some of the people you’ve collaborated with over the years that have been influential on your thinking and work?

Michael Sims:

So, I mean Red Whittaker comes to mind. So, I’ve collaborated a lot with Red over the years, and people that have worked out of Carnegie Mellon. So, the field robotics team there, you know, there’s a dozen people or more on that team that I worked closely with over the years. Clearly people here, some people at JPL, some people at USGS – Ken Herkenhoff at USGS, Justin Maki at JPL, Carl Ruoff at JPL. So, those are by no means the – I mean that’s just a random sampling.

Q:

What’s Red Whittaker like to work with?

Michael Sims:

I love Red. Red’s a wonderful addition to humanity, right, and Red’s a powerful guy. He used to do boxing in the Marine Corps, so he’s got that kind of temperament, and I would say he’s done as much for robotics for space as anyone. He has a passionate commitment to robots going into space. Sometimes it’s been very frustrating because he hasn’t flown a mission yet, but he certainly has been out there and Red has repeatedly sort of backed people up in the sense of, “Why do you want to do this?” “Here’s a better system for doing that.” He’s great technically to have in a room, you know, “Here’s these mechanisms, here’s these motors, here’s what I’m trying to do,” and he has great experience from the point of view of industry. He’s done a lot of work on farming equipment and automating industrial components, particularly in the agriculture and mining industries. He can be a sort of no-nonsense person in the game, but I’ve never personally heard Red criticize anyone personally. He may criticize the team, <laughs> “You guys did a bad job,” or something but, personally? No, I’ve never heard him criticize anyone. We’ve interacted closely together.

Q:

Okay.

Michael Sims:

Not always necessarily in the same room, but we’ve interacted closely together. But I mean, if you look at it from a broader perspective, what’s the accomplishment? I think there’s phenomenal accomplishment in how we managed it as a team, right, and I credit almost all of that to Squyres. He certainly had support from JPL management in the process, right, but Squyres was the person that insisted on it all along the way. But the other part is just as a pure technical accomplishment, right, so okay we haven’t traveled quite as far as the Lunokhod yet, but it’s been an incredible scientific set of accomplishments. It’s been an incredible accomplishment at keeping them alive, having a team working together and doing it, and a team that talks to each other. If you haven’t been exposed to planetary missions outside of something like MER, it would be valuable for you to talk to people that are on them because they can be incredibly dysfunctional. You have people that get angry, you have people that have egos getting in the way, you have people that are much more committed – I mean maybe reasonably, but these are critical points in a career, people don’t do this many times, so they want to make sure that their point of view is taken and their credit is given, right, and it tends to create fissures between groups. So, the ability of MER to – I mean, I would say people still generally like each other, right. Yeah, I mean, I consider all the people on it my friends and there are some that can be annoying, right, but that’s in the context of a friend being annoying. You don’t have to look very far in missions – and the thing about missions, I think, is that it’s a team that’s forced to collaborate. Once you’re set, they’re forced to collaborate for long periods of time, right, and it’s not hard to find situations and teams where people hate each other.
You know, where generally it’s like, “I don’t wanna be in the room with these people.” So, that’s a good social accomplishment.

Q:

And how many people are on the team, do you –? Regularly…

Michael Sims:

You’d have to ask somebody like John Callas, who is the program manager at JPL, to actually know the number. There were originally maybe north of a hundred science team members, maybe a hundred and twenty science team members, and maybe a comparable number of engineering team members. I don’t know what it works down to in full-time equivalents but it’s not very many these days. In general, something like MER, since we’ve landed, has basically gone down about thirty percent a year or something. So, you know, to dial up how much money you get next year, take the money you get this year and multiply it by seventy percent or something, and that’s what you get next year.
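Sims' seventy-percent rule of thumb compounds year over year, so the decline is steep. A small sketch (the $20M starting figure is made up purely for illustration) shows how quickly such a budget shrinks:

```python
def budget_projection(initial, decay=0.70, years=5):
    """Sims' rule of thumb: each year's extended-mission budget is
    roughly 70% of the previous year's (about a 30% annual decline)."""
    budgets = [initial]
    for _ in range(years):
        budgets.append(budgets[-1] * decay)
    return budgets

# e.g. a notional $20M first-year budget, three years out:
print([round(b, 1) for b in budget_projection(20.0, years=3)])
# → [20.0, 14.0, 9.8, 6.9]
```

Compounding at 70% per year means the budget is roughly a third of its starting value after three years – which matches the "not very many these days" staffing Sims describes.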

Q:

How do you think, if you have any idea, it might be an “impossible to answer” question, that Squyres had the intuition or the skills to create this kind of a highly functioning team?

Michael Sims:

So, that’s a good question to ask Steve.

Q:

Okay. <laughs>

Michael Sims:

I will give you an intuition. So, Steve was here at Ames before he went to Cornell, I think he did a postdoc or something. I know people that knew Steve at that time who said he was rougher, he had a lot more rough edges, and maybe someone pulled him aside – in fact I know he got some really good advice from Gentry Lee and people like that, who have a huge amount of experience, that said, you know, “Here’s how to play this. Here are the things that you have to push on, here are the things you don’t wanna bother touching.” So, he got good advice, but I think the crux of it is an immense willingness to play the game to make it work, right. He just keeps his ego out of that game and decides, what does it really take to make it work? And he looks at it, he’s willing to change, he’s willing to understand, and my experience has been that occasionally Steve does something that I would think is a little, you know, that I wouldn’t have done, and I’ll tell him and he cleans it up, right, he straightens it out. A lot of people in Steve’s position wouldn’t bother; Steve has a lot to do in his life. So, I mean it’s a willingness to be human, it’s a willingness to play with a team and actually find what it takes to make it work. He’s among the best. So, you were gonna ask me about Carl or something.

Q:

Mm-hm. <laughs> What’s Carl like?

Michael Sims:

So, I love Carl. I don’t know Carl nearly as well as I know Red, but Carl and I have known each other as sort of NASA-robotics-connected people for a long, long time. Carl and I also went to Kamchatka together. So – I had a Marsokhod robot, I still do in one building around here, and we were doing a set of proposals with the Russians using Marsokhod. The Marsokhod is a six-wheel, conical-wheeled vehicle built by the team at VNIITRANSMASH, and VNIITRANSMASH is one of the most spectacular teams that ever existed in robotics. They had many, many generations of robots between the Lunokhod, which they built, and the Marsokhod, which they built to go to Mars. I would still argue the Marsokhod is the most capable vehicle of that class yet built. It’s an outstanding vehicle. People typically criticize it for all kinds of reasons like power. Power for mobility was, obviously, from early on, not the issue and it’s not the issue here, right. Does it skid-steer? Yeah, it’s a little less efficient but that’s not the driver here; power for other things is what drives these missions for the most part. So, Marsokhod is a great vehicle that they designed and, if you get a chance – I once had a video that they created of their generations of vehicles, and if you look at other people designing vehicles since then, you’ll see that this Saint Petersburg team had previously designed something like it and tried it out, and you’ll see a field test of it. So, they had a video, sort of like their hall of fame of vehicles, that I once had, and it got misplaced or stolen or something along the way, so I don’t have it anymore. But Lou Friedman gave me that. Lou Friedman used to be director of the Planetary Society. So, I wandered down a pathway and lost track of...

Q:

Kamchatka with Carl.

Michael Sims:

Oh yeah, Kamchatka with Carl. So, we went with the VNIITRANSMASH team to Kamchatka, right. This was associated with a number of things we were up to, but primarily associated with a proposal we were doing on a rover – I think it was the first Discovery proposal cycle. We did a proposal for a rover to go to the Moon, right, and we were gonna use a Marsokhod, effectively, as our vehicle, and as part of that we did field tests. And I think this was in the timeframe when we were doing that proposal. So we sent a team to Russia – to Kamchatka. I went and Carl went on that trip, and we had another team to do control. This was joint proposal work with McDonnell Douglas at that time, and so they were in Huntington Beach. And there was a good team at Huntington Beach that included Carl Sagan, for example, as part of the science team on that end of the game. And Carl and I had a great time in Kamchatka with the Russians. It's an interesting place – an interesting experience – and we were mostly watching the Russians control their vehicle at that point, but it was a good experience.

Russian Robotics

Q:

So who are some of the people involved in the St. Petersburg team? You had mentioned the lead –

Michael Sims:

Slava Linkin, L-I-N-K-I-N, was the lead at IKI, I-K-I, which is the Russian Space Science Institute just outside of Moscow. The engineering team on the Russian side and the people that bought – and own – the vehicle were at Lavochkin, at Babakin. Lavochkin is the usual name for it; Lavochkin was kind of like JPL for the Russian space agency. It's a sort of quasi-commercial, quasi-mainline-funded institution that ran mission stuff, and they'd done a lot of mission stuff. So they sort of did the mission side there. I forget the name of the guy who led their team, but he was one of the Lunokhod drivers. So he had driven the Lunokhods when they were on the Moon. The principal designer of the Marsokhod was a guy named Gromov, G-R-O-M-O-V, I believe. I don't recall his first name at the moment, but he was at VNIITRANSMASH in St. Petersburg. VNIITRANSMASH basically – I mean they are a major business building trucks and stuff like that – military trucks and things like that. On our side it was us, and the Planetary Society side was involved, so Lou Friedman and those guys. Also on our team would have been Butler Hine, who's here at Ames. I'm not sure if Terry Fong was around those days – probably not, but he might have been. You can ask him, but those are the primary ones.

Q:

We're also looking for more Russian roboticists.

Michael Sims:

Yeah, I'm all –

Q:

It's very helpful.

Michael Sims:

Yeah, Gromov was good, and I can find the names of the guys at Lavochkin. But the ones that I knew are probably retired by now.

Q:

Okay.

Michael Sims:

Because this was maybe 20 years ago, and they had driven the Lunokhods. So they were not young.

Q:

How was it working with the Russians? Were there differences in approach? Or can you tell us something about the Russian –

Michael Sims:

Yeah. Well, there were definitely differences in approach. It definitely was – at that time – an entrepreneurial world, and probably still is. I remember someone once giving me a little quote about Russian mission proposals, and they said, "Basically anybody can propose anything, but if it doesn't work you never get to propose again." So it was a different strategy. So we did a number of field tests with the Russians. We had a close relationship with them for a long time. I found the relationship great. They were good partners. Strategically they were different. Partially it probably came from people interacting with the Soviet system. We needed to know the format of certain kinds of images they were sending us. What were the headers doing? What were they looking like? What's going on? And we tried to get that information for several months in emails back and forth, and we just could never get it out of them. Just could never get it out of them. Then we show up and – actually in a room with them we could figure out exactly what it was, and it all made sense. It was all fine. But that sort of difference in being able to grab something out of their head remotely – I never quite understood that, but you could see it other places. When we went to Kamchatka with Carl Ruoff it was a classic Russian field trip, I guess. We were in a desolate place in the middle of nowhere. I remember it as one of the worst trips, in my mind, I ever had, for the following reason: there was an old military-like truck that had almost no suspension on it, and we're doing these washboard-like roads – sort of gravel roads – that we traveled for many hours. And I can literally remember my brain hurting after trying to sleep – lying down and trying to sleep on that route.
So we were supposed to fly back and forth with a helicopter from Petropavlovsk, the southern city, to the mountains where VNIITRANSMASH had a field site. And by the way, Gromov came across a part to a Lunokhod when he was there, and he gave it to me. So I have this one part off one of the Lunokhods that he tested. But, you know, Russia was Russia. Petropavlovsk – one day there just was no hot water, but their cold water was furnished by runoff from a glacier. So it was a very cold shower.

NASA Projects

Q:

So what are some other important systems and innovations that you worked on during your career at Ames?

Michael Sims:

So one of the things I work on right now is trying to get teams to collaborate. So I founded a collaboration lab – a center that's interested in how you get teams to collaborate better. So I look at that – this is not robotics per se, although it does have a relevance to it. I look at teams working together both in terms of missions, but also in terms of other science teams.

So I have a collaborator who is very active and leads collaborative technology for the virtual institutes, which are NASA institutions – the ones that are busy so far are based out of Ames – which are hundreds of scientists around the world collaborating in an institution without walls – not brick-and-mortar walls. So we've looked at: what's the nature of collaboration? What works? I'm interested in that both theoretically – and I can say more about that if you want to hear it – but I'm also very interested in it practically. So we build hyperwalls. Right now we are building a portable hyperwall, which is eight 46-inch screens with very thin bezels, so they sort of all merge together, but it's portable. Eight of those screens go across, and then you can display really high resolution images. And we have very high bandwidth in general. So in the building next door we have another one. We have a 10-gigabit connection from there, and we can do that all the way to our collaborators in San Diego – UCSD, Calit2 collaborators. So very high bandwidth, and if you couple that with high performance computing, in my view we're looking at what the future's going to be. You're going to have gigapixel images. You're going to have walls which are displays able to put gigapixel images up at native resolution. You'll have these in your home. They're going to be affordable, to just roll out. You're going to be able to transport huge images back and forth between these data sets, and you're going to be able to process them to see things in different ways. So what does that enable? What does it enable for human beings? What does it enable for people to collaborate? How can we work together better?
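As a rough sanity check on the numbers Sims mentions – gigapixel images over a 10-gigabit link – here is a back-of-envelope sketch. The 3-bytes-per-pixel (uncompressed RGB) figure is my assumption for illustration, not something from the interview, and real transfers would also pay protocol overhead:

```python
def transfer_seconds(pixels, bytes_per_pixel=3, link_gbps=10.0):
    """Back-of-envelope: seconds to move an uncompressed image over a
    dedicated link (ignores protocol overhead and compression)."""
    bits = pixels * bytes_per_pixel * 8
    return bits / (link_gbps * 1e9)

# A 1-gigapixel RGB image over a 10-gigabit link:
print(transfer_seconds(1e9))  # → 2.4
```

A couple of seconds per gigapixel image is what makes the "transport huge images back and forth" workflow Sims describes plausible on a link of that class, where it would take minutes on an ordinary office network.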

And the theoretical part – this is behind that – is that I've been looking for a couple of years at what I call Human 2.0, which is – let me back up a little bit. The first thing: the agent of action of human beings is no longer an individual. It has not been for a long time. So a human being cannot build a radio telescope. It's not possible. A human being cannot invent Maxwell's equations, cannot have done Faraday's experiments, cannot mine minerals from Chile, cannot have invented the smelting process, cannot have invented microelectronics. A human being cannot do all those things. So a human being cannot build a radio telescope. So what builds that radio telescope – the agent of action – is not a human being, but an aggregate of human beings. And that's what I call Human 2.0. And if you look at it, it's clear that the fundamental transition that took place was the invention of written language. The invention of written language allowed us to transcend both time and space in working together. So no longer are we bound to only work together when you're sitting next to me, but I can work with you across distances. And I can also work across time. So I can work with Maxwell. Maxwell can have invented these equations; now I can use them. So effectively we can work together to do something. And the thing that's happening right now is that modern technology allows this to happen much more quickly – more transparently. So we can do a Wikipedia. We can have a million people contribute to a document that previously we had done in different ways. Or we can have an open-source, Linux-based world in which a great many people contribute various components along the way.

And one more thing about that – this came out of an inquiry: What is the fundamental difference? What is it that makes a human different? Why are we human? What is it about human beings that's special? After a lot of thinking about that question for many, many years, I've come to believe that this is the fundamental distinction of this entity that we call human beings, different than any other entity on Earth. And my interest in it is not only that, and understanding how humans work, but to realize that when we interact with other species – other planets in the universe – when we find other organisms, it's going to be this kind of transition. You're going to show up, and you're going to interact with these entities, but you're only going to understand them if you see that they can act aggregately in a way that makes these kinds of transitions. They share a memory in books, or they share a memory in electronics. And Human 2.0 is the first post-evolutionary – post-Darwinian-evolutionary – organism. In Darwinian evolution – in the usual way to think of it – you have some set of genetic material that is passed from generation to generation and, via some selection process, you have filtering of that to another population sample. That requires reproduction, and it requires generations in order for that to work. In other words, if you want to change the wiring in your brain to be able to think differently, you have to evolve – you have to have some change take place in the genetics, and then that has to take place over time. It has to get transmitted over time. In our world we can change the way people are wired together just by changing the addressing on an email or some document you're sending around. So we can form a whole new group that's looking at a mathematical subject – robotic control algorithms.
That can be distributed around the world. It doesn't have to be an entity in the same room that's talking. So it's a powerful transition, and it's post-Darwinian. We can evolve. We can change the structure of how we create and what we do as a Human 2.0 entity beyond these normal transitions of genetics and selection via breeding.

Future Challenges and Applications of Robotics

Q:

So what are the challenges for scientific and technological innovation in general and more specifically in robotics given that kind of future?

Michael Sims:

That's what I'm trying to figure out. I mean, literally, that's true. I mean, that's why it's interesting to me, and I don't know a good answer to that. I can make up something while we sit here for a moment, but – I'll give you an example. It's more than just intellectual. It's more than just group think, because there's a physical component to it that goes on at the same time. But if you just think of the intellectual part of it, you might think of the metaphor of a heterogeneous multiprocessor – of some group acting together working like a heterogeneous multiprocessor. No one that I can think of would question the fact that with a heterogeneous multiprocessor, if you kill one of the processors you have a lesser machine. So what is the implication of that for morality? The morality is: if we kill one of the components of this human-based aggregate, we have a lesser machine. And in a less extreme form of that, we actually intellectually do very much the same thing. I went to Rutgers, or if you go to Harvard, Michigan or wherever you go, you're an elite. You're part of a very select small group of people that are educated to know these kinds of things. But there are hundreds of millions of people in the world that don't have access to that, that do not have access to making that kind of change. I'm always reminded of Ramanujan, this incredible Indian mathematician, mostly self-educated, who sort of came out of the wilderness and was discovered by Hardy, Littlewood, or one of those guys, and brought to England – he died of starvation, but that's another issue. But this incredible mathematician came from outside the normal structure. We have people like that all over Earth. What could we do if we actually gave them the capability of working together on the problems that we are trying to address? So we often look at all of this as being resource limited.
We look at it like there's too little to go around. But actually, from an intellectual point of view, there's a huge resource on Earth – of humans – that we aren't fully using.

Q:

In terms of robotics, there's certainly a lot more usable, pretty sophisticated kinds of robots that you can get online, and even the Lego ones doing things like this. Do you see a sort of revolution in DIY robotics changing the kind of scientific problems of robotics in the future?

Michael Sims:

So the first part of my answer is that I mostly focus on robots used in space, so I have the most to say about that. But I'll say something about other classes of robots. I think robots are like microprocessors. They're just a mechanical version of microprocessors. So if you think about microprocessors – if you talked to someone 30 years ago: where are computers going to show up? Oh, it's going to show up in my car. Yeah, I'm going to have a central processor in my car. It's going to control everything. I'll talk to it, and it will say do this. You look at a car today, and you ask, where's my processor? Well, there are dozens of them. They're all over the place, and yeah, they might connect in the background somewhere, but it's pretty transparent what's going on. It's not like an obvious processor. You may well have a nice processor in your toaster.

And I can tell you one really interesting thing: people here are building phone satellites. They are building satellites which are based on Android phones. We're going to fly one of those. We haven't yet, but we're going to fly one of those with one or more phones. One that flies with one phone has a more powerful processor than any spacecraft that's flown yet – one phone – one Android phone. But you don't think of that. Now that we're starting to get web smartphones you think of it more as a computer, but for the most part it's a phone. And in the same way, we're embedding more and more smarts in the world. The same thing is going to happen in robotics. The first wave of that is kind of the iRobot Roombas – room vacuum cleaners. But I think it's going to be rampant, and I think it's going to show up all over the place. And you're going to have smart entities interacting with other smart entities, and some of them are going to have mechanical manipulators that they use. Some of them won't, but architecturally they're going to look very similar. Whether it happens to be something that crawls around the room robotically, or whether it's stationary in the corner, it's going to look architecturally pretty similar – mechanically it might be different. So I think robotics is going to get embedded broadly and transparently, very much as microprocessors have gotten embedded broadly and transparently. In robotics in space, it's: how do we use these creatures to build things? How do we use these creatures to mine ores? How do we find things? How do we do it in a powerful way? And part of it is the question we talked about before, which is, how do I control a hundred of these as opposed to one? If I'm getting a hundred doing something, what is the nature of that communication system? What's the nature of the architecture? What's the nature of the relationship between those components? Is it ant-like?
Is it more hierarchical? What does it look like? I don't know.

Q:

So in terms of architecture and organization of multi robot systems what are the real technical challenges, scientific challenges in the next five to 10 to 20 years?

Michael Sims:

Tony Stentz and Chuck Thorpe once did a paper – I don't remember the exact title, but it's something to the effect of "I've never seen an architecture I believed," or "I've never seen an architecture worth writing about." I often feel that way about describing architectures. Architecture is a great thing to describe when you're writing a paper, after you've done all the work and tried to organize what you did. But architectures are a lot less help when you're actually building something and you actually have to make it work. So to a large degree I don't know. I don't have a sound answer as to what it's all going to look like – what that next phase is going to look like. Ask your question again. I got headed down part of the answer and I got sidetracked.

Q:

Maybe it's an organizational thing, but in terms of the coordination problem of these robots, what's the technical question or the technical problem that really matters – is it a communication issue? Is it sensing? Is it integration of sensor fusion, or all of the above?

Michael Sims:

So I would go back to where I started, which was the comment about architectures. There's a sense in which, oh yeah, you can do some bloody awful things at the top level, but if you're not awful in your top-level architecture, it's probably going to work. Some will be more efficient than others by minor degrees, but you're still in the ballpark. If you have that architecture – if you have a structure of safing, if you have that structure on which you can sort of build the other components – to me, that's the big transition that will take place: when we can trust these devices to do more or less what we want them to do. I have a robotic vacuum cleaner. It breaks all the time. I love it, but that ability to have a world in which it's robust is sort of equivalent to a robot in space being self-safing. It's a world in which you can rely on it to actually do what it's supposed to do. And when I punch my microwave, I assume it's going to do the same thing over and over again, and it's going to be reliable. And it's not going to get confused if I cook one thing and then I cook another thing later. It's simple, but still. I assume reliability and reasonable interaction. We don't have that in most of our mechanical devices – robotic-based mechanical devices – yet.

Q:

Are there any other challenges in terms of space robotics that you see coming up in the future?

Michael Sims:

I always think in terms of the problems we're trying to solve rather than technologies. So the problems we're trying to solve are: how do we cheaply and quickly get objects to space? How do we pyramid what we're building? How do we cascade? And not necessarily self-replicating sorts of deals, although that could be part of it to some degree, but certainly not trying to replicate to high-precision repeatability. So if you can build 90 percent of the components in space – 90 percent of the heavy components in space – that's a huge deal. I wouldn't worry about closing the last 10 percent if they're really light or something – ship them from Earth. But how do we get structures that can actually do the kinds of things we can do? We want to build habitats. We're going to build places human beings are going to live. I think – this is just me talking, pure speculation – I think we're going to live in underground facilities on the Moon and Mars, for the most part, that are going to look very much like your typical suburban mall. A lot of people cringe at that – that's an awful thought – but people are generally comfortable in a mall. There's open space. You can walk around. There are green things. There are offices, and I can go there or here. It's not like living inside the cockpit of an airplane. So I think that's what our worlds are going to look like, and so how do we build those worlds? How do we get to a place where we build those worlds? What are the tools we use? How do we mine? What do we do? How do we detect the right things to mine? Is this platinum sitting on the surface of the Moon? Where is that rich ore? And how do we actually get to it once it's there? Those are going to be really hard problems. But there's a statement which I love, from somebody, that the first trillionaires of Earth are going to be made in off-Earth industries.
And there are enough minerals in one or two asteroids that are not too far off to provide more major metals and major components than we've used – than we've mined – in the whole history of Earth. And they're readily available – not readily available geographically, but on the surfaces of those asteroids they're readily available. So there's a huge potential for humanity once we get out of the cradle.

Advice for Young People

Q:

And we ask everybody a question.

Michael Sims:

Yeah.

Q:

Which is, for young people who are interested in pursuing robotics as a passionate career, what's your advice for them?

Michael Sims:

Yeah, I think that's exactly right: passion. You have to love what you're going for. You don't have to, but if you're going to be successful, passion is a great thing to have as part of it. And there's also, when you play the game – either the academic part of the game or the research or the applications – perseverance. There are going to be lots of hurdles. So if you really want something, there are going to be people standing in your way. There are going to be people saying you can't do this. There are going to be people saying you're not the right person. But if you really want to do it, do it. Just keep going and don't take that as discouragement. And the usual advice: study mathematics, physics, computer science, engineering, those kinds of things, and all the sciences. All of those are great fodder for the mind. They give you preparation to do the right things.

Q:

Great, thank you. Thank you. Is there anything else you would like to add?

Michael Sims:

No.