Oral-History:Terry Fong
Terry Fong
Terry Fong was born in Pittsburgh, Pennsylvania and grew up in the Chicago area. Fong received his B.S. (1988) and M.S. (1990) in Aeronautics and Astronautics from the Massachusetts Institute of Technology and his Ph.D. (2001) in Robotics from Carnegie Mellon University. Fong's research interests include space and field robotics, human-robot interaction, virtual reality user interfaces, and planetary mapping.
During his time at MIT, he worked in the Space Systems Lab with Dave Akin, where he developed the Stewart Platform Augmented Manipulator (SPAM). After finishing his master's degree, he worked at NASA Ames from 1990 to 1994 (his first stint there), where he worked on the iWARP parallel computer, virtual reality interfaces and environments for robotics, and remotely operated exploration robots such as Dante II.
After four years at NASA, Fong went to Carnegie Mellon for further graduate work in robotics. Working with Navlab during this time, he focused on adding proprioception to robotic technology before moving to Switzerland with his wife, where he performed user interface research while serving as co-founder and Vice President of Fourth Planet, Inc. (March 1996 to December 2000). In 2001, he earned his Ph.D. from Carnegie Mellon, with his dissertation research on human-robot interaction conducted at the Swiss Federal Institute of Technology in Lausanne (EPFL). After post-doctoral fellowships at Carnegie Mellon and EPFL, Fong returned to NASA Ames in June 2004.
Fong is NASA's Senior Scientist for Autonomous Systems, a position he has held since 2017. He is also Chief Roboticist and former Director (2004-2017) of the Intelligent Robotics Group (IRG) at the NASA Ames Research Center. Fong previously served as project manager for the NASA Human Exploration Telerobotics (HET) project, which developed and tested advanced telerobotic systems on the International Space Station. From 2002 to 2004, Fong was a post-doctoral fellow and deputy leader of the Virtual Reality and Active Interfaces Group at the Swiss Federal Institute of Technology in Lausanne (EPFL).
In this interview, Fong discusses his involvement in and contributions to the field of robotics. He recounts his time at MIT, Carnegie Mellon, and NASA Ames, and the research work and projects he completed. He reviews his various robotics collaborations, his involvement with Fourth Planet, Inc. and as director of the robotics group, and his contributions to virtual reality interfaces and Human-Robot Interaction research. Additionally, he reflects on the evolution and challenges of robotics and provides advice to young people interested in the field.
About the Interview
TERRY FONG: An Interview Conducted by Selma Šabanovic with Peter Asaro, IEEE History Center, 10 June 2011.
Interview #700 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.
Copyright Statement
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, selmas@indiana.edu.
It is recommended that this oral history be cited as follows:
Terry Fong, an oral history conducted in 2011 by Selma Šabanovic with Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.
Interview
INTERVIEWEE: Terry Fong
INTERVIEWER: Selma Šabanovic with Peter Asaro
DATE: 10 June 2011
PLACE: Moffett Field, CA, USA
Early Life and Education at MIT
Q:
We can start with your name and where you were born.
Terry Fong:
My name is Terry Fong. I was born in Pittsburgh, Pennsylvania.
Q:
Did you go to school there as well?
Terry Fong:
I grew up outside of Chicago. I went to grade school, junior high, and high school in the Chicago area. And then I got my Bachelor’s and Master’s in Boston and my Ph.D. in Pittsburgh.
Q:
And which school were you at in Boston?
Terry Fong:
MIT.
Q:
And what major were you pursuing at MIT?
Terry Fong:
Aeronautics and astronautics.
Q:
And how did you get interested in aeronautics?
Terry Fong:
When I was growing up, I always wanted to build and fly airplanes. That was my dream. I thought I was going to be an aircraft designer until I went to college and discovered that designing aircraft can be quite tedious. But, then I started working on robots and found them to be much more interesting on many levels.
Q:
So did you see any robots while you were at MIT?
Terry Fong:
I did! There were robots all over the place, in the aeronautics department, EE, computer science. Lots of different kinds of robots too – small, big, all shapes and sizes – and that was really fun and interesting because there were so many different types.
Q:
Did you participate in any contests at the time?
Terry Fong:
No, I didn’t! It’s actually funny because now I work with many students and engineers who have participated in robot contests. And I’ve been a judge for robot contests. But I, myself, have never participated in a robot contest.
Q:
What was the first robotics project that you were involved in?
Terry Fong:
The first real project was when I was a sophomore in college. I applied for a part-time job in the MIT Space Systems Lab and started doing work on neutral buoyancy robotics.
Q:
And who did you work with on that?
Terry Fong:
The Space Systems Lab was run by Dave Akin, who was a professor in the MIT aero/astro department at the time. The lab used neutral buoyancy robotics as a way of developing and testing space systems.
Q:
And what kind of work did you do?
Terry Fong:
I’ve always been a software nut, so for me robotics was primarily about writing code to make cool things move.
Q:
And what was the next thing that you did that was robotics related?
Terry Fong:
I worked in the Space Systems Lab throughout undergrad and then I decided to stay at MIT for grad school. I did my Master’s thesis in the Space Systems Lab and I built a large underwater robot arm. The arm was designed to be similar in scale to the Space Station arm, but with different capabilities. It was an interesting system because it had a three-joint serial part and a parallel end-effector based on a Stewart platform. I called it the “Stewart Platform Augmented Manipulator” or SPAM. The idea was to use it to study large-scale, coarse/fine positioning.
Q:
What were some of the challenges in designing that system?
Terry Fong:
Well, I built the arm with a team of undergrads. I had freshmen, sophomores, and a couple of seniors working on it with me. At the start, we really had no idea how to build such a large, complex thing. Because it was an underwater arm, we had to decide, “Well, how are we going to power the system?” Since it was meant to be very large, we decided that we could not just build large motors. So we ended up using a combination of hydraulics – using water, not oil – and pneumatics. It was really interesting figuring out how to build the system and size it so that it could move large things underwater, including people. I remember calling lots of plumbing supply places and asking for various manifolds and other things, and they wanted to know what I was going to use them for. I said, “Oh, I’m building a robot.” At that time, it wasn’t really common for students to build underwater robots. These days, however, you see kids – even fourth or fifth graders – building underwater robots, and they have many contests. So, it is a lot more common now.
Q:
When did you finish your master’s degree?
Terry Fong:
I finished my master’s in 1990.
First Stint at NASA Ames
Terry Fong:
And then after that I came out to California to work at NASA Ames. I’ve been actually at NASA twice. The first time I was here from 1990 to 1994. That was a break from school for me because I wasn’t sure if I wanted to do a Ph.D. I thought it might be interesting to work in a research environment and so I came out to NASA Ames.
Q:
What were you working on when you came here?
Terry Fong:
NASA Ames has always been an interesting place to work because it is very diverse. The first time I was here – from 1990 to ‘94 – I worked on everything from high performance computing to virtual environments. We did parallel computing on a system called the iWARP, which grew out of a project at Carnegie Mellon called the WARP, basically a high-powered distributed parallel computing system. The iWARP used something called “systolic computing”, which is an analogy to the cardiac system because data would be “pumped” through the system in lockstep. But, what the robotics group became most famous for was virtual reality interfaces. Back in the early ‘90s there was a lot of interest and enormous hype about how virtual reality could be used for all kinds of things. NASA Ames was a pioneer in using VR interfaces for robotics. At that time, high-end graphics computers were large – literally refrigerator-size computers, like the Silicon Graphics Onyx, that cost several hundred thousand dollars. But, that’s what you needed for doing real-time interactive 3D graphics. We used virtual environments for testing robots at NASA Ames. We also worked on robots that went to the Antarctic. One was an ROV that we deployed under the sea ice at McMurdo Sound and remotely operated from a virtual environment. Later on we worked on the Dante II robot with Carnegie Mellon. Dante II walked into the Mount Spurr volcano in 1994.
Q:
What were some of the challenges of designing these environments and using them as an interface for the robot?
Terry Fong:
Well, you have to recall that back in the early ‘90s virtual reality was cutting edge. People were trying to use VR for all sorts of things – from simulation of robots, to financial markets, to social interaction. At the time, we did not really know everything about the psychophysics of how people perform in these environments. Most VR developers did not understand how to create stereo 3D graphics such that people could stay immersed for more than a few minutes at a time without becoming nauseous. It was also really tiring because head-mounted displays were pretty heavy at that time. Some were essentially twenty-pound helmets. And head tracking was done with electromagnetic systems made by companies such as Polhemus and Ascension. These trackers were extremely expensive and very non-linear, so sometimes you would move your head and there would be a big jump in what you saw. Overall, trying to create environments that people would be comfortable in for a long time and be productive in was very difficult.
Q:
So who was in charge of the robotics research then and who else was working in that group?
Terry Fong:
When I first came to NASA Ames in 1990, Mike Sims was running the robotics group. About 10 months later, Mike stepped down from that position and Butler Hine took over. Butler was the one who really pushed the group into developing virtual environments and virtual reality user interfaces. We also worked a lot with Steve Ellis, who is a very well-known researcher in spatial displays and human performance. I actually used to carpool with Steve to work, so we’d get an extra two hours every day to talk about virtual environments and robotics.
Q:
Were there any other people that you worked with who were related to the virtual environments or the robotics group?
Terry Fong:
At that time, one of the things that really distinguished the group was that we used virtual environments and remotely operated robots to perform scientific exploration. We worked pretty closely with planetary scientists who were interested in using VR and robots to understand remote environments. For example, we worked on one project with Carol Stoker to remotely operate an underwater ROV in Antarctica. Carol is a planetary scientist here at NASA Ames. She has worked on a pretty broad range of things. She’s worked in underwater environments. She’s done a lot of drilling work. Also, she’s well known for her work on Mars science. To me, the blend of engineering and science was really exciting and motivating.
Q:
And what were some of the motivations and constraints that came from this interaction with science and with scientists?
Terry Fong:
Well, part of our motivation was that we were trying to represent data in a way that would be meaningful to both engineers and to scientists, especially data that came from instruments carried by robots. In the early ‘90s, the best computer graphics systems were made by Silicon Graphics, but they were nowhere near what you see on the market today. Back then, high performance meant real-time rendering of flat-shaded polygonal models of perhaps several tens of thousands of polygons at most. Today, you pick up anything, even your cell phone, and it probably has 100 times – or more – better graphical performance than what you saw back then. So trying to represent scientific and engineering data in a meaningful way with such limited resources was a huge challenge. One approach to increase realism that was quite common was texture mapping, that is to say, mapping images onto polygons. Overall, it was very hard to create real-time 3D representations that were both meaningful and accurate – at least from a scientific perspective.
Graduate Work in Robotics at Carnegie Mellon
Q:
What made you decide to go back to get a Ph.D.?
Terry Fong:
Well, I had been at NASA for four years and had worked on a lot of remote operations projects and different kinds of software, and I decided that I really wanted to be able to focus on one area. I think one of the things that’s probably true just about anywhere in research is that you can work on project after project after project without ever focusing on just one thing for a long period of time. So, I wanted to go back and get a Ph.D. so I could spend multiple years working on something. Then, I had to make a decision of, “Well, do I stay in California?” – because clearly there are a number of good schools here – or, “Do I go someplace else?” And I chose to go to Carnegie Mellon because it was (and is) the best place for learning about robots.
Q:
And you mentioned that you had worked with some of the Carnegie Mellon folks on Dante and so did you already have some connections with people there from that project?
Terry Fong:
Yes! I actually knew many people at Carnegie Mellon extremely well before I went there for my Ph.D. I had previously visited several times while working at NASA. Each time I had been there for a visit, the reaction had been “Hey, the guy from NASA is coming so let’s go show him around and meet people.” So, I was extremely familiar with the research at CMU and I knew many of the professors at a professional level before going back as a student. In particular, I had worked quite a bit with Reid Simmons, Red Whittaker, and Chuck Thorpe – I knew them all extremely well. And, because of the work that we had done with the iWARP, I knew a lot of people in the computer science department, such as Thomas Gross and Jon Webb. Knowing all of these people as a researcher made it an interesting transition.
Q:
How did you choose an advisor once you got there?
Terry Fong:
I went there knowing that I really wanted to work with Chuck and with Red because I had worked with both of them before. In the course of the projects we had done together, we had talked a lot and found many similar interests. And, of course, having come from NASA, I wanted to do something space related, or something that was in the field. Both Chuck and Red did projects like that. But, I also considered other people too, including Reid Simmons and Eric Krotkov. In fact, I almost decided to choose Eric because during the “marriage process” at Carnegie Mellon – where students arrive and you spend time meeting different professors and then there is matchmaking between students and advisors – one of the things he said was that he had a goal to do robotics research in all of the five senses. That is to say, he wanted a robot that could taste, and one that could smell, and then one that could use its vision, etc. I thought that was a really great goal and I was really close to choosing him. But, then, I think the lure of working on field robotics really drew me to Chuck and Red.
Q:
And what were some of the projects that they had going on at the time?
Terry Fong:
Back in the early ‘90s, the Navlab Project was still going very strong. At that time, I think there were only two Navlabs. But, by the time I left, there were half a dozen (or even more) Navlabs of various sizes and shapes. The Ambler Project had wrapped up. The Dante Project had wrapped up. There was some work on building Lunar Rover prototypes. My initial work at CMU focused on adding proprioception to Navlab, so that the robot could “drive by feel”. Cars, even back then, often had safety systems designed to take some action to safeguard the driver if you start running off the road or skidding around. Anti-lock braking is a good example of that. With Navlab, what we were trying to figure out was, “Could we feed some information back into the system that would allow it to perform better when driving off-road?” In other words, we wanted the robot to realize, “This terrain is really, really bumpy,” or, “I’m shaking left and right or mainly going right, therefore I should try to correct and go left.”
Q:
And what were some of the ways that you implemented these solutions with Navlab?
Terry Fong:
I started off by thinking, “I’m not going to use anything that looks outside of the robot. I’m only going to use accelerations, changes in orientation, etc. and try to pick out patterns.” For example, I was interested in how rapidly the vehicle was bouncing up and down or maybe how strongly it was bouncing. But, it wasn’t clear (at least to me) whether you need just one sensor or lots of sensors. You have to look at, for example, wheel slip as well as inertial measurements. For humans, it’s clear that when we move, we rely on proprioception. You can infer a lot about the outside world, or at least what you think is going on in the outside world, and you make judgments based on those inferences. Being in an airplane in the clouds is a good example. You may think, “Hey, I’m flying level because my body tells me I’m level,” but when you come out of the clouds you may find that the plane is actually in a bank. We had the same sort of challenges with Navlab too.
Q:
What was your thesis on?
Terry Fong:
Well, let me first say that I have never followed the usual path through life. I’ve (almost) always chosen to take not just “the road not traveled”, but “the road not yet built”. Because I had worked at NASA and then went back to school, I was a bit of an unusual grad student – older, someone who had done research and had published papers already, and certainly someone who knew the faculty at more of a peer level. On top of this, after my first year and a half at CMU, my wife (who was also doing a Ph.D. there) joined a research group that did all of their research in Switzerland. So, we left Pittsburgh and moved to Switzerland and I ended up doing my Ph.D. research at a Swiss university, the Swiss Federal Institute of Technology in Lausanne or “EPFL”. Although the research was done at EPFL, my degree was from Carnegie Mellon! And during my first year at EPFL, my CMU advisor was Chuck, but he wasn’t able to support me because most of his research funding came from DARPA and it was difficult for him to support a researcher working out of the country and completely alone. So, I found myself thinking, “Okay. I’m in Switzerland. I’m trying to do my research for CMU and where am I going to get funding from?” But, I was fortunate because some people that I had worked with at NASA Ames had decided to create a startup company and they asked me to join it. So for a period of time I was a CMU grad student doing my research in Switzerland while being paid by a California startup company.
Q:
What was the company?
Terry Fong:
It was a company called Fourth Planet that created virtual environment visualizations of real-time data. Things completely unrelated to robots. Things like computer network monitoring – just looking at, say, bandwidth usage and connections for various places on the Internet. Like I said, it was a pretty different approach. I probably wouldn’t recommend that anybody go through grad school with the goal of doing your research 6,000 miles away from your university, 6,000 miles away from your advisor, while getting paid by a startup that you created in California, but it was a path.
Q:
So it’s at least two jobs in one?
Terry Fong:
It was at least two, maybe three jobs, but you know it was fun.
Q:
Did you work with any of the people at EPFL at all?
Terry Fong:
EPFL had an Institute of Robotics, but their focus was primarily on very small-scale systems… insect and behavior-based robots, as well as industrial systems. EPFL was known for precision manipulation and high-speed pick-and-place work. But the group that I joined was the “Virtual Reality and Active Interfaces” group, which developed a broad range of user interfaces. Computer-aided surgery. Visualization of remote environments. And some robotics. When I first arrived at EPFL I spent some time talking with several professors. One was Reymond Clavel, who was famous for the Delta robot. The Delta is widely used in manufacturing for very high-speed manipulation. I also spoke with Jean-Daniel Nicoud, whose group created several well-known small robots, including the Koala and Khepera, which led to a startup company called “K-Team”. I don’t even know if K-Team is still in existence, but their robots live on in many research labs. In any case, EPFL was certainly a big change for me from Carnegie Mellon. It was a big shift in terms of the way people approached research, in terms of facilities, many things. Going from a place like Carnegie Mellon, which had hundreds of researchers working in robotics, to a place that had maybe a dozen roboticists was a big change. But, it was also liberating in the sense that there weren’t any expectations of, “Oh. You can only do this research,” because there were few people in robotics at EPFL and everybody just did what they felt was really interesting.
Human-Robot Interaction
Q:
What did your thesis wind up being on?
Terry Fong:
Well, when I moved to Switzerland I wasn’t able to continue my work on proprioception because the Navlab I was using was a retrofitted Humvee. It was pretty hard to put a Humvee in my suitcase and take it to Switzerland. So I ended up in a completely different area, working on Human-Robot Interaction, or “HRI”. At that time, the HRI community was really very nascent. There was not even widespread recognition of the term “Human-Robot Interaction”. Instead, people were concerned about robot interfaces and human-robot communication. But, a lot of people around the world were starting to look at, starting to define, “HRI”. And I was really interested in the problem of how humans and robots communicate as they work together on similar tasks. But when I say “communicate”, I do not mean “how do they express themselves?”, but rather “What is communication useful for?” And, so my thesis ended up addressing what I called “collaborative control.” The central idea in collaborative control is that if a human is working on something and has a problem, he should be able to ask a question of the robot – and vice versa. The bottom line is that nobody has all the answers, but if you ask questions, you can benefit from the knowledge of others. This is especially true when you consider robots and humans, which have different perspectives, different sensing modalities, different areas of expertise, different levels of precision or ability, etc. If humans and robots can support each other by exchanging information, then they can work in a very collaborative manner. That approach was quite different from other HRI research at that time.
Q:
Were you talking to Reid too because he has that concept of mixed autonomy, which seems somewhat related but I don’t know if it is.
Terry Fong:
Well, for a long time within the AI community researchers have looked at mixed initiative, adjustable autonomy, sliding autonomy, and I don’t know how many other types of autonomy. But all of these architectures were really focused on the idea that the robot has autonomy and you just need to “dial in” the right bit of autonomy for a particular moment in time, task, scenario, whatever. I wasn’t interested in that because that seems like tweaking knobs on a control system. I was more interested in the notion that humans and robots could work as partners, or as peers, just like you and I are talking. In other words, when I have a question, I should be able to ask you and benefit from your knowledge. In my research, I was very focused on task-driven scenarios. I was very interested in situations where team members might say to one another, “We’ve got a job to get done. Therefore, we have some common ground because we’re trying to do the same thing. And, if I have a question, I can ask you because we’re working towards the same sort of goal.” So, my work focused more on task-oriented information exchange, rather than control of autonomy.
Q:
So what were some of the tasks that you tried out for this?
Terry Fong:
The most basic one was just moving from Point A to Point B, which is a classic navigation task. More importantly, this task is still a challenge for robots of any size, shape, or form, especially if the robot is operating in a natural environment where the perception of obstacles, terrain hazards, etc. is still a huge problem. So I was interested in having robots ask questions of the humans, such as, “Is this dangerous? Can I drive through this? Do I have to drive around it?” To do this, I ran some studies where I would put paper cats – they look like a cat, but are actually just a picture of a cat, cut out and mounted on cardboard – in front of the robot and have people remotely operate the robot. The robot would drive until it detected an obstacle and then it would send a picture to a human and ask, “Can I drive through this?” Depending on the situation, the answer might be, “Well, it’s a paper cat, so just drive over it.” Or maybe, “No, I don’t know if that is a cat or not, so drive around.” In short, even with a really basic task, driving from Point A to Point B, robots can really benefit from interaction with people.
Q:
And was it mostly natural language that the robot was using to ask questions, or were there other types of communication that you were working on?
Terry Fong:
The dialogue was very scripted from the standpoint that I had chosen a very, very specific task. I only used a pre-defined list of questions. The challenge was deciding which question to ask, because the robot might want to ask about a lot of different things at the same time. Ideally, you want the robot to choose the question that is most important for a given moment in time. But, it is also the case that you don’t want to be annoying, because you don’t want to ask question after question after question… If you do, then pretty soon the human says, “This is just too annoying. I’m going to stop answering questions.” So one thing I tried was to do a bit of user modeling, thinking that if a robot is interacting with an expert in navigation, it should ask certain types of questions. But, if the human is a novice (in navigation) – maybe they don’t normally drive robots but they’re still pretty skilled in some areas because they’re human after all – you should ask other questions. This ended up being a big part of my thesis, really, just deciding when to ask questions. And, for different users, should you ask different questions?
Q:
What were people’s reactions to the notion of robot as peer?
Terry Fong:
I think it was something that the HRI community didn’t have a good sense of: “Is this a good thing or a bad thing?” And in fact, even today, I would say that nobody really knows for sure. We see robots in everyday environments and, well, “Is it a peer? Is it a partner? Is it a tool?” is a great topic for long and lengthy discussions and debates. But, to me, that’s not really the key point. I mean, I don’t necessarily care whether or not a robot is a peer, or a partner, or a tool if, for a particular task – because I’m very task driven, especially here at NASA – the way we (humans) work with it gives us the ability to do something better than we could without it. To me, that is the fundamental thing. Whether or not the robot is a peer, or needs to be a peer, I don’t know.
Q:
And at EPFL you generally used small mobile platforms for this kind of work?
Terry Fong:
Yes, I worked with a bunch of Pioneer robots, which were made by ActivMedia. They’ve since changed their name to MobileRobots and I think they were recently purchased by Adept. In any case, when I worked with the Pioneers there were indoor models and eventually a somewhat rugged outdoor version.
Back to NASA Ames
Q:
Did any of this research then carry on when you came to NASA – NASA Ames?
Terry Fong:
When I came back to NASA – after having been gone for 10 years – I found that the group had changed quite a bit, but was still doing research in remote exploration robotics. The main reason I came back was to work with my friend, Illah Nourbakhsh, who had decided to take some time off from being a professor at Carnegie Mellon and was running the group at NASA Ames. Illah and I had known each other for years and years and years, but had never worked together. So, when I arrived, we created a project called the “Peer-to-Peer Human-Robot Interaction Project” that was an outgrowth of my thesis. We were trying to go beyond the kinds of things I had done in my thesis, and focus on spatial dialogue. We wanted to be able to say to a robot, “Shine a light here,” and have it act appropriately. The project involved the group here at NASA Ames, some researchers from the Robonaut group at NASA Johnson, Carnegie Mellon with Reid Simmons, and Alan Schultz from the Naval Research Lab.
Q:
What were some of the results and things that you learned from this project?
Terry Fong:
Well, this project was much broader than my thesis. We used natural language speech recognition. We also used spatial reasoning, so there was some computational cognitive modeling involved. And, we employed perspective taking, that is to say, “If I’m here and the robot knows that I’m here but it’s over there and I say, ‘Move this there,’ what am I talking about? Is that a reference to me and my body frame? Is it a reference to the object and the task I’m working on?” It was a bit of a challenge trying to figure out which area we really wanted to explore first, which area we wanted to make the most progress on. But, overall, I think that the primary result was that we were able to show NASA that robots and humans could work together in a more autonomous manner. It wasn’t like the traditional NASA approach, which is basically “Hey, we’ve got a robot. We’re going to send it off. Then, we’re going to command it and we’re going to monitor it.” Instead, we showed the idea that humans and robots could work more closely as independent peers. I remember doing lots of demonstrations to people… managers from NASA headquarters saying “Wow. That’s great. That’s just like science fiction.” And I thought, “So now we’re making science fiction into, hopefully, a bit of science fact.” That was a lot of fun.
Q:
How did you get them interested in this because we have heard a lot about the kind of command and control approach and also the need to be very conservative when you’re working on actual missions? So this is obviously a more research-oriented project, I guess I would say. So could you tell us a little bit about how you got it passed through and how you got people at NASA to accept this different way of looking at things?
Terry Fong:
I think a lot of it was due to good timing. Right when I was getting ready to come work with Illah, NASA had decided to start a very large, new technology development program. This program was designed to look at a very, very broad range of research areas. Some of those areas were near-term, but many were very far-term, kind of “What might be possible in 10/15/20 years?” And because of that, we had the freedom to try something that was very different from mission-focused, “Let’s reduce the risk as much as possible because we’re sending a $2.3 billion rover to Mars.” It was good timing to try really very different – very risky actually, I would say – research.
Q:
You said the project ran for about a year and a half. What happened then and what did you do after?
Terry Fong:
Well, the usual sort of thing! That brand new program that I mentioned was cancelled <laughs> and, because of that, NASA ended up shuffling the various projects around and creating a whole different set of research projects that were much more focused on near-term missions. So we wrapped up the Peer-to-Peer Human-Robot Interaction Project and moved on to other things.
Q:
Can you tell us about some of the things that you did then?
Terry Fong:
Over the past five years, we’ve moved from Peer-to-Peer Human-Robot Interaction – at least here at NASA Ames – to what I’m now terming “Robots for Human Exploration”. The idea is that robots, unlike the ones that we use on Mars right now, can be used to improve the way that humans explore. If you think about the way that we use robots today, such as Spirit and Opportunity and eventually Curiosity, we have them do everything because there are no humans with them. Therefore, we are extremely conservative in how we operate them. We don’t want to lose them because they’re the only things in a place that we probably are not going to be able to get to again, even with another robot, for a very long time. Then, contrast that with the approach that we’ve taken with human exploration. The last time we had a human explorer on another planetary surface was almost 40 years ago, in 1972, when Apollo 17 was on the moon. Jack Schmitt, the geologist on Apollo 17, was the only scientist to actually walk on the moon. It has been a long, long time since then, but we’ve learned to do a lot more in terms of how we use robots, whether they are rovers or landers or spacecraft, orbiters, satellites. And there’s been a big change in what you can do from a technology standpoint since the last time we had humans on another planet. One of the things that we’re interested in now is this whole question of “Can you use robots to improve the way that humans carry out exploration?” Can you improve the things that humans do by, for example, having robots work before humans – doing things such as scouting, setting up equipment, setting up communications relays? Perhaps even doing initial survey work. And then, can you have those same robots support humans, as automated transport or maybe safeguarded transport? So you might have a robot, which was working independently before, but now humans can jump on it and, because it has sensors, it can avoid hitting obstacles when they’re driving. And then, after the humans leave and go home, you might use those same robots to do follow-up work. By “follow-up”, I mean completing tasks that were started by humans or performing tasks that are complementary, or supplementary, to what humans are doing. The upshot of all of this is that we do not have to rely uniquely on humans, or robots, for everything. Instead, we can use robots for tasks that would be very unproductive for humans to do. Systematic survey is a good example of that. A lot of the work that we need to do to understand an area involves making thousands and thousands of repetitive measurements in a very structured way. Sending a human to another planet, which is extremely costly and extremely risky, simply to make thousands of measurements in a lawnmower pattern does not really seem rational. Many people would say, “Why would you do that?” Well, you do it if that’s the only way you can understand the environment. But, if instead you had the option to employ robots for survey, perhaps interacting with local human explorers when needed, well, that is a game changer. Overall, the idea of “robots for human exploration” is something that I think is very powerful for NASA, and we’ve spent a lot of time in the past few years trying to understand how to build robots that work before, in support of, and after humans.
Q:
Is there a view to implementing this any time soon in one of the missions?
Terry Fong:
As I said, we’re driving towards supporting human exploration. So, of course, the basic question is “When are humans going to get off the planet and actually set foot on some other planet or an asteroid?” Up until last year, the answer was, “Oh, we’re heading towards the moon. We think we will be there in the 2020 timeframe.” Well, that’s changed again, both because of political realities and also because of the economy. There are some things that we’re just not able to afford right now, and so I don’t know when we’re going to see humans back on the moon, or on the surface of Mars, or on an asteroid. I do firmly believe it’s going to be sometime during my lifetime, but I don’t know the specific date. At the same time, however, NASA continues to spend a lot of effort figuring out how to create robots that can support humans in future exploration. So we are continuing to do a lot of work, primarily testing here on Earth in planetary analog environments – places on Earth that have some characteristics that are similar to the moon or Mars in terms of terrain, in terms of geology. That’s really where our focus is these days.
Robotics Group
Q:
You also became the director...
<off topic conversation>
Q:
...of the robotics group. How did that happen, what was the group like when you took it over, and how have you been developing it since?
Terry Fong:
As I said before, I came back to NASA Ames because Illah was here running the group and I’d always wanted to work with him. We had known each other for a long time but never really worked closely together, other than, well, just before coming here, we had written a survey paper on human-robot interaction. That paper is incredibly dated now and I’m actually horrified that people still cite it, because it’s so old now. But...
Q:
But it’s the first one <laughs>.
Terry Fong:
But it was the first one out there and because we enjoyed writing it, we thought, “We should work together.” So I came back to California, back to NASA Ames and worked with Illah. But, then six months later, Illah got tenure at CMU and decided “bye-bye California, back to Pittsburgh.” So he left! Actually, it was a great six months together and we continued working on the Peer to Peer Human Robot Interaction Project, even when he was back in Pittsburgh. But, when he left I basically inherited the group from him. At that time, I think there were about 15 people in it. We’ve really grown since then, over the past 6 years, to 32 people and a group that does pretty broad research. It has been a really interesting journey over the past six years.
Q:
What are some of the priorities for the robotics group?
Terry Fong:
We are creating technology to improve exploration of remote environments. That doesn’t mean only robots. We’re also interested in using orbiting spacecraft to capture detailed mapping information. The challenge is how you take that information, which can be very, very large – perhaps petabytes of data – and visualize it in a way that anybody, whether they are a scientist, an educator, a student, or a grandmother, can understand. Over the past few years, we’ve taken software that we originally developed for robot navigation – using stereo images to create 3D maps that are useful for navigation – and turned it into software for building planetary-scale maps. A large part of my group now works on automated planetary mapping. We take lots of data from Mars orbiters. We’ve been working with the HiRISE imager on the Mars Reconnaissance Orbiter. We’ve also worked with a number of other datasets from Mars and the moon. For example, the Lunar Reconnaissance Orbiter has a camera called the Lunar Reconnaissance Orbiter Camera, or LROC. We’ve been processing data from LROC as well as historical datasets. Some of the richest data, some of the best data, believe it or not, is actually from Apollo. Apollo 15, 16, and 17 carried film mapping cameras, the Apollo Metric Cameras. During the past several years, Mark Robinson at Arizona State University has been scanning the original film with a photogrammetric film scanner. This is a scanner that can scan down to the film grain level. So you end up with these enormous, really enormous images of the film strips, which were perhaps a meter long and maybe 20 or 30 centimeters wide. My group then takes that information and applies computer vision methods that we originally used for our robots to create large-scale image maps. And then, for areas where we have stereo overlap, we can create 3D terrain models. But, the thing that’s been so exciting for the past few years is that our next-door neighbor – I mean, literally across the fence – is Google. We’ve worked with them to create versions of Google Earth for the moon and Mars. So today, for example, if you download Google Earth, there’s a little icon on the toolbar that looks like Saturn. If you click on that, you can switch from looking at the Earth to looking at the moon or looking at Mars. And I’m incredibly proud that just about everything you see there was developed by my group: the base maps, the images, the tours, which allow people from all domains, all areas of interest, all areas of learning, to interactively explore data collected by NASA. To me, that’s a wonderful thing.
Q:
If I remember correctly, Illah had been working with the Google Earth folks at some point too. Were you also connected with that?
Terry Fong:
Just before Illah left, he and Randy Sargent started a project called Global Connection, which focused on interactively exploring images in really new ways. One thing that Illah and Randy created was a truly amazing, wonderful thing called GigaPan. The inspiration for GigaPan came from Mars. There’s a camera called the Pancam on both the Spirit and Opportunity rovers and some of the original pictures that came down from that camera system were things that Randy, who was working here at NASA Ames at the time in the robotics group, wanted to view in a better way. So he started creating what became the first version of the GigaPan browser, being able to actually work with panoramic data that came from Mars, and that was the inspiration for what became GigaPan.
Q:
You mentioned that one of the things that you’re working on and are really excited about is having these visualizations and this different kind of data and exploration open to more people. There are a lot of people who are now looking at how you could use crowdsourcing to actually explore things. Is that part of the vision at all or <laughs>...
Terry Fong:
Yes, definitely! It’s nice the way a lot of things come full circle – you may start off in one direction and create something new, and then eventually it wraps back around. GigaPan’s a good example of that. It started with Mars data as a way of trying to visualize and allow scientists to browse panoramic datasets. Eventually it became a robot camera, which, by the way, a spin-off company is now selling commercially. And then last year, we used GigaPan to help scout ahead of a simulated human exploration mission. We captured panoramas, put them online, and let the general public vote on where the humans should explore. In other words, we started off with a robot system that was inspired by data from a robot on Mars, which eventually led to the creation of a commercial platform that was used to plan a simulated mission for testing new technologies and techniques for exploration, including involvement of the general public.
Q:
How did that work out?
Terry Fong:
It worked out really well on two levels. First, it really raised awareness of some of the future mission work that NASA is doing right now – trying to plan for the day when humans go to other planets. Second, it was really rewarding seeing some of the comments from the public, especially from young students saying, “Oh, hey, this is why I want to grow up and be a rocket scientist.” Also, some geology students at Arizona State University used these panoramic images to do field geology. They used the panoramas for analysis that we then fed back into the planning for the mission. I think crowdsourcing can work on different levels depending on the expertise, depending on what you’re trying to do. In one case we were working with the broad public, just asking them, “Vote on what you think is most interesting.” But then we also crowdsourced more detailed scientific information by working with students, who had some limited geology knowledge.
Q:
What was Chuck Thorpe like to work with as an advisor?
Terry Fong:
I was phenomenally lucky to have him as an advisor. Then, when I got to EPFL, I was lucky to also have Charles Baur as a co-advisor. I am absolutely convinced that they are twins separated at birth. They would <laughs> probably be horrified to hear that because they look so different, and they obviously have such different backgrounds, but they have a very common approach towards research. They were always very, very open to new ideas. They were willing to give me plenty of rope to hang myself with, but then helped to support me when things didn’t go right. I don’t think that I could have actually done a thesis where I was in Switzerland, doing research for a CMU Ph.D. while funded by a startup company, if I hadn’t had those two. I still feel phenomenally lucky.
Challenges of Robotics
Q:
What do you see as the major challenges facing robotics over the next five to ten years?
Terry Fong:
Well, these days you see robots in many different places. They’ve certainly become much closer to the average person. A lot of people have robot vacuum cleaners. The Roomba is hugely successful. The military, of course, is using robotics more than ever before… and that’s both expected and somewhat disturbing. I think whenever you have technologies that radically change things, there’s always a question of “Is that technology going to be used appropriately?” That’s not to say that you should not use robots for military operations. It’s a question of “Well, how do you do that? And what is the implication of doing that?” The same thing, I think, could be said about almost any sort of new technology. For space, the question is “Can we use robots to improve the way humans do exploration?” It’s true that we’ve learned a tremendous amount from using robotic explorers, whether they are rovers or landers or spacecraft. But, at the end of the day, there’s this fundamental urge, I think, for humans to explore and it’s clear that, if we’re going to have humans spending more and more time off the surface of the Earth, we need to find ways to make that more productive, less risky, and perhaps more cost-effective. And clearly robots offer a possibility of addressing all those things.
Q:
For young people who might be interested in a career in robotics, what kind of advice do you have for them?
Terry Fong:
Well, I’m a software guy, so my advice is to learn how to write code early. Learn every single language, every platform, out there because so much of robotics is based on software. This is not to say that creating new mechanisms isn’t important. I’m continually surprised by the way robots evolve mechanically and electronically. But, to me, the thing that really makes robots different from, say, just remotely controlled vehicles is that they can think and they can be independent and that’s all software. So for people who want to get into robotics, I’d say “Learn everything you can about software engineering, everything you can about A.I. Learn everything you can about creating user interfaces.” And that’s all software.
Evolution of Robotics
Q:
You mentioned that you were there at the beginnings of the HRI community. Could you tell us a little about who else you communicated with about that, other than Illah, at the time, what it was like, and how it developed over the last, I don’t know, 10 years or <laughs>...
Terry Fong:
I’m sometimes asked, “You were there at the start of HRI, so how come you’re not there now?” Well, it’s not because I have a lack of interest! I mean, that’s really been one of the things that I’m most interested in, the whole question of how humans and robots interact. Over the past few years at NASA, however, I’ve focused, like I said, on this notion of robots for human exploration. This approach requires interaction between humans and robots, but it is not just proximal. And, it doesn’t have to be real time. It’s more at the human-robot coordination level. But, in terms of human-robot interaction as a community, as a domain, as a specialty of robotics, I do feel like I was there at the start. There was the creation of the Human-Robot Interaction Conference, which has become the core conference for issues of human-robot interaction, human-robot teaming, and human-robot coordination. When that was first getting started, the people who I think were most involved were Illah, Mike Goodrich at Brigham Young, and Alan Schultz at the Naval Research Lab. There were other people who were, I think, still in school at that time, including Holly Yanco. And, there were certainly people that are still involved today – Maja Mataric, for example. And then there were other people in government labs – Jean Scholtz, who was just coming towards the end of her government career – interested in trying to figure out, “Well, if we’re going to study human-robot interaction, how do we measure it? How do we assess, at some level, in some way, the way that humans and robots interact?”
Q:
Does HRI feed into the rest of robotics or is there still a division there? What are some of the challenges to having those <laughs> communities?
Terry Fong:
Robotics, by definition, is interdisciplinary, multidisciplinary. I’ve even heard “transdisciplinary” recently, though I’m not quite sure what that really means. Basically, it’s inherently broad. So, one question is whether there is a “boundary” around robotics. And, in some sense, this is the same question that people asked years ago about artificial intelligence. I’ve heard “Everything you achieve in robotics, that’s artificial intelligence.” Well, that’s not really true! I think everything you do is robotics, and artificial intelligence is this little tiny piece. So it’s hard for me to really try to wrap my arms around it and say, “This is within robotics and that is not.”
Q:
We were talking about HRI, but I don’t know if you wanted to say anything else about...
Terry Fong:
I guess the only thing I would say about HRI is that because it is so multidisciplinary, I think it has the effect of pulling others into the robotics domain, or at least increasing awareness about robotics. People from design, for example, who may not have even thought about robots before. People interested in ethics, for example. Just looking at these things that we’ve now created that are more or less autonomous or semi-autonomous – how do they interact with us? And, to me, perhaps the biggest contribution of HRI is the fact that it is pulling more and more people and more and more domains into robotics in general, and that’s a good thing, because that means that robotics touches other fields and can learn from other fields and can give back to other areas. Whether or not HRI exists within this bubble called robotics, or if it is something that bumps into it from various places, that’s actually irrelevant. The fact that it’s there, the fact that people care about it and are learning about things together… that is the most important thing.
Closing Remarks
Q:
Great, thank you. Anything else? Is there anything you’d like to add or something you think we missed?
Terry Fong:
No, I don’t think so.
Q:
Great robot stories <laughs>.
Terry Fong:
Yeah, great robot stories. I’ve been really fortunate to have worked with a lot of great people in robotics, people who really care about doing meaningful things. And certainly, if you haven’t yet talked to them – and I’m sure that you will – people like Chuck Thorpe and Reid and Red and others. They are the people that really created robotics.
Q:
Thank you very much.
Terry Fong:
Sure, my pleasure!