About Oussama Khatib
Oussama Khatib was born in Aleppo, where he grew up before moving to France for his undergraduate and graduate studies. He attended the University of Montpellier, where he earned his B.S. and M.S. in Electrical Engineering in 1971 and 1974, respectively. He later completed his Advanced Degree in Automatic Control in 1976 and his Ph.D. in Automatic Systems in 1980 at l'École Nationale Supérieure de l'Aéronautique et de l'Espace. After graduating he joined the Computer Science Department at Stanford University as a Senior Research Associate from 1981 to 1989. From 1990 to 1999 he was an Associate Professor of Computer Science and (by courtesy) Mechanical Engineering at Stanford University, before being promoted to Full Professor in 2000. He also served multiple terms as a Visiting Professor at various universities, including the University of Singapore, EPFL, Paris VI, and Scuola Superiore S. Anna, as well as Director of the Stanford Computer Forum from 1997 to 1999. His other professional involvement includes serving as President of the International Foundation of Robotics Research, Director of the Stanford Robotics Lab, and a member of the Stanford University Bio-X Initiative.
His research interests in robotics include autonomous robots, human-centered robotics, human-friendly robot design, dynamic simulations, and haptic interactions. For his work in robotics he has received several awards, including the Japan Robot Association (JARA) Award in Research and Development in 1996, the IEEE RAS Pioneer Award in Robotics and Automation in 2010, and the IEEE RAS Distinguished Service Award in 2013.
In this interview, Oussama Khatib discusses his career in robotics, focusing on robot control and motion planning. Describing his work and research, he outlines his time at Stanford University and his involvement in several robotics projects, including the Stanford Robotics Platforms—Romeo and Juliet. Discussing the evolution and challenges of his work, he describes his move towards humanoid robotics and his involvement with robotics societies and activities, and provides advice for young people interested in a career in robotics.
About the Interview
OUSSAMA KHATIB: An Interview Conducted by Peter Asaro, IEEE History Center, 10 April 2013.
Interview #726 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.
Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center at Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 USA or firstname.lastname@example.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, email@example.com.
It is recommended that this oral history be cited as follows:
Oussama Khatib, an oral history conducted in 2013 by Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.
INTERVIEWEE: Oussama Khatib
INTERVIEWER: Peter Asaro
DATE: 10 April 2013
PLACE: Stanford, CA
Early Life and Education
So we usually just start by asking you to introduce yourself and tell us where you were born, where you grew up and went to school.
So, Oussama Khatib. I was born in a country that is very much in the news these days, unfortunately. For the wrong reasons. I was born in Aleppo. Aleppo is a very beautiful city. The city of Aleppo goes back to the Sixth Century B.C. And it is sort of the terminus of the Silk Road. So it is one of the oldest continuously inhabited cities in the world. And I grew up there. I came to France to do my undergraduate and graduate study after high school. I lived in Montpellier in the south of France, where I did my undergraduate study. And then I went to Toulouse. Toulouse, the capital of aeronautics and space in France, where I did my Ph.D.
And what did you study as an undergraduate? I’m not aware.
So I studied automatic control, what is called automatic control, electrical engineering. And I was doing a project at what we call in France a Grande École, at the Grande École SUPAERO, which is a school focusing on aeronautics and space. And I was doing a project on AI. And I was working completely away from the physical robot. <laughs> And one day my advisor told me about a project on industrial robotics, and was asking me whether I would be interested in doing further studies. And that’s how I came to know about robotics, became involved in that project and did my Ph.D. in robotics. And that was probably one of the first Ph.D.’s at SUPAERO in robotics, but certainly it was among the very few theses in France. We were a few hundred at the time, and today we can say we have tens of thousands of people involved in research in robotics.
So what year did you finish your Ph.D.?
So I finished my Ph.D. in 1980. And by, I don’t know, destiny or chance, in 1978, I was presenting my first paper on the artificial potential field, the concept that allows us to move around obstacles reactively. I was presenting this paper in Italy and I met a professor from Stanford University. This is Professor Roth. I’m sure you had an interview with him. <laughs> Professor Roth invited me to visit Stanford as a post-doc after my Ph.D., which I did. But what happened is he invited me to stay one year and I never left. And he keeps reminding me of that.
<laughs> Great. So what was your thesis project on, and who was your advisor?
So the advisor, the professor who was supposed to be my advisor, left. So it was a sort of self-made project that really focused at the time on this idea that we had a lot of work in analyzing the kinematics of a robot. In analyzing the dynamics. And even in controlling the robot. So I was wondering, “What can I do?” I mean, they solved all the problems. And I thought maybe I should do something more than just tracking a trajectory. And I was thinking perhaps we could do something more in the control, if the control of the robot trajectory can accommodate obstacles. And the inspiration I had came actually from humans, the way humans move. And at the time I was observing my son, who was learning so fast compared to any robot that can learn. And I was realizing that there must be a different way of controlling robots among obstacles. And I proposed this idea of creating an attractive potential, a repulsive potential, and just sort of move with a goal position attraction, an attractive force. And when we come into the proximity of an obstacle, just to create a repulsive force to move around those obstacles. So this idea was very simple, and in fact I was able to implement it a few years later because we didn’t have robots at the time. So I implemented it in Montpellier at the university where I did my undergrad; I went back there and implemented it on a robot, one of the first robots that was developed for a nuclear energy plant. And we managed to get it to work after a week of adjustment. And then I realized that it’s not enough. I mean, the simple idea is great, but we really need to understand more about the system, to understand the dynamics of the system with respect to its motion in the environment, in the task space.
And that led to a lot of research that I was doing later on how to model the dynamics of a robot in terms of the task behavior, and how to model also that dynamics when we are interacting with the environment, that is, when we are making contact, physical contact, with the environment. And that became the theme in my research. The theme is not only about moving in free space, without collision, but really moving around obstacles while in contact or multiple contact, and then still managing to do that in a way that is reactive. So that was the work in my thesis.
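[Editor's note: the attractive/repulsive scheme described above can be sketched in a few lines. This is a minimal illustration of the artificial potential field idea, not Khatib's original formulation; the gains, the disc-shaped obstacle, and the force clamp are all illustrative assumptions.]

```python
import numpy as np

def potential_field_step(q, goal, obstacles, k_att=1.0, k_rep=1.0,
                         rho0=1.0, step=0.01, f_max=2.0):
    """One gradient step: attraction toward the goal plus repulsion
    from obstacles within a distance of influence rho0. Obstacles are
    (center, radius) discs; all gains here are illustrative."""
    # Attractive force pulls the robot toward the goal position.
    force = -k_att * (q - goal)
    for center, radius in obstacles:
        to_q = q - center
        dist = np.linalg.norm(to_q)
        d = dist - radius                     # distance to the surface
        if 0 < d < rho0:
            # Repulsive force grows sharply near the obstacle surface.
            force += k_rep * (1.0/d - 1.0/rho0) * (1.0/d**2) * (to_q / dist)
    # Clamp the force so a single step stays small and stable.
    norm = np.linalg.norm(force)
    if norm > f_max:
        force *= f_max / norm
    return q + step * force

# Usage: a point robot skirts a slightly offset disc and reaches the goal.
q = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [(np.array([2.5, 0.3]), 0.5)]
for _ in range(3000):
    q = potential_field_step(q, goal, obstacles)
```

Note that this purely local scheme can get trapped in local minima for less favorable obstacle layouts, which is exactly the limitation that motivates the global planning discussion later in the interview.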
What kind of robots were you working with at the time?
So this is the interesting part: the robots at the time were mostly hydraulic robots. Very large, heavy robots. There was a robot from Renault that I think was called the R80. Huge. Eighty kilograms it can carry. And I really didn’t implement my approach on that robot, but I used this cable-driven robot called the MA23 manipulator that was designed and developed by Jean Vertut. Jean Vertut is probably the father of robotics, robot design, in France. And there are a number of very, very influential people in my early career in robotics, and Jean Vertut was one of them. Jean Vertut developed most of the concepts that led to cable-driven actuation. The three-fingered hand Kenneth Salisbury developed was based on a lot of those concepts. Many of the robots that were later developed were based on ideas and concepts that Jean Vertut developed. Jean Vertut passed away very suddenly years back, and the whole robotics field missed him. But I had this privilege to know him. And also the privilege of having my paper selected by him, my first paper, which became the introduction through which I came to know Professor Roth, who eventually invited me to come to Stanford.
Robotics in France
So who else was working in robotics in the early ‘80s in France?
So I became involved during these years in one of the early projects in robotics called Spartacus. Spartacus is a project to develop a robotic system that is aimed to assist humans. And in this project a number of research groups around France, from the South to the North, in <inaudible> were involved in that pioneering project. I think this was probably one of the earliest projects in Europe. And it brought a number of people who are still today involved in robotics. So if we go over the group of people who are today involved in robotics we will find many of them who participated in that project. However, my research itself was very closely connected with another laboratory in France, because my work was pursued at the school SUPAERO. Next to SUPAERO there is a very important center of robotics, and this is called LAAS, L-A-A-S. And at LAAS, there was a very important project initiated and led by Georges Giralt. And Georges Giralt was in fact one of my other mentors. In fact, he was the director of my post-doctoral study. He was the chair of the committee of my defense, and with Georges we had long, long years of interaction, sharing a lot of the ideas that in fact became part of my own research or the research that has been pursued over the years at LAAS. Georges Giralt was leading the robotics research effort at LAAS, and one of the researchers there was Marc Renaud. Marc Renaud was one of my very close colleagues, because both of us were interested in dynamics. And Marc Renaud, together with Wisama Khalil from Montpellier – the three of us, we were working on different aspects of dynamics. In fact, Wisama and Renaud developed software to generate automatically, symbolically, the dynamic equations of robots, which in fact I used myself in developing some of the dynamic models. But my approach to dynamics, as I said, was based on the idea that we need to understand dynamics at the level of the task. Not just at the level of the joints.
So I was doing this transformation and I had a great partner in Marc Renaud for discussion. And Marc Renaud was very close to me in Toulouse. And we shared many moments of discussion on dynamics.
Arriving at Stanford
Great. And so who was working at Stanford when you arrived there as a post-doc?
So arriving at Stanford, it was wonderful. We had the AI lab in Margaret Jacks. In the basement of Margaret Jacks, next to the DEC-20. And we had a small room with one table and four robots. The AI lab had just moved from the hills down to Stanford, to the basement of Margaret Jacks. So at that time, there were a number of, I mean, extraordinary people that were involved in the AI lab. The AI lab attracted a lot of talent in robotics over the years. Just behind me you can see the Vic Scheinman robot, the Stanford Scheinman arm robot. And there were a number of people who were involved in the ‘70s in the AI lab. But in the ‘80s when I arrived, John Craig was there. Tom Binford was leading the AI lab. And my friend, the very famous, the very famous, researcher in both AI and robotics, Rodney Brooks, was there. In fact, many other people came through as visitors. We had also a very famous designer, Ken Salisbury, who was there. There were a number of people from the field of vision. There was Harlyn Baker. I don’t want to go over all the names. I have the list and I have their photos. If you would like, <laughs> we will look at that later. But it was an amazing, dynamic environment that I’ve never experienced in my life. It was just the discovery of that exciting place that made me decide to stay one more year and one more year. And then Stanford just kept me forever, because it is very, very difficult to leave this environment. The research environment, the excitement, the opportunities to take on a new project, is really unique. And what is interesting also is the fact that while I was at Stanford, I was in France as well, collaborating with my colleagues. I was in fact involved with other collaborations in Japan. In the year ’84, ’83, and in fact December ’83, I visited Japan for the first time, and I had the great, great opportunity of meeting Professor Inoue, Professor Umetani, Hirose, Professor Nakamura, who was a student at the time.
And his advisor, Professor Hanafusa, and Professor Yoshikawa, and also a number of, I mean, just pioneers in robotics. All of that was done during one trip I had to Japan in 1983. And since then I kept in touch with all these different laboratories over the years, and that was really an amazing experience. So what I discovered about Stanford and about the AI lab is the fact that what we are doing is so much connected to the world, and that became one of the other dimensions for my research.
Early Robotics Projects
And so how did that influence your work and what sorts of projects did you start working on?
Well, this is really interesting, an interesting question. Looking back at my Ph.D. where I started, looking at the vision I had in that work, I feel like I’m still working on my Ph.D. That is, from the beginning, I took on a challenge, which is to try to understand the robot system with respect to descriptions of the task. That is, to try to not just worry about modeling the physical robot in a convenient space, which is essentially the joint space. I was interested in understanding the characteristics of the robot when it’s interacting at a given point with the environment. So what would be the effective masses that I will feel here when I’m going to touch the environment? What is the normal space in which I can exert forces and what would be the corresponding tangent space where I can move? A lot of questions related to the dynamics and the control of robots in task space have not been really fully addressed. There are few people around the world who really look at those problems. Neville Hogan thought about it in the context of analyzing human motion and proposed very interesting models using impedance control. There are a number of other researchers who looked at those problems, but synthesis in task space has not been something that a lot of work was done in, and that was really one of the things that I was committed to pursue and to work on. So there was that work that was taking me from thinking about, “How can we control just one effector?” Then how can we go from one effector that is not only moving in free space, but obviously interacting with the environment? But then how can we go from one robot end effector to two end effectors? So how can we do cooperative manipulation when we start to have internal forces? Then how can we have general cooperation between multiple robots? Then how can we make these robots mobile manipulators? So just behind me you can see Romeo and on this side you have Juliet.
And these are the two platforms that we developed to explore the workspace of a human. That is, we had always thought about these manipulators attached to a table, bringing the components to be assembled by those robots.
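[Editor's note: the task-space quantities mentioned above, such as the effective mass "felt" at the end effector, can be illustrated numerically. The sketch below maps the joint-space mass matrix M(q) of a planar two-link arm through its Jacobian J(q) to obtain Lambda(q) = (J M^-1 J^T)^-1. The two-link point-mass model and its parameters are illustrative assumptions, not Khatib's exact derivation.]

```python
import numpy as np

l1, l2 = 1.0, 1.0        # link lengths [m] (illustrative)
m1, m2 = 1.0, 1.0        # point masses at the link tips [kg] (illustrative)

def jacobian(q1, q2):
    """End-effector Jacobian of a planar 2R arm."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def mass_matrix(q2):
    """Joint-space mass matrix for point masses at the link tips."""
    c2 = np.cos(q2)
    m11 = (m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*c2
    m12 = m2*l2**2 + m2*l1*l2*c2
    return np.array([[m11, m12], [m12, m2*l2**2]])

def effective_inertia(q1, q2):
    """Task-space inertia Lambda = (J M^-1 J^T)^-1."""
    J = jacobian(q1, q2)
    M = mass_matrix(q2)
    return np.linalg.inv(J @ np.linalg.inv(M) @ J.T)

Lam = effective_inertia(0.3, 0.8)
# The mass felt when pushing along a unit direction u is u^T Lambda u,
# so the "effective mass" depends on both configuration and direction.
u = np.array([1.0, 0.0])
print(float(u @ Lam @ u))
```

Away from singularities, Lambda is symmetric positive definite, and its direction dependence is exactly the point made in the interview: what the environment "feels" at a contact depends on where and in which direction the robot is touched.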
Now, if we are really to think about robots that would interact with humans or move in the human environment, we would need to have those robots not fixed but mobile. So we started to look at mobile manipulation, and that brings a lot of interesting issues in the dynamics of macro/mini structures. The properties of dynamics in relation to reduced effective inertias, redundancy, and all kinds of things that were amazingly interesting to analyze and work on. As we pursued this work, we discovered that, “Well, there’s more.” When we got to humanoid robotics we have the branching, complex branching structure, with multiple tasks that need to be simultaneously achieved while maintaining constraints, while maintaining balance, given the fact that these robots are under-actuated. It’s amazing. You have one problem after the other. It’s so rich, so exciting. So that was one line of the work that I was really interested in, dynamic control and control structures.
But there was, as you know, from the beginning, my interest also in the aspect of interaction with the environment through understanding of obstacles and the need to avoid collision and the need to create motions around those obstacles. So having resolved the problem in terms of reactivity doesn’t mean that we resolved the global problem. Because motion planning requires us to think about a global path. Now, if we think about the computational complexity of this, we realize that this is exponential in the number of degrees of freedom and it is costly to find a path, to plan a path, especially for a robot with many degrees of freedom. So I was always interested in finding a way to connect those planning levels, that is, the level where we are planning the motion, with the levels of execution, where those levels are running at very high frequencies. So if we think about the control, we need to have a millisecond of control <inaudible>, whereas if we think about a motion planner, well, between the time we find the obstacles and perceive the environment to the time we come up with the new plan, it’s going to take a long time. Especially in the ‘80s. So what we really tried to do was to create an intermediate level that would allow us to think about this connection between motion planning and control in a way, again, similar to that of humans. So the idea that became known as elastic planning is to start with a plan and continuously adjust this plan reactively. Locally, by mechanisms as simple as repulsive potential forces, but the trajectory itself is treated as a physical entity with properties of elasticity that will make it shorten and adjust to the environment and the changes in the environment. And that became a very, very interesting technique that also is being pursued by a number of my current and previous students, like Oliver Brock, who worked on this.
Sean Quinlan developed the early version of it, and now a number of new students are working on taking this further, not only to move in free space, but also to think about the contact space.
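[Editor's note: the elastic-band idea described above can be sketched as a simple relaxation. Each interior waypoint of a planned path is pulled toward the midpoint of its neighbors (the elastic contraction that shortens the band) and pushed away from nearby obstacles (the repulsive adjustment). The gains, the disc obstacle, and the update rule are illustrative assumptions, not the published formulation.]

```python
import numpy as np

def relax_band(path, obstacles, k_c=0.2, k_r=0.05, rho0=1.0, iters=300):
    """Relax a path like an elastic band; endpoints stay fixed.
    obstacles: list of (center, radius) discs with influence rho0."""
    path = np.array(path, dtype=float)
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            # Contraction: spring toward the midpoint of the neighbors.
            f = k_c * (0.5 * (path[i - 1] + path[i + 1]) - path[i])
            for center, radius in obstacles:
                diff = path[i] - center
                dist = np.linalg.norm(diff)
                if dist < radius + rho0:
                    # Repulsion: push out along the radial direction.
                    f += k_r * (radius + rho0 - dist) * diff / dist
            path[i] += f
    return path

# Usage: a straight 9-point plan through a disc bows around it.
plan = np.linspace([0.0, 0.0], [4.0, 0.0], 9)
obstacle = (np.array([2.0, 0.2]), 0.5)
band = relax_band(plan, [obstacle])
```

The key property is the one stated in the interview: the global plan is kept, but it deforms continuously and locally as the environment changes, without replanning from scratch.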
It’s really this kind of interface between logic planning and control theory.
Right. I mean, the whole idea that really is pursued here is the human. I mean, if we think about humans, humans never go through this exercise of precisely planning every aspect of their motion. We do not compute a full trajectory or a full plan. What we do is like we are sitting here, and after the interview you’re going to go back to the parking lot. You’re not going to plan all the details of that motion. What you know is you need to go to a door and you know that it’s going to be feasible to go to the door. And probably you have some intermediate landmarks along the way. So we plan with this concept of feasible, reachable locations, and along the way we know that we are going to accommodate our motion using our reactive behavior. So we are interacting with the world as we are moving. In your car you are driving around other cars and people and bicycles. It wasn’t completely in the plan. But you know that skill is there. So we make use of skills, and when we are interacting with the physical world, when we are making contact, when we are assembling objects, when we are putting things together, we are also using skills. When we are playing tennis, when we are doing any challenging task, we are using skills. So the idea is, first of all, let’s develop those skills for the robot. That is, skills are a more abstract, a more advanced capability for the robot than the simple control of following a trajectory or reaching a point or just touching a surface. It is a skill to say, “I’m able to accomplish this task, taking into account different things.” So by building those skills, the planner can plan using those skills. So the task of planning becomes much less difficult and complex, and in real time, the skills can perform at a much faster servo rate than any dependency, full dependency, on the planner that would be required if we didn’t have the skills. So all the challenge is, “How can we teach these robots those skills?” And this is what we are doing.
We are making use of a lot of fundamental characteristics of those robots in terms of their kinematics, dynamic models, so that we build the first layer, and then we abstract that layer to create a behavior that can accomplish different tasks in the environment with obstacles, with an object. So we talk about compliant motion behaviors that allow the robot, for instance, to move to a multi-contact with the environment by just detecting and feeling those contacts. You can imagine a task of just putting a lid on a box. And this is something really hard if we want to do it just by controlling the motion and the position, but actually this is something that becomes very easy if we understand compliant motion. So this becomes a skill that then could be used and reused in different situations.
It’s really the sort of direct engagement with the world that you’re sort of utilizing to simplify the problem.
Yeah. It is really, to say, that we get a lot of inspiration from the human. But we are not trying to just record human motion and replay the motion. So what we are trying to do is to really understand what is behind this motion, what is the strategy, and how can we encode that strategy in something that the robot can use as a strategy so we can generalize? But this is, again, part of the whole approach. We are seeking solutions that can address a wide range of problems. We’re not just looking for finding or engineering the solution to a specific problem. Because robotics is ultimately about all kinds of tasks in all kinds of environments, and we really need to look at this versatility.
Robot Interaction and Forces
And when it comes to compliance, you’re actually taking information from the things that you’re touching. And instead of trying to figure out all of what you’d have to do to sort of manipulate it, you just start acting in resistance to it and –
–then solving half the problems in it.
Yeah. So actually, I mean if we look at the history of robotics in the last 50 years, 52 years now, what we definitely find is that robotics has been dominated by the idea that we need to generate, we need to program, a trajectory for the robot to move in order to perform a task. So, this was, in fact, most of the effort in the ‘70s, early ‘80s. We were building robot programming systems; actually one of the first robot programming systems was developed at Stanford, the AL arm language, that became VAL when it was developed and adapted to the Unimate robot. Now, the question – I mean obviously in many industrial applications we want to move from point to point. We want to really follow specific trajectories in free space. We are doing a lot of things, tracking different motions in painting, in many of the industrial applications. When it comes to making contact with the environment, when it comes to saying I want my robot to do something, I have a problem. Using position control, we are controlling the robot to follow a trajectory to reach a point, and if this point is on a surface we're going to touch, then because of imprecision and errors we are going to make contact with the environment either before the surface or a little after the surface. And, as we control position, this error could cause large forces that break the robot.
Now, what we had to do was to try to accommodate this by trying to slow down or control the velocity, and little by little we found ourselves really just unable to handle the interaction with the world. So, there was a lot of work in laboratories trying to create compliance through all kinds of techniques that brought velocity control or admittance control, but there was something missing in the design of the robots that really caused a lot of difficulties in implementing many of the control techniques that a lot of researchers were developing and building, and that is the fact that there was no ability for those robots to control the forces or the torques. So, the joint torques were absent. We couldn't control these torques, which meant that we can only control the position, and because we are controlling the position we are making use of high gears, reducers, that cause the robot not to be always backdrivable, which caused these robots to be ill-suited for interacting with the world. So, when we look at the robots around, especially in the context of industrial robotics and manufacturing, we have position-controlled robots. So, as early as the day I arrived at Stanford, I tried to convert the Puma robot, which was controlled in position by the Unimate controller, into a torque-controlled robot by cutting the cables of the controller and going directly to the amplifiers to control the current in the motors.
Well, obviously you cannot compensate for the friction that you have in the gear transmission system, but that was sufficient to be able to control a good part of the torque, and that allowed a lot of interesting development in terms of force control. As early as '83, we were able to do compliant motion, moving over a surface whose location we didn't know, moving over a wavy surface, moving a micro switch, an electrical micro switch, and making the assembly of a double peg and hole. That was quite surprising in terms of the ability of force control to perform very complex tasks that were really, really difficult at the time. And, I think that idea, that force control is important, completely transformed the way we programmed the robot. We think about – so we were talking about compliance, the idea of compliance changes completely the way we approach a problem by creating, as needed, a center of compliance in relation to the object location, in relation to the object geometry and the properties of the environment, we were – to...
You were talking about the torque control and compliance.
Mm-hmm, yeah. And, yeah talking about the center of compliance and assembly. So, I don't know where did you stop. Do you know where it stopped after...
It was the very end of that sentence, but you can pick up wherever you like.
Okay, alright. So, you tell me when we start.
Okay, we're going.
So, the idea of force control is that now the motion is guided by the relationship of the object carried by the robot and the environment. So, the interaction forces are going to guide the motion, and in a way we can, from the geometry of the object, create a strategy based on the center of compliance to take the object from one contact state to another contact state. Well, despite the fact that we studied those problems, despite the fact that there has been a lot of research around those issues, we could not really implement it in many of the robots because of the lack of the robot's ability to control torque, which limited its ability to control forces.
So, another aspect was the fact that if we talk about dynamics of robots and if we were interested in the ability of the robot to move along a trajectory, but a little faster – this is very important in many manufacturing applications, you want performance and you want speed and acceleration – well, the study of dynamics shows the different interactive forces that are going to be produced by the motion, so we can compute those forces, but in order to compensate for those forces we need torque control. So, both for performance, that is dynamic control, dynamic decoupling, and for contact with the environment, advances in the assembly capability of the robot to do assembly and part mating, we needed torque control, which was not there. So, by the end of the '80s, mid '80s, we were looking into how to create torque control abilities, and one of the things we did here was to take the Puma and say let's place a torque sensor at the output of the Puma. So we retrofitted the Puma with a torque sensor on Joint 3 and we demonstrated amazing abilities.
Once you have the output torque measurement you can compare it to what your desired input is, and if there is an error you can create a controller that compensates and drives the motor to achieve that desired torque. So, that removed the friction that almost rigidified that joint and made it high performance. That led us to think, in the late '80s, that we should build a robot completely based on torque control. So, we launched a project to build Artisan. So, you have a piece of Artisan also behind me. This is a robot that was designed to have high-performance torque control, and we introduced a lot of ideas using inductive torque sensing, LVDTs, all kinds of configurations to build torque control ability. And, we built two degrees of freedom of this robot. Basically, we built the wrist structure of this robot and we were able to reach a level where you could just blow on the arm and the arm would move. It would respond to just the wind. And, that was a remarkable achievement and demonstration of the ability of the robot to be so sensitive to external forces, and of the need for torque control.
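[Editor's note: the inner torque loop described above – measure the torque after the gear train, compare it with the desired torque, and drive the motor to cancel the error – can be sketched as a discrete simulation. The constant-friction disturbance, the gains, and the sample time are illustrative assumptions, not the Artisan or retrofitted-Puma design.]

```python
def run_torque_loop(tau_des, friction=0.8, kp=0.5, ki=20.0,
                    dt=0.001, steps=500):
    """Simulate a PI torque loop closed around a joint torque sensor.
    The transmission is modeled crudely: the torque that reaches the
    link is the motor torque minus a constant friction torque."""
    tau_cmd = 0.0      # motor command, expressed as torque [Nm]
    integ = 0.0        # integral of the torque error
    tau_meas = 0.0
    for _ in range(steps):
        # Joint torque sensor: output torque after the gear train.
        tau_meas = tau_cmd - friction
        err = tau_des - tau_meas
        integ += err * dt
        # Feedforward of the desired torque plus PI correction.
        tau_cmd = tau_des + kp * err + ki * integ
    return tau_meas

result = run_torque_loop(2.0)
```

The integral term ends up supplying exactly the friction torque, so the measured output converges to the desired value; this is the sense in which closing the loop on the output torque "removes" the transmission friction.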
Well, obviously in a laboratory like at the university you do not build a robot; it has to go and be developed. So we collaborated, in fact, with DLR, and my friend Gerd Hirzinger was very, very interested in those technologies. In fact, one of my students, Dieter Vischer, who was working on this project, worked also at DLR on some of the aspects of torque control, and there is a very good connection between the Stanford robotics lab and DLR in a collaboration that was very fruitful. In fact, DLR were the first to produce an amazing machine, the lightweight robot, that brought torque control not only to the lab, but somehow moved it, together with Ralph Caupe (sp?), to KUKA, and this is a very successful transfer of technology that moved from the university in collaboration with a research institute and eventually resulted in a very, very amazing machine that brought torque control as a basic capability. So, I believe torque control is essential to render the robot compliant and to allow the robot to interact with the world, and also to think about robots in a different way. It's no longer programming the robot; it's really thinking about the interaction of the robot with dynamic forces or contact forces.
So, what were some other projects you worked on through the `90s?
So, we moved forward with the development of torque control and, as I said, we moved to take manipulation to the field, to the environment, by combining mobility and manipulation, and then we started looking at interaction between platforms to do collaborative work. So, Romeo and Juliet were working together. While sometimes Romeo would be doing a lot of housework, domestic work, like ironing, vacuuming, and all of that, sometimes the two would be interacting to carry a heavy object, and the concept there was sort of different from just the concept of saying, well, we built a robot and they will do the work. The idea was we wanted to create a machine that can assist the human and work with the human, and the idea was essentially – we can represent the concept as follows. The human provides the brain and the robot provides the muscle. And, you put the two together and you have amazing synergy. So, first of all, to provide the muscle with the robot means you need to do a lot of coordination of many different motions. You're creating sufficient amounts of autonomy for the two robots to carry a task, to control internal forces, to do all of that, and to move and interact with the human, to follow the guidance of the human, and now the human is just touching the robot slightly to position it, and all the load is carried by the robot. So, that was the concept of a robot that is assisting the human, but at a very, very high level. And, that started our idea of human-guided motion and human-robot interaction. Now, human-robot interaction we were pursuing directly on the robot, but from the early '90s we started another important research project of human-robot interaction with haptics.
So, we were probably among the first few groups to look into the problem of haptic rendering. There was the group of Ken Salisbury at MIT working on that problem, and here at Stanford we developed a comprehensive approach to dealing with the problem of collision detection. So, in haptics what you need to do is to represent your objects virtually in the machine, graphically, and you want to find where your finger or hand is with respect to this object, so you need to do collision detection. And, objects are represented with polyhedra, so you have a lot of things to check, and that means you are going to spend a lot of time doing collision detection. At the time, robotics and haptics came together through what I was describing earlier about the elastic planning. So, the elastic planning started with the idea of what we called the bubble band or elastic band. So, the elastic band was this idea that we would like to model the free space around the path, so we have a trajectory that we plan, but instead of just planning for one path we are going to plan for a tunnel of free space. And, for this tunnel you can imagine spheres that are intersecting. So, from one sphere of free space to the next sphere, if they are intersecting then you can have a passage.
Now, a very fast way of computing distances from where you are to the obstacles is to represent those obstacles with a combination of spheres. So, you can put the whole world in one sphere and then you break it into two, and break it into two, and this sphere hierarchy lets you go from where you are doing the check down to the leaves that are in contact. The same concept that we were using in motion planning, elastic planning, we applied to haptics. And, we developed collision detection very easily; at the time it was one of the fastest techniques to resolve collisions with a large number of objects, and that was amazing because, as you know, haptics requires very fast interaction. Now, that technique became a part of other algorithms for collision detection that actually make use of combined sphere hierarchies, bounding boxes, and other things.
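The sphere-hierarchy idea can be sketched as a small recursive distance query. This is only an illustration of the principle, not the lab's actual implementation; the node structure and the pruning logic are my assumptions:

```python
import math
from dataclasses import dataclass, field

@dataclass
class SphereNode:
    center: tuple                 # (x, y, z) center of the bounding sphere
    radius: float
    children: list = field(default_factory=list)  # empty list => leaf

def min_distance(p, node, best=math.inf):
    """Lower-bound distance from point p to the geometry under `node`.
    Subtrees whose bounding sphere is already farther than `best` are
    pruned, which is what makes the hierarchy fast."""
    d = math.dist(p, node.center) - node.radius
    if d >= best:
        return best               # whole subtree is farther away: prune it
    if not node.children:         # leaf sphere approximates the geometry
        return max(d, 0.0)        # 0.0 means the point is in contact
    for child in node.children:
        best = min(best, min_distance(p, child, best))
    return best
```

Breaking the world sphere into finer and finer child spheres, the query descends only into branches that could still be closest, exactly the "check at the top, go down to the leaves in contact" behavior described above.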
The other problem in haptics is the problem of collision resolution, and collision resolution is something that we know is difficult: how you resolve the collision, how you simulate the collision. Now, simulating collision between free objects is something that we studied, we know how to do, but the problem of collision, or multi-collision, between multiple links; that is, you are making the collision at multiple points, is a problem that we don't really know how to solve because you have constraints. So, in fact, Baraff, one of the researchers who looked at this problem in the context of graphics and simulation, resolved it in a very interesting way: just before the collision you have the velocities, then you remove the joints, so now you have free bodies colliding. You collide. After the collision, you put back the joints; that is, you reintroduce the constraints, you remove the directions that are not possible, and it takes time eliminating the constraints. Now, in our approach, again this is the interaction of robotics with haptics. In robotics, we know how to compute the effective mass at a given point on the robot because we are interested in controlling this point in contact. We need to be able to stabilize these points, so this is information that comes from projecting the dynamics along... well, now, if I'm going to resolve collision this is very useful. I can take this effective mass, I take that effective mass, and make them collide. I replace the robot with two masses. Now, if I have multiple collisions I will have two masses here, two masses there, two masses there, and in addition I have the interaction between those masses. So, that led to a very, very effective solution in resolving multi-collision between multiple links.
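The effective-mass idea can be illustrated with standard operational-space quantities. A sketch for a single contact direction (the function names are mine, and real multi-contact resolution also needs the coupling terms between contacts mentioned above):

```python
import numpy as np

def effective_mass(J, M):
    """Effective mass felt at a contact point whose velocity is v = J qdot,
    for a robot with joint-space inertia matrix M. For one contact
    direction J is a 1 x n row vector and the result is a scalar."""
    Lambda_inv = J @ np.linalg.inv(M) @ J.T   # inverse operational-space inertia
    return 1.0 / Lambda_inv.item()

def collision_impulse(m1, m2, v1, v2, e=0.0):
    """Impulse exchanged between two point masses colliding along the
    contact normal with restitution e: each link is replaced by its
    effective mass at the contact point."""
    return -(1.0 + e) * (v1 - v2) / (1.0 / m1 + 1.0 / m2)
```

Replacing each articulated link by its effective mass at the contact turns the multi-link collision into collisions between point masses, which is the simplification described in the answer.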
So, in the `90s, we saw haptics progressing. We saw the elastic planning progressing. We saw mobile manipulation progressing, and by the end of the `90s something very interesting happened, which was suddenly the robotics world discovered there is a robot that is going upstairs and downstairs without falling, the Honda robot. A lot of people who were working in bipedal locomotion were depressed, I remember. And, a lot of people actually became really energized and excited: robotics is moving. I think that was a really big challenge to the community, coming from a company that built a machine with those capabilities. Now, Honda really produced one of the most remarkable mechanical systems and developed one of the most interesting machines to perform biped walking, stable biped walking. But obviously, they needed much more to really do useful things with the robot, and one of the things that we explored with Honda just in the following year, I remember, in 1999, was to try to take the capabilities we developed for Romeo and Juliet to Asimo. And, in fact, I recall at the time I said, I don't know Asimo, I need to study the problem. So, in 1999, we started the project, and we've been working with Honda on pursuing, implementing, developing the capabilities of a Honda robot, or a humanoid robot, to go beyond just walking, to enable a humanoid robot to interact with the world, to be useful, to do things.
And, that was the direction that our work has taken in the last almost 10 years, I mean 12 years. And, that was one of the major projects that we have, and it is really remarkable that a company like Honda kept a long-term relationship with a laboratory to explore this direction. So, from basically `99 to now, we've been working on how to deal with this problem of getting a humanoid robot to interact with the world. So, what were the challenges? First of all, if you have been to an Asimo show you would hear the following. They would tell you, with Asimo there is one rule, and that is: do not touch the robot. That is, this robot is controlled to balance, but it is not controlled to handle disturbance by any external force, unless the controller is going to accommodate that, and that requires a specific way of doing it.
So, Asimo is essentially a position-controlled robot. What we had to do, first of all, was take our concepts and models from the structure that we had with the mobile manipulation of Romeo and Juliet, which was essentially a serial-like structure, to a branching structure, and deal with the dynamic interaction between the right and the left hand, all those different parts of the robot, and model the robot in terms of effective masses and dynamic responses of the different parts we would like to control. But then, we have all kinds of redundancy that remains, so if the arm is fixed the body can still move. So, we developed a technique that makes use of an idea similar to a human task: you're moving your hand, you're reaching with your hand, and your body, the posture, is following. You're controlling the posture, but the task has priority. We started building those priorities. Two students worked with me on this, Jaeheung Park and Luis Sentis; both of them contributed to this work in exploring the contact and the strategies for dealing with the priority control of the different tasks and subtasks that we are controlling. I mean, when you think about a humanoid robot you think you need to control the posture, but you have to be consistent with the tasks, not interfere with the tasks. At the same time, you have constraints, and you have to be consistent with those constraints. You have joint limits. You have obstacles. You have to balance. You have so many things to do, and people very often go and say, alright, okay, I'll build a controller to deal with this problem, I'll build a controller to deal with that problem, and I'll put them together, and they will start fighting each other. So, what you end up doing is reprogramming everything and choreographing every motion to produce what we think is a robotic motion.
So, what we've been doing was to create a framework that explicitly, from the beginning, integrates contact, integrates constraints, integrates the posture control, the task control, the tasks at multiple points. You're controlling the center of mass. You're controlling pressure. You have all the degrees of freedom, all the constraints, all together. So, the task-oriented controller approach that I developed in the `80s and the `90s is called operational space control; this comes from the title of my thesis, "Operational Space Control." Operational space control became constraint-consistent task space control; that is, the task is a multiple thing that you need to control under a lot of constraints, and this task control is done with constraints that can be obstacles, joint limits, self-collisions; all these things are coming together. So, your robot is inside a field of forces trying to move away from all these different constraints to perform a task. Now, how do we control this robot? You said torque, torque control.
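In rough numpy terms, task priority is enforced by projecting the posture torque into the null space of the task through the dynamically consistent generalized inverse. A minimal two-level sketch of the constraint-consistent idea; the real whole-body controller handles many more priority levels and contact constraints:

```python
import numpy as np

def prioritized_torques(J_task, F_task, J_post, F_post, M):
    """Task torque has priority; the posture torque is filtered through
    the task null-space projector so it cannot disturb the task."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    Lambda = np.linalg.inv(J_task @ Minv @ J_task.T)  # task-space inertia
    Jbar = Minv @ J_task.T @ Lambda                   # dynamically consistent inverse
    N = np.eye(n) - J_task.T @ Jbar.T                 # null-space projector
    tau_task = J_task.T @ F_task
    tau_post = N @ (J_post.T @ F_post)                # posture, task-consistent
    return tau_task + tau_post
```

The defining property is that the projected posture torque produces no acceleration in the task direction, so the controllers cannot "fight each other."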
Do you know a humanoid robot with torque control? Alright, we're back to square one. So, what do we do? Well, you develop the simulator. You develop the controller. We spent many, many years building SAI, our dynamic simulation system with collision detection and resolution, so we are simulating the environment, we're simulating the interaction with the environment, we're reproducing all the contact forces, and we are controlling the robot using that framework, and it works beautifully. Now, taking it to Asimo, again, is a big challenge because you have to deal with the fact that Asimo is controlled in position control. But, if you identify a problem – I mean, in research the most difficult part is finding the problem to solve. Once you set your mind to the problem, the solution will come and you solve it. So, we said, alright, we need to find a way to go around the position-control servo loop of Asimo. We came up with an idea which is called the position-to-torque transformer. And, the position-to-torque transformer is essentially an idea to fool the servo controller to make it think that it's following a trajectory, but actually it's producing the right torques. So, it's like inverting the controller and driving it in a way to make it compliant, and we managed to make it compliant, and we managed to render Asimo really, really almost like a robot with compliant joints to control.
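The inversion trick can be sketched for an idealized joint PD servo, tau = Kp (q_d - q) + Kd (dq_d - dq): command the velocity you already have so the damping term vanishes, and offset the position command so the stiffness term produces exactly the desired torque. This is a simplified illustration of the idea only; Honda's actual servo structure is not public:

```python
import numpy as np

def position_to_torque(q, dq, tau_des, Kp):
    """Given the servo law tau = Kp (q_d - q) + Kd (dq_d - dq), return
    commands (q_d, dq_d) that make the position servo output tau_des."""
    dq_d = dq                               # cancels the damping term
    q_d = q + np.linalg.solve(Kp, tau_des)  # offset produces the torque
    return q_d, dq_d
```

The servo still believes it is tracking a trajectory, but the trajectory is chosen so that the torque it applies is the compliant, task-level torque we want.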
So, what we see through this, from the `80s to the `90s, is the same progression in the same direction toward task, always task control, dealing with contact and interaction, and dealing with constraints, obstacles, self-collision. But, all of it is done in terms of thinking about the robot as a system submitted to all kinds of dynamic forces, external forces, so it is moving in force space. And, through that mechanism we never talk about inverse kinematics, whereas most industrial robotics, most robotics, is actually developed with the idea that, alright, I need to move like this in space; well, let me compute what joint motion I'm going to have, let me find the inverse kinematics, and then let me control those joints. That worked very well for free-space motions, but once you start making interaction and contact with the environment you are going to have a problem controlling joint motions and the forces at those contacts. In a way, the task-oriented control unifies motion control and force control, or contact control, in the same space, in the workspace. And, this is also really interesting from the fact that this is almost like how we humans do it. We think in the workspace and we operate and control relationships in that space.
And it also translates very quickly to the multiple robots and...
Yeah, yeah, exactly. And, by the way, I mean, understanding models of what the hand dynamics is makes you say, alright, now my skill is related to the hands, I mean, what the hands are doing. So, I can develop the skill independently of the robot, so when I'm going to do learning, when I'm going to learn human skills, I'm not talking about a robot that is going to establish a connection between motor actions and resulting motions, so that my learning would depend on that specific robot. Rather, I'm going to just think about forces and moments applied to the hands and the resulting behavior of those objects, and then I can take that strategy and apply it to any robot, and the relationship is very simple. Forces and moments here are related to torques using what we call the Jacobian transpose. The Jacobian transpose captures the moment arms of the different joint axes. So, basically the major direction is to think about how you can produce those tasks in their own representations, and then you can transform them through the Jacobian to produce those forces and controls as needed for different robots, just by modeling the dynamics, the kinematics, of that robot.
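The mapping is just tau = J^T F. As a small illustration, here is the Jacobian transpose for a planar two-link arm; the link lengths and joint angles are arbitrary example values, not from the interview:

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Hand-position Jacobian of a planar 2-link arm; its transpose maps
    a hand force to joint torques via the joint-axis moment arms."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

J = planar_2link_jacobian(0.3, 0.7)
F = np.array([1.0, 0.0])   # push 1 N along x at the hand
tau = J.T @ F              # robot-specific torques for a robot-independent skill
```

The skill is expressed once as forces and moments at the hand; only this last line changes from robot to robot, because only the Jacobian does.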
Well, I know you're going to ask me what I did in the new millennium, right? You are going to ask me that? So, one thing besides Asimo that happened was really very interesting. So, when you have a humanoid robot, you're interested in understanding how we can better be inspired by the human, how we can go further in exploring human skills and building controllers that are proven to work with the human. So, when we really considered that, we realized that either we need a way to look at every task a human is doing and try to copy that task and reproduce it with the robot, or we need to go and identify the way humans perform those tasks. I mean, identify in the sense that we need to model, to have a dynamic model and a kinematic model of the human performing these tasks, and look at the musculoskeletal control of the human and how humans are handling these different things.
So, we started studies that took us to the human, and what we discovered there was that a lot of the techniques, a lot of the algorithms, developed for robotics, for articulated body systems, are amazingly powerful when applied to musculoskeletal systems, because we developed recursive algorithms, very efficient algorithms, that make use of generalized coordinates in very efficient ways. We were able to reproduce, to reconstruct, human motion very quickly, in real time, whereas a lot of the algorithms and software that the biomechanics community uses are still far from the optimal characteristics and efficiency that is needed to create real-time interactions. A lot of the measures, a lot of the characteristics and metrics we developed in robotics, can be applied to the human.
So, we started applying robotics to human models, and we started doing a very interesting analysis that led us to a lot of interesting conclusions, not anymore about robotics; that is, there is always this idea that robotics is about robots. Actually, robotics is this body of science that we are developing, with algorithms and models that efficiently allow us to explore articulated body systems, high-dimensional spaces, and take us through that analysis with insight coming from the physical world, but really taking us to those high-dimensional systems and their nonlinear, interactive nature. Now, humans are more complicated than most robots, that we know, but we have been able to apply those techniques that we developed for robots to the human, and the result is really, really interesting.
So, here is the first result. The first result that we found was related to something very basic, which is to say, if I was going to push an object, so you want to push the table, what posture will the body take to push the table? If I'm going to lift or pull a weight, what would be the most appropriate posture for that specific direction? It turns out that if you are doing this repeatedly, and if you really discover the proper posture to push or to pull or to apply a force, you are using a posture of your body, and the effort of your muscles to produce that force varies depending on what posture you are taking. So, if you're drinking a cup of coffee – drinking a cup of coffee, this is a very common posture. We never do it this way, we do not do it that way, we do it here. And, the reason for that is there is something special about this configuration, some forty-some, 42-43 degrees, and this is the fact that at this configuration the effort associated with the muscles that you have is minimized. So, it's sort of like – when we are building a machine, what do we do? We build the machine and we use the machine in a way to make use of its mechanical advantage. Somehow, humans discovered the mechanical advantage and its use, and we use it. We use it because we are then able to minimize the effort associated with our muscles in producing the forces needed for the task. And, this property results in a sort of energy minimization associated with your muscles. So, if we take one muscle of tension m, the effort will be m²; but you have big muscles and small muscles, so you need to consider the capacity and weigh your effort by the capacity. So, the effort we discovered that we humans are minimizing is E = Σ cᵢmᵢ², the capacity-weighted sum of the squared muscle tensions.
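The minimization described here has a closed form if you state it as a weighted least-norm problem. A sketch under simplifying assumptions: the moment-arm matrix A, the weights c, and the neglect of real muscles' one-sided, bounded tensions are all mine, and whether c is the capacity or its inverse depends on convention; the code just treats c as the per-muscle weight in the quadratic effort:

```python
import numpy as np

def min_effort_tensions(A, f, c):
    """Muscle tensions m minimizing E = sum(c_i * m_i**2) subject to
    producing the required joint torques A m = f (weighted least-norm)."""
    Winv = np.diag(1.0 / np.asarray(c, dtype=float))
    return Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, f)
```

With two muscles pulling on one joint, the solution loads the muscle with the smaller weight more heavily, which is the "mechanical advantage" a good posture exploits.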
Interest in Robotics
...how do you follow that? So, how did you get interested in robotics or what drew you to the field of engineering and robotics?
There is an interesting story about engineering. After my baccalaureate I went to enroll in Aleppo, at the university. It was just before I got my fellowship to come to France, so I went to the university and I enrolled in engineering, and the clerk said, "What? You cannot enroll in engineering, you are the top student. You have the highest score and you are accepted in the medical school, so you can be a doctor, so you should enroll in medicine." I said, "Sir, I cannot see blood. I really, really want to – I love science, but I really like to work with machines, and I would like to enroll in engineering." He refused. He said, "Come tomorrow with your father." And, I had to bring my father to go to engineering. Yeah, I always liked physics and math, and I always found that the textbooks we had were not enough for me, so I was creating my own problems to solve in math or physics and exploring beyond the boundaries of the textbook. But, the idea of robotics was really not there. I mean, I was interested in electrical engineering and computers and circuits and electronics, and all of these things were fascinating in the way we can create those amazing machines.
And then, it was sort of by accident that I was at the right time with the right group who started robotics at SUPAERO and pointed me to robotics, and I felt this is it, because what is wonderful about robotics is you're not just studying electrical engineering. You're not just studying mechanical engineering, you're not just looking at biomechanics. You're not looking only at the psychology. It is this cross-disciplinary field that brings all the different aspects of computer science together with electrical engineering, with mechanical engineering. In a way, this is why we had a lot of difficulties in the beginning of robotics: we had computer scientists approaching the problem as a programming problem. We had mechanical engineers designing robots that are difficult to control for the purpose of creating interaction. So, it took time to build skills, for people to understand not only their discipline, but across disciplines. So, to realize that we need to design mechanisms not only to be fast, or light, or heavy, or strong, or precise, or accurate, but also we need to design them for the characteristics that we need in the control. And, we can optimize those machines when we look at what is needed in the control. But, if we do not look at and analyze what is needed in the control, in the design, there is this cycle.
I think the industry of building aircraft, aviation, went through that as well for many years. But, I think robotics also benefitted over the years from many of the previous technologies; still, there was so much focus on manufacturing and industrial robotics that robots were optimized for precision, accuracy, and speed, and the intelligence, the abilities of the robot, of creating what robotics is about, was not really needed, because the manufacturing environment is structured. So, if you have a structured environment you just program the robot for that structured environment. You know what your environment is.
In the `90s, my thought was robotics will never find all its depth if it is imprisoned in manufacturing, because there it is automation that is integrating the robot into its processes. Robotics really started to develop when it escaped and moved out to the human environment. As robots moved outside, they are facing the unstructured world, and now you have to do perception. Now, you have to adjust to the changes in the environment, and now you have to deal with dynamic environments that are always changing, and you're interacting with humans in that environment. You have different requirements for the robots, their safety. You have different requirements for the navigation. You have all kinds of new things. That brought a lot of new things and new concepts and amazing development since the turn of the century.
So, I was working before to deal with joint torque control for the robots, building Artisan. After, or since, robots escaped from the manufacturing plant and went out, we realized we need to make safer robots. So, that was one of the other projects we have been working on, which is to see how we can build a robot that is light, has low impact, that's guaranteed to be intrinsically safe, and Mike Zinn and another student, Dongjun Shin, worked with me on this question, together with Professor Roth and also Ken Salisbury. In fact, the Willow Garage robot sort of came from that initial project that started with NSF support, which led to one of the solutions to the problem of making robots lighter and safer, and that is to do gravity compensation. So, behind me you have two robots that were built to address those problems through distributed macro-mini actuation, what we call DM2, which later was implemented with pneumatic actuation and electrical actuation, and this is the hybrid actuation solution that brings strength with softness from the pneumatic actuation, and fast but light and small impact with the electrical motors. So, that was another interesting project that was quite successful.
In fact, GM was very interested in this project and was sponsoring it for a short time before GM had some financial difficulties and we had to suspend it, but I was glad to see GM involved with NASA on building the robot that also brought a lot of safety to the system compared to robot […]. Now, one other project that we launched in probably the beginning of 2010, probably 2009, I don't remember exactly when we started, is a project dealing with the fact that, okay, now we are studying the musculoskeletal model. We understand what is the closed loop in terms of the behavior transmitted from muscles to the task. But, what about the control? What is the motor control that is taking place, and how can we connect the representation of the task, in terms of what the eyes are seeing in the world, to the muscles and the way the muscles are operating?
This is something that is taking place in the brain, and understanding what is going on in the brain is very complex. Now, what people usually do is they stimulate some neuron in the brain and look at the response, the muscular response, and you can connect this region with that muscle control. Our understanding in robotics, of articulated body systems, shows that there is enormous effort that goes into the coordination of motion. So, you're coordinating a lot of joint motions to produce some coordinated motion of the hand. And, what we've been interested in is to think about how we can get data that will capture this coordination between different joints. So, one solution is to go to fMRI. Inside an fMRI, you can capture an image of the activities in the brain, and by moving part of the body you will be able to, say, connect that activity with the physical activity. The problem – I don't know if you have tried to go inside an fMRI, but it's really confined. It's very compact, and you can maybe move your hands, but not more. So, what we tried to do, through our experience with haptics, is to say, well, I really need to stimulate the brain to do coordination, and I can imagine all the tasks I could do inside the fMRI if I had a haptic device, but forget about taking any haptic device with you inside the fMRI because of all the magnetics that are there.
So, what you need to do is to think about a haptic device that is fMRI compatible. And, this is what we built; in fact, over there you can see a haptic device that is compatible with fMRI. It is placed outside, and you can reach to it, and now you are able to stimulate the brain and perform complex tasks. You can perform a variety of tasks that involve applying forces, using your muscles to produce these different specific forces, and you can use tools. So, you remember earlier I said we do the synthesis of controlling robots by thinking about forces and moments applied to the arm using the Jacobian transpose. Well, in the case of the muscles there are more transposes, because there are other mappings, what I believe is a Jacobian transpose in the brain, and we are trying to identify it.
Organization, Associations, and Conferences
Great! So, I still want to talk about some of your work as an administrator in the Robotics and Automation Society, and just how the field as a whole has been developing and coordinating over the last few decades, and what the role of various conferences and organizations has been.
Good question. How about if we do that after a small break. I need water.
Sure. So, I was wondering if you could tell us a bit about how different kinds of organizations, and associations, and conferences have evolved, emerged, and shaped robotics over the last two decades?
So, the first conference I attended was RoManSy. This was in Udine, and it's part of IFToMM. IFToMM is the International Federation for the Theory of Machines and Mechanisms and, in fact, Professor Roth was President of the organization, and that's where I met him, at RoManSy. RoManSy is the Robot and Manipulator Systems Symposium, and RoManSy is still running; I was co-chairing the last meeting of RoManSy in Paris, last year. That was, in fact, one of the very first meetings in robotics. It was dedicated to mechanisms and machines, mostly focusing on design. Later on, probably in the early `80s, one of the very first meetings in robotics was called ISRR, the International Symposium on Robotics Research. That was a small meeting that brought all the experts in robotics at the time together to meet and discuss the different aspects of robotics, from vision to planning, to control, to design, all these different aspects in one meeting. And, this meeting is still running, and it is organized now by the International Foundation of Robotics Research, IFRR. IFRR is an organization that organizes and sponsors different meetings, including ISRR, the International Symposium on Robotics Research, and ISER, the International Symposium on Experimental Robotics. Now, experimental robotics is a symposium I myself, with Vincent Hayward from McGill, initiated in 1989, because we felt it is very, very important to not only develop the theory for robotics, but also to validate or invalidate the theories. And, the first meeting took place in Montreal, and last year we organized a meeting back in Quebec City, not far from Montreal.
So, this meeting went around the world, from Montreal, to Toulouse, to Sydney, to Rio de Janeiro; it goes continuously around the world, and we started to try to go to the south. So, we're going to have it in Morocco next year. There are a number of fantastic, very important thematic, topical conferences, symposia, and workshops that bring small communities together to focus on a theme. And, I think these meetings have been playing a very important role in robotics, as robotics has many different sub-areas. However, the community as a whole comes together in one meeting, as we know, and this is ICRA or IROS. So, at ICRA and IROS, all the researchers in robotics and related fields are interested in coming to meet with other researchers to find out about the state of the research, but also to interact, and especially the PhD students, the newcomers to the field. They really want to find out what – I mean, this is the place where everyone is coming. So, ICRA started in 1984 in Atlanta, and it was not ICRA, it was just the robotics conference, organized by what was not yet the Robotics and Automation Society; the Society came a little later, and we had the first official ICRA in St. Louis in 1985.
And, that was the first and the smallest gathering. I remember I presented my paper on artificial potential fields at that meeting and had a lot of discussions, not only one discussion, but many discussions with a lot of colleagues, and it was, I think, the first time I met Gerd Hirzinger. It was through those discussions that we had coffee together and we started talking, and then he started to visit Stanford and connect with me. The interaction, the contact among people, is very important. And, this remained for me one of the most important aspects of a conference. At a conference, we're not just going to listen to a presentation and leave; we really need to talk and interact and discuss. So, in 2000, when I organized ICRA in San Francisco, I thought we really needed to bring more interaction among the participants. And, this is always a challenge, because the problem is when you have a large number of participants you're going to have more and more parallel sessions, and interaction becomes harder. I have participated in meetings where you go to a session and you find only the authors, the co-authors, the chair, and the co-chair among themselves, because we have 15, 17, 18 parallel sessions. So, one of the things we tried to do in ICRA 2000 was really to reduce the number of parallel sessions, by reducing the time of the presentations, by squeezing the time between sessions, by all kinds of ways to bring the conference closer together; I mean, even the geography of the meeting was such that everyone had to stay within a small space, so that they really had to meet and interact. I remember some meetings where you go to the third floor, and come down to the second floor, and you end up losing everyone in every direction.
We wanted also to attract people, not only by putting them in a space, but also by creating interesting, attractive presentations, and we developed the idea of mini-symposia with a high quality of presentations and interaction. And, I remember in 2000 we were monitoring how many participants would be in a given session, and the numbers were amazing, very high numbers; we had like seventy in those mini-symposia, and this model came to be used later in other conferences. ICRA 2000 was, I think, the last meeting where we were using paper submissions – I mean, we still used printed proceedings after. I remember in that room over there we had so many thousands of pieces of paper to handle, because every author would submit eight copies, and then you add to them all the papers for reviews, and you end up with a huge number of pages to handle for that submission. I recall we had over two thousand submissions, so it was a huge number of papers to handle, and that was the last time before going to electronic submissions.
Electronic submissions, the large number of papers, and the heavy proceedings led to a situation where a participant would go to the meeting with a program, and the program would list only the title of the paper, the names, and the room. That was really too little information to plan your participation. So we introduced the idea that, instead of having just a simple program, we would have something called the digest. The conference digest has since become a very useful tool in many conferences, but it was introduced at ICRA 2000. Anyway, many of those ideas were about interaction, and not only in the conference space: on the boat, in providing social spaces where people can really discuss and meet. At IROS 2011, I don't know if you attended, but we had the banquet in a museum, and we had a cruise that brought so many participants. That was quite memorable to many who were there. I think the quality of every aspect of the experience of a conference is very important, because there is a huge effort by everyone attending those conferences, coming from Japan, from Europe to the U.S., or from the U.S. going to Europe or to Japan, and we are taking time for a conference. So the quality of the papers, the quality of the social interaction, the quality of all aspects of the discussions at that meeting are very, very important, and I think IROS and ICRA have moved in that direction. In the coming years I think I am going to have an even more challenging role in dealing with conferences. Conferences are a very important window and vehicle for our community, and I do not think about it in terms of one conference being better than another; every conference serves a very important role. ICRA is the flagship conference for our society. IROS is supporting a large number of participants from our society and others.
All the other meetings, Humanoids, RSS, WAFR, ISER, ISRR, ARK, are contributing to themes and topics within the areas of robotics, with smaller gatherings and single-track presentations where people from vision, from motion planning, from mechanical design sit together through the whole conference, listening to all the papers. This goes back to what we said in the beginning about robotics. Robotics has all these different aspects, and unless we really build that understanding between people in computer science, electrical engineering, and mechanical engineering, so they really understand what the issues and the problems are, it is going to be very difficult to build the field. So we need the small meetings, the symposia, the workshops, the single-track, high-quality events, what we could call slow conferences, like slow science or slow food. In other settings we do not have that amount of time; we still have the three or four days with thousands and thousands of participants. The last IROS meeting, in San Francisco, had nearly two thousand participants, and those numbers are growing. I think we really need to think hard about how we are going to run future conferences, but what is clear is that we are going to need more and more of the specialized conferences to create that interaction and those discussions between the communities that are being formed. One thing I want to add is that robotics is making an impact in other fields, in biocomputing, protein folding, biomechanics, and many others, which means there are new communities coming to interact with robotics; they have their own conferences, and we are seeing a lot of those interactions. So this is why we look at robotics with all its different dimensions, and it is becoming harder and harder for newcomers, or even people working in the field, to find their way. And this is not new.
This started back in the late eighties. I remember we began to see so many things happening, so in the late eighties, in '89, in addition to starting ISER, I proposed a sort of place where we could find out what was going on in the field, and we created what was called The Robotics Review. The Robotics Review was published by MIT Press, and I worked with John Craig and Tomás Lozano-Pérez on its editing. In fact, we edited two volumes, in '89 and '91, with contributions from about fifty or sixty people each time and a huge amount of editorial work by us, because the idea was to review the field and come up with a picture of what was going on in it. This was maybe the first step towards what Bruno and I worked on later in the Handbook of Robotics, and there it wasn't forty people that were involved, it was much, much more: a hundred and sixty people and 1,600 pages. I am really excited about the new edition that is coming, because it will bring yet another dimension that is very important: multimedia. The technology is developing and it is helping our robots to compute faster, and we have a lot of new technologies in materials, in sensing, et cetera. We need ways to communicate algorithms, to share software, to include videos, and even to introduce material that helps people understand concepts through simulation and different forms of presentation. So I think this is something that is going to be really interesting, and we will see what the first result will be, but the multimedia is going to be quite interesting.
Oral History of Robotics
Thinking about the coherence of the group and the history of the robotics and automation society, I want to ask you how you came up with the idea to do an oral history of robotics, and how that idea started.
Well, I think in every domain we have so much that is not written, that is not published, that maybe only the historian of science can discover later, that we can really not find... I think I'm mixing up, can you stop this?
Let’s try it again.
Yeah. I was thinking about something else.
I’ll just ask the question. Can you tell me about the origins of the oral history of robotics project?
Well, robotics went through a long history of many, many years, and so much is not documented. There are a lot of things that are unpublished, and having direct statements by the people who have been involved in the field, presenting their point of view and answering your questions, is very helpful. I have always thought that we really need to look back at the history in order to know where we are moving, where we made mistakes, and where we can try to address issues that have slowed down our progress. So the history is very important, and I think a historian of science will probably go further with what robotics brought and how robotics developed, but what is needed just as much is the technical understanding of the different aspects of how the field moved. If we think about the field, we spent so much time optimizing our algorithms because of the limitations of computers in the seventies and eighties. I remember implementing my potential field on a PDP-11/45 that had 80K of memory; the program was swapped in and out with overlays, and we were running a slow servo. We could have a vision system monitoring moving obstacles at a one-second rate, so the motion was so <laughs> jerky. Despite all of that, there was a sense of the need for generality. We knew computer power would increase, but it was only when computer power really increased that we discovered many different ways of looking at the problem and focusing on different issues. I think technology has a huge role in the way systems like robots are developed: materials science and new fabrication techniques are affecting the field of robotics in many different ways, and today the components of computation, fabrication, and materials are changing sensors and communication, and that is changing the way we think about robots.
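As an aside for readers unfamiliar with the artificial potential field method mentioned here: the idea is to steer a robot by gradient descent on a field that attracts it to the goal and repels it from obstacles. The following is a minimal 2-D point-robot sketch of that idea, not the original implementation; the gains, step size, and influence distance are illustrative assumptions.

```python
import numpy as np

def potential_field_step(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=2.0):
    """One gradient-descent step on an attractive + repulsive potential.

    q, goal: 2-D positions; obstacles: list of 2-D obstacle centers.
    The repulsive term is active only within the influence distance rho0.
    """
    # Attractive force pulls the robot toward the goal.
    force = -k_att * (q - goal)
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:
            # Repulsive force pushes away, growing as the robot nears the obstacle.
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return q + 0.05 * force  # small step along the negative gradient

# Illustrative run: the robot drifts toward the goal while skirting one obstacle.
q = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 1.0])]
for _ in range(500):
    q = potential_field_step(q, goal, obstacles)
```

As the interview notes, this kind of loop is cheap enough that even the slow machines of the early eighties could run it, which is part of why the method was practical for real-time obstacle avoidance.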
So that is why I think it is very, very important to look at the history and see how this evolution took place, and we are also recording that history, especially with the pioneers who really contributed the early ideas and explored an environment that was completely unknown to many. In fact, we are still somewhere on the way to building robotics, and there is so much to be explored; every time we think we have made some progress we discover, oh, there is much more. So I tell my students: this is a field with infinite resources of problems, and a lot of fun and excitement to explore.
Challenges and Future of Robotics
What do you see as the biggest challenges facing robotics, and the most exciting future directions?
So robotics, as we said earlier, grew from the industrial application that was confined to manufacturing, that is, the structured environment, and now it has gone outside, and that is changing robotics in many different ways. The whole space of challenges is brought together to the robot. In terms of perception, we need very, very fast processing of a lot of data in real time to provide information about tactile contact, about dynamic motion, about the visual environment. We also have more degrees of freedom to deal with, we have a human to interact with, we have all these aspects exciting every dimension of the problem, which is great. This is one way to justify why we pursue humanoid robotics: at least we are able to look at the full complexity of the problem. We probably will not need the full system in every application later, but we need to understand many aspects, many dimensions, of those problems. Now, as robots develop in human environments we are going to build those skills, we are going to improve their capabilities, and I think what is going to happen is that robots will go back to manufacturing, now much more intelligent, much more capable, and they will change the structured environment we have been used to in manufacturing. In manufacturing we spend a lot of effort building the assembly line, which is very costly, and maybe the most costly part is programming it, adjusting it, tuning it up. In the future, if we have those intelligent machines, we should be able to change the whole picture, to get away from the assembly line and have these robots move and do the work. So there will be huge implications in manufacturing, and there are also implications in our lives: robots that can really, ultimately, finally help us in homes and in challenging environments.
I didn't have the chance to discuss this, but we launched two or three new projects recently. One is a project to build a humanoid-like robot that is a diver; it is a project for exploring the Red Sea reef, I mean the natural environment in the Red Sea, which is quite fragile. If you imagine taking a robot into that environment, the marine biologists would be really, really upset, because typical robots are quite heavy and would be very dangerous for those environments. So in those fragile environments we need something as safe as the robots we built for working with humans, and that is what we are developing. We are developing a robot that would be, in fact, sort of like a robonaut: a robonaut in the sense that it would be an aquanaut working with human divers to reach down, to place sensors, to bring samples, to interact with humans, and to cooperate and do everything we talked about on the ground, underwater. This project started last year; this year we have a prototype design, and we are in fact collaborating with micro-robotics on this project.
There is another project on a very important aspect of robotics that we really need to address, which is how to deal with disaster areas. We saw Fukushima, and we saw how little robotics contributed to helping with Fukushima, and one of the reasons for that is that we got used to the idea of robots working in our comfortable buildings, on flat surfaces, flat floors. They move in a two-dimensional environment, and they evolve in those environments through biped walking. Now, if we have an earthquake or a disaster, the environment becomes completely different. You will have to go over three-dimensional objects, and you need a different way of thinking about controlling those robots, especially their locomotion, than the typical ZMP, zero-moment-point, inverted-pendulum control for bipeds. So we are launching a concept called contact-supported locomotion, which goes back to the idea of interacting with the world and using the environment to support the motion. This concept is taking us further with new tools the robot can carry to bring extra support, something we call supra-peds: sort of poles that the robot uses to move in the environment. This concept really takes a biped system and makes it a quadruped, or whatever is needed for stability as we evolve in those environments, and that will at least make the system capable of moving three-dimensionally. Then there are a lot of other challenges for perception and for planning, but those are issues that can be addressed. I really think the hardware is the big challenge.
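For readers, the zero-moment-point control contrasted here rests on a simple relation: in the widely used linear inverted pendulum model, the ZMP is the ground point where the net moment vanishes, and balance requires it to stay inside the foot's support region. A minimal planar sketch of that check, with illustrative mass-center height and foot dimensions (not from the interview):

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(com_x, com_acc_x, com_height):
    """ZMP (x-coordinate) for the linear inverted pendulum model:
    p = x - (z_c / g) * x_ddot, with the CoM height z_c assumed constant."""
    return com_x - (com_height / G) * com_acc_x

def balanced(com_x, com_acc_x, com_height, foot_min_x, foot_max_x):
    """In this planar model, balance holds while the ZMP stays inside the foot."""
    p = zmp_x(com_x, com_acc_x, com_height)
    return foot_min_x <= p <= foot_max_x

# Illustrative numbers: a 0.8 m CoM over a 24 cm foot centered at x = 0.
# With zero acceleration the ZMP sits at the CoM's ground projection:
print(zmp_x(0.05, 0.0, 0.8))                    # prints 0.05
# A hard forward CoM acceleration drives the ZMP outside the foot:
print(balanced(0.0, 2.0, 0.8, -0.12, 0.12))     # prints False
```

The limitation motivating contact-supported locomotion is visible in the model itself: the only admissible region for the ZMP is the foot, so adding supports such as poles enlarges that region instead of forcing the controller to keep a small foot under the center of mass.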
Unless we build hardware that addresses the problem of torque control, unless we deal with the issues of designing robots with integrated sensors, with realistic and effective hands and grippers, with the autonomous power to operate beyond the twenty or twenty-five minutes we usually have with humanoid robots, we are going to face a lot of difficulties. The challenges in terms of software and algorithms are there, but they are not as critical as the hardware, because we have made a lot of progress in robotics; the field has matured to a point where we have much more capability in the software than in the platforms. So the development we are seeing today all around the world in terms of new platforms and new hardware is very helpful, but sometimes some of those platforms are missing key capabilities that are needed, and this is what we talked about in terms of communication. Still, we are seeing more and more torque control, force control, series elastic actuation, all of which lead to more compliance in these systems. One other thing we are seeing is a great increase in the use of robots in the medical field, with the integration of haptic devices to operate those systems, and this goes back to the idea that robots are useful even if they are not completely autonomous; that is, robots can have a lot of skills and be interfaced to humans, and the humans guide those robots at the right level, at a very high level, but are able to interact with them to make them very, very useful in surgery, in operations in remote areas, or in other applications. For the underwater project, we operate those robots from the boat; on the boat you have the same haptic devices we use in surgery, and now the operator is, in effect, doing surgery underwater.
So the whole concept is that we should be open to the idea of robots that interact under some level of human guidance: the robot is assisting the human, but the human is also assisting the robot. In some cases, in smart transportation, we are making use of haptics to alert the driver, to augment the driver, to augment the experience of driving, and all of these things create interaction between the user and the machine not only through switches and sounds, but through tactile feedback and tactile interaction. So I see a lot of new applications coming in all these fields, and the robotic technology is going into many different areas and specializing for those applications, but the underlying technology, with its components in hardware and software, is largely the same. As we approach the complexity of humanoids, we also approach models similar to those of the human, but all of this is still at a very low level compared to the cognitive aspects that are needed to build the fully intelligent system. So the way I see it is that we will increasingly bring autonomy to those machines, while there are studies at the higher level, and somehow these developments will meet at some point. What is dangerous is to do that work in separation. We really need to understand the overlap of those fields, and this is why we need more interaction between those communities: the AI community, the robotics community, the developers, the designers, and the people working on control architectures.
Advice to Young People
So the question we usually end with is: what is your advice to young people who are interested in careers in robotics?
Don't hesitate. This is a field that is a lot of fun, a lot of excitement, and it is actually amazing. After so many years, I started by saying I feel I am still working on my PhD: I really feel I am exploring a road that keeps stretching, and it is a great pleasure to do this exploration, because we are after things that relate to us, to humans, to behavior, to actions, and we are discovering things all the time. So keep the imagination, and enjoy a wonderful field: robotics.