Oral-History:Ken Salisbury

From ETHW

About Ken Salisbury

Ken Salisbury was born in Schenectady, New York. He received his B.S. in Electrical Engineering, and his M.S. and Ph.D. in Mechanical Engineering, from Stanford University in 1975, 1977, and 1982, respectively. From 1982 to 1999, he served as Principal Research Scientist in Mechanical Engineering and as a member of the Artificial Intelligence Laboratory at MIT. His work in haptic interface technology later led him to found SensAble Technologies, Inc. In 1997, he joined Intuitive Surgical as Fellow and Scientific Advisor, focusing on medical telerobotics systems. In the Fall of 1999, he returned to Stanford, where he currently serves as Research Professor in the Departments of Computer Science and Surgery, and (by courtesy) in Mechanical Engineering.

Salisbury's research interests include robotics and haptic interface technology; his current focus is on medical robotics, surgical simulation, and robot-human interaction. Some of his contributions to robotics include the Stanford-JPL Robot Hand, the JPL Force Reflecting Hand Controller, the MIT WAM arm, and the Black Falcon Surgical Robot.

In this interview Ken Salisbury discusses his career and work in robotics. Outlining his movement from academia to industry and back to academia, he describes his involvement in robotics projects, such as the Stanford (Salisbury) Hand and the Barrett arm, and various collaborations with other roboticists. He comments on the evolution of robotics and its challenges and potential applications. Additionally, he reflects on his career and his many contributions to robotics.

About the Interview

KEN SALISBURY: An Interview Conducted by Selma Šabanovic with Peter Asaro, IEEE History Center, 18 November 2010.

Interview #747 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Ken Salisbury, an oral history conducted in 2010 by Selma Šabanovic with Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

INTERVIEWEE: Ken Salisbury
INTERVIEWER: Selma Šabanovic with Peter Asaro
DATE: 18 November 2010
PLACE: Stanford, CA

Overview of Education and Career

Q:

When and where were you born?

Ken Salisbury:

Are we rolling?

Q:

Yeah.

Ken Salisbury:

Oh, okay. So my name's Ken Salisbury. I was born in Schenectady, New York. I was the son of my father, same name, Kenneth Salisbury, who was heading the steam turbines division at General Electric. So at the time I think he kind of wrote the book on how to design steam turbines, but that's in the '30s, '40s, a long time ago. Spent two years there then moved to California where my dad was a faculty member here for a couple years. Went to grade schools in the middle Atherton, Palo Alto – I'm sorry. Not Palo Alto, Menlo Park grade school area. Had a lot of fun in school. Liked to hang out with the girls more than the guys. I don't know why. <laughs> They were more interesting. The guys were into like bashing each other with toys and the girls were kind of like into more fantasy play, which I liked. Did really well in school and kind of skated all the way through high school without a lot of work. A lot of things came easy to me and I got pretty involved in student government and scouts and competitions and a lot of stuff. I was pretty aggressive about those things. Went to college at Stanford, both undergraduate and graduate. Took some years out on and off during that, but I spent about 11 years as a student at Stanford which was really a privilege and really a fun thing for me and now I'm back <laughs> sitting on the other side of the desk, but it's really good to be here.

My education at Stanford started in mechanical engineering. I like to build things. I have a pretty good physical intuition about things. Was convinced by a friend of mine, Rob Young, who's a technical guy, a serial entrepreneur who started quite a number of games, built companies. He enjoys them. I was convinced as an undergraduate to switch to double E so I could learn some things that I didn't really know so much about. Mechanisms I kind of knew already. I built lots of stuff as a kid. So I got my BS in electrical engineering, then switched back to mechanical engineering because I really wanted to dig deeper into the science, the physics behind mechanical things. This kind of cycle of build something intuitively and then go back and go, "Well, why did it work?" or "Why didn't it work?" So go to the analysis from whatever discipline is appropriate and then build it again based on what I learned in there. And then my Ph.D. was with Bernie Roth in mechanical engineering and ultimately resulted in a thesis that looked at the design and utilization of robot hands, and that became the three-finger hand – initially the JPL/Stanford Hand – which, not by my fault, people now call the Stanford – sorry, the Salisbury Hand. There's the real one <laughs>. From there I accepted a post-doc at MIT to work in their AI lab. A one-year post-doc turned into a 17-year adventure with a lot of really good folks, and then I came back to the Bay Area on the invitation of Intuitive Surgical, where I became their science advisor and fellow for four years and kind of helped that company get started by making technical contributions, licensing some of my patents, and finding good people to help them out. And then came back to academia, ultimately as research faculty in mechanical – sorry, not mechanical, but divided between computer science and the Department of Surgery, because in the interim four years I'd gotten involved in medical robotics, and I also have a courtesy appointment with mechanical engineering because I know the department well and I advise quite a few students there. Which brings me here, to my sort of latest lab, the bio-robotics lab. It's kind of in the Clark Center, but it's kind of jointly supported by my appointment in computer science and the Department of Surgery, so it's kind of an eclectic group of people.

Developing the Salisbury Hand

Q:

Now that we've kind of gone through the whole trajectory, how did you get to building a hand in the first place? Why were you interested in that?

Ken Salisbury:

The first finger I built probably was when I was six years old. It goes way back. I've always been interested in things having to do with hands, ranging from magic tricks to playing musical instruments to knot tying, teaching scouts how to build bridges and stuff, and so hands in a variety of ways have always been an interest of mine. I worked with my dad after he had a stroke and tried to help him restore functionality in his hand, and was kind of toying with physical therapy versus engineering for quite a while, partly because of the emotion – the psychological aspect of supporting a person going through recovery – as well as the mechanical sense that I had about how you should be moving things to stimulate different nerve paths or muscles. But I ultimately got seduced by robotics because it had a lot of gadget components to it, and I was entranced very much with motors and sensors and computer control of them, and I kind of saw that coming early on. So switching between Double E and ME kind of provided a foundation which I sort of envisioned would be very important for me in the long run, and that's certainly been borne out. A lot of our devices in my lab here and previously have motors and sensors and computers connected to them. I think my best contributions have to do with creating interesting mechanisms that have – they're controllable, they have controllability, so to speak. They're observable, or have observability, which means you can look at the state through sensors, and you can have sufficient control to make them move to a new state. That kind of comes from a controls background, but it exhibits itself in design of mechanisms so that they're designed right. I really firmly believe that a mechanism's got to be really, really good mechanically to have good performance. Some controls folks just say, "Oh, we can write an adaptive control algorithm to deal with it," but if the mechanics ain't right, it ain't going to work <laughs>. And that's a theme that has been repeated many times in my work. I forgot what the question was.

Q:

We were just starting to –

<laughter>

Ken Salisbury:

You can cut that part out.

Q:

Oh no, that was great. How you got interested in hands and what kinds of problems you were kind of looking to solve by working on hands and developing a new hand.

Ken Salisbury:

Yeah, good question. I mean ultimately, from some prior experiences with NASA and prosthetics and dealing with some handicapped kids, I began to realize that there were some interesting contributions to be made in designing robot hands, and there've been many before me. Many of them were anthropomorphic. A lot of them turned out to be one-trick ponies. They could grasp something or they could do something, some manipulation in a limited sense, so I wanted to step back and kind of look at the science behind it. How should a hand be built? What are the tasks that we'd want it to perform that are interesting and useful, and then how do you put a control system around that to allow you to execute those tasks? And to be honest, when I began this thesis work with Bernie Roth, intuitively I kind of knew how the hand should've been built, and then I had to kind of backfill and justify what I did, which was a very good learning experience and is kind of a style of research that I do now with my own students. If you've got intuition, go for it and then come back and figure out what you did. In fact, one of my mottos is ignore the prior art for a while, so that people think more freely about how they can solve the problem without accepting the constraints that others have assumed, and then be duly diligent and look and see what's been done. So what resulted from my Ph.D. work was a three-finger hand that had sufficient mobility to both grasp an object securely and control the quality of the grasp. If somebody's trying to push the object out of my hand, I kind of described how to squeeze more tightly to stop that from happening, and the hand had the mechanical ability to do that, and the math told us how to respond to these different disturbances. Secondly, it had the ability to arbitrarily move objects that it grasped through small rotations and small changes of orientation. So those were kind of the high-level goals, which we mostly – partly, I would say – succeeded in achieving. It kind of became a platform and an icon at that time in dexterous hands. It was not intended to be anthropomorphic, though it had fingers and tendons, but its kinematics were quite a bit different. It kind of looked like a three-finger hand, but it could also curl backwards and do some really bizarre things. It was kind of fun watching people watch the hand manipulate in the demonstrations, and then it would kind of warp out and do something non-anthropomorphic, and people would get all upset about that because they would be engaged and then be surprised. I formed a small company to sell these hands because people started saying, "I want one of them," and we sold about 20 of them, so at that moment for me it became my first platform that went out to the research community, and it has a life of its own. People still write papers about it, and it was kind of a good idea. I had no idea it would have the impact it did – whether it was due to deep insights or just being first, or some cool ideas, I don't know – but it's been fun watching its evolution and its influence on the field.

Q:

Who else worked with you on the hand? So Bernie –?

Ken Salisbury:

Well, that's a good question, and the kind I like to answer. It was an opportunity with NASA and the Jet Propulsion Lab to get seed funding for a project, and I hooked up with a fellow, one of my several mentors, Carl Ruoff, and together we, if I remember correctly, pitched an idea for designing a new generation of robot hands for NASA, but also other applications. And so he and I, with Bernie Roth as my advisor, brainstormed and reviewed and went through the ideas. I did a lot of drawing and sketching. The two of them, Bernie and Carl, gave some sort of high-level goals and some low-level details, and then I kind of carried the ball from there and came up with this hand, built some prototypes. Now I get a chance to show one of my early prototypes, if it doesn't fall apart. I like, in the spirit of some design skills that I was taught at Stanford, the idea that the earlier you reduce an idea to a physical embodiment, the sooner you get the insight about what is the appropriate problem to solve. So this is about an afternoon's work with a band saw and a belt sander, and it really gave me a physical entity to help me figure out what was the right thing to do. I tried eating my dinner with this. Was not very successful. My housemates laughed, but it definitely helped, and ultimately, after many iterations and consultations with students and Professor Roth and others, we came up with this device, which has nine degrees of freedom. It has nine joints that can be independently controlled, and they can go one, two, three degrees of freedom. This turns out to be the right number to controllably grasp something and then make the small orientation changes that I talked about, and it worked.

I forgot to put a palm in it. That was a big mistake. It turns out palms are really important. Over time we added sensors to it. The first Ph.D. student to begin with me developed this force-sensing finger so that as the hand grasped things, we could have even more fidelity in the touching of objects to confirm grasp and ultimately to determine material properties. We could stroke across a textured object and determine its spatial frequency, the vibration, the friction, the stick-slip behaviors. There's a whole lot of information you could get out of this, and this is different from the more classic tactile sensors, which would have an array of single points measuring the distribution of force. This measured the net force on an object, which to me was more directly useful in grasping and maintaining grasps. I think I said this was developed by David Brock, who was my first Ph.D. student. Dave went on to found a couple companies. A lot of my students have gone on to found companies. It's sort of fun. We have a little more entrepreneurial bent here. Though some of my students have gone on to be faculty members, a lot of them have tended more to take their technologies into companies and exploit them in the real world, and that's one of my charges with my students: make something that we can continue to use, whether it's here in the lab or via licensing or some other means of transferring it out. Yeah, so it took on a life of its own. There were 20 of these around the world, and people started sending me papers about the hand doing pretty interesting things, and it was satisfying and sort of inspiring, and I continue to be interested in hands. We have a new project looking at a low-cost, sort of high-performance hand for a DARPA project. So it's sort of fun for me to revisit something I did 20-25 years ago, and curiously the co-investigator on it is a recently graduated student whose name is Curt Salisbury. So I kind of fantasize that this will be the Salisbury Mach 2 or something like that. <laughs> He's a fabulous designer, and we are having a lot of fun designing a new generation of hands.

New Generation of Hands

Q:

So what are some of the challenges in this new generation of hands? Where are you taking it now?

Ken Salisbury:

Well, it's for a DARPA project where they want to develop hands that can be used to make safe or defuse IEDs, improvised explosive devices. That's sort of their immediate driver, and that's certainly a good practical goal to follow. I like having a concrete goal so we know when we've succeeded, rather than the abstract "make hands more dexterous." What does that mean? But I'm also interested in it because it relates very much to my interest in prosthetics, sort of the hopefully low-cost end of these technologies. Instead of being military, it's hands for the rest of the world, and so learning about mechanism design, details, finger shapes, frictions, surfaces, kinematics, how many joints – lots of things from DARPA are relevant to my interest in low-cost prosthetics. So that's sort of a happy convergence, coming back from my early work in wanting to do physical therapy, now bringing in knowledge from government-funded work into spinouts to underserved populations. Okay, where are we going from there?

<laughter>

Q:

We can go back. I was just curious. Who works with you on that one?

Ken Salisbury:

Well, on the high-tech end it is Curt Salisbury. He's the principal investigator on that project, run through Sandia National Labs, and then we have a good subcontract to myself here at Stanford, and here I have a couple students working on it, Rob Wilson and Morgan Quigley, covering mechanical and electronic interfacing. Oddly, DARPA spec'ed that this hand should be a retrofit for the Barrett hand. They wanted to kind of divide the task at a certain point. Well, the Barrett hand came out of our work at MIT, and so suddenly my arm is now becoming – that arm is becoming the standard platform for carrying around the next generation of robot hands. That arm, initially called the WAM, meaning Whole Arm Manipulation, became commercialized by Bill Townsend, who was my first Ph.D. student to graduate at MIT. He formed a company called Barrett Technology in Massachusetts and now sells the arm, called the Barrett arm, and the "Guinness Book of World Records" said it was the world's most dexterous arm at one point. <laughs> Wonderful or dubious distinction.

Q:

We've used the Barrett arm. We were making robots that play shadow puppets with people, so we were actually using that.

Ken Salisbury:

You saw that with them or –?

Q:

Yeah, yeah. We were using it in a project to –

<laughter>

Ken Salisbury:

So let me make a comment on that. It's easy to seduce people with an arm that kind of wiggles and does interesting things, but moving in free space is easy. The thing that's hard and still not well understood is making contact and controlling physical contact and interpreting it. So if you're going to pick up something and you don't want to slip or you want to turn the doorknob, how do you monitor that and cause it to happen? I mean it's very classic in robotics. People make a complicated thing and it wiggles around. It's like, "Well that's great, but what are you going to do with it?" I don't mean to deprecate the complexities of other people's ideas, but where the rubber hits the road is when you grab something and do it intelligently.

Work at MIT

Q:

So to go from Stanford – since you just mentioned your Barrett arm and MIT – how did your transition there go, and what kinds of projects were you involved in there?

Ken Salisbury:

At MIT?

Q:

Mm-hmm.

Ken Salisbury:

Well, read my –

<laughter>

Q:

Yeah, but we want the stories.

Ken Salisbury:

Yeah, yeah. So there were many. I mean this has been true of my life. I had the opportunity, or the blessing, to work with a lot of really clever people. I tend to take on students who are rather independent, sort of mavericks. I'm not a hard-driving, Gantt-chart-oriented guy. I mean I have to satisfy sponsor requirements. A lot of my students who are on fellowships get a little more free rein, and that's where things really happen. I think I have kind of a good gut sense of who's going to work out and fit with our lab. So at MIT, what else did we do? I brought my hand in from my Stanford work, began commercializing it and selling them. Worked on the WAM arm with Townsend. Worked on the touch-sensing interpretation of the force-sensing finger I showed you. Worked with Brian Eberman, who used that force information to deduce material properties such as texture and stiffness and other things, and then used that to detect events such as collision or the starting or stopping of sliding and other things. It was very deep mathematical analysis and very interesting, but really complex, and it's at that point I got a little bit frustrated with robots perceiving what's going on. I don't know if we lacked the sensor capability or the processing capability, and I decided to flip the problem around and start looking at haptics – that is, the idea of using force feedback and something that tracks the motion of your hand to make you feel like you're touching a virtual object. So you might see on the screen a wall; when you're moving this device around and your icon on the screen touches it, you feel the force. So we brought in a second modality for interaction with geometric information, and that work was inspired in part by my work with Thomas – sorry, by Mandayam Srinivasan, who runs the Touch Lab at MIT. He kind of came at it from a psychophysics point of view and understood what people needed to detect information, touch and material properties of objects, and some ideas of the precision quality needed in that.

So I sat down with Thomas Massie, who was an undergrad at the time, and gave him the challenge of building a device that let me feel things that were on the screen, and Thomas was a very clever designer and within a week had a one-degree-of-freedom device that could do exactly that. You could make it move and feel a virtual wall, and it's like, yeah, that's good. A quick prototype, proof of concept. I think we got a good idea here, and within a couple more weeks he came back with a wooden model of the PHANTOM haptic interface, which ultimately became, to me, really the first practical force feedback device. It was simple enough, and instead of having complexity, many different degrees of freedom, it had only a few degrees of freedom and traded that for fidelity, bandwidth, sensitivity to force – I often like to trade complexity for fidelity – and that worked and that caught on. A lot of people started using the PHANTOM to interact with virtual stuff, ranging from people who wanted to sculpt virtual stuff to psychophysicists who wanted to do controlled experiments to understand what people could perceive in various ways. The company he founded, partially I helped with that founding, was SensAble Technologies, which continues on now. They're in Woburn, Massachusetts. They've branched out into quite a number of applications, ranging from sculpting, as I mentioned, whether it be for the jewelry industry, sculpting faces, shoes, sort of organic designs, to automobile design. The ability to get people's hands back involved in the shaping of things, to learn sensory motor skills so that they could express what it is they wanted to show, plus the advantage of computer tracking, modification, mirroring, smoothing, and a lot of other things. So that became pretty interesting. That company has now branched into SensAble Technologies Dental, or something close to that, and so they're looking very much at the dental market, perhaps initially to train people to do procedures, but ultimately to help them sculpt different appliances that go into the mouth – bridges, prostheses of various sorts – and that seems to be catching on. The PHANTOM haptic interface has been used a lot by medical researchers, people who want to enable doctors to feel and train or plan or ultimately rehearse medical procedures with the inclusion of force interactions, which is pretty central to a lot of procedures. Learning to discriminate between a cancerous tumor, which might feel crunchy, versus a benign one, which might feel elastic. Sure, they can do that with cadavers or with real patients in some circumstances, but to be able to virtualize that, just like flight simulation where you can be exposed to a variety of circumstances, really is a big win. So in our work here currently we're looking at simulation for rehearsal, which I think is the most interesting one because it improves, if it works, the outcome of a particular patient. So if you can show safety and efficacy for an individual, now you can say this is insurance reimbursable, just as a scan is, which helps diagnosis and procedure performance. So that's one of my current thrusts in surgical simulation: getting the fidelity good enough that it works, and then validating that having some rehearsal time before doing the actual procedure will improve the outcomes.
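The virtual wall he describes is the canonical haptic-rendering example: the device's position is sampled at a high rate, and whenever the user's point penetrates the wall, a restoring force proportional to the penetration depth is commanded back to the motors. A minimal sketch of that idea follows; the stiffness value and simulated positions are illustrative assumptions, not the PHANTOM's actual software.

```python
# Minimal 1-DOF "virtual wall" haptic-rendering sketch (illustrative only).
# A penalty force pushes back whenever the probe penetrates the wall surface.

WALL_X = 0.0        # wall surface position along the device axis, meters (assumed)
STIFFNESS = 1000.0  # N/m; a stiffer virtual wall feels harder (assumed value)

def wall_force(x):
    """Return the restoring force on the probe; zero while outside the wall."""
    penetration = WALL_X - x        # positive once the probe is inside the wall
    return STIFFNESS * penetration if penetration > 0.0 else 0.0

# In place of reading a real device, evaluate a few probe positions (meters).
for x in (0.002, 0.001, 0.0, -0.001, -0.002):
    print(f"x = {x:+.3f} m  ->  force = {wall_force(x):6.2f} N")
```

On real hardware this computation would sit inside a loop that reads the encoder and commands the motors roughly every millisecond, which is why haptic update rates near 1 kHz come up so often; the graphics can refresh far more slowly.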

So continuing the history at MIT, we developed some hands, some curling hands that Helen Greiner developed – she was the founder of iRobot, cofounder – which is something I just dug up today. It's not a well-known publication but it has a huge number of insights in it. I'm guilty of not publishing everything that we do <laughs>, or they end up in obscure places. So I really like pulling out her thesis, or some other students' theses, and spreading them around to bring people up to speed. And that got me involved in more hand design, more kind of simple grasping hands that could pick up arbitrary objects. They could not manipulate them in arbitrary ways, but they were pretty good at grabbing onto on-size objects. One interesting spinout of the WAM arm, which was designed to have good force controllability – it was designed to be able to reach through gravel and find big rocks for NASA, or to do other things like open a door that has a constraining motion axis. It turned out the same design principles that led us to high force controllability and fidelity also meant that the arm could go really fast. It had very high accelerations. We designed it to have maximum power transfer from motors to endpoint, which means maximum acceleration. So this arm could move like a bat. It was great. So, working with Professor Slotine, Gunter Niemeyer, and Jesse Hong, they developed a system that could catch a ball, so you could throw a ball at it. We had a couple of cameras that tracked it and predicted the trajectory, and the arm would go up and reach the ball. It got up to reach and grasp the ball, and we got to about an 80 percent reliability level on that. One of the fellows in that project who designed the gripper was Akhil Madhani, who's now at Walt Disney Imagineering's R&D group designing really cool robots for Disney. We overlapped there in that he wants his robots to have human interaction, as do we.

So how do we manage touch interactions in a safe and attractive way? Akhil also designed an early prototype of a surgical robot, which he called the Silver Falcon and then the Black Falcon, two generations, and ultimately those were – let me back up. To Akhil's credit, he said from the very outset, "Anything that gets invented here, we're going to share equally." So there's sort of no ego problem on the line. And this is always true with students: who gets credit for what? And I'm generally pretty generous about that, but to have a student come forward and say, "We're going fifty-fifty on this," is really great. And so he did design an early prototype surgical robot, some intellectual property from which was licensed to Intuitive Surgical and became part of their portfolio. Ultimately, Akhil and I became consultants with Intuitive. And ultimately, it was my relationship with Intuitive that got me to move back to California. I had a four-year sabbatical with them as I rekindled my relationship with Stanford, and then ultimately moved from Intuitive to Stanford as research faculty. And ultimately that became joint in computer science and surgery. So, anything else at MIT?

Q:

You mentioned wanting to work on this problem of the robot and people actually coming into contact. A lot of the things you mentioned have to do with touching – getting balls, or grabbing balls, or touching more physical objects. Were there any particular projects that you were working on that had to do with direct contact with people?

Ken Salisbury:

Yeah, some. Some. I like to explore things to see what really is the interesting problem, so one experiment we did was to develop the ability to shake hands with people. And the experiment was simple. We had somebody shake the hand, and we told them to have one of four different affective states. Be angry. Be anxious. Be inquisitive, or something. I forget what the four were. And they were somewhat thought out to span a range of emotional states. And we recorded some aspects of what they were doing as they shook hands with this robot, which happened to be a PHANTOM with a very simple hand on it: the impedance of the motion, spectral components of how fast they moved, the duration of that. And then we played those recorded behaviors back to another set of subjects, and had them pick one of the four. What attitude is this thing shaking hands with? And actually it was pretty accurate. I can't give you the numbers, but it was well better than fifty percent. Pretty simple sample, pretty simple paradigm that we were looking at, but it was sort of a hint that there's at least a simple communication that could go on through touch. Since then I've kind of developed a taxonomy, which begins with touching and being touched.

So you might want to play pat-a-cake with a robot, or just simply touch its arm and have it get out of the way, which are both safe things and also emotional. That it's not forcing itself on me; it's sort of accommodating to me. After touching and being touched, there's taking and giving. So if the robot is going to hand me something, how does it know when to let go? What does it sense about our physical interaction that gives it the confidence to let go and know that I'm not going to drop my bowl of soup on the ground? And you can imagine all sorts of human customers that might need that, ranging from people in a store to handicapped folks to elderly folks. The third level is leading and being led. So I might be using the robot to help me carry cinder blocks across a construction site. I'm leading it. How do I get it to follow me, appropriately using its extra capabilities – strength, collision avoidance, other things – to follow what leading information I'm giving it by pulling on it? Being led kind of touches on the elderly, or impaired, population: helping me find my way down the hallway. If I want to stop and talk to a friend or get a drink of water, how does it know? So all of these involve touching and interacting, and not just yes/no touching, yes/no force in this direction, but impedance, vibration, frequency content of the motion. That's a totally open area. And there's probably more to that taxonomy not yet fleshed out. But that is one of my missions right now: to understand that vocabulary and the syntax of it. Try to sort out what we can so that the robots can be responsive and communicative in a physical sense with people. Okay?

Q:

Alright. Great. <laughs> You seemed to be thinking of whether there was something else, so.

Ken Salisbury:

Well, no I get into something and then I forget what the question was. You know I get on a thread that’s like, “Oh yeah this is cool.” And then it’s like –

Work at Intuitive

Q:

Yeah, no they’re all great. Don’t worry. So we came from – we were going, I think, from MIT to – how many – well, we can look at that. Never mind that. I was going to say how many students have you had at MIT and that kind of stuff. But that’s easy to check out. So when you were going from MIT to Intuitive why did you decide to – kind of what were the specific problems that you wanted to deal with at Intuitive that made you want to make that switch?

Ken Salisbury:

Oh yeah, no, perfect. Well, again, I've always had an interest in dexterity and hands. And here was an opportunity to enhance a skilled surgeon's ability at a different scale. So technically, that was pretty interesting. Second, I'd been working on a lot of moderately abstract aspects of this. How do you control it? How do you build it? Yeah, picking up rocks, catching balls, kind of gave us a hint of a concrete application. But here we suddenly had a couple guys – one a dear friend from my freshman dormitory, and another a very smart, experienced medical entrepreneur – who said, "We're going to do this thing. We're going to take telerobotics and make it practical, FDA approved, and we're going to use it to fix people's lives." And for family reasons I wanted to come back to the West Coast, but I also saw this as an opportunity to kind of get on this train – I don't know how to say – they had raised 100 million dollars. They had made this commitment. We're going to do this amazing thing. And it was moving like a train going downhill. And I thought, okay, this is an interesting and instructive ride. I want to get on board. I ultimately was not a founder of the company. But I feel very happy that I was able to steer them in some of their early decisions. And then help them make connections with really capable people and license some technology to them.

So yeah, I was intrigued to have an opportunity to apply my fifteen, seventeen years at MIT of sort of developing a theoretical framework for doing these kinds of physical interactions, human-machine activities – apply it and get some feedback. And also know when I had succeeded. When I saw surgeons performing heart valve replacements, I felt pretty good. You know, it's like, okay, this stuff is relevant. And, of course, they had a huge team at that point who filled in all the stuff that I didn't know anything about. I'm pretty good at figuring out problems, but I'm not always good at answering them. So that's why I work with students and colleagues who are much deeper in their particular disciplines.

Q:

Did that particular application give you kind of new questions that you needed to answer?

Ken Salisbury:

Sure. I mean lots of questions about how I should map a human's ability to control, and need for receiving information, into the activities of a remote – not necessarily distantly remote, but maybe at a different scale – set of useful actions. So I worked both on the master control end of this, developing the fancy joysticks that are needed to control the remote robot. I had some activity with the designing of the grippers, which are sort of fancy laparoscopic tools. And I was involved with lots of design reviews of the overall system and the performance. How much bandwidth do we need? How much dynamic range and force? And so it gave me a chance to apply things that I kind of knew how to do from my work at MIT but didn't really have a focus of application for. And ultimately over some years that all came together, again with the help of a lot of really smart people and some good support from the VC community, into a robot that could really do surgery and ultimately improved outcomes. They looked at cardiac bypass procedures early on because that was thought to be a good market. Difficult market. Turns out it's hard to get behind the heart and do some of the other things. But you can at least avoid the sternotomy and get to vessels on the front end of the heart. And since then Intuitive has found other applications. The prostate – radical prostatectomies, removing the prostate gland when it's cancerous – and other procedures which really capitalize on this really good human-machine interface, where your visual and your haptic coordinates are co-aligned so that forward means forward, rather than the old-style laparoscopy where you're pushing on a tool going into a body and over here the image is going that way. So it's a difficult mapping between sensory input and motor output. So it's been exciting to watch that company grow. They've distributed a lot of robots that are doing real surgery frequently.
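One common way to think about the master-to-slave mapping he describes is motion scaling with force reflection: the surgeon's hand motion is scaled down before being sent to the instrument, and sensed tool forces are scaled up before being displayed back at the hand controller. The sketch below is a conceptual illustration under assumed scale factors, not Intuitive Surgical's actual control scheme.

```python
# Conceptual sketch of motion-scaled, force-reflecting teleoperation.
# Scale factors and names are illustrative assumptions only.

MOTION_SCALE = 0.2   # 5:1 scale-down: 10 mm at the master hand -> 2 mm at the tool tip
FORCE_SCALE = 3.0    # amplify small tool-tissue forces so the hand can perceive them

def slave_motion(master_delta_mm: float) -> float:
    """Scale the surgeon's hand motion down for the remote instrument."""
    return MOTION_SCALE * master_delta_mm

def master_force(slave_force_n: float) -> float:
    """Scale a sensed instrument force up for display at the hand controller."""
    return FORCE_SCALE * slave_force_n

print(slave_motion(10.0))   # 10 mm of hand motion commands 2.0 mm of tool motion
print(master_force(0.5))    # 0.5 N at the tool is rendered as 1.5 N at the hand
```

The engineering questions he raises, bandwidth and dynamic range, are about how fast and over what range of forces a loop like this has to run to feel faithful.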

Return to Stanford

Ken Salisbury:

So ultimately I wanted to come back to Stanford and sort of begin the next generation of robots, haptic interfaces, human-machine interaction, various combinations of those things. Began working on lower-cost robots. I got a little tired of the elitism of the hundred-thousand to half-a-million-dollar robots that were being sold to a few labs, and kind of wanted to get the cost of robots down without giving up performance. And that's still a mission that I'm on. I think we've got some good ways to do that, and I'm excited about the next generation of those robots. Another thread that had grown out of my work at MIT was wanting to build multiple-arm robots that were on a mobile base. And we started working on that here with Keenan Wyrobek and Eric Berger, two Ph.D. students with me at the time, and built a prototype, which became called the PR1, or Personal Robot 1. And then we started shopping this idea around to try to find support for the next generation. And after about forty pitches – giving the pitch to potential angel funders or sponsors of some sort – they found a really good donor who got the idea and saw the value of building a platform, rather than building a three-month, explicit, task-oriented something, so that we could really change the climate of this research community. And that resulted in Willow Garage. And they're going like a house afire. They've sent out ten or twelve robots to some of the really best labs around the world and have galvanized that community with open source software and a standard platform.

And I think that's the tip of the iceberg. They're not sitting on their laurels with just those outputs. They've got more coming. And they have a very generous attitude. The open source component of it allows researchers from all over the world to contribute and have access to colleagues in slightly different disciplines. And instead of each camp developing and hiding its own developments, now we're starting to pull them all together through the ROS, Robot Operating System, project. So, in my lab right now, it's a pretty tight group. I think we have eight researchers. One component of our work is in surgical simulation, where we take scans, potentially multimodal, such as MRI, CT scan, and ultrasound, and fuse them into a much richer representation of the human head, in the particular case right now. So you can see soft tissue and hard tissue and interactively try out different procedures. Like if you're trying to do a sinus surgery or a transsphenoidal resection, where you need to figure out what's the best approach so that you can see what you want to see. Then they can rehearse that and try it out so that it should be familiar when they actually go into the real patient to do the work. And we're making good progress on that. We haven't done clinical experiments with it. Assuming some funding comes in, which we think it will, we will start doing some clinical experiments to validate that this is actually helpful.

Surgical simulation, or surg-sim, has been around for a long time. And it's sort of been expensive and not very capable. There's not been a lot of success commercially. They haven't become self-sustaining except in a few cases. So I'm trying to make it good enough, capitalizing on better hardware, better algorithms, and some of our insights about where it should be applied, to really help it become useful. And part of the magic of being supported by computer science and surgery is that I have access to people in both areas. And I'm physically located between those two entities. So I have a number of surgeons – Dr. Nick Blevins, who's an ear surgeon, Dr. Sabine Girod, who's a craniofacial surgeon, neonatologist Lou Halamek, and a longer list. So we stay in pretty tight connection with people who have real clinical needs in this area, but also have enough of an understanding of technology that we can overlap, and they can say, "Well, couldn't we do this?" And I can say, "Well, technically this is possible, or not." So a lot of what I do is make connections between people who have abilities and people who have needs who can talk on the same wavelength. And so our surgical simulation work has been going on for quite some years now. And haptic rendering – there are some new breakthroughs on that, so we can feel more complex interactions with better fidelity. The visual rendering, it looks like we're going to be making some progress on that. Surgeons really like to see how things look – the color, the change when you put pressure on a blood vessel, an area with vascular flow through it, and other cues. So we're trying to incorporate these from either real patient scans where we can, or through prototypical models, or morphing between the two. And that's led to the development of next-generation haptic interfaces.

For the craniofacial work in particular, we wanted really good fidelity so that the surgeon could feel the texture of one set of bones versus the mobility of something else. And fidelity here means being able to sense or detect very small forces versus very large forces. And the wider that range is, the better dynamic range you have. That's a very important metric in these haptic interfaces. There's also spatial dynamic range, and then temporal – what kind of bandwidth. And so sort of finding what's the necessary volume within that three-space we should design these things for – that's kind of what we explore. And I'm very interested in sort of making what I call high-fidelity robotics, which means I want to expand that volume of capabilities and sort of make sure it intersects with practical needs as well as technical capabilities. So that is a mission that we are on now. So that's the medical simulation component of our work; it covers some of the haptic interface design. And then on the robotic side, we're working on, again, high-fidelity robots, or haptic interfaces. So really taking a fundamental look at that. We have some applications in more practical, near-term use, such as a telerobot for positioning ultrasound sensors on a patient while they're receiving radiation therapy. If they move and their prostate, for example, gets out of the range of the beam, you want to turn off the beam. So this is to give real-time feedback to make sure that that works. And then that problem in spades happens if you're irradiating more mobile tumors – the liver, abdominal ones. And so you really want to track where it is and either steer the beam or turn it off when you get out of range. So it's a nice application of a clinical need that we had a sufficient technical solution to address.
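The force dynamic range he is describing is commonly quantified as the ratio of the largest force a device can render to the smallest force change it can resolve, often expressed in decibels. This is the standard definition, offered here for clarity rather than taken from the interview:

$$\text{dynamic range} = 20\,\log_{10}\!\left(\frac{F_{\max}}{F_{\min}}\right)\ \text{dB}$$

Widening that ratio, along with its spatial and temporal (bandwidth) counterparts, is what he means by expanding the volume of capability.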

An example of the synergy that comes from our mechanical plus computer science overlap: one of my students developed a technique for watching the image of fertilized embryos as they mature from a very primitive state to the more mature and – how to say – robust form, the blastula. This is relevant to in vitro fertilization, initially, where you want to implant the growing embryos – fertilized embryos – as early as you can so they get into the right environment and mature healthily, but it was very difficult to tell, previously, which of those would mature into a healthy next stage. And so, using some computer vision techniques, he was able to track the maturation of these cells in multiple cases, and ultimately find predictors of survival and maturation. That ultimately became a company he cofounded, looking at IVF. And you know he's the technical lead there. So it's one of many kind of happy stories where something came out of the blue because my students took the initiative to find somebody and realize there's this solution here, and applied – in Kevin Loewke's case, applied the math and the mechanical abilities that he had to find a fundable, sustainable technology. Yeah, a lot of my students have formed companies. Thomas Massie, going back to the PHANTOM – whether it's because they're intrinsically that way, or because of my nudging, a lot of my students build devices that become platforms for other people. So the PHANTOM became really a worldwide standard for that. The WAM arm continues to be the arm that people use if they want force control ability. The surgical robotic stuff used by Intuitive Surgical, the personal robot activity now at Willow Garage. So I get a lot of vicarious satisfaction out of seeing these things live on, rather than just disappear as pieces on a shelf someplace.

Development and Influence of ROS

Q:

Do you get feedback from the communities that use them that also feed into your design at all?

Ken Salisbury:

Two kinds. Yeah some people come back and say, “Could you make it so it does this?” And you know I try to assess that. Is this a population that really would benefit from this? Or is it a one-person need? It’s always useful to hear people’s feedback. Sorry there’s another dimension to that which has slipped my mind. What was the question?

Q:

It was how the feedback from people who use – since your platforms are so widely used, whether the kinds of uses that they have or things that they say about them end up giving you ideas for more or further development?

Ken Salisbury:

Yeah, so there are people who come back with very good ideas for further development or potential collaborations. I recently met a faculty member here, Professor Howe, who’s developed very high fidelity spatial and temporal resolution tactile sensors. It’s the first one that I’ve seen that makes sense for a robot. And it also has a lot of other applications where you might want to build a glove where you can track people’s manipulation of things. And it’s an example of the kind of synergy that works for me around Stanford. I sort of find people who are doing interesting things and say, “You could apply it over here.” I certainly get some category of sort of wacky people who call up and they say, “Can we use your robot in a theme park in XYZ location, and make it throw balls in a contest.” And I was like, “Eh…” They probably have a lot more money than NSF, but I don’t want to go quite yet to the entertainment industry. I’m interested in the human-machine interaction, shaking hands and all of that has sort of technical meat to it that I’m interested in, but replicating something we’ve done for entertainment – eh, not quite ready for that.

Q:

Do you think if something like ROS had been available earlier in the history of robotics that there would have been faster development of – it was really a problem over time of trying to transport code that you wrote from one robot to another robot, and reinventing the wheel every time.

Ken Salisbury:

Yeah, maybe. I mean we've certainly seen examples of the value of open source software, and of sort of inspiring and cultivating communities to collaborate with each other. I mean Emacs is one of them. Visual toolkits. There are a lot of pieces – in other disciplines – of this kind of shared resource. In robotics I don't think it could have happened much earlier. I think it works with Willow Garage's work because they simultaneously have a platform, so you can – if you use ROS on this platform, you can compare apples with apples, and you can each share the developments of other researchers in your field. I mean Microsoft tried to do this. It didn't work. It hasn't really caught on. And whether it's because it was improperly structured, or because there wasn't enough technical capability coupled with a decent platform, somehow we've crossed the threshold of where this can be done. You know, some of the problems like speech interpretation, or image segmentation, object identification, grasp planning – there have been lots of little niches of that work. And so by saying, look, all you guys can share each other's work if you adhere to a certain programming interface – an API, application programming interface.

So it sort of provided, on one end, a common interface so that people could contribute and use, and then at some lower level it takes care of some of the more difficult machine-specific control needs – safety, stopping when this overloads or this force overloads. Ultimately there could be collision detection built into that, so that you just can't drive it into the wall. And it allows for incremental improvement. So your vision algorithm can now deal with specular reflection: put that in, and other guys can just update it and they're able to take advantage of it. So I think it really does galvanize and simplify some of this work. And I'm not sure it could have happened much earlier, partly because it had only just become technically possible, and partly because I don't think the research world had gelled into the multiple disciplines that we talked about. And then the inspiration, really, of Eric and Keenan and their sponsors that this is worth doing. And it seems to be borne out.

Q:

The PR2, though, is once again one of those platforms that are very expensive and only kind of available to a small group of people, so is there any inkling of potentially making some platforms that are more widely available because then you could get more people in the community involved in development?

Ken Salisbury:

Sure. And I think that's a great idea. I mean, in the same way with the early PC being a platform. It was low cost enough that somebody working in their garage could get involved and develop spreadsheets and word processors and things that others hadn't imagined. Yeah, right now, I mean, at least Willow has given away ten or twelve of these robots. They've seeded the research community and the industry to some extent with the platform. You know, if you're making ten or a hundred of such robots, you don't get the economies of scale, but PR3 is going to look at that. I mean we here in my own lab and others are looking at cost reductions – there are some interesting ways to do that – as well as performance enhancements, which I think are needed. How do you get all the other parts –? It's a twenty-five-degree-of-freedom robot. Yet, if you look at the cost of Townsend's WAM arm, which is a wonderful arm and has changed the face of this kind of research, that goes for slightly under 200 thousand, maybe 175. The Willow robot – I don't think I'm speaking out of turn – if you ask for a quote now, it's about 400 thousand, but instead of getting six or seven degrees of freedom, you're getting twenty-four, twenty-five, plus a community of people that are really sharing the work. So yeah, it's still not something your high school can buy and build a class around, or even a university can buy and build a graduate research program around, unless they're lucky enough to receive one. And there aren't many graduate programs around the current robots –

<break in recording>

Ken Salisbury:

– is a chicken and egg. It’s got to show some success to encourage higher quantity production of it, to enable broader community to contribute to it. And I think they’re on their way to do that. They certainly want to make it available. And I think we’ve begun on a good spiral of that chicken and egg cycle to show utility, show desirability, increase quantity, maybe find some clever solutions to making it less expensive. The Willow Garage model, because it’s foundation supported, it’s not in itself a sustainable industry, except for the generosity –

Medical and Clinical Robotics

Ken Salisbury:

So when I was at MIT I worked a lot on kind of basic aspects of robotics: how to use the robot hand to manipulate kind of a generic object, how to make an arm that could have good force control, a variety of other things, haptic devices that let you feel virtual stuff. When I made the transition to working with Intuitive Surgical, suddenly I was in a situation where they had a real commitment to making something that was clinically relevant, and that was intriguing to me – to kind of join up with some very experienced company builders who made a commitment and borrowed a lot of money to fulfill it, to build a robot that would really work on people. So it was very gratifying to be in a position to take this stuff that I'd been learning and suddenly find it was very relevant and very helpful to this team. And so, doing that for four years, I had a lot of chances to see what was going on in the operating room, how the technology could make a difference, where it stumbled and where it didn't, and also sort of the cycle of prototyping, of assessing your successes and failures and redoing it. That's the thing we do in design all the time, but when you've got clinical applications in the loop, where there's a cadaver or an animal or a human or other kinds of tasks, you learn something from each one. We had one experience with a – I don't know if you want to use this or not.

<crew talk>

Ken Salisbury:

So one interesting example for me of the design process, in this sort of clinical, venture-funded environment, was when we were getting ready for clinical experiments with an early version of the robot. I was working with the team developing the haptic interface, the fancy joystick that controls the robot, and the clinical folks said, "We need 20 of these things because we've got clinicals starting in six weeks," or something crazy like that, and I kept saying, "We ought to build just one of them and try it out." They said, "No, we need 20; go with your best idea right now." And so we built the 20 of them, and within about 10 minutes the doctors came out, rubbing their wrists and just saying, "This is horrible, it just does not work." And we could have found that out in about one week's effort of just building a mockup of the thing and letting them try it out, but the rush for sort of clinical relevance – the venture capital money waiting at the door to be spent and proven to be useful work – pushed us in a direction we should not have gone. And so that really, for me, has underlined the value of prototyping things and trying them out, especially in a medical environment, where there are just a lot of unknown unknowns that you can't imagine. We've known this in design for a long time, but this really emphasized the value of that. There was a lot of money spent building 20 bad, not very good devices. The good news is I inherited all the parts from them, so we have lots of extra motors in my lab, so they got something out of it.

When I came to Stanford I was interested in medical simulation. I had done little bits of that, mainly haptically, as we began to discover that we could discriminate between many different feelings of objects, and doctors saw it and said, "Oh, you could teach the difference between a tumor and a benign, more elastic structure in the body." And so I got interested in developing surgical simulation, initially for training, and at the time, and even now, it's kind of preaching to the choir when you talk to some doctors; to others it's like, that's crazy, we want to do it the old way. And so I got more involved here. My appointment became with Dr. Krummel in the Department of Surgery as sort of half my time, and the other half is in computer science, which is a perfect blend, because I had the clinical coaching from the Surgery Department and a community to try out my ideas on, and had the resources of the Computer Science Department. Even though I'm a mechanical engineer, I can build gadgets, but I guess I have enough sense to find other people to come in and really do the work that I think ought to be done. And so that was a four- or five-year NIH project to develop simulations, which worked pretty well. The visual rendering was good; the haptic rendering was good; it was not ready to turn into a company or a self-sustaining technology. As much because the technology wasn't fast enough to do complex scenarios visually and physically, and partly because we're still learning what to simulate. One of our colleagues simulated knot tying; that's an intellectually interesting problem, but I can simulate that by getting a piece of chicken meat from the grocery store and just doing it. So it's not a cost-effective thing to do.

So rolling ahead in time, we began working with Dr. Blevins, who's an otologist, an ear surgeon, and he walked in one day with a laptop full of gorgeous CAD drawings of the middle ear and other parts of the hearing system, and it just really excited me that an MD was thinking that concretely about the shapes of things and how they look in different patients, and that he had a good physical sense of what he wanted to do. He knew I was doing simulations, so roll ahead five years later, we're still working together. He's actually done that procedure on my own son three times, so I knew ahead of time he was pretty good at doing it because I'd seen him running the simulation. It was a mastoidectomy, which is a very difficult procedure to teach. You have to drill through the bone behind the ear, avoid some delicate structures, and then go inside and do various things with the little tiny ossicles, little tiny one-millimeter bones, and it's hard to train – you can't do it on an animal model, cadaver bones are hard to get, and you don't do it on a live person because it's so delicate. So it's very hard to do training in that domain. So we've built this up and we've done some pilot studies with that; it's having some success. We're now, I think, on the verge of doing patient-specific simulation for rehearsal, and that's something the doctors are pretty excited about, and I am too. I have an entrepreneurial head that says it's got to be self-sustaining in some way, and if the technology can improve the outcome of a specific patient, improve the efficacy and safety of it, then potentially FDA will go for it, and in particular, if it improves an individual's outcome, then you can start thinking about insurance reimbursement, just like you would get reimbursed for a scan. So I think there may be a business model there. So we're embarking on –

<crew talk>

Ken Salisbury:

So where was I – working with Dr. Blevins. With Dr. Blevins now and some of his colleagues, we’re starting to take very high resolution scans of the head, both CAT scan and MRI, and then fuse those together so you can see different structures – the bony structures so we know what’s there, and then some of the softer structures – so we can begin merging these two modalities to get quite interesting, quite good fidelity scans. Then one of my grad students, Sonny Chan, has been working on the rendering algorithms to allow the bone to be removed and to allow much higher visual fidelity, so the doctor can go in and practice the procedure, and they’re doing some interesting things to enhance their understanding of where they’re going. They can dial up the transparency so they can start seeing structures beyond what’s visible at the surface, so they kind of get this superman view through the material. They can see what’s inside, which I think enhances their situational awareness for when they go in and do the actual surgery. So technically we’re getting there. We’re not ready to do human experiments, although we’re gearing up to do some cadaver experiments to see if the rehearsal actually enhances a person’s ability to perform the procedure.
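A minimal sketch of the dial-up-transparency idea described above, in the spirit of standard volume compositing rather than the lab’s actual renderer; all sample values and the knob name are hypothetical:

```python
import numpy as np

def composite_ray(colors, opacities, transparency=0.0):
    """Front-to-back alpha compositing along one ray through a fused CT/MRI volume.

    Turning the `transparency` knob toward 1 scales every voxel's opacity down,
    so structures lying behind the bone surface start to show through.
    """
    out, remaining = 0.0, 1.0
    for c, a in zip(colors, opacities):
        a = a * (1.0 - transparency)   # dial up transparency -> dial down opacity
        out += remaining * a * c       # accumulate this voxel's contribution
        remaining *= (1.0 - a)         # light left to reach deeper voxels
    return out

# Hypothetical samples along one ray: bright bone in front, dimmer soft tissue behind.
colors    = np.array([0.9, 0.9, 0.3, 0.3])
opacities = np.array([0.8, 0.8, 0.5, 0.5])
print(composite_ray(colors, opacities, transparency=0.0))   # mostly the bone surface
print(composite_ray(colors, opacities, transparency=0.7))   # deeper structures contribute more
```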

Q:

When you’re working on these, how does working in a domain like medicine, for example, change how you think about designing devices, or change your work? How does it influence what you do?

Ken Salisbury:

I like to see things used and I like to find a niche where they’ll make sense to somebody, so working with NASA was really fun, because I believe in it and I support them, but it’s a 20 year project before something flies and I’m a little more impatient than that. In a two to five year cycle you can develop a medical instrument that will go out the door and start doing real practical work. So medicine has been a good application area for me. Also, surgeons in particular think very spatially, physically; they use different vocabularies, so there’s great communication between them and me, because I do think that way too, I just use different words and different instruments. So there’s this wonderful interplay: there’s the clinical pull, where doctors want to do something and if they’re technically savvy enough they can begin to say, “Oh, can’t you do this,” and there’s the technical push, where I know about lots of gadgets which may or may not be relevant to what they’re doing. And then there’s this overlap – call it the low-hanging fruit – where we can actually merge their needs with our capabilities, and the outcome is an interesting robot.

We have one robot a student is working on which will draw blood from a patient or do an IV insertion. The first time he mentioned this I thought, this is crazy, who’s going to want a robot doing this, but it turns out that about 25% of IV insertions are failures, so you can do a lot of damage and cause a lot of pain, and there is a need for doing it more accurately. So he’s combined a bunch of sensors – imaging below the surface of the hand, various touch and pulse sensors – and developed a very interesting instrument that may well have some commercial applications. I’m pretty intrigued to see how it goes. It’s a fabulous instrument anyway. It kind of gives some of my more mechanically minded folks better-defined design criteria. If I just say build a better arm, some interesting things happen – that’s where the WAM arm came from – but we didn’t think exactly that we’re going to do this task. But Rueben, who’s working on this IV insertion robot, had a very clear goal. He researched it extensively, talked to lots of doctors, drew blood from probably 20 of my students and colleagues – meaning using a needle to draw blood, not in some other way – and took measurements of the motions and the forces exerted. So he really got a nice understanding of what’s happening clinically there and then used that information to guide the design of the system.

Sometimes opportunistic things happen and it’s partly the way I work: I get good people in, they earn a certain amount of trust, and I give them a fair amount of latitude. A good example was Kevin Loewke. He came in, a mechanical engineer, worked on some different devices for us, and eventually got interested in – he got hooked up with the In Vitro Fertilization group here. I don’t know how he met them or how they found him, but he started looking at the maturation of cells in fertilized embryos to see how they grew and how they might mature into something viable. I’ve already talked about this. I talk to so many people during the day. Did it sound the same though? Another one of my students got hooked up with a – oh, I told you this one – with the radiologist? I’ll go back to the beginning. Using ultrasound to track tumors?

Q:

Yeah.

Ken Salisbury:

I did talk about that, okay.

Q:

How do they make these connections, I mean do the students, do you have any clue?

Ken Salisbury:

They go to parties with doctors. No, it’s an interesting question. I mean we have a lot of casual interactions with doctors; right adjacent to our lab is the biodesign group and there are a lot of MDs who come through there evaluating the students’ projects and we talk with them, and occasionally my department chair in surgery or other medical colleagues will introduce me to other ones. There’s a lot of word-of-mouth connection going on here. We’ve sort of got a reputation for being able to do some interesting things, software- and hardware-wise and clinically relevant. We don’t hold problem-finding seminars yet – I could imagine doing that with a little more resources, setting up monthly lunches where people come and present a problem and we present potential solutions, more of a medical-engineering mixer, I guess you would call it. And some problems I guess I find on my own. There is a company nearby that makes a robotic catheter for going inside the body and finding its way to different structures less invasively, and my students have done internships with that company and other medical companies, so they often come back with problems. One of them, David Camarillo, came back with the problem of how do we improve the accuracy of a robotic catheter, a little snake that can move and twist and find its way through vessels in the body. So we sat down and started looking at what engineering analysis we could do that would contribute to improving the accuracy of that, and that ultimately became his thesis work. It’s something I knew could be done, but it needed some real attention and analytical work and some experimentation, all of which he did and sort of made it happen.

I’ve been interested in improved haptics. Today’s haptics, in the sense of good force feedback, is generally three degrees of freedom – you can feel the force in three different directions on the tip of an instrument – and that’s been pretty exciting and pretty astonishing when you first feel it, but we’ve gotten to the point now where we’re interested in six degree of freedom interactions. If I touch a rigid object against a rigid object, the contacts are not just point contacts but line contacts, potentially surface contacts, and so the interactions between these different geometries feel quite different. In particular, if you’re drilling into the head and you have a hole that you’re going through, if you wedge like this, it should stop; it shouldn’t just pass through. So this next generation of six degree of freedom haptic interfaces is starting to emerge, with a very high fidelity one that we prototyped, and again it was designed around doctors’ needs. In this case we’re doing craniofacial surgery, so the student, Curt Salisbury, measured the doctor’s movements and figured out the workspace, measured some of the forces, and came up with design specs that again were relevant to a specific task. Designing in a vacuum is kind of frustrating at times because you don’t know when you’re done. Here, when you’ve got sufficient quality out of it, you have a sense: okay, this is good, now we should go use it and try it out to see if it really does the right thing. And this is for interacting with simulations of the skull for a craniofacial surgeon, for when she deals with repairing trauma or congenital defects and needs to interact with a 3D model, with the fidelity of being able to feel the textures, fit parts together and feel that the fit is correct.

Other Projects and Collaborations

Q:

Have you done any interdisciplinary projects outside of medical work?

Ken Salisbury:

Nothing really formal, but as I spoke about earlier, I’m very much interested in communication through touch. For the medical bit here, we’ve been talking more about enhancing the capability of humans, and I always think in terms of the human in the loop; I think autonomy will come in some of these areas. But working on touch communication, for example with the Communications Department – they have a perspective on how people feel about interacting with machines. If an automatic door opens too quickly, people kind of feel weird about that, and there’s going to come a time when we’ll be physically interacting with robots, not just simple things like shaking hands or leading one along, but cooperative work. If it’s in a construction environment and it’s helping me put up some wallboard, will it sense my pull, my kind of leading the task, or is it going to be yanking it around? If the robot is moving through my building delivering things – food, medicine, tools, maybe just Googling for my missing screwdriver, I don’t know – how should it react to me? One thing is, should it just look at me and acknowledge my existence so that I know that it knows that I’m here and I’m not afraid it’s going to drive over my foot? Or if we really are in each other’s way, can it kind of gently move to the side – if I nudge it, will it react appropriately to that?

So understanding those, like I said, affective aspects of physical interaction is quite interesting to me. We’re working with some folks at Disney – one of my former students again – who developed a physical version of Wall-e from the movie, and it’s a 300 pound, pretty powerful thing; it looks kind of fun, just like the movie character. But they want it to be able to have touch interactions with people – again, hand a child a bunny or take something from somebody – so beyond safety, which is critical, making that an interesting and pleasant interaction. Could the robot act nervous when it first meets somebody, could it be really welcoming and say, “Oh, come with me, I want to show you something”? Those are sort of character simulation ideas, physical interactions. Other fields – well, I want to build a robot that can do magic tricks. If you look at dexterity there’s a wonderful range. I can imagine picking a lock, which is something I like to do, totally by feel; there’s no robot that can do that. There are some hardcore instruments that will just break the lock, but a human can pick a lock. They can also put a key in the lock, which is a little easier; they can grab the doorknob and turn it, which is a little easier; they can open the door, which is a little easier. But in those four tasks we span quite a range of force control and position control, and that’s what I would call the dynamic range of the robot. So a general goal I have is increasing the dynamic range of robots so they can do not just one category of tasks but a very broad range of tasks. Robots in industry, and even some of the ones we have now, tend to be kind of one-trick ponies that can do something in a fairly narrow domain, and this is what I’m beginning to call high fidelity robotics: finding what bigger, more important, more diverse tasks require, and then technically how we get there – how do we build a robot that has variable compliance or more sensors or finer quality motors or whatever – and I don’t want to build another gold-plated robot, I want to do this inexpensively. We don’t quite have the benefit of Moore's Law on the motor side, although sensing and computation are just exquisitely growing and helping us. But on the mechanical side, motors and batteries are only improving linearly rather than exponentially. So I’m sort of jealous of my circuits friends. But there are new motors, and the MEMS folks and other electronic component designers are starting to come up with some pretty interesting devices that we can capitalize on.

For example, you can put an accelerometer – a little three-axis accelerometer like what’s in some of our cell phones – on a robot and tell if it is scraping across a textured surface or impacting a compliant surface. So there are some interesting components, in this case enabled by the airbag industry, which made accelerometers really cheap. For a couple of bucks you can get a really high quality, high bandwidth accelerometer, and you can put several of them on the robot and it can detect not only contact, the quality of contact and material properties, but also just the condition of the robot – if its gears are getting dirty or something’s starting to wear and it’s getting backlash, you can see all that in the spectrum of what’s coming out, or at least that’s what I have hypothesized. There are some interesting new tactile sensors coming out of one of the labs here at Stanford which look to me to be the first practical tactile sensors. They’re high resolution, which is okay for certain tasks, but more than that they’re physically durable, so they don’t fall apart as soon as you pick up something and put excessive forces on them. I’m actually working with Professor Bao and trying to push her to make a surgical glove that has tactile sensors all over it. What’s the point of doing that? Well, at the moment it’s just to tell what people are really doing – how hard is a person pushing, how far did they slip, what’s the distribution of forces between their fingers – and I think we’re on the verge of having that kind of technology available for monitoring the activity of a person. I think yet to emerge is the feedback side of it. If you want to have a totally high quality touch hand, you need to display something to the hand, and how to do that is maybe a bigger challenge, or at least an equally large task. And what’s the point of doing this? Well, if you wanted to scale somebody’s ability to deal with little tiny nerves that they’re repairing, or the ossicles in the ear, where you don’t want to squeeze them excessively, or you want to feel the texture and the shape for diagnostic reasons – could you do the kind of "Fantastic Voyage" image where you’ve scaled way down? We’ve done some experiments with simple one degree of freedom force scaling. I think we had about a hundred to one running at one point, and you could touch a little piece of tissue and it felt like a giant piece of cardboard, so it was very interesting to transform my physical sensations down to a domain that I normally wouldn’t be able to distinguish much in.
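A minimal sketch of the one degree of freedom force-scaling idea described above; the interview only says the scaling ran at roughly a hundred to one, so the gains, names and sample numbers here are illustrative, not the actual experimental setup:

```python
FORCE_GAIN = 100.0          # tissue force amplified ~100x at the operator's hand
MOTION_GAIN = 1.0 / 100.0   # hand motion scaled down ~100x at the instrument

def scaled_teleoperation_step(hand_position_m, tissue_force_n):
    """One control cycle of a single-axis scaled teleoperator (illustrative only)."""
    instrument_command = MOTION_GAIN * hand_position_m   # fine motion at the tissue
    displayed_force = FORCE_GAIN * tissue_force_n        # magnified feel at the hand
    return instrument_command, displayed_force

# A 0.01 N contact on tissue is displayed as ~1 N -- "a giant piece of cardboard".
print(scaled_teleoperation_step(hand_position_m=0.05, tissue_force_n=0.01))
```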

Q:

What about artificial tendons, artificial limbs, muscles – technologies that allow that kind of manipulation and movement of these systems? Do you see changes in that field?

Ken Salisbury:

Yeah, I’m not a real fan of the term artificial muscles because it’s giving it more anthropomorphism than I think is right. I mean it’s an actuator – or maybe it’s just my engineering background – but the question you’re really touching on is, "Are actuators getting better, what’s becoming available?" And there have been some pretty interesting improvements in the quality of magnets. In a very small space you can get a very powerful magnet now, and that’s been driven by the motor industry and by others; it shows up in all sorts of places. But it means you can get more torque per unit volume, or more energy throughput in a motor per unit volume, and the costs are coming down because there’s commercial demand outside of our needs. Batteries – the energy density of batteries is getting much, much better. You can put in little lithium-ion or lithium-polymer batteries that have huge energy densities compared to the old lead acid batteries. So that begins to offer, well, maybe a prosthesis with enough energy in it that I can do multiple degree of freedom grasping for a whole day without having to recharge it, rather than just one degree of freedom of gripping. And as we start building autonomous robots that are going to roam the building doing useful things or patrolling or retrieving things, it gives them some longevity. Materials are getting better – lighter weight, high strength-to-weight ratio materials, carbon fiber, other metals, other forming processes. So there’s a lot of technology driven by higher quantity needs outside of robotics that we’re trying to capitalize on. A great example is the cameras in our cell phones: you’ve got these little 2mm squares of good quality camera, so why not stick that on your hand or in your fingertips so that you can see what you’re doing before you land on it?

So that’s why I keep scanning technology and encourage my students to do that, because we may find little bits out there that come in to support our needs. Tendons – there are increasingly better quality synthetic materials if you’re going to use tendons, and they’re a pretty good way to transmit force over a distance. If you think of high quality hydraulics, you can get up to maybe five or even ten thousand pounds per square inch, like in a high-performance aircraft. But with a piece of steel you can get up to a couple of hundred thousand pounds per square inch. So in a very small area you can transmit a huge amount of force, or power throughput if you want. And there are materials better than steel now – some of the synthetics, Kevlar and Kapton and several others – that again are driven by the sports industry, the high performance aircraft industry, and lots of other places where they want good quality materials, but we can again skim off the top the components that help us a lot. I'm going to go off on a tangent.
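To put rough numbers on the comparison above – a back-of-the-envelope illustration using only the pressures and stresses mentioned in the interview, not measured values – force is working stress times cross-sectional area, so a steel tendon carries far more force through the same tiny area than a hydraulic line at typical pressures:

```python
PSI_TO_PA = 6894.76  # pounds per square inch -> pascals

def force_capacity_newtons(stress_psi, area_mm2):
    """Force a line or tendon can carry = working stress x cross-sectional area."""
    return stress_psi * PSI_TO_PA * (area_mm2 * 1e-6)

area = 1.0  # one square millimetre of cross-section
print(force_capacity_newtons(5_000, area))    # hydraulic line at ~5,000 psi   -> ~34 N
print(force_capacity_newtons(200_000, area))  # steel tendon at ~200,000 psi   -> ~1,380 N
```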

There’s a lot of interest, in Japan especially but also in this country, in building anthropomorphic robots, and I’m not sure that’s the way to get to highly functional robots as quickly as we want. I mean, number one, the walking problem is a whole research problem in its own right. When we built the PR1, the first personal robot that we developed, we quickly said, “We’re going to put this thing on wheels; we’re going to enable it to go where wheelchairs could go,” because the world around us here has been engineered pretty much to accommodate that modality of moving around. So, mobility problem solved, and then we could focus on putting enough sensors on it, putting the right kind of arms on and creating a functional robot that could move and manipulate and use dual arm capabilities and have good sensing all over the platform. And the same thing with hands: we built hands with fingers, and you could make them double jointed or you could do lots of different things that make them not anthropomorphic. You don’t have to have tendons going back to the forearm; you can put very small high-reduction motors in the fingers and make them work pretty well too. But I’m known as a tendon guy, or the cable guy, because a lot of the things I build tend to use mechanical tendons in them. The next arm/hand I’d like to build is probably one where you allocate the motor volume into the forearm and then have it actuate the wrist and the fingers, and again I’m not trying to be particularly anthropomorphic, just taking advantage of emerging materials and motor capabilities.

DARPA Hand

Ken Salisbury:

We’re working on a hand for DARPA right now, to help them with their need to defuse or make safe IEDs. They put out a solicitation to build a low cost, moderately high dexterity hand, and I, with one of my students who has the same last name, Curt Salisbury, won one of the contracts. So I’m sort of back to what I did when I was an undergrad and a PhD student, building another hand. We’re trying to keep it really low cost and make it very durable, and as a research project it’s interesting in and of itself, but they have a clear set of tasks: picking up certain kinds of objects, picking up a screwdriver or pliers, actuating a drill, digging around in a bag for objects.

<crew talk>

Ken Salisbury:

So one of the side benefits that I enjoy about working on this new hand design for DARPA is that it has the same goals – durability, capability, low cost – that we want for prosthetic applications, and I’m quite interested in developing upper limb prostheses for underserved populations. As a counterpoint, DARPA’s investing three billion dollars in building a very high-tech upper limb prosthesis. And it’s going to be an expensive arm. The population that can use it is going to be folks with military insurance or a lot of money. Well, there are a lot more people who are farmers and machinists and other folks who’ve lost a limb who could really use something like that. So that’s kind of getting dual use out of this DARPA work. And it’s fun for me just to get back into thinking about hands again with some fresh ideas. I’ve got some new students working on it and they’re challenging my assumptions, which is perfect. <laughs> So that’s a good project. We’re also working on a low-cost arm. Low cost, high fidelity is where I’m hoping we’ll get to. I don’t know how to solve all the problems around doing that, but I think it would be a real contribution. I keep wanting to get robotics away from being the elite, corporate or university group that has millions of dollars or hundreds of thousands of dollars to spend on their hardware. I’d like to get it down to like the PCs in the early days, where people could run them in their home and invent new things.

Q:

What do you think are some of the obstacles to that beyond just maybe price, but in terms of getting the kind of non-experts perhaps involved in the development of robotics and in the development of what they actually want the robots to do?

Ken Salisbury:

Yeah. So I could give you a parallel example. Computers didn’t really take off until somebody built a platform that people could interact with at a higher level. They could just write software. They didn’t have to go and wire-wrap all the chips and make sure the power supplies were working. They could just write code, and if it didn’t work, reboot, recompile and run again. So it got away from having the technical people who designed it do everything, and let people who had an application or product concept work at a manageable level. Same thing in robotics. There’s a long history of computer scientists trying to build mechanical robots and mechanical folks trying to program the things, and there’s sort of this disconnect. That’s not to say neither one is good at cross-disciplinary work, but a lot of my work has been in building platforms that other people then write code for. And once we get a good mechanical platform built, then there’s the next level of writing a code layer that makes it accessible to a more average, not-mechanical person. The Phantom haptic interface is a good example of that. There was code, and now many layers of code, that makes it accessible to completely non-mechanical folks. A very, very good example is the personal robot now from Willow Garage. They’ve developed a whole suite of open-source code, which again allows people whose skills may be in image processing or motion planning or dynamic simulation to share the work and also focus on doing their work rather than fixing the broken wires on the robot. I can’t tell you how much time my grad students spent fixing broken wires. <laughs>

And so that’s why I’m kind of shy of complexity. Tactile sensors in the old days had gazillions of wires and they never worked for very long, if they worked at all. Always repairing wires. And so I’m not trying to build commercial quality things that sell in Kmart yet, but I want them to be good enough that people can do the research that they want to do, or even product development, and not have to worry about the thing falling apart. So that’s part of the answer of how you make it accessible to different populations: reliability, low cost, software interfaces or APIs that let people talk to it and not have to worry about the details. And then making their accomplishments portable. If I can run this bread-making algorithm on a PR2, I’d like to be able to take the top level parts of that, the parts that do the planning and perception, and move them over to the next generation or somebody else’s robot, so that the robot becomes an abstracted element, and if it has the capabilities to execute the higher level code, it can. And I think that kind of ferment of ideas and the support of the hardware is going to revolutionize robotics. It’s really interesting to see what’s happening with Willow Garage. Already you’re seeing people do things with robots that you’ve never seen done before and it’s caught the eye of a lot of people. It’s not the only robot in town, but I think it’s been a well-planned program of creating a durable, rich, robust platform, coupled with a really aggressive software development environment. So you’re going to see kids programming those PR2s in time.

Just like the Lego robots got that going – got young folks and not so young folks accustomed to controlling electromechanical devices, sensors and motors, and reacting to changing environments. That’s a good start, but it’s not rich enough to really do real tasks. A typical robot will wiggle its hands and everybody goes, “Yeah, yeah, it’s great.” <laughs> If you’re picking up something and putting it together, and it drops it and has to recover, it has to be much more robust in its behavior than what you see in a Roomba or a Lego robot. I don’t mean to deprecate those. They’re good examples, but we need to move up to a level where the robots can start having physical interactions with the world. A lot of roboticists worry about the path planning problem. They don’t want to touch anything <laughs> by definition almost. They don’t want to collide with anything. Whereas what we want to do is intentionally interact with objects, whether it’s putting things together, opening doors to get through places or shaking hands. And that’s an even more difficult task, to build a platform that other people can use, because physical contact implies energetic exchange, which implies you can do damage to the robot or to the person or both. So we need to be very careful in doing that. Things like covering the robot with a soft exterior greatly reduce the impact forces if you get hit with it. Just a little bit of rubber or soft material on the outside really reduces the injury potential. Making the arm compliant, so that if you push on it, it gets out of the way, is very important. And our WAM arm that Townsend worked on is very much an example of that. I got hit by it one time, and it’s low mass and was compliant and got out of the way, and it was not a big deal. If you look at the PUMA robot, which is a very stiff, very non-backdrivable robot, if you get hit with one of those things it’s not good. It has sharp edges on everything. Of course, it was designed for industrial applications and I don’t want to put it down. It’s everybody’s favorite robot to complain about <laughs> and it’s everywhere. People have gotten a lot of good work out of it, but it’s sort of an old style.

And I think there are some new styles emerging that have controllable compliance. You can make your arm very stiff or you can make it very compliant depending on the task that you’re doing. So being able to modulate that is a challenge that many of us have recognized as worth doing. How to do it is a good challenge. We’ve been able to do it in software by changing the gains so it’s sort of soft or stiff, and then you can put a peg in a hole by making it soft and stiff in different ways. But if we can do it mechanically by modulating the spring stiffnesses – and there are turning out to be some clever ways to do that – then we don’t rely so much on the calibration of all the servo elements and all the potential failure modes of that, and it’s intrinsically backdrivable. I don’t have to wait for the computer to respond to the fact that I’m pushing on the force sensor and have it get out of the way, or have it not even notice that I’m hitting it over here <laughs> because it doesn’t have a sensor downstream of where I’m touching. Again, I’m not trying to be anthropomorphic. There are a lot of qualities that humans have that we admire and that are part of us functioning properly in our world, but there are going to be different ways to put those into robots. And robots will do very different things. The analogy is getting flight by flapping wings – that was tried and didn’t work. <laughs> They had to deal with the physics in a different way and with the resources they had, jet engines and carbon fiber wings, and we can do Mach 3 or more and it’s way different from a bird. Same thing with a robot. I can imagine fingers that have little conveyor belts on them, so to grab something it just runs it into the hand, and rotating it means turning the conveyor belts the other way. Or, I don’t know. I don’t know what it’s going to be, but I think we really need to be open to those alternate solutions, because we have a different tool kit of things with which to build these new robots.
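A minimal sketch of the software-compliance idea described above: with a Cartesian spring-damper control law, "soft" and "stiff" are just gain choices. The function names, the forward-kinematics and Jacobian hooks, and the gain values are all illustrative assumptions, not the actual WAM or PR2 controllers:

```python
import numpy as np

def joint_torques(q, qd, x_desired, fk, jacobian, K, D):
    """Cartesian spring-damper law: tau = J^T (K (x_d - x) - D * xdot).

    A small K gives a compliant arm that yields when pushed;
    a large K gives a stiff arm that holds its pose.
    """
    x = fk(q)                            # current end-effector position (3-vector)
    J = jacobian(q)                      # 3 x n Jacobian at the current joint angles
    xdot = J @ qd                        # end-effector velocity
    f = K @ (x_desired - x) - D @ xdot   # virtual spring-damper force
    return J.T @ f                       # map the Cartesian force to joint torques

# "Soft" vs "stiff" is just a gain choice made in software:
K_soft  = np.diag([50.0, 50.0, 50.0])        # N/m, easy to push aside
K_stiff = np.diag([2000.0, 2000.0, 2000.0])  # N/m, holds position firmly
D       = np.diag([10.0, 10.0, 10.0])        # N*s/m damping in either mode
```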

Personalization of Robotics

Q:

Do you think people are often kind of I don’t want to say blinded, but in a sense limited because of this vision of, “Let’s make it more human-like,” “Let’s make it more animal-like,” “Let’s make it like something that’s already out there,” or…?

Ken Salisbury:

That’s a good question. There are certain subcultures within robotics that are very focused on anthropomorphism, and the simple but almost trite argument is, “Well, they should be able to accommodate the environment that humans have designed around themselves.” I think that could be accomplished without legs and without heads that look like heads. I don’t know if you want to quote me on this one or not but…

<laughter>

Ken Salisbury:

In Japan there is this great passion for anthropomorphic robots. It’s huge. And I sometimes wonder if it was influenced by a lot of the folks who were doing the work having watched “Astro Boy” movies as children – a sort of Pinocchio-like character that was very strong in that culture – and so they grew up and <laughs> did this. I don’t know what the American equivalent is. There are fewer of us interested in anthropomorphism, but R2-D2 was not very anthropomorphic and C-3PO was. If you’re trying to have an interaction with a person where they’re sort of immediately comfortable with you, should it even look like a robot, or is that going to be scary? You look at what Pixar can do with just a little Luxo lamp. It had to be friendly and cute, and there’s a lot you can do just thinking about the motion and the color and the texture of it to make it welcoming – just like a car. I like to sit down in my car because it’s got a leather seat and it feels nice. It doesn’t look like a horse, and I’m not riding one, but it accommodates what I like.

I think there’s going to be a time when we work to make robots appealing and welcoming, just as we’ve done with many products. Chairs – it’s not just the shape, it’s the color, it’s the texture, it’s the motion it has. Or lots of things in our lives. And so, how to build them to be welcoming and also useful – it’s got to be useful. Can we go to another thread here? There are some examples of products becoming acceptable to people when they weren’t acceptable before. My favorite one is simply putting color on braces. Kids used to hate to get braces and they’d be hiding their faces. Now it’s like, “Look. I got green ones.”

<laughter>

Ken Salisbury:

So it’s become a fashion trend, and they’re anxious to get their braces at times. The Baby Boomer generation is starting to need hearing aids. There are some that look like Bluetooth headsets, so the wearer looks like a cool guy with his or her cell phone, but in fact it’s an amplifier so they can actually hear the person <laughs> across the room. I don’t know if this is going to be possible, but it’s a direction I want to go with prostheses. Somebody who’s missing an arm will tend to wear a cosmetic hand when they’re in a social environment, and they’ll tend to wear something more functional – a very old hook, the Dorrance hook or something like that – otherwise. And there’s a very awkward sort of body image and acceptability issue. A lot of times people just don’t want to use these appliances; it seems better just to leave the remaining arm under the sleeve. Could we make those attractive? Could we allow individuals who need a prosthesis to be involved in the design of it – custom design? Get on your CAD system and tweak the shape, tweak the color, design it to be a tool that is useful for your particular hobby or interest, and then hit the print button and it comes out at the Kinko’s down the street, which has a 3D printer, and you can get exactly what you want the way you want it. So get people involved in the design of these things.

But the bigger, higher level thing is making these sort of supplementary devices attractive and personalized. And I think robots are going to become personalized. And I don’t mean just the cute little seal that they have in Japan that’s kind of a companion. That’s kind of interesting, but if something’s roaming around my house just fixing up things – and I don’t know if that’s the killer app or not – I’d like it to be sort of friendly towards me. A really funny example is my cat discovered how to turn on my Roomba and…

<laughter>

Ken Salisbury:

So she would run over and whack the button and then watch it go around. She never rode on top of it but <laughs> it was really funny. And she tired of that game after some months, but SmartCat toys is a growing industry, by the way. <laughs>

Q:

Yeah.

Ken Salisbury:

That inadvertently was one.

Q:

We tried attaching a cat toy onto the Roomba. You know the ones that –

Ken Salisbury:

Oh, yeah.

Q:

–flop like this?

Ken Salisbury:

Yeah.

Q:

But yeah.

Ken Salisbury:

Did it work for a while or…?

Q:

They weren’t that crazy about it.

Ken Salisbury:

Oh. <laughs>

Q:

They’re too old or something. I don’t know.

Ken Salisbury:

This one was a young kitten almost.

<laughter>

Q:

There you go.

Ken Salisbury:

It was novel.

Q:

My cat likes to turn on the printer, because it goes through that cycle, it sends the thing back and forth.

Ken Salisbury:

<laughs> Yeah.

Q:

It’s really exciting.

Ken Salisbury:

Yeah.

<laughter>

Ken Salisbury:

<inaudible> it.

Q:

Yeah.

Ken Salisbury:

So I don’t know what the human equivalent of that is. Yeah. Little kids sit in their parents’ cars and they go, “vroom, vroom, vroom,” they pretend <laughs> they’re doing something.

Q:

But people are, I think, amazed by the Roomba as well. There’s been some work where – well, for one, they give them clothing and stuff; they make clothing for Roombas. But people give them names, and after a while of living with them, sometimes they found that they start kind of referring to them in social ways, because they really change the way that people go about cleaning, or you have to do certain things for them, like move stuff around and kind of save them when they get stuck and all this.

Ken Salisbury:

Yeah.

Q:

This kind of thing.

Ken Salisbury:

I think this is where we come to the communications folks, who really study that. And others – anthropological folks. There are others who are quite interested in those relationships. Even my GPS in my car – I tried to get it to have a voice that was a little more human, and that was okay. Now I wish it would get kind of sassy, like if I missed my turnoff it would go, “You missed that turn again.” <laughs> “Make a U-turn very carefully this time.” <laughs> And that’s going to happen. There’s going to be a personality in these things. I hope it’s more than Dave.

<laughter>

Q:

Well, Cliff is working on those, those kinds of things, right?

Ken Salisbury:

Yeah. Yeah, yeah. And I think that could be good. It might be creepy to some people at first, but then it starts being responsive and novel. We see some of that on our computers. It’s mostly annoying, but they pop up some information now and then; they try to be proactive in providing something. I think there will be better versions of that. And also adapting to the way that we act. When I use my computer, there are certain websites I go to frequently, and I wish it would realize that and prompt me in an intelligent way – fragments of that exist. Same thing with the robot. I don’t know, maybe it’s going to show up at the door with my newspaper, or <laughs> things that our animals do for us. As robots get smarter and more sensitive to our needs they may well be able to do that.

<crew talk>

Important Career Contributions

Q:

You’ve already talked I think a bit about the different challenges just now of what you think is coming and what kinds of stuff that you want to look at as well. When you look back at your career, what strikes you as a few of the most important contributions you’ve made or the things that you’re most proud of or…?

Ken Salisbury:

That’s a good question. Well, in fact, I wouldn’t be where I am without my students, and in the end, they’re the real product. They pick up some ideas with me, get some experience and then run off into the world and create companies and really make some interesting things happen. There are lots of examples of my students who have just gone and done amazing things. And we’ve licensed quite a bit of our technology, some of which has gone to surgical applications, some of it to NASA applications; it’s gone different places. So life after the thesis is important to me, both for the people and for the concepts and the devices that they develop. If we look at particular devices, my robot hand, which was just kind of a fun, exciting project for me, has become an icon. I still get people calling me up and asking if I can sell them one. I was like, “No. <laughs> We’ve gotten all we can out of it. Wait for the next generation.” And yet that hand took on a life of its own. People would send me videos from all over the world and it’s like, “You can do that with that?” <laughs>

And it sort of reinforced that, well, I had some pretty good ideas at the mechanical level about what functionality it should have, but I didn’t quite know what to do with all of it. I did write some programs – it could wiggle objects – but people went well beyond that. Same thing with the WAM arm: it had kind of a central idea of behavior and intrinsic mechanical capability, and Townsend took that commercial and has done lots of wonderful things with it, along with his customers. And what else? The Phantom haptic interface apparently seems to have changed haptics. <laughs> I’m sometimes too humble about these things. I’m sort of surprised, maybe when I shouldn’t be, that something is going to make a difference. Because you never know. You hatch an idea. It’s like, “Well, is this going to make a difference or not?” “I don’t know. Let’s just build it and see how people react.” And I have some amazing pictures of people feeling virtual objects for the first time with the Phantom. It’s like maybe showing a mirror to somebody who’s never seen their own face. Like, “Wow. There’s something really different here.” And people get bug-eyed, and there’s such whimsy in some of what we do here. It’s sort of playful and yet it has real value to it. It’s different, which makes it seem novel, but it’s different in a good way. The work I did with Akhil Madhani on developing early versions of surgical robots, which eventually got licensed to Intuitive Surgical, was very interesting. He came from a family of doctors and yet was very good at machining and making things, and it just turned out to be another pivotal kind of design that helped hatch a new industry. <laughs> Intuitive brought a lot, certainly a lot, of their own ideas to the table, but it was nice being part of that flow and feeling like I helped make things happen there. What else have I built? You probably know more than I do. <laughs>

Some of the robots my students are building now – the one for tracking tumors while radiation therapy is happening is at the moment a narrow application, but it has a human in the loop and it allows them to enhance the quality of care that a person is getting. That hasn’t been productized yet, but I can see it eventually working its way into real clinical use. I don’t know. It’s more on the software side, but at one point I got very interested in using these force sensing fingers to detect textures, the presence of objects, the orientation of objects. And it began getting me interested in the temporal quality of contact information rather than the spatial quality. So even this simple little force sensing finger, which was developed by Dave Brock – it didn’t become a long-term product, but it was very pivotal in my understanding of what robots were doing. I saw that if you had high quality force sensing and controllability you could do a much broader range of tasks. Oh, I lost my thread. Yeah, okay.

One of the funnest parts, when we began using the Phantom haptic interface, was developing algorithms for rendering the way things feel. And I liken it to the early days of graphics. In the first graphics, what did you see? Well, after dots on the screen you saw lines on the screen. So you’d see a wire-frame cube moving around. That was pretty exciting at that time. We’re just a little bit beyond that stage in haptics now. We can feel the shape of something, some texture, some compliance, but when you start getting fingers involved, so you can pick up something and feel the size of it or the compliance of it, you’ve added an order of magnitude more capability. You put two hands on it, so now I can pick up something and feel it this way – you’ve added another tenfold increase in the richness of what you can do. And a couple of the algorithms were developed early on. One of them that I liked was for rendering implicit surfaces – a very mathematically clean way of representing a class of objects – and the rendering, deciding how much force to exert when I push on the object, worked pretty well. And it was clean. I’m not much of a complex algorithm person. I like a simpler insight that’s consistent with my mechanical way of doing things. And that was pretty exciting for me, and that algorithm has shown up in lots of other people’s work. Recently one of my students figured out how to run it with six degrees of freedom rather than just single-point contacts. And it’s kind of satisfying to see something that’s 15 or 20 years old, a mathematical concept, come back into helping him get his thesis and ultimately do some of the really complex medical rendering we’re trying to do. Another example of early haptic rendering algorithms was some work done by –
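A minimal sketch of implicit-surface haptic rendering in the spirit described above, using the standard textbook form rather than the exact published algorithm; the object, the stiffness value and the function names are illustrative assumptions:

```python
import numpy as np

def sphere_field(p, center, radius):
    """Implicit function: negative inside the sphere, zero on its surface."""
    return np.linalg.norm(p - center) - radius

def render_force(p, center=np.zeros(3), radius=0.05, k=500.0):
    """Penalty force for a single-point interaction with an implicit sphere.

    When the device point penetrates (field value negative), push it back out
    along the field's gradient, proportionally to the penetration depth.
    Assumes p is not exactly at the sphere's center.
    """
    phi = sphere_field(p, center, radius)
    if phi >= 0.0:
        return np.zeros(3)                                 # outside: no contact
    normal = (p - center) / np.linalg.norm(p - center)     # gradient direction
    return -k * phi * normal                               # outward restoring force

print(render_force(np.array([0.0, 0.0, 0.04])))  # 1 cm inside -> ~5 N pushing outward
```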

<crew talk>

Ken Salisbury:

Another example of an early haptic rendering algorithm, one that my student Zilles came up with, enabled us to render polygonal objects. He just grabbed a geometric model of the space shuttle off the ‘net and then came up with a method for determining what forces should be applied when you’re touching this kind of faceted surface. And out of that came the idea of a reference point that lives on the surface of the object you’re rendering. That’s used throughout haptic rendering now – just a way of keeping track so you don’t push through the object. You keep track of where you touched it and you have a reference point to bring the device back to the surface. Two very simple concepts that have really broad application. And whether we get credit because we just did it first and happened to be lucky enough to be in the right place, or whether we really had great insight, I don’t know sometimes.
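A minimal sketch of the surface reference-point idea described above, reduced to a single flat facet (illustrative only; a real implementation walks an arbitrary polygon mesh): the reference point stays on the surface while the device point may sink below it, and the rendered force is a spring pulling the device back toward that point.

```python
import numpy as np

def reference_point_on_plane(device_p, plane_point, plane_normal):
    """Keep the reference point on the facet: project the device point back to
    the surface when it has penetrated; otherwise they coincide."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(device_p - plane_point, n)   # signed distance above the facet
    if depth >= 0.0:
        return device_p.copy()                  # above the surface: no contact
    return device_p - depth * n                 # snap back up onto the surface

def render_force(device_p, plane_point, plane_normal, k=800.0):
    ref = reference_point_on_plane(device_p, plane_point, plane_normal)
    return k * (ref - device_p)                 # spring from device point to reference point

p0 = np.array([0.0, 0.0, 0.0])                  # a point on the facet
n  = np.array([0.0, 0.0, 1.0])                  # facet normal
print(render_force(np.array([0.01, 0.0, -0.005]), p0, n))   # 5 mm deep -> ~4 N upward
```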

<laughter>

Ken Salisbury:

Sometimes it’s like, if you get there first, it’s sort of obvious what to do. And these are sort of obvious things that we did, but we got to do them first and we got a lot of excitement out of it and a fair amount of credit, which makes it kind of fun. Yeah. It’s funny, because I have a list of people. Well, in some ways it’s the people, too, that have come out of our group. Helen Greiner, who was one of the founders of iRobot, just charged into that company and made it go very well. And I recently pulled her thesis out. It’s a little-known master’s thesis from MIT and it’s got wonderful things in it, which I’m handing to my students now – how to understand a curling finger so that it grabs onto an object in a mechanically smart way. Another woman who worked with me, Catherine Mohr, has done several astonishing things. She started out as a mechanical engineering student with me – one of the finest machinists I knew at MIT at the time – and she developed some interesting mechanical hands, a little more theoretical than some of the other medically applied ones. But she spent 10 years in industry and then decided to become a doctor. <laughs> She was good with her hands in the machine shop, and it turns out she’s a really good surgeon as well, so the same skill kind of maps over to this new domain. She’s now director of Intuitive Surgical’s medical research program. And so it’s sort of fun to have been with somebody when they were figuring out what they’re going to do, and now she’s changing the world. I should probably give more credit to the admissions department than to myself <laughs> because they bring these wonderful people in the door and then I try to figure out who’s going to be compatible and work with our group. Bill Townsend, who’s just had this great passion for building the WAM arm.

<crew talk>

Ken Salisbury:

I spoke about the Phantom haptic interface. Well, the fellow who really got that off the ground was Thomas Massie. He’s one of those guys that I worked with from early on, freshman year probably. He came in with a stack of pictures of some amazing things he’d done in high school. I kind of worked with him for a couple of years. I had some interesting problems that I wanted to give him, but sometimes you need to build trust before you hand over a good problem to somebody. And so I told Thomas I wanted to build a device that would let me feel objects on the screen, and – this was while he was still working on his bachelor’s degree – within six weeks he had a working model of the Phantom haptic interface and we were feeling virtual objects. And it was kind of a nice teamwork. I’d nudge him in a certain direction and he’d come back, and that kind of interchange or exchange with students is one of the really fun parts of this job.

These days I don’t go into the shop and make many things, but I like hanging out in this lab. I can walk over there and see 10 different gadgets, and these guys have gotten so good at building robots fast. I tease the world and say we build a robot a week, which we almost do. If you walk around the lab you’ll see all kinds of things with motors and computers connected to them. And that’s kind of the spirit I like to have, this intense culture here where everybody’s kind of helping each other. We’re not competing with each other. Although they’re individual projects, each of the students is helping the other ones, so it’s a really nice synergy that comes out here. Another one of my students who’s had an interesting career, John Morrell, did some very nice controls work understanding and explaining the fidelity of robots. And this will come back into my work on high fidelity robots, but after that he went to work for Dean Kamen’s corporation, DEKA Corporation, and he wouldn’t tell me for five years what he was doing. <laughs> And then out came Dean Kamen’s – what’s it called?

Q:

The Segway?

Ken Salisbury:

Yeah.

<laughter>

Ken Salisbury:

So he worked with Dean Kamen for about five years and then out came the Segway. It turned out John had been doing the control system on that, which is really exciting and I think actually pretty practical. It’s novel certainly, but it’s making a lot of changes in some people’s lives. And it’s not just Joe Average wheeling down the street. It’s folks who have some mobility difficulty, or people – I’ve seen policemen – using them to cruise around. And they’re really fun to ride. I don’t know if you’ve ever ridden one, but it is a gas. <laughs> It’s just so well designed, and if the batteries get weak it has a lot of human safety capabilities in it. It’s very hard to do something wrong with it. That’s another kind of nice <laughs> success story. And then, of course, there’s the personal robot work. At times people say, “Well, you’re the grandfather of <laughs> some really interesting things.” I don’t know what it is, but certainly I’m proud to see the PR2 and how it’s changing the world. It has some interesting mechanical concepts in it that I helped with. Eric and Keenan really did the lion’s share of the really hard work and they continue to do that. But it had been a dream of mine for a long time to build a platform that was functionally quite good, and distribute it, give it away. <laughs> Which they actually did – they gave them away. And that’s inspired a lot of people. So that’s another device that ultimately did come from our lab here and has now taken on a life of its own.

Q:

Why is it a personal robot?

Ken Salisbury:

Well, it’s a counterpoint to the hydraulic 500-pound brute that can pick up car engines and has flashing red lights and cages around it. That’s not a personal robot. You’re not going to go shake hands with that puppy. <laughs> It’s perhaps a little bit of a bow to the term “personal computer” – something you can have in your space and work with. It’s not a card-munching UNIVAC with thousands of <laughs> switches on it where you’ve got to wear a white coat to operate it. It’s something I can sit down with at my desk or even carry around in my hand now. So it’s to make it a little more intimate with your daily life. The father of a friend of mine, who was one of the early pioneers in computers, spec’ed out one of the first personal computers. This was Wesley Clark – not the military fellow, but a different fellow, who was at Lincoln Labs. And he spec’ed the first personal computer, and one of the things he said that I love is that it should not be taller than a human being. He sort of had a sense that it should not be imposing or threatening in any way. It should be kind of down at a human level. And I think that’s true about robots. If you look at the PR2 it can kind of shrink down and be relatively un-intimidating. It can also stand up and be quite tall when that’s functionally useful, but little things like that make a difference. And so it’s trying to be personal. And you look at it, it’s got a certain amount of whimsy. It’s not a scary, sharp-edged thing. It’s got nice, rounded corners. There’s a nice bit of industrial design in it that makes it not scary. And it’s designed to work in human environments. It’s not something with tank treads that’s going out to explore caves with explosives in them. There’s great value to that too, but it’s designed to be in our environment and there’s great interest in that. I don’t really think we know what the real application is going to be. Is it elder care? Is it factory box stacking? Is it finding things that I lose in my house? <laughs> I think Eric Berger came up with this idea of a Googlebot, which would just spend its idle time going around the house and looking at stuff, and then whenever you say, “Where’d I put my glasses?” it’s like, “Oh, it’s over there.” <laughs> It’s not entirely crazy. One of my dreams is to have a personal robot drive across campus and get me a cup of coffee and come back, primarily to watch people’s reactions to it and kind of learn from that – and partly because it’s technically challenging to do.

Let me say a little bit about what I think the future of robotics development is going to be. On one end of the spectrum we’ve got fully autonomous robots, and the Roomba is that, but its functionality is not really huge. It does a good job at what it does. On the other hand, you have the fully human controlled robots, which tend to be like a telemanipulator going into a nuclear environment, changing out fuel or fixing broken things. But in between there’s this huge bit of territory, which is semi-autonomous or supervisory controlled robots. And just like airplanes – the first time they flew with an autopilot you didn’t have a 747 full of 400 people. <laughs> They tried those things out over time, developed them, and as trust and capability developed they finally got to a crossover point where they were willing to accept having people on the planes with them. So with robots, we’re not going to turn an autonomous robot loose in a factory and have it stack all the boxes, unless it’s a highly structured system, like factory automation. It’s going to take time to trust the robot. So people are going to be in the loop. You look at the Intuitive surgical robot. That robot is not stitching up the person all by itself, and yet it is in some ways augmenting the person. It’s scaling their motion, it’s taking jitter out of it. It’s bringing visual and haptic frames of reference to be coincident, so it enables the person to be more effective. And so it’s these little bits of inroads on autonomy – first for safety, next for maybe usability, and then what’s next?

Well, if I’m going to drive a robot through my interface to it, I don’t want to have to think about running over things; the robot can scan and avoid obstacles and can tell me I can’t go there because the door is locked. Or the next higher level might be, “Go to these coordinates and if you need help, dial back to me and I’ll help you open the door or help you find the key.” So there’s going to be this synergistic relationship that I think is going to go on for a long time. I think it makes business sense in that we can get these machines doing something useful early on, and technical sense in that we kind of learn what’s worth doing and what’s possible. There are so many emerging technologies, I can’t predict what sensors and what algorithms are going to be available to make this autonomous Roomba. And I’m also impatient. I want to see my robot doing something soon. Yeah, it’s great, but it takes time to really understand how to do high level planning or to make judgments. Getting to Asimov’s three laws – ain’t going to get there for a long time. But being able to disarm a mine with a human in the loop but enough machine capability to do it right – that’s happening and that can happen better. Helping folks who can’t get out of the house for whatever reason, enabling them to have a remote presence so they can be back in society in some way – I think that’s going to become attractive.

People – what’s it called, Taxi [ph?] – the mobile TV screen <laughs> with microphones and a camera. People are beginning to dress them up. They put hats on them, they draw mustaches on them. They become sort of personal extensions of themselves. I think more and more people are going to identify with these things and personalize them, and that’s somewhere along the spectrum. I’m not going to tell this robot to behave like me and go to a party – I’m going to be there. Maybe I can’t be there in all respects, so it’s going to take care of not running over people’s feet, or getting itself there without my having to drive it. So I love that future view of how robots and people will cooperate and we’ll each learn about each other. Robots aren’t learning – well, they are learning actually. <laughs> In our own ways, we will learn to cooperate, and that’ll drive the designs of them. I don’t know if we’ll ever get to autonomous robots where C-3PO will make me dinner. <laughs> Whether they will within our lifetimes is a little too far out there for me to predict. They’ll be doing very amazing things, and they already are, in surgery and a lot of other places. So maybe that’s a good place to stop. <laughs>

Final Remarks

Q:

I think so.

Ken Salisbury:

I can do some card tricks for you.

<laughter>

Ken Salisbury:

I forgot my cards. <laughs>

Q:

Collaborating with the robot hands. That would be something.

Ken Salisbury:

Yeah.

Q:

Yeah.

Ken Salisbury:

I was reading a book about a fellow who only has one hand and does these amazing card tricks. One of the reasons I read about magic more than I do it is that it’s all talking about dexterity. Completely unconsciously, but they’re just talking about, “How do you make this thing disappear? How do you not let it show? How do you make it flashy?” And so it’s great fun, and I do practice in front of the mirror quite a bit.

<laughter>

Ken Salisbury:

The other thing I like: I do lots of things with my hands. I do pick locks. I’m very interested in knot-tying. I’ve certainly learned some knots that are not very well-known. And I play the flute. And if we come back to the tactile sensors, the fine-array tactile sensors I mentioned, my teacher and I have been wanting to put touch sensors on people’s hands so you could track the growth of a person’s skill. If you’re doing it wrong, you squeeze too hard or you put shear forces on it, and it would make a very interesting Ph.D. project for somebody to sort of track the accumulation of talent in that touch domain. Because sometimes she looks at my hands and she can’t tell why <laughs> it’s not working, so she has to kind of feel my finger or whatever. But anyway.
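That skill-tracking idea can be made concrete with a very small data pipeline: read a fingertip tactile array each frame, estimate the total normal and shear forces, and log how often a player exceeds sensible limits over a practice session. The sketch below is purely illustrative; the array sizes, thresholds, and function names are assumptions, not an existing system.

# Toy skill-tracking pipeline over fingertip tactile-array frames. All numbers
# (array size, force thresholds) are invented for illustration.

import numpy as np


def analyze_frame(pressure: np.ndarray, shear_x: np.ndarray, shear_y: np.ndarray,
                  max_normal_n: float = 3.0, max_shear_n: float = 1.0) -> dict:
    """Summarize one tactile-array frame; thresholds are made-up example values."""
    normal = float(pressure.sum())
    shear = float(np.hypot(shear_x.sum(), shear_y.sum()))
    return {
        "normal_force_n": normal,
        "shear_force_n": shear,
        "too_hard": normal > max_normal_n,   # squeezing too hard
        "dragging": shear > max_shear_n,     # putting shear forces on the keys
    }


def session_report(frames: list[dict]) -> dict:
    """Fraction of the session spent over each threshold -- a crude skill signal."""
    n = max(len(frames), 1)
    return {
        "pct_too_hard": 100.0 * sum(f["too_hard"] for f in frames) / n,
        "pct_dragging": 100.0 * sum(f["dragging"] for f in frames) / n,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [analyze_frame(rng.uniform(0.0, 0.2, (4, 4)),
                            rng.uniform(-0.05, 0.05, (4, 4)),
                            rng.uniform(-0.05, 0.05, (4, 4)))
              for _ in range(200)]
    print(session_report(frames))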

Q:

So I think at Waseda they have the flautist robot, and they’re really interested in doing this kind of more human-like, more individual feeling of playing rather than just the mechanics of playing.

Ken Salisbury:

Okay. Where is this?

Q:

Do you know?

Ken Salisbury:

I’ve seen some of this but where?

Q:

At Waseda University. They call it Kansei–

Ken Salisbury:

Yeah.

Q:

–engineering, and I know that there – so I think maybe it would be fun for them to hear that idea. Because I think one of the things that they seem to be really interested in is, “How can we actually make something where it’s not just playing these flat notes, where there’s a feeling or a personality or the kind of specific individual playmanship that an actual human has when they’re playing an instrument?” My brother’s a professional piano player.

Ken Salisbury:

Okay.

Q:

And sometimes he’s forced to use an electric sort of thing, and they used to be really terrible. They’re a little bit better now.

Ken Salisbury:

Yeah.

Q:

They have weighted keys and they’ve gotten much better at the sensitivity of –

Ken Salisbury:

Yeah. Velocity and force sensing.

Q:

–the velocity and things.

Ken Salisbury:

Yeah.

Q:

But it’s still like there’s not that haptic feedback that you get from a real piano, the kind of vibration and the sense you get when you’re actually playing the piano.

Ken Salisbury:

So the dual of that is these folks who want to build a robotic flute player. To me that sort of comes from the same place as people who want to make a legged robot. It’s a really interesting technical challenge and it’s impressive when they succeed, but it’s sort of putting the cart before the horse in my mind. If they’re developing better flute mechanics, which is a different research domain, to get better-quality sound out of it, that makes sense. The electric piano getting better and better so it can be truly expressive, so a person can really think about the movement of the music rather than the mechanics of the keys, that makes sense too. Whereas with a flute, it’s more an interesting study of biomechanics, I think: getting the armature right, getting the air flow right, sort of understanding the physics of what’s going on. That’s the first thing they had to do, and I have seen that version of the robot. I’ve not seen the version that’s trying to be expressive.

And that seems like the long way around. If I wanted to make a robot that could jump really high, I wouldn’t necessarily put legs on it. I would use gas-powered pogo sticks <laughs> or something like that. And if I wanted to make an instrument that was expressive, I would skip the difficulties of mechanical interactions and do just electronic production of it, and work on the higher-level issues of how do I make it sound interesting? If I’m trilling on a song that’s going slowly, it’s very different from trilling on a song that’s going fast. It’s not just a mechanical transformation. So to get that to be aesthetically pleasing, you can look at all those problems without building an anthropomorphic finger-pushing, mouth-twitching flute player. <laughs> Again, I don’t want to deprecate the work. There’s a lot of great engineering going on in that area, and it’s been years since I’ve seen what they’re doing. But if I were trying to understand expressiveness and presentation, those things, I wouldn’t impede that process by requiring fingers and robot mouths <laughs> to do it.

<laughter>

Ken Salisbury:

There was a piano-playing robot around the time that I was working on my hand. I think it was from Waseda or one of the Japanese universities, and it could play, and it could even scan the music with a camera and transcribe it into finger motions. And it was sort of interesting: “When should I shift my hand? When should I just reach with my fingers?” It solved a lot of classic problems, more in the mechanical sense: what algorithms, looking ahead in the music, give the optimal shifting of my hands? But do you have to go so far as building a machine that actually has fingers to do that? Again, it’s interesting and it’s entertaining, but I don’t think it’s solving a deep scientific problem to build a hand like that, at least not directly. Maybe along the way you learn something about it. I’ve programmed my hand to play the National Geographic theme for a <laughs> film that they were doing. It was really pretty easy. I just put it on each key and said, “Play the sequence <laughs> over and over,” and it did it. It wasn’t very expressive.

<laughter>
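The look-ahead hand-shifting question Salisbury recalls is, at its core, a small optimization problem: place a hand that spans only a few keys so that every note is reachable while the total amount of shifting is minimized. A toy dynamic-programming sketch of that problem, offered purely as an illustration and not as the actual algorithm that robot used:

# Toy fingering/hand-shift planner: the hand covers HAND_SPAN adjacent keys, and we
# look ahead over the whole note sequence to minimize total hand movement.
# Illustrative only -- not the algorithm the piano-playing robot actually used.

from functools import lru_cache

HAND_SPAN = 5  # keys the hand can cover without shifting


def min_total_shift(notes: tuple[int, ...]) -> int:
    """Minimum summed hand movement needed to play all notes in order."""

    @lru_cache(maxsize=None)
    def best(i: int, hand_pos: int) -> int:
        if i == len(notes):
            return 0
        note = notes[i]
        # Any hand position whose span covers the note is a legal placement.
        candidates = range(note - HAND_SPAN + 1, note + 1)
        return min(abs(p - hand_pos) + best(i + 1, p) for p in candidates)

    first = notes[0]
    # Start with the hand already over the first note at no cost.
    return min(best(0, p) for p in range(first - HAND_SPAN + 1, first + 1))


if __name__ == "__main__":
    melody = (60, 62, 64, 65, 67, 72, 71, 60)  # MIDI-style key numbers
    print("Minimum total hand shift:", min_total_shift(melody))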

Ken Salisbury:

But it worked. Yeah. The expressiveness that I’m interested in comes back to the touching of the robot. That is meaningful if it’s helping me walk or if it’s helping me do a task or if it’s serving me noodles or whatever. There is a reason to study that quality of interaction so that I’m comfortable with it. Same way as designing a car: how should I design the steering mechanism with variable ratios so that I feel comfortable driving fast and slow and turning? So accommodating humans’ capabilities and enhancing them, I think, is a really good thing to do. But just trying to replicate them for replication’s sake, yeah, I’m not quite ready for that.

<crew talk>

Q:

Unless you think there’s something that we haven’t touched on that you’d like to mention.

Ken Salisbury:

I appreciate that question. No. I’ve made the point that the students are really the real output. I think that’s pretty important. And I’m not the first one to say that. But it’s something that makes me very proud. As I was going through my list of students, I’d go, “Wow. He did this, she did that. Yeah.” <laughs> It’s good stuff. We’re all kind of on the same family tree now. Bernie was such a huge inspiration to me. I’ve had lots of good mentors. He’s probably been the most significant one after my father, though, in just the attitude of, “You can do it, and you don’t have to go in a linear path. You can kind of wander around in your thinking and let it stir in your head until something good comes out.” And his just kind of gentle manner. He’s not putting everybody on the Gantt chart saying, “This is what you’re going to do and I don’t want you to think about it. Just be a soldier.” <laughs> He was very encouraging of independent thinking, and so I certainly learned a lot from him. We go to Burning Man together too.

<laughter>

Q:

Right. I was going to ask about Burning Man. <Inaudible>.

Ken Salisbury:

You want to get –?

Q:

How long have you been going?

<laughter>

Q:

Yeah, yeah. I was like, “I know we had something that we didn’t get to.”

<laughter>

Ken Salisbury:

How much time you got for this one?

<laughter>

Ken Salisbury:

Okay. For quite some years Bernie Roth was trying to talk me into going to Burning Man and he said, “It’s going to change your life.” And I used to camp a lot and I’m sort of up for kind of crazy things. So I went when my daughter graduated from high school. She and I went and camped out with Bernie and his clan of friends and it was life-changing. It was huge for my relationship with my daughter just to be in this crazy environment and <laughs> we have a sign now that I’ve put up in the camp that says, “Recess all day.” <laughs> And the other sign I put up says, “Mistakes are okay.” That’s relevant to my flute playing. I get pretty anxious when I play in a formal recital, but in front of this crowd, I just, I played for half an hour and maybe everybody was drinking or half of them were.

<laughter>

Ken Salisbury:

Maybe I was. I don’t know. But it was the first recital that I really, really enjoyed, just because it was this whimsical, free environment. I’ve been getting interested in making chimes. In the orchestra you call them tubular bells. They’re not wind chimes, but they’re chimes. And I wanted to make an interactive musical instrument for people to play with, so I made these huge eight-foot-long chimes and tuned them. My geek self went into high drive on this and tuned them really, really exquisitely, and then I bought an old swing set to hang them on and took it to Burning Man, and hundreds of people came by and played these chimes. Some of them were really good musicians; there were gamelan players and there were percussionists and there were blind folks who came by and just kind of figured them out, and lots of folks in between. I have lots of pictures of people with smiles just playing away on this. And next to my hand, that’s probably the most satisfying thing that I’ve done. <laughs>
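The “exquisite” tuning he mentions follows well-known chime physics: a uniform tube vibrating freely at both ends has a fundamental frequency that scales roughly as 1/L², so cut lengths for a whole scale can be estimated from one reference chime, and the hanging points sit near the vibration nodes about 22 percent in from each end. A small sketch of that arithmetic, with made-up reference numbers rather than his actual chimes:

# Chime-length arithmetic for a uniform free-free tube (Euler-Bernoulli beam):
# fundamental frequency ~ 1/L^2, so length ~ 1/sqrt(frequency). Reference values
# below are invented for illustration; they are not Salisbury's actual chimes.

import math

NODE_FRACTION = 0.224  # approximate nodal position of the fundamental mode


def chime_length(target_hz: float, ref_hz: float, ref_length_m: float) -> float:
    """Estimated length, same tube stock as the reference chime, from f ~ 1/L^2."""
    return ref_length_m * math.sqrt(ref_hz / target_hz)


def hang_point(length_m: float) -> float:
    """Distance from each end at which to suspend the chime (near a node)."""
    return NODE_FRACTION * length_m


if __name__ == "__main__":
    ref_len, ref_hz = 2.44, 220.0  # pretend an eight-foot chime of this stock rings at A3
    for name, hz in [("A3", 220.0), ("C4", 261.63), ("E4", 329.63), ("A4", 440.0)]:
        length = chime_length(hz, ref_hz, ref_len)
        print(f"{name}: cut to {length:.3f} m, hang {hang_point(length):.3f} m from each end")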

That’s because I got lots of good feedback. People liked it. They played with it. I made a toy that people played with, but it was sort of aesthetically pleasing, and I put a lot of science into it, and suddenly it worked. One funny anecdote: while I was tuning these chimes, I had them hanging in the hallway of my house, in the stairwell. And my son would come by and he goes, “Dad, why do you have a stripper pole hanging in the stairway?”

<laughter>

Ken Salisbury:

“How come you know about stripper poles?” was my question.

<laughter>

Ken Salisbury:

And then I had three of them hanging there and he said, “Well, Dad, now you’ve got three of them. What goes on?” I said, “Well, when you’re at Mom’s house you don’t know what goes on here.” <laughter> Crazy stuff. But it was really fun. And I’ll probably do that again next year.

Q:

So have you been once or –?

Ken Salisbury:

Four –

Q:

Four times.

Ken Salisbury:

Four times.

<laughter>

Ken Salisbury:

It changes.

<laughter>

Ken Salisbury:

And I love going with my daughter. It’s just a time when we get away from our normal interaction modality. School isn’t there, the phone’s not ringing and all this other stuff, and we really had a lot of fun. We got really close through this. And we still talk about it constantly. “What are you going to wear next year? What are you going to make?” <laughs> Or “Who can we invite?” My son, who’s 16, is just dying to go. I’ll wait until he’s 18. You need a certain level of maturity or experience to go there. And it’s such a rugged place, and you’ve got sand and wind and dirt and it’s crazy. But the adversity almost makes it good. You survive it and you’re all in it together. You’re all wearing your goggles and your dust masks at times. But it’s a really good experience. I used to be in the Scouts and used to love camping and taking care of myself and being prepared and all of that. Well, you’ve got to do that in spades <laughs> when you go to this place. And so I feel pretty helpful. I love going around with my tools and helping people fix things that are broken, their sculpture that’s falling down or something that needs to be welded. So the gifting attitude is interesting too. It’s not just, “I’ll pay you this for that.” It’s, “Here’s something I want you to have, and I’m not expecting something in return.” So it’s sort of an artificial culture, but it really works well. And it does echo over the rest of your life, just kind of questioning the way we do things and maybe being more generous, whether it’s in spirit or materially. I don’t know. It’s really been a good experience. I have Bernie to thank for that as well, yeah.

Q:

Burning Bernie.

<laughter>

Q:

Weekend with Bernie.

Ken Salisbury:

Yeah. We were going to call it, yeah, Bernie Man we were going to call our camp. <laughs> That didn’t stick. It became called Nice Nice for some other reasons. But yeah, it’s a good group. And it’s really eclectic. People don’t – their identity is not what they do. You rarely hear somebody say, “What’s your job?” It’s like, “What did you see today?” or “Your costume’s cool. What do you like to do?” or “I found this noodle bar. Won’t you come with me?” And we get away from the normal social exchanges in a really nice way. I have no idea what some of the people I camped with do. I know what exhibits they like and the funny costumes that they like to wear in some cases and what music and stuff, so yeah, it’s good in a very different way.

Q:

Okay. We’re good. Great. Thank you.