
Norihiro Hagita

Norihiro Hagita was born in 1954. He attended Keio University, where he received a B.E., M.E., and Ph.D. in electrical engineering in 1976, 1978, and 1986, respectively. In 1978 he joined the Nippon Telegraph and Telephone Public Corporation (now NTT), where he worked until 2001. From 1996 to 2001 he served as the executive manager of the NTT Communication Science Laboratories. In 2001 he moved to the Advanced Telecommunications Research Institute International (ATR), where he launched the Media Information Science Lab in October 2001 and the Intelligent Robotics and Communication (IRC) lab in October 2002. He currently serves as director of both groups, as well as Board Director of ATR, an ATR fellow, and chair of ATR Creative.

In this interview, Hagita discusses his work in robotics, with a focus on human-robot interaction, agent systems and networked robots, visual perception, and pattern recognition and learning. He recounts his robotics work at various research laboratories, such as Stephen Palmer's lab at UC Berkeley, and his time and contributions at NTT and ATR. He reflects on the influences and successes of his career, his involvement in creating the IRC and Media Information Science Lab at ATR, and his collaborations on various projects (such as Robovie and RFID tags). Additionally, he comments on the evolution and difficulties of robotics, and provides advice to young people interested in the field.

About the Interview

NORIHIRO HAGITA: An Interview Conducted by Peter Asaro, IEEE History Center, 5 November 2013.

Interview #701 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center at Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 USA, or ieee-history@ieee.org. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Norihiro Hagita, an oral history conducted in 2013 by Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

INTERVIEWEE: Norihiro Hagita
INTERVIEWER: Peter Asaro
DATE: 5 November 2013
PLACE: Tokyo, Japan

Early Life and Education

Q:

I’ll start by asking you to introduce yourself and tell us where you were born and where you grew up and went to school.

Nori Hagita:

Yeah. I was born in 1954, and I graduated from Keio University. I got my master's degree from Keio University in 1978, and then I joined NTT, the Nippon Telegraph and Telephone Public Corporation. I also got a Ph.D. from Keio University in 1986.

Q:

What did you study as an undergraduate?

Nori Hagita:

My major study was pattern recognition, especially handwritten Chinese character recognition. It's a very hard job, yes.

Q:

What made it really difficult at the time?

Nori Hagita:

The most difficult thing is the number of categories, in the thousands, and how to manage such a large character set. But in this case I introduced a very interesting statistical method at that time.

Q:

What kind of computers were you using?

Nori Hagita:

The VAX-11/780. It was a miraculous minicomputer at that time.

Q:

And were you scanning the images?

Nori Hagita:

Yeah. I was always using a scanner, and the printer at that time was a Versatec printer. We also contributed to some OCR systems. For the scanner, we could always use a CCD camera or CCD sensors.

Q:

And were the techniques similar when you were looking at Chinese characters compared to Japanese characters or Latin characters?

Nori Hagita:

Ah, yes. Japanese characters include phonetic letters, unlike Chinese characters. The phonetic letters we call hiragana and katakana. They came from Chinese characters, and they were generated more smoothly and correspond to sounds.

Q:

So is that easier to recognize optically?

Nori Hagita:

No. Very distorted. Depending on the person, different shapes are generated. <laughs> It's the harder one.

Graduate Work and Years at NTT

Q:

So who did you work with on your master’s degree?

Nori Hagita:

Oh, with the NTT researchers. Yeah. Because NTT did such work: they did constant research on speech recognition, image processing, and image recognition. So we had many, many researchers, our colleagues, related to character recognition, printed Chinese character recognition. But some people focused on numeral recognition systems. Many people at that time focused not only on the computer, on digital machines; it was a new thing at that time. So some people focused on syntactic pattern recognition. My major interest was statistical pattern recognition rather than syntactic pattern recognition.

Q:

And you continued that work for your Ph.D.? Or did you do a different project?

Nori Hagita:

Yes. After the Ph.D. at Keio University I became very interested in visual perception. So I learned a lot of psychological findings on visual perception. I loved psychology, so I moved to the UC Berkeley Department of Psychology, especially for visual perception, and I joined Steve Palmer's lab. He is a real expert on neural networks and cognitive psychology, because he came from Rumelhart and Norman's group at UCSD. So he ran a very, very wide range of vision research, including computer vision. I also joined up right away with some computer vision laboratories at UC Berkeley, so I learned a lot of things. But I stayed only one year. After that I went back to NTT. I had to do management rather than basic research; especially, I focused on the personnel divisions. At NTT at that time we introduced a very interesting postdoctoral assistant program, so I interviewed in many, many countries, including the United States. After that we hired more than 20 postdocs from foreign countries. I was managing these things.

Q:

So when you got into the psychology of vision were you influenced by the works of David Marr or J.J. Gibson?

Nori Hagita:

Yeah. At that time the craziest research was David Marr's, and the neural network work in PDP, parallel distributed processing, written by Rumelhart and McClelland. I loved it. Yes, very much.

Q:

Did you get back to that work eventually?

Nori Hagita:

Yes. Also, when I was a university student I was in the first generation of neural networks, doing simulations of respiration or pain mechanisms, which I sometimes submitted to the IEEE SMC. When I was younger. <laughs>

Q:

So when did you get back into research after this?

ATR Labs

Nori Hagita:

Yeah. I still continued with distorted character recognition and image understanding, and I also had to serve as the manager of the basic research laboratory. At that time the focus was on agent communication, as if it were human. So I integrated speech recognition, image processing, and linguistic research, combined all these things, and generated many kinds of agents, until 2001. But in 2001 I moved to ATR, even though I was an NTT member. I launched a special laboratory we call the Media Information Science Lab at ATR, and I served as the director of this laboratory, from October 2001. One year passed, and I also started running a second laboratory we call the Intelligent Robotics and Communication Lab, in October 2002. That October IROS was held at, which one, EPFL, yeah. Just after I started running the Intelligent Robotics and Communication lab, within only two or three days at IROS I met many, many very important people, all the very famous researchers in the world. I just loved them. A very exciting society. After that I also collaborated with Hiroshi Ishiguro; he was the department head of our laboratories. When I started this laboratory there were only a few people, but one year later more than 20 people had joined.

At that time we brought the humanoid robot to elementary schools with speech recognition systems. But I learned that it didn't work, because the kids always spoke loudly, loudly, loudly, so the speech recognition did not work. So after that I introduced RFID tags, radio frequency ID tags. I gave these tags to each pupil, so that whenever a pupil approached, came close to the robot, the robot said his name or her name. The kids were surprised. After that we started a learning program of 50 English words within two weeks. It worked very well.

So after that, the networked robot looked like a very powerful way forward for human-robot interaction: how to collaborate with sensor networks and information in cyberspace, that means the Internet, not only the standalone visible robot. So we proposed to the government the combination of the visible robot, the sensor network, and Internet information; we proposed the concept of the networked robot. And we worked on the problem with ubiquitous computing researchers, computer vision researchers, psychology, and machine translation. We collaborated with each other to generate the new type of robot that we called "networked robotics." The new project started in 2004. Almost one decade has passed since then. Yes.
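A minimal sketch of the RFID-triggered greeting described above, assuming hypothetical tag IDs, pupil names, and a range threshold; none of these identifiers come from the interview.

```python
# Hypothetical sketch of the classroom scenario described above:
# each pupil wears an RFID tag, and when a tag is read at close
# range the robot greets that pupil by name. Names, IDs, and the
# threshold are invented for illustration.

PUPILS = {
    "tag-001": "Aiko",
    "tag-002": "Kenta",
}

GREETING_RANGE_M = 1.5  # assumed reader range threshold

def on_tag_read(tag_id: str, distance_m: float, say) -> None:
    """Greet a pupil by name when their tag is read nearby."""
    if distance_m > GREETING_RANGE_M:
        return  # tag visible but pupil not close enough
    name = PUPILS.get(tag_id)
    if name is not None:
        say(f"Hello, {name}!")

if __name__ == "__main__":
    on_tag_read("tag-001", 0.8, say=print)  # -> Hello, Aiko!
```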

Q:

And was that a humanoid robot that you started working on?

Nori Hagita:

Yes. Before I joined ATR, Hiroshi Ishiguro had already built a special robot called "Robovie." The "vie" comes from the French for everyday life, like "life" in English. At that time Hiroshi wanted to realize the robot within daily life. So we always brought the Robovie out as a demonstration system rather than keeping it in the laboratory; it was always in the real world. And in the real world we learned lots of things. Sometimes very interesting experimentation: for two hours a mom and a child talked with two Robovies. What happened? Whenever one Robovie communicated with the mom, the child got jealous. A very interesting phenomenon. After that we collaborated with psychologists and sociologists. That means the research field of human-robot interaction was extended, and more cognitive, more psychological collaboration was made.

Q:

And for the agent systems that you were doing before, you didn't have the robot attached, so it's a very different kind of interaction when you have this robot running.

Nori Hagita:

Yeah. But if you combine the agent systems and the humanoid robot, it's easy for users. For example, let's imagine a humanoid robot here, while another agent system works in cyberspace. Whenever people want to get some information, say station information, this agent collaborates with the sensor network in the stations, and they communicate with each other to get the information. Then the agent gives it to the humanoid robot, and the humanoid robot tells us the current situation at the station. These collaborations are easy to generate and realize if the agent and the visible robot collaborate with each other and with the sensor network.
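A minimal sketch of the agent-robot relay described above: a cyberspace agent queries the station's sensor network, and the visible robot voices the answer. The class names, fields, and stubbed reading are all assumptions for illustration.

```python
# Hypothetical sketch of the collaboration described above: a
# cyberspace agent gathers station information, and a humanoid
# robot reports it to the user. All names are invented.

from dataclasses import dataclass

@dataclass
class StationStatus:
    station: str
    crowd_level: str   # e.g. "low", "high"
    next_train_min: int

class CyberAgent:
    """Stands in for the networked agent living in cyberspace."""
    def fetch_status(self, station: str) -> StationStatus:
        # In reality this would query the station's sensor network.
        return StationStatus(station, crowd_level="high", next_train_min=4)

class HumanoidRobot:
    """Stands in for the visible robot that talks to the user."""
    def speak(self, text: str) -> None:
        print(f"[robot] {text}")

def answer_station_query(agent: CyberAgent, robot: HumanoidRobot, station: str) -> None:
    s = agent.fetch_status(station)
    robot.speak(
        f"At {s.station}, the platform is {s.crowd_level} right now; "
        f"the next train arrives in {s.next_train_min} minutes."
    )

answer_station_query(CyberAgent(), HumanoidRobot(), "Osaka Station")
```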

Q:

Now was that the first robotic system that you worked with in your career?

Nori Hagita:

Yes. The first robotic system was introduced in a supermarket. When the supermarket opened, some visitors tried to find where they wanted to go. They had special RFID tags, which we had given to the citizens in advance, and whenever a visitor approached the Robovie, the Robovie said, "Hi. Hello. Can I help you?" The first time, the visitor said something, and the robot memorized those words. The second time the visitor came, depending on the first time's utterances or interactions, the Robovie would say something, or sometimes recommend a special excellent food. It's a kind of affiliation service in the real world, not cyberspace. We realized these things.

After that we learned that ambient intelligence is more important for continuing human-robot interactions. So we brought special sensor networks into the real world, accumulating people's activities on some streets in advance. We introduced laser range finders and cameras, mainly multiple cameras. This system detects people's positions and behaviors, the positions to within 5 centimeters precision; even when more than 20 or 30 people come walking around, each person's position can be detected within 5 centimeters. After accumulating these data for one week, 24 hours a day, we have a very interesting ambient intelligence map. Even when unknown visitors pass through, without RFID tags, the system estimates or anticipates their activities. For some people the system anticipates that that guy might enter this shop. We can estimate these situations. Very exciting.

So we combined these ambient intelligence systems with the humanoid robot. Sometimes a couple came, and the humanoid robot came close to them, said something, and recommended, "You should go to this shop." And some people entered the shop. It's a kind of affiliation service. We realized these things. Sometimes we also collaborate with ASIMO, because ASIMO is very interesting with these sensor network systems; we call them networked robot platforms. And Paolo Dario's Dustbot also joined these systems. They brought the Dustbot system to Osaka, already implemented with the sensor network. It works very well.
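A minimal sketch of how an ambient intelligence map like the one described above could anticipate an untagged pedestrian's destination from accumulated trajectories; the grid cells, headings, and shop names are invented.

```python
# Hypothetical sketch of the ambient-intelligence map described
# above: past trajectories are binned by (grid cell, heading), and
# for a new, untagged pedestrian we anticipate the most likely
# destination from that history. All data here is invented.

from collections import Counter, defaultdict

# history[(cell, heading)] counts where past pedestrians ended up
history: dict[tuple[tuple[int, int], str], Counter] = defaultdict(Counter)

def record(cell: tuple[int, int], heading: str, destination: str) -> None:
    """Accumulate one observed trajectory into the map."""
    history[(cell, heading)][destination] += 1

def anticipate(cell: tuple[int, int], heading: str) -> str | None:
    """Most likely destination for a pedestrian at this cell/heading."""
    counts = history.get((cell, heading))
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Build up a toy week of observations, then query.
record((3, 4), "north", "bakery")
record((3, 4), "north", "bakery")
record((3, 4), "north", "bookshop")
print(anticipate((3, 4), "north"))  # -> bakery
```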

Q:

And where did you first meet Ishiguro and how did you develop a collaboration with him?

Nori Hagita:

Before I joined ATR, he was already one of the visiting researchers at ATR. At that time they focused on how to create a humanoid-type robot, especially on the communication functions. At that time it was Robovie version zero, a very tall robot; its shape was almost a can. Robovie 1 had no sensors and was almost similar to Robovie version 2, but Robovie 2 had special tactile sensors. Whenever people touched the robot, the robot learned who might have touched it and which parts of the sensors were activated or not. Before, only Hiroshi Ishiguro developed these robots. But after we joined each other, we invested in the Geminoid; I invested in him. After that he got a breakthrough for the humanoid, because he always focused not only on communication but on the idea that the appearance of the robot might be a very important thing. So he always focused on the ultimate research: very simple versus more reality. Selecting more reality, he made the Geminoid; then going simpler and simpler, he made the Telenoid, very simple, white. So whenever people develop a humanoid robot, it maybe falls in the range of these things. I see that he covered the dynamic range of human appearance, I think.

Agent Systems and Pattern Recognition

Q:

What were some of the big challenges of the agent systems and pattern recognition work that you'd been doing -

Nori Hagita:

Yes. We focus on a big area, a wide area. Right now the town becomes a robot. I have defined a robotic service as having three functions; a robotic service always includes these three functions. One is sensing, then actuation, and its control. Especially, most intelligent robotics research focuses on intelligent control, I think. So if a system or device or robot includes these functions, we call them robotic functions. Let's imagine a building that has special sensors and special actuation functions: we can realize the robotic building, I think.

The Japanese government has also focused on how to create new types of cities. Cities sometimes include sensing information, and right now it is open to the public: people go into this prefecture and get out of this prefecture, and we can see these situations. After that some people say, "Oh, we have to create the new market." So this is the sensing part. But robot systems or personal mobility systems or <inaudible> systems give different services depending on people's situations. I have already developed these systems; we call them the ubiquitous networked robot platform. This platform is authorized, that means standardized in the ITU-T, and open to the public. Most people focus on the Internet of Things, IoT, and I think IoT includes the robot. Most people focus not only on the machine but also on robotics, because a robot is more intelligent. So we realize these cities; smart cities, we call them.
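A minimal sketch of the three-function definition above, in which anything coupling sensing, actuation, and control counts as a robotic service, so a suitably instrumented building qualifies. The interfaces here are assumptions for illustration, not the actual ITU-T platform interfaces.

```python
# Hypothetical sketch of "a robotic service has three functions:
# sensing, actuation, and control." Under this definition a building
# with sensors and actuators becomes a robotic building. All names
# and the control rule are invented.

from typing import Protocol

class Sensing(Protocol):
    def sense(self) -> dict: ...

class Actuation(Protocol):
    def act(self, command: str) -> None: ...

class RoboticService:
    """Anything with sensing + actuation + control counts as robotic."""
    def __init__(self, sensor: Sensing, actuator: Actuation):
        self.sensor = sensor
        self.actuator = actuator

    def control(self) -> None:
        """One control step: map sensed state to an action."""
        state = self.sensor.sense()
        if state.get("occupancy", 0) > 50:
            self.actuator.act("open_extra_gate")

# A "robotic building": its sensors and doors plug into the same loop.
class LobbySensor:
    def sense(self) -> dict:
        return {"occupancy": 63}  # stubbed reading

class LobbyGates:
    def act(self, command: str) -> None:
        print(f"[building] {command}")

RoboticService(LobbySensor(), LobbyGates()).control()
```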

Q:

What kinds of things would a building want to communicate to people? And how would it communicate to people?

Nori Hagita:

It depends on the situation, I think. In the case of Japan we are facing an increasing number of elderly and a decreasing number of kids, so especially the problems of the elderly or the disabled. The Japanese government focuses on these systems. So we would like to contribute with networked robot systems, including humanoid robots. Because the elderly want to talk; even when they are alone, they want to talk. So sometimes a robot or an agent helps facilitate the communication.

Q:

Would it also be used to direct people in public buildings?

Nori Hagita:

Yeah, yeah. Right now we have also implemented this in some street or town areas. For example, in the south port of Osaka we implemented 900 square meters of sensor networks. The sensors detect people's directions, speed, and height automatically; even when more than 100 people are walking, each individual person's activity is detected.

Q:

Would it make self-driving cars a lot easier?

Nori Hagita:

Yeah, yeah, yeah.

Q:

So I’m still curious about some of the work you did with NTT. And what kind of agents were you building? What were some of the challenges of computer agents talking to human agents?

Nori Hagita:

Yes. I think not only the computer agent, maybe the agent is a cyber agent, but also a real agent like a robot was introduced. Because the elderly want to carry something, or sometimes they want to move by themselves using vehicles or robotic wheelchair systems. NTT also focused on this field. And DoCoMo focused on the smartphone; they assume some agent, a cyber agent.

Q:

So what’s the hardest part of the interaction? Is it sensing what the human wants? Or is it communicating to the human?

Nori Hagita:

In the city, also at DoCoMo, they developed a special spoken-dialogue speech recognition system. Currently we just say something like that to it; it's not a natural thing, not the same as human-to-human communication. In place of these, maybe a robot could say something. But in that case we have to use a special interface. A human-to-human interface is easy; people can say things to each other even if they are elderly, and sometimes with no graphics it is easy to communicate with agents. So the ultimate interface might be the robot, human-robot interaction, I think.

Q:

Is gesture a big part of that?

Nori Hagita:

Yes, a very big one. We are collaborating with ASIMO right now; last October we made a special experiment nearby here. This time we introduced very, very precise behavior: a precise gesture is generated depending on the people's activity. Some visitors are moving and some visitors are looking in different directions. ASIMO detects these situations by collaborating with the ambient intelligence systems. So depending on people's activities, ASIMO moves and says something. It's almost a human relationship, I think, a human-to-human relationship.
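A minimal sketch of activity-dependent gesture selection as described above: sensed visitor state is mapped to a gesture and an utterance. The state fields and responses are invented.

```python
# Hypothetical sketch of the behavior described above: the ambient
# sensor network classifies what nearby visitors are doing, and the
# robot picks a matching gesture and utterance. Categories and
# responses are invented for illustration.

def choose_behavior(visitor_state: dict) -> tuple[str, str]:
    """Map a sensed visitor state to a (gesture, utterance) pair."""
    if visitor_state["moving"]:
        return ("wave", "Hello! Enjoy your visit.")
    if visitor_state["gaze"] == "away":
        return ("point_to_exhibit", "The exhibit is this way.")
    return ("bow", "May I help you?")

# Stubbed readings as the ambient system might report them.
for state in (
    {"moving": True, "gaze": "robot"},
    {"moving": False, "gaze": "away"},
    {"moving": False, "gaze": "robot"},
):
    gesture, speech = choose_behavior(state)
    print(f"{gesture:>16}: {speech}")
```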

Robotics Collaborations and Conferences

Q:

So who are some other people you collaborated with on robotics?

Nori Hagita:

Yes. Currently we have more than 1,540 researchers. Some of them are physicists, and physicians, because they analyze people's activities. Psychologists and cognitive scientists also always collaborate. In the case of Hiroshi, the focus is on artists. Artists: Linz, you know, Ars Electronica. More artists. In the case of ATR, we are always branching out to new types of research, artificial life or media information, creating new types of media. We are always interested in these fields. Currently in vivo biology has been introduced to our laboratory. In this case the zebrafish: we can see what's going on in the zebrafish's internal organs, because we can easily see the liver, and the vessels are also visualized. Collaborating with simulations, we analyze the mechanisms of cancer. In the near future many robot systems will be introduced into everyday life, and at that time soft robotics might be very important, sometimes to support health care. So we collaborate with medical doctors, psychologists, physiologists, and biologists. Combining all these things creates new services.

Q:

And you said you went to a meeting in 2001 where you met a bunch of the leaders of the field, so who were they, and whose work really influenced you?

Nori Hagita:

2020?

Q:

2001. You said you went to a conference and met a lot of leaders.

Nori Hagita:

Ah, yes. At that time we focused on ubiquitous computing. Let me see, in 2002 we developed a special system like Google Glass. Each person wore these special systems. You see, LED tags: each panel had a special tag attached, and whenever we looked at a special panel we could detect the panel ID. And if we looked at a particular person, who also wore the same system, we could easily detect people's activity automatically, without name cards or greetings. After that a home page was automatically generated at this conference. And after that JARA focused on other activities. It was the first stage of the combination of robotics and ubiquitous computing systems. So sometimes we collaborated with researchers from UbiComp or groupware, because currently most robotics researchers focus on real robotics, classical robotics. But I came from the ICT companies, so the question was how to bring robotics and ICT technologies together. At that time, the first stage was how to join these systems. This is the story of our 2001 research, yeah.
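A minimal sketch of the LED-tag idea described above: a wearable system reads the ID of whichever tagged panel or person the wearer looks at, logs the encounter, and a summary page is generated afterwards. The IDs, labels, and log format are assumptions.

```python
# Hypothetical sketch of the conference scenario described above:
# gaze at a tagged panel or attendee yields a tag ID, encounters are
# logged, and a per-person page is generated automatically. All IDs
# and labels are invented.

import time

TAGS = {
    "id-17": "poster: Networked Robots",
    "id-42": "attendee: A. Researcher",
}

encounter_log: list[tuple[float, str]] = []

def on_gaze(tag_id: str) -> None:
    """Record what the wearer looked at, if the tag is known."""
    label = TAGS.get(tag_id)
    if label:
        encounter_log.append((time.time(), label))

def summary_page() -> str:
    """Auto-generate the kind of encounter summary described."""
    lines = [f"- {label}" for _, label in encounter_log]
    return "Today you looked at:\n" + "\n".join(lines)

on_gaze("id-17")
on_gaze("id-42")
print(summary_page())
```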

Q:

And have you trained many Ph.D. students to work in robotics?

Nori Hagita:

Yes. Yeah.

Q:

Can you tell us about that?

Nori Hagita:

More than 100 people. Some of them came from Sweden or the United States or Canada. Especially, people in training join ATR, our lab, in the summer seasons. ATR was established in 1986, so almost 30 years have passed, and more than 2,000 foreign researchers have joined our laboratory, not only the Intelligent Robotics and Communication lab. In our case right now there are researchers from 15 countries; one-fourth of the 60 researchers in my lab come from foreign countries.

Q:

But are there any particular students that you supervised who are still working in robotics?

Nori Hagita:

Yeah. I also serve as a visiting professor at the Nara Institute of Science and Technology graduate school, and as a division professor at Osaka University and Kobe University. So it's a little bit of hard work. <laughs> Because I have to serve as the director of the Intelligent Robotics and Communication and Social Media Laboratory Group, it is extra service, yeah.

Q:

And did you do any other sabbaticals other than at the University of California at Berkeley when you went to another lab for a short period?

Nori Hagita:

No, no, no.

Q:

Okay. Is there anything we haven’t covered that you’d like to talk about?

Nori Hagita:

That’s it.

Advice to Young People

Q:

Well there’s a question we always end with which is what’s your advice to young people who are interested in a career in robotics?

Nori Hagita:

Dig out the more intelligent part of robotics, I think. Especially, human-robot interaction is very interesting for young people, because they will be facing these situations as the number of elderly increases in every country. And I'm also very interested in IoT, the Internet of Things, because things become more intelligent and collaborate between cyberspace and real space. How to design these things, including robotics and agents or sensor network systems, and how to develop an easy way to commercialization.

Future Challenges of Human-Robot Interaction

Q:

What do you think are the really hard problems still facing human-robot interaction?

Nori Hagita:

It depends on people's situations: how to give good services to people. The current robot cannot understand people's real situations. To cope with this problem we have to analyze not only the explicit information, the external information, but also the internal information; how to get at that. Especially, intentions are very difficult to analyze automatically. In the case of speech recognition, most people focus on direct speech recognition. But whenever we analyze elderly-to-elderly communications, they sometimes cannot understand the other side's utterances, but they still continue to communicate with each other. This is the story. Most people focus on recognizing what someone says with high fidelity, but we may not need those speech recognition systems in the real world. The point is to continue communicating and to give very important information to each user. At that point the robot recognizes people's intentions rather than recognizing the words.

Q:

Great. Say anything else?

Nori Hagita:

That’s it.

Q:

Thank you very much.

Nori Hagita:

It’s okay.