Oral-History:Maurice Bellanger

From ETHW

About Maurice Bellanger

Maurice Bellanger, 1997

Maurice Bellanger was born in 1941 in France. He received his undergraduate degree in electronics engineering in 1965 from the École Nationale Supérieure des Télécommunications. He joined Télécommunications Radioélectriques et Téléphoniques [TRT], a subsidiary of Philips Communications, in 1967 and has since worked on digital signal processing and its applications in telecommunications. He returned to graduate school and received his doctorate from the University of Paris-Orsay in 1981. At TRT, he rose to become head of the telecommunications department by 1983 and from 1988 to 1991 served as the company's scientific director. He accepted a university appointment as Professor of Electronics at the Conservatoire National des Arts et Métiers [CNAM] in 1991. His major technical contributions include the development of an integrated FIR filter in 1970 and a 60-channel PCM-FDM transmultiplexer in 1974. He holds 16 patents in the field and is the author of two textbooks on signal processing: Digital Signal Processing: Theory and Practice (John Wiley, 2nd ed. 1989) and Adaptive Digital Filters and Signal Analysis (Marcel Dekker, 1987). His professional activities have included editorship of the ASSP Transactions, and he is a past president of EURASIP, the European Association for Signal Processing. He has been a member of the IEEE since 1973 and in 1983 was elected a Fellow [fellow award for "contributions to the theory of digital filtering, and the applications to communications systems"]. His IEEE awards include the Leonard G. Abraham Prize Paper Award of the Communications Society.

The bulk of the interview concerns Bellanger's work at TRT on digital filtering, especially for speech communications. He describes the application, independent of American researchers, of the famous Cooley-Tukey paper on the FFT. Bellanger comments on the impact of rapid technical innovation on signal processing, in terms of the development of research tools and the rapid obsolescence of some fruitful research. At the end of the interview, Bellanger discusses the dilemma facing senior researchers saddled with management responsibilities that take them out of the lab.

Other interviews detailing the emergence of the digital signal processing field include C. Sidney Burrus Oral History, James W. Cooley Oral History, Ben Gold Oral History, Robert M. Gray Oral History, Alfred Fettweis Oral History, James L. Flanagan Oral History, Fumitada Itakura Oral History, James Kaiser Oral History, William Lang Oral History, Wolfgang Mecklenbräuker Oral History, Russell Mersereau Oral History, Alan Oppenheim Oral History, Lawrence Rabiner Oral History, Charles Rader Oral History, Ron Schafer Oral History, Hans Wilhelm Schuessler Oral History, and Tom Parks Oral History.

About the Interview

MAURICE BELLANGER: An Interview Conducted by Frederik Nebeker, Center for the History of Electrical Engineering, 22 April 1997

Interview # 337 for the Center for the History of Electrical Engineering, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Maurice Bellanger, an oral history conducted in 1997 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.

Interview

Interview: Maurice Bellanger

Interviewer: Frederik Nebeker

Date: 22 April 1997

Place: Munich, Germany

Childhood, family, and education

Nebeker:

Can we start with when and where you were born, and a little about your family?

Bellanger:

I was born in 1941 in the west of France. I was educated there and later moved to Paris in 1963, where I received a degree in communications from the electronics engineering school École Nationale Supérieure des Télécommunications in 1965. Later on, I presented my thesis at the University of Paris-Orsay in 1981. So that's my vocation.

Nebeker:

So your education was as a communications engineer?

Bellanger:

Yes. Mathematics first, for basics, then engineering, and later on, contacts with the university led me, at the end of my career, to become a professor.

Philips TRT career

Digital communication telephone network

Nebeker:

Did you go directly from your engineering degree to work on the doctorate?

Bellanger:

No. After my military service in 1965, I got a position in industry and worked in an industry laboratory for 24 years, until 1991. In '91 I became a professor at CNAM in Paris. I was never in an academic research laboratory until '91.

Nebeker:

What was the first job that you had as an engineer?

Bellanger:

With a company called TRT, a subsidiary of Philips Communications, in '67, to develop equipment for the digital communication telephone network. It was a time when PCM was being considered for standardization. In fact, there were two modulations competing: delta modulation and PCM modulation. My first job was to develop a delta modulation system, which was called Continuous Slope Delta Modulation. We installed the 60-channel equipment in a field trial in 1968, and a couple of years later it went into production. Some ten thousand channels were manufactured, up until roughly 1980.

Nebeker:

This was the Philips subsidiary that produced for the French PTT?

Bellanger:

Exactly. But our product was used only on local lines because the PCM was standardized by CCITT in 1968. From then on, we moved to a PCM concept which was derived from the delta concept. It was a channel-by-channel approach, contrary to most companies which were using common encoding for 30 channels. This channel-by-channel approach was based on digital filters. That's how we entered the digital processing area.

We had two products in the laboratory which justified development in digital processing. The first was the delta and PCM per channel terminal using digital features, and the second was data modems which needed adaptive digital filters. So adaptive techniques were going on with one project and the fixed coefficient filters with the others.

Nebeker:

Would you tell me a little about this research group at the Philips subsidiary?

Bellanger:

It was a small group, ten engineers and twenty technicians, typically. There were thirty to forty people altogether.

Nebeker:

Had it existed for a long time?

Bellanger:

Yes, since the early '60s and I think it still exists now. It is continuously working on applications for telecommunications for the national operator.

Nebeker:

Is that its main client?

Bellanger:

It used to be, but less and less now. It used to be closely connected to France Telecom on one hand and to Philips International on the other hand.

Nebeker:

When you started, what did you know of the work that Philips was doing elsewhere?

Bellanger:

At that time, Philips considered us the local center of development for this kind of product. So we were alone in development of these applications. But we had close connections with the main research labs of Philips and we could borrow what we needed from them.

Nebeker:

Was there much interaction?

Bellanger:

Sure. Their theoretical work, for example, was quite useful for us. It was coming from Brussels or from Eindhoven in the Netherlands which has a large research lab, so we had good connections with them. But at this early stage, 1968-70, we were really left alone on the project and our objective was to develop a PCM per-channel codec using integrated circuits. So for example, we developed and implemented in silicon a digital transversal filter, an FIR filter.

Filter design

Bellanger:

It was produced and working in 1970, one of the first integrated FIR filters. It was a half band FIR filter which formed the basis of what later was called multi-rate digital filtering, since we claimed that the half band filter was a basic tool to perform multi-rate digital filtering. We wrote several papers about that.

Nebeker:

Where was that integrated circuit produced?

Bellanger:

In Italy, by a subsidiary of AMI of Silicon Valley, California. They had a nice facility in Naples, Italy, and they could do what we now call ASICs (application-specific integrated circuits).

Nebeker:

They must have been one of the first ASICs.

Bellanger:

Certainly in Europe. We were fortunate to have within Philips a company which was very involved in those kinds of chips. It was called TMC at that time, and they helped us with the technology and implementation. But concerning FIR digital filters, it was quite exciting to have a real implementation at that time because we had to consider not only the filter response, but silicon area and the number of bits in coefficients.

Nebeker:

Your group did all the design on that?

Bellanger:

Everything, all the simulation, the coefficient rounding, coefficient optimization, filter design. We had filter design packages which were based on the least squares, so we tried to get flat frequency response using weighted least squares. It was not as good as the Parks-McClellan program which came in 1974. I visited Rice University in '74 and Jim McClellan gave me the program and we used it, instead of our least squares.
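The weighted least-squares design described above can be sketched in a few lines. This is an illustrative reconstruction, not the TRT package (the function name and parameters are my own): the ideal response is sampled on a dense frequency grid, stopband errors are weighted more heavily, and half of the symmetric impulse response is found by solving a linear least-squares problem.

```python
import numpy as np

def ls_fir_lowpass(num_taps, cutoff, stop_weight=10.0, grid=512):
    """Weighted least-squares design of a symmetric (linear-phase) FIR
    lowpass filter; cutoff is the passband edge normalized to the
    sampling frequency (0 .. 0.5). num_taps must be odd."""
    M = (num_taps - 1) // 2
    w = np.linspace(0.0, np.pi, grid)             # frequency grid
    d = (w <= 2 * np.pi * cutoff).astype(float)   # ideal response: 1 then 0
    wt = np.where(d > 0, 1.0, stop_weight)        # weight stopband errors more
    # amplitude response of a type-I filter: A(w) = b0 + 2*sum_k bk*cos(k*w)
    C = np.cos(np.outer(w, np.arange(M + 1)))
    C[:, 1:] *= 2.0
    sq = np.sqrt(wt)
    b, *_ = np.linalg.lstsq(C * sq[:, None], d * sq, rcond=None)
    return np.concatenate([b[:0:-1], b])          # full symmetric response

h = ls_fir_lowpass(21, 0.2)
```

The Parks-McClellan (Remez) program that replaced this approach in '74 minimizes the maximum error instead, giving an equiripple rather than a least-squares response.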

Nebeker:

How did that first production go? Were there real problems getting it to work?

Bellanger:

Not really, except that the technology was very limited. It was PMOS with ten microns of gate, which limited processing power very much. Everything went reasonably well for the prototypes, but the chips were never mass-produced because of the cost. Remember, it was 1970. We needed too many chips. For example, we used two chips for a digital filter with ten taps. That was too much.

Nebeker:

What kind of communication system was this for?

Bellanger:

A 30-channel PCM system. We finally dropped the digital filter approach and used standard analog filters to make the equipment. After that, it was delivered for ten to fifteen years to the French Telecom operator.

Conversion of FDM telephone channels to PCM

Bellanger:

Since we didn't succeed in putting such a product on the market, we used our expertise in a different area: the conversion of FDM telephone channels to PCM. In 1970, we embarked on the design of a 60-channel PCM-FDM converter and wrote the first patents on that in 1972, if memory serves, and we published the first paper in 1974. That's when we introduced the so-called transmultiplexer which was accepted by CCITT for standardization.

Nebeker:

Did that result in a product for the company?

Bellanger:

Yes, it was put on the market in 1978, something like that, and it was on the commercial market for about ten years.

Multirate filtering

Nebeker:

Can you explain to me what multirate filtering is?

Bellanger:

The multirate filtering idea came from our work on delta modulation. Since our PCM channel bank had a per-channel digital filter, we had to minimize the number of multiplications. The multiplier was the most area-consuming operation. A second reason was that, in order to perform filtering in that context, we had to start from a high sampling rate. The final sampling rate is the regular PCM rate, 8 kilohertz, and we had to start from at least 32 kilohertz. So we tried to optimize the conversion from 32 kilohertz to 8 kilohertz using digital filters, and that's where the concept of multirate came in. We noticed that two half-band filters drastically reduced the number of multipliers, and thought that the concept could be applied in different fields. We gave some presentations of the concept within Philips, and it was used in other areas, not only for sample-rate reduction but also for sample-rate increase, which was called decimation and interpolation.
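The cascade of two half-band stages described above can be sketched as follows; this is an illustrative example using a textbook 7-tap half-band filter, not the actual TRT design. Every other coefficient of a half-band filter is zero, which is what cuts the multiplication count so drastically.

```python
import numpy as np

# A classic 7-tap half-band lowpass: every other coefficient is zero,
# so a decimate-by-2 stage needs only about half the multiplications.
HALFBAND = np.array([-1, 0, 9, 16, 9, 0, -1]) / 32.0

def decimate2(x):
    """Half-band filter, then keep every other sample (rate / 2)."""
    return np.convolve(x, HALFBAND, mode="same")[::2]

# 32 kHz -> 16 kHz -> 8 kHz, as in the per-channel codec described above
fs = 32000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone, well inside the band
y = decimate2(decimate2(x))           # now at the 8 kHz PCM rate
```

Each stage halves the sampling rate, so two stages take the 32 kHz input down to the regular 8 kHz PCM rate while the in-band tone passes through essentially unchanged.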

Nebeker:

Why did you have the higher initial sampling rate?

Bellanger:

Because what we wanted to make was a telephone band filter from 0 hertz to 3400 hertz. But, as you know, a digital filter is a sampled system, so it has a frequency periodicity. And after the base band, which is what we really want to keep with digital filters, we have image bands around all the multiples of the sampling frequency, which have to be canceled by analog filters in any case, since the output has to be analog. But the higher these images, the easier it is to suppress them. Our goal was to be able to get rid of those image bands with just resistors and capacitors—no coil, no severe filtering. Doing that required at least 32 kilohertz of over-sampling. Sixty-four would probably have been better but was too expensive, so we chose 32.
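The argument can be put in numbers (a small illustration of my own; the 3400 Hz band edge is the one quoted above, and `image_margin` is a hypothetical helper name): at 8 kHz sampling the first image starts only 1200 Hz above the band edge, while at 32 kHz the analog reconstruction filter gets a transition region more than eight times the band edge, which a simple RC network can cover.

```python
# First spectral image of a 0-3400 Hz channel starts at fs - 3400 Hz.
# The wider that gap, the gentler the analog reconstruction filter can be.
def image_margin(fs, band_edge=3400):
    """Ratio of the first image edge to the band edge, i.e. the
    transition ratio available to the analog filter."""
    return (fs - band_edge) / band_edge

for fs in (8000, 32000, 64000):
    print(f"fs = {fs:5d} Hz -> transition ratio {image_margin(fs):.1f}")
```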

Nebeker:

Was this done elsewhere at that time?

Bellanger:

I don't think so. To my knowledge we were the only group considering per channel encoding for PCM at that time. Ten years later it was generalized, but at that time, all the companies used a common coder. It was indeed shared by all the channels, but less flexible. So it was just a question of technology.

Nebeker:

You published your paper on multirate filtering in 1974. Was it picked up on quickly?

Bellanger:

Yes. We patented this technique of sample rate reduction and interpolation and we saw that it would be useful in many technical areas. That's why we published this paper in '74. There was an analysis of this paper in the Spectrum, by the famous columnist—Gadi Kaplan. He emphasized the importance of the paper and it was very good for promotion of the idea. From then on multi-rate became one of the areas of digital filtering.

Nebeker:

What other applications did it find?

Bellanger:

At that time, most people used IIR filters for digital filtering because with fewer operations, they were more efficient. But from then on, people realized that with multi-rate techniques, you can have an implementation even more efficient than with IIR filtering, if you design your application carefully. So I believe that a number of applications moved from IIR to FIR at that time: communications, data modems, and acoustics. Philips used the technique extensively for the compact disc, and I assume in a number of other areas.

Nebeker:

That was an important patent for Philips.

Bellanger:

Yes, really. It was 1972.

Data modems; fractionally spaced equalizer

Nebeker:

Data modems were the other area of work in your group?

Bellanger:

Yes, we designed modems from 300 Baud to 48, 96 kbits/s and even more recently in the higher speeds. That was the second part of my career. At the beginning, I concentrated on fixed coefficient filters and took just a look at adaptive equalizers, but later on, I moved to the adaptive field. That's the reason why I have done both. But the modems were designed by a different team and I personally had no responsibility. I was advising and we were exchanging ideas, but I had no responsibilities for this.

Nebeker:

Was there much sharing of ideas between the two?

Bellanger:

Yes, indeed. One of the interesting inventions of that team of modems was the fractionally spaced equalizer, which came directly from the half band FIR filter. The idea came from considering in detail the way the half band filter works: it's an adaptive version of the half band FIR filter. This fractionally spaced equalizer was invented in '75 or '74, if I remember.

Nebeker:

By that group?

Bellanger:

Yes. It was Loïc Guidoux, who was my colleague in that group.

Multirate techniques and the transmultiplexer systems

Bellanger:


Audio File
MP3 Audio
(337 - bellanger - clip 1.mp3)


It also might be interesting at this point to say more about the application of multirate techniques to the transmultiplexer system. We were thinking about this problem from the beginning of my career; it involves a conversion between FDM and PCM systems. One of the objectives of my boss was to work on this interface because it was a real problem for the telecom operators. The telecom operators used FDM systems in their network, but they saw the PCM rising and anticipated the interconnection problem. Since our group was recognized as innovative, they asked us to think about an efficient conversion between the two. One of the first papers I read when I came into industry was the famous paper by Cooley and Tukey on the FFT. We implemented the algorithm almost immediately. Considering the structure of the FDM multiplex, we thought there was some connection with the FFT and tried to find what the connection could be. We read a number of papers by Weaver, for example, and we tried to mix the FFT with the multi-rate filters, but it took some time before we came to the idea of the filter-bank, that is of having a prototype filter modulated and shifted on the frequency axis and the operation of modulation shared by all the channels. It was, in fact, the Weaver paper of around '68 which gave me the idea of factoring the equations. In '71—the patent goes back to January '72 if I remember—we had succeeded in factoring the operations. We could show that the complete bank of filters could be factored into an FFT on one hand and a set of digital filters on the other. I had a lot of difficulties explaining to my colleagues how it worked. I asked some technicians to implement the system, but had to explain to them how it worked. Explanation was easier for the FFT, particularly because we took the matrix approach for it rather than the arithmetic approach, which was explained in the paper by Cooley and Tukey. 
With a matrix approach it's rather easy to explain, but the other part was really difficult. To really understand the explanation, the technicians wanted to see what function was performed by each of these subsets. So we looked at the function and realized that it was a phase shifter, because if you put a sine wave at the input, you get the same sine wave at the output but with a different phase. We thought that since the phase shifts were multiples of one pi divided by the number of channels, it was like a polyphase filter. That's why we called the system the Polyphase Network.
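The factorization described here, one prototype filter split into branches plus a single FFT shared by all channels, can be sketched roughly as follows. This is a toy illustration with my own naming and ordering conventions, not the patented TRT structure; real designs differ in phase conventions and in how the branch filters are time-aligned.

```python
import numpy as np

def polyphase_analysis(x, h, N):
    """Uniform N-channel analysis bank: the prototype lowpass h is split
    into N polyphase branches; each branch filters one phase of the
    decimated input, and one FFT per block combines the branches into
    the N channel outputs."""
    L = len(h) // N
    E = h[: L * N].reshape(L, N)              # E[l, p]: tap l of branch p
    nblocks = len(x) // N
    xb = x[: nblocks * N].reshape(nblocks, N)[:, ::-1]  # newest phase first
    out = np.zeros((nblocks, N), dtype=complex)
    for m in range(nblocks):
        lo = max(0, m - L + 1)                # available history of blocks
        window = xb[lo:m + 1][::-1]           # most recent block on top
        v = (window * E[: m - lo + 1]).sum(axis=0)   # branch filter outputs
        out[m] = np.fft.fft(v)                # modulate to all N channels
    return out
```

Compared with filtering each of the N channels separately, the prototype runs once per branch at the decimated rate and the shift to all the channel frequencies is done by the shared FFT, which is the saving the factorization buys.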

Implementation of Cooley-Tukey FFT algorithm

Nebeker:

Regarding the significance of the Cooley-Tukey paper, the story I heard was that engineers in this area didn't appreciate the significance of the paper until the MIT people demonstrated it. But you said that you immediately appreciated the relevance of the Cooley-Tukey paper?

Bellanger:

Oh yes. From my arrival at the lab we used the Cooley-Tukey program for other applications. We also had in the company some military activity for communications.

Nebeker:

That came directly from the Cooley-Tukey paper, not from Rader or Gold or any of that work?

Bellanger:

No, no. We implemented Cooley-Tukey right from the beginning and used it in several sections of the company.

Nebeker:

Do you recall what other areas?

Bellanger:

At that time we had frequency keying data modems for military applications and we used Cooley-Tukey to simulate the filters. We also had the radio altimeter product which was FMCW with impulse compression. We used the Cooley-Tukey program for the spectral analysis of the impulses.

So it was a tool of the laboratory at that time. But it was not implemented in real time; in fact, it's only in the transmultiplexer that it was implemented in real time on silicon. It was used for simulation and for research and development at the beginning, and later on it was included in the products.

Patents and product development; transmultiplexer and polyphase network

Nebeker:

How long did it take to go from the patent and first paper to a product?

Bellanger:

For the transmultiplexer it took a long time. The telecom network experimented on it in 1978, something like that.

Nebeker:

So you built this chip very early.

Bellanger:

Yes, with silicon chips we built a complete 60-channel FDM-PCM transmultiplexer system with ASICs. At that time it did not use our PMOS ten microns anymore, but NMOS with one micron, I believe. So we designed similar digital processing filter chips to implement the polyphase network and the FFT—also in the ancillary filters, because there are many other filters in the system.

One of the reasons it took eight to ten years to develop is that it's an interface product. To meet the requirements of the user you have to look at both sides: the FDM and the PCM. It took a lot of time with the customer to define all the functions which had to be implemented. The analog-to-digital conversion problem was not easy because a 60-channel FDM system is 60 channels multiplied by one kHz. It requires a wide band for the analog-to-digital conversion to be accurate.

Nebeker:

Did any other company beat Philips to the market?

Bellanger:

There were other companies manufacturing this equipment, but we were probably the first on the market. One company was AT&T, NEC was another one, also Granger and later on a German company. From the commercial point of view it was a little disappointing because the market was much smaller than we expected. PCM completely invaded the telecom system more quickly than we expected. So FDM disappeared, and with it disappeared the conversion problem.

Nebeker:

But the work was influential far beyond that.

Bellanger:

Yes. It is interesting now to see the success of multi-carrier systems. When we were using this transmultiplexer and polyphase network approach in 1972, we suggested using it for data transmission also. We could have made a parallel modem that was in fact a multi-carrier data modem—we had a project on that topic. But in something like 1973, our management said, "You have two approaches for one product. You have to choose." Finally, the adaptive full-band equalizer turned out to be more economical than the multi-carrier transmission technique. So we dropped the multi-carrier approach, although later it was taken up again for broadcasting. France Telecom used these multi-carrier techniques for digital sound broadcasting in 1995.

Nebeker:

Did you have much interaction with the engineers of France Telecom?

Bellanger:

Not on that topic, because digital broadcasting was not our responsibility within Philips. It was managed in the main research lab. But it's interesting to see that the basic tools have been used in other areas.

Other research

Nebeker:

Looking at your publications from the 1970s on, what other topics did you work on?

Bellanger:

What we have described are the main topics, I would say. There were some side effects; one of these was what was called the odd-time, odd-frequency discrete Fourier transform, which was the result of applying the FFT algorithm to the real signals that we had in our hands. Later on, this gave rise to the DCT. We elaborated on multirate filtering and some implementation aspects of it; A-to-D conversion was also a critical subset.

Nebeker:

Did the whole group work on that a lot?

Bellanger:

We tried to spend the least possible time on that topic because it was not our core business. We used the expertise of the Philips Research Lab a lot. They have a very good group on that, so we concentrated on the A-to-D specifications. The final circuits were not done by us.

Nebeker:

What was the publication listed here on computational complexity?

Bellanger:


Audio File
MP3 Audio
(337 - bellanger - clip 2.mp3)


It tried to generalize the polyphase concept and show how it could be applied and how attractive it could be. One of the issues with this polyphase approach (as well as with the multi-rate approach) was the computational complexity. At that time we were competing with other companies and with analog solutions for the product. We worked out some measurements of the complexity and tried to show that the approach was more or less optimal. So in this paper we had a way to measure the complexity and we applied it to the transmultiplexer product. It was an effort to show the efficiency of the approach.

It was interesting to see the QMF invented at that time in 1977. In the filter bank we designed the channels are separated, one channel per telephone conversation. But at that time there was another problem, which was to perform sub-band decomposition of speech signals for speech compression. Indeed, our scheme could be applied to the decomposition of speech signals, but not directly as such, because of the separation of the bands.

Bellanger:

So if we had applied this polyphase filter bank as designed to the separation of speech in sub-bands, we would have had some holes in the frequency band. In 1977 someone called Croisier at IBM in the South of France invented the QMF filters, which were exactly the right answer to this problem. It showed, at least for two bands, that it was possible to have perfect decomposition and reconstruction. But in fact, we were not completely satisfied with the QMF filter bank by Croisier because it was limited to two bands, and at that time we were working on speech coders based on sub-band decomposition. It was only five years later, around 1983, that someone in the group I was heading found a way to use both the polyphase approach and the QMF concept in what was later called the pseudo-QMF approach. But in any case, it wasn't published at the time because we didn't publish all the work we did in this area. There was an excellent paper in the ASSP Transactions which described the pseudo-QMF technique in '85.
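Croisier's two-band scheme can be demonstrated with the shortest possible prototype, the Haar pair; practical QMF banks use longer filters, but the mirror relation H1(z) = H0(-z) is the same, and for this toy case the reconstruction is exact (a sketch of mine, not the IBM or TRT implementation).

```python
import numpy as np

# Two-band QMF with the shortest prototype (Haar): the highpass is the
# lowpass with alternated signs, i.e. H1(z) = H0(-z).
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass analysis filter
h1 = h0 * np.array([1.0, -1.0])            # quadrature mirror highpass

def analyze(x):
    """Split x (even length) into two half-band signals, each decimated
    by 2, so the total number of samples is preserved."""
    lo = np.convolve(x, h0)[1::2]
    hi = np.convolve(x, h1)[1::2]
    return lo, hi

def synthesize(lo, hi):
    """Recombine the two bands; for the Haar pair this inverts
    analyze() exactly (perfect reconstruction)."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo - hi) / np.sqrt(2.0)
    x[1::2] = (lo + hi) / np.sqrt(2.0)
    return x
```

Cascading such two-band splits, or generalizing to N bands with a polyphase/FFT structure, is the route that leads to the pseudo-QMF banks mentioned here.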

Nebeker:

Oh, much later than that.

Bellanger:

In fact, the engineer in my group published his approach, using all the refinements of the polyphase implementation, at ICASSP '85, and it is still the most efficient way to implement pseudo-QMF filters for sub-band decomposition. That's another interesting outgrowth of this filter bank technique.

Publications

Nebeker:

What was the publication policy of the group you worked with, how often did you publish?

Bellanger:

The publication policy was not very well defined. We were in an industrial environment, so our first priority was to apply for patents. But still, we were funded by the French government and were in a large consortium with Philips. It's a very large company, so in order to get funding we needed some publicity. In consideration of both aspects, it was agreed that we could publish some of our results. But we had little time to write, so we tried to put out just a couple of papers per year.

Nebeker:

The reason for not doing more was simply that you were too busy?

Bellanger:

Yes, and I think also that the management of the company would not have accepted too many publications. We had to make a selection of what to publish.

Nebeker:

Why did you publish almost entirely in English?

Bellanger:

Oh, no, no. You are looking at the list of English papers, but I have a similar list of French papers. We tried to balance between publishing in French and in English. First of all, we had to ensure that French technical knowledge was disseminated. Plus, we can compose more easily in detail in French than in English. But as soon as we are on the international scene, we have to write in English. The official language of the Philips concern is English. It's the same for the books I have written. I always have written the French and the English versions at almost the same time.

Nebeker:

Do you do your own translating, or recompose in English?

Bellanger:

Generally I translate myself and then re-compose, more or less.

Adaptive techniques; ADPCM standardization

Bellanger:

What we have discussed so far was a period from '67 to roughly '80, and from the year '78 to '80 I became more interested in adaptive techniques, particularly for speech compression and also for equalization. At that time we became involved in standardization of the ADPCM technique, which gave rise to the 32 kilobit standard. We worked on adaptive predictors, for example, as well as equalizers. This explains why we became interested in least squares and fast least squares algorithms.

Nebeker:

How did this change in research areas come about? Did your supervisor say that you should develop this product?

Bellanger:

Yes, it came more or less from management and from reorientation of our markets. PCM was working properly, research was done, transmultiplexers were also more or less finished, and so the emphasis was on the terminals themselves. We tried to design products which were more efficient than PCM, and ADPCM was one of them. With ADPCM, we used the same capacity as PCM to transmit two channels instead of one. That's how our interest came to the 32 kbit/s ADPCM, which, by the way, is now the standard for the mobile system, DECT.

Transition to academic career

Bellanger:

Around that time in 1985 I became more and more connected to the academic world. I was giving lectures, courses, advising students, and I had some connections with research groups in parallel with the activity of the lab. Finally, it led me to become a professor in 1991. This move to academia can be recognized in my publications: I moved more and more to the algorithmic aspects of adaptive filtering, and all those fast least squares. It was also related to some applications in our group. But indeed, as soon as I moved to the university I published much more than before. And that explains why the list of recent publications is longer.

Nebeker:

You were attracted to the university setting for what reasons?

Bellanger:

It was mainly for personal convenience. As you grow older in industry, particularly in small units in a large system, you are more and more involved in management tasks. When I was the scientific director I spent a significant part of my time in meetings, so at that point I had to choose whether to continue a management career or come back to the laboratory. That's the main reason why I moved to the university: in order to do research.

Nebeker:

How did you find a position?

Bellanger:

It took some time. I've always liked teaching; I tried to teach all during my career. But the way problems are approached in industry and in academia is quite different, and it required some adaptation. Technical resources are scarce in the university, while in a large industry group you can get a lot of equipment rather easily. It changes the way you see the field. But still, the academic world has this great advantage of permitting someone to do research as long as he wishes.

Nebeker:

Was that a source of frustration earlier, wanting to pursue something that the company was not interested in?

Bellanger:

Yes, particularly as I watched filter bank research develop wavelets and applications to multidimensional signals. I couldn't be part of it anymore, so I felt a little bit frustrated. That explains why, when I had the opportunity, I came back to these filter banks with the multi-carrier techniques, which is a way to combine adaptability and filter banks in the same topic. For example, the paper we proposed on multi-carrier transmission combined with sub-band equalization. Sub-band equalization is adaptive filtering, but to perform the sub-band separation you need a filter bank. So it's a combination of both.

Nebeker:

I see a number of your publications also dealt with image processing.

Bellanger:

Yes, that's always been a minor activity in my career, dating back to the '70s. At that time, we were using the FFT for the polyphase filter bank and thought it might be useful for image compression. But again, technology was a severely limiting factor, so we used just the Hadamard transform. But our group still performed compression of speech, sound, and image signals. It was part of the activity of our laboratory, but we were not the major innovator in that field, because Philips has very strong teams in it.

Conservatoire National Des Arts et Métiers; signal processing lab and teaching

Nebeker:

Can you tell me something about your university position? You're at the University of Paris?

Bellanger:

No, my thesis was presented at the University of Paris-Orsay, but the position I have is with Conservatoire National Des Arts et Métiers—CNAM. It's a unique institution founded during the Revolution, in 1794. The idea was to have a single place in Paris, a museum for all the inventions, in order to teach people how to use them and to invent new ones. That's where the name “Conservatoire” comes from. Its duty is to record the history of the technologies. There are a number of teachers recruited from industry; they know the inventions best, because they more or less made them. It's like a political election: a group of 80 people elects its successors to fill a position in a particular technical field. So it's an open recruitment, which is not the case in the university. In the university you must have a certain degree and there is an internal competition, so it's very difficult, at least in our country, for an engineer to become a university professor. At CNAM, I was encouraged to compete and I was elected.

Nebeker:

Is there a well-defined area that you are filling?

Bellanger:

Electronics, and due to my expertise, the emphasis is on signal processing and related electronics.

Nebeker:

What is the research setting? Do you have students?

Bellanger:

Yes. First of all, we teach working people, not beginners. In order to take courses at CNAM you must work in industry, so we teach in the evening, on Saturdays, and sometimes on Sundays. The interesting aspect is that people who take the courses are really involved in practical aspects of signal processing.

Our job is to run a laboratory, so we have students. I have eight Ph.D. students and we are free to choose our field of research as long as we can find the money to support the students. Those are the constraints.

Nebeker:

You have money both to run the laboratory and to support the students?

Bellanger:

Yes, exactly. We have basic money; then we complete the funding with contracts from industry or other institutions.

Nebeker:

How would you summarize the signal processing work that you've done in this laboratory setting?

Bellanger:

We try to exploit our expertise, and I have tried to recruit good, young assistant professors in signal processing. My area is where I can most easily find contracts, so the field is still communications. For example, right now we are working on mobile radio, which is both a good source of funding and presents a number of signal processing challenges. We try to choose topics in mobile radio which are linked to equalization and to filter banks (that's where multi-carrier communications comes in), and also to adaptive antennas, because those are challenging signal processing problems.

Comparison of academic and industrial research

Bellanger:

Another aspect which is less apparent from the papers is what we call ancillary functions. Part of the richness of an industrial position is that you do not work on just one idea or one specific subset. You have to design a complete product—a terminal, for example, or a receiver. But what you get from conferences or from publications is the big ideas: filter banks, array processing, and so on. In order to make the equipment work, there are a number of other functions which are extremely critical. Among them are synchronization and the feedback loops: phase-locked loops (PLLs) and gain control. These functions are more and more based on digital approaches, they encompass a significant part of the knowledge, and you need to apply some good tricks. I find research on ancillary functions very enriching and have tried to publish some of those results, but still it's less glorious than the main functions.
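Gain control is one of those ancillary functions that has gone digital. A minimal sketch of a first-order digital AGC feedback loop (the loop constants and error criterion here are illustrative assumptions, not from the interview):

```python
import numpy as np

def digital_agc(samples, target=1.0, mu=0.01):
    """First-order digital automatic gain control: a feedback loop
    that drives the output magnitude toward `target`."""
    gain = 1.0
    out = np.empty(len(samples))
    for i, s in enumerate(samples):
        y = gain * s
        out[i] = y
        gain += mu * (target - abs(y))  # loop error updates the gain
    return out, gain
```

On a constant input of amplitude 4, the gain settles near 0.25 so that the output magnitude sits at the target; the step size mu trades convergence speed against stability, the same design question a PLL loop filter poses.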

Nebeker:

A great deal of signal processing work in recent decades has been done both in academic settings and in industrial, such as Bell Labs.

Bellanger:

That's right, both are certainly necessary. Some great academic ideas are difficult to apply in industry. A typical example these days is blind equalization, which is a great academic topic and probably of limited use in industry.

Nebeker:

How much danger is there of academic research going off in directions that will not be important industrially?

Bellanger:

I don't think it's a danger; there is an automatic regulation. I think it's not dangerous as long as the dialogue is maintained. One of the reasons for staying involved with ICASSP is that it is a unique meeting place for academic and industry people. You can talk, and get new ideas. If academic people realize that an invention cannot be employed, they will switch to more applicable topics.

Signal Processing Society

Nebeker:

I noticed you were Technical Program Chair for ICASSP back in 1982. From your perspective, has ICASSP functioned as a forum over the years?

Bellanger:

Yes, to some extent. By the way, 1982 was the first ICASSP outside the US. We had big attendance and it was extremely demanding in time and effort, but extremely rewarding because it was a unique opportunity to know more people and get a broader view of signal processing.

Nebeker:

ICASSP helped make the Signal Processing Society more international?

Bellanger:

Definitely. I think it was very useful in recognizing the contributions of people, particularly from Europe, in signal processing. Until then, it was not very apparent in European eyes.

International influences on development of signal processing field

Nebeker:

Could I ask you to speak frankly about what may be an American-centered view of the way signal processing developed? Accounts told in the U.S. tend to mention only work done in the U.S., such as at Lincoln Labs and Bell Labs. Does it appear that way to you?

Bellanger:

We feel indeed that in Europe we have always had some excellent laboratories, certainly at the same level. We had, for example, an excellent background in mathematics and in the basics, which prepared young people to develop these techniques. The advantage of being in the U.S. is certainly the availability of the technology, which undoubtedly came at that time from U.S. companies, particularly California companies. Another difference is that in Europe students and engineers are not forced to publish. It's perfectly feasible to present a thesis without having published internationally; local publication and local appreciation may be enough. There is also the language issue. So there are a number of reasons why the signal-processing field might appear more developed in the U.S. than in Europe, but I don't think that's the case. We could draw a history of European contributions to signal processing quite easily.

Nebeker:

Has communication been reasonable in both directions? In the 30 years you've been in the field, has it seemed that Americans have been reading the European publications?

Bellanger:

I'm not sure. Some. All along I have been able to verify that a number of European publications were read by U.S. scientists. There is no doubt that there have been good exchanges in that respect, but the emphasis might be on different topics. For example, in our country the impetus for developing signal processing came from digital telephone networks and from military applications. The military emphasis has certainly existed in the U.S., but it's probably been a little different for digital communications. So the different emphasis is related to the technical context and markets in the different countries.

Nebeker:

The two books which you did simultaneously in English and in French are widely used in the United States. They are cited very often.

Bellanger:

One motivation for writing those books, especially Digital Signal Processing, was that I found that existing books were written with a university education in mind; I tried to give a different perspective, one in accord with the needs of engineers. Also, it's always useful for an author to put ideas and techniques together; that's very enriching. I looked at the available techniques and possible solutions for a problem and tried to select the best ones for the engineer. It gives a slightly different perspective; that's why I called the book Theory and Practice, because practice was behind all these techniques.

IEEE and Signal Processing Society

Nebeker:

I want to ask also about your involvement with the IEEE and the Signal Processing Society. When did you join and what has been the history of your involvement?

Bellanger:

It must have been 1972 or '73. I became familiar with the journals from the very beginning of my career. My boss was an IEEE member and he encouraged me to join. It's important to get to know the people as soon as you become part of a field. It's nice to have good friends, and it's certainly useful from the technical point of view.

After thirty years, I can really say that reading the papers is not enough. I feel that I gain a lot by knowing the people personally, because of the confidence you need in an author. Reading is not enough because you have to interpret the reading, and if you know someone for many years, you know that he will not write such-and-such a statement if he is not convinced that it is true. It's like soft decoding. The level of confidence in a decision is quite important, and you get that confidence best by personal contact. A good way to establish good personal contacts is to work together on some activity. Starting a workshop, conference, or other common activity is an excellent way to know people. There is a network of old friends you can meet at ICASSP, and that's one of the reasons for coming.

Nebeker:

Have you attended most ICASSPs?

Bellanger:

Virtually all of them. I missed the first one in 1976 and also the Australian one three years ago, which was too far. Otherwise, I have always been to ICASSP. I was the Technical Program Chairman in '82 and have been involved with the Awards Board since '84. I was Awards Board Chairman for five years, and this too is a useful contribution to the community. I find that participating in IEEE activity is an important part of my professional career and I enjoy it very much. I believe it's useful for my own activity and for my students.

Technological advances and signal processing education

Nebeker:

Are there things I haven't asked about that you want to comment on?

Bellanger:


[Audio: 337 - bellanger - clip 3.mp3]


Concerning the field itself and how I see it evolving, there is no doubt that the most important aspect of the signal-processing field these days is the progress of technology. It's interesting from the present point of view, but also for the future, because we know that this trend in technology will continue for some years. The problem now seems to be complexity—complexity and the combination of different techniques and system aspects. We have seen signal processing move from being a subset, with small devices and well-defined local products, into large systems. That complexity is a key aspect.

The problem which comes with this is the difficulty of managing these complex systems from both an academic and an industrial point of view. In order to be successful, we need signal processing tools, and we have been happy to see those tools emerge over the years. The MATLAB set of products is a typical example.

Nebeker:

What other tools are you thinking of?

Bellanger:

Mainly software tools, but also simulation and hardware tools with the signal processors. DSP chips have helped a lot in the applications of signal processing. On one hand, we have many different techniques which must be integrated into larger systems, and on the other hand we need good tools to make the system manageable.

As for education, which indeed is difficult in such a context, we need to think about how to adapt it to this kind of situation. In my view, we should stick to the basic mathematical concepts, which will always be there: in a large system or in a small one, mathematics will always be useful. So will complexity management methods and the basic signal processing functions. It's certainly a challenge to teach signal processing at this point in time; that's one of my tasks these days.

Digital signal processing hardware and applications

Nebeker:

I'm interested in some generalizations about the way the field has developed. The idea of doing things digitally emerged in the 1960s but was limited by the technology of the time. Later, tremendous advances in microelectronics removed a lot of those constraints. How do you see the field now? Is it still a hardware-constrained field, or is it application-driven?

Bellanger:

The hardware constraints still exist at high frequencies. More and more functions are performed digitally at high frequencies; for example, the analog front ends in radio and TV receivers are shrinking more and more, and one can imagine a completely digital mobile radio receiver in a few years. But the main evolution of signal processing is certainly related to complexity and to the human interface. One of the challenges of signal processing is to make friendly interfaces for the users, since DSP is now included in most technical fields—not only communications, but also military systems, consumer electronics, control, and the car industry. You have digital signal processing everywhere.

Nebeker:

And more and more in instrumentation.

Bellanger:

Yes. One important aspect for the acceptance of more sophisticated, intelligent terminals and appliances is that they must be friendly, easy to manipulate. That's certainly one area where signal processing has a lot more potential. We need more flexible terminals, and in computer science we can see agent technology being developed. I believe that the concept of an agent might apply to signal processing as well. By agent, I mean an autonomous subset which can find its own way of operating and just report to the user. If we take the mobile radio receiver again, one can imagine a receiver which would help the user: it would select the best available channel and the best modulation, do all that by itself, and just report its decisions to the user—some kind of internal intelligence. Definitely the man-machine interface is really critical for the near future.

Nebeker:

You see that as the biggest area of development?

Bellanger:

Certainly, because it requires speech interaction and image interaction—the recognition of patterns, recognition of data exchanges. It seems to me that more focus should be put on such themes.

Nebeker:

Thank you very much.